# coding: utf-8
# ---------------------------------------------------------------------------
# Source: botzill/pydocx, tests/export/html/test_numbering.py (Apache-2.0)
# ---------------------------------------------------------------------------
from __future__ import (
absolute_import,
print_function,
unicode_literals,
)
from pydocx.export.numbering_span import (
    BaseNumberingSpanBuilder,
    int_to_alpha,
    int_to_roman,
)
from pydocx.test import DocumentGeneratorTestCase
from pydocx.test.utils import (
PyDocXHTMLExporterNoStyle,
WordprocessingDocumentFactory,
)
from pydocx.openxml.packaging import (
MainDocumentPart,
NumberingDefinitionsPart,
StyleDefinitionsPart,
)
class NumberingTestBase(object):
simple_list_item = '''
<p>
<pPr>
<numPr>
<ilvl val="{ilvl}" />
<numId val="{num_id}" />
</numPr>
</pPr>
<r><t>{content}</t></r>
</p>
'''
simple_list_item_with_indentation = '''
<p>
<pPr>
<numPr>
<ilvl val="{ilvl}" />
<numId val="{num_id}" />
</numPr>
<ind {ind} />
</pPr>
<r><t>{content}</t></r>
</p>
'''
simple_list_definition = '''
<num numId="{num_id}">
<abstractNumId val="{num_id}"/>
</num>
<abstractNum abstractNumId="{num_id}">
<lvl ilvl="0">
<numFmt val="{num_format}"/>
</lvl>
</abstractNum>
'''
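# Note: the templates above are simplified WordprocessingML. A <numPr> block
# ties a paragraph to a numbering definition through its numId/ilvl values,
# while the <abstractNum> definition supplies the per-level <numFmt> that
# determines the rendered list style (decimal, lowerLetter, and so on).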
class NumberingTestCase(NumberingTestBase, DocumentGeneratorTestCase):
def test_lowerLetter_numbering_format_is_handled(self):
num_id = 1
numbering_xml = self.simple_list_definition.format(
num_id=num_id,
num_format='lowerLetter',
)
document_xml = self.simple_list_item.format(
content='AAA',
num_id=num_id,
ilvl=0,
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_single_level_list_with_surrounding_paragraphs(self):
num_id = 1
numbering_xml = self.simple_list_definition.format(
num_id=num_id,
num_format='lowerLetter',
)
document_xml = '''
<p><r><t>Foo</t></r></p>
{aaa}
{bbb}
<p><r><t>Bar</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=num_id,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=num_id,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>Foo</p>
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>BBB</li>
</ol>
<p>Bar</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_multi_level_list_with_surrounding_paragraphs(self):
num_id = 1
numbering_xml = '''
<num numId="{num_id}">
<abstractNumId val="{num_id}"/>
</num>
<abstractNum abstractNumId="{num_id}">
<lvl ilvl="0">
<numFmt val="lowerLetter"/>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal"/>
</lvl>
<lvl ilvl="2">
<numFmt val="upperLetter"/>
</lvl>
</abstractNum>
'''.format(num_id=num_id)
document_xml = '''
<p><r><t>Foo</t></r></p>
{aaa}
{bbb}
{ccc}
<p><r><t>Bar</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=num_id,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=num_id,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=num_id,
ilvl=2,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>Foo</p>
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA
<ol class="pydocx-list-style-type-decimal">
<li>BBB
<ol class="pydocx-list-style-type-upperLetter">
<li>CCC</li>
</ol>
</li>
</ol>
</li>
</ol>
<p>Bar</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_adjacent_lists(self):
numbering_xml = '''
{letter}
{decimal}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
decimal=self.simple_list_definition.format(
num_id=2,
num_format='decimal',
),
)
document_xml = '''
<p><r><t>Foo</t></r></p>
{aaa}
{bbb}
<p><r><t>Bar</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=2,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>Foo</p>
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
<ol class="pydocx-list-style-type-decimal">
<li>BBB</li>
</ol>
<p>Bar</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_basic_list_followed_by_list_that_is_heading_and_paragraph(self):
numbering_xml = '''
{letter}
{decimal}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
decimal=self.simple_list_definition.format(
num_id=2,
num_format='decimal',
),
)
style_xml = '''
<style styleId="style1" type="paragraph">
<name val="Heading 1"/>
</style>
'''
list_item_with_parent_style_heading = '''
<p>
<pPr>
<pStyle val="style1" />
<numPr>
<ilvl val="{ilvl}" />
<numId val="{num_id}" />
</numPr>
</pPr>
<r><t>{content}</t></r>
</p>
'''
document_xml = '''
{aaa}
{bbb}
<p><r><t>Bar</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=list_item_with_parent_style_heading.format(
content='BBB',
num_id=2,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(StyleDefinitionsPart, style_xml)
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
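        # The exporter evidently renders the 'Heading 1' paragraph style as
        # <strong>, so in the expected output the item keeps its list
        # numbering while the heading emphasis survives inside the <li>.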
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
<ol class="pydocx-list-style-type-decimal">
<li>
<strong>BBB</strong>
</li>
</ol>
<p>Bar</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_separate_lists_with_paragraph_in_between_and_after(self):
numbering_xml = '''
{letter}
{decimal}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
decimal=self.simple_list_definition.format(
num_id=2,
num_format='decimal',
),
)
document_xml = '''
<p><r><t>Foo</t></r></p>
{aaa}
<p><r><t>Bar</t></r></p>
{bbb}
<p><r><t>Baz</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=2,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>Foo</p>
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
<p>Bar</p>
<ol class="pydocx-list-style-type-decimal">
<li>BBB</li>
</ol>
<p>Baz</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_single_list_followed_by_paragraph(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p><r><t>Foo</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
<p>Foo</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_single_list_with_bare_paragraph_between_items(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p><r><t>Foo</t></r></p>
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA<br />Foo</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_list_with_empty_numbering_xml(self):
numbering_xml = ''
document_xml = '''
{aaa}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>AAA</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_single_paragraph_missing_level_definition(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
<p>
<pPr>
<numPr>
<numId val="1" />
</numPr>
</pPr>
<r><t>foo</t></r>
</p>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>foo</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_multiple_paragraphs_with_one_missing_level_definition(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
<p><r><t>foo</t></r></p>
<p>
<pPr>
<numPr>
<numId val="1" />
</numPr>
</pPr>
<r><t>bar</t></r>
</p>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>foo</p>
<p>bar</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_with_valid_list_level_followed_by_missing_level(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<pPr>
<numPr>
<numId val="1" />
</numPr>
</pPr>
<r><t>foo</t></r>
</p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
<p>foo</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_missing_level_in_between_valid_levels(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<pPr>
<numPr>
<numId val="1" />
</numPr>
</pPr>
<r><t>foo</t></r>
</p>
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>
AAA
<br />
foo
</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_empty_paragraph_after_list_item(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p />
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_empty_paragraph_in_between_list_items(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p />
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_and_run_with_empty_text_in_between_list_items(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r><t></t></r>
</p>
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_with_empty_run_in_between_list_items(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r></r>
</p>
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_with_empty_run_followed_by_non_empty_paragraph(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r></r>
</p>
<p>
<r><t>BBB</t></r>
</p>
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA<br />BBB</li>
<li>CCC</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_with_multiple_empty_runs_followed_by_non_empty_paragraph(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r></r>
</p>
<p>
<r></r>
</p>
<p>
<r></r>
</p>
<p>
<r><t>BBB</t></r>
</p>
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA<br />BBB</li>
<li>CCC</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_empty_run_paragraph_empty_run_paragraph(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r></r>
</p>
<p>
<r><t>Foo</t></r>
</p>
<p>
<r></r>
</p>
<p>
<r><t>Bar</t></r>
</p>
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA<br />Foo<br />Bar</li>
<li>CCC</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_paragraph_followed_by_paragraph_with_only_whitespace(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
<p>
<r><t> </t></r>
</p>
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
ccc=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_empty_item(self):
numbering_xml = '''
{letter}
'''.format(
letter=self.simple_list_definition.format(
num_id=1,
num_format='lowerLetter',
),
)
document_xml = '''
{aaa}
'''.format(
aaa=self.simple_list_item.format(
content='',
num_id=1,
ilvl=0,
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-lowerLetter">
<li></li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_numfmt_None_causes_list_to_be_ignored(self):
document_xml = '''
{aaa}
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=0,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="none"/>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>AAA</p>
<p>BBB</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_numfmt_None_causes_sub_list_to_be_ignored(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
{ddd}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=1,
),
ddd=self.simple_list_item.format(
content='DDD',
num_id=1,
ilvl=0,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
</lvl>
<lvl ilvl="1">
<numFmt val="none"/>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>
AAA
<br />
BBB
<br />
CCC
</li>
<li>DDD</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_root_level_numfmt_None_with_sublist(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
{ddd}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=1,
),
ddd=self.simple_list_item.format(
content='DDD',
num_id=1,
ilvl=0,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="none"/>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal"/>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>AAA</p>
<ol class="pydocx-list-style-type-decimal">
<li>BBB</li>
<li>CCC</li>
</ol>
<p>DDD</p>
'''
self.assert_document_generates_html(document, expected_html)
class NumberingIndentationTestCase(NumberingTestBase, DocumentGeneratorTestCase):
def test_no_numbering_definition_defined(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=2,
),
)
document = WordprocessingDocumentFactory()
document.add(MainDocumentPart, document_xml)
expected_html = '''
<p>AAA</p>
<p>BBB</p>
<p>CCC</p>
'''
self.assert_document_generates_html(document, expected_html)
def test_default_indentation(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=1,
ilvl=2,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="2">
<numFmt val="decimal" />
<pPr>
<ind left="2160" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AAA
<ol class="pydocx-list-style-type-decimal">
<li>BBB
<ol class="pydocx-list-style-type-decimal">
<li>CCC</li>
</ol>
</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_custom_indentation(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
'''.format(
aaa=self.simple_list_item_with_indentation.format(
content='AAA',
num_id=1,
ilvl=0,
ind='left="1440" hanging="360"'
),
bbb=self.simple_list_item_with_indentation.format(
content='BBB',
num_id=1,
ilvl=1,
ind='left="2880" hanging="360"'
),
ccc=self.simple_list_item_with_indentation.format(
content='CCC',
num_id=1,
ilvl=2,
ind='left="4320" hanging="360"'
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="2">
<numFmt val="decimal" />
<pPr>
<ind left="2160" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
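        # The margins below follow from what looks like a twip-to-em
        # conversion at a 12pt base font (em = twips / 20 / 12, so
        # 720 twips = 3.00em): each item here sits 720 twips beyond the
        # position its level definition (plus ancestor offsets) implies.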
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:3.00em">AAA
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:3.00em">BBB
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:3.00em">CCC</li>
</ol>
</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_custom_hanging_indentation(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
'''.format(
aaa=self.simple_list_item_with_indentation.format(
content='AAA',
num_id=1,
ilvl=0,
ind='left="720" hanging="500"'
),
bbb=self.simple_list_item_with_indentation.format(
content='BBB',
num_id=1,
ilvl=1,
ind='left="1440" hanging="700"'
),
ccc=self.simple_list_item_with_indentation.format(
content='CCC',
num_id=1,
ilvl=2,
ind='left="2160" hanging="800"'
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="2">
<numFmt val="decimal" />
<pPr>
<ind left="2160" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:-0.58em">
<span style="display:inline-block;text-indent:0.58em">AAA</span>
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:-1.42em">
<span style="display:inline-block;text-indent:1.42em">BBB</span>
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:-1.83em">
<span style="display:inline-block;text-indent:1.83em">CCC
</span>
</li>
</ol>
</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_custom_first_line_indentation(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
'''.format(
aaa=self.simple_list_item_with_indentation.format(
content='AAA',
num_id=1,
ilvl=0,
ind='firstLine="360"'
),
bbb=self.simple_list_item_with_indentation.format(
content='BBB',
num_id=1,
ilvl=1,
ind='firstLine="360"'
),
ccc=self.simple_list_item_with_indentation.format(
content='CCC',
num_id=1,
ilvl=2,
ind='firstLine="360"'
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="2">
<numFmt val="decimal" />
<pPr>
<ind left="2160" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:1.50em">AAA
<ol class="pydocx-list-style-type-decimal">
<li>BBB
<ol class="pydocx-list-style-type-decimal">
<li>CCC</li>
</ol>
</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_nested_separated_lists(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
{ddd}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=2,
ilvl=0,
),
ddd=self.simple_list_item.format(
content='DDD',
num_id=1,
ilvl=1,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<num numId="2">
<abstractNumId val="2"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="2">
<numFmt val="decimal" />
<pPr>
<ind left="2160" hanging="360" />
</pPr>
</lvl>
</abstractNum>
<abstractNum abstractNumId="2">
<lvl ilvl="0">
<numFmt val="lowerLetter"/>
<pPr>
<ind left="2880" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>
AAA
<ol class="pydocx-list-style-type-decimal">
<li>
BBB
<ol class="pydocx-list-style-type-lowerLetter">
<li style="margin-left:3.00em">CCC</li>
</ol>
</li>
<li>DDD</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_nested_separated_lists_different_level(self):
document_xml = '''
{aaa}
{bbb}
{ccc}
{ddd}
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=2,
ilvl=1,
),
ccc=self.simple_list_item.format(
content='CCC',
num_id=2,
ilvl=1,
),
ddd=self.simple_list_item.format(
content='DDD',
num_id=1,
ilvl=0,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<num numId="2">
<abstractNumId val="2"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
</abstractNum>
<abstractNum abstractNumId="2">
<lvl ilvl="0">
<numFmt val="lowerLetter"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="lowerLetter" />
<pPr>
<ind left="1440" hanging="360" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>
AAA
<ol class="pydocx-list-style-type-lowerLetter">
<li>BBB</li>
<li>CCC</li>
</ol>
</li>
<li>DDD</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
class FakedNumberingManyItemsTestCase(NumberingTestBase, DocumentGeneratorTestCase):
def assert_html(self, list_type, digit_generator):
paragraphs = []
expected_items = []
for i in range(1, 100):
content = 'Foo-{i}'.format(i=i)
digit = digit_generator(i)
paragraphs.append(
'<p><r><t>{digit}. {content}</t></r></p>'.format(
digit=digit,
content=content,
),
)
expected_items.append(content)
document_xml = ''.join(paragraphs)
items = [
'<li>{item}</li>'.format(item=item)
for item in expected_items
]
expected_html = '''
<ol class="pydocx-list-style-type-{list_type}">
{items}
</ol>
'''.format(
list_type=list_type,
items=''.join(items),
)
self.assert_main_document_xml_generates_html(document_xml, expected_html)
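    # Each digit_generator below produces the textual marker a "faked" list
    # item would carry; int_to_alpha and int_to_roman are assumed to return
    # alphabetic ('a', 'b', ..., 'aa', ...) and Roman ('i', 'ii', 'iii', ...)
    # sequences that match the faked-numbering detector's patterns.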
def test_fake_decimal_list_with_many_items(self):
self.assert_html('decimal', int)
def test_fake_lower_alpha_list_with_many_items(self):
def digit_generator(index):
return int_to_alpha(index).lower()
self.assert_html('lowerLetter', digit_generator)
def test_fake_upper_alpha_list_with_many_items(self):
def digit_generator(index):
return int_to_alpha(index).upper()
self.assert_html('upperLetter', digit_generator)
def test_fake_upper_roman_list_with_many_items(self):
def digit_generator(index):
return int_to_roman(index).upper()
self.assert_html('upperRoman', digit_generator)
def test_fake_lower_roman_list_with_many_items(self):
def digit_generator(index):
return int_to_roman(index).lower()
self.assert_html('lowerRoman', digit_generator)
class FakedNumberingTestCase(NumberingTestBase, DocumentGeneratorTestCase):
def test_real_list_plus_fake_list(self):
document_xml = '''
{foo}
<p><r><t>2. Bar</t></r></p>
<p><r><t>3. Baz</t></r></p>
'''.format(
foo=self.simple_list_item.format(
content='Foo',
num_id=1,
ilvl=0,
),
)
# This works because simple_list_definition doesn't define an
# indentation for the level. So the real list indentation is
# effectively 0
numbering_xml = '''
{decimal}
'''.format(
decimal=self.simple_list_definition.format(
num_id=1,
num_format='decimal',
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>Foo</li>
<li>Bar</li>
<li>Baz</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_real_list_plus_tab_nested_fake_list_with_mixed_formats(self):
document_xml = '''
{aaa}
<p><r><tab /><t>a. BBB</t></r></p>
<p><r><tab /><t>b. CCC</t></r></p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
# This works because simple_list_definition doesn't define an
# indentation for the level. So the real list indentation is
# effectively 0
numbering_xml = '''
{decimal}
'''.format(
decimal=self.simple_list_definition.format(
num_id=1,
num_format='decimal',
),
)
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AAA
<ol class="pydocx-list-style-type-lowerLetter">
<li>BBB</li>
<li>CCC</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_initial_faked_list_plus_real_list(self):
document_xml = '''
<p><r><t>1. Foo</t></r></p>
<p><r><t>2. Bar</t></r></p>
{foo}
'''.format(
foo=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
)
# This works because the level definition doesn't define an indentation
# for the level. So the real list indentation is effectively 0
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<start val="3" />
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>Foo</li>
<li>Bar</li>
<li>AAA</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_one_fake_list_followed_by_another_fake_list_same_format(self):
document_xml = '''
<p><r><t>1. AA</t></r></p>
<p><r><t>2. AB</t></r></p>
<p><r><t>1. BA</t></r></p>
<p><r><t>2. BB</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AA</li>
<li>AB</li>
</ol>
<ol class="pydocx-list-style-type-decimal">
<li>BA</li>
<li>BB</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_one_fake_list_followed_by_another_fake_list_different_format(self):
document_xml = '''
<p><r><t>1. AA</t></r></p>
<p><r><t>2. AB</t></r></p>
<p><r><t>a. BA</t></r></p>
<p><r><t>b. BB</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AA</li>
<li>AB</li>
</ol>
<ol class="pydocx-list-style-type-lowerLetter">
<li>BA</li>
<li>BB</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_real_nested_list_continuation_fake_nested_list_using_indentation(self):
document_xml = '''
{aaa}
{bbb}
<p>
<pPr>
<ind left="720" hanging="0" />
</pPr>
<r><t>2. CCC</t></r>
</p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="720" hanging="0" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:1.50em">AAA
<ol class="pydocx-list-style-type-decimal">
<li>BBB</li>
<li>CCC</li>
</ol>
</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_real_nested_list_continuation_fake_list_using_indentation(self):
document_xml = '''
{aaa}
{bbb}
<p>
<pPr>
<ind left="720" hanging="360" />
</pPr>
<r><t>2. CCC</t></r>
</p>
'''.format(
aaa=self.simple_list_item.format(
content='AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='BBB',
num_id=1,
ilvl=1,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="decimal"/>
<pPr>
<ind left="720" hanging="360" />
</pPr>
</lvl>
<lvl ilvl="1">
<numFmt val="decimal" />
<pPr>
<ind left="720" hanging="0" />
</pPr>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li style="margin-left:1.50em">AAA
<ol class="pydocx-list-style-type-decimal">
<li>BBB</li>
</ol>
</li>
<li>CCC</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
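    # The next test relies on indentation-based nesting: the exporter appears
    # to infer a faked item's level from the paragraph's effective
    # indentation (left, hanging, and firstLine combined), so differently
    # written <ind> values can still land on the same visual level.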
def test_faked_list_using_indentation(self):
document_xml = '''
<p><r><t>1. AA</t></r></p>
<p>
<pPr>
<ind left="200" />
</pPr>
<r><t>a. AAA</t></r>
</p>
<p>
<pPr>
<ind left="0" firstLine="200" />
</pPr>
<r><t>b. AAB</t></r>
</p>
<p>
<pPr>
<ind left="400" hanging="200" firstLine="300" />
</pPr>
<r><t>c. AAC</t></r>
</p>
<p>
<pPr>
<ind left="200" firstLine="400" />
</pPr>
<r><t>A. AACA</t></r>
</p>
<p>
<pPr>
<ind left="100" firstLine="100" />
</pPr>
<r><t>d. AAD</t></r>
</p>
<p><r><t>2. AB</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AA
<ol class="pydocx-list-style-type-lowerLetter">
<li>AAA</li>
<li>AAB</li>
<li>AAC
<ol class="pydocx-list-style-type-upperLetter">
<li>AACA</li>
</ol>
</li>
<li>AAD</li>
</ol>
</li>
<li>AB</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_faked_list_that_skips_numbers(self):
document_xml = '''
<p><r><t>1. AA</t></r></p>
<p><r><t>2. AB</t></r></p>
<p><r><t>4. AC</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AA</li>
<li>AB</li>
</ol>
<p>
4. AC
</p>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_faked_list_that_does_not_start_from_1(self):
document_xml = '''
<p><r><t>2. AA</t></r></p>
<p><r><t>3. AB</t></r></p>
'''
expected_html = '''
<p>2. AA</p>
<p>3. AB</p>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_decimal_number_is_not_converted(self):
document_xml = '''
<p><r><t>1.1</t></r></p>
<p><r><t>1.2</t></r></p>
'''
expected_html = '''
<p>1.1</p>
<p>1.2</p>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_space_after_dot_followed_by_number_is_converted(self):
# This is like the decimal case, but there's a space after the dot
document_xml = '''
<p><r><t>1. 1</t></r></p>
<p><r><t>2. 2</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>1</li>
<li>2</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_space_required_after_digit_dot(self):
document_xml = '''
<p><r><t>1.a</t></r></p>
<p><r><t>a</t><t>.b</t></r></p>
<p><r><t>A</t><t>.</t><t>c</t></r></p>
<p><r><t>I.</t><t>d</t></r></p>
<p><r><t>i.e</t></r></p>
'''
expected_html = '''
<p>1.a</p>
<p>a.b</p>
<p>A.c</p>
<p>I.d</p>
<p>i.e</p>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_tab_char_is_sufficient_for_space_after_dot(self):
document_xml = '''
<p><r><t>1.</t><tab /><t>a</t></r></p>
<p><r><t>a.</t><tab /><t>b</t></r></p>
<p><r><t>A.</t><tab /><t>c</t></r></p>
<p><r><t>I.</t><tab /><t>d</t></r></p>
<p><r><t>i.</t><tab /><t>e</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>a</li>
</ol>
<ol class="pydocx-list-style-type-lowerLetter">
<li>b</li>
</ol>
<ol class="pydocx-list-style-type-upperLetter">
<li>c</li>
</ol>
<ol class="pydocx-list-style-type-upperRoman">
<li>d</li>
</ol>
<ol class="pydocx-list-style-type-lowerRoman">
<li>e</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_single_item_lists(self):
document_xml = '''
<p><r><t>1. a</t></r></p>
<p><r><t>a. b</t></r></p>
<p><r><t>A. c</t></r></p>
<p><r><t>I. d</t></r></p>
<p><r><t>i. e</t></r></p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>a</li>
</ol>
<ol class="pydocx-list-style-type-lowerLetter">
<li>b</li>
</ol>
<ol class="pydocx-list-style-type-upperLetter">
<li>c</li>
</ol>
<ol class="pydocx-list-style-type-upperRoman">
<li>d</li>
</ol>
<ol class="pydocx-list-style-type-lowerRoman">
<li>e</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_trailing_text_is_not_removed(self):
document_xml = '''
<p>
<r>
<t>1.</t>
<t> Foo </t>
<t>Bar</t>
</r>
</p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>Foo Bar</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_leading_text_is_not_removed(self):
document_xml = '''
<p>
<r>
<t>1.</t>
<t> Foo</t>
<t> Bar</t>
</r>
</p>
'''
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>Foo Bar</li>
</ol>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_faked_list_with_list_level_numfmt_None_still_detected(self):
document_xml = '''
{aaa}
{bbb}
'''.format(
aaa=self.simple_list_item.format(
content='1. AAA',
num_id=1,
ilvl=0,
),
bbb=self.simple_list_item.format(
content='2. BBB',
num_id=1,
ilvl=0,
),
)
numbering_xml = '''
<num numId="1">
<abstractNumId val="1"/>
</num>
<abstractNum abstractNumId="1">
<lvl ilvl="0">
<numFmt val="none"/>
</lvl>
</abstractNum>
'''
document = WordprocessingDocumentFactory()
document.add(NumberingDefinitionsPart, numbering_xml)
document.add(MainDocumentPart, document_xml)
expected_html = '''
<ol class="pydocx-list-style-type-decimal">
<li>AAA</li>
<li>BBB</li>
</ol>
'''
self.assert_document_generates_html(document, expected_html)
def test_faked_within_a_table(self):
document_xml = '''
<tbl>
<tr>
<tc>
<p>
<r>
<t>1. Foo</t>
</r>
</p>
<p>
<r>
<t>2. Bar</t>
</r>
</p>
</tc>
</tr>
</tbl>
'''
expected_html = '''
<table border="1">
<tr>
<td>
<ol class="pydocx-list-style-type-decimal">
<li>Foo</li>
<li>Bar</li>
</ol>
</td>
</tr>
</table>
'''
self.assert_main_document_xml_generates_html(document_xml, expected_html)
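# The pattern-based cases below run one shared document template against
# several faked-list marker styles ('{0}. ', '{0})', '({0})', ...); the
# concrete subclasses supply the digit sequence and the list-style names
# expected in the generated HTML.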
class FakedNumberingPatternBase(object):
def assert_html_using_pattern(self, pattern):
document_xml_format = [
pattern.format(digit)
for digit in self.document_xml_sequence
]
document_xml = self.document_xml.format(*document_xml_format)
expected_html = self.expected_html.format(*self.expected_html_format)
self.assert_main_document_xml_generates_html(document_xml, expected_html)
def test_format_digit_dot_space(self):
self.assert_html_using_pattern('{0}. ')
def test_digit_paren(self):
self.assert_html_using_pattern('{0})')
def test_digit_paren_with_spaces(self):
self.assert_html_using_pattern(' {0} ) ')
def test_paren_digit_paren(self):
self.assert_html_using_pattern('({0})')
def test_paren_digit_paren_with_spaces(self):
self.assert_html_using_pattern(' ( {0} ) ')
def test_format_digit_dot_with_spacing(self):
self.assert_html_using_pattern(' {0} . ')
class PyDocXHTMLExporterNoStyleBaseNumberingSpan(PyDocXHTMLExporterNoStyle):
numbering_span_builder_class = BaseNumberingSpanBuilder
class FakedNumberingDetectionDisabledBase(FakedNumberingPatternBase):
def setUp(self):
super(FakedNumberingDetectionDisabledBase, self).setUp()
self.document_xml = '''
<p><r><t>{0}AA</t></r></p>
<p><r><t>{1}AB</t></r></p>
<p><r>
<tab />
<t>{2}ABA</t>
</r></p>
<p><r>
<tab />
<t>{3}ABB</t>
</r></p>
<p><r><t>{4}AC</t></r></p>
'''
self.expected_html = '''
<p>{0}AA</p>
<p>{1}AB</p>
<p>
<span class="pydocx-tab"></span>{2}ABA
</p>
<p>
<span class="pydocx-tab"></span>{3}ABB
</p>
<p>{4}AC</p>
'''
def assert_html_using_pattern(self, pattern):
document_xml_format = [
pattern.format(digit)
for digit in self.document_xml_sequence
]
document_xml = self.document_xml.format(*document_xml_format)
expected_html = self.expected_html.format(*document_xml_format)
self.assert_main_document_xml_generates_html(document_xml, expected_html)
class FakedNestedDecimalDisabledTestCase(
FakedNumberingDetectionDisabledBase,
DocumentGeneratorTestCase,
):
exporter = PyDocXHTMLExporterNoStyleBaseNumberingSpan
document_xml_sequence = [1, 2, 1, 2, 3]
class FakedNestedLowerLetterDisabledTestCase(
FakedNumberingDetectionDisabledBase,
DocumentGeneratorTestCase,
):
exporter = PyDocXHTMLExporterNoStyleBaseNumberingSpan
document_xml_sequence = ['a', 'b', 'a', 'b', 'c']
class FakedNestedUpperLetterDisabledTestCase(
FakedNumberingDetectionDisabledBase,
DocumentGeneratorTestCase,
):
exporter = PyDocXHTMLExporterNoStyleBaseNumberingSpan
document_xml_sequence = ['A', 'B', 'A', 'B', 'C']
class FakedNestedLowerRomanDisabledTestCase(
FakedNumberingDetectionDisabledBase,
DocumentGeneratorTestCase,
):
exporter = PyDocXHTMLExporterNoStyleBaseNumberingSpan
document_xml_sequence = ['i', 'ii', 'i', 'ii', 'iii']
class FakedNestedUpperRomanDisabledTestCase(
FakedNumberingDetectionDisabledBase,
DocumentGeneratorTestCase,
):
exporter = PyDocXHTMLExporterNoStyleBaseNumberingSpan
document_xml_sequence = ['I', 'II', 'I', 'II', 'III']
class FakedNestedNoContentBase(FakedNumberingPatternBase):
def setUp(self):
super(FakedNestedNoContentBase, self).setUp()
self.document_xml = '''
<p><r><t>{0}</t></r></p>
<p><r><t>{1}</t></r></p>
<p><r>
<tab />
<t>{2}</t>
</r></p>
<p><r>
<tab />
<t>{3}</t>
</r></p>
<p><r><t>{4}</t></r></p>
'''
self.expected_html = '''
<ol class="pydocx-list-style-type-{0}">
<li></li>
<li>
<ol class="pydocx-list-style-type-{1}">
<li></li>
<li></li>
</ol>
</li>
<li></li>
</ol>
'''
class FakedNestedDecimalNoContentTestCase(
FakedNestedNoContentBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['decimal', 'decimal']
document_xml_sequence = [1, 2, 1, 2, 3]
class FakedNestedLowerLetterNoContentTestCase(
FakedNestedNoContentBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['lowerLetter', 'lowerLetter']
document_xml_sequence = ['a', 'b', 'a', 'b', 'c']
class FakedNestedUpperLetterNoContentTestCase(
FakedNestedNoContentBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['upperLetter', 'upperLetter']
document_xml_sequence = ['A', 'B', 'A', 'B', 'C']
class FakedNestedLowerRomanNoContentTestCase(
FakedNestedNoContentBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['lowerRoman', 'lowerRoman']
document_xml_sequence = ['i', 'ii', 'i', 'ii', 'iii']
class FakedNestedUpperRomanNoContentTestCase(
FakedNestedNoContentBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['upperRoman', 'upperRoman']
document_xml_sequence = ['I', 'II', 'I', 'II', 'III']
class FakedNestedNumberingPatternBase(FakedNumberingPatternBase):
def setUp(self):
super(FakedNestedNumberingPatternBase, self).setUp()
self.document_xml = '''
<p><r><t>{0}AA</t></r></p>
<p><r><t>{1}AB</t></r></p>
<p><r>
<tab />
<t>{2}ABA</t>
</r></p>
<p><r>
<tab />
<t>{3}ABB</t>
</r></p>
<p><r>
<tab />
<tab />
<t>{4}ABBA</t>
</r></p>
<p><r>
<tab />
<tab />
<t>{5}ABBB</t>
</r></p>
<p>
<pPr>
<ind left="1440" />
</pPr>
<r><t>{6}ABBC</t></r>
</p>
<p><r>
<tab />
<t>{7}ABC</t>
</r></p>
<p><r><t>{8}AC</t></r></p>
'''
self.expected_html = '''
<ol class="pydocx-list-style-type-{0}">
<li>AA</li>
<li>AB
<ol class="pydocx-list-style-type-{1}">
<li>ABA</li>
<li>ABB
<ol class="pydocx-list-style-type-{2}">
<li>ABBA</li>
<li>ABBB</li>
<li>ABBC</li>
</ol>
</li>
<li>ABC</li>
</ol>
</li>
<li>AC</li>
</ol>
'''
class FakedNestedDecimalTestCase(
FakedNestedNumberingPatternBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['decimal', 'decimal', 'decimal']
document_xml_sequence = [1, 2, 1, 2, 1, 2, 3, 3, 3]
class FakedNestedLowerLetterTestCase(
FakedNestedNumberingPatternBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['lowerLetter', 'lowerLetter', 'lowerLetter']
document_xml_sequence = ['a', 'b', 'a', 'b', 'a', 'b', 'c', 'c', 'c']
class FakedNestedUpperLetterTestCase(
FakedNestedNumberingPatternBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['upperLetter', 'upperLetter', 'upperLetter']
document_xml_sequence = ['A', 'B', 'A', 'B', 'A', 'B', 'C', 'C', 'C']
class FakedNestedLowerRomanTestCase(
FakedNestedNumberingPatternBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['lowerRoman', 'lowerRoman', 'lowerRoman']
document_xml_sequence = ['i', 'ii', 'i', 'ii', 'i', 'ii', 'iii', 'iii', 'iii']
class FakedNestedUpperRomanTestCase(
FakedNestedNumberingPatternBase,
DocumentGeneratorTestCase,
):
expected_html_format = ['upperRoman', 'upperRoman', 'upperRoman']
document_xml_sequence = ['I', 'II', 'I', 'II', 'I', 'II', 'III', 'III', 'III']
# ---------------------------------------------------------------------------
# Source: DavidNaizheZhou/RFEM_Python_Client, RFEM/BasicObjects/memberSet.py
# License: MIT
# ---------------------------------------------------------------------------
from RFEM.initModel import Model, clearAtributes, ConvertToDlString
from RFEM.enums import SetType
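# Each method below builds a `member_set` object through the SOAP client
# factory exposed by Model.clientModel (a suds-style interface, judging by
# the ns0: prefix) and submits it with set_member_set; the three entry
# points differ only in the SetType they assign.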
class MemberSet():
def __init__(self,
no: int = 1,
members_no: str = '1 4 5 8 9 12 13 16 17 20 21 24',
member_set_type = SetType.SET_TYPE_GROUP,
comment: str = '',
params: dict = {}):
'''
Args:
no (int): Member Set Tag
members_no (str): Tags of Members Contained Within Member Set
member_set_type (enum): Member Set Type Enumeration
comment (str, optional): Comments
params (dict, optional): Parameters
'''
# Client model | Member Set
clientObject = Model.clientModel.factory.create('ns0:member_set')
        # Clears object attributes | Sets all attributes to None
clearAtributes(clientObject)
# Member Set No.
clientObject.no = no
        # Member numbers
clientObject.members = ConvertToDlString(members_no)
# Member Set Type
clientObject.set_type = member_set_type.name
# Comment
clientObject.comment = comment
# Adding optional parameters via dictionary
for key in params:
clientObject[key] = params[key]
# Add Member Set to client model
Model.clientModel.service.set_member_set(clientObject)
def ContinuousMembers(self,
no: int = 1,
members_no: str = '1 4 5 8 9 12 13 16 17 20 21 24',
comment: str = '',
params: dict = {}):
'''
Args:
no (int): Member Set Tag
members_no (str): Tags of Members Contained Within Continuous Member Set
comment (str, optional): Comments
params (dict, optional): Parameters
'''
# Client model | Member Set
clientObject = Model.clientModel.factory.create('ns0:member_set')
        # Clears object attributes | Sets all attributes to None
clearAtributes(clientObject)
# Member Set No.
clientObject.no = no
        # Member numbers
clientObject.members = ConvertToDlString(members_no)
# Member Set Type
clientObject.set_type = SetType.SET_TYPE_CONTINUOUS.name
# Comment
clientObject.comment = comment
# Adding optional parameters via dictionary
for key in params:
clientObject[key] = params[key]
# Add Member Set to client model
Model.clientModel.service.set_member_set(clientObject)
def GroupOfmembers(self,
no: int = 1,
members_no: str = '1 4 5 8 9 12 13 16 17 20 21 24',
comment: str = '',
params: dict = {}):
'''
Args:
no (int): Member Set Tag
members_no (str): Tags of Members Contained Within Group of Members Member Set
comment (str, optional): Comments
params (dict, optional): Parameters
'''
# Client model | Member Set
clientObject = Model.clientModel.factory.create('ns0:member_set')
        # Clears object attributes | Sets all attributes to None
clearAtributes(clientObject)
# Member Set No.
clientObject.no = no
        # Member numbers
clientObject.members = ConvertToDlString(members_no)
# Member Set Type
clientObject.set_type = SetType.SET_TYPE_GROUP.name
# Comment
clientObject.comment = comment
# Adding optional parameters via dictionary
for key in params:
clientObject[key] = params[key]
# Add Member Set to client model
Model.clientModel.service.set_member_set(clientObject)
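# A minimal usage sketch (assumes Model has already connected to a running
# RFEM instance; the tag and member numbers are made up for illustration):
#
#   from RFEM.BasicObjects.memberSet import MemberSet
#   MemberSet(no=1, members_no='1 2 3')  # creates a group-type member set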
# ---------------------------------------------------------------------------
# Source: limodou/uliweb3, test/test_expose.py (BSD-2-Clause)
# ---------------------------------------------------------------------------
from uliweb.core.rules import expose, clear_rules, merge_rules, set_app_rules
import uliweb.core.rules as rules
def test():
"""
>>> @expose
... def index():pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/test_expose/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... def index(id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/test_expose/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose()
... def index():pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/test_expose/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose()
... def index(id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/test_expose/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose('/index')
... def index():pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose(static=True)
... def index():pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/test_expose/index', {'static': True})]
>>> clear_rules()
>>> ####################################################
>>> @expose('/index')
... def index(id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.index', '/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:pass
>>> print(merge_rules())
[]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... def index(self):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... def index(self, id):pass
... @classmethod
... def p(cls, id):pass
... @staticmethod
... def x(id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index/<id>', {}), ('test_expose', 'test_expose.A.p', '/test_expose/A/p/<id>', {}), ('test_expose', 'test_expose.A.x', '/test_expose/A/x', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... @expose('/index')
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose('/user')
... class A:
... @expose('/index')
... def index(self, id):pass
... def hello(self):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.hello', '/user/hello', {}), ('test_expose', 'test_expose.A.index', '/index', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose('/user')
... class A(object):
... @expose('/index')
... def index(self, id):pass
... def hello(self):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.hello', '/user/hello', {}), ('test_expose', 'test_expose.A.index', '/index', {})]
>>> clear_rules()
>>> ####################################################
>>> app_rules = {'test_expose':'/wiki'}
>>> set_app_rules(app_rules)
>>> @expose('/user')
... class A(object):
... @expose('/index')
... def index(self, id):pass
... def hello(self):pass
... @expose('inter')
... def inter(self):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.hello', '/wiki/user/hello', {}), ('test_expose', 'test_expose.A.index', '/wiki/index', {}), ('test_expose', 'test_expose.A.inter', '/wiki/user/inter', {})]
>>> clear_rules()
>>> rules.__app_rules__ = {}
>>> ####################################################
>>> @expose
... class A:
... @expose('/index', name='index', static=True)
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/index', {'static': True})]
>>> clear_rules()
>>> ####################################################
>>> set_app_rules({})
>>> @expose
... class A:
... @expose
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> set_app_rules({})
>>> @expose
... class A:
... @expose()
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... @expose(name='index', static=True)
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index/<id>', {'static': True})]
>>> clear_rules()
>>> ####################################################
>>> @expose('/')
... class A:
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> def static():pass
>>> n = expose('/static', static=True)(static)
>>> print(merge_rules())
[('test_expose', 'test_expose.static', '/static', {'static': True})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... @expose('/index', name='index', static=True)
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/index', {'static': True})]
>>> print(rules.__url_names__)
{'index': 'test_expose.A.index'}
>>> clear_rules()
>>> ####################################################
>>> @expose('/')
... class A:
... @expose('index/<id>')
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/index/<id>', {})]
>>> clear_rules()
>>> ####################################################
>>> @expose
... class A:
... @expose('index')
... def index(self, id):pass
>>> print(merge_rules())
[('test_expose', 'test_expose.A.index', '/test_expose/A/index', {})]
>>> clear_rules()
"""
#if __name__ == '__main__':
# @expose
# class A(object):
# @expose('index')
# def index(self, id):pass
# def hello(self):pass
# print(merge_rules())
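# One way to exercise the doctest suite above directly, assuming the module
# is importable as test_expose:
import doctest
import test_expose

results = doctest.testmod(test_expose, verbose=False)
print(results)  # TestResults(failed=0, attempted=...) when everything passes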
| 37.761194 | 203 | 0.414361 | 682 | 7,590 | 4.387097 | 0.055718 | 0.250668 | 0.113971 | 0.19385 | 0.871992 | 0.861631 | 0.835227 | 0.811497 | 0.787767 | 0.787767 | 0 | 0 | 0.221344 | 7,590 | 200 | 204 | 37.95 | 0.506261 | 0.854414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.666667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 13 |
b463802a61b5bd1777c0a88b89ffd9222fa9b8f2 | 199,241 | py | Python | quantlib_swig_bindings/Python/test/assetswap.py | andrew-stakiwicz-r3/financial_derivatives_demo | 2d3067d8374bb7a34a2119822022c741099ad519 | [
"Apache-2.0"
] | null | null | null | quantlib_swig_bindings/Python/test/assetswap.py | andrew-stakiwicz-r3/financial_derivatives_demo | 2d3067d8374bb7a34a2119822022c741099ad519 | [
"Apache-2.0"
] | null | null | null | quantlib_swig_bindings/Python/test/assetswap.py | andrew-stakiwicz-r3/financial_derivatives_demo | 2d3067d8374bb7a34a2119822022c741099ad519 | [
"Apache-2.0"
] | null | null | null | """
Copyright (C) 2011 Lluis Pujol Bajador
This file is part of QuantLib, a free-software/open-source library
for financial quantitative analysts and developers - http://quantlib.org/
QuantLib is free software: you can redistribute it and/or modify it
under the terms of the QuantLib license. You should have received a
copy of the license along with this program; if not, please email
<quantlib-dev@lists.sf.net>. The license is also available online at
<http://quantlib.org/license.shtml>.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the license for more details.
"""
import QuantLib as ql
import unittest
class AssetSwapTest(unittest.TestCase):
def setUp(self):
# initial setup
self.termStructure = ql.RelinkableYieldTermStructureHandle()
self.swapSettlementDays = 2
self.faceAmount = 100.0
self.fixedConvention = ql.Unadjusted
self.compounding = ql.Continuous
self.fixedFrequency = ql.Annual
self.floatingFrequency = ql.Semiannual
self.iborIndex = ql.Euribor(ql.Period(self.floatingFrequency), self.termStructure)
self.calendar = self.iborIndex.fixingCalendar()
self.swapIndex = ql.SwapIndex(
"EuriborSwapIsdaFixA",
ql.Period(10, ql.Years),
self.swapSettlementDays,
self.iborIndex.currency(),
self.calendar,
ql.Period(self.fixedFrequency),
self.fixedConvention,
self.iborIndex.dayCounter(),
self.iborIndex,
)
self.spread = 0.0
self.nonnullspread = 0.003
self.today = ql.Date(24, ql.April, 2007)
ql.Settings.instance().evaluationDate = self.today
self.termStructure.linkTo(ql.FlatForward(self.today, 0.05, ql.Actual365Fixed()))
self.yieldCurve = ql.FlatForward(self.today, 0.05, ql.Actual365Fixed())
self.pricer = ql.BlackIborCouponPricer()
self.swaptionVolatilityStructure = ql.SwaptionVolatilityStructureHandle(
ql.ConstantSwaptionVolatility(self.today, ql.NullCalendar(), ql.Following, 0.2, ql.Actual365Fixed())
)
self.meanReversionQuote = ql.QuoteHandle(ql.SimpleQuote(0.01))
self.cmspricer = ql.AnalyticHaganPricer(
self.swaptionVolatilityStructure, ql.GFunctionFactory.Standard, self.meanReversionQuote
)
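## The relinkable handle built in setUp is what lets the index, the engines,
## and the pricers observe a curve that can be swapped out later without
## rebuilding them. A standalone sketch of that pattern, with illustrative rates:
import QuantLib as ql

today = ql.Date(24, ql.April, 2007)
handle = ql.RelinkableYieldTermStructureHandle()
handle.linkTo(ql.FlatForward(today, 0.05, ql.Actual365Fixed()))
index = ql.Euribor6M(handle)  # the index now forecasts off the 5% curve
handle.linkTo(ql.FlatForward(today, 0.06, ql.Actual365Fixed()))
# no rebuild needed: the same index object now sees the 6% curve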
def testConsistency(self):
"""Testing consistency between fair price and fair spread..."""
bondCalendar = ql.TARGET()
settlementDays = 3
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
bondSchedule = ql.Schedule(
ql.Date(4, ql.January, 2005),
ql.Date(4, ql.January, 2037),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
bond = ql.FixedRateBond(
settlementDays,
self.faceAmount,
bondSchedule,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
payFixedRate = True
bondPrice = 95.0
isPar = True
parAssetSwap = ql.AssetSwap(
payFixedRate,
bond,
bondPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
swapEngine = ql.DiscountingSwapEngine(
self.termStructure, True, bond.settlementDate(), ql.Settings.instance().evaluationDate
)
parAssetSwap.setPricingEngine(swapEngine)
fairCleanPrice = parAssetSwap.fairCleanPrice()
fairSpread = parAssetSwap.fairSpread()
tolerance = 1.0e-13
assetSwap2 = ql.AssetSwap(
payFixedRate,
bond,
fairCleanPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
assetSwap2.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap2.NPV()) > tolerance,
"\npar asset swap fair clean price doesn't zero the NPV: "
+ "\n clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(fairCleanPrice)
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap2.fairCleanPrice() - fairCleanPrice) > tolerance,
"\npar asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(assetSwap2.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap2.fairSpread() - self.spread) > tolerance,
"\npar asset swap fair spread doesn't equal input spread "
+ "at zero NPV: "
+ "\n input spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(assetSwap2.fairSpread())
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap3 = ql.AssetSwap(
payFixedRate, bond, bondPrice, self.iborIndex, fairSpread, ql.Schedule(), self.iborIndex.dayCounter(), isPar
)
assetSwap3.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap3.NPV()) > tolerance,
"\npar asset swap fair spread doesn't zero the NPV: "
+ "\n spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(fairSpread)
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap3.fairCleanPrice() - bondPrice) > tolerance,
"\npar asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(assetSwap3.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap3.fairSpread() - fairSpread) > tolerance,
"\npar asset swap fair spread doesn't equal input spread at"
+ " zero NPV: "
+ "\n input spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(assetSwap3.fairSpread())
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
## let's change the npv date
swapEngine = ql.DiscountingSwapEngine(self.termStructure, True, bond.settlementDate(), bond.settlementDate())
parAssetSwap.setPricingEngine(swapEngine)
## fair clean price and fair spread should not change
self.assertFalse(
abs(parAssetSwap.fairCleanPrice() - fairCleanPrice) > tolerance,
"\npar asset swap fair clean price changed with NpvDate:"
+ "\n expected clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(parAssetSwap.fairCleanPrice())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(parAssetSwap.fairSpread() - fairSpread) > tolerance,
"\npar asset swap fair spread changed with NpvDate:"
+ "\n expected spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(parAssetSwap.fairSpread())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap2 = ql.AssetSwap(
payFixedRate,
bond,
fairCleanPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
assetSwap2.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap2.NPV()) > tolerance,
"\npar asset swap fair clean price doesn't zero the NPV: "
+ "\n clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(fairCleanPrice)
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap2.fairCleanPrice() - fairCleanPrice) > tolerance,
"\npar asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(assetSwap2.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap2.fairSpread() - self.spread) > tolerance,
"\npar asset swap fair spread doesn't equal input spread at zero NPV: "
+ "\n input spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(assetSwap2.fairSpread())
+ "\n NPV: "
+ str(assetSwap2.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap3 = ql.AssetSwap(
payFixedRate, bond, bondPrice, self.iborIndex, fairSpread, ql.Schedule(), self.iborIndex.dayCounter(), isPar
)
assetSwap3.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap3.NPV()) > tolerance,
"\npar asset swap fair spread doesn't zero the NPV: "
+ "\n spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(fairSpread)
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap3.fairCleanPrice() - bondPrice) > tolerance,
"\npar asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(assetSwap3.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap3.fairSpread() - fairSpread) > tolerance,
"\npar asset swap fair spread doesn't equal input spread at zero NPV: "
+ "\n input spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(assetSwap3.fairSpread())
+ "\n NPV: "
+ str(assetSwap3.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
## now market asset swap
isPar = False
mktAssetSwap = ql.AssetSwap(
payFixedRate,
bond,
bondPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
swapEngine = ql.DiscountingSwapEngine(
self.termStructure, True, bond.settlementDate(), ql.Settings.instance().evaluationDate
)
mktAssetSwap.setPricingEngine(swapEngine)
fairCleanPrice = mktAssetSwap.fairCleanPrice()
fairSpread = mktAssetSwap.fairSpread()
assetSwap4 = ql.AssetSwap(
payFixedRate,
bond,
fairCleanPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
assetSwap4.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap4.NPV()) > tolerance,
"\nmarket asset swap fair clean price doesn't zero the NPV: "
+ "\n clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(fairCleanPrice)
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap4.fairCleanPrice() - fairCleanPrice) > tolerance,
"\nmarket asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(assetSwap4.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap4.fairSpread() - self.spread) > tolerance,
"\nmarket asset swap fair spread doesn't equal input spread"
+ " at zero NPV: "
+ "\n input spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(assetSwap4.fairSpread())
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap5 = ql.AssetSwap(
payFixedRate, bond, bondPrice, self.iborIndex, fairSpread, ql.Schedule(), self.iborIndex.dayCounter(), isPar
)
assetSwap5.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap5.NPV()) > tolerance,
"\nmarket asset swap fair spread doesn't zero the NPV: "
+ "\n spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(fairSpread)
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap5.fairCleanPrice() - bondPrice) > tolerance,
"\nmarket asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(assetSwap5.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap5.fairSpread() - fairSpread) > tolerance,
"\nmarket asset swap fair spread doesn't equal input spread at zero NPV: "
+ "\n input spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(assetSwap5.fairSpread())
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
## let's change the npv date
swapEngine = ql.DiscountingSwapEngine(self.termStructure, True, bond.settlementDate(), bond.settlementDate())
mktAssetSwap.setPricingEngine(swapEngine)
## fair clean price and fair spread should not change
self.assertFalse(
abs(mktAssetSwap.fairCleanPrice() - fairCleanPrice) > tolerance,
"\nmarket asset swap fair clean price changed with NpvDate:"
+ "\n expected clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(mktAssetSwap.fairCleanPrice())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(mktAssetSwap.fairSpread() - fairSpread) > tolerance,
"\nmarket asset swap fair spread changed with NpvDate:"
+ "\n expected spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(mktAssetSwap.fairSpread())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap4 = ql.AssetSwap(
payFixedRate,
bond,
fairCleanPrice,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
isPar,
)
assetSwap4.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap4.NPV()) > tolerance,
"\nmarket asset swap fair clean price doesn't zero the NPV: "
+ "\n clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(fairCleanPrice)
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap4.fairCleanPrice() - fairCleanPrice) > tolerance,
"\nmarket asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(fairCleanPrice)
+ "\n fair clean price: "
+ str(assetSwap4.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap4.fairSpread() - self.spread) > tolerance,
"\nmarket asset swap fair spread doesn't equal input spread at zero NPV: "
+ "\n input spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(assetSwap4.fairSpread())
+ "\n NPV: "
+ str(assetSwap4.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
assetSwap5 = ql.AssetSwap(
payFixedRate, bond, bondPrice, self.iborIndex, fairSpread, ql.Schedule(), self.iborIndex.dayCounter(), isPar
)
assetSwap5.setPricingEngine(swapEngine)
self.assertFalse(
abs(assetSwap5.NPV()) > tolerance,
"\nmarket asset swap fair spread doesn't zero the NPV: "
+ "\n spread: "
+ str(self.spread)
+ "\n fair spread: "
+ str(fairSpread)
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap5.fairCleanPrice() - bondPrice) > tolerance,
"\nmarket asset swap fair clean price doesn't equal input "
+ "clean price at zero NPV: "
+ "\n input clean price: "
+ str(bondPrice)
+ "\n fair clean price: "
+ str(assetSwap5.fairCleanPrice())
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
self.assertFalse(
abs(assetSwap5.fairSpread() - fairSpread) > tolerance,
"\nmarket asset swap fair spread doesn't equal input spread at zero NPV: "
+ "\n input spread: "
+ str(fairSpread)
+ "\n fair spread: "
+ str(assetSwap5.fairSpread())
+ "\n NPV: "
+ str(assetSwap5.NPV())
+ "\n tolerance: "
+ str(tolerance),
)
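## A condensed, standalone sketch of the round trip this test performs, under
## the same flat 5% curve; the 4% bond and the 95.0 input price are illustrative:
import QuantLib as ql

today = ql.Date(24, ql.April, 2007)
ql.Settings.instance().evaluationDate = today
curve = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.05, ql.Actual365Fixed()))
index = ql.Euribor6M(curve)
schedule = ql.Schedule(ql.Date(4, ql.January, 2005), ql.Date(4, ql.January, 2037),
                       ql.Period(ql.Annual), ql.TARGET(), ql.Unadjusted, ql.Unadjusted,
                       ql.DateGeneration.Backward, False)
bond = ql.FixedRateBond(3, 100.0, schedule, [0.04], ql.ActualActual(ql.ActualActual.ISDA))
engine = ql.DiscountingSwapEngine(curve)
aswap = ql.AssetSwap(True, bond, 95.0, index, 0.0, ql.Schedule(), index.dayCounter(), True)
aswap.setPricingEngine(engine)
fair = aswap.fairCleanPrice()
# re-pricing the same structure at its fair clean price must zero the NPV
check = ql.AssetSwap(True, bond, fair, index, 0.0, ql.Schedule(), index.dayCounter(), True)
check.setPricingEngine(engine)
assert abs(check.NPV()) < 1e-10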
def testImpliedValue(self):
"""Testing implied bond value against asset-swap fair price with null spread..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondSchedule1 = ql.Schedule(
ql.Date(4, ql.January, 2005),
ql.Date(4, ql.January, 2037),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond1 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule1,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
swapEngine = ql.DiscountingSwapEngine(self.termStructure, False)
fixedBond1.setPricingEngine(bondEngine)
fixedBondPrice1 = fixedBond1.cleanPrice()
fixedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap1.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice1 = fixedBondAssetSwap1.fairCleanPrice()
tolerance = 1.0e-13
error1 = abs(fixedBondAssetSwapPrice1 - fixedBondPrice1)
self.assertFalse(
error1 > tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: "
+ str(fixedBondPrice1)
+ "\n asset swap fair price: "
+ str(fixedBondAssetSwapPrice1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondSchedule2 = ql.Schedule(
ql.Date(5, ql.February, 2005),
ql.Date(5, ql.February, 2019),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond2 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule2,
[0.05],
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
100.0,
ql.Date(5, ql.February, 2005),
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondPrice2 = fixedBond2.cleanPrice()
fixedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap2.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice2 = fixedBondAssetSwap2.fairCleanPrice()
error2 = abs(fixedBondAssetSwapPrice2 - fixedBondPrice2)
self.assertFalse(
error2 > tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: "
+ str(fixedBondPrice2)
+ "\n asset swap fair price: "
+ str(fixedBondAssetSwapPrice2)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondSchedule1 = ql.Schedule(
ql.Date(29, ql.September, 2003),
ql.Date(29, ql.September, 2013),
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBond1 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
fixingDays,
[1],
[0.0056],
[],
[],
inArrears,
100.0,
ql.Date(29, ql.September, 2003),
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
floatingBondPrice1 = floatingBond1.cleanPrice()
floatingBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap1.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice1 = floatingBondAssetSwap1.fairCleanPrice()
error3 = abs(floatingBondAssetSwapPrice1 - floatingBondPrice1)
self.assertFalse(
error3 > tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: "
+ str(floatingBondPrice1)
+ "\n asset swap fair price: "
+ str(floatingBondAssetSwapPrice1)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondSchedule2 = ql.Schedule(
ql.Date(24, ql.September, 2004),
ql.Date(24, ql.September, 2018),
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBond2 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
fixingDays,
[1],
[0.0025],
[],
[],
inArrears,
100.0,
ql.Date(24, ql.September, 2004),
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
currentCoupon = 0.04013 + 0.0025
floatingCurrentCoupon = floatingBond2.nextCouponRate()
error4 = abs(floatingCurrentCoupon - currentCoupon)
self.assertFalse(
error4 > tolerance,
"wrong current coupon is returned for floater bond:"
+ "\n bond's calculated current coupon: "
+ str(currentCoupon)
+ "\n current coupon asked to the bond: "
+ str(floatingCurrentCoupon)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
floatingBondPrice2 = floatingBond2.cleanPrice()
floatingBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap2.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice2 = floatingBondAssetSwap2.fairCleanPrice()
error5 = abs(floatingBondAssetSwapPrice2 - floatingBondPrice2)
self.assertFalse(
error5 > tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: "
+ str(floatingBondPrice2)
+ "\n asset swap fair price: "
+ str(floatingBondAssetSwapPrice2)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondSchedule1 = ql.Schedule(
ql.Date(22, ql.August, 2005),
ql.Date(22, ql.August, 2020),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond1 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[1.0],
[0.0],
[0.055],
[0.025],
inArrears,
100.0,
ql.Date(22, ql.August, 2005),
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondPrice1 = cmsBond1.cleanPrice()
cmsBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap1.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice1 = cmsBondAssetSwap1.fairCleanPrice()
error6 = abs(cmsBondAssetSwapPrice1 - cmsBondPrice1)
self.assertFalse(
error6 > tolerance,
"wrong zero spread asset swap price for cms bond:"
+ "\n bond's clean price: "
+ str(cmsBondPrice1)
+ "\n asset swap fair price: "
+ str(cmsBondAssetSwapPrice1)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondSchedule2 = ql.Schedule(
ql.Date(6, ql.May, 2005),
ql.Date(6, ql.May, 2015),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond2 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[0.84],
[0.0],
[],
[],
inArrears,
100.0,
ql.Date(6, ql.May, 2005),
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondPrice2 = cmsBond2.cleanPrice()
cmsBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap2.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice2 = cmsBondAssetSwap2.fairCleanPrice()
error7 = abs(cmsBondAssetSwapPrice2 - cmsBondPrice2)
self.assertFalse(
error7 > tolerance,
"wrong zero spread asset swap price for cms bond:"
+ "\n bond's clean price: "
+ str(cmsBondPrice2)
+ "\n asset swap fair price: "
+ str(cmsBondAssetSwapPrice2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBond1 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(20, ql.December, 2015),
ql.Following,
100.0,
ql.Date(19, ql.December, 1985),
)
zeroCpnBond1.setPricingEngine(bondEngine)
zeroCpnBondPrice1 = zeroCpnBond1.cleanPrice()
zeroCpnAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice1 = zeroCpnAssetSwap1.fairCleanPrice()
error8 = abs(zeroCpnBondAssetSwapPrice1 - zeroCpnBondPrice1)
self.assertFalse(
error8 > tolerance,
"wrong zero spread asset swap price for zero cpn bond:"
+ "\n bond's clean price: "
+ str(zeroCpnBondPrice1)
+ "\n asset swap fair price: "
+ str(zeroCpnBondAssetSwapPrice1)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBond2 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(17, ql.February, 2028),
ql.Following,
100.0,
ql.Date(17, ql.February, 1998),
)
zeroCpnBond2.setPricingEngine(bondEngine)
zeroCpnBondPrice2 = zeroCpnBond2.cleanPrice()
zeroCpnAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice2 = zeroCpnAssetSwap2.fairCleanPrice()
error9 = abs(zeroCpnBondAssetSwapPrice2 - zeroCpnBondPrice2)
self.assertFalse(
error9 > tolerance,
"wrong zero spread asset swap price for zero cpn bond:"
+ "\n bond's clean price: "
+ str(zeroCpnBondPrice2)
+ "\n asset swap fair price: "
+ str(zeroCpnBondAssetSwapPrice2)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
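## The floating-rate and CMS cases above call addFixing before pricing because
## the coupon currently accruing was fixed in the past; fixings after the
## evaluation date are forecast off the curve. A minimal sketch of that
## mechanism, with an illustrative date and rate:
import QuantLib as ql

today = ql.Date(24, ql.April, 2007)
ql.Settings.instance().evaluationDate = today
curve = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.05, ql.Actual365Fixed()))
index = ql.Euribor6M(curve)
index.addFixing(ql.Date(27, ql.March, 2007), 0.0402)  # historical fixing, supplied by hand
print(index.fixing(ql.Date(27, ql.March, 2007)))      # 0.0402, read back from history
print(index.fixing(ql.Date(26, ql.April, 2007)))      # future date: forecast from the curve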
def testMarketASWSpread(self):
"""Testing relationship between market asset swap and par asset swap..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
mktAssetSwap = False
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondSchedule1 = ql.Schedule(
ql.Date(4, ql.January, 2005),
ql.Date(4, ql.January, 2037),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond1 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule1,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
swapEngine = ql.DiscountingSwapEngine(self.termStructure, False)
fixedBond1.setPricingEngine(bondEngine)
fixedBondMktPrice1 = 89.22 ## market price observed on 7th June 2007
fixedBondMktFullPrice1 = fixedBondMktPrice1 + fixedBond1.accruedAmount()
fixedBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondParAssetSwap1.setPricingEngine(swapEngine)
fixedBondParAssetSwapSpread1 = fixedBondParAssetSwap1.fairSpread()
fixedBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
fixedBondMktAssetSwap1.setPricingEngine(swapEngine)
fixedBondMktAssetSwapSpread1 = fixedBondMktAssetSwap1.fairSpread()
tolerance = 1.0e-13
error1 = abs(fixedBondMktAssetSwapSpread1 - 100 * fixedBondParAssetSwapSpread1 / fixedBondMktFullPrice1)
self.assertFalse(
error1 > tolerance,
"wrong asset swap spreads for fixed bond:"
+ "\n market ASW spread: "
+ str(fixedBondMktAssetSwapSpread1)
+ "\n par ASW spread: "
+ str(fixedBondParAssetSwapSpread1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondSchedule2 = ql.Schedule(
ql.Date(5, ql.February, 2005),
ql.Date(5, ql.February, 2019),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond2 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule2,
[0.05],
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
100.0,
ql.Date(5, ql.February, 2005),
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondMktPrice2 = 99.98 ## market price observed on 7th June 2007
fixedBondMktFullPrice2 = fixedBondMktPrice2 + fixedBond2.accruedAmount()
fixedBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondParAssetSwap2.setPricingEngine(swapEngine)
fixedBondParAssetSwapSpread2 = fixedBondParAssetSwap2.fairSpread()
fixedBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
fixedBondMktAssetSwap2.setPricingEngine(swapEngine)
fixedBondMktAssetSwapSpread2 = fixedBondMktAssetSwap2.fairSpread()
error2 = abs(fixedBondMktAssetSwapSpread2 - 100 * fixedBondParAssetSwapSpread2 / fixedBondMktFullPrice2)
self.assertFalse(
error2 > tolerance,
"wrong asset swap spreads for fixed bond:"
+ "\n market ASW spread: "
+ str(fixedBondMktAssetSwapSpread2)
+ "\n par ASW spread: "
+ str(fixedBondParAssetSwapSpread2)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondSchedule1 = ql.Schedule(
ql.Date(29, ql.September, 2003),
ql.Date(29, ql.September, 2013),
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBond1 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
fixingDays,
[1],
[0.0056],
[],
[],
inArrears,
100.0,
ql.Date(29, ql.September, 2003),
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
## market price observed on 7th June 2007
floatingBondMktPrice1 = 101.64
floatingBondMktFullPrice1 = floatingBondMktPrice1 + floatingBond1.accruedAmount()
floatingBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondParAssetSwap1.setPricingEngine(swapEngine)
floatingBondParAssetSwapSpread1 = floatingBondParAssetSwap1.fairSpread()
floatingBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
floatingBondMktAssetSwap1.setPricingEngine(swapEngine)
floatingBondMktAssetSwapSpread1 = floatingBondMktAssetSwap1.fairSpread()
error3 = abs(
floatingBondMktAssetSwapSpread1 - 100 * floatingBondParAssetSwapSpread1 / floatingBondMktFullPrice1
)
self.assertFalse(
error3 > tolerance,
"wrong asset swap spreads for floating bond:"
+ "\n market ASW spread: "
+ str(floatingBondMktAssetSwapSpread1)
+ "\n par ASW spread: "
+ str(floatingBondParAssetSwapSpread1)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondSchedule2 = ql.Schedule(
ql.Date(24, ql.September, 2004),
ql.Date(24, ql.September, 2018),
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBond2 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
fixingDays,
[1],
[0.0025],
[],
[],
inArrears,
100.0,
ql.Date(24, ql.September, 2004),
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
## market price observed on 7th June 2007
floatingBondMktPrice2 = 101.248
floatingBondMktFullPrice2 = floatingBondMktPrice2 + floatingBond2.accruedAmount()
floatingBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondParAssetSwap2.setPricingEngine(swapEngine)
floatingBondParAssetSwapSpread2 = floatingBondParAssetSwap2.fairSpread()
floatingBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
floatingBondMktAssetSwap2.setPricingEngine(swapEngine)
floatingBondMktAssetSwapSpread2 = floatingBondMktAssetSwap2.fairSpread()
error4 = abs(
floatingBondMktAssetSwapSpread2 - 100 * floatingBondParAssetSwapSpread2 / floatingBondMktFullPrice2
)
self.assertFalse(
error4 > tolerance,
"wrong asset swap spreads for floating bond:"
+ "\n market ASW spread: "
+ str(floatingBondMktAssetSwapSpread2)
+ "\n par ASW spread: "
+ str(floatingBondParAssetSwapSpread2)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondSchedule1 = ql.Schedule(
ql.Date(22, ql.August, 2005),
ql.Date(22, ql.August, 2020),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond1 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[1.0],
[0.0],
[0.055],
[0.025],
inArrears,
100.0,
ql.Date(22, ql.August, 2005),
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondMktPrice1 = 88.45 ## market price observed on 7th June 2007
cmsBondMktFullPrice1 = cmsBondMktPrice1 + cmsBond1.accruedAmount()
cmsBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondParAssetSwap1.setPricingEngine(swapEngine)
cmsBondParAssetSwapSpread1 = cmsBondParAssetSwap1.fairSpread()
cmsBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
cmsBondMktAssetSwap1.setPricingEngine(swapEngine)
cmsBondMktAssetSwapSpread1 = cmsBondMktAssetSwap1.fairSpread()
error5 = abs(cmsBondMktAssetSwapSpread1 - 100 * cmsBondParAssetSwapSpread1 / cmsBondMktFullPrice1)
self.assertFalse(
error5 > tolerance,
"wrong asset swap spreads for cms bond:"
+ "\n market ASW spread: "
+ str(cmsBondMktAssetSwapSpread1)
+ "\n par ASW spread: "
+ str(cmsBondParAssetSwapSpread1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondSchedule2 = ql.Schedule(
ql.Date(6, ql.May, 2005),
ql.Date(6, ql.May, 2015),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond2 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[0.84],
[0.0],
[],
[],
inArrears,
100.0,
ql.Date(6, ql.May, 2005),
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondMktPrice2 = 94.08 ## market price observed on 7th June 2007
cmsBondMktFullPrice2 = cmsBondMktPrice2 + cmsBond2.accruedAmount()
cmsBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondParAssetSwap2.setPricingEngine(swapEngine)
cmsBondParAssetSwapSpread2 = cmsBondParAssetSwap2.fairSpread()
cmsBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
cmsBondMktAssetSwap2.setPricingEngine(swapEngine)
cmsBondMktAssetSwapSpread2 = cmsBondMktAssetSwap2.fairSpread()
error6 = abs(cmsBondMktAssetSwapSpread2 - 100 * cmsBondParAssetSwapSpread2 / cmsBondMktFullPrice2)
self.assertFalse(
error6 > tolerance,
"wrong asset swap spreads for cms bond:"
+ "\n market ASW spread: "
+ str(cmsBondMktAssetSwapSpread2)
+ "\n par ASW spread: "
+ str(cmsBondParAssetSwapSpread2)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBond1 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(20, ql.December, 2015),
ql.Following,
100.0,
ql.Date(19, ql.December, 1985),
)
zeroCpnBond1.setPricingEngine(bondEngine)
## market price observed on 12th June 2007
zeroCpnBondMktPrice1 = 70.436
zeroCpnBondMktFullPrice1 = zeroCpnBondMktPrice1 + zeroCpnBond1.accruedAmount()
zeroCpnBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondParAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondParAssetSwapSpread1 = zeroCpnBondParAssetSwap1.fairSpread()
zeroCpnBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
zeroCpnBondMktAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondMktAssetSwapSpread1 = zeroCpnBondMktAssetSwap1.fairSpread()
error7 = abs(zeroCpnBondMktAssetSwapSpread1 - 100 * zeroCpnBondParAssetSwapSpread1 / zeroCpnBondMktFullPrice1)
self.assertFalse(
error7 > tolerance,
"wrong asset swap spreads for zero cpn bond:"
+ "\n market ASW spread: "
+ str(zeroCpnBondMktAssetSwapSpread1)
+ "\n par ASW spread: "
+ str(zeroCpnBondParAssetSwapSpread1)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBond2 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(17, ql.February, 2028),
ql.Following,
100.0,
ql.Date(17, ql.February, 1998),
)
zeroCpnBond2.setPricingEngine(bondEngine)
## zeroCpnBondPrice2 = zeroCpnBond2.cleanPrice()
## market price observed on 12th June 2007
zeroCpnBondMktPrice2 = 35.160
zeroCpnBondMktFullPrice2 = zeroCpnBondMktPrice2 + zeroCpnBond2.accruedAmount()
zeroCpnBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondParAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondParAssetSwapSpread2 = zeroCpnBondParAssetSwap2.fairSpread()
zeroCpnBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
zeroCpnBondMktAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondMktAssetSwapSpread2 = zeroCpnBondMktAssetSwap2.fairSpread()
error8 = abs(zeroCpnBondMktAssetSwapSpread2 - 100 * zeroCpnBondParAssetSwapSpread2 / zeroCpnBondMktFullPrice2)
self.assertFalse(
error8 > tolerance,
"wrong asset swap spreads for zero cpn bond:"
+ "\n market ASW spread: "
+ str(zeroCpnBondMktAssetSwapSpread2)
+ "\n par ASW spread: "
+ str(zeroCpnBondParAssetSwapSpread2)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
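## The invariant this test exercises can be stated in one line: the market
## asset swap spread equals the par spread scaled by 100 over the bond's dirty
## (full) price. A worked check with illustrative numbers:
par_spread = 0.004321        # illustrative par ASW spread
dirty_price = 91.50          # illustrative dirty price (clean price + accrued)
mkt_spread = 100 * par_spread / dirty_price
print(round(mkt_spread, 6))  # 0.004722 -- above the par spread when the bond trades below 100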
def testZSpread(self):
"""Testing clean and dirty price with null Z-spread against theoretical prices..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
inArrears = False
## Fixed bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondSchedule1 = ql.Schedule(
ql.Date(4, ql.January, 2005),
ql.Date(4, ql.January, 2037),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond1 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule1,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
fixedBond1.setPricingEngine(bondEngine)
fixedBondImpliedValue1 = fixedBond1.cleanPrice()
fixedBondSettlementDate1 = fixedBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
fixedBondCleanPrice1 = ql.cleanPriceFromZSpread(
fixedBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
fixedBondSettlementDate1,
)
tolerance = 1.0e-13
error1 = abs(fixedBondImpliedValue1 - fixedBondCleanPrice1)
self.assertFalse(
error1 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(fixedBondCleanPrice1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondSchedule2 = ql.Schedule(
ql.Date(5, ql.February, 2005),
ql.Date(5, ql.February, 2019),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBond2 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule2,
[0.05],
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
100.0,
ql.Date(5, ql.February, 2005),
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondImpliedValue2 = fixedBond2.cleanPrice()
fixedBondSettlementDate2 = fixedBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
fixedBondCleanPrice2 = ql.cleanPriceFromZSpread(
fixedBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
fixedBondSettlementDate2,
)
error3 = abs(fixedBondImpliedValue2 - fixedBondCleanPrice2)
self.assertFalse(
error3 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(fixedBondCleanPrice2)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondSchedule1 = ql.Schedule(
ql.Date(29, ql.September, 2003),
ql.Date(29, ql.September, 2013),
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBond1 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
fixingDays,
[1],
[0.0056],
[],
[],
inArrears,
100.0,
ql.Date(29, ql.September, 2003),
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
floatingBondImpliedValue1 = floatingBond1.cleanPrice()
floatingBondSettlementDate1 = floatingBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
floatingBondCleanPrice1 = ql.cleanPriceFromZSpread(
floatingBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Semiannual,
floatingBondSettlementDate1,
)
error5 = abs(floatingBondImpliedValue1 - floatingBondCleanPrice1)
self.assertFalse(
error5 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(floatingBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(floatingBondCleanPrice1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondSchedule2 = ql.Schedule(
ql.Date(24, ql.September, 2004),
ql.Date(24, ql.September, 2018),
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBond2 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
fixingDays,
[1],
[0.0025],
[],
[],
inArrears,
100.0,
ql.Date(24, ql.September, 2004),
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
floatingBondImpliedValue2 = floatingBond2.cleanPrice()
floatingBondSettlementDate2 = floatingBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
floatingBondCleanPrice2 = ql.cleanPriceFromZSpread(
floatingBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Semiannual,
floatingBondSettlementDate2,
)
error7 = abs(floatingBondImpliedValue2 - floatingBondCleanPrice2)
self.assertFalse(
error7 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(floatingBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(floatingBondCleanPrice2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondSchedule1 = ql.Schedule(
ql.Date(22, ql.August, 2005),
ql.Date(22, ql.August, 2020),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond1 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[1.0],
[0.0],
[0.055],
[0.025],
inArrears,
100.0,
ql.Date(22, ql.August, 2005),
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondImpliedValue1 = cmsBond1.cleanPrice()
cmsBondSettlementDate1 = cmsBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
cmsBondCleanPrice1 = ql.cleanPriceFromZSpread(
cmsBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
cmsBondSettlementDate1,
)
error9 = abs(cmsBondImpliedValue1 - cmsBondCleanPrice1)
self.assertFalse(
error9 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(cmsBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(cmsBondCleanPrice1)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondSchedule2 = ql.Schedule(
ql.Date(6, ql.May, 2005),
ql.Date(6, ql.May, 2015),
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBond2 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[0.84],
[0.0],
[],
[],
inArrears,
100.0,
ql.Date(6, ql.May, 2005),
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondImpliedValue2 = cmsBond2.cleanPrice()
cmsBondSettlementDate2 = cmsBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
cmsBondCleanPrice2 = ql.cleanPriceFromZSpread(
cmsBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
cmsBondSettlementDate2,
)
error11 = abs(cmsBondImpliedValue2 - cmsBondCleanPrice2)
self.assertFalse(
error11 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(cmsBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(cmsBondCleanPrice2)
+ "\n error: "
+ str(error11)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero-Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBond1 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(20, ql.December, 2015),
ql.Following,
100.0,
ql.Date(19, ql.December, 1985),
)
zeroCpnBond1.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue1 = zeroCpnBond1.cleanPrice()
zeroCpnBondSettlementDate1 = zeroCpnBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
zeroCpnBondCleanPrice1 = ql.cleanPriceFromZSpread(
zeroCpnBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
zeroCpnBondSettlementDate1,
)
error13 = abs(zeroCpnBondImpliedValue1 - zeroCpnBondCleanPrice1)
self.assertFalse(
error13 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: "
+ str(zeroCpnBondImpliedValue1)
+ "\n zero cpn price: "
+ str(zeroCpnBondCleanPrice1)
+ "\n error: "
+ str(error13)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity doesn't occur on a business day
zeroCpnBond2 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(17, ql.February, 2028),
ql.Following,
100.0,
ql.Date(17, ql.February, 1998),
)
zeroCpnBond2.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue2 = zeroCpnBond2.cleanPrice()
zeroCpnBondSettlementDate2 = zeroCpnBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and daycounter of the YieldCurve
zeroCpnBondCleanPrice2 = ql.cleanPriceFromZSpread(
zeroCpnBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
zeroCpnBondSettlementDate2,
)
error15 = abs(zeroCpnBondImpliedValue2 - zeroCpnBondCleanPrice2)
self.assertFalse(
error15 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: "
+ str(zeroCpnBondImpliedValue2)
+ "\n zero cpn price: "
+ str(zeroCpnBondCleanPrice2)
+ "\n error: "
+ str(error15)
+ "\n tolerance: "
+ str(tolerance),
)
def testGenericBondImplied(self):
"""Testing implied generic-bond value against asset-swap fair price with null spread..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = ql.Date(4, ql.January, 2005)
fixedBondMaturityDate1 = ql.Date(4, ql.January, 2037)
fixedBondSchedule1 = ql.Schedule(
fixedBondStartDate1,
fixedBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg1 = list(
ql.FixedRateLeg(fixedBondSchedule1, ql.ActualActual(ql.ActualActual.ISDA), [self.faceAmount], [0.04])
)
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1, ql.Following)
fixedBondLeg1.append(ql.SimpleCashFlow(100.0, fixedbondRedemption1))
fixedBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
fixedBondMaturityDate1,
fixedBondStartDate1,
tuple(fixedBondLeg1),
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
swapEngine = ql.DiscountingSwapEngine(self.termStructure, True)
fixedBond1.setPricingEngine(bondEngine)
fixedBondPrice1 = fixedBond1.cleanPrice()
fixedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap1.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice1 = fixedBondAssetSwap1.fairCleanPrice()
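## with a null spread the asset swap and the bond are discounted on the same
## curve, so the swap's fair clean price should reproduce the bond's
## theoretical clean price to machine precision (hence the tight tolerance)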
tolerance = 1.0e-13
error1 = abs(fixedBondAssetSwapPrice1 - fixedBondPrice1)
self.assertFalse(
error1 > tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: "
+ str(fixedBondPrice1)
+ "\n asset swap fair price: "
+ str(fixedBondAssetSwapPrice1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = ql.Date(5, ql.February, 2005)
fixedBondMaturityDate2 = ql.Date(5, ql.February, 2019)
fixedBondSchedule2 = ql.Schedule(
fixedBondStartDate2,
fixedBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg2 = list(
ql.FixedRateLeg(fixedBondSchedule2, ql.Thirty360(ql.Thirty360.BondBasis), [self.faceAmount], [0.05])
)
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2, ql.Following)
fixedBondLeg2.append(ql.SimpleCashFlow(100.0, fixedbondRedemption2))
fixedBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
fixedBondMaturityDate2,
fixedBondStartDate2,
tuple(fixedBondLeg2),
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondPrice2 = fixedBond2.cleanPrice()
fixedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap2.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice2 = fixedBondAssetSwap2.fairCleanPrice()
error2 = abs(fixedBondAssetSwapPrice2 - fixedBondPrice2)
self.assertFalse(
error2 > tolerance,
"wrong zero spread asset swap price for fixed bond:"
+ "\n bond's clean price: "
+ str(fixedBondPrice2)
+ "\n asset swap fair price: "
+ str(fixedBondAssetSwapPrice2)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = ql.Date(29, ql.September, 2003)
floatingBondMaturityDate1 = ql.Date(29, ql.September, 2013)
floatingBondSchedule1 = ql.Schedule(
floatingBondStartDate1,
floatingBondMaturityDate1,
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg1 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0056],
[],
[],
inArrears,
)
)
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, ql.Following)
floatingBondLeg1.append(ql.SimpleCashFlow(100.0, floatingbondRedemption1))
floatingBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate1,
floatingBondStartDate1,
tuple(floatingBondLeg1),
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
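## the floating coupon accruing over the evaluation date fixed in the past,
## so its index fixing must be loaded before the bond can be priced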
floatingBondPrice1 = floatingBond1.cleanPrice()
floatingBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap1.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice1 = floatingBondAssetSwap1.fairCleanPrice()
error3 = abs(floatingBondAssetSwapPrice1 - floatingBondPrice1)
self.assertFalse(
error3 > tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: "
+ str(floatingBondPrice1)
+ "\n asset swap fair price: "
+ str(floatingBondAssetSwapPrice1)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = ql.Date(24, ql.September, 2004)
floatingBondMaturityDate2 = ql.Date(24, ql.September, 2018)
floatingBondSchedule2 = ql.Schedule(
floatingBondStartDate2,
floatingBondMaturityDate2,
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg2 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0025],
[],
[],
inArrears,
)
)
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ql.ModifiedFollowing)
floatingBondLeg2.append(ql.SimpleCashFlow(100.0, floatingbondRedemption2))
floatingBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate2,
floatingBondStartDate2,
tuple(floatingBondLeg2),
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
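## the current coupon fixed in advance, so it should equal the index fixing
## loaded above (4.013%) plus the 25bp quoted spread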
currentCoupon = 0.04013 + 0.0025
floatingCurrentCoupon = floatingBond2.nextCouponRate()
error4 = abs(floatingCurrentCoupon - currentCoupon)
self.assertFalse(
error4 > tolerance,
"wrong current coupon is returned for floater bond:"
+ "\n bond's calculated current coupon: "
+ str(currentCoupon)
+ "\n current coupon asked to the bond: "
+ str(floatingCurrentCoupon)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
floatingBondPrice2 = floatingBond2.cleanPrice()
floatingBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap2.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice2 = floatingBondAssetSwap2.fairCleanPrice()
error5 = abs(floatingBondAssetSwapPrice2 - floatingBondPrice2)
self.assertFalse(
error5 > tolerance,
"wrong zero spread asset swap price for floater:"
+ "\n bond's clean price: "
+ str(floatingBondPrice2)
+ "\n asset swap fair price: "
+ str(floatingBondAssetSwapPrice2)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = ql.Date(22, ql.August, 2005)
cmsBondMaturityDate1 = ql.Date(22, ql.August, 2020)
cmsBondSchedule1 = ql.Schedule(
cmsBondStartDate1,
cmsBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg1 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[],
[0.055],
[0.025],
[],
inArrears,
)
)
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, ql.Following)
cmsBondLeg1.append(ql.SimpleCashFlow(100.0, cmsbondRedemption1))
cmsBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate1, cmsBondStartDate1, tuple(cmsBondLeg1)
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondPrice1 = cmsBond1.cleanPrice()
cmsBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap1.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice1 = cmsBondAssetSwap1.fairCleanPrice()
error6 = abs(cmsBondAssetSwapPrice1 - cmsBondPrice1)
self.assertFalse(
error6 > tolerance,
"wrong zero spread asset swap price for cms bond:"
+ "\n bond's clean price: "
+ str(cmsBondPrice1)
+ "\n asset swap fair price: "
+ str(cmsBondAssetSwapPrice1)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondStartDate2 = ql.Date(6, ql.May, 2005)
cmsBondMaturityDate2 = ql.Date(6, ql.May, 2015)
cmsBondSchedule2 = ql.Schedule(
cmsBondStartDate2,
cmsBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg2 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[0.84],
[],
[],
[],
inArrears,
)
)
cmsbondRedemption2 = bondCalendar.adjust(cmsBondMaturityDate2, ql.Following)
cmsBondLeg2.append(ql.SimpleCashFlow(100.0, cmsbondRedemption2))
cmsBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate2, cmsBondStartDate2, tuple(cmsBondLeg2)
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondPrice2 = cmsBond2.cleanPrice()
cmsBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap2.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice2 = cmsBondAssetSwap2.fairCleanPrice()
error7 = abs(cmsBondAssetSwapPrice2 - cmsBondPrice2)
self.assertFalse(
error7 > tolerance,
"wrong zero spread asset swap price for cms bond:"
+ "\n bond's clean price: "
+ str(cmsBondPrice2)
+ "\n asset swap fair price: "
+ str(cmsBondAssetSwapPrice2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBondStartDate1 = ql.Date(19, ql.December, 1985)
zeroCpnBondMaturityDate1 = ql.Date(20, ql.December, 2015)
zeroCpnBondRedemption1 = bondCalendar.adjust(zeroCpnBondMaturityDate1, ql.Following)
zeroCpnBondLeg1 = ql.Leg([ql.SimpleCashFlow(100.0, zeroCpnBondRedemption1)])
zeroCpnBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate1,
zeroCpnBondStartDate1,
zeroCpnBondLeg1,
)
zeroCpnBond1.setPricingEngine(bondEngine)
zeroCpnBondPrice1 = zeroCpnBond1.cleanPrice()
zeroCpnAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice1 = zeroCpnAssetSwap1.fairCleanPrice()
error8 = abs(zeroCpnBondAssetSwapPrice1 - zeroCpnBondPrice1)
self.assertFalse(
error8 > tolerance,
"wrong zero spread asset swap price for zero cpn bond:"
+ "\n bond's clean price: "
+ str(zeroCpnBondPrice1)
+ "\n asset swap fair price: "
+ str(zeroCpnBondAssetSwapPrice1)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBondStartDate2 = ql.Date(17, ql.February, 1998)
zeroCpnBondMaturityDate2 = ql.Date(17, ql.February, 2028)
zerocpbondRedemption2 = bondCalendar.adjust(zeroCpnBondMaturityDate2, ql.Following)
zeroCpnBondLeg2 = ql.Leg([ql.SimpleCashFlow(100.0, zerocpbondRedemption2)])
zeroCpnBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate2,
zeroCpnBondStartDate2,
zeroCpnBondLeg2,
)
zeroCpnBond2.setPricingEngine(bondEngine)
zeroCpnBondPrice2 = zeroCpnBond2.cleanPrice()
zeroCpnAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice2 = zeroCpnAssetSwap2.fairCleanPrice()
error9 = abs(zeroCpnBondAssetSwapPrice2 - zeroCpnBondPrice2)
self.assertFalse(
error9 > tolerance,
"wrong zero spread asset swap price for zero cpn bond:"
+ "\n bond's clean price: "
+ str(zeroCpnBondPrice2)
+ "\n asset swap fair price: "
+ str(zeroCpnBondAssetSwapPrice2)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
def testMASWWithGenericBond(self):
"""Testing market asset swap against par asset swap with generic bond..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
mktAssetSwap = False
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = ql.Date(4, ql.January, 2005)
fixedBondMaturityDate1 = ql.Date(4, ql.January, 2037)
fixedBondSchedule1 = ql.Schedule(
fixedBondStartDate1,
fixedBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg1 = list(
ql.FixedRateLeg(fixedBondSchedule1, ql.ActualActual(ql.ActualActual.ISDA), [self.faceAmount], [0.04])
)
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1, ql.Following)
fixedBondLeg1.append(ql.SimpleCashFlow(100.0, fixedbondRedemption1))
fixedBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate1, fixedBondStartDate1, fixedBondLeg1
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
swapEngine = ql.DiscountingSwapEngine(self.termStructure, False)
fixedBond1.setPricingEngine(bondEngine)
fixedBondMktPrice1 = 89.22 ## market price observed on 7th June 2007
fixedBondMktFullPrice1 = fixedBondMktPrice1 + fixedBond1.accruedAmount()
fixedBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondParAssetSwap1.setPricingEngine(swapEngine)
fixedBondParAssetSwapSpread1 = fixedBondParAssetSwap1.fairSpread()
fixedBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
fixedBondMktAssetSwap1.setPricingEngine(swapEngine)
fixedBondMktAssetSwapSpread1 = fixedBondMktAssetSwap1.fairSpread()
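## a par asset swap pays the spread on the full notional, while a market asset
## swap pays it on the bond's dirty market price; their fair spreads are thus
## related by mktSpread = 100 * parSpread / dirtyPrice, the identity checked below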
tolerance = 1.0e-13
error1 = abs(fixedBondMktAssetSwapSpread1 - 100 * fixedBondParAssetSwapSpread1 / fixedBondMktFullPrice1)
self.assertFalse(
error1 > tolerance,
"wrong asset swap spreads for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondMktAssetSwapSpread1)
+ "\n par asset swap spread: "
+ str(fixedBondParAssetSwapSpread1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = ql.Date(5, ql.February, 2005)
fixedBondMaturityDate2 = ql.Date(5, ql.February, 2019)
fixedBondSchedule2 = ql.Schedule(
fixedBondStartDate2,
fixedBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg2 = list(
ql.FixedRateLeg(fixedBondSchedule2, ql.Thirty360(ql.Thirty360.BondBasis), [self.faceAmount], [0.05])
)
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2, ql.Following)
fixedBondLeg2.append(ql.SimpleCashFlow(100.0, fixedbondRedemption2))
fixedBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate2, fixedBondStartDate2, fixedBondLeg2
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondMktPrice2 = 99.98 ## market price observed on 7th June 2007
fixedBondMktFullPrice2 = fixedBondMktPrice2 + fixedBond2.accruedAmount()
fixedBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondParAssetSwap2.setPricingEngine(swapEngine)
fixedBondParAssetSwapSpread2 = fixedBondParAssetSwap2.fairSpread()
fixedBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
fixedBondMktAssetSwap2.setPricingEngine(swapEngine)
fixedBondMktAssetSwapSpread2 = fixedBondMktAssetSwap2.fairSpread()
error2 = abs(fixedBondMktAssetSwapSpread2 - 100 * fixedBondParAssetSwapSpread2 / fixedBondMktFullPrice2)
self.assertFalse(
error2 > tolerance,
"wrong asset swap spreads for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondMktAssetSwapSpread2)
+ "\n par asset swap spread: "
+ str(fixedBondParAssetSwapSpread2)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = ql.Date(29, ql.September, 2003)
floatingBondMaturityDate1 = ql.Date(29, ql.September, 2013)
floatingBondSchedule1 = ql.Schedule(
floatingBondStartDate1,
floatingBondMaturityDate1,
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg1 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
[fixingDays],
[],
[0.0056],
[],
[],
inArrears,
)
)
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, ql.Following)
floatingBondLeg1.append(ql.SimpleCashFlow(100.0, floatingbondRedemption1))
floatingBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate1,
floatingBondStartDate1,
floatingBondLeg1,
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
## market price observed on 7th June 2007
floatingBondMktPrice1 = 101.64
floatingBondMktFullPrice1 = floatingBondMktPrice1 + floatingBond1.accruedAmount()
floatingBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondParAssetSwap1.setPricingEngine(swapEngine)
floatingBondParAssetSwapSpread1 = floatingBondParAssetSwap1.fairSpread()
floatingBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
floatingBondMktAssetSwap1.setPricingEngine(swapEngine)
floatingBondMktAssetSwapSpread1 = floatingBondMktAssetSwap1.fairSpread()
error3 = abs(
floatingBondMktAssetSwapSpread1 - 100 * floatingBondParAssetSwapSpread1 / floatingBondMktFullPrice1
)
self.assertFalse(
error3 > tolerance,
"wrong asset swap spreads for floating bond:"
+ "\n market asset swap spread: "
+ str(floatingBondMktAssetSwapSpread1)
+ "\n par asset swap spread: "
+ str(floatingBondParAssetSwapSpread1)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = ql.Date(24, ql.September, 2004)
floatingBondMaturityDate2 = ql.Date(24, ql.September, 2018)
floatingBondSchedule2 = ql.Schedule(
floatingBondStartDate2,
floatingBondMaturityDate2,
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg2 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0025],
[],
[],
inArrears,
)
)
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ql.ModifiedFollowing)
floatingBondLeg2.append(ql.SimpleCashFlow(100.0, floatingbondRedemption2))
floatingBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate2,
floatingBondStartDate2,
floatingBondLeg2,
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
## market price observed on 7th June 2007
floatingBondMktPrice2 = 101.248
floatingBondMktFullPrice2 = floatingBondMktPrice2 + floatingBond2.accruedAmount()
floatingBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondParAssetSwap2.setPricingEngine(swapEngine)
floatingBondParAssetSwapSpread2 = floatingBondParAssetSwap2.fairSpread()
floatingBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
floatingBondMktAssetSwap2.setPricingEngine(swapEngine)
floatingBondMktAssetSwapSpread2 = floatingBondMktAssetSwap2.fairSpread()
error4 = abs(
floatingBondMktAssetSwapSpread2 - 100 * floatingBondParAssetSwapSpread2 / floatingBondMktFullPrice2
)
self.assertFalse(
error4 > tolerance,
"wrong asset swap spreads for floating bond:"
+ "\n market asset swap spread: "
+ str(floatingBondMktAssetSwapSpread2)
+ "\n par asset swap spread: "
+ str(floatingBondParAssetSwapSpread2)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = ql.Date(22, ql.August, 2005)
cmsBondMaturityDate1 = ql.Date(22, ql.August, 2020)
cmsBondSchedule1 = ql.Schedule(
cmsBondStartDate1,
cmsBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg1 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[],
[],
[0.055],
[0.025],
inArrears,
)
)
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, ql.Following)
cmsBondLeg1.append(ql.SimpleCashFlow(100.0, cmsbondRedemption1))
cmsBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate1, cmsBondStartDate1, cmsBondLeg1
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondMktPrice1 = 88.45 ## market price observed on 7th June 2007
cmsBondMktFullPrice1 = cmsBondMktPrice1 + cmsBond1.accruedAmount()
cmsBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondParAssetSwap1.setPricingEngine(swapEngine)
cmsBondParAssetSwapSpread1 = cmsBondParAssetSwap1.fairSpread()
cmsBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
cmsBondMktAssetSwap1.setPricingEngine(swapEngine)
cmsBondMktAssetSwapSpread1 = cmsBondMktAssetSwap1.fairSpread()
error5 = abs(cmsBondMktAssetSwapSpread1 - 100 * cmsBondParAssetSwapSpread1 / cmsBondMktFullPrice1)
self.assertFalse(
error5 > tolerance,
"wrong asset swap spreads for cms bond:"
+ "\n market asset swap spread: "
+ str(cmsBondMktAssetSwapSpread1)
+ "\n par asset swap spread: "
+ str(100 * cmsBondParAssetSwapSpread1 / cmsBondMktFullPrice1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondStartDate2 = ql.Date(6, ql.May, 2005)
cmsBondMaturityDate2 = ql.Date(6, ql.May, 2015)
cmsBondSchedule2 = ql.Schedule(
cmsBondStartDate2,
cmsBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg2 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[0.84],
[],
[],
[],
inArrears,
)
)
cmsbondRedemption2 = bondCalendar.adjust(cmsBondMaturityDate2, ql.Following)
cmsBondLeg2.append(ql.SimpleCashFlow(100.0, cmsbondRedemption2))
cmsBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate2, cmsBondStartDate2, cmsBondLeg2
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondMktPrice2 = 94.08 ## market price observed on 7th June 2007
cmsBondMktFullPrice2 = cmsBondMktPrice2 + cmsBond2.accruedAmount()
cmsBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondParAssetSwap2.setPricingEngine(swapEngine)
cmsBondParAssetSwapSpread2 = cmsBondParAssetSwap2.fairSpread()
cmsBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
cmsBondMktAssetSwap2.setPricingEngine(swapEngine)
cmsBondMktAssetSwapSpread2 = cmsBondMktAssetSwap2.fairSpread()
error6 = abs(cmsBondMktAssetSwapSpread2 - 100 * cmsBondParAssetSwapSpread2 / cmsBondMktFullPrice2)
self.assertFalse(
error6 > tolerance,
"wrong asset swap spreads for cms bond:"
+ "\n market asset swap spread: "
+ str(cmsBondMktAssetSwapSpread2)
+ "\n par asset swap spread: "
+ str(cmsBondParAssetSwapSpread2)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBondStartDate1 = ql.Date(19, ql.December, 1985)
zeroCpnBondMaturityDate1 = ql.Date(20, ql.December, 2015)
zeroCpnBondRedemption1 = bondCalendar.adjust(zeroCpnBondMaturityDate1, ql.Following)
zeroCpnBondLeg1 = ql.Leg([ql.SimpleCashFlow(100.0, zeroCpnBondRedemption1)])
zeroCpnBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate1,
zeroCpnBondStartDate1,
zeroCpnBondLeg1,
)
zeroCpnBond1.setPricingEngine(bondEngine)
## market price observed on 12th June 2007
zeroCpnBondMktPrice1 = 70.436
zeroCpnBondMktFullPrice1 = zeroCpnBondMktPrice1 + zeroCpnBond1.accruedAmount()
zeroCpnBondParAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondParAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondParAssetSwapSpread1 = zeroCpnBondParAssetSwap1.fairSpread()
zeroCpnBondMktAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
zeroCpnBondMktAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondMktAssetSwapSpread1 = zeroCpnBondMktAssetSwap1.fairSpread()
error7 = abs(zeroCpnBondMktAssetSwapSpread1 - 100 * zeroCpnBondParAssetSwapSpread1 / zeroCpnBondMktFullPrice1)
self.assertFalse(
error7 > tolerance,
"wrong asset swap spreads for zero cpn bond:"
+ "\n market asset swap spread: "
+ str(zeroCpnBondMktAssetSwapSpread1)
+ "\n par asset swap spread: "
+ str(zeroCpnBondParAssetSwapSpread1)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBondStartDate2 = ql.Date(17, ql.February, 1998)
zeroCpnBondMaturityDate2 = ql.Date(17, ql.February, 2028)
zerocpbondRedemption2 = bondCalendar.adjust(zeroCpnBondMaturityDate2, ql.Following)
zeroCpnBondLeg2 = ql.Leg([ql.SimpleCashFlow(100.0, zerocpbondRedemption2)])
zeroCpnBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate2,
zeroCpnBondStartDate2,
zeroCpnBondLeg2,
)
zeroCpnBond2.setPricingEngine(bondEngine)
## market price observed on 12th June 2007
zeroCpnBondMktPrice2 = 35.160
zeroCpnBondMktFullPrice2 = zeroCpnBondMktPrice2 + zeroCpnBond2.accruedAmount()
zeroCpnBondParAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondParAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondParAssetSwapSpread2 = zeroCpnBondParAssetSwap2.fairSpread()
zeroCpnBondMktAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
mktAssetSwap,
)
zeroCpnBondMktAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondMktAssetSwapSpread2 = zeroCpnBondMktAssetSwap2.fairSpread()
error8 = abs(zeroCpnBondMktAssetSwapSpread2 - 100 * zeroCpnBondParAssetSwapSpread2 / zeroCpnBondMktFullPrice2)
self.assertFalse(
error8 > tolerance,
"wrong asset swap spreads for zero cpn bond:"
+ "\n market asset swap spread: "
+ str(zeroCpnBondMktAssetSwapSpread2)
+ "\n par asset swap spread: "
+ str(zeroCpnBondParAssetSwapSpread2)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
def testZSpreadWithGenericBond(self):
"""Testing clean and dirty price with null Z-spread against theoretical prices..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = ql.Date(4, ql.January, 2005)
fixedBondMaturityDate1 = ql.Date(4, ql.January, 2037)
fixedBondSchedule1 = ql.Schedule(
fixedBondStartDate1,
fixedBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg1 = list(
ql.FixedRateLeg(fixedBondSchedule1, ql.ActualActual(ql.ActualActual.ISDA), [self.faceAmount], [0.04])
)
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1, ql.Following)
fixedBondLeg1.append(ql.SimpleCashFlow(100.0, fixedbondRedemption1))
fixedBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate1, fixedBondStartDate1, fixedBondLeg1
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
fixedBond1.setPricingEngine(bondEngine)
fixedBondImpliedValue1 = fixedBond1.cleanPrice()
fixedBondSettlementDate1 = fixedBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
fixedBondCleanPrice1 = ql.cleanPriceFromZSpread(
fixedBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
fixedBondSettlementDate1,
)
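## the Z-spread is null here, so discounting on the unshifted curve must give
## back the same clean price as the discounting bond engine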
tolerance = 1.0e-13
error1 = abs(fixedBondImpliedValue1 - fixedBondCleanPrice1)
self.assertFalse(
error1 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(fixedBondCleanPrice1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = ql.Date(5, ql.February, 2005)
fixedBondMaturityDate2 = ql.Date(5, ql.February, 2019)
fixedBondSchedule2 = ql.Schedule(
fixedBondStartDate2,
fixedBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg2 = list(
ql.FixedRateLeg(fixedBondSchedule2, ql.Thirty360(ql.Thirty360.BondBasis), [self.faceAmount], [0.05])
)
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2, ql.Following)
fixedBondLeg2.append(ql.SimpleCashFlow(100.0, fixedbondRedemption2))
fixedBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate2, fixedBondStartDate2, fixedBondLeg2
)
fixedBond2.setPricingEngine(bondEngine)
fixedBondImpliedValue2 = fixedBond2.cleanPrice()
fixedBondSettlementDate2 = fixedBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
fixedBondCleanPrice2 = ql.cleanPriceFromZSpread(
fixedBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
fixedBondSettlementDate2,
)
error3 = abs(fixedBondImpliedValue2 - fixedBondCleanPrice2)
self.assertFalse(
error3 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(fixedBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(fixedBondCleanPrice2)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = ql.Date(29, ql.September, 2003)
floatingBondMaturityDate1 = ql.Date(29, ql.September, 2013)
floatingBondSchedule1 = ql.Schedule(
floatingBondStartDate1,
floatingBondMaturityDate1,
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg1 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
[fixingDays],
[],
[0.0056],
[],
[],
inArrears,
)
)
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, ql.Following)
floatingBondLeg1.append(ql.SimpleCashFlow(100.0, floatingbondRedemption1))
floatingBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate1,
floatingBondStartDate1,
floatingBondLeg1,
)
floatingBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
floatingBondImpliedValue1 = floatingBond1.cleanPrice()
floatingBondSettlementDate1 = floatingBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
floatingBondCleanPrice1 = ql.cleanPriceFromZSpread(
floatingBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Semiannual,
floatingBondSettlementDate1,
)
error5 = abs(floatingBondImpliedValue1 - floatingBondCleanPrice1)
self.assertFalse(
error5 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(floatingBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(floatingBondCleanPrice1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = ql.Date(24, ql.September, 2004)
floatingBondMaturityDate2 = ql.Date(24, ql.September, 2018)
floatingBondSchedule2 = ql.Schedule(
floatingBondStartDate2,
floatingBondMaturityDate2,
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg2 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0025],
[],
[],
inArrears,
)
)
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ql.ModifiedFollowing)
floatingBondLeg2.append(ql.SimpleCashFlow(100.0, floatingbondRedemption2))
floatingBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate2,
floatingBondStartDate2,
floatingBondLeg2,
)
floatingBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
floatingBondImpliedValue2 = floatingBond2.cleanPrice()
floatingBondSettlementDate2 = floatingBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
floatingBondCleanPrice2 = ql.cleanPriceFromZSpread(
floatingBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Semiannual,
floatingBondSettlementDate2,
)
error7 = abs(floatingBondImpliedValue2 - floatingBondCleanPrice2)
self.assertFalse(
error7 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(floatingBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(floatingBondCleanPrice2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = ql.Date(22, ql.August, 2005)
cmsBondMaturityDate1 = ql.Date(22, ql.August, 2020)
cmsBondSchedule1 = ql.Schedule(
cmsBondStartDate1,
cmsBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg1 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[],
[],
[0.055],
[0.025],
inArrears,
)
)
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, ql.Following)
cmsBondLeg1.append(ql.SimpleCashFlow(100.0, cmsbondRedemption1))
cmsBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate1, cmsBondStartDate1, cmsBondLeg1
)
cmsBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondImpliedValue1 = cmsBond1.cleanPrice()
cmsBondSettlementDate1 = cmsBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
cmsBondCleanPrice1 = ql.cleanPriceFromZSpread(
cmsBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
cmsBondSettlementDate1,
)
error9 = abs(cmsBondImpliedValue1 - cmsBondCleanPrice1)
self.assertFalse(
error9 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(cmsBondImpliedValue1)
+ "\n par asset swap spread: "
+ str(cmsBondCleanPrice1)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondStartDate2 = ql.Date(6, ql.May, 2005)
cmsBondMaturityDate2 = ql.Date(6, ql.May, 2015)
cmsBondSchedule2 = ql.Schedule(
cmsBondStartDate2,
cmsBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg2 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[0.84],
[],
[],
[],
inArrears,
)
)
cmsbondRedemption2 = bondCalendar.adjust(cmsBondMaturityDate2, ql.Following)
cmsBondLeg2.append(ql.SimpleCashFlow(100.0, cmsbondRedemption2))
cmsBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate2, cmsBondStartDate2, cmsBondLeg2
)
cmsBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondImpliedValue2 = cmsBond2.cleanPrice()
cmsBondSettlementDate2 = cmsBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
cmsBondCleanPrice2 = ql.cleanPriceFromZSpread(
cmsBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
cmsBondSettlementDate2,
)
error11 = abs(cmsBondImpliedValue2 - cmsBondCleanPrice2)
self.assertFalse(
error11 > tolerance,
"wrong clean price for fixed bond:"
+ "\n market asset swap spread: "
+ str(cmsBondImpliedValue2)
+ "\n par asset swap spread: "
+ str(cmsBondCleanPrice2)
+ "\n error: "
+ str(error11)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBondStartDate1 = ql.Date(19, ql.December, 1985)
zeroCpnBondMaturityDate1 = ql.Date(20, ql.December, 2015)
zeroCpnBondRedemption1 = bondCalendar.adjust(zeroCpnBondMaturityDate1, ql.Following)
zeroCpnBondLeg1 = ql.Leg([ql.SimpleCashFlow(100.0, zeroCpnBondRedemption1)])
zeroCpnBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate1,
zeroCpnBondStartDate1,
zeroCpnBondLeg1,
)
zeroCpnBond1.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue1 = zeroCpnBond1.cleanPrice()
zeroCpnBondSettlementDate1 = zeroCpnBond1.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
zeroCpnBondCleanPrice1 = ql.cleanPriceFromZSpread(
zeroCpnBond1,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
zeroCpnBondSettlementDate1,
)
error13 = abs(zeroCpnBondImpliedValue1 - zeroCpnBondCleanPrice1)
self.assertFalse(
error13 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: "
+ str(zeroCpnBondImpliedValue1)
+ "\n zero cpn price: "
+ str(zeroCpnBondCleanPrice1)
+ "\n error: "
+ str(error13)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBondStartDate2 = ql.Date(17, ql.February, 1998)
zeroCpnBondMaturityDate2 = ql.Date(17, ql.February, 2028)
zerocpbondRedemption2 = bondCalendar.adjust(zeroCpnBondMaturityDate2, ql.Following)
zeroCpnBondLeg2 = ql.Leg([ql.SimpleCashFlow(100.0, zerocpbondRedemption2)])
zeroCpnBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate2,
zeroCpnBondStartDate2,
zeroCpnBondLeg2,
)
zeroCpnBond2.setPricingEngine(bondEngine)
zeroCpnBondImpliedValue2 = zeroCpnBond2.cleanPrice()
zeroCpnBondSettlementDate2 = zeroCpnBond2.settlementDate()
## standard market conventions:
## bond's frequency + compounding and day counter of the yield curve
zeroCpnBondCleanPrice2 = ql.cleanPriceFromZSpread(
zeroCpnBond2,
self.yieldCurve,
self.spread,
ql.Actual365Fixed(),
self.compounding,
ql.Annual,
zeroCpnBondSettlementDate2,
)
error15 = abs(zeroCpnBondImpliedValue2 - zeroCpnBondCleanPrice2)
self.assertFalse(
error15 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n zero cpn implied value: "
+ str(zeroCpnBondImpliedValue2)
+ "\n zero cpn price: "
+ str(zeroCpnBondCleanPrice2)
+ "\n error: "
+ str(error15)
+ "\n tolerance: "
+ str(tolerance),
)
def testSpecializedBondVsGenericBond(self):
"""Testing clean and dirty prices for specialized bond against equivalent generic bond..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
inArrears = False
## Fixed Underlying bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = ql.Date(4, ql.January, 2005)
fixedBondMaturityDate1 = ql.Date(4, ql.January, 2037)
fixedBondSchedule1 = ql.Schedule(
fixedBondStartDate1,
fixedBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg1 = list(
ql.FixedRateLeg(fixedBondSchedule1, ql.ActualActual(ql.ActualActual.ISDA), [self.faceAmount], [0.04])
)
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1, ql.Following)
fixedBondLeg1.append(ql.SimpleCashFlow(100.0, fixedbondRedemption1))
## generic bond
fixedBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate1, fixedBondStartDate1, fixedBondLeg1
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
fixedBond1.setPricingEngine(bondEngine)
## equivalent specialized fixed rate bond
fixedSpecializedBond1 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule1,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
fixedSpecializedBond1.setPricingEngine(bondEngine)
fixedBondTheoValue1 = fixedBond1.cleanPrice()
fixedSpecializedBondTheoValue1 = fixedSpecializedBond1.cleanPrice()
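## the generic bond's leg was assembled by hand (FixedRateLeg plus a
## SimpleCashFlow redemption); FixedRateBond builds the same cash flows
## internally, so the two prices should agree to machine precision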
tolerance = 1.0e-13
error1 = abs(fixedBondTheoValue1 - fixedSpecializedBondTheoValue1)
self.assertFalse(
error1 > tolerance,
"wrong clean price for fixed bond:"
+ "\n specialized fixed rate bond's theo clean price: "
+ str(fixedBondTheoValue1)
+ "\n generic equivalent bond's theo clean price: "
+ str(fixedSpecializedBondTheoValue1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
fixedBondTheoDirty1 = fixedBondTheoValue1 + fixedBond1.accruedAmount()
fixedSpecializedTheoDirty1 = fixedSpecializedBondTheoValue1 + fixedSpecializedBond1.accruedAmount()
error2 = abs(fixedBondTheoDirty1 - fixedSpecializedTheoDirty1)
self.assertFalse(
error2 > tolerance,
"wrong dirty price for fixed bond:"
+ "\n specialized fixed rate bond's theo dirty price: "
+ str(fixedBondTheoDirty1)
+ "\n generic equivalent bond's theo dirty price: "
+ str(fixedSpecializedTheoDirty1)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed Underlying bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = ql.Date(5, ql.February, 2005)
fixedBondMaturityDate2 = ql.Date(5, ql.February, 2019)
fixedBondSchedule2 = ql.Schedule(
fixedBondStartDate2,
fixedBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg2 = list(
ql.FixedRateLeg(fixedBondSchedule2, ql.Thirty360(ql.Thirty360.BondBasis), [self.faceAmount], [0.05])
)
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2, ql.Following)
fixedBondLeg2.append(ql.SimpleCashFlow(100.0, fixedbondRedemption2))
## generic bond
fixedBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate2, fixedBondStartDate2, fixedBondLeg2
)
fixedBond2.setPricingEngine(bondEngine)
## equivalent specialized fixed rate bond
fixedSpecializedBond2 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule2,
[0.05],
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
100.0,
ql.Date(5, ql.February, 2005),
)
fixedSpecializedBond2.setPricingEngine(bondEngine)
fixedBondTheoValue2 = fixedBond2.cleanPrice()
fixedSpecializedBondTheoValue2 = fixedSpecializedBond2.cleanPrice()
error3 = abs(fixedBondTheoValue2 - fixedSpecializedBondTheoValue2)
self.assertFalse(
error3 > tolerance,
"wrong clean price for fixed bond:"
+ "\n specialized fixed rate bond's theo clean price: "
+ str(fixedBondTheoValue2)
+ "\n generic equivalent bond's theo clean price: "
+ str(fixedSpecializedBondTheoValue2)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
fixedBondTheoDirty2 = fixedBondTheoValue2 + fixedBond2.accruedAmount()
fixedSpecializedBondTheoDirty2 = fixedSpecializedBondTheoValue2 + fixedSpecializedBond2.accruedAmount()
error4 = abs(fixedBondTheoDirty2 - fixedSpecializedBondTheoDirty2)
self.assertFalse(
error4 > tolerance,
"wrong dirty price for fixed bond:"
+ "\n specialized fixed rate bond's dirty clean price: "
+ str(fixedBondTheoDirty2)
+ "\n generic equivalent bond's theo dirty price: "
+ str(fixedSpecializedBondTheoDirty2)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = ql.Date(29, ql.September, 2003)
floatingBondMaturityDate1 = ql.Date(29, ql.September, 2013)
floatingBondSchedule1 = ql.Schedule(
floatingBondStartDate1,
floatingBondMaturityDate1,
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg1 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
[fixingDays],
[],
[0.0056],
[],
[],
inArrears,
)
)
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, ql.Following)
floatingBondLeg1.append(ql.SimpleCashFlow(100.0, floatingbondRedemption1))
## generic bond
floatingBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate1,
floatingBondStartDate1,
floatingBondLeg1,
)
floatingBond1.setPricingEngine(bondEngine)
## equivalent specialized floater
floatingSpecializedBond1 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
fixingDays,
[1],
[0.0056],
[],
[],
inArrears,
100.0,
ql.Date(29, ql.September, 2003),
)
floatingSpecializedBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
ql.setCouponPricer(floatingSpecializedBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
floatingBondTheoValue1 = floatingBond1.cleanPrice()
floatingSpecializedBondTheoValue1 = floatingSpecializedBond1.cleanPrice()
error5 = abs(floatingBondTheoValue1 - floatingSpecializedBondTheoValue1)
self.assertFalse(
error5 > tolerance,
"wrong clean price for fixed bond:"
+ "\n generic fixed rate bond's theo clean price: "
+ str(floatingBondTheoValue1)
+ "\n equivalent specialized bond's theo clean price: "
+ str(floatingSpecializedBondTheoValue1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
floatingBondTheoDirty1 = floatingBondTheoValue1 + floatingBond1.accruedAmount()
floatingSpecializedBondTheoDirty1 = floatingSpecializedBondTheoValue1 + floatingSpecializedBond1.accruedAmount()
error6 = abs(floatingBondTheoDirty1 - floatingSpecializedBondTheoDirty1)
self.assertFalse(
error6 > tolerance,
"wrong dirty price for frn bond:"
+ "\n generic frn bond's dirty clean price: "
+ str(floatingBondTheoDirty1)
+ "\n equivalent specialized bond's theo dirty price: "
+ str(floatingSpecializedBondTheoDirty1)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN Underlying bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = ql.Date(24, ql.September, 2004)
floatingBondMaturityDate2 = ql.Date(24, ql.September, 2018)
floatingBondSchedule2 = ql.Schedule(
floatingBondStartDate2,
floatingBondMaturityDate2,
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg2 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0025],
[],
[],
inArrears,
)
)
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ql.ModifiedFollowing)
floatingBondLeg2.append(ql.SimpleCashFlow(100.0, floatingbondRedemption2))
## generic bond
floatingBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate2,
floatingBondStartDate2,
floatingBondLeg2,
)
floatingBond2.setPricingEngine(bondEngine)
## equivalent specialized floater
floatingSpecializedBond2 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
fixingDays,
[1],
[0.0025],
[],
[],
inArrears,
100.0,
ql.Date(24, ql.September, 2004),
)
floatingSpecializedBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
ql.setCouponPricer(floatingSpecializedBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
floatingBondTheoValue2 = floatingBond2.cleanPrice()
floatingSpecializedBondTheoValue2 = floatingSpecializedBond2.cleanPrice()
error7 = abs(floatingBondTheoValue2 - floatingSpecializedBondTheoValue2)
self.assertFalse(
error7 > tolerance,
"wrong clean price for floater bond:"
+ "\n generic floater bond's theo clean price: "
+ str(floatingBondTheoValue2)
+ "\n equivalent specialized bond's theo clean price: "
+ str(floatingSpecializedBondTheoValue2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
floatingBondTheoDirty2 = floatingBondTheoValue2 + floatingBond2.accruedAmount()
floatingSpecializedTheoDirty2 = floatingSpecializedBondTheoValue2 + floatingSpecializedBond2.accruedAmount()
error8 = abs(floatingBondTheoDirty2 - floatingSpecializedTheoDirty2)
self.assertFalse(
error8 > tolerance,
"wrong dirty price for floater bond:"
+ "\n generic floater bond's theo dirty price: "
+ str(floatingBondTheoDirty2)
+ "\n equivalent specialized bond's theo dirty price: "
+ str(floatingSpecializedTheoDirty2)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = ql.Date(22, ql.August, 2005)
cmsBondMaturityDate1 = ql.Date(22, ql.August, 2020)
cmsBondSchedule1 = ql.Schedule(
cmsBondStartDate1,
cmsBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg1 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[],
[],
[0.055],
[0.025],
inArrears,
)
)
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, ql.Following)
cmsBondLeg1.append(ql.SimpleCashFlow(100.0, cmsbondRedemption1))
## generic cms bond
cmsBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate1, cmsBondStartDate1, cmsBondLeg1
)
cmsBond1.setPricingEngine(bondEngine)
## equivalent specialized cms bond
cmsSpecializedBond1 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
fixingDays,
[1.0],
[0.0],
[0.055],
[0.025],
inArrears,
100.0,
ql.Date(22, ql.August, 2005),
)
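## note: the hand-built CmsLeg above left gearings and spreads empty, which
## (going by the wrappers' defaults) is equivalent to the explicit unit
## gearing and zero spread passed to CmsRateBond here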
cmsSpecializedBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
ql.setCouponPricer(cmsSpecializedBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondTheoValue1 = cmsBond1.cleanPrice()
cmsSpecializedBondTheoValue1 = cmsSpecializedBond1.cleanPrice()
error9 = abs(cmsBondTheoValue1 - cmsSpecializedBondTheoValue1)
self.assertFalse(
error9 > tolerance,
"wrong clean price for cms bond:"
+ "\n generic cms bond's theo clean price: "
+ str(cmsBondTheoValue1)
+ "\n equivalent specialized bond's theo clean price: "
+ str(cmsSpecializedBondTheoValue1)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
cmsBondTheoDirty1 = cmsBondTheoValue1 + cmsBond1.accruedAmount()
cmsSpecializedBondTheoDirty1 = cmsSpecializedBondTheoValue1 + cmsSpecializedBond1.accruedAmount()
error10 = abs(cmsBondTheoDirty1 - cmsSpecializedBondTheoDirty1)
self.assertFalse(
error10 > tolerance,
"wrong dirty price for cms bond:"
+ "\n generic cms bond's theo dirty price: "
+ str(cmsBondTheoDirty1)
+ "\n specialized cms bond's theo dirty price: "
+ str(cmsSpecializedBondTheoDirty1)
+ "\n error: "
+ str(error10)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS Underlying bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondStartDate2 = ql.Date(6, ql.May, 2005)
cmsBondMaturityDate2 = ql.Date(6, ql.May, 2015)
cmsBondSchedule2 = ql.Schedule(
cmsBondStartDate2,
cmsBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg2 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
[fixingDays],
[0.84],
[],
[],
[],
inArrears,
)
)
cmsbondRedemption2 = bondCalendar.adjust(cmsBondMaturityDate2, ql.Following)
cmsBondLeg2.append(ql.SimpleCashFlow(100.0, cmsbondRedemption2))
## generic bond
cmsBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate2, cmsBondStartDate2, cmsBondLeg2
)
cmsBond2.setPricingEngine(bondEngine)
## equivalent specialized cms bond
cmsSpecializedBond2 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
fixingDays,
[0.84],
[0.0],
[],
[],
inArrears,
100.0,
ql.Date(6, ql.May, 2005),
)
cmsSpecializedBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
ql.setCouponPricer(cmsSpecializedBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondTheoValue2 = cmsBond2.cleanPrice()
cmsSpecializedBondTheoValue2 = cmsSpecializedBond2.cleanPrice()
error11 = abs(cmsBondTheoValue2 - cmsSpecializedBondTheoValue2)
self.assertFalse(
error11 > tolerance,
"wrong clean price for cms bond:"
+ "\n generic cms bond's theo clean price: "
+ str(cmsBondTheoValue2)
+ "\n cms bond's theo clean price: "
+ str(cmsSpecializedBondTheoValue2)
+ "\n error: "
+ str(error11)
+ "\n tolerance: "
+ str(tolerance),
)
cmsBondTheoDirty2 = cmsBondTheoValue2 + cmsBond2.accruedAmount()
cmsSpecializedBondTheoDirty2 = cmsSpecializedBondTheoValue2 + cmsSpecializedBond2.accruedAmount()
error12 = abs(cmsBondTheoDirty2 - cmsSpecializedBondTheoDirty2)
self.assertFalse(
error12 > tolerance,
"wrong dirty price for cms bond:"
+ "\n generic cms bond's dirty price: "
+ str(cmsBondTheoDirty2)
+ "\n specialized cms bond's theo dirty price: "
+ str(cmsSpecializedBondTheoDirty2)
+ "\n error: "
+ str(error12)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBondStartDate1 = ql.Date(19, ql.December, 1985)
zeroCpnBondMaturityDate1 = ql.Date(20, ql.December, 2015)
zeroCpnBondRedemption1 = bondCalendar.adjust(zeroCpnBondMaturityDate1, ql.Following)
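## a zero-coupon bond carries a single cash flow: the redemption paid on the (calendar-adjusted) maturity date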
zeroCpnBondLeg1 = ql.Leg([ql.SimpleCashFlow(100.0, zeroCpnBondRedemption1)])
## generic bond
zeroCpnBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate1,
zeroCpnBondStartDate1,
zeroCpnBondLeg1,
)
zeroCpnBond1.setPricingEngine(bondEngine)
## specialized zerocpn bond
zeroCpnSpecializedBond1 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(20, ql.December, 2015),
ql.Following,
100.0,
ql.Date(19, ql.December, 1985),
)
zeroCpnSpecializedBond1.setPricingEngine(bondEngine)
zeroCpnBondTheoValue1 = zeroCpnBond1.cleanPrice()
zeroCpnSpecializedBondTheoValue1 = zeroCpnSpecializedBond1.cleanPrice()
error13 = abs(zeroCpnBondTheoValue1 - zeroCpnSpecializedBondTheoValue1)
self.assertFalse(
error13 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n generic zero bond's clean price: "
+ str(zeroCpnBondTheoValue1)
+ "\n specialized zero bond's clean price: "
+ str(zeroCpnSpecializedBondTheoValue1)
+ "\n error: "
+ str(error13)
+ "\n tolerance: "
+ str(tolerance),
)
zeroCpnBondTheoDirty1 = zeroCpnBondTheoValue1 + zeroCpnBond1.accruedAmount()
zeroCpnSpecializedBondTheoDirty1 = zeroCpnSpecializedBondTheoValue1 + zeroCpnSpecializedBond1.accruedAmount()
error14 = abs(zeroCpnBondTheoDirty1 - zeroCpnSpecializedBondTheoDirty1)
self.assertFalse(
error14 > tolerance,
"wrong dirty price for zero bond:"
+ "\n generic zerocpn bond's dirty price: "
+ str(zeroCpnBondTheoDirty1)
+ "\n specialized zerocpn bond's clean price: "
+ str(zeroCpnSpecializedBondTheoDirty1)
+ "\n error: "
+ str(error14)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBondStartDate2 = ql.Date(17, ql.February, 1998)
zeroCpnBondMaturityDate2 = ql.Date(17, ql.February, 2028)
zerocpbondRedemption2 = bondCalendar.adjust(zeroCpnBondMaturityDate2, ql.Following)
zeroCpnBondLeg2 = ql.Leg([ql.SimpleCashFlow(100.0, zerocpbondRedemption2)])
## generic bond
zeroCpnBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate2,
zeroCpnBondStartDate2,
zeroCpnBondLeg2,
)
zeroCpnBond2.setPricingEngine(bondEngine)
## specialized zerocpn bond
zeroCpnSpecializedBond2 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(17, ql.February, 2028),
ql.Following,
100.0,
ql.Date(17, ql.February, 1998),
)
zeroCpnSpecializedBond2.setPricingEngine(bondEngine)
zeroCpnBondTheoValue2 = zeroCpnBond2.cleanPrice()
zeroCpnSpecializedBondTheoValue2 = zeroCpnSpecializedBond2.cleanPrice()
error15 = abs(zeroCpnBondTheoValue2 - zeroCpnSpecializedBondTheoValue2)
self.assertFalse(
error15 > tolerance,
"wrong clean price for zero coupon bond:"
+ "\n generic zerocpn bond's clean price: "
+ str(zeroCpnBondTheoValue2)
+ "\n specialized zerocpn bond's clean price: "
+ str(zeroCpnSpecializedBondTheoValue2)
+ "\n error: "
+ str(error15)
+ "\n tolerance: "
+ str(tolerance),
)
zeroCpnBondTheoDirty2 = zeroCpnBondTheoValue2 + zeroCpnBond2.accruedAmount()
zeroCpnSpecializedBondTheoDirty2 = zeroCpnSpecializedBondTheoValue2 + zeroCpnSpecializedBond2.accruedAmount()
error16 = abs(zeroCpnBondTheoDirty2 - zeroCpnSpecializedBondTheoDirty2)
self.assertFalse(
error16 > tolerance,
"wrong dirty price for zero coupon bond:"
+ "\n generic zerocpn bond's dirty price: "
+ str(zeroCpnBondTheoDirty2)
+ "\n specialized zerocpn bond's dirty price: "
+ str(zeroCpnSpecializedBondTheoDirty2)
+ "\n error: "
+ str(error16)
+ "\n tolerance: "
+ str(tolerance),
)
def testSpecializedBondVsGenericBondUsingAsw(self):
"""Testing asset-swap prices and spreads for specialized bond against equivalent generic bond..."""
bondCalendar = ql.TARGET()
settlementDays = 3
fixingDays = 2
payFixedRate = True
parAssetSwap = True
inArrears = False
## Fixed bond (Isin: DE0001135275 DBR 4 01/04/37)
## maturity doesn't occur on a business day
fixedBondStartDate1 = ql.Date(4, ql.January, 2005)
fixedBondMaturityDate1 = ql.Date(4, ql.January, 2037)
fixedBondSchedule1 = ql.Schedule(
fixedBondStartDate1,
fixedBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg1 = list(
ql.FixedRateLeg(fixedBondSchedule1, ql.ActualActual(ql.ActualActual.ISDA), [self.faceAmount], [0.04])
)
fixedbondRedemption1 = bondCalendar.adjust(fixedBondMaturityDate1, ql.Following)
fixedBondLeg1.append(ql.SimpleCashFlow(100.0, fixedbondRedemption1))
## generic bond
fixedBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate1, fixedBondStartDate1, fixedBondLeg1
)
bondEngine = ql.DiscountingBondEngine(self.termStructure)
swapEngine = ql.DiscountingSwapEngine(self.termStructure, False)
fixedBond1.setPricingEngine(bondEngine)
## equivalent specialized fixed rate bond
fixedSpecializedBond1 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule1,
[0.04],
ql.ActualActual(ql.ActualActual.ISDA),
ql.Following,
100.0,
ql.Date(4, ql.January, 2005),
)
fixedSpecializedBond1.setPricingEngine(bondEngine)
fixedBondPrice1 = fixedBond1.cleanPrice()
fixedSpecializedBondPrice1 = fixedSpecializedBond1.cleanPrice()
fixedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap1.setPricingEngine(swapEngine)
fixedSpecializedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
fixedSpecializedBond1,
fixedSpecializedBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedSpecializedBondAssetSwap1.setPricingEngine(swapEngine)
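## fairCleanPrice() is the bond clean price that makes the asset swap's NPV zero at the given spread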
fixedBondAssetSwapPrice1 = fixedBondAssetSwap1.fairCleanPrice()
fixedSpecializedBondAssetSwapPrice1 = fixedSpecializedBondAssetSwap1.fairCleanPrice()
tolerance = 1.0e-13
error1 = abs(fixedBondAssetSwapPrice1 - fixedSpecializedBondAssetSwapPrice1)
self.assertFalse(
error1 > tolerance,
"wrong clean price for fixed bond:"
+ "\n generic fixed rate bond's clean price: "
+ str(fixedBondAssetSwapPrice1)
+ "\n equivalent specialized bond's clean price: "
+ str(fixedSpecializedBondAssetSwapPrice1)
+ "\n error: "
+ str(error1)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
fixedBondMktPrice1 = 91.832
fixedBondASW1 = ql.AssetSwap(
payFixedRate,
fixedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondASW1.setPricingEngine(swapEngine)
fixedSpecializedBondASW1 = ql.AssetSwap(
payFixedRate,
fixedSpecializedBond1,
fixedBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedSpecializedBondASW1.setPricingEngine(swapEngine)
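## fairSpread() is the spread over the floating leg that makes the asset swap's NPV zero at the given bond price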
fixedBondASWSpread1 = fixedBondASW1.fairSpread()
fixedSpecializedBondASWSpread1 = fixedSpecializedBondASW1.fairSpread()
error2 = abs(fixedBondASWSpread1 - fixedSpecializedBondASWSpread1)
self.assertFalse(
error2 > tolerance,
"wrong asw spread for fixed bond:"
+ "\n generic fixed rate bond's asw spread: "
+ str(fixedBondASWSpread1)
+ "\n equivalent specialized bond's asw spread: "
+ str(fixedSpecializedBondASWSpread1)
+ "\n error: "
+ str(error2)
+ "\n tolerance: "
+ str(tolerance),
)
## Fixed bond (Isin: IT0006527060 IBRD 5 02/05/19)
## maturity occurs on a business day
fixedBondStartDate2 = ql.Date(5, ql.February, 2005)
fixedBondMaturityDate2 = ql.Date(5, ql.February, 2019)
fixedBondSchedule2 = ql.Schedule(
fixedBondStartDate2,
fixedBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
fixedBondLeg2 = list(
ql.FixedRateLeg(fixedBondSchedule2, ql.Thirty360(ql.Thirty360.BondBasis), [self.faceAmount], [0.05])
)
fixedbondRedemption2 = bondCalendar.adjust(fixedBondMaturityDate2, ql.Following)
fixedBondLeg2.append(ql.SimpleCashFlow(100.0, fixedbondRedemption2))
## generic bond
fixedBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, fixedBondMaturityDate2, fixedBondStartDate2, fixedBondLeg2
)
fixedBond2.setPricingEngine(bondEngine)
## equivalent specialized fixed rate bond
fixedSpecializedBond2 = ql.FixedRateBond(
settlementDays,
self.faceAmount,
fixedBondSchedule2,
[0.05],
ql.Thirty360(ql.Thirty360.BondBasis),
ql.Following,
100.0,
ql.Date(5, ql.February, 2005),
)
fixedSpecializedBond2.setPricingEngine(bondEngine)
fixedBondPrice2 = fixedBond2.cleanPrice()
fixedSpecializedBondPrice2 = fixedSpecializedBond2.cleanPrice()
fixedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondAssetSwap2.setPricingEngine(swapEngine)
fixedSpecializedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
fixedSpecializedBond2,
fixedSpecializedBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedSpecializedBondAssetSwap2.setPricingEngine(swapEngine)
fixedBondAssetSwapPrice2 = fixedBondAssetSwap2.fairCleanPrice()
fixedSpecializedBondAssetSwapPrice2 = fixedSpecializedBondAssetSwap2.fairCleanPrice()
error3 = abs(fixedBondAssetSwapPrice2 - fixedSpecializedBondAssetSwapPrice2)
self.assertFalse(
error3 > tolerance,
"wrong clean price for fixed bond:"
+ "\n generic fixed rate bond's clean price: "
+ str(fixedBondAssetSwapPrice2)
+ "\n equivalent specialized bond's clean price: "
+ str(fixedSpecializedBondAssetSwapPrice2)
+ "\n error: "
+ str(error3)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
fixedBondMktPrice2 = 102.178
fixedBondASW2 = ql.AssetSwap(
payFixedRate,
fixedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedBondASW2.setPricingEngine(swapEngine)
fixedSpecializedBondASW2 = ql.AssetSwap(
payFixedRate,
fixedSpecializedBond2,
fixedBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
fixedSpecializedBondASW2.setPricingEngine(swapEngine)
fixedBondASWSpread2 = fixedBondASW2.fairSpread()
fixedSpecializedBondASWSpread2 = fixedSpecializedBondASW2.fairSpread()
error4 = abs(fixedBondASWSpread2 - fixedSpecializedBondASWSpread2)
self.assertFalse(
error4 > tolerance,
"wrong asw spread for fixed bond:"
+ "\n generic fixed rate bond's asw spread: "
+ str(fixedBondASWSpread2)
+ "\n equivalent specialized bond's asw spread: "
+ str(fixedSpecializedBondASWSpread2)
+ "\n error: "
+ str(error4)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN bond (Isin: IT0003543847 ISPIM 0 09/29/13)
## maturity doesn't occur on a business day
floatingBondStartDate1 = ql.Date(29, ql.September, 2003)
floatingBondMaturityDate1 = ql.Date(29, ql.September, 2013)
floatingBondSchedule1 = ql.Schedule(
floatingBondStartDate1,
floatingBondMaturityDate1,
ql.Period(ql.Semiannual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg1 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
[fixingDays],
[],
[0.0056],
[],
[],
inArrears,
)
)
floatingbondRedemption1 = bondCalendar.adjust(floatingBondMaturityDate1, ql.Following)
floatingBondLeg1.append(ql.SimpleCashFlow(100.0, floatingbondRedemption1))
## generic bond
floatingBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate1,
floatingBondStartDate1,
floatingBondLeg1,
)
floatingBond1.setPricingEngine(bondEngine)
## equivalent specialized floater
floatingSpecializedBond1 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule1,
self.iborIndex,
ql.Actual360(),
ql.Following,
fixingDays,
[1],
[0.0056],
[],
[],
inArrears,
100.0,
ql.Date(29, ql.September, 2003),
)
floatingSpecializedBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond1.cashflows(), self.pricer)
ql.setCouponPricer(floatingSpecializedBond1.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(27, ql.March, 2007), 0.0402)
floatingBondPrice1 = floatingBond1.cleanPrice()
floatingSpecializedBondPrice1 = floatingSpecializedBond1.cleanPrice()
floatingBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap1.setPricingEngine(swapEngine)
floatingSpecializedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
floatingSpecializedBond1,
floatingSpecializedBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingSpecializedBondAssetSwap1.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice1 = floatingBondAssetSwap1.fairCleanPrice()
floatingSpecializedBondAssetSwapPrice1 = floatingSpecializedBondAssetSwap1.fairCleanPrice()
error5 = abs(floatingBondAssetSwapPrice1 - floatingSpecializedBondAssetSwapPrice1)
self.assertFalse(
error5 > tolerance,
"wrong clean price for frnbond:"
+ "\n generic frn rate bond's clean price: "
+ str(floatingBondAssetSwapPrice1)
+ "\n equivalent specialized bond's price: "
+ str(floatingSpecializedBondAssetSwapPrice1)
+ "\n error: "
+ str(error5)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
floatingBondMktPrice1 = 101.33
floatingBondASW1 = ql.AssetSwap(
payFixedRate,
floatingBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondASW1.setPricingEngine(swapEngine)
floatingSpecializedBondASW1 = ql.AssetSwap(
payFixedRate,
floatingSpecializedBond1,
floatingBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingSpecializedBondASW1.setPricingEngine(swapEngine)
floatingBondASWSpread1 = floatingBondASW1.fairSpread()
floatingSpecializedBondASWSpread1 = floatingSpecializedBondASW1.fairSpread()
error6 = abs(floatingBondASWSpread1 - floatingSpecializedBondASWSpread1)
self.assertFalse(
error6 > tolerance,
"wrong asw spread for fixed bond:"
+ "\n generic frn rate bond's asw spread: "
+ str(floatingBondASWSpread1)
+ "\n equivalent specialized bond's asw spread: "
+ str(floatingSpecializedBondASWSpread1)
+ "\n error: "
+ str(error6)
+ "\n tolerance: "
+ str(tolerance),
)
## FRN bond (Isin: XS0090566539 COE 0 09/24/18)
## maturity occurs on a business day
floatingBondStartDate2 = ql.Date(24, ql.September, 2004)
floatingBondMaturityDate2 = ql.Date(24, ql.September, 2018)
floatingBondSchedule2 = ql.Schedule(
floatingBondStartDate2,
floatingBondMaturityDate2,
ql.Period(ql.Semiannual),
bondCalendar,
ql.ModifiedFollowing,
ql.ModifiedFollowing,
ql.DateGeneration.Backward,
False,
)
floatingBondLeg2 = list(
ql.IborLeg(
[self.faceAmount],
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
[fixingDays],
[],
[0.0025],
[],
[],
inArrears,
)
)
floatingbondRedemption2 = bondCalendar.adjust(floatingBondMaturityDate2, ql.ModifiedFollowing)
floatingBondLeg2.append(ql.SimpleCashFlow(100.0, floatingbondRedemption2))
## generic bond
floatingBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
floatingBondMaturityDate2,
floatingBondStartDate2,
floatingBondLeg2,
)
floatingBond2.setPricingEngine(bondEngine)
## equivalent specialized floater
floatingSpecializedBond2 = ql.FloatingRateBond(
settlementDays,
self.faceAmount,
floatingBondSchedule2,
self.iborIndex,
ql.Actual360(),
ql.ModifiedFollowing,
fixingDays,
[1],
[0.0025],
[],
[],
inArrears,
100.0,
ql.Date(24, ql.September, 2004),
)
floatingSpecializedBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(floatingBond2.cashflows(), self.pricer)
ql.setCouponPricer(floatingSpecializedBond2.cashflows(), self.pricer)
self.iborIndex.addFixing(ql.Date(22, ql.March, 2007), 0.04013)
floatingBondPrice2 = floatingBond2.cleanPrice()
floatingSpecializedBondPrice2 = floatingSpecializedBond2.cleanPrice()
floatingBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondAssetSwap2.setPricingEngine(swapEngine)
floatingSpecializedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
floatingSpecializedBond2,
floatingSpecializedBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingSpecializedBondAssetSwap2.setPricingEngine(swapEngine)
floatingBondAssetSwapPrice2 = floatingBondAssetSwap2.fairCleanPrice()
floatingSpecializedBondAssetSwapPrice2 = floatingSpecializedBondAssetSwap2.fairCleanPrice()
error7 = abs(floatingBondAssetSwapPrice2 - floatingSpecializedBondAssetSwapPrice2)
self.assertFalse(
error7 > tolerance,
"wrong clean price for frnbond:"
+ "\n generic frn rate bond's clean price: "
+ str(floatingBondAssetSwapPrice2)
+ "\n equivalent specialized frn bond's price: "
+ str(floatingSpecializedBondAssetSwapPrice2)
+ "\n error: "
+ str(error7)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
floatingBondMktPrice2 = 101.26
floatingBondASW2 = ql.AssetSwap(
payFixedRate,
floatingBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingBondASW2.setPricingEngine(swapEngine)
floatingSpecializedBondASW2 = ql.AssetSwap(
payFixedRate,
floatingSpecializedBond2,
floatingBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
floatingSpecializedBondASW2.setPricingEngine(swapEngine)
floatingBondASWSpread2 = floatingBondASW2.fairSpread()
floatingSpecializedBondASWSpread2 = floatingSpecializedBondASW2.fairSpread()
error8 = abs(floatingBondASWSpread2 - floatingSpecializedBondASWSpread2)
self.assertFalse(
error8 > tolerance,
"wrong asw spread for frn bond:"
+ "\n generic frn rate bond's asw spread: "
+ str(floatingBondASWSpread2)
+ "\n equivalent specialized bond's asw spread: "
+ str(floatingSpecializedBondASWSpread2)
+ "\n error: "
+ str(error8)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS bond (Isin: XS0228052402 CRDIT 0 8/22/20)
## maturity doesn't occur on a business day
cmsBondStartDate1 = ql.Date(22, ql.August, 2005)
cmsBondMaturityDate1 = ql.Date(22, ql.August, 2020)
cmsBondSchedule1 = ql.Schedule(
cmsBondStartDate1,
cmsBondMaturityDate1,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg1 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(),
ql.Following,
[fixingDays],
[],
[],
[0.055],
[0.025],
inArrears,
)
)
cmsbondRedemption1 = bondCalendar.adjust(cmsBondMaturityDate1, ql.Following)
cmsBondLeg1.append(ql.SimpleCashFlow(100.0, cmsbondRedemption1))
## generic cms bond
cmsBond1 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate1, cmsBondStartDate1, cmsBondLeg1
)
cmsBond1.setPricingEngine(bondEngine)
## equivalent specialized cms bond
cmsSpecializedBond1 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule1,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[1.0],
[0.0],
[0.055],
[0.025],
inArrears,
100.0,
ql.Date(22, ql.August, 2005),
)
cmsSpecializedBond1.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond1.cashflows(), self.cmspricer)
ql.setCouponPricer(cmsSpecializedBond1.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(18, ql.August, 2006), 0.04158)
cmsBondPrice1 = cmsBond1.cleanPrice()
cmsSpecializedBondPrice1 = cmsSpecializedBond1.cleanPrice()
cmsBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap1.setPricingEngine(swapEngine)
cmsSpecializedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
cmsSpecializedBond1,
cmsSpecializedBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsSpecializedBondAssetSwap1.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice1 = cmsBondAssetSwap1.fairCleanPrice()
cmsSpecializedBondAssetSwapPrice1 = cmsSpecializedBondAssetSwap1.fairCleanPrice()
error9 = abs(cmsBondAssetSwapPrice1 - cmsSpecializedBondAssetSwapPrice1)
self.assertFalse(
error9 > tolerance,
"wrong clean price for cmsbond:"
+ "\n generic bond's clean price: "
+ str(cmsBondAssetSwapPrice1)
+ "\n equivalent specialized cms rate bond's price: "
+ str(cmsSpecializedBondAssetSwapPrice1)
+ "\n error: "
+ str(error9)
+ "\n tolerance: "
+ str(tolerance),
)
cmsBondMktPrice1 = 87.02 ## market executable price as of 4th sept 2007
cmsBondASW1 = ql.AssetSwap(
payFixedRate,
cmsBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondASW1.setPricingEngine(swapEngine)
cmsSpecializedBondASW1 = ql.AssetSwap(
payFixedRate,
cmsSpecializedBond1,
cmsBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsSpecializedBondASW1.setPricingEngine(swapEngine)
cmsBondASWSpread1 = cmsBondASW1.fairSpread()
cmsSpecializedBondASWSpread1 = cmsSpecializedBondASW1.fairSpread()
error10 = abs(cmsBondASWSpread1 - cmsSpecializedBondASWSpread1)
self.assertFalse(
error10 > tolerance,
"wrong asw spread for cm bond:"
+ "\n generic cms rate bond's asw spread: "
+ str(cmsBondASWSpread1)
+ "\n equivalent specialized bond's asw spread: "
+ str(cmsSpecializedBondASWSpread1)
+ "\n error: "
+ str(error10)
+ "\n tolerance: "
+ str(tolerance),
)
## CMS bond (Isin: XS0218766664 ISPIM 0 5/6/15)
## maturity occurs on a business day
cmsBondStartDate2 = ql.Date(6, ql.May, 2005)
cmsBondMaturityDate2 = ql.Date(6, ql.May, 2015)
cmsBondSchedule2 = ql.Schedule(
cmsBondStartDate2,
cmsBondMaturityDate2,
ql.Period(ql.Annual),
bondCalendar,
ql.Unadjusted,
ql.Unadjusted,
ql.DateGeneration.Backward,
False,
)
cmsBondLeg2 = list(
ql.CmsLeg(
[self.faceAmount],
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(),
ql.Following,
[fixingDays],
[0.84],
[],
[],
[],
inArrears,
)
)
cmsbondRedemption2 = bondCalendar.adjust(cmsBondMaturityDate2, ql.Following)
cmsBondLeg2.append(ql.SimpleCashFlow(100.0, cmsbondRedemption2))
## generic bond
cmsBond2 = ql.Bond(
settlementDays, bondCalendar, self.faceAmount, cmsBondMaturityDate2, cmsBondStartDate2, cmsBondLeg2
)
cmsBond2.setPricingEngine(bondEngine)
## equivalent specialized cms bond
cmsSpecializedBond2 = ql.CmsRateBond(
settlementDays,
self.faceAmount,
cmsBondSchedule2,
self.swapIndex,
ql.Thirty360(),
ql.Following,
fixingDays,
[0.84],
[0.0],
[],
[],
inArrears,
100.0,
ql.Date(6, ql.May, 2005),
)
cmsSpecializedBond2.setPricingEngine(bondEngine)
ql.setCouponPricer(cmsBond2.cashflows(), self.cmspricer)
ql.setCouponPricer(cmsSpecializedBond2.cashflows(), self.cmspricer)
self.swapIndex.addFixing(ql.Date(4, ql.May, 2006), 0.04217)
cmsBondPrice2 = cmsBond2.cleanPrice()
cmsSpecializedBondPrice2 = cmsSpecializedBond2.cleanPrice()
cmsBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondAssetSwap2.setPricingEngine(swapEngine)
cmsSpecializedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
cmsSpecializedBond2,
cmsSpecializedBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsSpecializedBondAssetSwap2.setPricingEngine(swapEngine)
cmsBondAssetSwapPrice2 = cmsBondAssetSwap2.fairCleanPrice()
cmsSpecializedBondAssetSwapPrice2 = cmsSpecializedBondAssetSwap2.fairCleanPrice()
error11 = abs(cmsBondAssetSwapPrice2 - cmsSpecializedBondAssetSwapPrice2)
self.assertFalse(
error11 > tolerance,
"wrong clean price for cmsbond:"
+ "\n generic bond's clean price: "
+ str(cmsBondAssetSwapPrice2)
+ "\n equivalent specialized cms rate bond's price: "
+ str(cmsSpecializedBondAssetSwapPrice2)
+ "\n error: "
+ str(error11)
+ "\n tolerance: "
+ str(tolerance),
)
cmsBondMktPrice2 = 94.35 ## market executable price as of 4th sept 2007
cmsBondASW2 = ql.AssetSwap(
payFixedRate,
cmsBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsBondASW2.setPricingEngine(swapEngine)
cmsSpecializedBondASW2 = ql.AssetSwap(
payFixedRate,
cmsSpecializedBond2,
cmsBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
cmsSpecializedBondASW2.setPricingEngine(swapEngine)
cmsBondASWSpread2 = cmsBondASW2.fairSpread()
cmsSpecializedBondASWSpread2 = cmsSpecializedBondASW2.fairSpread()
error12 = abs(cmsBondASWSpread2 - cmsSpecializedBondASWSpread2)
self.assertFalse(
error12 > tolerance,
"wrong asw spread for cm bond:"
+ "\n generic cms rate bond's asw spread: "
+ str(cmsBondASWSpread2)
+ "\n equivalent specialized bond's asw spread: "
+ str(cmsSpecializedBondASWSpread2)
+ "\n error: "
+ str(error12)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero-Coupon bond (Isin: DE0004771662 IBRD 0 12/20/15)
## maturity doesn't occur on a business day
zeroCpnBondStartDate1 = ql.Date(19, ql.December, 1985)
zeroCpnBondMaturityDate1 = ql.Date(20, ql.December, 2015)
zeroCpnBondRedemption1 = bondCalendar.adjust(zeroCpnBondMaturityDate1, ql.Following)
zeroCpnBondLeg1 = ql.Leg([ql.SimpleCashFlow(100.0, zeroCpnBondRedemption1)])
## generic bond
zeroCpnBond1 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate1,
zeroCpnBondStartDate1,
zeroCpnBondLeg1,
)
zeroCpnBond1.setPricingEngine(bondEngine)
## specialized zerocpn bond
zeroCpnSpecializedBond1 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(20, ql.December, 2015),
ql.Following,
100.0,
ql.Date(19, ql.December, 1985),
)
zeroCpnSpecializedBond1.setPricingEngine(bondEngine)
zeroCpnBondPrice1 = zeroCpnBond1.cleanPrice()
zeroCpnSpecializedBondPrice1 = zeroCpnSpecializedBond1.cleanPrice()
zeroCpnBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondAssetSwap1.setPricingEngine(swapEngine)
zeroCpnSpecializedBondAssetSwap1 = ql.AssetSwap(
payFixedRate,
zeroCpnSpecializedBond1,
zeroCpnSpecializedBondPrice1,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnSpecializedBondAssetSwap1.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice1 = zeroCpnBondAssetSwap1.fairCleanPrice()
zeroCpnSpecializedBondAssetSwapPrice1 = zeroCpnSpecializedBondAssetSwap1.fairCleanPrice()
error13 = abs(zeroCpnBondAssetSwapPrice1 - zeroCpnSpecializedBondAssetSwapPrice1)
self.assertFalse(
error13 > tolerance,
"wrong clean price for zerocpn bond:"
+ "\n generic zero cpn bond's clean price: "
+ str(zeroCpnBondAssetSwapPrice1)
+ "\n specialized equivalent bond's price: "
+ str(zeroCpnSpecializedBondAssetSwapPrice1)
+ "\n error: "
+ str(error13)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
zeroCpnBondMktPrice1 = 72.277
zeroCpnBondASW1 = ql.AssetSwap(
payFixedRate,
zeroCpnBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondASW1.setPricingEngine(swapEngine)
zeroCpnSpecializedBondASW1 = ql.AssetSwap(
payFixedRate,
zeroCpnSpecializedBond1,
zeroCpnBondMktPrice1,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnSpecializedBondASW1.setPricingEngine(swapEngine)
zeroCpnBondASWSpread1 = zeroCpnBondASW1.fairSpread()
zeroCpnSpecializedBondASWSpread1 = zeroCpnSpecializedBondASW1.fairSpread()
error14 = abs(zeroCpnBondASWSpread1 - zeroCpnSpecializedBondASWSpread1)
self.assertFalse(
error14 > tolerance,
"wrong asw spread for zeroCpn bond:"
+ "\n generic zeroCpn bond's asw spread: "
+ str(zeroCpnBondASWSpread1)
+ "\n equivalent specialized bond's asw spread: "
+ str(zeroCpnSpecializedBondASWSpread1)
+ "\n error: "
+ str(error14)
+ "\n tolerance: "
+ str(tolerance),
)
## Zero Coupon bond (Isin: IT0001200390 ISPIM 0 02/17/28)
## maturity occurs on a business day
zeroCpnBondStartDate2 = ql.Date(17, ql.February, 1998)
zeroCpnBondMaturityDate2 = ql.Date(17, ql.February, 2028)
zerocpbondRedemption2 = bondCalendar.adjust(zeroCpnBondMaturityDate2, ql.Following)
zeroCpnBondLeg2 = ql.Leg([ql.SimpleCashFlow(100.0, zerocpbondRedemption2)])
## generic bond
zeroCpnBond2 = ql.Bond(
settlementDays,
bondCalendar,
self.faceAmount,
zeroCpnBondMaturityDate2,
zeroCpnBondStartDate2,
zeroCpnBondLeg2,
)
zeroCpnBond2.setPricingEngine(bondEngine)
## specialized zerocpn bond
zeroCpnSpecializedBond2 = ql.ZeroCouponBond(
settlementDays,
bondCalendar,
self.faceAmount,
ql.Date(17, ql.February, 2028),
ql.Following,
100.0,
ql.Date(17, ql.February, 1998),
)
zeroCpnSpecializedBond2.setPricingEngine(bondEngine)
zeroCpnBondPrice2 = zeroCpnBond2.cleanPrice()
zeroCpnSpecializedBondPrice2 = zeroCpnSpecializedBond2.cleanPrice()
zeroCpnBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondAssetSwap2.setPricingEngine(swapEngine)
zeroCpnSpecializedBondAssetSwap2 = ql.AssetSwap(
payFixedRate,
zeroCpnSpecializedBond2,
zeroCpnSpecializedBondPrice2,
self.iborIndex,
self.nonnullspread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnSpecializedBondAssetSwap2.setPricingEngine(swapEngine)
zeroCpnBondAssetSwapPrice2 = zeroCpnBondAssetSwap2.fairCleanPrice()
zeroCpnSpecializedBondAssetSwapPrice2 = zeroCpnSpecializedBondAssetSwap2.fairCleanPrice()
error15 = abs(zeroCpnBondAssetSwapPrice2 - zeroCpnSpecializedBondAssetSwapPrice2)
self.assertFalse(
error15 > tolerance,
"wrong clean price for zerocpn bond:"
+ "\n generic zero cpn bond's clean price: "
+ str(zeroCpnBondAssetSwapPrice2)
+ "\n equivalent specialized bond's price: "
+ str(zeroCpnSpecializedBondAssetSwapPrice2)
+ "\n error: "
+ str(error15)
+ "\n tolerance: "
+ str(tolerance),
)
## market executable price as of 4th sept 2007
zeroCpnBondMktPrice2 = 72.277
zeroCpnBondASW2 = ql.AssetSwap(
payFixedRate,
zeroCpnBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnBondASW2.setPricingEngine(swapEngine)
zeroCpnSpecializedBondASW2 = ql.AssetSwap(
payFixedRate,
zeroCpnSpecializedBond2,
zeroCpnBondMktPrice2,
self.iborIndex,
self.spread,
ql.Schedule(),
self.iborIndex.dayCounter(),
parAssetSwap,
)
zeroCpnSpecializedBondASW2.setPricingEngine(swapEngine)
zeroCpnBondASWSpread2 = zeroCpnBondASW2.fairSpread()
zeroCpnSpecializedBondASWSpread2 = zeroCpnSpecializedBondASW2.fairSpread()
error16 = abs(zeroCpnBondASWSpread2 - zeroCpnSpecializedBondASWSpread2)
self.assertFalse(
error16 > tolerance,
"wrong asw spread for zeroCpn bond:"
+ "\n generic zeroCpn bond's asw spread: "
+ str(zeroCpnBondASWSpread2)
+ "\n equivalent specialized bond's asw spread: "
+ str(zeroCpnSpecializedBondASWSpread2)
+ "\n error: "
+ str(error16)
+ "\n tolerance: "
+ str(tolerance),
)
if __name__ == "__main__":
print("testing QuantLib " + ql.__version__)
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(AssetSwapTest, "test"))
unittest.TextTestRunner(verbosity=2).run(suite)
| 37.206536 | 121 | 0.541661 | 14,195 | 199,241 | 7.601902 | 0.04572 | 0.026624 | 0.013252 | 0.022426 | 0.87432 | 0.857389 | 0.853497 | 0.852496 | 0.837242 | 0.825955 | 0 | 0.048084 | 0.377156 | 199,241 | 5,354 | 122 | 37.213485 | 0.821477 | 0.054662 | 0 | 0.845766 | 0 | 0 | 0.094647 | 0 | 0 | 0 | 0 | 0 | 0.023114 | 1 | 0.002101 | false | 0 | 0.00042 | 0 | 0.002732 | 0.00021 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4747ab9882982bd8d63bfb8bd302b3558f044c1 | 9791 | py | Python | option_pricer/MC.py | tsengkasing/option-pricer | 89fff55070834698d801f3a6eb10e16d40fc7762 | ["MIT"] | null | null | null | option_pricer/MC.py | tsengkasing/option-pricer | 89fff55070834698d801f3a6eb10e16d40fc7762 | ["MIT"] | null | null | null | option_pricer/MC.py | tsengkasing/option-pricer | 89fff55070834698d801f3a6eb10e16d40fc7762 | ["MIT"] | null | null | null | # -*- coding: utf-8 -*-
'''
created by @ Qiangyu YAN
'''
import closed_form_formulas as form
import numpy as np
##########################################
# Arith Call Option, return 2 number
# as interval begin and end
# m is the number of paths
# control is bool, false - no control
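# seed initializes numpy's RNG so runs are reproducible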
##########################################
def Arith_Call_Option(S_0, sigma, r, T, K, n, m, control, seed):
Dt = float(T) / n  # time-step size; float() avoids Python 2 integer division truncating to zero
geo = form.geom_asian_call_option(S_0, sigma, r, T, K, n, t=0)
np.random.seed(seed)
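# mu is the deterministic part of the one-step GBM multiplier exp((r - sigma^2/2)*Dt + sigma*sqrt(Dt)*Z)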
mu = np.exp((r - 0.5*sigma*sigma) * Dt)
arithPayoff, geoPayoff = [], []
for i in range(m):
growthFactor = mu * np.exp(sigma * np.sqrt(Dt) * np.random.standard_normal())
Spath = []
Spath.append(S_0 * growthFactor)
for j in range(n-1):
# from lecture 4, page 16
growthFactor = mu * np.exp(sigma * np.sqrt(Dt)*np.random.standard_normal())
Spath.append(Spath[-1] * growthFactor)
# Arithmetic mean
arithMean = np.mean(Spath)
arithPayoff.append(np.exp(-r*T) * max(arithMean - K, 0))
# Geometric mean
if control:
geoMean = np.exp((1.0 / n) * np.sum(np.log(Spath)))  # geometric mean via log-average; 1.0 guards Python 2 integer division
geoPayoff.append(np.exp(-r*T) * max(geoMean - K, 0))
if control:
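# control variate: the geometric Asian price "geo" is known in closed form, so
# Z = X + theta*(E[Y] - Y) with theta = Cov(X, Y)/Var(Y) keeps E[X] while
# reducing the estimator's variance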
covXY = np.mean(np.multiply(arithPayoff,geoPayoff)) \
- np.mean(arithPayoff) * np.mean(geoPayoff)
theta = covXY / np.var(geoPayoff)
Z = arithPayoff + theta * (geo - geoPayoff)
Zmean = np.mean(Z)
Zstd = np.std(Z)
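# 95% confidence interval: estimate +/- 1.96 * std / sqrt(m)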
return Zmean-1.96*Zstd/np.sqrt(m), Zmean+1.96*Zstd/np.sqrt(m)
else:
Pmean = np.mean(arithPayoff)
Pstd = np.std(arithPayoff)
return Pmean-1.96*Pstd/np.sqrt(m), Pmean+1.96*Pstd/np.sqrt(m)
##########################################
# Arith Put Option, return 2 number
# as interval begin and end
# m is the number of paths
# control is bool, false - no control
##########################################
def Arith_Put_Option(S_0, sigma, r, T, K, n, m, control, seed):
Dt = float(T) / n  # time-step size; float() avoids Python 2 integer division truncating to zero
geo = form.geom_asian_put_option(S_0, sigma, r, T, K, n, t=0)
np.random.seed(seed)
mu = np.exp((r - 0.5*sigma*sigma) * Dt)
arithPayoff, geoPayoff = [], []
for i in range(m):
growthFactor = mu * np.exp(sigma * np.sqrt(Dt) * np.random.standard_normal())
Spath = []
Spath.append(S_0 * growthFactor)
for j in range(n-1):
# from lecture 4, page 16
growthFactor = mu * np.exp(sigma * np.sqrt(Dt)*np.random.standard_normal())
Spath.append(Spath[-1] * growthFactor)
# Arithmetic mean
arithMean = np.mean(Spath)
arithPayoff.append(np.exp(-r*T) * max(K - arithMean, 0))
# Geometric mean
if control:
geoMean = np.exp((1.0 / n) * np.sum(np.log(Spath)))  # geometric mean via log-average; 1.0 guards Python 2 integer division
geoPayoff.append(np.exp(-r*T) * max(K - geoMean, 0))
if control:
covXY = np.mean(np.multiply(arithPayoff,geoPayoff)) \
- np.mean(arithPayoff) * np.mean(geoPayoff)
theta = covXY / np.var(geoPayoff)
Z = arithPayoff + theta * (geo - geoPayoff)
Zmean = np.mean(Z)
Zstd = np.std(Z)
return Zmean-1.96*Zstd/np.sqrt(m), Zmean+1.96*Zstd/np.sqrt(m)
else:
Pmean = np.mean(arithPayoff)
Pstd = np.std(arithPayoff)
return Pmean-1.96*Pstd/np.sqrt(m), Pmean+1.96*Pstd/np.sqrt(m)
##########################################
# Arith Mean Call Basket, return 2 number
# as interval begin and end
# m is the number of paths
# control is bool, false - no control
##########################################
def Arith_Call_Basket(S_0_1, S_0_2, sigma_1, sigma_2, r, T, K, rho, m, control, seed):
geo = form.geom_basket_call_option(S_0_1, S_0_2, sigma_1, sigma_2, r, T, K, rho, t=0)
np.random.seed(seed)
arithPayoff, geoPayoff = [], []
for i in range(m):
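# the basket payoff depends only on terminal prices, so sample S(T) in a single step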
Z1 = np.random.standard_normal()
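# correlate the two drivers: Z2 = rho*Z1 + sqrt(1 - rho^2)*Z is the 2x2 Cholesky factorization of the correlation matrix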
Z2 = rho*Z1 + np.sqrt(1 - rho*rho)*np.random.standard_normal()
S_1 = S_0_1 * np.exp( (r - 0.5*sigma_1*sigma_1)*T \
+ sigma_1 * np.sqrt(T) * Z1 )
S_2 = S_0_2 * np.exp( (r - 0.5*sigma_2*sigma_2)*T \
+ sigma_2 * np.sqrt(T) * Z2 )
Spath = [S_1, S_2]
# Arithmetic mean
arithMean = np.mean(Spath)
arithPayoff.append(np.exp(-r*T) * max(arithMean - K, 0))
# Geometric mean
if control:
geoMean = np.exp( 0.5 * np.sum(np.log(Spath)))
geoPayoff.append(np.exp(-r*T) * max(geoMean - K, 0))
if control:
covXY = np.mean(np.multiply(arithPayoff,geoPayoff)) \
- np.mean(arithPayoff) * np.mean(geoPayoff)
theta = covXY / np.var(geoPayoff)
Z = arithPayoff + theta * (geo - geoPayoff)
Zmean = np.mean(Z)
Zstd = np.std(Z)
return Zmean-1.96*Zstd/np.sqrt(m), Zmean+1.96*Zstd/np.sqrt(m)
else:
Pmean = np.mean(arithPayoff)
Pstd = np.std(arithPayoff)
return Pmean-1.96*Pstd/np.sqrt(m), Pmean+1.96*Pstd/np.sqrt(m)
##########################################
# Arith Mean Put Basket, return 2 number
# as interval begin and end
# m is the number of paths
# control is bool, false - no control
##########################################
def Arith_Put_Basket(S_0_1, S_0_2, sigma_1, sigma_2, r, T, K, rho, m, control, seed):
geo = form.geom_basket_put_option(S_0_1, S_0_2, sigma_1, sigma_2, r, T, K, rho, t=0)
np.random.seed(seed)
arithPayoff, geoPayoff = [], []
for i in range(m):
Z1 = np.random.standard_normal()
Z2 = rho*Z1 + np.sqrt(1 - rho*rho)*np.random.standard_normal()
S_1 = S_0_1 * np.exp( (r - 0.5*sigma_1*sigma_1)*T \
+ sigma_1 * np.sqrt(T) * Z1 )
S_2 = S_0_2 * np.exp( (r - 0.5*sigma_2*sigma_2)*T \
+ sigma_2 * np.sqrt(T) * Z2 )
Spath = [S_1, S_2]
# Arithmetic mean
arithMean = np.mean(Spath)
arithPayoff.append(np.exp(-r*T) * max(K - arithMean, 0))
# Geometric mean
if control:
geoMean = np.exp( 0.5 * np.sum(np.log(Spath)))
geoPayoff.append(np.exp(-r*T) * max(K - geoMean, 0))
if control:
covXY = np.mean(np.multiply(arithPayoff,geoPayoff)) \
- np.mean(arithPayoff) * np.mean(geoPayoff)
theta = covXY / np.var(geoPayoff)
Z = arithPayoff + theta * (geo - geoPayoff)
Zmean = np.mean(Z)
Zstd = np.std(Z)
return Zmean-1.96*Zstd/np.sqrt(m), Zmean+1.96*Zstd/np.sqrt(m)
else:
Pmean = np.mean(arithPayoff)
Pstd = np.std(arithPayoff)
return Pmean-1.96*Pstd/np.sqrt(m), Pmean+1.96*Pstd/np.sqrt(m)
'''
r = 0.05
T = 3
S = 100
m = 100000
# Arith_Call_Option(S_0, sigma, r, T, K, n, m, control, seed):
print("Arith_Call_Option: no control")
print( Arith_Call_Option(S, 0.3, r, T, 100, 50, m, False, 10) )
print( Arith_Call_Option(S, 0.3, r, T, 100, 100, m, False, 10) )
print( Arith_Call_Option(S, 0.4, r, T, 100, 50, m, False, 10) )
print("Arith_Call_Option:")
print( Arith_Call_Option(S, 0.3, r, T, 100, 50, m, True, 10) )
print( Arith_Call_Option(S, 0.3, r, T, 100, 100, m, True, 10) )
print( Arith_Call_Option(S, 0.4, r, T, 100, 50, m, True, 10) )
# Arith_Put_Option(S_0, sigma, r, T, K, n, m, control, seed):
print("Arith_Put_Option: no control")
print( Arith_Put_Option(S, 0.3, r, T, 100, 50, m, False, 10) )
print( Arith_Put_Option(S, 0.3, r, T, 100, 100, m, False, 10) )
print( Arith_Put_Option(S, 0.4, r, T, 100, 50, m, False, 10) )
print("Arith_Put_Option:")
print( Arith_Put_Option(S, 0.3, r, T, 100, 50, m, True, 10) )
print( Arith_Put_Option(S, 0.3, r, T, 100, 100, m, True, 10) )
print( Arith_Put_Option(S, 0.4, r, T, 100, 50, m, True, 10) )
# # Arith_Call_Basket(S_0_1, S_0_2, sigma_1, sigma_2,
# # r, T, K, rho, m, control, seed)
print("Arith_Call_Basket: no control")
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 100, 0.5, m, False, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 100, 0.9, m, False, 10))
print(Arith_Call_Basket(S, S, 0.1, 0.3, r, T, 100, 0.5, m, False, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 80, 0.5, m, False, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 120, 0.5, m, False, 10))
print(Arith_Call_Basket(S, S, 0.5, 0.5, r, T, 100, 0.5, m, False, 10))
print("Arith_Call_Basket:")
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 100, 0.5, m, True, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 100, 0.9, m, True, 10))
print(Arith_Call_Basket(S, S, 0.1, 0.3, r, T, 100, 0.5, m, True, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 80, 0.5, m, True, 10))
print(Arith_Call_Basket(S, S, 0.3, 0.3, r, T, 120, 0.5, m, True, 10))
print(Arith_Call_Basket(S, S, 0.5, 0.5, r, T, 100, 0.5, m, True, 10))
# # Arith_Call_Basket(S_0_1, S_0_2, sigma_1, sigma_2,
# # r, T, K, rho, m, control, seed)
print("Arith_Put_Basket: no control")
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 100, 0.5, m, False, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 100, 0.9, m, False, 10))
print(Arith_Put_Basket(S, S, 0.1, 0.3, r, T, 100, 0.5, m, False, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 80, 0.5, m, False, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 120, 0.5, m, False, 10))
print(Arith_Put_Basket(S, S, 0.5, 0.5, r, T, 100, 0.5, m, False, 10))
print("Arith_Put_Basket:")
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 100, 0.5, m, True, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 100, 0.9, m, True, 10))
print(Arith_Put_Basket(S, S, 0.1, 0.3, r, T, 100, 0.5, m, True, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 80, 0.5, m, True, 10))
print(Arith_Put_Basket(S, S, 0.3, 0.3, r, T, 120, 0.5, m, True, 10))
print(Arith_Put_Basket(S, S, 0.5, 0.5, r, T, 100, 0.5, m, True, 10))
''' | 41.66383 | 89 | 0.573179 | 1,718 | 9,791 | 3.151339 | 0.064028 | 0.022165 | 0.070927 | 0.020687 | 0.969523 | 0.958441 | 0.955855 | 0.955855 | 0.955855 | 0.954562 | 0 | 0.074732 | 0.22919 | 9,791 | 235 | 90 | 41.66383 | 0.642639 | 0.073026 | 0 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03252 | false | 0 | 0.03252 | 0 | 0.130081 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81ec74cbd7697c66757294a770050c9aeef2bd87 | 66 | py | Python | tests/test_nbtutor.py | ouseful-PR/nbtutor | 07798a044cf6e1fd4eaac2afddeef3e13348dbcd | ["BSD-3-Clause"] | 1 | 2018-12-10T10:31:05.000Z | 2018-12-10T10:31:05.000Z | tests/test_nbtutor.py | betatim/nbtutor | 07798a044cf6e1fd4eaac2afddeef3e13348dbcd | ["BSD-3-Clause"] | null | null | null | tests/test_nbtutor.py | betatim/nbtutor | 07798a044cf6e1fd4eaac2afddeef3e13348dbcd | ["BSD-3-Clause"] | null | null | null |
def test_main():
import nbtutor
# TODO Proper test suite
| 13.2 | 28 | 0.666667 | 9 | 66 | 4.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 66 | 4 | 29 | 16.5 | 0.895833 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c31b53a032e6e9961e3a02679aa905a97fb8901b | 31 | py | Python | datasets/mmdet_tusimple/mmdet/ops/bbox_dis/__init__.py | Jinming-Su/SGNet | fcf35edaf332c1a4e2713acad5a0fc0e21509c3e | ["MIT"] | 13 | 2021-10-15T15:14:45.000Z | 2022-03-09T00:22:55.000Z | datasets/mmdet_tusimple/mmdet/ops/bbox_dis/__init__.py | Jinming-Su/SGNet | fcf35edaf332c1a4e2713acad5a0fc0e21509c3e | ["MIT"] | 4 | 2021-10-17T09:04:20.000Z | 2022-03-25T06:43:00.000Z | datasets/mmdet_tusimple/mmdet/ops/bbox_dis/__init__.py | Jinming-Su/SGNet | fcf35edaf332c1a4e2713acad5a0fc0e21509c3e | ["MIT"] | 2 | 2021-11-17T11:31:35.000Z | 2021-11-29T06:50:35.000Z | from .bbox_dis import bbox_dis
| 15.5 | 30 | 0.83871 | 6 | 31 | 4 | 0.666667 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c3685a5b43d1010334787b651a9ab056e47c0e82 | 5051 | py | Python | template/spec/fixtures/grammar/syntax_test_python_functions-template.py | imgovind/language-legesher-python | 9a0d625a35bb44fc14f0d315cb38c4490853e339 | ["MIT"] | 10 | 2019-09-26T15:14:32.000Z | 2020-10-03T22:41:53.000Z | template/spec/fixtures/grammar/syntax_test_python_functions-template.py | imgovind/language-legesher-python | 9a0d625a35bb44fc14f0d315cb38c4490853e339 | ["MIT"] | 41 | 2019-05-18T01:12:39.000Z | 2021-11-05T03:46:11.000Z | template/spec/fixtures/grammar/syntax_test_python_functions-template.py | imgovind/language-legesher-python | 9a0d625a35bb44fc14f0d315cb38c4490853e339 | ["MIT"] | 13 | 2019-10-03T16:21:57.000Z | 2021-09-30T12:52:53.000Z | # SYNTAX TEST "source.python.legesher"
# it "tokenizes async function definitions"
{async} {def} test(param):
# <- meta.function.python.legesher storage.modifier.async.python.legesher
# ^^^ storage.type.function.python.legesher
# ^^^^ entity.name.function.python.legesher
{pass}
# it "tokenizes comments inside function parameters"
{def} test(arg, # comment')
# <- meta.function.python.legesher storage.type.function.python.legesher
# ^^^^ entity.name.function.python.legesher
# ^ punctuation.definition.parameters.begin.python.legesher
# ^^^^^^^^^^^^^^^^ meta.function.parameters.python.legesher
# ^^^ variable.parameter.function.python.legesher
# ^ punctuation.separator.parameters.python.legesher
# ^ comment.line.number-sign.python.legesher punctuation.definition.comment.python.legesher
# ^^^^^^^ comment.line.number-sign.python.legesher
):
{pass}
{def} __init__(
# <- meta.function.python.legesher storage.type.function.python.legesher
# ^^^^^^^^ entity.name.function.python.legesher support.function.magic.python.legesher
# ^ punctuation.definition.parameters.begin.python.legesher
self,
# ^^^^^ meta.function.parameters.python.legesher
# ^^^^ variable.parameter.function.python.legesher
# ^ punctuation.separator.parameters.python.legesher
codec, # comment
# ^^^^^^^^^^^^^^^^ meta.function.parameters.python.legesher
# ^^^^^ variable.parameter.function.python.legesher
# ^ punctuation.separator.parameters.python.legesher
# ^ comment.line.number-sign.python.legesher punctuation.definition.comment.python.legesher
# ^^^^^^^ comment.line.number-sign.python.legesher
config
# ^^^^^^ meta.function.parameters.python.legesher variable.parameter.function.python.legesher
# >> meta.function.python.legesher
):
# <- punctuation.definition.parameters.end.python.legesher
#^ punctuation.definition.function.begin.python.legesher
{pass}
# it "tokenizes a function definition with annotations"
{def} f(a: None, b: int = 3) -> int:
# <- meta.function.python.legesher storage.type.function.python.legesher
# ^ entity.name.function.python.legesher
# ^ punctuation.definition.parameters.begin.python.legesher
# ^^^^^^^^^^^^^^^^^^^ meta.function.parameters.python.legesher
# ^ variable.parameter.function.python.legesher
# ^ punctuation.separator.python.legesher
# ^^^^ storage.type.python.legesher
# ^ punctuation.separator.parameters.python.legesher
# ^ variable.parameter.function.python.legesher
# ^ punctuation.separator.python.legesher
# ^^^ storage.type.python.legesher
# ^ keyword.operator.assignment.python.legesher
# ^ constant.numeric.integer.decimal.python.legesher
# ^ punctuation.definition.parameters.end.python.legesher
# ^^ keyword.operator.function-annotation.python.legesher
# ^^^ storage.type.python.legesher
# ^ punctuation.definition.function.begin.python.legesher
{pass}
#
# # it "tokenizes complex function calls"
# torch.nn.BCELoss()(Variable(bayes_optimal_prob, 1, requires_grad={False}), Yvar).data[0]
# # ^^^^^^^^^ meta.method-call.python.legesher
# # ^^^^^^^ entity.name.function.python.legesher
# # ^ punctuation.definition.arguments.begin.bracket.round.python.legesher
# # ^ punctuation.definition.arguments.end.bracket.round.python.legesher
# # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ meta.function-call.python.legesher
# # ^ punctuation.definition.arguments.begin.bracket.round.python.legesher
# # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ meta.function-call.arguments.python.legesher
# # ^^^^^^^^ entity.name.function.python.legesher
# # ^ punctuation.definition.arguments.begin.bracket.round.python.legesher
# # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ meta.function-call.arguments.python.legesher
# # ^^^^^^^^^^^^^ variable.parameter.function.python.legesher
# # ^^^^^^^ constant.language.python.legesher
# # ^ punctuation.definition.arguments.end.bracket.round.python.legesher
# # ^ punctuation.separator.arguments.python.legesher
# # ^ punctuation.definition.arguments.end.bracket.round.python.legesher
# # ^ punctuation.separator.property.period.python.legesher
| 56.752809 | 149 | 0.585627 | 426 | 5,051 | 6.92723 | 0.185446 | 0.322603 | 0.194849 | 0.177906 | 0.828872 | 0.812606 | 0.809895 | 0.776008 | 0.717045 | 0.717045 | 0 | 0.000797 | 0.254405 | 5,051 | 88 | 150 | 57.397727 | 0.782793 | 0.931697 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.307692 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 12 |
6f0c9b25c131cae45317e9dd277804e6faa31841 | 9590 | py | Python | elodie/tests/media/text_test.py | mattca/elodie | 4ff4f25ed2fcd8c31d457d5c68a0b906181d971c | ["Apache-2.0"] | 964 | 2015-12-02T17:44:47.000Z | 2022-03-30T16:16:55.000Z | elodie/tests/media/text_test.py | mattca/elodie | 4ff4f25ed2fcd8c31d457d5c68a0b906181d971c | ["Apache-2.0"] | 395 | 2015-12-02T21:24:50.000Z | 2022-03-29T21:36:23.000Z | elodie/tests/media/text_test.py | mattca/elodie | 4ff4f25ed2fcd8c31d457d5c68a0b906181d971c | ["Apache-2.0"] | 145 | 2015-12-02T21:54:27.000Z | 2022-03-29T11:55:35.000Z | # -*- coding: utf-8
# Project imports
import os
import sys
from datetime import datetime
import shutil
import tempfile
import time
from nose.plugins.skip import SkipTest
sys.path.insert(0, os.path.abspath(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))))
sys.path.insert(0, os.path.abspath(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))
import helper
from elodie.media.base import Base
from elodie.media.text import Text
os.environ['TZ'] = 'GMT'
def test_text_extensions():
text = Text()
extensions = text.extensions
assert 'txt' in extensions
valid_extensions = Text.get_valid_extensions()
assert extensions == valid_extensions, valid_extensions
def test_get_original_name():
media = Text(helper.get_file('with-original-name.txt'))
original_name = media.get_original_name()
assert original_name == 'originalname.txt', original_name
def test_get_original_name_when_does_not_exist():
media = Text(helper.get_file('valid.txt'))
original_name = media.get_original_name()
assert original_name is None, original_name
def test_get_title():
text = Text(helper.get_file('valid.txt'))
text.get_metadata()
assert text.get_title() == 'sample title', text.get_title()
def test_get_default_coordinate():
text = Text(helper.get_file('valid.txt'))
text.get_metadata()
assert text.get_coordinate() == '51.521435', text.get_coordinate()
def test_get_coordinate_latitude():
text = Text(helper.get_file('valid.txt'))
text.get_metadata()
assert text.get_coordinate('latitude') == '51.521435', text.get_coordinate('latitude')
def test_get_coordinate_longitude():
text = Text(helper.get_file('valid.txt'))
text.get_metadata()
assert text.get_coordinate('longitude') == '0.162714', text.get_coordinate('longitude')
def test_get_date_taken():
text = Text(helper.get_file('valid.txt'))
text.get_metadata()
date_taken = text.get_date_taken()
assert date_taken == helper.time_convert((2016, 4, 7, 11, 15, 26, 3, 98, 0)), date_taken
def test_get_date_taken_from_invalid():
origin = helper.get_file('valid-without-header.txt')
text = Text(origin)
text.get_metadata()
date_taken = text.get_date_taken()
seconds_since_epoch = min(
os.path.getmtime(origin),
os.path.getctime(origin)
)
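# with no metadata header, the date taken falls back to the earlier of the file's modification and creation times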
expected_date_taken = time.gmtime(seconds_since_epoch)
assert date_taken == expected_date_taken, date_taken
def test_get_metadata_with_numeric_header():
# See gh-98 for details
text = Text(helper.get_file('valid-with-numeric-header.txt'))
# Should not throw error
# TypeError: argument of type 'int' is not iterable
metadata = text.get_metadata()
assert metadata['mime_type'] == 'text/plain'
def test_set_album():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid.txt'), origin)
text = Text(origin)
metadata = text.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents = f.read()
album_name = 'Test Album'
assert album_name != metadata['album']
status = text.set_album(album_name)
assert status == True, status
text_new = Text(origin)
metadata_new = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert album_name == metadata_new['album'], metadata_new
def test_set_date_taken():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid.txt'), origin)
text = Text(origin)
metadata = text.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents = f.read()
assert helper.time_convert((2013, 9, 30, 7, 6, 5, 0, 273, 0)) != metadata['date_taken'], metadata['date_taken']
status = text.set_date_taken(datetime(2013, 9, 30, 7, 6, 5))
assert status == True, status
text_new = Text(origin)
metadata_new = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert helper.time_convert((2013, 9, 30, 7, 6, 5, 0, 273, 0)) == metadata_new['date_taken'], metadata_new['date_taken']
def test_set_location():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid.txt'), origin)
text = Text(origin)
origin_metadata = text.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents = f.read()
# Verify that the original photo has different location info than what we
# will be setting and checking
assert not helper.isclose(origin_metadata['latitude'], 11.1111111111), origin_metadata['latitude']
assert not helper.isclose(origin_metadata['longitude'], 99.9999999999), origin_metadata['longitude']
status = text.set_location(11.1111111111, 99.9999999999)
assert status == True, status
text_new = Text(origin)
metadata = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert helper.isclose(metadata['latitude'], 11.1111111111), metadata['latitude']
def test_set_album_without_header():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid-without-header.txt'), origin)
text = Text(origin)
metadata = text.get_metadata()
with open(origin, 'r') as f:
contents = f.read()
album_name = 'Test Album'
assert album_name != metadata['album']
status = text.set_album(album_name)
assert status == True, status
text_new = Text(origin)
metadata_new = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert album_name == metadata_new['album'], metadata_new
def test_set_date_taken_without_header():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid-without-header.txt'), origin)
text = Text(origin)
metadata = text.get_metadata()
with open(origin, 'r') as f:
contents = f.read()
assert helper.time_convert((2013, 9, 30, 7, 6, 5, 0, 273, 0)) != metadata['date_taken'], metadata['date_taken']
status = text.set_date_taken(datetime(2013, 9, 30, 7, 6, 5))
assert status == True, status
text_new = Text(origin)
metadata_new = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert helper.time_convert((2013, 9, 30, 7, 6, 5, 0, 273, 0)) == metadata_new['date_taken'], metadata_new['date_taken']
def test_set_location_without_header():
temporary_folder, folder = helper.create_working_folder()
origin = '%s/text.txt' % folder
shutil.copyfile(helper.get_file('valid-without-header.txt'), origin)
text = Text(origin)
origin_metadata = text.get_metadata()
with open(origin, 'r') as f:
contents = f.read()
# Verify that the original photo has different location info than what we
# will be setting and checking
assert not helper.isclose(origin_metadata['latitude'], 11.1111111111), origin_metadata['latitude']
assert not helper.isclose(origin_metadata['longitude'], 99.9999999999), origin_metadata['longitude']
status = text.set_location(11.1111111111, 99.9999999999)
assert status == True, status
text_new = Text(origin)
metadata = text_new.get_metadata()
with open(origin, 'r') as f:
f.readline()
contents_new = f.read()
assert contents == contents_new, contents_new
shutil.rmtree(folder)
assert helper.isclose(metadata['latitude'], 11.1111111111), metadata['latitude']
def test_set_original_name():
temporary_folder, folder = helper.create_working_folder()
random_file_name = '%s.txt' % helper.random_string(10)
origin = '%s/%s' % (folder, random_file_name)
shutil.copyfile(helper.get_file('valid.txt'), origin)
text = Text(origin)
metadata = text.get_metadata()
text.set_original_name()
metadata_updated = text.get_metadata()
shutil.rmtree(folder)
assert metadata['original_name'] is None, metadata['original_name']
assert metadata_updated['original_name'] == random_file_name, metadata_updated['original_name']
def test_set_original_name_with_arg():
temporary_folder, folder = helper.create_working_folder()
random_file_name = '%s.txt' % helper.random_string(10)
origin = '%s/%s' % (folder, random_file_name)
shutil.copyfile(helper.get_file('valid.txt'), origin)
new_name = helper.random_string(15)
text = Text(origin)
metadata = text.get_metadata()
text.set_original_name(new_name)
metadata_updated = text.get_metadata()
shutil.rmtree(folder)
assert metadata['original_name'] is None, metadata['original_name']
assert metadata_updated['original_name'] == new_name, metadata_updated['original_name']
| 30.062696 | 131 | 0.691762 | 1,295 | 9,590 | 4.896525 | 0.113514 | 0.03091 | 0.034853 | 0.045419 | 0.833149 | 0.786154 | 0.774641 | 0.769279 | 0.769279 | 0.759029 | 0 | 0.03188 | 0.182273 | 9,590 | 318 | 132 | 30.157233 | 0.776715 | 0.033994 | 0 | 0.717703 | 0 | 0 | 0.084828 | 0.015885 | 0 | 0 | 0 | 0 | 0.196172 | 1 | 0.086124 | false | 0 | 0.047847 | 0 | 0.133971 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f22b3d7f36cc154c3315027dcc3e4427505d707 | 83,662 | py | Python | bce-python-sdk-0.8.34/baidubce/services/blb/blb_client.py | PickHeBin/2020-2-25 | fa8d9a9ce321c6d34ba5713d288fd16968de3672 | ["Apache-2.0"] | null | null | null | bce-python-sdk-0.8.34/baidubce/services/blb/blb_client.py | PickHeBin/2020-2-25 | fa8d9a9ce321c6d34ba5713d288fd16968de3672 | ["Apache-2.0"] | null | null | null | bce-python-sdk-0.8.34/baidubce/services/blb/blb_client.py | PickHeBin/2020-2-25 | fa8d9a9ce321c6d34ba5713d288fd16968de3672 | ["Apache-2.0"] | null | null | null |
# Copyright (c) 2014 Baidu.com, Inc. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions
# and limitations under the License.
"""
This module provides a client class for BLB.
"""
import copy
import json
import logging
import uuid
import sys
from baidubce import bce_base_client
from baidubce.auth import bce_v1_signer
from baidubce.http import bce_http_client
from baidubce.http import handler
from baidubce.http import http_methods
from baidubce import utils
from baidubce.utils import required
from baidubce import compat
if sys.version < '3':
    reload(sys)
    sys.setdefaultencoding('utf-8')
_logger = logging.getLogger(__name__)
class BlbClient(bce_base_client.BceBaseClient):
"""
BLB base sdk client
"""
version = b'/v1'
def __init__(self, config=None):
bce_base_client.BceBaseClient.__init__(self, config)
def _merge_config(self, config=None):
"""
:param config:
:type config: baidubce.BceClientConfiguration
:return:
"""
if config is None:
return self.config
else:
new_config = copy.copy(self.config)
new_config.merge_non_none_values(config)
return new_config
def _send_request(self, http_method, path,
body=None, headers=None, params=None,
config=None, body_parser=None):
config = self._merge_config(config)
if body_parser is None:
body_parser = handler.parse_json
if headers is None:
headers = {b'Accept': b'*/*',
b'Content-Type': b'application/json;charset=utf-8'}
return bce_http_client.send_request(
config, bce_v1_signer.sign, [handler.parse_error, body_parser],
http_method, path, body, headers, params)
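    # All public methods below funnel through _send_request, which merges
    # per-call config, signs the request with bce_v1_signer, and parses the
    # JSON body via handler.parse_json unless a custom body_parser is given.
    # Hedged sketch of a custom parser (raw_parser is hypothetical, not part
    # of this SDK):
    #
    #   def raw_parser(http_response, response):
    #       response.raw = http_response.read()  # keep the undecoded body
    #       return True
    #
    #   client._send_request(http_methods.GET, b'/v1/blb',
    #                        body_parser=raw_parser)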
@required(vpc_id=(bytes, str),
subnet_id=(bytes, str))
def create_loadbalancer(self, vpc_id, subnet_id, name=None,
desc=None, client_token=None, config=None):
"""
Create a LoadBalancer with the specified options.
:param name:
the name of LoadBalancer to create
:type name: string
:param desc:
The description of LoadBalancer
:type desc: string
        :param vpc_id:
            id of the vpc which the LoadBalancer belongs to
        :type vpc_id: string
        :param subnet_id:
            id of the subnet which the LoadBalancer belongs to
        :type subnet_id: string
:param client_token:
If the clientToken is not specified by the user, a random String
generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
if name is not None:
body['name'] = compat.convert_to_string(name)
if desc is not None:
body['desc'] = compat.convert_to_string(desc)
body['vpcId'] = compat.convert_to_string(vpc_id)
body['subnetId'] = compat.convert_to_string(subnet_id)
return self._send_request(http_methods.POST, path,
body=json.dumps(body), params=params,
config=config)
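    # Usage sketch (ids are hypothetical; my_config is assumed to be a
    # configured baidubce.BceClientConfiguration):
    #
    #   client = BlbClient(my_config)
    #   resp = client.create_loadbalancer(vpc_id='vpc-1234abcd',
    #                                     subnet_id='sbn-5678efgh',
    #                                     name='my-blb')
    #
    # Retrying with the same client_token should be idempotent, since the
    # token is what the service uses to de-duplicate create requests.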
def describe_loadbalancers(self, address=None, name=None, blb_id=None,
bcc_id=None, marker=None, max_keys=None,
config=None):
"""
Return a list of LoadBalancers
:param address:
Intranet service address in dotted decimal notation
:type address: string
:param name:
name of LoadBalancer to describe
:type name: string
:param blb_id:
id of LoadBalancer to describe
:type blb_id: string
        :param bcc_id:
            id of the bcc to which the LoadBalancers are bound
        :type bcc_id: string
        :param marker:
            The optional parameter marker, carried over from the original
            request, specifies where in the results to begin listing.
            If the marker is not specified, the listing begins from the
            first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of list
            results to return.
            The default value is 1000.
        :type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb')
params = {}
if address is not None:
params[b'address'] = address
if name is not None:
params[b'name'] = name
if blb_id is not None:
params[b'blbId'] = blb_id
if bcc_id is not None:
params[b'bccId'] = bcc_id
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
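    # Pagination sketch (hedged): marker/max_keys page through the results.
    # The attribute names on the response (blb_list, next_marker) are assumed
    # from the BLB list API shape, not guaranteed by this file:
    #
    #   marker = None
    #   while True:
    #       resp = client.describe_loadbalancers(marker=marker, max_keys=100)
    #       for blb in resp.blb_list:
    #           print(blb.name)
    #       marker = getattr(resp, 'next_marker', None)
    #       if not marker:
    #           break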
@required(blb_id=(bytes, str))
def describe_loadbalancer_detail(self, blb_id, config=None):
"""
        Return detailed information about the specified LoadBalancer
:param blb_id:
id of LoadBalancer to describe
:type blb_id: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id)
return self._send_request(http_methods.GET, path,
config=config)
    @required(blb_id=(bytes, str))
def update_loadbalancer(self, blb_id, name=None, desc=None,
client_token=None, config=None):
"""
        Modify the specified attributes of the LoadBalancer
        owned by the user.
        :param name:
            new name of the LoadBalancer
        :type name: string
        :param blb_id:
            id of the LoadBalancer to update
        :type blb_id: string
:param desc:
The description of LoadBalancer
:type desc: string
:param client_token:
If the clientToken is not specified by the user,
a random String generated by default algorithm
will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id)
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
if name is not None:
body['name'] = compat.convert_to_string(name)
if desc is not None:
body['desc'] = compat.convert_to_string(desc)
return self._send_request(http_methods.PUT, path, json.dumps(body),
params=params, config=config)
@required(blb_id=(bytes, str))
def delete_loadbalancer(self, blb_id, client_token=None, config=None):
"""
        Delete the LoadBalancer owned by the user.
        :param blb_id:
            id of the LoadBalancer to delete
        :type blb_id: string
:param client_token:
If the clientToken is not specified by the user,
a random String generated by default algorithm
will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id)
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
return self._send_request(http_methods.DELETE, path,
params=params, config=config)
@required(blb_id=(bytes, str),
listener_port=int,
backend_port=int,
scheduler=(bytes, str))
def create_tcp_listener(self, blb_id, listener_port,
backend_port, scheduler,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealthy_threshold=None,
healthy_threshold=None,
client_token=None, config=None):
"""
Create a tcp listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
        :param health_check_timeout_in_second:
            health check timeout
            :value 1-60, default: 3, unit: seconds
        :type health_check_timeout_in_second: int
        :param health_check_interval:
            health check interval
            :value 1-10, default: 3, unit: seconds
        :type health_check_interval: int
        :param unhealthy_threshold:
            unhealthy threshold: after how many consecutive health check
            failures the backend server is shielded
            :value 2-5, default: 3
        :type unhealthy_threshold: int
        :param healthy_threshold:
            healthy threshold: after how many consecutive successful health
            checks the backend server is put back into service
            :value 2-5, default: 3
        :type healthy_threshold: int
:param client_token:
If the clientToken is not specified by the user, a random String
generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'TCPlistener')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {
'listenerPort': listener_port,
'backendPort': backend_port,
'scheduler': compat.convert_to_string(scheduler)
}
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealthy_threshold is not None:
body['unhealthyThreshold'] = unhealthy_threshold
if healthy_threshold is not None:
body['healthyThreshold'] = healthy_threshold
return self._send_request(http_methods.POST, path,
body=json.dumps(body), params=params,
config=config)
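    # Usage sketch: expose backend port 8080 behind listener port 80 with a
    # slightly stricter health check (the blb id is hypothetical):
    #
    #   client.create_tcp_listener('lb-1234abcd', listener_port=80,
    #                              backend_port=8080,
    #                              scheduler='RoundRobin',
    #                              health_check_interval=5,
    #                              unhealthy_threshold=2)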
@required(blb_id=(bytes, str),
listener_port=int,
backend_port=int,
scheduler=(bytes, str),
health_check_string=(bytes, str))
def create_udp_listener(self, blb_id, listener_port, backend_port,
scheduler, health_check_string,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealthy_threshold=None,
healthy_threshold=None,
client_token=None, config=None):
"""
Create a udp listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
        :param health_check_string:
            the request string sent by the health check;
            the backend server must respond to it after receiving it
        :type health_check_string: string
        :param health_check_timeout_in_second:
            health check timeout
            :value 1-60, default: 3, unit: seconds
        :type health_check_timeout_in_second: int
        :param health_check_interval:
            health check interval
            :value 1-10, default: 3, unit: seconds
        :type health_check_interval: int
        :param unhealthy_threshold:
            unhealthy threshold: after how many consecutive health check
            failures the backend server is shielded
            :value 2-5, default: 3
        :type unhealthy_threshold: int
        :param healthy_threshold:
            healthy threshold: after how many consecutive successful health
            checks the backend server is put back into service
            :value 2-5, default: 3
        :type healthy_threshold: int
:param client_token:
If the clientToken is not specified by the user, a random String
generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'UDPlistener')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {
'listenerPort': listener_port,
'backendPort': backend_port,
'scheduler': compat.convert_to_string(scheduler),
'healthCheckString': compat.convert_to_string(health_check_string)
}
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealthy_threshold is not None:
body['unhealthyThreshold'] = unhealthy_threshold
if healthy_threshold is not None:
body['healthyThreshold'] = healthy_threshold
return self._send_request(http_methods.POST, path,
body=json.dumps(body), params=params,
config=config)
@required(blb_id=(bytes, str), listener_port=int,
backend_port=int, scheduler=(bytes, str))
def create_http_listener(self, blb_id, listener_port,
backend_port, scheduler,
keep_session=None, keep_session_type=None,
keep_session_duration=None,
keep_session_cookie_name=None,
x_forward_for=None,
health_check_type=None, health_check_port=None,
health_check_uri=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealthy_threshold=None,
healthy_threshold=None,
health_check_normal_status=None,
server_timeout=None, redirect_port=None,
client_token=None, config=None):
"""
Create a http listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection'
        :type scheduler: string
:param keep_session:
Whether to enable the session hold function,
            that is, the request sent by the same client will
            reach the same backend server
            :value true or false, default: false
:type keep_session: bool
:param keep_session_type:
The cookie handling method maintained by the session,
valid only if the session is held open
:value 'insert' or 'rewrite' default:insert
:type keep_session_type: string
:param keep_session_duration:
The time the cookie is kept in session (in seconds),
valid only if the session is held open
:value 1-15552000 default:3600
:type keep_session_duration: int
        :param keep_session_cookie_name:
            the name of the cookie to be overridden for session
            persistence, valid if and only if session persistence is
            enabled and keep_session_type="rewrite"
        :type keep_session_cookie_name: string
:param x_forward_for:
Whether to enable the real IP address of the client,
the backend server can obtain the real address of the client
through the X-Forwarded-For HTTP header.
:value true or false, default: False
:type x_forward_for: bool
:param health_check_type:
Health check protocol
:value 'HTTP' or 'TCP'
:type health_check_type: string
:param health_check_port:
Health check port, the default is the same as backend_port
:type health_check_port: int
:param health_check_uri:
Health check URI, default '/'.
Effective when the health check protocol is "HTTP"
:type health_check_uri: string
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default: 3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealthy_threshold:
The unhealthy threshold, that is,
how many consecutive health check failures,
shields the backend server.
:value 2-5, default: 3
:type unhealthy_threshold: int
:param healthy_threshold:
Health threshold, that is,
how many consecutive health checks are successful,
then re-use the back-end server
            :value 2-5, default: 3
        :type healthy_threshold: int
:param health_check_normal_status:
The HTTP status code when the health check is normal supports
a combination of five types of status codes,
such as "http_1xx|http_2xx",
Effective when the health check protocol is "HTTP"
:value default:http_2xx|http_3xx
:type health_check_normal_status:string
:param server_timeout:
Backend server maximum timeout (unit: second)
:value 1-3600, default: 30
:type server_timeout:int
:param redirect_port:
Forward the request received by this listener to the
HTTPS listener, which is specified by the HTTPS listener.
:type redirect_port:int
:param client_token:
If the clientToken is not specified by the user,
a random String generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPlistener')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {
'listenerPort': listener_port,
'backendPort': backend_port,
'scheduler': compat.convert_to_string(scheduler)}
if keep_session is not None:
body['keepSession'] = keep_session
if keep_session_type is not None:
body['keepSessionType'] = keep_session_type
if keep_session_duration is not None:
body['keepSessionDuration'] = keep_session_duration
if keep_session_cookie_name is not None:
body['keepSessionCookieName'] = keep_session_cookie_name
if x_forward_for is not None:
body['xForwardFor'] = x_forward_for
if health_check_type is not None:
body['healthCheckType'] = \
compat.convert_to_string(health_check_type)
if health_check_port is not None:
body['healthCheckPort'] = health_check_port
if health_check_uri is not None:
body['healthCheckURI'] = \
compat.convert_to_string(health_check_uri)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealthy_threshold is not None:
body['unhealthyThreshold'] = unhealthy_threshold
if healthy_threshold is not None:
body['healthyThreshold'] = healthy_threshold
if health_check_normal_status is not None:
body['healthCheckNormalStatus'] = \
compat.convert_to_string(health_check_normal_status)
if server_timeout is not None:
body['serverTimeout'] = server_timeout
if redirect_port is not None:
body['redirectPort'] = redirect_port
return self._send_request(http_methods.POST, path,
body=json.dumps(body), params=params,
config=config)
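    # Usage sketch: an HTTP listener with cookie-based session stickiness
    # and an HTTP health check (all values are illustrative):
    #
    #   client.create_http_listener('lb-1234abcd', listener_port=80,
    #                               backend_port=8080,
    #                               scheduler='RoundRobin',
    #                               keep_session=True,
    #                               keep_session_type='insert',
    #                               health_check_type='HTTP',
    #                               health_check_uri='/healthz')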
@required(blb_id=(bytes, str), listener_port=int,
backend_port=int, scheduler=(bytes, str), cert_ids=list)
def create_https_listener(self, blb_id, listener_port, backend_port,
scheduler, cert_ids, keep_session=None,
keep_session_type=None,
keep_session_duration=None,
keep_session_cookie_name=None,
x_forward_for=None, health_check_type=None,
health_check_port=None, health_check_uri=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None, health_threshold=None,
health_check_normal_status=None,
server_timeout=None, ie6_compatible=None,
encryption_type=None, encryption_protocols=None,
dual_auth=None, client_certIds=None,
client_token=None, config=None):
"""
Create a https listener rule with the specified options
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection'
        :type scheduler: string
:param cert_ids:
The certificate to be loaded by the listener.
:type cert_ids: List<String>
:param keep_session:
Whether to enable the session hold function,
that is, the request sent by the same client will reach the
same backend server
:value true or false, default: false
:type keep_session: bool
:param keep_session_type:
The cookie handling method maintained by the session,
valid only if the session is held open
:value 'insert' or 'rewrite', default:insert
:type keep_session_type: string
:param keep_session_duration:
The time the cookie is kept in session (in seconds),
valid only if the session is held open
:value 1-15552000, default:3600
:type keep_session_duration: int
        :param keep_session_cookie_name:
            the name of the cookie to be overridden for session
            persistence, valid if and only if session persistence is
            enabled and keep_session_type="rewrite"
        :type keep_session_cookie_name: string
:param x_forward_for:
Whether to enable the real IP address of the client,
the backend server can obtain the real address of the client
through the X-Forwarded-For HTTP header.
            :value true or false, default: false
:type x_forward_for: bool
:param health_check_type:
Health check protocol
:value 'HTTP' or 'TCP'
:type health_check_type: string
:param health_check_port:
Health check port, the default is the same as backend_port
:type health_check_port: int
:param health_check_uri:
Health check URI, default '/'.
Effective when the health check protocol is "HTTP"
:type health_check_uri: string
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default:3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealth_threshold:
The unhealthy threshold, that is, how many consecutive health
check failures, shields the backend server.
:value 2-5, default: 3
:type unhealth_threshold: int
:param health_threshold:
Health threshold, that is, how many consecutive health checks
are successful, then re-use the back-end server
:value:2-5, default: 3
:type health_threshold: int
:param health_check_normal_status:
The HTTP status code when the health check is normal
supports a combination of five types of status codes,
such as "http_1xx|http_2xx", Effective when the health check
protocol is "HTTP"
:value default: http_2xx|http_3xx
:type health_check_normal_status: string
:param server_timeout:
Backend server maximum timeout (unit: second)
:value 1-3600, default: 30
:type server_timeout: int
:param ie6_compatible:
compatible with IE6 HTTPS request
(the protocol format is earlier SSL3.0, the security is poor)
:value true or false, default: true
:type ie6_compatible: bool
:param encryption_type:
Encryption options, support three types:
compatibleIE or incompatibleIE or userDefind,
corresponding to:
IE-compatible encryption or disabled unsecure encryption
or custom encryption,
when encryptionType is valid and legitimate,
ie6Compatible field transfer value will not take effect
        :type encryption_type: string
:param encryption_protocols:
When the encryptionType value is userDefind,
the list of protocol types is a string list composed of four protocols:
"sslv3", "tlsv10", "tlsv11", "tlsv12".
        :type encryption_protocols: list
:param dual_auth:
Whether to Open Two-way Authentication,
default:false
:type dual_auth: boolean
:param client_certIds:
When dualAuth is true, the loaded client certificate chain
:type client_certIds: list
:param client_token:
If the clientToken is not specified by the user,
a random String generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPSlistener')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {
'listenerPort': listener_port, 'backendPort': backend_port,
'scheduler': compat.convert_to_string(scheduler),
'certIds': cert_ids}
if keep_session is not None:
body['keepSession'] = keep_session
if keep_session_type is not None:
body['keepSessionType'] = \
compat.convert_to_string(keep_session_type)
if keep_session_duration is not None:
body['keepSessionDuration'] = keep_session_duration
if keep_session_cookie_name is not None:
body['keepSessionCookieName'] = keep_session_cookie_name
if x_forward_for is not None:
body['xForwardFor'] = x_forward_for
if health_check_type is not None:
body['healthCheckType'] = \
compat.convert_to_string(health_check_type)
if health_check_port is not None:
body['healthCheckPort'] = health_check_port
if health_check_uri is not None:
body['healthCheckURI'] = \
compat.convert_to_string(health_check_uri)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
if health_check_normal_status is not None:
body['healthCheckNormalStatus'] = \
compat.convert_to_string(health_check_normal_status)
if server_timeout is not None:
body['serverTimeout'] = server_timeout
if ie6_compatible is not None:
body['ie6Compatible'] = ie6_compatible
if encryption_type is not None:
body['encryptionType'] = \
compat.convert_to_string(encryption_type)
if encryption_protocols is not None:
body['encryptionProtocols'] = encryption_protocols
if dual_auth is not None:
body['dualAuth'] = dual_auth
if client_certIds is not None:
body['clientCertIds'] = client_certIds
return self._send_request(http_methods.POST, path,
body=json.dumps(body),
params=params, config=config)
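    # Usage sketch: terminate TLS on port 443 with a pre-uploaded certificate
    # and pass the client IP through X-Forwarded-For (ids are hypothetical):
    #
    #   client.create_https_listener('lb-1234abcd', listener_port=443,
    #                                backend_port=8080,
    #                                scheduler='LeastConnection',
    #                                cert_ids=['cert-abcd1234'],
    #                                x_forward_for=True)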
@required(blb_id=(bytes, str), listener_port=int,
backend_port=int, scheduler=(bytes, str), cert_ids=list)
def create_ssl_listener(self, blb_id, listener_port, backend_port,
scheduler, cert_ids,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None, health_threshold=None,
ie6_compatible=None, encryption_type=None,
encryption_protocols=None,
dual_auth=None, client_certIds=None,
client_token=None, config=None):
"""
        Create an ssl listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection'
        :type scheduler: string
        :param cert_ids:
            The SSL certificate to be loaded by the listener.
            Currently an SSL listener can only bind one certificate.
        :type cert_ids: List<String>
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default:3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealth_threshold:
The unhealthy threshold, that is, how many consecutive health
check failures, shields the backend server.
:value 2-5, default: 3
:type unhealth_threshold: int
:param health_threshold:
Health threshold, that is, how many consecutive health checks
are successful, then re-use the back-end server
:value:2-5, default: 3
:type health_threshold: int
:param ie6_compatible:
compatible with IE6 HTTPS request
(the protocol format is earlier SSL3.0, the security is poor)
:value true or false, default: true
:type ie6_compatible: bool
:param encryption_type:
Encryption options, support three types:
compatibleIE or incompatibleIE or userDefind,
corresponding to:
IE-compatible encryption or disabled unsecure encryption
or custom encryption,
when encryptionType is valid and legitimate,
ie6Compatible field transfer value will not take effect
        :type encryption_type: string
:param encryption_protocols:
When the encryptionType value is userDefind,
the list of protocol types is a string list composed of four protocols:
"sslv3", "tlsv10", "tlsv11", "tlsv12".
        :type encryption_protocols: list
:param dual_auth:
Whether to Open Two-way Authentication,
default:false
:type dual_auth: boolean
:param client_certIds:
When dualAuth is true, the loaded client certificate chain
:type client_certIds: list
:param client_token:
If the clientToken is not specified by the user,
a random String generated by default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'SSLlistener')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {
'listenerPort': listener_port, 'backendPort': backend_port,
'scheduler': compat.convert_to_string(scheduler),
'certIds': cert_ids}
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
if ie6_compatible is not None:
body['ie6Compatible'] = ie6_compatible
if encryption_type is not None:
body['encryptionType'] = \
compat.convert_to_string(encryption_type)
if encryption_protocols is not None:
body['encryptionProtocols'] = encryption_protocols
if dual_auth is not None:
body['dualAuth'] = dual_auth
if client_certIds is not None:
body['clientCertIds'] = client_certIds
        # needed for testing: if healthCheckType is not set here, the
        # service returns an internal server error
        # body['healthCheckType'] = "TCP"
return self._send_request(http_methods.POST, path,
body=json.dumps(body),
params=params, config=config)
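    # Usage sketch: an SSL listener limited to newer TLS versions via the
    # custom encryption option; 'userDefind' is the literal value the API
    # expects per the docstring above (ids are hypothetical):
    #
    #   client.create_ssl_listener('lb-1234abcd', listener_port=636,
    #                              backend_port=389,
    #                              scheduler='RoundRobin',
    #                              cert_ids=['cert-abcd1234'],
    #                              encryption_type='userDefind',
    #                              encryption_protocols=['tlsv11', 'tlsv12'])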
@required(blb_id=(bytes, str))
def describe_tcp_listener(self, blb_id, listener_port=None,
marker=None, max_keys=None, config=None):
"""
        Get tcp listeners identified by blb_id.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            The listener port to query
        :type listener_port: int
        :param marker:
            The optional parameter marker, carried over from the
            original request, specifies where in the results to begin
            listing. If the marker is not specified, the listing begins
            from the first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of
            list results to return.
            The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'TCPlistener')
params = {}
if listener_port is not None:
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
@required(blb_id=(bytes, str))
def describe_udp_listener(self, blb_id, listener_port=None, marker=None,
max_keys=None, config=None):
"""
        Get udp listeners identified by blb_id.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            The listener port to query
        :type listener_port: int
        :param marker:
            The optional parameter marker, carried over from the
            original request, specifies where in the results to begin
            listing. If the marker is not specified, the listing begins
            from the first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of
            list results to return.
            The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'UDPlistener')
params = {}
if listener_port is not None:
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
@required(blb_id=(bytes, str))
def describe_http_listener(self, blb_id, listener_port=None,
marker=None, max_keys=None, config=None):
"""
        Get http listeners identified by blb_id.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            The listener port to query
        :type listener_port: int
        :param marker:
            The optional parameter marker, carried over from the
            original request, specifies where in the results to begin
            listing. If the marker is not specified, the listing begins
            from the first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of
            list results to return.
            The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPlistener')
params = {}
if listener_port is not None:
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
@required(blb_id=(bytes, str))
def describe_https_listener(self, blb_id, listener_port=None,
marker=None, max_keys=None, config=None):
"""
        Get https listeners identified by blb_id.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            The listener port to query
        :type listener_port: int
        :param marker:
            The optional parameter marker, carried over from the
            original request, specifies where in the results to begin
            listing. If the marker is not specified, the listing begins
            from the first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of
            list results to return.
            The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPSlistener')
params = {}
if listener_port is not None:
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
@required(blb_id=(bytes, str))
def describe_ssl_listener(self, blb_id, listener_port=None,
marker=None, max_keys=None, config=None):
"""
        Get ssl listeners identified by blb_id.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            The listener port to query
        :type listener_port: int
        :param marker:
            The optional parameter marker, carried over from the
            original request, specifies where in the results to begin
            listing. If the marker is not specified, the listing begins
            from the first result.
        :type marker: string
        :param max_keys:
            The optional parameter that specifies the max number of
            list results to return.
            The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'SSLlistener')
params = {}
if listener_port is not None:
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path,
params=params, config=config)
@required(blb_id=(bytes, str),
listener_port=int)
def update_tcp_listener(self, blb_id, listener_port,
backend_port=None, scheduler=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None,
health_threshold=None,
config=None):
"""
        Update a tcp listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
        :param health_check_timeout_in_second:
            health check timeout
            :value 1-60, default: 3, unit: seconds
        :type health_check_timeout_in_second: int
        :param health_check_interval:
            health check interval
            :value 1-10, default: 3, unit: seconds
        :type health_check_interval: int
        :param unhealth_threshold:
            unhealthy threshold: after how many consecutive health check
            failures the backend server is shielded
            :value 2-5, default: 3
        :type unhealth_threshold: int
        :param health_threshold:
            healthy threshold: after how many consecutive successful health
            checks the backend server is put back into service
            :value 2-5, default: 3
        :type health_threshold: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'TCPlistener')
params = {}
params[b'listenerPort'] = listener_port
body = {}
if backend_port is not None:
body['backendPort'] = backend_port
if scheduler is not None:
body['scheduler'] = compat.convert_to_string(scheduler)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
return self._send_request(http_methods.PUT, path,
body=json.dumps(body), params=params,
config=config)
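    # Usage sketch: listener_port identifies the listener to update, and
    # only the arguments passed as non-None end up in the request body, so
    # this tightens health checking without touching the scheduler:
    #
    #   client.update_tcp_listener('lb-1234abcd', listener_port=80,
    #                              health_check_interval=2,
    #                              unhealth_threshold=2)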
@required(blb_id=(bytes, str),
listener_port=int,
backend_port=int)
def update_udp_listener(self, blb_id, listener_port, backend_port=None,
scheduler=None, health_check_string=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None,
health_threshold=None,
config=None):
"""
        Update a udp listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
        :param health_check_string:
            the request string sent by the health check; the backend
            server must respond to it after receiving it; standard
            escaping is supported
        :type health_check_string: string
        :param health_check_timeout_in_second:
            health check timeout
            :value 1-60, default: 3, unit: seconds
        :type health_check_timeout_in_second: int
        :param health_check_interval:
            health check interval
            :value 1-10, default: 3, unit: seconds
        :type health_check_interval: int
        :param unhealth_threshold:
            unhealthy threshold: after how many consecutive health check
            failures the backend server is shielded
            :value 2-5, default: 3
        :type unhealth_threshold: int
        :param health_threshold:
            healthy threshold: after how many consecutive successful health
            checks the backend server is put back into service
            :value 2-5, default: 3
        :type health_threshold: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'UDPlistener')
params = {}
params[b'listenerPort'] = listener_port
body = {}
if backend_port is not None:
body['backendPort'] = backend_port
if scheduler is not None:
body['scheduler'] = compat.convert_to_string(scheduler)
if health_check_string is not None:
body['healthCheckString'] = \
compat.convert_to_string(health_check_string)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
return self._send_request(http_methods.PUT, path,
body=json.dumps(body),
params=params, config=config)
@required(blb_id=(bytes, str),
listener_port=int)
def update_http_listener(self, blb_id, listener_port, backend_port=None,
scheduler=None, keep_session=None,
keep_session_type=None,
keep_session_duration=None,
keep_session_cookie_name=None,
x_forward_for=None,
health_check_type=None, health_check_port=None,
health_check_uri=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None, health_threshold=None,
health_check_normal_status=None,
server_timeout=None,
redirect_port=None, config=None):
"""
update a http listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
:param keep_session:
Whether to enable the session hold function, that is,
the request sent by the same client will reach the
same backend server
:value true or false, default:false
:type keep_session: bool
:param keep_session_type:
The cookie handling method maintained by the session,
valid only if the session is held open
:value 'insert' or 'rewrite', default:insert
:type keep_session_type: string
:param keep_session_duration:
The time the cookie is kept in session (in seconds),
valid only if the session is held open
:value 1-15552000, default:3600
:type keep_session_duration: int
        :param keep_session_cookie_name:
            the name of the cookie to be overridden for session
            persistence, valid if and only if session persistence is
            enabled and keep_session_type="rewrite"
        :type keep_session_cookie_name: string
:param x_forward_for:
Whether to enable the real IP address of the client,
the backend server can obtain the real address of the
client through the X-Forwarded-For HTTP header.
            :value true or false, default: false
:type x_forward_for: bool
:param health_check_type:
Health check protocol
:value 'HTTP' or 'TCP'
:type health_check_type: string
:param health_check_port:
Health check port, the default is the same as backend_port
:type health_check_port: int
:param health_check_uri:
Health check URI, default '/'.
Effective when the health check protocol is "HTTP"
:type health_check_uri: string
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default: 3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealth_threshold:
The unhealthy threshold, that is, how many consecutive health
check failures, shields the backend server.
:value 2-5, default: 3
:type unhealth_threshold: int
:param health_threshold:
Health threshold, that is, how many consecutive health checks
are successful, then re-use the back-end server
:value:2-5, default: 3
:type health_threshold: int
:param health_check_normal_status:
The HTTP status code when the health check is normal supports
a combination of five types of status codes,
such as "http_1xx|http_2xx", Effective when the health check
protocol is "HTTP"
:value default: http_2xx|http_3xx
:type health_check_normal_status: string
:param server_timeout:
Backend server maximum timeout (unit: second)
:value 1-3600, default: 30
:type server_timeout: int
:param redirect_port:
Forward the request received by this listener to the HTTPS
listener, which is specified by the HTTPS listener.
:type redirect_port: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPlistener')
params = {}
params[b'listenerPort'] = listener_port
body = {}
if backend_port is not None:
body['backendPort'] = backend_port
if scheduler is not None:
body['scheduler'] = compat.convert_to_string(scheduler)
if keep_session is not None:
body['keepSession'] = keep_session
if keep_session_type is not None:
body['keepSessionType'] = \
compat.convert_to_string(keep_session_type)
if keep_session_duration is not None:
body['keepSessionDuration'] = keep_session_duration
if keep_session_cookie_name is not None:
body['keepSessionCookieName'] = keep_session_cookie_name
if x_forward_for is not None:
body['xForwardFor'] = x_forward_for
if health_check_type is not None:
body['healthCheckType'] = \
compat.convert_to_string(health_check_type)
if health_check_port is not None:
body['healthCheckPort'] = health_check_port
if health_check_uri is not None:
body['healthCheckURI'] = \
compat.convert_to_string(health_check_uri)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
if health_check_normal_status is not None:
body['healthCheckNormalStatus'] = \
compat.convert_to_string(health_check_normal_status)
if server_timeout is not None:
body['serverTimeout'] = server_timeout
if redirect_port is not None:
body['redirectPort'] = redirect_port
return self._send_request(http_methods.PUT, path,
body=json.dumps(body),
params=params, config=config)
@required(blb_id=(bytes, str), listener_port=int)
def update_https_listener(self, blb_id, listener_port,
backend_port=None,
scheduler=None, keep_session=None,
keep_session_type=None,
keep_session_duration=None,
keep_session_cookie_name=None,
x_forward_for=None, health_check_type=None,
health_check_port=None, health_check_uri=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None, health_threshold=None,
health_check_normal_status=None,
server_timeout=None,
cert_ids=None, ie6_compatible=None,
config=None):
"""
update a https listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection' or 'Hash'
        :type scheduler: string
:param keep_session:
Whether to enable the session hold function, that is, the request
sent by the same client will reach the same backend server
:value true or false, default: false
:type keep_session: bool
:param keep_session_type:
The cookie handling method maintained by the session,
valid only if the session is held open
:value 'insert' or 'rewrite', default: insert
:type keep_session_type: string
:param keep_session_duration:
The time the cookie is kept in session (in seconds),
valid only if the session is held open
:value 1-15552000, default:3600
:type keep_session_duration: int
        :param keep_session_cookie_name:
            the name of the cookie to be overridden for session
            persistence, valid if and only if session persistence is
            enabled and keep_session_type="rewrite"
        :type keep_session_cookie_name: string
:param x_forward_for:
Whether to enable the real IP address of the client,
the backend server can obtain the real address of the client
through the X-Forwarded-For HTTP header.
:value true or false, default: False
:type x_forward_for: bool
:param health_check_type:
Health check protocol
:value 'HTTP' or 'TCP'
:type health_check_type: string
:param health_check_port:
Health check port, the default is the same as backend_port
:type health_check_port: int
:param health_check_uri:
Health check URI, default '/'.
Effective when the health check protocol is "HTTP"
:type health_check_uri: string
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default: 3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealth_threshold:
The unhealthy threshold, that is, how many consecutive health
check failures, shields the backend server.
:value 2-5, default: 3
:type unhealth_threshold: int
:param health_threshold:
Health threshold, that is, how many consecutive health checks
are successful, then re-use the back-end server
:value:2-5, default: 3
:type health_threshold: int
:param health_check_normal_status:
The HTTP status code when the health check is normal supports
a combination of five types of status codes,
such as "http_1xx|http_2xx", Effective when the health check
protocol is "HTTP"
:value default: http_2xx|http_3xx
:type health_check_normal_status: string
:param server_timeout:
Backend server maximum timeout (unit: second)
:value 1-3600, default: 30
:type server_timeout: int
:param cert_ids:
The SSL certificate to be loaded by the listener.
Currently HTTPS listeners can only bind one SSL certificate.
:type cert_ids:List<String>
:param ie6_compatible:
Is it compatible with IE6 HTTPS request
(the protocol format is earlier SSL3.0, the security is poor)
:value true or false, default: true
:type ie6_compatible: bool
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'HTTPSlistener')
params = {}
params[b'listenerPort'] = listener_port
body = {}
if backend_port is not None:
body['backendPort'] = backend_port
if scheduler is not None:
body['scheduler'] = compat.convert_to_string(scheduler)
if keep_session is not None:
body['keepSession'] = keep_session
if keep_session_type is not None:
body['keepSessionType'] = \
compat.convert_to_string(keep_session_type)
if keep_session_duration is not None:
body['keepSessionDuration'] = keep_session_duration
if keep_session_cookie_name is not None:
body['keepSessionCookieName'] = keep_session_cookie_name
if x_forward_for is not None:
body['xForwardFor'] = x_forward_for
if health_check_type is not None:
body['healthCheckType'] = \
compat.convert_to_string(health_check_type)
if health_check_port is not None:
body['healthCheckPort'] = health_check_port
if health_check_uri is not None:
body['healthCheckURI'] = \
compat.convert_to_string(health_check_uri)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
if health_check_normal_status is not None:
body['healthCheckNormalStatus'] = \
compat.convert_to_string(health_check_normal_status)
if server_timeout is not None:
body['serverTimeout'] = server_timeout
if cert_ids is not None:
body['certIds'] = cert_ids
if ie6_compatible is not None:
body['ie6Compatible'] = ie6_compatible
return self._send_request(http_methods.PUT, path,
body=json.dumps(body), params=params,
config=config)
@required(blb_id=(bytes, str), listener_port=int)
def update_ssl_listener(self, blb_id, listener_port,
backend_port=None, scheduler=None,
health_check_timeout_in_second=None,
health_check_interval=None,
unhealth_threshold=None,
health_threshold=None, cert_ids=None,
ie6_compatible=None,
encryption_type=None,
encryption_protocols=None,
dual_auth=None, client_certIds=None,
config=None):
"""
update a ssl listener rule with the specified options.
        :param blb_id:
            the id of the blb which the listener works on
        :type blb_id: string
        :param listener_port:
            port to be listened on by the listener
            :value 1-65535
        :type listener_port: int
        :param backend_port:
            port listened on by the backend server
            :value 1-65535
        :type backend_port: int
        :param scheduler:
            balancing algorithm
            :value 'RoundRobin' or 'LeastConnection'
        :type scheduler: string
:param health_check_timeout_in_second:
Health check timeout (unit: second)
:value 1-60, default:3
:type health_check_timeout_in_second: int
:param health_check_interval:
Health check interval (unit: second)
:value 1-10, default: 3
:type health_check_interval: int
:param unhealth_threshold:
The unhealthy threshold, that is, how many consecutive health
check failures, shields the backend server.
:value 2-5, default: 3
:type unhealth_threshold: int
:param health_threshold:
Health threshold, that is, how many consecutive health checks
are successful, then re-use the back-end server
:value:2-5, default: 3
:type health_threshold: int
        :param cert_ids:
            The SSL certificate to be loaded by the listener.
            Currently an SSL listener can only bind one certificate.
        :type cert_ids: List<String>
:param ie6_compatible:
compatible with IE6 HTTPS request
(the protocol format is earlier SSL3.0, the security is poor)
:value true or false, default: true
:type ie6_compatible: bool
:param encryption_type:
Encryption options, support three types:
compatibleIE or incompatibleIE or userDefind,
corresponding to:
IE-compatible encryption or disabled unsecure encryption or
custom encryption,
when encryptionType is valid and legitimate,
ie6Compatible field transfer value will not take effect
        :type encryption_type: string
:param encryption_protocols:
When the encryptionType value is userDefind,
the list of protocol types is a string list composed of four protocols:
"sslv3", "tlsv10", "tlsv11", "tlsv12".
        :type encryption_protocols: list
:param dual_auth:
Whether to Open Two-way Authentication,
default:false
:type dual_auth: boolean
:param client_certIds:
When dualAuth is true, the loaded client certificate chain
:type client_certIds: list
:param config:
:type config: baidubce.BceClientConfiguration
:return
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'SSLlistener')
params = {}
params[b'listenerPort'] = listener_port
body = {}
if backend_port is not None:
body['backendPort'] = backend_port
if scheduler is not None:
body['scheduler'] = compat.convert_to_string(scheduler)
if health_check_timeout_in_second is not None:
body['healthCheckTimeoutInSecond'] = \
health_check_timeout_in_second
if health_check_interval is not None:
body['healthCheckInterval'] = health_check_interval
if unhealth_threshold is not None:
body['unhealthyThreshold'] = unhealth_threshold
if health_threshold is not None:
body['healthyThreshold'] = health_threshold
if cert_ids is not None:
body['certIds'] = cert_ids
if ie6_compatible is not None:
body['ie6Compatible'] = ie6_compatible
if encryption_type is not None:
body['encryptionType'] = \
compat.convert_to_string(encryption_type)
if encryption_protocols is not None:
body['encryptionProtocols'] = encryption_protocols
if dual_auth is not None:
body['dualAuth'] = dual_auth
if client_certIds is not None:
body['clientCertIds'] = client_certIds
return self._send_request(http_methods.PUT, path,
body=json.dumps(body),
params=params, config=config)
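    # A minimal usage sketch for the SSL-listener update above (assuming the
    # enclosing client is a baidubce BlbClient and this method is named
    # update_ssl_listener; the blb id below is a hypothetical placeholder):
    #
    #     client.update_ssl_listener(blb_id='lb-xxxxxxxx', listener_port=443,
    #                                scheduler='RoundRobin',
    #                                encryption_type='userDefind',
    #                                encryption_protocols=['tlsv11', 'tlsv12'])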
@required(blb_id=(bytes, str),
portList=list)
def delete_listeners(self, blb_id, portList, client_token=None, config=None):
"""
        Release the listeners under the specified LoadBalancer;
        the listeners are identified by their listening ports.
:param blb_id:
id of LoadBalancer
:type blb_id:string
:param portList:
The ports of listeners to be released
:type portList:list<int>
:param client_token:
            If the clientToken is not specified by the user, a random string
            generated by the default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'listener')
params = {}
params[b'batchdelete'] = None
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
body['portList'] = portList
return self._send_request(http_methods.PUT, path,
body=json.dumps(body), params=params,
config=config)
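    # A minimal usage sketch (the blb id and port values below are
    # hypothetical placeholders):
    #
    #     client.delete_listeners(blb_id='lb-xxxxxxxx', portList=[80, 443])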
"""
BackendServer API
"""
@required(blb_id=(bytes, str),
backend_server_list=list)
def add_backend_servers(self, blb_id, backend_server_list,
client_token=None, config=None):
"""
        Add backend servers to the specified LoadBalancer;
        batch addition is supported.
:param blb_id:
id of LoadBalancer
:type blb_id:string
        :param backend_server_list:
            List of backend servers to be added
        :type backend_server_list: List<BackendServerModel>
            BackendServerModel {
                :param instanceId:
                    id of the backend server
                :type instanceId: string
                :param weight:
                    Backend server weight, value range [0, 100];
                    weight 0 means no traffic is forwarded to
                    the backend server
                :type weight: int
            }
:param client_token:
            If the clientToken is not specified by the user, a random string
            generated by the default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'backendserver')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
body['backendServerList'] = backend_server_list
return self._send_request(http_methods.POST, path,
body=json.dumps(body), params=params,
config=config)
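    # A minimal sketch of the expected backend_server_list payload (instance
    # ids below are hypothetical placeholders; weight 0 receives no traffic):
    #
    #     servers = [
    #         {'instanceId': 'i-aaaaaaaa', 'weight': 50},
    #         {'instanceId': 'i-bbbbbbbb', 'weight': 0},
    #     ]
    #     client.add_backend_servers(blb_id='lb-xxxxxxxx',
    #                                backend_server_list=servers)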
@required(blb_id=(bytes, str),
listener_port=int)
def describe_health_status(self, blb_id, listener_port,
marker=None, max_keys=None, config=None):
"""
        Query the health status of the backend servers under the specified
        LoadBalancer, identified by listener port.
:param blb_id:
id of LoadBalancer
:type blb_id: string
:param listener_port:
            Port listened on by the listener
:value 1-65535
:type listener_port: int
:param marker:
            The optional parameter marker specifies where in the results
            listing should begin. If the marker is not specified,
            listing starts from the first item.
:type marker: string
:param max_keys:
            The optional parameter that specifies the maximum number of
            results to return.
The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'backendserver')
params = {}
params[b'listenerPort'] = listener_port
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path, params=params,
config=config)
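    # A minimal paging sketch (the blb id is a hypothetical placeholder;
    # presumably the marker returned in the response feeds the next call):
    #
    #     page = client.describe_health_status(blb_id='lb-xxxxxxxx',
    #                                          listener_port=80, max_keys=100)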
@required(blb_id=(bytes, str))
def describe_backend_servers(self, blb_id, marker=None,
max_keys=None, config=None):
"""
Query the list of backend servers under the specified LoadBalancer
:param blb_id:
Id of LoadBalancer
:type blb_id:string
:param marker:
            The optional parameter marker specifies where in the results
            listing should begin. If the marker is not specified,
            listing starts from the first item.
:type marker: string
:param max_keys:
            The optional parameter that specifies the maximum number of
            results to return.
The default value is 1000.
:type max_keys: int
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'backendserver')
params = {}
if marker is not None:
params[b'marker'] = marker
if max_keys is not None:
params[b'maxKeys'] = max_keys
return self._send_request(http_methods.GET, path, params=params,
config=config)
@required(blb_id=(bytes, str),
backend_server_list=list)
def update_backend_servers(self, blb_id, backend_server_list,
client_token=None, config=None):
"""
        Update the information about the backend servers under
        the specified LoadBalancer.
:param blb_id:
id of LoadBalancer
:type blb_id:string
:param backend_server_list:
List of backend servers to be updated
        :type backend_server_list: List<BackendServerModel>
            BackendServerModel {
                :param instanceId:
                    id of the backend server
                :type instanceId: string
                :param weight:
                    Backend server weight, value range [0, 100];
                    weight 0 means no traffic is forwarded to
                    the backend server
                :type weight: int
            }
:param client_token:
            If the clientToken is not specified by the user, a random string
            generated by the default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'backendserver')
params = {}
params[b'update'] = None
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
body['backendServerList'] = backend_server_list
return self._send_request(http_methods.PUT, path,
body=json.dumps(body), params=params,
config=config)
@required(blb_id=(bytes, str),
backend_server_list=list)
def remove_backend_servers(self, blb_id, backend_server_list,
client_token=None, config=None):
"""
        Release the backend servers under the specified LoadBalancer;
        each server is identified by its instance id.
:param blb_id:
id of LoadBalancer
:type blb_id:string
:param backend_server_list:
List of backend servers to be removed
:type backend_server_list:List<string>
:param client_token:
            If the clientToken is not specified by the user,
            a random string generated by the default algorithm will be used.
:type client_token: string
:param config:
:type config: baidubce.BceClientConfiguration
:return:
:rtype baidubce.bce_response.BceResponse
"""
path = utils.append_uri(self.version, 'blb', blb_id, 'backendserver')
params = {}
if client_token is None:
params[b'clientToken'] = generate_client_token()
else:
params[b'clientToken'] = client_token
body = {}
body['backendServerList'] = backend_server_list
return self._send_request(http_methods.PUT, path,
body=json.dumps(body), params=params,
config=config)
def generate_client_token_by_uuid():
"""
The default method to generate the random string for client_token
if the optional parameter client_token is not specified by the user.
:return:
:rtype string
"""
return str(uuid.uuid4())
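# Module-level alias: the client methods above call generate_client_token()
# whenever the caller omits clientToken; UUID4 is the default implementation.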
generate_client_token = generate_client_token_by_uuid
| 39.31485 | 83 | 0.599388 | 9,457 | 83,662 | 5.123612 | 0.045257 | 0.055393 | 0.026004 | 0.030854 | 0.93854 | 0.932493 | 0.928984 | 0.922731 | 0.919614 | 0.916312 | 0 | 0.008678 | 0.340238 | 83,662 | 2,127 | 84 | 39.333333 | 0.86916 | 0.451304 | 0 | 0.843373 | 0 | 0 | 0.080904 | 0.012613 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040161 | false | 0 | 0.017403 | 0 | 0.100402 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f633894858836f2d69009d9548f2c9964ce2969 | 9,735 | py | Python | tethysext/atcore/tests/integrated_tests/mixins/user_lock_mixin_tests.py | Aquaveo/tethysext-atcore | 7a83ccea24fdbbe806f12154f938554dd6c8015f | [
"BSD-3-Clause"
] | 3 | 2020-11-05T23:50:47.000Z | 2021-02-26T21:43:29.000Z | tethysext/atcore/tests/integrated_tests/mixins/user_lock_mixin_tests.py | Aquaveo/tethysext-atcore | 7a83ccea24fdbbe806f12154f938554dd6c8015f | [
"BSD-3-Clause"
] | 7 | 2020-10-29T16:53:49.000Z | 2021-05-07T19:46:47.000Z | tethysext/atcore/tests/integrated_tests/mixins/user_lock_mixin_tests.py | Aquaveo/tethysext-atcore | 7a83ccea24fdbbe806f12154f938554dd6c8015f | [
"BSD-3-Clause"
] | null | null | null | from unittest import mock
from django.test import RequestFactory
from tethys_sdk.testing import TethysTestCase
from tethysext.atcore.tests.factories.django_user import UserFactory
from tethysext.atcore.mixins import UserLockMixin
class LockedThing(UserLockMixin):
pass
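# A minimal sketch of the UserLockMixin lifecycle these tests exercise
# (``request`` stands for any Django request with an authenticated user):
#
#     thing = LockedThing()
#     thing.acquire_user_lock(request)     # lock held by request.user
#     thing.is_locked_for_request_user(request)   # False for the lock owner
#     thing.release_user_lock(request)     # lock released
#     thing.acquire_user_lock()            # no request: locked for all users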
class UserLockMixinTests(TethysTestCase):
def setUp(self):
# Custom setup here
self.instance = LockedThing()
self.django_user = UserFactory()
self.django_user.save()
self.rf = RequestFactory()
def test_acquire_user_lock_django_user(self):
request = self.rf.get('/foo/bar')
request.user = self.django_user
ret = self.instance.acquire_user_lock(request)
self.assertTrue(ret)
self.assertEqual(self.django_user.username, self.instance._user_lock)
def test_acquire_user_lock_django_user_already_locked_for_given_user(self):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.django_user.username
ret = self.instance.acquire_user_lock(request)
self.assertTrue(ret)
self.assertEqual(self.django_user.username, self.instance._user_lock)
def test_acquire_user_lock_django_user_already_locked_not_given_user(self):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = 'otheruser'
ret = self.instance.acquire_user_lock(request)
self.assertFalse(ret)
self.assertEqual('otheruser', self.instance._user_lock)
def test_acquire_user_lock_django_user_already_locked_for_all_users(self):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.acquire_user_lock(request)
self.assertFalse(ret)
self.assertEqual(self.instance.LOCKED_FOR_ALL_USERS, self.instance._user_lock)
def test_acquire_user_lock_for_all_users(self):
ret = self.instance.acquire_user_lock()
self.assertTrue(ret)
self.assertEqual(self.instance.LOCKED_FOR_ALL_USERS, self.instance._user_lock)
def test_acquire_user_lock_for_all_users_already_locked_for_all_users(self):
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.acquire_user_lock()
self.assertTrue(ret)
self.assertEqual(self.instance.LOCKED_FOR_ALL_USERS, self.instance._user_lock)
def test_acquire_user_lock_for_all_users_already_locked_for_specific_user(self):
self.instance._user_lock = self.django_user.username
ret = self.instance.acquire_user_lock()
self.assertFalse(ret)
self.assertEqual(self.django_user.username, self.instance._user_lock)
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_release_user_lock_not_locked(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
ret = self.instance.release_user_lock(request)
self.assertTrue(ret)
self.assertIsNone(self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_release_user_lock_locked_with_given_request_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.django_user.username
ret = self.instance.release_user_lock(request)
self.assertTrue(ret)
self.assertIsNone(self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_release_user_lock_locked_not_given_request_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = 'otheruser'
ret = self.instance.release_user_lock(request)
self.assertFalse(ret)
self.assertEqual('otheruser', self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=True)
def test_release_user_lock_locked_not_given_request_user_permitted_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = 'otheruser'
ret = self.instance.release_user_lock(request)
self.assertTrue(ret)
self.assertIsNone(self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=True)
def test_release_user_lock_locked_for_all_users_permitted_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.release_user_lock(request)
self.assertTrue(ret)
self.assertIsNone(self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_release_user_lock_locked_for_all_users_not_admin_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.release_user_lock(request)
self.assertFalse(ret)
self.assertEqual(self.instance.LOCKED_FOR_ALL_USERS, self.instance._user_lock)
mock_hp.assert_called_with(request, 'can_override_user_locks')
def test_user_lock_initial(self):
ret = self.instance.user_lock
self.assertIsNone(ret)
def test_user_lock_set(self):
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.user_lock
self.assertEqual(self.instance.LOCKED_FOR_ALL_USERS, ret)
def test_is_user_locked_initial(self):
ret = self.instance.is_user_locked
self.assertFalse(ret)
def test_is_user_locked_empty_string(self):
self.instance._user_lock = ''
ret = self.instance.is_user_locked
self.assertFalse(ret)
def test_is_user_locked_user(self):
self.instance._user_lock = self.django_user.username
ret = self.instance.is_user_locked
self.assertTrue(ret)
def test_is_user_locked_for_all_users(self):
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.is_user_locked
self.assertTrue(ret)
def test_is_locked_for_all_users_initial(self):
ret = self.instance.is_locked_for_all_users
self.assertFalse(ret)
def test_is_locked_for_all_users_locked(self):
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.is_locked_for_all_users
self.assertTrue(ret)
def test_is_locked_for_all_users_username(self):
self.instance._user_lock = self.django_user.username
ret = self.instance.is_locked_for_all_users
self.assertFalse(ret)
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_is_locked_for_request_user_locked_with_given_request_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.django_user.username
ret = self.instance.is_locked_for_request_user(request)
self.assertFalse(ret)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_is_locked_for_request_user_locked_not_given_request_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = 'otheruser'
ret = self.instance.is_locked_for_request_user(request)
self.assertTrue(ret)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=True)
def test_is_locked_for_request_user_locked_not_given_request_user_permitted_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = 'otheruser'
ret = self.instance.is_locked_for_request_user(request)
self.assertFalse(ret)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=False)
def test_is_locked_for_request_user_locked_for_all_users_not_permitted_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.is_locked_for_request_user(request)
self.assertTrue(ret)
mock_hp.assert_called_with(request, 'can_override_user_locks')
@mock.patch('tethys_sdk.permissions.has_permission', return_value=True)
def test_is_locked_for_request_user_locked_for_all_users_permitted_user(self, mock_hp):
request = self.rf.get('/foo/bar')
request.user = self.django_user
self.instance._user_lock = self.instance.LOCKED_FOR_ALL_USERS
ret = self.instance.is_locked_for_request_user(request)
self.assertFalse(ret)
mock_hp.assert_called_with(request, 'can_override_user_locks')
| 36.597744 | 100 | 0.729841 | 1,317 | 9,735 | 4.994685 | 0.056948 | 0.138644 | 0.087565 | 0.109456 | 0.926269 | 0.920036 | 0.904986 | 0.899666 | 0.889024 | 0.886136 | 0 | 0 | 0.179764 | 9,735 | 265 | 101 | 36.735849 | 0.823795 | 0.001746 | 0 | 0.761111 | 0 | 0 | 0.086764 | 0.067929 | 0 | 0 | 0 | 0 | 0.283333 | 1 | 0.155556 | false | 0.005556 | 0.027778 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f6b2428e4d18494879ff98f12a655bc38a7d3f5 | 12,843 | py | Python | resolwe_bio/processes/reads_processing/cutadapt_corall.py | dblenkus/resolwe-bio | 5077a162f454576dbe1bc41e97923bde49420261 | [
"Apache-2.0"
] | null | null | null | resolwe_bio/processes/reads_processing/cutadapt_corall.py | dblenkus/resolwe-bio | 5077a162f454576dbe1bc41e97923bde49420261 | [
"Apache-2.0"
] | null | null | null | resolwe_bio/processes/reads_processing/cutadapt_corall.py | dblenkus/resolwe-bio | 5077a162f454576dbe1bc41e97923bde49420261 | [
"Apache-2.0"
] | null | null | null | """Pre-process reads obtained using CORALL Total RNA-Seq Library Prep Kit."""
import os
from plumbum import TEE
from resolwe.process import (
Cmd,
DataField,
FileField,
FileHtmlField,
GroupField,
IntegerField,
ListField,
Process,
SchedulingClass,
)
class CutadaptCorallSingle(Process):
"""Pre-process reads obtained using CORALL Total RNA-Seq Library Prep Kit.
Trim UMI-tags from input reads and use Cutadapt to remove adapters and run QC filtering steps.
"""
slug = "cutadapt-corall-single"
name = "Cutadapt (Corall RNA-Seq, single-end)"
process_type = "data:reads:fastq:single:cutadapt:"
version = "1.1.1"
category = "Other"
scheduling_class = SchedulingClass.BATCH
entity = {"type": "sample"}
requirements = {
"expression-engine": "jinja",
"executor": {"docker": {"image": "resolwebio/rnaseq:4.9.0"},},
"resources": {"cores": 10, "memory": 16384,},
}
data_name = '{{ reads|sample_name|default("?") }}'
class Input:
"""Input fields."""
reads = DataField("reads:fastq:single", label="Select sample(s)")
class Options:
"""Options."""
nextseq_trim = IntegerField(
label="NextSeq/NovaSeq trim",
description="NextSeq/NovaSeq-specific quality trimming. Trims also dark "
"cycles appearing as high-quality G bases. This option is mutually "
"exclusive with the use of standard quality-cutoff trimming and is "
"suitable for the use with data generated by the recent Illumina "
"machines that utilize two-color chemistry to encode the four bases.",
default=10,
)
quality_cutoff = IntegerField(
label="Quality cutoff",
description="Trim low-quality bases from 3' end of each read before adapter "
"removal. The use of this option will override the use of "
"NextSeq/NovaSeq trim option.",
required=False,
)
min_len = IntegerField(label="Minimum read length", default=20,)
min_overlap = IntegerField(
label="Mimimum overlap",
description="Minimum overlap between adapter and read for an adapter to be found.",
default=20,
)
options = GroupField(Options, label="Options")
class Output:
"""Output fields."""
fastq = ListField(FileField(), label="Reads file")
report = FileField(label="Cutadapt report")
fastqc_url = ListField(FileHtmlField(), label="Quality control with FastQC")
fastqc_archive = ListField(FileField(), label="Download FastQC archive")
def run(self, inputs, outputs):
"""Run analysis."""
# Get input reads file name (for the first of the possible multiple lanes)
reads_path = os.path.basename(inputs.reads.fastq[0].path)
assert reads_path.endswith(".fastq.gz")
name = reads_path[:-9]
# Concatenate multi-lane read files
(
Cmd["cat"][[reads.path for reads in inputs.reads.fastq]]
> "input_reads.fastq.gz"
)()
# Extract UMI sequences
Cmd["extract_umi.sh"]([10, 13, "input_reads.fastq.gz"])
# Prepare Cutadapt inputs
if inputs.options.quality_cutoff is not None:
read_trim_cutoff = "--quality-cutoff={}".format(
inputs.options.quality_cutoff
)
else:
read_trim_cutoff = "--nextseq-trim={}".format(inputs.options.nextseq_trim)
rd1Adapter = "AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC"
first_pass_input = [
"-m",
inputs.options.min_len,
"-O",
inputs.options.min_overlap,
"-a",
"QUALITY=G{20}",
"-j",
self.requirements.resources.cores,
"input_reads_umi.fastq.gz",
]
second_pass_input = [
"-m",
inputs.options.min_len,
read_trim_cutoff,
"-a",
rd1Adapter,
"-j",
self.requirements.resources.cores,
"-",
]
third_pass_input = [
"-m",
inputs.options.min_len,
"-O",
3,
"-a",
"r1polyA=A{18}",
"-j",
self.requirements.resources.cores,
"-",
]
fourth_pass_input = [
"-m",
inputs.options.min_len,
"-O",
inputs.options.min_overlap,
"-g",
rd1Adapter,
"--discard-trimmed",
"-j",
self.requirements.resources.cores,
"-o",
"{}_trimmed.fastq.gz".format(name),
"-",
]
# Run Cutadapt, write analysis reports into a report file
(
Cmd["cutadapt"][first_pass_input]
| Cmd["cutadapt"][second_pass_input]
| Cmd["cutadapt"][third_pass_input]
| Cmd["cutadapt"][fourth_pass_input]
> "cutadapt_report.txt"
)()
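        # The chained Cmd calls above correspond to a shell pipeline of the
        # form (flag values shown are the option defaults; N is the number
        # of cores):
        #
        #   cutadapt -m 20 -O 20 -a "QUALITY=G{20}" -j N input_reads_umi.fastq.gz \
        #     | cutadapt -m 20 --nextseq-trim=10 -a $rd1Adapter -j N - \
        #     | cutadapt -m 20 -O 3 -a "r1polyA=A{18}" -j N - \
        #     | cutadapt -m 20 -O 20 -g $rd1Adapter --discard-trimmed -j N \
        #         -o ${name}_trimmed.fastq.gz - \
        #     > cutadapt_report.txt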
# Prepare final FASTQC report
fastqc_args = [
"{}_trimmed.fastq.gz".format(name),
"fastqc",
"fastqc_archive",
"fastqc_url",
"--nogroup",
]
return_code, _, _ = Cmd["fastqc.sh"][fastqc_args] & TEE(retcode=None)
if return_code:
self.error("Error while preparing FASTQC report.")
# Save the outputs
outputs.fastq = ["{}_trimmed.fastq.gz".format(name)]
outputs.report = "cutadapt_report.txt"
class CutadaptCorallPaired(Process):
"""Pre-process reads obtained using CORALL Total RNA-Seq Library Prep Kit.
Trim UMI-tags from input reads and use Cutadapt to remove adapters and run QC filtering steps.
"""
slug = "cutadapt-corall-paired"
name = "Cutadapt (Corall RNA-Seq, paired-end)"
process_type = "data:reads:fastq:paired:cutadapt:"
version = "1.1.1"
category = "Other"
scheduling_class = SchedulingClass.BATCH
entity = {"type": "sample"}
requirements = {
"expression-engine": "jinja",
"executor": {"docker": {"image": "resolwebio/rnaseq:4.9.0"},},
"resources": {"cores": 10, "memory": 16384,},
}
data_name = '{{ reads|sample_name|default("?") }}'
class Input:
"""Input fields."""
reads = DataField("reads:fastq:paired", label="Select sample(s)")
class Options:
"""Options."""
nextseq_trim = IntegerField(
label="NextSeq/NovaSeq trim",
description="NextSeq/NovaSeq-specific quality trimming. Trims also dark "
"cycles appearing as high-quality G bases. This option is mutually "
"exclusive with the use of standard quality-cutoff trimming and is "
"suitable for the use with data generated by the recent Illumina "
"machines that utilize two-color chemistry to encode the four bases.",
default=10,
)
quality_cutoff = IntegerField(
label="Quality cutoff",
description="Trim low-quality bases from 3' end of each read before adapter "
"removal. The use of this option will override the use of "
"NextSeq/NovaSeq trim option.",
required=False,
)
min_len = IntegerField(label="Minimum read length", default=20,)
min_overlap = IntegerField(
label="Mimimum overlap",
description="Minimum overlap between adapter and read for an adapter to be found.",
default=20,
)
options = GroupField(Options, label="Options")
class Output:
"""Output fields."""
fastq = ListField(FileField(), label="Remaining mate1 reads")
fastq2 = ListField(FileField(), label="Remaining mate2 reads")
report = FileField(label="Cutadapt report")
fastqc_url = ListField(
FileHtmlField(), label="Mate1 quality control with FastQC"
)
fastqc_url2 = ListField(
FileHtmlField(), label="Mate2 quality control with FastQC"
)
fastqc_archive = ListField(FileField(), label="Download mate1 FastQC archive")
fastqc_archive2 = ListField(FileField(), label="Download mate2 FastQC archive")
def run(self, inputs, outputs):
"""Run analysis."""
# Get input reads file name (for the first of the possible multiple lanes)
mate1_path = os.path.basename(inputs.reads.fastq[0].path)
assert mate1_path.endswith(".fastq.gz")
name_mate1 = mate1_path[:-9]
mate2_path = os.path.basename(inputs.reads.fastq2[0].path)
assert mate2_path.endswith(".fastq.gz")
name_mate2 = mate2_path[:-9]
# Concatenate multi-lane read files
(
Cmd["cat"][[reads.path for reads in inputs.reads.fastq]]
> "input_reads_mate1.fastq.gz"
)()
(
Cmd["cat"][[reads.path for reads in inputs.reads.fastq2]]
> "input_reads_mate2.fastq.gz"
)()
# Extract UMI sequences
Cmd["extract_umi.sh"](
[10, 13, "input_reads_mate1.fastq.gz", "input_reads_mate2.fastq.gz"]
)
# Prepare Cutadapt inputs
if inputs.options.quality_cutoff is not None:
read_trim_cutoff = "--quality-cutoff={}".format(
inputs.options.quality_cutoff
)
else:
read_trim_cutoff = "--nextseq-trim={}".format(inputs.options.nextseq_trim)
rd1Adapter = "AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC"
rd2Adapter = "AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT"
first_pass_input = [
"-m",
inputs.options.min_len,
"-O",
inputs.options.min_overlap,
"--interleaved",
"-n",
2,
"-a",
"QUALITY=G{20}",
"-A",
"QUALITY=G{20}",
"-j",
self.requirements.resources.cores,
"input_reads_mate1_umi.fastq.gz",
"input_reads_mate2_umi.fastq.gz",
]
second_pass_input = [
"-m",
inputs.options.min_len,
"--interleaved",
"-n",
3,
read_trim_cutoff,
"-a",
rd1Adapter,
"-A",
rd2Adapter,
"-G",
"XT{18}",
"-j",
self.requirements.resources.cores,
"-",
]
third_pass_input = [
"-m",
inputs.options.min_len,
"-O",
3,
"--interleaved",
"-n",
1,
"-a",
"r1polyA=A{18}",
"-j",
self.requirements.resources.cores,
"-",
]
fourth_pass_input = [
"-m",
inputs.options.min_len,
"-O",
inputs.options.min_overlap,
"--interleaved",
"-g",
rd1Adapter,
"-G",
rd2Adapter,
"--discard-trimmed",
"-j",
self.requirements.resources.cores,
"-o",
"{}_trimmed.fastq.gz".format(name_mate1),
"-p",
"{}_trimmed.fastq.gz".format(name_mate2),
"-",
]
# Run Cutadapt, write analysis reports into a report file
(
Cmd["cutadapt"][first_pass_input]
| Cmd["cutadapt"][second_pass_input]
| Cmd["cutadapt"][third_pass_input]
| Cmd["cutadapt"][fourth_pass_input]
> "cutadapt_report.txt"
)()
# Prepare final FASTQC report
fastqc_args = [
"{}_trimmed.fastq.gz".format(name_mate1),
"fastqc",
"fastqc_archive",
"fastqc_url",
]
return_code, _, _ = Cmd["fastqc.sh"][fastqc_args] & TEE(retcode=None)
if return_code:
self.error("Error while preparing FASTQC report.")
fastqc_args = [
"{}_trimmed.fastq.gz".format(name_mate2),
"fastqc",
"fastqc_archive2",
"fastqc_url2",
]
return_code, _, _ = Cmd["fastqc.sh"][fastqc_args] & TEE(retcode=None)
if return_code:
self.error("Error while preparing FASTQC report.")
# Save the outputs
outputs.fastq = ["{}_trimmed.fastq.gz".format(name_mate1)]
outputs.fastq2 = ["{}_trimmed.fastq.gz".format(name_mate2)]
outputs.report = "cutadapt_report.txt"
| 32.431818 | 99 | 0.54045 | 1,282 | 12,843 | 5.290952 | 0.173947 | 0.021672 | 0.028306 | 0.026537 | 0.897833 | 0.840484 | 0.816158 | 0.816158 | 0.816158 | 0.805838 | 0 | 0.013726 | 0.341976 | 12,843 | 395 | 100 | 32.513924 | 0.788901 | 0.079421 | 0 | 0.701923 | 0 | 0 | 0.278545 | 0.04712 | 0 | 0 | 0 | 0 | 0.009615 | 1 | 0.00641 | false | 0.051282 | 0.009615 | 0 | 0.099359 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
48bd7e8ce0b7b99bfb9407ccb7a8341798a41deb | 105,958 | py | Python | dingtalk/python/alibabacloud_dingtalk/doc_1_0/models.py | aliyun/dingtalk-sdk | ab4f856b8cfe94f6b69f10a0730a2e5a7d4901c5 | [
"Apache-2.0"
] | 15 | 2020-08-27T04:10:26.000Z | 2022-03-07T06:25:42.000Z | dingtalk/python/alibabacloud_dingtalk/doc_1_0/models.py | aliyun/dingtalk-sdk | ab4f856b8cfe94f6b69f10a0730a2e5a7d4901c5 | [
"Apache-2.0"
] | 1 | 2020-09-27T01:30:46.000Z | 2021-12-29T09:15:34.000Z | dingtalk/python/alibabacloud_dingtalk/doc_1_0/models.py | aliyun/dingtalk-sdk | ab4f856b8cfe94f6b69f10a0730a2e5a7d4901c5 | [
"Apache-2.0"
] | 5 | 2020-08-27T04:07:44.000Z | 2021-12-03T02:55:20.000Z | # -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
from Tea.model import TeaModel
from typing import Dict, List
class BatchGetWorkspaceDocsHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class BatchGetWorkspaceDocsRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
node_ids: List[str] = None,
ding_isv_org_id: int = None,
ding_org_id: int = None,
ding_access_token_type: str = None,
ding_uid: int = None,
):
        # unionId of the operating user
        self.operator_id = operator_id
        # Ids of the nodes to query
self.node_ids = node_ids
self.ding_isv_org_id = ding_isv_org_id
self.ding_org_id = ding_org_id
self.ding_access_token_type = ding_access_token_type
self.ding_uid = ding_uid
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.node_ids is not None:
result['nodeIds'] = self.node_ids
if self.ding_isv_org_id is not None:
result['dingIsvOrgId'] = self.ding_isv_org_id
if self.ding_org_id is not None:
result['dingOrgId'] = self.ding_org_id
if self.ding_access_token_type is not None:
result['dingAccessTokenType'] = self.ding_access_token_type
if self.ding_uid is not None:
result['dingUid'] = self.ding_uid
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('nodeIds') is not None:
self.node_ids = m.get('nodeIds')
if m.get('dingIsvOrgId') is not None:
self.ding_isv_org_id = m.get('dingIsvOrgId')
if m.get('dingOrgId') is not None:
self.ding_org_id = m.get('dingOrgId')
if m.get('dingAccessTokenType') is not None:
self.ding_access_token_type = m.get('dingAccessTokenType')
if m.get('dingUid') is not None:
self.ding_uid = m.get('dingUid')
return self
class BatchGetWorkspaceDocsResponseBodyResultNodeBO(TeaModel):
def __init__(
self,
name: str = None,
node_id: str = None,
url: str = None,
deleted: bool = None,
):
self.name = name
self.node_id = node_id
self.url = url
self.deleted = deleted
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.url is not None:
result['url'] = self.url
if self.deleted is not None:
result['deleted'] = self.deleted
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('deleted') is not None:
self.deleted = m.get('deleted')
return self
class BatchGetWorkspaceDocsResponseBodyResultWorkspaceBO(TeaModel):
def __init__(
self,
workspace_id: str = None,
name: str = None,
):
self.workspace_id = workspace_id
self.name = name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('name') is not None:
self.name = m.get('name')
return self
class BatchGetWorkspaceDocsResponseBodyResult(TeaModel):
def __init__(
self,
node_bo: BatchGetWorkspaceDocsResponseBodyResultNodeBO = None,
workspace_bo: BatchGetWorkspaceDocsResponseBodyResultWorkspaceBO = None,
has_permission: bool = None,
):
self.node_bo = node_bo
self.workspace_bo = workspace_bo
self.has_permission = has_permission
def validate(self):
if self.node_bo:
self.node_bo.validate()
if self.workspace_bo:
self.workspace_bo.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_bo is not None:
result['nodeBO'] = self.node_bo.to_map()
if self.workspace_bo is not None:
result['workspaceBO'] = self.workspace_bo.to_map()
if self.has_permission is not None:
result['hasPermission'] = self.has_permission
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeBO') is not None:
temp_model = BatchGetWorkspaceDocsResponseBodyResultNodeBO()
self.node_bo = temp_model.from_map(m['nodeBO'])
if m.get('workspaceBO') is not None:
temp_model = BatchGetWorkspaceDocsResponseBodyResultWorkspaceBO()
self.workspace_bo = temp_model.from_map(m['workspaceBO'])
if m.get('hasPermission') is not None:
self.has_permission = m.get('hasPermission')
return self
class BatchGetWorkspaceDocsResponseBody(TeaModel):
def __init__(
self,
result: List[BatchGetWorkspaceDocsResponseBodyResult] = None,
):
self.result = result
def validate(self):
if self.result:
for k in self.result:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['result'] = []
if self.result is not None:
for k in self.result:
result['result'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
self.result = []
if m.get('result') is not None:
for k in m.get('result'):
temp_model = BatchGetWorkspaceDocsResponseBodyResult()
self.result.append(temp_model.from_map(k))
return self
class BatchGetWorkspaceDocsResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: BatchGetWorkspaceDocsResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = BatchGetWorkspaceDocsResponseBody()
self.body = temp_model.from_map(m['body'])
return self
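# A minimal sketch of the to_map()/from_map() round trip shared by every
# TeaModel subclass in this module (ids below are hypothetical placeholders):
#
#     req = BatchGetWorkspaceDocsRequest(operator_id='union-id-123',
#                                        node_ids=['node-1', 'node-2'])
#     payload = req.to_map()   # {'operatorId': 'union-id-123', 'nodeIds': [...]}
#     restored = BatchGetWorkspaceDocsRequest().from_map(payload)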
class DeleteSheetHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class DeleteSheetRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
):
        # unionId of the operator
self.operator_id = operator_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
return self
class DeleteSheetResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
class UpdateWorkspaceDocMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class UpdateWorkspaceDocMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
role_type: str = None,
):
        # unionId of the target member
        self.member_id = member_id
        # Member type
        self.member_type = member_type
        # Member role/permission
self.role_type = role_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
if self.role_type is not None:
result['roleType'] = self.role_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
if m.get('roleType') is not None:
self.role_type = m.get('roleType')
return self
class UpdateWorkspaceDocMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[UpdateWorkspaceDocMembersRequestMembers] = None,
):
        # unionId of the operator initiating the request
        self.operator_id = operator_id
        # Members to operate on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = UpdateWorkspaceDocMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class UpdateWorkspaceDocMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
class CreateWorkspaceDocHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class CreateWorkspaceDocRequest(TeaModel):
def __init__(
self,
name: str = None,
doc_type: str = None,
operator_id: str = None,
parent_node_id: str = None,
):
        # Document name
        self.name = name
        # Document type
        self.doc_type = doc_type
        # unionId of the operator
        self.operator_id = operator_id
        # nodeId of the parent node
self.parent_node_id = parent_node_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.doc_type is not None:
result['docType'] = self.doc_type
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.parent_node_id is not None:
result['parentNodeId'] = self.parent_node_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('docType') is not None:
self.doc_type = m.get('docType')
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('parentNodeId') is not None:
self.parent_node_id = m.get('parentNodeId')
return self
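# A minimal sketch of building a doc-creation request (all field values are
# hypothetical placeholders; the accepted docType values are defined by the
# DingTalk API, not by this model):
#
#     req = CreateWorkspaceDocRequest(name='Quarterly plan',
#                                     doc_type='doc',
#                                     operator_id='union-id-123',
#                                     parent_node_id='node-root')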
class CreateWorkspaceDocResponseBody(TeaModel):
def __init__(
self,
workspace_id: str = None,
node_id: str = None,
doc_key: str = None,
url: str = None,
):
        # Workspace id
        self.workspace_id = workspace_id
        # Document id
        self.node_id = node_id
        # docKey of the document
        self.doc_key = doc_key
        # URL for opening the document
self.url = url
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.doc_key is not None:
result['docKey'] = self.doc_key
if self.url is not None:
result['url'] = self.url
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('docKey') is not None:
self.doc_key = m.get('docKey')
if m.get('url') is not None:
self.url = m.get('url')
return self
class CreateWorkspaceDocResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: CreateWorkspaceDocResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = CreateWorkspaceDocResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class CreateSheetHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class CreateSheetRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
name: str = None,
):
        # unionId of the operator
        self.operator_id = operator_id
        # Sheet name
self.name = name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('name') is not None:
self.name = m.get('name')
return self
class CreateSheetResponseBody(TeaModel):
def __init__(
self,
visibility: str = None,
name: str = None,
):
        # Sheet visibility
        self.visibility = visibility
        # Name of the created sheet. If a sheet with the requested name already
        # exists in the workbook, this may differ from the name in the input.
self.name = name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.visibility is not None:
result['visibility'] = self.visibility
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('visibility') is not None:
self.visibility = m.get('visibility')
if m.get('name') is not None:
self.name = m.get('name')
return self
class CreateSheetResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: CreateSheetResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = CreateSheetResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class CreateWorkspaceHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class CreateWorkspaceRequest(TeaModel):
def __init__(
self,
name: str = None,
description: str = None,
operator_id: str = None,
ding_org_id: int = None,
ding_uid: int = None,
ding_access_token_type: str = None,
ding_isv_org_id: int = None,
):
        # Workspace name
        self.name = name
        # Workspace description
        self.description = description
        # User id
self.operator_id = operator_id
self.ding_org_id = ding_org_id
self.ding_uid = ding_uid
self.ding_access_token_type = ding_access_token_type
self.ding_isv_org_id = ding_isv_org_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.description is not None:
result['description'] = self.description
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.ding_org_id is not None:
result['dingOrgId'] = self.ding_org_id
if self.ding_uid is not None:
result['dingUid'] = self.ding_uid
if self.ding_access_token_type is not None:
result['dingAccessTokenType'] = self.ding_access_token_type
if self.ding_isv_org_id is not None:
result['dingIsvOrgId'] = self.ding_isv_org_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('description') is not None:
self.description = m.get('description')
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('dingOrgId') is not None:
self.ding_org_id = m.get('dingOrgId')
if m.get('dingUid') is not None:
self.ding_uid = m.get('dingUid')
if m.get('dingAccessTokenType') is not None:
self.ding_access_token_type = m.get('dingAccessTokenType')
if m.get('dingIsvOrgId') is not None:
self.ding_isv_org_id = m.get('dingIsvOrgId')
return self
class CreateWorkspaceResponseBody(TeaModel):
def __init__(
self,
workspace_id: str = None,
name: str = None,
description: str = None,
url: str = None,
):
        # Workspace id
        self.workspace_id = workspace_id
        # Workspace name
        self.name = name
        # Workspace description
        self.description = description
        # URL for opening the workspace
self.url = url
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.name is not None:
result['name'] = self.name
if self.description is not None:
result['description'] = self.description
if self.url is not None:
result['url'] = self.url
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('name') is not None:
self.name = m.get('name')
if m.get('description') is not None:
self.description = m.get('description')
if m.get('url') is not None:
self.url = m.get('url')
return self
class CreateWorkspaceResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: CreateWorkspaceResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = CreateWorkspaceResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class DeleteWorkspaceDocMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class DeleteWorkspaceDocMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
):
        # unionId of the target member
        self.member_id = member_id
        # Member type
self.member_type = member_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
return self
class DeleteWorkspaceDocMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[DeleteWorkspaceDocMembersRequestMembers] = None,
):
        # unionId of the operator initiating the request
        self.operator_id = operator_id
        # Members to operate on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = DeleteWorkspaceDocMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class DeleteWorkspaceDocMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
class GetWorkspaceHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class GetWorkspaceResponseBody(TeaModel):
def __init__(
self,
url: str = None,
is_deleted: bool = None,
owner: str = None,
corp_id: str = None,
):
self.url = url
self.is_deleted = is_deleted
self.owner = owner
        # Corp id of the enterprise that owns the workspace
self.corp_id = corp_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.url is not None:
result['url'] = self.url
if self.is_deleted is not None:
result['isDeleted'] = self.is_deleted
if self.owner is not None:
result['owner'] = self.owner
if self.corp_id is not None:
result['corpId'] = self.corp_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('url') is not None:
self.url = m.get('url')
if m.get('isDeleted') is not None:
self.is_deleted = m.get('isDeleted')
if m.get('owner') is not None:
self.owner = m.get('owner')
if m.get('corpId') is not None:
self.corp_id = m.get('corpId')
return self
class GetWorkspaceResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: GetWorkspaceResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = GetWorkspaceResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class SearchWorkspaceDocsHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class SearchWorkspaceDocsRequest(TeaModel):
def __init__(
self,
workspace_id: str = None,
operator_id: str = None,
keyword: str = None,
max_results: int = None,
next_token: str = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # unionId of the user performing the search
        self.operator_id = operator_id
        # search keyword
        self.keyword = keyword
        # maximum number of results to return
        self.max_results = max_results
        # pagination token
self.next_token = next_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.keyword is not None:
result['keyword'] = self.keyword
if self.max_results is not None:
result['maxResults'] = self.max_results
if self.next_token is not None:
result['nextToken'] = self.next_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('keyword') is not None:
self.keyword = m.get('keyword')
if m.get('maxResults') is not None:
self.max_results = m.get('maxResults')
if m.get('nextToken') is not None:
self.next_token = m.get('nextToken')
return self
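# Illustrative sketch (hypothetical values): building the first page of a doc
# search. to_map() yields the camelCase dict the API expects.
def _example_search_docs_request():
    req = SearchWorkspaceDocsRequest(
        workspace_id='hypothetical-workspace-id',
        operator_id='hypothetical-operator-unionId',
        keyword='design doc',
        max_results=20,   # page size
        next_token=None,  # no cursor on the first page
    )
    return req.to_map()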
class SearchWorkspaceDocsResponseBodyDocsNodeBO(TeaModel):
def __init__(
self,
name: str = None,
node_id: str = None,
url: str = None,
last_edit_time: int = None,
):
        # node name
        self.name = name
        # node id
        self.node_id = node_id
        # url to open the node
        self.url = url
        # last edit time
self.last_edit_time = last_edit_time
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.url is not None:
result['url'] = self.url
if self.last_edit_time is not None:
result['lastEditTime'] = self.last_edit_time
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('lastEditTime') is not None:
self.last_edit_time = m.get('lastEditTime')
return self
class SearchWorkspaceDocsResponseBodyDocsWorkspaceBO(TeaModel):
def __init__(
self,
workspace_id: str = None,
name: str = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # workspace name
self.name = name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('name') is not None:
self.name = m.get('name')
return self
class SearchWorkspaceDocsResponseBodyDocs(TeaModel):
def __init__(
self,
node_bo: SearchWorkspaceDocsResponseBodyDocsNodeBO = None,
workspace_bo: SearchWorkspaceDocsResponseBodyDocsWorkspaceBO = None,
):
self.node_bo = node_bo
self.workspace_bo = workspace_bo
def validate(self):
if self.node_bo:
self.node_bo.validate()
if self.workspace_bo:
self.workspace_bo.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_bo is not None:
result['nodeBO'] = self.node_bo.to_map()
if self.workspace_bo is not None:
result['workspaceBO'] = self.workspace_bo.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeBO') is not None:
temp_model = SearchWorkspaceDocsResponseBodyDocsNodeBO()
self.node_bo = temp_model.from_map(m['nodeBO'])
if m.get('workspaceBO') is not None:
temp_model = SearchWorkspaceDocsResponseBodyDocsWorkspaceBO()
self.workspace_bo = temp_model.from_map(m['workspaceBO'])
return self
class SearchWorkspaceDocsResponseBody(TeaModel):
def __init__(
self,
has_more: bool = None,
next_token: str = None,
docs: List[SearchWorkspaceDocsResponseBodyDocs] = None,
):
        # whether more results are available
self.has_more = has_more
self.next_token = next_token
self.docs = docs
def validate(self):
if self.docs:
for k in self.docs:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.has_more is not None:
result['hasMore'] = self.has_more
if self.next_token is not None:
result['nextToken'] = self.next_token
result['docs'] = []
if self.docs is not None:
for k in self.docs:
result['docs'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('hasMore') is not None:
self.has_more = m.get('hasMore')
if m.get('nextToken') is not None:
self.next_token = m.get('nextToken')
self.docs = []
if m.get('docs') is not None:
for k in m.get('docs'):
temp_model = SearchWorkspaceDocsResponseBodyDocs()
self.docs.append(temp_model.from_map(k))
return self
class SearchWorkspaceDocsResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: SearchWorkspaceDocsResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = SearchWorkspaceDocsResponseBody()
self.body = temp_model.from_map(m['body'])
return self
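# Illustrative sketch: draining a paged search. `call_search` is a hypothetical
# placeholder for whatever client method actually issues the request; the loop
# only relies on the hasMore/nextToken fields defined above.
def _example_drain_search_results(call_search, request):
    docs = []
    while True:
        body = call_search(request).body  # SearchWorkspaceDocsResponseBody
        docs.extend(body.docs or [])
        if not body.has_more:
            return docs
        request.next_token = body.next_token  # cursor for the next page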
class UpdateRangeHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class UpdateRangeRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
values: List[List[str]] = None,
background_colors: List[List[str]] = None,
):
        # unionId of the operator
        self.operator_id = operator_id
        # cell values to write
        self.values = values
        # cell background colors
self.background_colors = background_colors
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.values is not None:
result['values'] = self.values
if self.background_colors is not None:
result['backgroundColors'] = self.background_colors
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('values') is not None:
self.values = m.get('values')
if m.get('backgroundColors') is not None:
self.background_colors = m.get('backgroundColors')
return self
class UpdateRangeResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
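# Illustrative sketch (hypothetical values): `values` and `background_colors`
# are row-major 2-D lists covering the target range. Hex color strings are an
# assumption here, not a documented requirement.
def _example_update_range_request():
    return UpdateRangeRequest(
        operator_id='hypothetical-operator-unionId',
        values=[['a1', 'b1'], ['a2', 'b2']],
        background_colors=[['#ffffff', '#ff0000'], ['#ffffff', '#ffffff']],
    ).to_map()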
class BatchGetWorkspacesHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class BatchGetWorkspacesRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
include_recent: bool = None,
workspace_ids: List[str] = None,
ding_org_id: int = None,
ding_isv_org_id: int = None,
ding_uid: int = None,
ding_access_token_type: str = None,
):
        # unionId of the operating user
        self.operator_id = operator_id
        # whether to include recently visited docs
        self.include_recent = include_recent
        # ids of the workspaces to query
self.workspace_ids = workspace_ids
self.ding_org_id = ding_org_id
self.ding_isv_org_id = ding_isv_org_id
self.ding_uid = ding_uid
self.ding_access_token_type = ding_access_token_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.include_recent is not None:
result['includeRecent'] = self.include_recent
if self.workspace_ids is not None:
result['workspaceIds'] = self.workspace_ids
if self.ding_org_id is not None:
result['dingOrgId'] = self.ding_org_id
if self.ding_isv_org_id is not None:
result['dingIsvOrgId'] = self.ding_isv_org_id
if self.ding_uid is not None:
result['dingUid'] = self.ding_uid
if self.ding_access_token_type is not None:
result['dingAccessTokenType'] = self.ding_access_token_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('includeRecent') is not None:
self.include_recent = m.get('includeRecent')
if m.get('workspaceIds') is not None:
self.workspace_ids = m.get('workspaceIds')
if m.get('dingOrgId') is not None:
self.ding_org_id = m.get('dingOrgId')
if m.get('dingIsvOrgId') is not None:
self.ding_isv_org_id = m.get('dingIsvOrgId')
if m.get('dingUid') is not None:
self.ding_uid = m.get('dingUid')
if m.get('dingAccessTokenType') is not None:
self.ding_access_token_type = m.get('dingAccessTokenType')
return self
class BatchGetWorkspacesResponseBodyWorkspacesWorkspaceRecentList(TeaModel):
def __init__(
self,
node_id: str = None,
name: str = None,
url: str = None,
last_edit_time: str = None,
):
        # doc id
        self.node_id = node_id
        # doc name
        self.name = name
        # url to open the doc
        self.url = url
        # last edit time
self.last_edit_time = last_edit_time
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.name is not None:
result['name'] = self.name
if self.url is not None:
result['url'] = self.url
if self.last_edit_time is not None:
result['lastEditTime'] = self.last_edit_time
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('name') is not None:
self.name = m.get('name')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('lastEditTime') is not None:
self.last_edit_time = m.get('lastEditTime')
return self
class BatchGetWorkspacesResponseBodyWorkspacesWorkspace(TeaModel):
def __init__(
self,
workspace_id: str = None,
name: str = None,
url: str = None,
recent_list: List[BatchGetWorkspacesResponseBodyWorkspacesWorkspaceRecentList] = None,
org_published: bool = None,
create_time: int = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # workspace name
        self.name = name
        # url to open the workspace
        self.url = url
        # recently visited docs
        self.recent_list = recent_list
        # whether the workspace is published to the whole org
        self.org_published = org_published
        # workspace creation time
self.create_time = create_time
def validate(self):
if self.recent_list:
for k in self.recent_list:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.name is not None:
result['name'] = self.name
if self.url is not None:
result['url'] = self.url
result['recentList'] = []
if self.recent_list is not None:
for k in self.recent_list:
result['recentList'].append(k.to_map() if k else None)
if self.org_published is not None:
result['orgPublished'] = self.org_published
if self.create_time is not None:
result['createTime'] = self.create_time
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('name') is not None:
self.name = m.get('name')
if m.get('url') is not None:
self.url = m.get('url')
self.recent_list = []
if m.get('recentList') is not None:
for k in m.get('recentList'):
temp_model = BatchGetWorkspacesResponseBodyWorkspacesWorkspaceRecentList()
self.recent_list.append(temp_model.from_map(k))
if m.get('orgPublished') is not None:
self.org_published = m.get('orgPublished')
if m.get('createTime') is not None:
self.create_time = m.get('createTime')
return self
class BatchGetWorkspacesResponseBodyWorkspaces(TeaModel):
def __init__(
self,
has_permission: bool = None,
workspace: BatchGetWorkspacesResponseBodyWorkspacesWorkspace = None,
):
        # whether the user may access the workspace
        self.has_permission = has_permission
        # workspace info
self.workspace = workspace
def validate(self):
if self.workspace:
self.workspace.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.has_permission is not None:
result['hasPermission'] = self.has_permission
if self.workspace is not None:
result['workspace'] = self.workspace.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('hasPermission') is not None:
self.has_permission = m.get('hasPermission')
if m.get('workspace') is not None:
temp_model = BatchGetWorkspacesResponseBodyWorkspacesWorkspace()
self.workspace = temp_model.from_map(m['workspace'])
return self
class BatchGetWorkspacesResponseBody(TeaModel):
def __init__(
self,
workspaces: List[BatchGetWorkspacesResponseBodyWorkspaces] = None,
):
        # workspace info
self.workspaces = workspaces
def validate(self):
if self.workspaces:
for k in self.workspaces:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['workspaces'] = []
if self.workspaces is not None:
for k in self.workspaces:
result['workspaces'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
self.workspaces = []
if m.get('workspaces') is not None:
for k in m.get('workspaces'):
temp_model = BatchGetWorkspacesResponseBodyWorkspaces()
self.workspaces.append(temp_model.from_map(k))
return self
class BatchGetWorkspacesResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: BatchGetWorkspacesResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = BatchGetWorkspacesResponseBody()
self.body = temp_model.from_map(m['body'])
return self
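# Illustrative sketch: from_map() instantiates the nested child models
# recursively, so a raw payload dict can be walked as typed objects afterwards.
def _example_parse_batch_workspaces(payload_dict):
    body = BatchGetWorkspacesResponseBody().from_map(payload_dict)
    return [
        (w.workspace.workspace_id, w.workspace.name)
        for w in body.workspaces or []
        if w.has_permission and w.workspace is not None
    ]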
class DeleteWorkspaceMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class DeleteWorkspaceMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
):
        # unionId of the target user
        self.member_id = member_id
        # member type
self.member_type = member_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
return self
class DeleteWorkspaceMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[DeleteWorkspaceMembersRequestMembers] = None,
):
        # unionId of the user initiating the operation
        self.operator_id = operator_id
        # members being operated on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = DeleteWorkspaceMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class DeleteWorkspaceMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
class AddWorkspaceDocMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class AddWorkspaceDocMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
role_type: str = None,
):
        # unionId of the target user
        self.member_id = member_id
        # member type
        self.member_type = member_type
        # member role (permission level)
self.role_type = role_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
if self.role_type is not None:
result['roleType'] = self.role_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
if m.get('roleType') is not None:
self.role_type = m.get('roleType')
return self
class AddWorkspaceDocMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[AddWorkspaceDocMembersRequestMembers] = None,
):
        # unionId of the user initiating the operation
        self.operator_id = operator_id
        # members being operated on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = AddWorkspaceDocMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class AddWorkspaceDocMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
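# Illustrative sketch (hypothetical values): granting doc access. The 'user'
# and 'EDITOR' literals are placeholders -- the accepted memberType/roleType
# values are defined by the service, not by this sketch.
def _example_add_doc_members_request():
    member = AddWorkspaceDocMembersRequestMembers(
        member_id='hypothetical-member-unionId',
        member_type='user',
        role_type='EDITOR',
    )
    return AddWorkspaceDocMembersRequest(
        operator_id='hypothetical-operator-unionId',
        members=[member],
    )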
class UpdateWorkspaceMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class UpdateWorkspaceMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
role_type: str = None,
):
        # unionId of the target user
        self.member_id = member_id
        # member type
        self.member_type = member_type
        # member role (permission level)
self.role_type = role_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
if self.role_type is not None:
result['roleType'] = self.role_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
if m.get('roleType') is not None:
self.role_type = m.get('roleType')
return self
class UpdateWorkspaceMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[UpdateWorkspaceMembersRequestMembers] = None,
):
        # unionId of the user initiating the operation
        self.operator_id = operator_id
        # members being operated on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = UpdateWorkspaceMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class UpdateWorkspaceMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
class GetSheetHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class GetSheetRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
):
        # unionId of the operator
self.operator_id = operator_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
return self
class GetSheetResponseBody(TeaModel):
def __init__(
self,
name: List[str] = None,
visibility: List[str] = None,
):
        # sheet names
        self.name = name
        # sheet visibility
self.visibility = visibility
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.visibility is not None:
result['visibility'] = self.visibility
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('visibility') is not None:
self.visibility = m.get('visibility')
return self
class GetSheetResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: GetSheetResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = GetSheetResponseBody()
self.body = temp_model.from_map(m['body'])
return self
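# Illustrative sketch: `name` and `visibility` are parallel lists with one
# entry per sheet, so zipping them pairs each sheet name with its visibility.
def _example_sheet_visibility(body):
    # body: GetSheetResponseBody
    return dict(zip(body.name or [], body.visibility or []))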
class GetRelatedWorkspacesHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class GetRelatedWorkspacesRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
include_recent: bool = None,
):
        # unionId of the operating user
        self.operator_id = operator_id
        # whether to include the recently visited doc list
self.include_recent = include_recent
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.include_recent is not None:
result['includeRecent'] = self.include_recent
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('includeRecent') is not None:
self.include_recent = m.get('includeRecent')
return self
class GetRelatedWorkspacesResponseBodyWorkspacesRecentList(TeaModel):
def __init__(
self,
node_id: str = None,
name: str = None,
url: str = None,
last_edit_time: int = None,
):
        # doc id
        self.node_id = node_id
        # doc name
        self.name = name
        # url to open the doc
        self.url = url
        # doc last edit time
self.last_edit_time = last_edit_time
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.name is not None:
result['name'] = self.name
if self.url is not None:
result['url'] = self.url
if self.last_edit_time is not None:
result['lastEditTime'] = self.last_edit_time
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('name') is not None:
self.name = m.get('name')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('lastEditTime') is not None:
self.last_edit_time = m.get('lastEditTime')
return self
class GetRelatedWorkspacesResponseBodyWorkspaces(TeaModel):
def __init__(
self,
workspace_id: str = None,
url: str = None,
deleted: bool = None,
owner: str = None,
role: str = None,
name: str = None,
recent_list: List[GetRelatedWorkspacesResponseBodyWorkspacesRecentList] = None,
create_time: int = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # url to open the workspace
        self.url = url
        # whether the workspace has been deleted
        self.deleted = deleted
        self.owner = owner
        # the user's role in the workspace
        self.role = role
        # workspace name
        self.name = name
        # the workspace's recently visited doc list
        self.recent_list = recent_list
        # workspace creation time
self.create_time = create_time
def validate(self):
if self.recent_list:
for k in self.recent_list:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.url is not None:
result['url'] = self.url
if self.deleted is not None:
result['deleted'] = self.deleted
if self.owner is not None:
result['owner'] = self.owner
if self.role is not None:
result['role'] = self.role
if self.name is not None:
result['name'] = self.name
result['recentList'] = []
if self.recent_list is not None:
for k in self.recent_list:
result['recentList'].append(k.to_map() if k else None)
if self.create_time is not None:
result['createTime'] = self.create_time
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('deleted') is not None:
self.deleted = m.get('deleted')
if m.get('owner') is not None:
self.owner = m.get('owner')
if m.get('role') is not None:
self.role = m.get('role')
if m.get('name') is not None:
self.name = m.get('name')
self.recent_list = []
if m.get('recentList') is not None:
for k in m.get('recentList'):
temp_model = GetRelatedWorkspacesResponseBodyWorkspacesRecentList()
self.recent_list.append(temp_model.from_map(k))
if m.get('createTime') is not None:
self.create_time = m.get('createTime')
return self
class GetRelatedWorkspacesResponseBody(TeaModel):
def __init__(
self,
workspaces: List[GetRelatedWorkspacesResponseBodyWorkspaces] = None,
):
        # workspace result set
self.workspaces = workspaces
def validate(self):
if self.workspaces:
for k in self.workspaces:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['workspaces'] = []
if self.workspaces is not None:
for k in self.workspaces:
result['workspaces'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
self.workspaces = []
if m.get('workspaces') is not None:
for k in m.get('workspaces'):
temp_model = GetRelatedWorkspacesResponseBodyWorkspaces()
self.workspaces.append(temp_model.from_map(k))
return self
class GetRelatedWorkspacesResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: GetRelatedWorkspacesResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = GetRelatedWorkspacesResponseBody()
self.body = temp_model.from_map(m['body'])
return self
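# Illustrative sketch: listing the caller's workspaces newest-first, skipping
# deleted ones. Treating createTime as a sortable integer timestamp is an
# assumption based on its int type above.
def _example_active_workspaces(body):
    # body: GetRelatedWorkspacesResponseBody
    active = [w for w in body.workspaces or [] if not w.deleted]
    return sorted(active, key=lambda w: w.create_time or 0, reverse=True)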
class GetRecentEditDocsHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class GetRecentEditDocsRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
max_results: int = None,
next_token: str = None,
):
        # unionId of the operating user
        self.operator_id = operator_id
        # page size of the query
self.max_results = max_results
self.next_token = next_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.max_results is not None:
result['maxResults'] = self.max_results
if self.next_token is not None:
result['nextToken'] = self.next_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('maxResults') is not None:
self.max_results = m.get('maxResults')
if m.get('nextToken') is not None:
self.next_token = m.get('nextToken')
return self
class GetRecentEditDocsResponseBodyRecentListNodeBO(TeaModel):
def __init__(
self,
node_id: str = None,
node_name: str = None,
url: str = None,
last_edit_time: int = None,
is_deleted: bool = None,
):
        # doc id
        self.node_id = node_id
        # doc name
        self.node_name = node_name
        # url to open the doc
        self.url = url
        # last edit time
        self.last_edit_time = last_edit_time
        # whether the doc has been deleted
self.is_deleted = is_deleted
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.node_name is not None:
result['nodeName'] = self.node_name
if self.url is not None:
result['url'] = self.url
if self.last_edit_time is not None:
result['lastEditTime'] = self.last_edit_time
if self.is_deleted is not None:
result['isDeleted'] = self.is_deleted
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('nodeName') is not None:
self.node_name = m.get('nodeName')
if m.get('url') is not None:
self.url = m.get('url')
if m.get('lastEditTime') is not None:
self.last_edit_time = m.get('lastEditTime')
if m.get('isDeleted') is not None:
self.is_deleted = m.get('isDeleted')
return self
class GetRecentEditDocsResponseBodyRecentListWorkspaceBO(TeaModel):
def __init__(
self,
workspace_id: str = None,
workspace_name: str = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # workspace name
self.workspace_name = workspace_name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.workspace_name is not None:
result['workspaceName'] = self.workspace_name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('workspaceName') is not None:
self.workspace_name = m.get('workspaceName')
return self
class GetRecentEditDocsResponseBodyRecentList(TeaModel):
def __init__(
self,
node_bo: GetRecentEditDocsResponseBodyRecentListNodeBO = None,
workspace_bo: GetRecentEditDocsResponseBodyRecentListWorkspaceBO = None,
):
        # doc info
        self.node_bo = node_bo
        # workspace info
self.workspace_bo = workspace_bo
def validate(self):
if self.node_bo:
self.node_bo.validate()
if self.workspace_bo:
self.workspace_bo.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_bo is not None:
result['nodeBO'] = self.node_bo.to_map()
if self.workspace_bo is not None:
result['workspaceBO'] = self.workspace_bo.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeBO') is not None:
temp_model = GetRecentEditDocsResponseBodyRecentListNodeBO()
self.node_bo = temp_model.from_map(m['nodeBO'])
if m.get('workspaceBO') is not None:
temp_model = GetRecentEditDocsResponseBodyRecentListWorkspaceBO()
self.workspace_bo = temp_model.from_map(m['workspaceBO'])
return self
class GetRecentEditDocsResponseBody(TeaModel):
def __init__(
self,
recent_list: List[GetRecentEditDocsResponseBodyRecentList] = None,
next_token: str = None,
):
        # query results
self.recent_list = recent_list
self.next_token = next_token
def validate(self):
if self.recent_list:
for k in self.recent_list:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
result['recentList'] = []
if self.recent_list is not None:
for k in self.recent_list:
result['recentList'].append(k.to_map() if k else None)
if self.next_token is not None:
result['nextToken'] = self.next_token
return result
def from_map(self, m: dict = None):
m = m or dict()
self.recent_list = []
if m.get('recentList') is not None:
for k in m.get('recentList'):
temp_model = GetRecentEditDocsResponseBodyRecentList()
self.recent_list.append(temp_model.from_map(k))
if m.get('nextToken') is not None:
self.next_token = m.get('nextToken')
return self
class GetRecentEditDocsResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: GetRecentEditDocsResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = GetRecentEditDocsResponseBody()
self.body = temp_model.from_map(m['body'])
return self
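# Illustrative sketch: each recent entry pairs doc info (nodeBO) with the
# workspace it lives in (workspaceBO); both may be absent, so guard before use.
def _example_recent_docs(body):
    # body: GetRecentEditDocsResponseBody
    for entry in body.recent_list or []:
        node, ws = entry.node_bo, entry.workspace_bo
        if node is not None and not node.is_deleted:
            yield node.node_name, ws.workspace_name if ws else None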
class AddWorkspaceMembersHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class AddWorkspaceMembersRequestMembers(TeaModel):
def __init__(
self,
member_id: str = None,
member_type: str = None,
role_type: str = None,
):
        # unionId of the target user
        self.member_id = member_id
        # member type
        self.member_type = member_type
        # member role (permission level)
self.role_type = role_type
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.member_id is not None:
result['memberId'] = self.member_id
if self.member_type is not None:
result['memberType'] = self.member_type
if self.role_type is not None:
result['roleType'] = self.role_type
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('memberId') is not None:
self.member_id = m.get('memberId')
if m.get('memberType') is not None:
self.member_type = m.get('memberType')
if m.get('roleType') is not None:
self.role_type = m.get('roleType')
return self
class AddWorkspaceMembersRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
members: List[AddWorkspaceMembersRequestMembers] = None,
):
        # unionId of the user initiating the operation
        self.operator_id = operator_id
        # members being operated on
self.members = members
def validate(self):
if self.members:
for k in self.members:
if k:
k.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
result['members'] = []
if self.members is not None:
for k in self.members:
result['members'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
self.members = []
if m.get('members') is not None:
for k in m.get('members'):
temp_model = AddWorkspaceMembersRequestMembers()
self.members.append(temp_model.from_map(k))
return self
class AddWorkspaceMembersResponseBody(TeaModel):
def __init__(
self,
not_in_org_list: List[str] = None,
):
self.not_in_org_list = not_in_org_list
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.not_in_org_list is not None:
result['notInOrgList'] = self.not_in_org_list
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('notInOrgList') is not None:
self.not_in_org_list = m.get('notInOrgList')
return self
class AddWorkspaceMembersResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: AddWorkspaceMembersResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = AddWorkspaceMembersResponseBody()
self.body = temp_model.from_map(m['body'])
return self
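# Illustrative sketch: unlike the other member operations here, adding
# workspace members returns a body; notInOrgList appears to carry the member
# ids that were skipped for being outside the org (a reading of the field
# name -- verify against the service docs).
def _example_skipped_members(body):
    # body: AddWorkspaceMembersResponseBody
    return list(body.not_in_org_list or [])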
class GetWorkspaceNodeHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class GetWorkspaceNodeRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
):
        # unionId of the operating user
self.operator_id = operator_id
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
return self
class GetWorkspaceNodeResponseBodyNodeBO(TeaModel):
def __init__(
self,
name: str = None,
node_id: str = None,
url: str = None,
):
        # node name
        self.name = name
        # node id
        self.node_id = node_id
        # url to open the node
self.url = url
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.name is not None:
result['name'] = self.name
if self.node_id is not None:
result['nodeId'] = self.node_id
if self.url is not None:
result['url'] = self.url
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('name') is not None:
self.name = m.get('name')
if m.get('nodeId') is not None:
self.node_id = m.get('nodeId')
if m.get('url') is not None:
self.url = m.get('url')
return self
class GetWorkspaceNodeResponseBodyWorkspaceBO(TeaModel):
def __init__(
self,
workspace_id: str = None,
name: str = None,
):
        # workspace id
        self.workspace_id = workspace_id
        # workspace name
self.name = name
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.workspace_id is not None:
result['workspaceId'] = self.workspace_id
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('workspaceId') is not None:
self.workspace_id = m.get('workspaceId')
if m.get('name') is not None:
self.name = m.get('name')
return self
class GetWorkspaceNodeResponseBody(TeaModel):
def __init__(
self,
node_bo: GetWorkspaceNodeResponseBodyNodeBO = None,
workspace_bo: GetWorkspaceNodeResponseBodyWorkspaceBO = None,
has_permission: bool = None,
):
        # node info
        self.node_bo = node_bo
        # info of the workspace the node belongs to
        self.workspace_bo = workspace_bo
        # whether the user has permission
self.has_permission = has_permission
def validate(self):
if self.node_bo:
self.node_bo.validate()
if self.workspace_bo:
self.workspace_bo.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.node_bo is not None:
result['nodeBO'] = self.node_bo.to_map()
if self.workspace_bo is not None:
result['workspaceBO'] = self.workspace_bo.to_map()
if self.has_permission is not None:
result['hasPermission'] = self.has_permission
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('nodeBO') is not None:
temp_model = GetWorkspaceNodeResponseBodyNodeBO()
self.node_bo = temp_model.from_map(m['nodeBO'])
if m.get('workspaceBO') is not None:
temp_model = GetWorkspaceNodeResponseBodyWorkspaceBO()
self.workspace_bo = temp_model.from_map(m['workspaceBO'])
if m.get('hasPermission') is not None:
self.has_permission = m.get('hasPermission')
return self
class GetWorkspaceNodeResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
body: GetWorkspaceNodeResponseBody = None,
):
self.headers = headers
self.body = body
def validate(self):
self.validate_required(self.headers, 'headers')
self.validate_required(self.body, 'body')
if self.body:
self.body.validate()
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
if self.body is not None:
result['body'] = self.body.to_map()
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
if m.get('body') is not None:
temp_model = GetWorkspaceNodeResponseBody()
self.body = temp_model.from_map(m['body'])
return self
class AppendRowsHeaders(TeaModel):
def __init__(
self,
common_headers: Dict[str, str] = None,
x_acs_dingtalk_access_token: str = None,
):
self.common_headers = common_headers
self.x_acs_dingtalk_access_token = x_acs_dingtalk_access_token
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.common_headers is not None:
result['commonHeaders'] = self.common_headers
if self.x_acs_dingtalk_access_token is not None:
result['x-acs-dingtalk-access-token'] = self.x_acs_dingtalk_access_token
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('commonHeaders') is not None:
self.common_headers = m.get('commonHeaders')
if m.get('x-acs-dingtalk-access-token') is not None:
self.x_acs_dingtalk_access_token = m.get('x-acs-dingtalk-access-token')
return self
class AppendRowsRequest(TeaModel):
def __init__(
self,
operator_id: str = None,
values: List[List[str]] = None,
):
        # unionId of the operator
        self.operator_id = operator_id
        # rows of values to append
self.values = values
def validate(self):
pass
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.operator_id is not None:
result['operatorId'] = self.operator_id
if self.values is not None:
result['values'] = self.values
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('operatorId') is not None:
self.operator_id = m.get('operatorId')
if m.get('values') is not None:
self.values = m.get('values')
return self
class AppendRowsResponse(TeaModel):
def __init__(
self,
headers: Dict[str, str] = None,
):
self.headers = headers
def validate(self):
self.validate_required(self.headers, 'headers')
def to_map(self):
_map = super().to_map()
if _map is not None:
return _map
result = dict()
if self.headers is not None:
result['headers'] = self.headers
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('headers') is not None:
self.headers = m.get('headers')
return self
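# Illustrative sketch (hypothetical values): appending two rows to a sheet.
# Each inner list is one row, in column order.
def _example_append_rows_request():
    return AppendRowsRequest(
        operator_id='hypothetical-operator-unionId',
        values=[['2024-01-01', 'alice', 'done'], ['2024-01-02', 'bob', 'todo']],
    ).to_map()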
| 29.286346 | 94 | 0.571594 | 13,095 | 105,958 | 4.435281 | 0.021993 | 0.04709 | 0.084762 | 0.055269 | 0.859246 | 0.839291 | 0.831353 | 0.826808 | 0.816271 | 0.812001 | 0 | 0.000014 | 0.330367 | 105,958 | 3,617 | 95 | 29.294443 | 0.818556 | 0.010344 | 0 | 0.905898 | 1 | 0 | 0.068363 | 0.015468 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126833 | false | 0.018752 | 0.000682 | 0 | 0.254347 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
48e530e5fcac44078cb597baf9990b7317539811 | 96 | py | Python | my_lambdata/Hello.py | charlie-may86/lambdata-charlie-may-86 | 5cd8966361764230b5d22f492947ca9e6d91246e | [
"MIT"
] | null | null | null | my_lambdata/Hello.py | charlie-may86/lambdata-charlie-may-86 | 5cd8966361764230b5d22f492947ca9e6d91246e | [
"MIT"
] | null | null | null | my_lambdata/Hello.py | charlie-may86/lambdata-charlie-may-86 | 5cd8966361764230b5d22f492947ca9e6d91246e | [
"MIT"
] | null | null | null | # TODO import enlarge
from my_lambdata.my_mod import enlarge
print('Hello')
print(enlarge(8)) | 13.714286 | 38 | 0.770833 | 15 | 96 | 4.8 | 0.666667 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0.125 | 96 | 7 | 39 | 13.714286 | 0.845238 | 0.197917 | 0 | 0 | 0 | 0 | 0.065789 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
5b1985fa4e88b239191f2bbc83f7451bcb5e9d0f | 191 | py | Python | tests/functional/preview_and_dev/conftest.py | alphagov/notify-functional-tests | 5d15be45500f381629c32dba7650dd77c9f58a2e | [
"MIT"
] | 3 | 2017-03-01T18:17:36.000Z | 2019-05-15T12:32:05.000Z | tests/functional/preview_and_dev/conftest.py | alphagov/notify-functional-tests | 5d15be45500f381629c32dba7650dd77c9f58a2e | [
"MIT"
] | 110 | 2016-03-09T16:42:24.000Z | 2021-11-22T16:51:21.000Z | tests/functional/preview_and_dev/conftest.py | alphagov/notify-functional-tests | 5d15be45500f381629c32dba7650dd77c9f58a2e | [
"MIT"
] | 4 | 2017-11-21T17:14:56.000Z | 2021-04-10T19:11:26.000Z | import pytest
from config import setup_preview_dev_config
@pytest.fixture(scope="session", autouse=True)
def preview_dev_config():
"""
Setup
"""
setup_preview_dev_config()
| 15.916667 | 46 | 0.722513 | 24 | 191 | 5.416667 | 0.541667 | 0.230769 | 0.369231 | 0.323077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172775 | 191 | 11 | 47 | 17.363636 | 0.822785 | 0.026178 | 0 | 0 | 0 | 0 | 0.041176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d2dda8a6f47e94d77868276cad3f53e0b19fd126 | 109 | py | Python | tests/test-operator/testop/delete.py | yubozhao/bentoctl | e2a831508e5625cde1001813a5edf0b3a7d16456 | [
"Apache-2.0"
] | 1 | 2022-02-10T16:41:59.000Z | 2022-02-10T16:41:59.000Z | tests/test-operator/testop/delete.py | liangkai1001/bentoctl | a30f9d61cccec182fe366efd61d847fcfcce3bf4 | [
"Apache-2.0"
] | null | null | null | tests/test-operator/testop/delete.py | liangkai1001/bentoctl | a30f9d61cccec182fe366efd61d847fcfcce3bf4 | [
"Apache-2.0"
] | null | null | null | def delete(deployment_name, deployment_spec):
print("Deleting with: ", deployment_name, deployment_spec)
| 36.333333 | 62 | 0.788991 | 13 | 109 | 6.307692 | 0.615385 | 0.341463 | 0.585366 | 0.682927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110092 | 109 | 2 | 63 | 54.5 | 0.845361 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
d2e0ea1ea312ca21c3b38628288b627557dc53d2 | 32,647 | py | Python | RasPi_Dev/ros_ws/devel/lib/python2.7/dist-packages/world_canvas_msgs/srv/_EditAnnotationsData.py | QianheYu/xtark_driver_dev | 1708888161cf20c0d1f45c99d0da4467d69c26c8 | [
"BSD-3-Clause"
] | 1 | 2022-03-11T03:31:15.000Z | 2022-03-11T03:31:15.000Z | RasPi_Dev/ros_ws/devel/lib/python2.7/dist-packages/world_canvas_msgs/srv/_EditAnnotationsData.py | bravetree/xtark_driver_dev | 1708888161cf20c0d1f45c99d0da4467d69c26c8 | [
"BSD-3-Clause"
] | null | null | null | RasPi_Dev/ros_ws/devel/lib/python2.7/dist-packages/world_canvas_msgs/srv/_EditAnnotationsData.py | bravetree/xtark_driver_dev | 1708888161cf20c0d1f45c99d0da4467d69c26c8 | [
"BSD-3-Clause"
] | null | null | null | # This Python file uses the following encoding: utf-8
"""autogenerated by genpy from world_canvas_msgs/EditAnnotationsDataRequest.msg. Do not edit."""
import sys
python3 = sys.hexversion > 0x03000000
import genpy
import struct
import uuid_msgs.msg
import world_canvas_msgs.msg
import geometry_msgs.msg
import std_msgs.msg
class EditAnnotationsDataRequest(genpy.Message):
_md5sum = "41ee6a631a74d3fee28d7fa0847f92d3"
_type = "world_canvas_msgs/EditAnnotationsDataRequest"
_has_header = False #flag to mark the presence of a Header object
_full_text = """
uint8 NEW=0
uint8 EDIT=1
uint8 action
Annotation annotation
AnnotationData data
================================================================================
MSG: world_canvas_msgs/Annotation
# Annotation: a generic descriptor for an element (object) in a semantic map
# An annotation can be used to introspect, visualize or key into database filters/searches without
# having to touch the described object directly
# - timestamp : Creation/last update time
# - world : World the object belongs to
# - id : Annotation unique id
# - data_id : Referenced object unique id (an object can be reference by 1 or more annotations)
# - name : Referenced object name
# - type : Referenced object type (a message type, as world canvas objects are ROS messages)
# - shape : One of 1 (CUBE), 2 (SPHERE), 3 (CYLINDER), 9 (TEXT)
# - color : R, G, B, A (optional)
# - size : X, Y, Z (optional)
# - keywords : Generic properties of this object: allows basic filtering without introspecting
# the object itself
# - relationships : List of associated annotations, used for retrieving operational sets of objects
# General properties
time timestamp
uuid_msgs/UniqueID id
uuid_msgs/UniqueID data_id
string world
string name
string type
# Physical properties
int32 shape
std_msgs/ColorRGBA color
geometry_msgs/Vector3 size
geometry_msgs/PoseWithCovarianceStamped pose
# Querying properties
string[] keywords
uuid_msgs/UniqueID[] relationships
================================================================================
MSG: uuid_msgs/UniqueID
# A universally unique identifier (UUID).
#
# http://en.wikipedia.org/wiki/Universally_unique_identifier
# http://tools.ietf.org/html/rfc4122.html
uint8[16] uuid
================================================================================
MSG: std_msgs/ColorRGBA
float32 r
float32 g
float32 b
float32 a
================================================================================
MSG: geometry_msgs/Vector3
# This represents a vector in free space.
# It is only meant to represent a direction. Therefore, it does not
# make sense to apply a translation to it (e.g., when applying a
# generic rigid transformation to a Vector3, tf2 will only apply the
# rotation). If you want your data to be translatable too, use the
# geometry_msgs/Point message instead.
float64 x
float64 y
float64 z
================================================================================
MSG: geometry_msgs/PoseWithCovarianceStamped
# This expresses an estimated pose with a reference coordinate frame and timestamp
Header header
PoseWithCovariance pose
================================================================================
MSG: std_msgs/Header
# Standard metadata for higher-level stamped data types.
# This is generally used to communicate timestamped data
# in a particular coordinate frame.
#
# sequence ID: consecutively increasing ID
uint32 seq
#Two-integer timestamp that is expressed as:
# * stamp.sec: seconds (stamp_secs) since epoch (in Python the variable is called 'secs')
# * stamp.nsec: nanoseconds since stamp_secs (in Python the variable is called 'nsecs')
# time-handling sugar is provided by the client library
time stamp
#Frame this data is associated with
# 0: no frame
# 1: global frame
string frame_id
================================================================================
MSG: geometry_msgs/PoseWithCovariance
# This represents a pose in free space with uncertainty.
Pose pose
# Row-major representation of the 6x6 covariance matrix
# The orientation parameters use a fixed-axis representation.
# In order, the parameters are:
# (x, y, z, rotation about X axis, rotation about Y axis, rotation about Z axis)
float64[36] covariance
================================================================================
MSG: geometry_msgs/Pose
# A representation of pose in free space, composed of position and orientation.
Point position
Quaternion orientation
================================================================================
MSG: geometry_msgs/Point
# This contains the position of a point in free space
float64 x
float64 y
float64 z
================================================================================
MSG: geometry_msgs/Quaternion
# This represents an orientation in free space in quaternion form.
float64 x
float64 y
float64 z
float64 w
================================================================================
MSG: world_canvas_msgs/AnnotationData
# Data for an element in a semantic map stored as a byte array generated by ros::serialization
# These objects are unique, although they can be referenced by one or more annotations
# - id : Object unique id; data_id field on Annotation messages point to this uuid
# - type : Object type; duplicated from annotation for convenience on deserialization
# - data : Serialized data
uuid_msgs/UniqueID id
string type
uint8[] data
"""
# Pseudo-constants
NEW = 0
EDIT = 1
__slots__ = ['action','annotation','data']
_slot_types = ['uint8','world_canvas_msgs/Annotation','world_canvas_msgs/AnnotationData']
def __init__(self, *args, **kwds):
"""
Constructor. Any message fields that are implicitly/explicitly
    set to None will be assigned a default value. The recommended
use is keyword arguments as this is more robust to future message
changes. You cannot mix in-order arguments and keyword arguments.
The available fields are:
action,annotation,data
:param args: complete set of field values, in .msg order
:param kwds: use keyword arguments corresponding to message field names
to set specific fields.
"""
if args or kwds:
super(EditAnnotationsDataRequest, self).__init__(*args, **kwds)
#message fields cannot be None, assign default values for those that are
if self.action is None:
self.action = 0
if self.annotation is None:
self.annotation = world_canvas_msgs.msg.Annotation()
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
else:
self.action = 0
self.annotation = world_canvas_msgs.msg.Annotation()
self.data = world_canvas_msgs.msg.AnnotationData()
def _get_types(self):
"""
internal API method
"""
return self._slot_types
def serialize(self, buff):
"""
serialize message into buffer
:param buff: buffer, ``StringIO``
"""
try:
_x = self
buff.write(_get_struct_B2I().pack(_x.action, _x.annotation.timestamp.secs, _x.annotation.timestamp.nsecs))
_x = self.annotation.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.annotation.data_id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.annotation.world
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.annotation.name
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.annotation.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_i4f3d3I().pack(_x.annotation.shape, _x.annotation.color.r, _x.annotation.color.g, _x.annotation.color.b, _x.annotation.color.a, _x.annotation.size.x, _x.annotation.size.y, _x.annotation.size.z, _x.annotation.pose.header.seq, _x.annotation.pose.header.stamp.secs, _x.annotation.pose.header.stamp.nsecs))
_x = self.annotation.pose.header.frame_id
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_7d().pack(_x.annotation.pose.pose.pose.position.x, _x.annotation.pose.pose.pose.position.y, _x.annotation.pose.pose.pose.position.z, _x.annotation.pose.pose.pose.orientation.x, _x.annotation.pose.pose.pose.orientation.y, _x.annotation.pose.pose.pose.orientation.z, _x.annotation.pose.pose.pose.orientation.w))
buff.write(_get_struct_36d().pack(*self.annotation.pose.pose.covariance))
length = len(self.annotation.keywords)
buff.write(_struct_I.pack(length))
for val1 in self.annotation.keywords:
length = len(val1)
if python3 or type(val1) == unicode:
val1 = val1.encode('utf-8')
length = len(val1)
buff.write(struct.pack('<I%ss'%length, length, val1))
length = len(self.annotation.relationships)
buff.write(_struct_I.pack(length))
for val1 in self.annotation.relationships:
_x = val1.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.data.data
length = len(_x)
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(struct.pack('<I%sB'%length, length, *_x))
else:
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize(self, str):
"""
unpack serialized message in str into this message instance
:param str: byte array of serialized message, ``str``
"""
try:
if self.annotation is None:
self.annotation = world_canvas_msgs.msg.Annotation()
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
end = 0
_x = self
start = end
end += 9
(_x.action, _x.annotation.timestamp.secs, _x.annotation.timestamp.nsecs,) = _get_struct_B2I().unpack(str[start:end])
start = end
end += 16
self.annotation.id.uuid = str[start:end]
start = end
end += 16
self.annotation.data_id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.world = str[start:end].decode('utf-8')
else:
self.annotation.world = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.name = str[start:end].decode('utf-8')
else:
self.annotation.name = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.type = str[start:end].decode('utf-8')
else:
self.annotation.type = str[start:end]
_x = self
start = end
end += 56
(_x.annotation.shape, _x.annotation.color.r, _x.annotation.color.g, _x.annotation.color.b, _x.annotation.color.a, _x.annotation.size.x, _x.annotation.size.y, _x.annotation.size.z, _x.annotation.pose.header.seq, _x.annotation.pose.header.stamp.secs, _x.annotation.pose.header.stamp.nsecs,) = _get_struct_i4f3d3I().unpack(str[start:end])
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.pose.header.frame_id = str[start:end].decode('utf-8')
else:
self.annotation.pose.header.frame_id = str[start:end]
_x = self
start = end
end += 56
(_x.annotation.pose.pose.pose.position.x, _x.annotation.pose.pose.pose.position.y, _x.annotation.pose.pose.pose.position.z, _x.annotation.pose.pose.pose.orientation.x, _x.annotation.pose.pose.pose.orientation.y, _x.annotation.pose.pose.pose.orientation.z, _x.annotation.pose.pose.pose.orientation.w,) = _get_struct_7d().unpack(str[start:end])
start = end
end += 288
self.annotation.pose.pose.covariance = _get_struct_36d().unpack(str[start:end])
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
self.annotation.keywords = []
for i in range(0, length):
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
val1 = str[start:end].decode('utf-8')
else:
val1 = str[start:end]
self.annotation.keywords.append(val1)
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
self.annotation.relationships = []
for i in range(0, length):
val1 = uuid_msgs.msg.UniqueID()
start = end
end += 16
val1.uuid = str[start:end]
self.annotation.relationships.append(val1)
start = end
end += 16
self.data.id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.data.type = str[start:end].decode('utf-8')
else:
self.data.type = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
self.data.data = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
def serialize_numpy(self, buff, numpy):
"""
serialize message with numpy array types into buffer
:param buff: buffer, ``StringIO``
:param numpy: numpy python module
"""
try:
_x = self
buff.write(_get_struct_B2I().pack(_x.action, _x.annotation.timestamp.secs, _x.annotation.timestamp.nsecs))
_x = self.annotation.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.annotation.data_id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.annotation.world
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.annotation.name
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.annotation.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_i4f3d3I().pack(_x.annotation.shape, _x.annotation.color.r, _x.annotation.color.g, _x.annotation.color.b, _x.annotation.color.a, _x.annotation.size.x, _x.annotation.size.y, _x.annotation.size.z, _x.annotation.pose.header.seq, _x.annotation.pose.header.stamp.secs, _x.annotation.pose.header.stamp.nsecs))
_x = self.annotation.pose.header.frame_id
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_7d().pack(_x.annotation.pose.pose.pose.position.x, _x.annotation.pose.pose.pose.position.y, _x.annotation.pose.pose.pose.position.z, _x.annotation.pose.pose.pose.orientation.x, _x.annotation.pose.pose.pose.orientation.y, _x.annotation.pose.pose.pose.orientation.z, _x.annotation.pose.pose.pose.orientation.w))
buff.write(self.annotation.pose.pose.covariance.tostring())
length = len(self.annotation.keywords)
buff.write(_struct_I.pack(length))
for val1 in self.annotation.keywords:
length = len(val1)
if python3 or type(val1) == unicode:
val1 = val1.encode('utf-8')
length = len(val1)
buff.write(struct.pack('<I%ss'%length, length, val1))
length = len(self.annotation.relationships)
buff.write(_struct_I.pack(length))
for val1 in self.annotation.relationships:
_x = val1.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.data.data
length = len(_x)
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(struct.pack('<I%sB'%length, length, *_x))
else:
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize_numpy(self, str, numpy):
"""
unpack serialized message in str into this message instance using numpy for array types
:param str: byte array of serialized message, ``str``
:param numpy: numpy python module
"""
try:
if self.annotation is None:
self.annotation = world_canvas_msgs.msg.Annotation()
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
end = 0
_x = self
start = end
end += 9
(_x.action, _x.annotation.timestamp.secs, _x.annotation.timestamp.nsecs,) = _get_struct_B2I().unpack(str[start:end])
start = end
end += 16
self.annotation.id.uuid = str[start:end]
start = end
end += 16
self.annotation.data_id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.world = str[start:end].decode('utf-8')
else:
self.annotation.world = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.name = str[start:end].decode('utf-8')
else:
self.annotation.name = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.type = str[start:end].decode('utf-8')
else:
self.annotation.type = str[start:end]
_x = self
start = end
end += 56
(_x.annotation.shape, _x.annotation.color.r, _x.annotation.color.g, _x.annotation.color.b, _x.annotation.color.a, _x.annotation.size.x, _x.annotation.size.y, _x.annotation.size.z, _x.annotation.pose.header.seq, _x.annotation.pose.header.stamp.secs, _x.annotation.pose.header.stamp.nsecs,) = _get_struct_i4f3d3I().unpack(str[start:end])
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.annotation.pose.header.frame_id = str[start:end].decode('utf-8')
else:
self.annotation.pose.header.frame_id = str[start:end]
_x = self
start = end
end += 56
(_x.annotation.pose.pose.pose.position.x, _x.annotation.pose.pose.pose.position.y, _x.annotation.pose.pose.pose.position.z, _x.annotation.pose.pose.pose.orientation.x, _x.annotation.pose.pose.pose.orientation.y, _x.annotation.pose.pose.pose.orientation.z, _x.annotation.pose.pose.pose.orientation.w,) = _get_struct_7d().unpack(str[start:end])
start = end
end += 288
self.annotation.pose.pose.covariance = numpy.frombuffer(str[start:end], dtype=numpy.float64, count=36)
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
self.annotation.keywords = []
for i in range(0, length):
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
val1 = str[start:end].decode('utf-8')
else:
val1 = str[start:end]
self.annotation.keywords.append(val1)
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
self.annotation.relationships = []
for i in range(0, length):
val1 = uuid_msgs.msg.UniqueID()
start = end
end += 16
val1.uuid = str[start:end]
self.annotation.relationships.append(val1)
start = end
end += 16
self.data.id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.data.type = str[start:end].decode('utf-8')
else:
self.data.type = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
self.data.data = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
_struct_I = genpy.struct_I
def _get_struct_I():
global _struct_I
return _struct_I
_struct_i4f3d3I = None
def _get_struct_i4f3d3I():
global _struct_i4f3d3I
if _struct_i4f3d3I is None:
_struct_i4f3d3I = struct.Struct("<i4f3d3I")
return _struct_i4f3d3I
_struct_7d = None
def _get_struct_7d():
global _struct_7d
if _struct_7d is None:
_struct_7d = struct.Struct("<7d")
return _struct_7d
_struct_36d = None
def _get_struct_36d():
global _struct_36d
if _struct_36d is None:
_struct_36d = struct.Struct("<36d")
return _struct_36d
_struct_16B = None
def _get_struct_16B():
global _struct_16B
if _struct_16B is None:
_struct_16B = struct.Struct("<16B")
return _struct_16B
_struct_B2I = None
def _get_struct_B2I():
global _struct_B2I
if _struct_B2I is None:
_struct_B2I = struct.Struct("<B2I")
return _struct_B2I
_struct_16s = None
def _get_struct_16s():
global _struct_16s
if _struct_16s is None:
_struct_16s = struct.Struct("<16s")
return _struct_16s
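# Illustrative sketch (not part of the genpy output): the serializers above
# length-prefix every string/bytes field as a little-endian uint32 length
# followed by the raw bytes, via struct.pack('<I%ss' % length, ...):
#   import struct
#   payload = 'hello'.encode('utf-8')
#   buf = struct.pack('<I%ss' % len(payload), len(payload), payload)
#   assert buf == b'\x05\x00\x00\x00hello'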
# This Python file uses the following encoding: utf-8
"""autogenerated by genpy from world_canvas_msgs/EditAnnotationsDataResponse.msg. Do not edit."""
import sys
python3 = sys.hexversion > 0x03000000
import genpy
import struct
import uuid_msgs.msg
import world_canvas_msgs.msg
class EditAnnotationsDataResponse(genpy.Message):
_md5sum = "f3d451f2a08e1dc3084d378560b01c8e"
_type = "world_canvas_msgs/EditAnnotationsDataResponse"
_has_header = False #flag to mark the presence of a Header object
_full_text = """uint8 UPDATE=10
uint8 DELETE=12
uint8 CANCEL=13
uint8 action
AnnotationData data
================================================================================
MSG: world_canvas_msgs/AnnotationData
# Data for an element in a semantic map stored as a byte array generated by ros::serialization
# These objects are unique, although they can be referenced by one or more annotations
# - id : Object unique id; data_id field on Annotation messages point to this uuid
# - type : Object type; duplicated from annotation for convenience on deserialization
# - data : Serialized data
uuid_msgs/UniqueID id
string type
uint8[] data
================================================================================
MSG: uuid_msgs/UniqueID
# A universally unique identifier (UUID).
#
# http://en.wikipedia.org/wiki/Universally_unique_identifier
# http://tools.ietf.org/html/rfc4122.html
uint8[16] uuid
"""
# Pseudo-constants
UPDATE = 10
DELETE = 12
CANCEL = 13
__slots__ = ['action','data']
_slot_types = ['uint8','world_canvas_msgs/AnnotationData']
def __init__(self, *args, **kwds):
"""
Constructor. Any message fields that are implicitly/explicitly
    set to None will be assigned a default value. The recommended
use is keyword arguments as this is more robust to future message
changes. You cannot mix in-order arguments and keyword arguments.
The available fields are:
action,data
:param args: complete set of field values, in .msg order
:param kwds: use keyword arguments corresponding to message field names
to set specific fields.
"""
if args or kwds:
super(EditAnnotationsDataResponse, self).__init__(*args, **kwds)
#message fields cannot be None, assign default values for those that are
if self.action is None:
self.action = 0
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
else:
self.action = 0
self.data = world_canvas_msgs.msg.AnnotationData()
def _get_types(self):
"""
internal API method
"""
return self._slot_types
def serialize(self, buff):
"""
serialize message into buffer
:param buff: buffer, ``StringIO``
"""
try:
buff.write(_get_struct_B().pack(self.action))
_x = self.data.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.data.data
length = len(_x)
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(struct.pack('<I%sB'%length, length, *_x))
else:
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize(self, str):
"""
unpack serialized message in str into this message instance
:param str: byte array of serialized message, ``str``
"""
try:
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
end = 0
start = end
end += 1
(self.action,) = _get_struct_B().unpack(str[start:end])
start = end
end += 16
self.data.id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.data.type = str[start:end].decode('utf-8')
else:
self.data.type = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
self.data.data = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
def serialize_numpy(self, buff, numpy):
"""
serialize message with numpy array types into buffer
:param buff: buffer, ``StringIO``
:param numpy: numpy python module
"""
try:
buff.write(_get_struct_B().pack(self.action))
_x = self.data.id.uuid
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(_get_struct_16B().pack(*_x))
else:
buff.write(_get_struct_16s().pack(_x))
_x = self.data.type
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.data.data
length = len(_x)
# - if encoded as a list instead, serialize as bytes instead of string
if type(_x) in [list, tuple]:
buff.write(struct.pack('<I%sB'%length, length, *_x))
else:
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize_numpy(self, str, numpy):
"""
unpack serialized message in str into this message instance using numpy for array types
:param str: byte array of serialized message, ``str``
:param numpy: numpy python module
"""
try:
if self.data is None:
self.data = world_canvas_msgs.msg.AnnotationData()
end = 0
start = end
end += 1
(self.action,) = _get_struct_B().unpack(str[start:end])
start = end
end += 16
self.data.id.uuid = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.data.type = str[start:end].decode('utf-8')
else:
self.data.type = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
self.data.data = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
_struct_I = genpy.struct_I
def _get_struct_I():
global _struct_I
return _struct_I
_struct_B = None
def _get_struct_B():
global _struct_B
if _struct_B is None:
_struct_B = struct.Struct("<B")
return _struct_B
_struct_16B = None
def _get_struct_16B():
global _struct_16B
if _struct_16B is None:
_struct_16B = struct.Struct("<16B")
return _struct_16B
_struct_16s = None
def _get_struct_16s():
global _struct_16s
if _struct_16s is None:
_struct_16s = struct.Struct("<16s")
return _struct_16s
class EditAnnotationsData(object):
_type = 'world_canvas_msgs/EditAnnotationsData'
_md5sum = '457c97e1836c61682d0f4bb2f4defba9'
_request_class = EditAnnotationsDataRequest
_response_class = EditAnnotationsDataResponse
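# Illustrative sketch (not part of the genpy output): typical client-side use
# with rospy, assuming a service advertised under the hypothetical name
# 'edit_annotations_data':
#   import rospy
#   rospy.wait_for_service('edit_annotations_data')
#   edit = rospy.ServiceProxy('edit_annotations_data', EditAnnotationsData)
#   resp = edit(EditAnnotationsDataRequest(action=EditAnnotationsDataRequest.NEW))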
| 36.558791 | 348 | 0.63479 | 4,418 | 32,647 | 4.527614 | 0.08488 | 0.053592 | 0.040694 | 0.034395 | 0.813578 | 0.810378 | 0.803779 | 0.79933 | 0.79933 | 0.795431 | 0 | 0.019824 | 0.219714 | 32,647 | 892 | 349 | 36.599776 | 0.765408 | 0.105431 | 0 | 0.806409 | 1 | 0.004005 | 0.234219 | 0.056349 | 0 | 0 | 0.000695 | 0 | 0 | 1 | 0.030708 | false | 0 | 0.017356 | 0 | 0.102804 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d2f2215f6e1d40b09007b948a2003921b7aaece8 | 2,691 | py | Python | Ushort/tests/CreatorModelTest.py | soheylzahiry/url_shortener | 0f9bd9c7d8e8e3da4c1654d3fe686f9797c3105e | [
"BSD-2-Clause"
] | null | null | null | Ushort/tests/CreatorModelTest.py | soheylzahiry/url_shortener | 0f9bd9c7d8e8e3da4c1654d3fe686f9797c3105e | [
"BSD-2-Clause"
] | null | null | null | Ushort/tests/CreatorModelTest.py | soheylzahiry/url_shortener | 0f9bd9c7d8e8e3da4c1654d3fe686f9797c3105e | [
"BSD-2-Clause"
] | null | null | null | from .Init import *
class CreatorModelTest(Init):
def test__Free_Accounts__different_url_creations(self):
self.creator.set_Free_Account()
for _ in range(Creator.Account.Free.max_url_a_day):
self.make_url(save=True)
self.assertEqual(self.creator.can_generate_url_tody, False)
self.assertEqual(self.creator.can_generate_monitored_url, False)
self.assertEqual(self.creator.can_generate_url, True)
def test__Advanced_Accounts__different_url_creations(self):
self.creator.set_Advanced_Account()
for _ in range(Creator.Account.Advanced.max_url_a_day):
self.make_url(save=True)
self.assertEqual(self.creator.can_generate_url_tody, False)
self.assertEqual(self.creator.can_generate_monitored_url, True)
self.assertEqual(self.creator.can_generate_url, True)
def test__Complete_Accounts__different_url_creations(self):
self.creator.set_Complete_Account()
for _ in range(Creator.Account.Complete.max_url_a_day):
self.make_url(save=True)
self.assertEqual(self.creator.can_generate_url_tody, False)
self.assertEqual(self.creator.can_generate_monitored_url, True)
self.assertEqual(self.creator.can_generate_url, True)
def test__switching_between_account_types(self):
self.creator.set_Free_Account()
self.assertEqual(self.creator.account_type, Creator.Account.Types.FREE)
self.assertEqual(self.creator.max_url, Creator.Account.Free.max_url)
self.assertEqual(self.creator.max_url_a_day, Creator.Account.Free.max_url_a_day)
self.assertEqual(self.creator.max_monitored_url, Creator.Account.Free.max_monitored_url)
self.assertEqual(self.creator.type, "Free")
self.creator.set_Advanced_Account()
self.assertEqual(self.creator.account_type, Creator.Account.Types.ADVANCED)
self.assertEqual(self.creator.max_url, Creator.Account.Advanced.max_url)
self.assertEqual(self.creator.max_url_a_day, Creator.Account.Advanced.max_url_a_day)
self.assertEqual(self.creator.max_monitored_url, Creator.Account.Advanced.max_monitored_url)
self.assertEqual(self.creator.type, "Advanced")
self.creator.set_Complete_Account()
self.assertEqual(self.creator.account_type, Creator.Account.Types.COMPLETE)
self.assertEqual(self.creator.max_url, Creator.Account.Complete.max_url)
self.assertEqual(self.creator.max_url_a_day, Creator.Account.Complete.max_url_a_day)
self.assertEqual(self.creator.max_monitored_url, Creator.Account.Complete.max_monitored_url)
self.assertEqual(self.creator.type, "Complete")
| 48.053571 | 100 | 0.751394 | 355 | 2,691 | 5.369014 | 0.104225 | 0.173137 | 0.239244 | 0.327387 | 0.929696 | 0.87723 | 0.836831 | 0.834208 | 0.573977 | 0.573977 | 0 | 0 | 0.153103 | 2,691 | 55 | 101 | 48.927273 | 0.836332 | 0 | 0 | 0.404762 | 0 | 0 | 0.007432 | 0 | 0 | 0 | 0 | 0 | 0.571429 | 1 | 0.095238 | false | 0 | 0.02381 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
82570a52dc5fea1f917d38d5ef62657013ec7065 | 839,170 | py | Python | jupyter/Python_ROM_GUI/pySOFC.py | dt-schwartz/NGFC | 9ebbfc2288c9a0b55313998a04e42c80b332db49 | [
"MIT"
] | null | null | null | jupyter/Python_ROM_GUI/pySOFC.py | dt-schwartz/NGFC | 9ebbfc2288c9a0b55313998a04e42c80b332db49 | [
"MIT"
] | null | null | null | jupyter/Python_ROM_GUI/pySOFC.py | dt-schwartz/NGFC | 9ebbfc2288c9a0b55313998a04e42c80b332db49 | [
"MIT"
] | null | null | null |
##############################################################################
# The development of this flowsheet/code is funded by the ARPA-E DIFFERENTIATE project:
# “Machine Learning for Natural Gas to Electric Power System Design”
# Project number: DE-FOA-0002107-1625.
# This project is a collaborative effort between the Pacific Northwest National Laboratory,
# National Energy Technology Laboratory, and Washington University.
##############################################################################
import numpy as np
import numpy.linalg as la
import numpy.ma as ma
from numpy import array
from scipy import stats
import pandas as pd
import ipywidgets
import paramiko
import pysftp
import shutil
import getpass
import imp
import math
import sys
import copy
import os
import time
from datetime import timedelta
from smt.sampling_methods import LHS
import matplotlib.pyplot as plt
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
plt.rcParams.update({'font.size': 30})
from matplotlib.colors import ListedColormap
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras.callbacks import TensorBoard
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
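# Note: importing tensorflow.compat.v1 rebinds the "tf" name from the plain
# "import tensorflow as tf" above, and disable_v2_behavior() switches to TF1
# graph mode, which the ROM model code presumably expects.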
def sshCommand(hostname, port, username, password, command):
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(hostname, port, username, password)
stdin, stdout, stderr = sshClient.exec_command(command)
for line in stdout:
        print(line.strip('\n'))
    sshClient.close()  # close the SSH session once all output has been read
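# Usage sketch (hypothetical host and credentials):
#   sshCommand('login.example.org', 22, 'user', getpass.getpass(), 'ls -l')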
def put_r_windows(sftp, localdir, remotedir, preserve_mtime = False):
for entry in os.listdir(localdir):
remotepath = remotedir + "/" + entry
localpath = os.path.join(localdir, entry)
if not os.path.isfile(localpath):
try:
sftp.mkdir(remotepath)
except OSError:
pass
put_r_windows(sftp, localpath, remotepath, preserve_mtime)
else:
sftp.put(localpath, remotepath, preserve_mtime=preserve_mtime)
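# Usage sketch (hypothetical host and paths): recursively mirror a local
# Windows folder to a remote directory over SFTP.
#   with pysftp.Connection('login.example.org', username='user',
#                          password=getpass.getpass()) as sftp:
#       put_r_windows(sftp, r'C:\work\case1', '/home/user/case1')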
def query_yes_no(question, default = None):
"""Ask a yes/no question via input() and return their answer.
"question" is a string that is presented to the user.
"default" is the presumed answer if the user just hits <Enter>.
It must be "yes" (the default), "no" or None (meaning
an answer is required of the user).
The "answer" return value is True for "yes" or False for "no".
"""
valid = {"yes": True, "y": True, "ye": True,
"no": False, "n": False}
if default is None:
prompt = " [y/n] "
elif default == "yes":
prompt = " [Y/n] "
elif default == "no":
prompt = " [y/N] "
else:
raise ValueError("invalid default answer: '%s'" % default)
while True:
sys.stdout.write(question + prompt)
choice = input().lower()
if default is not None and choice == '':
return valid[default]
elif choice in valid:
return valid[choice]
else:
sys.stdout.write("Please respond with 'yes' or 'no' "
"(or 'y' or 'n').\n")
def dos2unix(file_path):
# replacement strings
WINDOWS_LINE_ENDING = b'\r\n'
UNIX_LINE_ENDING = b'\n'
with open(file_path, 'rb') as open_file:
content = open_file.read()
content = content.replace(WINDOWS_LINE_ENDING, UNIX_LINE_ENDING)
with open(file_path, 'wb') as open_file:
open_file.write(content)
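# Usage sketch (hypothetical file): strip Windows CRLF line endings from a
# batch script before uploading it to a Linux cluster.
#   dos2unix('run_case.sh')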
def variable_options(display = False):
names = [
"Average_CellVoltage",
"Average_CurrentDensity",
"BackEnvironmentT",
"BottomEnvironmentT",
"CellFuelFlowRate",
"CellOxidantFlowRate",
"FrontEnvironmentT",
"Fuel_Utilization",
"FuelH2",
"FuelH2O",
"FuelCO",
"FuelCO2",
"FuelCH4",
"FuelN2",
"FuelTemperature",
"FuelTOnTop",
"FuelRecyclePercent",
"FuelHTXEffectiveness",
"FuelNGTemperature",
"FuelNGHTXDeltaT",
"Internal_Reforming",
"nCells",
"Oxidant_Recirculation",
"OxidantRecyclePercent",
"OxygenToCarbon_Ratio",
"OxidantO2",
"OxidantN2",
"OxidantH2O",
"OxidantCO2",
"OxidantAr",
"OxidantTemperature",
"OxidantTOnTop",
"PreReform",
"SideEnvironmentT",
"Simulation_Option",
"Stack_Fuel_Utilization",
"Stack_Oxidant_Utilization",
"StackFuelFlowRate",
"StackFuelFlowRateH2O",
"StackFuelFlowRateCO",
"StackFuelFlowRateCO2",
"StackFuelFlowRateCH4",
"StackFuelFlowRateH2",
"StackFuelFlowRateN2",
"StackOxidantFlowRate",
"StackOxidantFlowRateO2",
"StackOxidantFlowRateN2",
"StackOxidantFlowRateH2O",
"StackOxidantFlowRateCO2",
"StackOxidantFlowRateAr",
"StackVoltage",
"SystemPressure",
"TopEnvironmentT",
"VGRRate",
"VGRTemperature",
"VGRH2OPassRate",
"VGRH2PassRate",
"VGRCO2CaptureRate",
"VGRCOConvertRate"
]
units = [
"V",
"A/m^2",
"C",
"C",
"mol/s",
"mol/s",
"C",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"%",
"-",
"C",
"C",
"-",
"-",
"-",
"%",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"-",
"C",
"-",
"-",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"V",
"atm",
"C",
"-",
"C",
"-",
"-",
"-",
"-"
]
if display == True:
print('Options of input variable:')
for i in range(len(names)):
print(i+1, ':', names[i]+', ['+units[i]+']', end = '\t\n')
return names, units
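# Usage sketch: print the supported input variables with their units and keep
# the two parallel lists for later lookups.
#   names, units = variable_options(display=True)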
class sys_preprocessor():
def NGFC_ccs(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize):
Nspecies = 11
        MW_fuel = np.arange(Nspecies,dtype=np.float64) ##molecular weight
        NG_fin = np.arange(Nspecies,dtype=np.float64) ##hardcoded inlet fuel (NG) species
NG_mfin = np.arange(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ##standard air in
        splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation unit splits: per-species recovery fraction into the O2-rich stream (a recovery, not a composition, so it need not sum to 1)
        ref_ain = np.arange(Nspecies,dtype=np.float64) ##reformer air inlet species (lb-mol/h)
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
        mix_cpox = np.arange(Nspecies,dtype=np.float64) ##intermediate fuel species assuming all completely oxidized
        mix_refout = np.arange(Nspecies,dtype=np.float64) ##fuel output after higher-hydrocarbon reforming; the ExtReform part of NG
        stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2. NO CH4. In iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
        pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: taking care of higher hydrocarbons: all higher hydrocarbons gone
        pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: CH4 partially pre-reformed by the PreReform fraction
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.arange(Nspecies,dtype=np.float64)
stack_arecircOLD = np.arange(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
        zb = -1 #shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
        #-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
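        # Worked example with the commented defaults above (J=400 mA/cm2,
        # cellsize=550 cm2): current = 400*550/1000 = 220 A, so
        # fuelneed = 220/(2*96485) ~ 1.14e-3 mol/s H2-equivalent and
        # airneed = 220/(4*96485) ~ 5.70e-4 mol/s O2.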
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
        #//Const_Convert = 3600 * 2.20462 / 1000 converts mol/s to lb-mol/h:
        #//3600 s per hour, and 2.20462/1000 = 1/453.6 converts g-mol to lb-mol (MW_fuel[] values are in grams).
#//
#// but FU_REF1 and FU_REF2 are both very local, only to calculate FU_REF
        #// FU_ stands for fuel utilization
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
        #//FU_REF is dimensionless: the effective overall fuel utilization after
        #//accounting for the O2 consumed by external reforming, i.e.
        #//FU_REF = FU * NG_flowrate * (fueleqv - 2*0.44*ExtReform*Sum(NG_mfin[i]*MW_fuel[i])/(0.4*MW_fuel[Index_O2])) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
        # //fueloxid: fraction of the (externally reformed) fuel that is oxidized
        # //in the CPOX (catalytic partial oxidation) stage
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.arange(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.arange(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0;
mix_refout[Index_C2H6] = 0;
mix_refout[Index_C3H8] = 0;
mix_refout[Index_C4H10] = 0;
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0;
Frec = 0.05;
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax):
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
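# A quick numeric spot-check of the water-balance identity explained above; a
# minimal sketch, kept commented out so it does not perturb the model:
#   lhs = cell_ref[Index_H2O]
#   rhs = pref_CH4[Index_H2O] - (1 - PreReform) * pref_HH[Index_CH4]
#   assert abs(lhs - rhs) < 1e-9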
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
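# OCRValue is the oxygen-to-carbon atom ratio of the pre-reformed fuel:
# O atoms come from H2O, CO, and CO2 (2 per molecule); C atoms from CO, CO2, and CH4.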
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the current recirculation for the next iteration's convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
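# ERRTOTAL is the 2-norm of the iteration-to-iteration change in both the fuel-
# and air-side recirculation streams.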
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR #; //they do equal
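# Why CalcR equals Frec: a steady-state O/C balance on the pre-reformer feed gives
#   OCR = (ooNG + Frec * ooFromCurrent) / ccNG,
# since recirculation returns Frec of the electrochemically added O atoms to the
# fuel while the carbon flow ccNG is unchanged (the 1/(1-Frec) factors cancel).
# Solving for the recirculation fraction: Frec = (ccNG * OCR - ooNG) / ooFromCurrent.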
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
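# Where the stack-side values above come from: with recirculation fraction Arec,
# a steady-state balance around the stack gives
#   stack_in = fresh + Arec * stack_out  and  stack_out = stack_in - consumed,
# hence stack_in = (fresh - Arec * consumed) / (1 - Arec). Inert species have
# consumed = 0, which reduces to x_stack = x_fresh / (1 - Arec).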
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (lb-mol/hr)",pref_CH4)
# print("Air cell outlet (U) (lb-mol/hr)",cell_aexit)
# print("Fuel cell outlet (Q) (lb-mol/hr)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
#return(SOFC_Ain,stack_ain,stack_fin*Const_Convert,stack_recirc,stack_mix,pref_CH4,cell_exit,Frec,succs)
#return(stack_fin,stack_ain/Const_Convert,Frec,succs)
return(stack_fin,SOFC_Ain,Fresh_Ain,Frec,succs)
def NGFC_nocc(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize):
Nspecies = 11
MW_fuel = np.arange(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.arange(Nspecies,dtype=np.float64) ##hardcoded inlet fuel composition
NG_mfin = np.arange(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.arange(Nspecies,dtype=np.float64) ##reformer air inlet, fed from the air separation split
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.arange(Nspecies,dtype=np.float64) ##intermediate composition after CPOX, assuming the oxidized fraction is fully combusted
mix_refout=np.arange(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2. NO CH4. In iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: taking care of higher hydrocarbons: all higher hydrocarbons gone
pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.arange(Nspecies,dtype=np.float64)
stack_arecircOLD = np.arange(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 1
splt_ain[Index_Ar] = 1
splt_ain[Index_CO2] = 1
splt_ain[Index_O2] = 1
splt_ain[Index_N2] = 1
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # offset to convert Brian's 1-based indexing to Python's 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
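# fueleqv counts H2-equivalents per mole of fuel under full reforming plus
# water-gas shift: H2 = 1, CO = 1, CH4 = 4, C2H6 = 7, C3H8 = 10, C4H10 = 13
# (e.g. C2H6 + 4H2O -> 2CO2 + 7H2).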
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// Const_Convert = 3600 * 2.20462 / 1000 converts mol/s to lb-mol/hr:
#// *3600 turns per-second into per-hour; *2.20462/1000 converts mol to lb-mol (1 kg = 2.20462 lb).
#//
#// FU_REF1 and FU_REF2 are both very local, used only to calculate FU_REF
#// FU_ stands for fuel utilization
Const_Convert = 3600 * 2.20462 / 1000
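# e.g. 0.1 mol/s * Const_Convert (~7.9366) ~= 0.794 lb-mol/hr.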
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF is dimensionless: the effective fuel utilization. Combining the three terms,
#//  FU_REF = FU * NG_flowrate * (fueleqv - 2 * 0.44 * ExtReform * Sum(NG_mfin[]*MW_fuel[]) / (0.4 * MW_fuel[Index_O2])) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid: the fraction of the inlet fuel oxidized in the CPOX stage
# //CPOX: catalytic partial oxidation
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.arange(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
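# The O2 entry is the stoichiometric O2 demand for fully oxidizing the consumed
# fuel fraction (2 per CH4, 3.5 per C2H6, 5 per C3H8, 6.5 per C4H10), net of any
# O2 already present, clamped at zero.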
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.arange(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0;
mix_refout[Index_C2H6] = 0;
mix_refout[Index_C3H8] = 0;
mix_refout[Index_C4H10] = 0;
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
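# Equivalently: stack_ain[i] = (airneed / AU) * (std_ain[i] / std_ain[Index_O2]) * Const_Convert,
# i.e. the excess-air feed expressed in lb-mol/hr.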
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0;
Frec = 0.05;
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax+1): # iterate 1..itermax inclusive, matching the VB 'For iter = 1 To itermax'
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the current recirculation for the next iteration's convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR #; //they do equal
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (lb-mol/hr)",pref_CH4)
# print("Air cell outlet (U) (lb-mol/hr)",cell_aexit)
# print("Fuel cell outlet (Q) (lb-mol/hr)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
# return(stack_ain/Const_Convert,stack_fin,Frec,succs)
return(stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs)
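# A minimal usage sketch for NGFC_nocc (the enclosing class and instance name are
# assumptions, and the argument values are the commented defaults above, not a
# documented operating point):
#   rom = ROMModel()   # hypothetical container class
#   stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs = rom.NGFC_nocc(
#       J=400, FU=0.9, AU=0.378, OCR=2.6, IR=0.6, Arec=0.5,
#       PreReform=0.2, cellsize=550)
#   if succs:
#       print("Frec =", Frec, "; fuel inlet (mol/s) =", stack_fin)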
def IGFC_ccs(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc):
Nspecies = 11
MW_fuel = np.arange(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.arange(Nspecies,dtype=np.float64) ##hardcoded inlet fuel composition
NG_mfin = np.arange(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.arange(Nspecies,dtype=np.float64) ##reformer air inlet, fed from the air separation split
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.arange(Nspecies,dtype=np.float64) ##intermediate composition after CPOX, assuming the oxidized fraction is fully combusted
mix_refout=np.arange(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2. NO CH4. In iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: taking care of higher hydrocarbons: all higher hydrocarbons gone
pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.arange(Nspecies,dtype=np.float64)
stack_arecircOLD = np.arange(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (igfc) default conventional
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
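# The defaults above duplicate the 'conventional' case, so an unrecognized igfc
# string falls back to conventional syngas; the branches below override the
# composition for the other gasifier types.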
if igfc=='conventional':
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='enhanced':
NG_fin[Index_H2O] = 0.0006
NG_fin[Index_Ar] = 0.0009
NG_fin[Index_CO2] = 0.2423
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0064
NG_fin[Index_CH4] = 0.1022
NG_fin[Index_CO] = 0.3415
NG_fin[Index_H2] = 0.3062
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='catalytic':
NG_fin[Index_H2O] = 0.0004
NG_fin[Index_Ar] = 0.0003
NG_fin[Index_CO2] = 0.3465
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0069
NG_fin[Index_CH4] = 0.3159
NG_fin[Index_CO] = 0.0914
NG_fin[Index_H2] = 0.2386
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # offset to convert Brian's 1-based indexing to Python's 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
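# Note: IR is forced to 1.0 below, overriding the IR argument, so ExtReform = 0
# and the external-reformer branch is inactive for the IGFC (CCS) case.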
IR = 1.0
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// Const_Convert = 3600 * 2.20462 / 1000 converts mol/s to lb-mol/hr:
#// *3600 turns per-second into per-hour; *2.20462/1000 converts mol to lb-mol (1 kg = 2.20462 lb).
#//
#// FU_REF1 and FU_REF2 are both very local, used only to calculate FU_REF
#// FU_ stands for fuel utilization
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF is dimensionless: the effective fuel utilization. Combining the three terms,
#//  FU_REF = FU * NG_flowrate * (fueleqv - 2 * 0.44 * ExtReform * Sum(NG_mfin[]*MW_fuel[]) / (0.4 * MW_fuel[Index_O2])) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid: the fraction of the inlet fuel oxidized in the CPOX stage
# //CPOX: catalytic partial oxidation
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.arange(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.arange(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0;
mix_refout[Index_C2H6] = 0;
mix_refout[Index_C3H8] = 0;
mix_refout[Index_C4H10] = 0;
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
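# stack_ain[] is the fresh cathode air in lb-mol/hr: air_flowrate [mol/s]
# scaled by Stoichs (= 1/AU) and by 3600 * 2.20462 / 1000 (the same
# mol/s -> lb-mol/hr factor as Const_Convert), split over species by the
# std_ain[] mole fractions.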
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0;
Frec = 0.05;
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax+1): # allow the full itermax iterations (1-based counter kept from the VB original)
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
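# Recap of step [2]: Frec is set so that the water carried by the
# recirculated anode gas closes the oxygen-to-carbon balance. With
# O-demand = OCR * (C atoms in CO2, CH4, CO, C2H6, C3H8, C4H10) and
# O-supply = 2*CO2 + CO + the H2O already in the mix, Steam2 is the
# recirculation fraction whose water makes up the deficit,
# Steam2 = (O-demand - O-supply) / cell_exit[Index_H2O]; it is capped at
# max_steam, beyond which Steam1 (lb-mol/hr) of fresh steam is added.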
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
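# Example of the PreReform split: with the commented default PreReform = 0.2,
# 20% of the CH4 in pref_HH is converted ahead of the stack. Per mol of CH4
# converted the mixture loses 1 H2O and gains 1 CO and 3 H2 (the CH4 + H2O ->
# CO + 3H2 stoichiometry above); the remaining 80% of the CH4 passes to the
# stack and is reformed in step (5).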
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
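# Sign convention in cell_use[]: positive = consumed, negative = produced,
# so cell_exit = cell_ref - cell_use in step (7). FU is applied per pass to
# the fresh feed stack_fin[] in H2-equivalents (3 per CH4, 5 per C2H6,
# 7 per C3H8, 9 per C4H10) and CO-equivalents (1, 2, 3, 4), producing the
# matching amounts of H2O and CO2.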
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
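# Air-side balance: the cell consumes AU of the fresh O2 (stack_ain), the
# fraction Arec of the cathode exit is recycled into stack_amix on the next
# pass, and the remainder leaves as cell_aexhaust.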
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
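# OCRValue is the oxygen-to-carbon ratio of the stack feed pref_CH4[]:
# O atoms from H2O + CO + 2*CO2 over C atoms from CO + CO2 + CH4 (the
# higher hydrocarbons are already zero here). The loop drives this toward
# the OCR target via Frec.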
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the first-pass values for the convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
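# ERRTOTAL is the L2 norm of the change in the fuel- and air-recirculation
# vectors between successive iterations; the loop is declared converged
# once it falls below ERRTOLER = 1e-8.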
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR #; //they do equal
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
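# Derivation of o2_stack: at steady state the stack-inlet O2 satisfies
# o2_stack = o2_fresh + Arec * (o2_stack - o2Consumed4Current), i.e. fresh
# feed plus the recycled share of the depleted cathode exit; solving gives
# o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1 - Arec). Species
# with no consumption term reduce to x_stack = x_fresh / (1 - Arec), as
# used for Ar, H2O, CO2 and N2 below.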
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (lb-mol/hr)",pref_CH4)
# print("Air cell outlet (U) (lb-mol/hr)",cell_aexit)
# print("Fuel cell outlet (Q) (lb-mol/hr)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
# return(stack_fin,stack_ain/Const_Convert,Frec,succs)
return(stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs)
def NGFC_ccs_vgr(self, J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize):
Nspecies = 11
MW_fuel = np.arange(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.arange(Nspecies,dtype=np.float64) ##hardcoded inlet fuel composition
NG_mfin = np.arange(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.arange(Nspecies,dtype=np.float64) ##recirculation fuel species? what unit?
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.arange(Nspecies,dtype=np.float64) ##fuel species after partial oxidation of part of the fuel in the CPOX stage
mix_refout=np.arange(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2 (no CH4); updated in the iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: higher hydrocarbons all reformed away
pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.zeros(Nspecies,dtype=np.float64) # zero-initialized: read in iteration 1 before it is first assigned
stack_arecircOLD = np.zeros(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
recirc_VGR0 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR1 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR2 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR3 = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A): J [mA/cm2] * area [cm2] / 1000
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// Const_Convert = 3600 * 2.20462 / 1000 converts mol/s to lb-mol/hr:
#// 3600 s/hr, 2.20462 lb/kg (= 1/0.454), and /1000 for g -> kg because
#// MW_fuel[] is in g/mol.
#// FU_REF1 and FU_REF2 are local intermediates used only to compute FU_REF
#// (FU = fuel utilization).
Const_Convert = 3600 * 2.20462 / 1000
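# Worked value: Const_Convert = 3600 * 2.20462 / 1000 = 7.9366..., so a
# 1 mol/s stream corresponds to about 7.937 lb-mol/hr (3600 mol/hr divided
# by 453.59 mol per lb-mol).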
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF = no unit
#// the effective FU?
#// 0.44 * ExtReform * Sum(NG_mfin[]*NW_fuel[])
#// fueleqv - -------------------------------------------
#// 0.4 NW_fuel[O2]
#// = FU * NG*Flowrate * (--------------------------------------------------------)
#// fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid = fraction of the hydrocarbon fuel oxidized in the CPOX
# //(catalytic partial oxidation) stage ahead of the reformer
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.arange(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.arange(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0
mix_refout[Index_C2H6] = 0
mix_refout[Index_C3H8] = 0
mix_refout[Index_C4H10] = 0
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0;
Frec = 0.05;
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax+1): # allow the full itermax iterations (1-based counter kept from the VB original)
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
# stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
recirc_VGR3[i]=stack_fin[i]*0.05
for i in range(Nspecies):
stack_mix[i]=stack_fin[i]+stack_recirc[i]+recirc_VGR3[i]
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]+recirc_VGR3[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
#cell_ref[Index_H2O] = pref_CH4[Index_H2O]-pref_CH4[Index_CH4]-2*pref_CH4[Index_C2H6]-3*pref_CH4[Index_C3H8]-4*pref_CH4[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (7a) Calculate the new VGR recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
recirc_VGR0[i]=cell_exit[i]-stack_recirc[i]
recirc_VGR1[i]=recirc_VGR0[i]
WGSmol=WGS*recirc_VGR1[Index_CO]
recirc_VGR1[Index_H2O] = recirc_VGR1[Index_H2O] - WGSmol
recirc_VGR1[Index_CO2] = recirc_VGR1[Index_CO2] + WGSmol
recirc_VGR1[Index_CO] = recirc_VGR1[Index_CO] - WGSmol
recirc_VGR1[Index_H2] = recirc_VGR1[Index_H2] + WGSmol
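# Water-gas shift on the vented stream: CO + H2O -> CO2 + H2. WGS is the
# fraction of the CO in recirc_VGR1 that is shifted, so WGSmol of CO and
# H2O are consumed and the same amount of CO2 and H2 is produced.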
for i in range(Nspecies):
recirc_VGR2[i]=recirc_VGR1[i]
VGRH2O=recirc_VGR1[Index_H2O]*H2OCap
VGRCO2=recirc_VGR1[Index_CO2]*CO2Cap
VGRH2=recirc_VGR1[Index_H2]*H2Cap
recirc_VGR2[Index_H2O]=recirc_VGR2[Index_H2O]-VGRH2O
recirc_VGR2[Index_CO2]=recirc_VGR2[Index_CO2]-VGRCO2
recirc_VGR2[Index_H2]=recirc_VGR2[Index_H2]-VGRH2
for i in range(Nspecies):
recirc_VGR3[i]=recirc_VGR2[i]*VGR
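# VGR capture and recycle: H2OCap, CO2Cap and H2Cap are the fractions of
# each species removed after the shift; of what remains, the fraction VGR
# is recycled into the stack feed (recirc_VGR3 is added to stack_mix in
# step [2]).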
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the first-pass values for the convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
#Frec = CalcR #//exact in the non-VGR case, but it no longer holds with vent-gas recycle
CalcR=Frec #//so keep the iterated Frec
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (lb-mol/hr)",pref_CH4)
# print("Air cell outlet (U) (lb-mol/hr)",cell_aexit)
# print("Fuel cell outlet (Q) (lb-mol/hr)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
# return(stack_fin,stack_ain/Const_Convert,Frec,succs)
return(stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs)
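# Minimal usage sketch (illustrative only: `rom` stands for whatever object
# owns these methods, and the VGR, capture and WGS values are assumptions,
# not values from the source; J through cellsize mirror the commented
# defaults above):
#   Fin, Ain, FreshAin, Frec, ok = rom.NGFC_ccs_vgr(
#       J=400, FU=0.9, AU=0.378, OCR=2.6, IR=0.6, Arec=0.5, PreReform=0.2,
#       VGR=0.9, H2OCap=0.9, CO2Cap=0.9, H2Cap=0.0, WGS=0.5, cellsize=550)
#   # Fin, Ain and FreshAin are mol/s per cell; ok == 1 only when the
#   # converged recirculation satisfies 0 < Frec <= 0.9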
def IGFC_ccs_vgr(self, J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc):
Nspecies = 11
MW_fuel = np.arange(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.arange(Nspecies,dtype=np.float64) ##hardcoded inlet fuel composition
NG_mfin = np.arange(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.arange(Nspecies,dtype=np.float64) ##recirculation fuel species? what unit?
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.arange(Nspecies,dtype=np.float64) ##fuel species after partial oxidation of part of the fuel in the CPOX stage
mix_refout=np.arange(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2 (no CH4); updated in the iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: higher hydrocarbons all reformed away
pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.zeros(Nspecies,dtype=np.float64) # zero-initialized: read in iteration 1 before it is first assigned
stack_arecircOLD = np.zeros(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
recirc_VGR0 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR1 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR2 = np.arange(Nspecies,dtype=np.float64)
recirc_VGR3 = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition; the igfc argument selects the syngas case (defaults below = conventional)
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='conventional':
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='enhanced':
NG_fin[Index_H2O] = 0.0006
NG_fin[Index_Ar] = 0.0009
NG_fin[Index_CO2] = 0.2423
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0064
NG_fin[Index_CH4] = 0.1022
NG_fin[Index_CO] = 0.3415
NG_fin[Index_H2] = 0.3062
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='catalytic':
NG_fin[Index_H2O] = 0.0004
NG_fin[Index_Ar] = 0.0003
NG_fin[Index_CO2] = 0.3465
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0069
NG_fin[Index_CH4] = 0.3159
NG_fin[Index_CO] = 0.0914
NG_fin[Index_H2] = 0.2386
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
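# Note: in this IGFC routine NG_fin[] holds syngas mole fractions (each
# case sums to ~1 and is renormalized by NG_fin_sum below), unlike the
# NGFC routines where NG_fin[] carries raw flow numbers. The three igfc
# branches select conventional, enhanced or catalytic gasifier syngas.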
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
IR = 1.0 # IGFC case: force 100% internal reforming (overrides the IR argument, so ExtReform = 0)
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A): J [mA/cm2] * area [cm2] / 1000
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// Const_Convert = 3600 * 2.20462 / 1000 converts mol/s to lb-mol/hr:
#// 3600 s/hr, 2.20462 lb/kg (= 1/0.454), and /1000 for g -> kg because
#// MW_fuel[] is in g/mol.
#// FU_REF1 and FU_REF2 are local intermediates used only to compute FU_REF
#// (FU = fuel utilization).
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF = no unit
#// the effective FU?
#// 0.44 * ExtReform * Sum(NG_mfin[]*NW_fuel[])
#// fueleqv - -------------------------------------------
#// 0.4 NW_fuel[O2]
#// = FU * NG*Flowrate * (--------------------------------------------------------)
#// fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid = fraction of the fuel oxidized in the CPOX stage
# //CPOX = catalytic partial oxidation
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.zeros(Nspecies, dtype=np.float64) # np.zeros instead of np.arange; every entry is assigned below either way
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
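# The coefficients 2 / 3.5 / 5 / 6.5 are the moles of O2 for complete oxidation
# of each hydrocarbon (stoichiometry check):
#   CH4   + 2 O2   -> CO2   + 2 H2O
#   C2H6  + 3.5 O2 -> 2 CO2 + 3 H2O
#   C3H8  + 5 O2   -> 3 CO2 + 4 H2O
#   C4H10 + 6.5 O2 -> 4 CO2 + 5 H2O
# which also explains the H2O (2,3,4,5) and CO2 (1,2,3,4) terms added above.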
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin: CH4 = 0 here because the reformer exit assumes all remaining hydrocarbons are fully steam-reformed to CO and H2 (reactions below)
mix_refout = np.zeros(Nspecies, dtype=np.float64) # np.zeros instead of np.arange; every entry is assigned below either way
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0;
mix_refout[Index_C2H6] = 0;
mix_refout[Index_C3H8] = 0;
mix_refout[Index_C4H10] = 0;
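# Optional sanity check (a sketch, not part of the original code): carbon is
# conserved across the reformer step, e.g.
#   C_in  = (mix_cpox[Index_CO] + mix_cpox[Index_CO2] + mix_cpox[Index_CH4]
#            + 2*mix_cpox[Index_C2H6] + 3*mix_cpox[Index_C3H8] + 4*mix_cpox[Index_C4H10])
#   C_out = mix_refout[Index_CO] + mix_refout[Index_CO2]
#   assert abs(C_in - C_out) < 1e-9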
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0;
Frec = 0.05;
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax):
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
# stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
recirc_VGR3[i]=stack_fin[i]*0.05
for i in range(Nspecies):
stack_mix[i]=stack_fin[i]+stack_recirc[i]+recirc_VGR3[i]
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
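# Derivation sketch for Steam1/Steam2: the target O/C ratio counts O atoms as
# H2O + CO + 2*CO2 and C atoms as CO + CO2 + CH4 + 2*C2H6 + 3*C3H8 + 4*C4H10.
# Setting O/C = OCR and solving for the missing water gives Steam1 (absolute
# amount to add); Steam2 re-expresses it as the fraction of cell_exit[Index_H2O]
# that recirculation must supply, which becomes Frec when below max_steam.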
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]+recirc_VGR3[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
# cell_ref[Index_H2O] = pref_CH4[Index_H2O]-pref_CH4[Index_CH4]-2*pref_CH4[Index_C2H6]-3*pref_CH4[Index_C3H8]-4*pref_CH4[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because it is applied to stack_fin[], which is the fresh feed
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (7a) Calculate the new VGR recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
recirc_VGR0[i]=cell_exit[i]-stack_recirc[i]
recirc_VGR1[i]=recirc_VGR0[i]
WGSmol=WGS*recirc_VGR1[Index_CO]
recirc_VGR1[Index_H2O] = recirc_VGR1[Index_H2O] - WGSmol
recirc_VGR1[Index_CO2] = recirc_VGR1[Index_CO2] + WGSmol
recirc_VGR1[Index_CO] = recirc_VGR1[Index_CO] - WGSmol
recirc_VGR1[Index_H2] = recirc_VGR1[Index_H2] + WGSmol
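# Water-gas shift in the VGR loop: CO + H2O -> CO2 + H2, where WGS is the
# fraction of CO converted. Illustrative numbers: with WGS = 0.8 and 1.0 unit
# of CO, 0.8 unit shifts, so H2O and CO drop by 0.8 while CO2 and H2 rise by 0.8.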
for i in range(Nspecies):
recirc_VGR2[i]=recirc_VGR1[i]
VGRH2O=recirc_VGR1[Index_H2O]*H2OCap
VGRCO2=recirc_VGR1[Index_CO2]*CO2Cap
VGRH2=recirc_VGR1[Index_H2]*H2Cap
recirc_VGR2[Index_H2O]=recirc_VGR2[Index_H2O]-VGRH2O
recirc_VGR2[Index_CO2]=recirc_VGR2[Index_CO2]-VGRCO2
recirc_VGR2[Index_H2]=recirc_VGR2[Index_H2]-VGRH2
for i in range(Nspecies):
recirc_VGR3[i]=recirc_VGR2[i]*VGR
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
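# Example (illustrative numbers): if pref_CH4 held 2.0 H2O, 1.0 CO, 0.5 CO2 and
# 0.5 CH4, then oo = 2.0 + 1.0 + 2*0.5 = 4.0 O atoms and cc = 1.0 + 0.5 + 0.5
# = 2.0 C atoms, giving OCRValue = 2.0; the recirculation logic drives OCRValue
# toward the OCR target as the loop converges.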
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # store the current estimate for the next pass's convergence check (assignment direction fixed from the transcription)
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly (as a sanity check against the iterated value)
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
# Frec = CalcR # without VGR the direct formula matches the iterated Frec; with VGR it does not,
CalcR = Frec # so keep the iterated Frec and overwrite CalcR with it
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
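# The (1 - Arec) factors come from a steady-state recycle balance (a sketch):
# for an inert species, stack inflow = fresh + Arec * stack inflow, so
# stack = fresh / (1 - Arec); for O2 the cell consumes o2Consumed4Current, so
# o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1 - Arec).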
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (lb-mol/hr)",pref_CH4)
# print("Air cell outlet (U) (lb-mol/hr)",cell_aexit)
# print("Fuel cell outlet (Q) (lb-mol/hr)",cell_exit)
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
# return(stack_fin,stack_ain/Const_Convert,ref_ain,stack_amix/Const_Convert,Frec,succs)
return(stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs)
def LHSampling(work_path, numvar=None, numsample=None,
listvar=None, listmin=None, listmax=None):
'''
The function conducts Latin Hypercube Sampling
'''
print('############################################################\
\nConducts Latin Hypercube Sampling\
\n############################################################')
# Part 0: Input variable options
nameoptions, unitoptions = variable_options()
# Part 1: create given.dat
filename = work_path+'/given.dat'
Create_Given = True
if os.path.exists(filename):
query = query_yes_no('"given.dat" file already exists on the local machine, do you want to overwrite it?')
Create_Given = query
if Create_Given == True:
if len(listvar) != numvar or len(listmin) != numvar or len(listmax) != numvar:
sys.exit('Code terminated: the lengths of variables/minimums/maximums do not match')
lines=["", "", "", ""]
for i in range(numvar):
lines[0] = lines[0] + nameoptions[listvar[i]-1] + '\t'
lines[1] = lines[1] + str(listmin[i]) + '\t'
lines[2] = lines[2] + str(listmax[i]) + '\t'
lines[3] = lines[3] + str(numsample) + '\t'
lines[0] += '\n'
lines[1] += '\n'
lines[2] += '\n'
lines[3] += '\n'
outputfilename = work_path+'/'+'given.dat'
inp_w=open(outputfilename,"w")
inp_w.writelines(lines)
inp_w.close()
print("Created given.dat")
# Part 2: create LHS.dat from given.dat
inputfilename = work_path+'/'+'given.dat'
outputfilename = work_path+'/LHS.dat'
Create_LHS = True
if os.path.exists(outputfilename):
query = query_yes_no('"LHS.dat" file already exists on the local machine, do you want to overwrite it?')
Create_LHS = query
if Create_LHS == True:
print('Given variables and limits:')
name_tmp = []
value_tmp = []
with open(inputfilename) as f:
i = 0
for line in f.readlines():
if i == 0:
name_tmp = line.strip().split()
elif i > 0:
linestr = line.strip().split()
linenum = [float(lineele) for lineele in linestr]
value_tmp.append(linenum)
i += 1
# display given.dat
givenname = name_tmp
givenvalue = np.array(value_tmp)
numvar = len(givenname)
numsample = int(givenvalue[2, 0])
for i in range(numvar):
print(i+1, ':', givenname[i], '\n\tMin: ', givenvalue[0, i], '\tMax: ', givenvalue[1, i],
'\t', int(givenvalue[2, i]), ' Samples', end = '\t\n')
# perform Latin Hypercube sampling
xlimits = np.transpose(givenvalue[:2, :])
sampling = LHS(xlimits = xlimits)
LHSvalue = sampling(numsample)
# write LHS.dat
lines = ["#######title########\n"]
line = "case No."
for i in range(numvar):
line = line+"\t"+givenname[i]+'\t'
line += '\n'
lines.append(line)
for i in range(numsample):
line = str(i+1)+'\t'
for j in range(numvar):
line = line+'\t'+"{:.6g}".format(LHSvalue[i, j])+'\t'
line += '\n'
lines.append(line)
inp_w=open(outputfilename,"w")
inp_w.writelines(lines)
inp_w.close()
print("Created LHS.dat")
print('End of code\n')
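# Example usage (a sketch; the path and variable indices are hypothetical):
# LHSampling(work_path='./work', numvar=2, numsample=100,
#            listvar=[1, 2], listmin=[0.7, 3000.0], listmax=[0.9, 6000.0])
# writes "given.dat" and a 100-sample "LHS.dat" under ./work.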
def createcases(work_path, source_path, inputbasefilename,
preprocessor_enabled = False, preprocessor_name = None,
igfc = None):
'''
The function creates cases based on LHS.dat
'''
print('############################################################\
\nCreate case folders on the local machine\
\n############################################################')
# preprocessor_name: "NGFC_ccs", "NGFC_nocc", "IGFC_ccs", "NGFC_ccs_vgr", "IGFC_ccs_vgr"
# igfc: "conventional", "enhanced", "catalytic"
## load LHS_file
name_tmp = []
value_tmp = []
filename = work_path+'/LHS.dat'
with open(filename) as f:
i = 0
for line in f.readlines():
if i == 1:
name_tmp = line.strip().split()
elif i > 1:
linestr = line.strip().split()
linenum = [float(lineele) for lineele in linestr]
value_tmp.append(linenum)
i += 1
value_tmp = np.array(value_tmp)
LHSvalue = value_tmp[:,1:]
Ncase, Nvar = LHSvalue.shape
len_tmp = len(name_tmp)
LHSname = np.array(name_tmp[len_tmp-Nvar:len_tmp])
## create folders and copy essential files
path_tmp = work_path+'/Cases'
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
else:
query = query_yes_no('"cases" folder already exists on the local machine, do you want to overwrite it?')
if query == False:
pass # note: declining does not abort; existing case folders are reused and files overwritten below
indpreprocessorfailed = []
for i in range(Ncase):
path_tmp = work_path+'/Cases/Case'+str(i).zfill(5)
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
filename = 'ButlerVolmer.inp'
source = source_path+'/'+filename
target = path_tmp+'/'+filename
shutil.copy2(source, target)
filename = 'thermo.lib'
source = source_path+'/'+filename
target = path_tmp+'/'+filename
shutil.copy2(source, target)
filename = 'trans.lib'
source = source_path+'/'+filename
target = path_tmp+'/'+filename
shutil.copy2(source, target)
filename = 'VoltageOnCurrent.dat'
source = work_path+'/'+filename
target = path_tmp+'/'+filename
shutil.copy2(source, target)
## generate romSOFCMP2D4ROM.inp
outputfilename = path_tmp+'/'+'romSOFCMP2D4ROM.inp'
lines = ["@model="+inputbasefilename+"\n"]
for j in range(Nvar):
line = LHSname[j]+"="+str(LHSvalue[i, j])+"\n"
lines.append(line)
inp_base=open(inputbasefilename,"r")
lines_inp=inp_base.readlines()
# NOTE: the loop below strips tokens but never uses them; it is a leftover no-op
for j in range(len(lines_inp)):
str00=lines_inp[j].split('=')
str00[0]=str00[0].rstrip()
str00[0]=str00[0].lstrip()
inp_w=open(outputfilename,"w")
inp_w.writelines(lines)
inp_w.close()
## generate sofc4rom.dat
if preprocessor_enabled == True:
# load romSOFCMP2D4ROM.inp
inputfilename = path_tmp+'/'+'romSOFCMP2D4ROM.inp'
text_file=open(inputfilename,"r")
lines = text_file.readlines()
df0 = pd.DataFrame(np.array([['1a', '1b', '1c']]),columns=['Name', 'Value', 'Called'])
df1 = pd.DataFrame(columns=['Name', 'Value', 'Called'])
for j in range(len(lines)):
if j>0:
str01 = lines[j].split('=')
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
df0['Name']=str01[0]
df0['Value']=float(str01[1])
df0['Called']=False
df1=pd.concat([df1,df0],sort=False,ignore_index=True)
# load inputbasefilename (base.dat or input000.dat)
text_file=open(inputbasefilename,"r")
lines = text_file.readlines()
df2 = pd.DataFrame(np.array([['1a', '1b', '1c']]),columns=['Name', 'Value', 'Updated'])
df3 = pd.DataFrame(columns=['Name', 'Value', 'Updated']) # currently, "Updated" feature not active
for j in range(len(lines)):
str01 = lines[j].split('=')
if len(str01) == 2:
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
try:
df2['Name']=str01[0]
df2['Value']=float(str01[1])
df2['Updated']=False
df3=pd.concat([df3,df2],sort=False,ignore_index=True)
except:
pass
## Call "preprocessor" function
# "preprocessor" input #1
try:
J=df1.loc[df1["Name"]=="Average_CurrentDensity","Value"].iloc[0]/10.0 # convert from A/m2 to mA/cm2
df1.loc[df1["Name"]=="Average_CurrentDensity","Called"]=True
except:
try:
J=df3.loc[df3["Name"]=="Average_CurrentDensity","Value"].iloc[0]/10.0
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #2
try:
FU=df1.loc[df1["Name"]=="Stack_Fuel_Utilization","Value"].iloc[0]
df1.loc[df1["Name"]=="Stack_Fuel_Utilization","Called"]=True
except:
try:
FU=df3.loc[df3["Name"]=="Stack_Fuel_Utilization","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #3
try:
AU=df1.loc[df1["Name"]=="Stack_Oxidant_Utilization","Value"].iloc[0]
df1.loc[df1["Name"]=="Stack_Oxidant_Utilization","Called"]=True
except:
try:
AU=df3.loc[df3["Name"]=="Stack_Oxidant_Utilization","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #4
try:
OCR=df1.loc[df1["Name"]=="OxygenToCarbon_Ratio","Value"].iloc[0]
df1.loc[df1["Name"]=="OxygenToCarbon_Ratio","Called"]=True
except:
try:
OCR=df3.loc[df3["Name"]=="OxygenToCarbon_Ratio","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #5
try:
IR=df1.loc[df1["Name"]=="Internal_Reforming","Value"].iloc[0]
df1.loc[df1["Name"]=="Internal_Reforming","Called"]=True
except:
try:
IR=df3.loc[df3["Name"]=="Internal_Reforming","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #6
try:
Arec=df1.loc[df1["Name"]=="Oxidant_Recirculation","Value"].iloc[0]
df1.loc[df1["Name"]=="Oxidant_Recirculation","Called"]=True
except:
try:
Arec=df3.loc[df3["Name"]=="Oxidant_Recirculation","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #7
try:
PreReform=df1.loc[df1["Name"]=="PreReform","Value"].iloc[0]
df1.loc[df1["Name"]=="PreReform","Called"]=True
except:
try:
PreReform=df3.loc[df3["Name"]=="PreReform","Value"].iloc[0]
except:
# print('Warning: "PreReform" not defined, PreReform=0.2')
PreReform=0.2
# "preprocessor" input #8
try:
cellsize=df1.loc[df1["Name"]=="cellsize","Value"].iloc[0]
df1.loc[df1["Name"]=="cellsize","Called"]=True
except:
try:
cellsize=df3.loc[df3["Name"]=="cellsize","Value"].iloc[0]
except:
# print('Warning: "cellsize" not defined, cellsize=550.0')
cellsize=550.0 #cm2
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
# "preprocessor" input #9
try:
VGR=df1.loc[df1["Name"]=="VGRRate","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRRate","Called"]=True
except:
try:
VGR=df3.loc[df3["Name"]=="VGRRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #10
try:
VGRTemperature=df1.loc[df1["Name"]=="VGRTemperature","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRTemperature","Called"]=True
except:
try:
VGRTemperature=df3.loc[df3["Name"]=="VGRTemperature","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #11
try:
H2OCap=1-df1.loc[df1["Name"]=="VGRH2OPassRate","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRH2OPassRate","Called"]=True
except:
try:
H2OCap=1-df3.loc[df3["Name"]=="VGRH2OPassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #12
try:
CO2Cap=df1.loc[df1["Name"]=="VGRCO2CaptureRate","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRCO2CaptureRate","Called"]=True
except:
try:
CO2Cap=df3.loc[df3["Name"]=="VGRCO2CaptureRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #13
try:
H2Cap=1-df1.loc[df1["Name"]=="VGRH2PassRate","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRH2PassRate","Called"]=True
except:
try:
H2Cap=1-df3.loc[df3["Name"]=="VGRH2PassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# "preprocessor" input #14
try:
WGS=df1.loc[df1["Name"]=="VGRCOConvertRate","Value"].iloc[0]
df1.loc[df1["Name"]=="VGRCOConvertRate","Called"]=True
except:
try:
WGS=df3.loc[df3["Name"]=="VGRCOConvertRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
W = sys_preprocessor()
if preprocessor_name == 'NGFC_ccs': # NGFC CCS
FuelIn,AirIn,AirFresh,Frec,succ=W.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc': # NGFC NO CCS
FuelIn,AirIn,AirFresh,Frec,succ=W.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelIn,AirIn,AirFresh,Frec,succ=W.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelIn,AirIn,AirFresh,Frec,succ=W.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelIn,AirIn,AirFresh,Frec,succ=W.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
if succ == 1:
## write to sofc4rom.dat
inp_base=open(inputbasefilename,"r")
lines_inp=inp_base.readlines()
for j in range(len(lines_inp)):
str00=lines_inp[j].split('=')
str00[0]=str00[0].rstrip()
str00[0]=str00[0].lstrip()
# update according to "preprocessor" outputs
if str00[0]=="FuelNGH2O": lines_inp[j]="FuelNGH2O = "+str(FuelIn[0])+"\n"
if str00[0]=="FuelNGAr": lines_inp[j]="FuelNGAr = "+str(FuelIn[1])+"\n"
if str00[0]=="FuelNGCO2": lines_inp[j]="FuelNGCO2 = "+str(FuelIn[2])+"\n"
if str00[0]=="FuelNGO2": lines_inp[j]="FuelNGO2 = "+str(FuelIn[3])+"\n"
if str00[0]=="FuelNGN2": lines_inp[j]="FuelNGN2 = "+str(FuelIn[4])+"\n"
if str00[0]=="FuelNGCH4": lines_inp[j]="FuelNGCH4 = "+str(FuelIn[5])+"\n"
if str00[0]=="FuelNGCO": lines_inp[j]="FuelNGCO = "+str(FuelIn[6])+"\n"
if str00[0]=="FuelNGH2": lines_inp[j]="FuelNGH2 = "+str(FuelIn[7])+"\n"
if str00[0]=="FuelNGC2H6": lines_inp[j]="FuelNGC2H6 = "+str(FuelIn[8])+"\n"
if str00[0]=="FuelNGC3H8": lines_inp[j]="FuelNGC3H8 = "+str(FuelIn[9])+"\n"
if str00[0]=="FuelNGC4H10": lines_inp[j]="FuelNGC4H10 = "+str(FuelIn[10])+"\n"
if str00[0]=="StackOxidantFlowRateO2": lines_inp[j]="StackOxidantFlowRateO2 = "+str(AirIn[0])+"\n"
if str00[0]=="StackOxidantFlowRateN2": lines_inp[j]="StackOxidantFlowRateN2 = "+str(AirIn[1])+"\n"
if str00[0]=="StackOxidantFlowRateH2O": lines_inp[j]="StackOxidantFlowRateH2O = "+str(AirIn[2])+"\n"
if str00[0]=="StackOxidantFlowRateCO2": lines_inp[j]="StackOxidantFlowRateCO2 = "+str(AirIn[3])+"\n"
if str00[0]=="StackOxidantFlowRateAr": lines_inp[j]="StackOxidantFlowRateAr = "+str(AirIn[4])+"\n"
if str00[0]=="FuelNGRecirculationRate": lines_inp[j]="FuelNGRecirculationRate = "+str(Frec)+"\n"
if str00[0]=="FuelNGFlowRate": lines_inp[j]="FuelNGFlowRate = "+str(sum(FuelIn))+"\n"
# delete four lines when "preprocessor" enabled
if str00[0]=="FuelRecycle": lines_inp[j]=""
if str00[0]=="FuelRecyclePercent": lines_inp[j]=""
if str00[0]=="OxidantRecycle": lines_inp[j]=""
if str00[0]=="OxidantRecyclePercent": lines_inp[j]=""
# update according to LH sampling
for k in range(len(df1)):
if str00[0]==df1['Name'].iloc[k]:
lines_inp[j]=str00[0]+" = "+str(df1['Value'].iloc[k])+"\n"
df1.loc[df1["Name"]==str00[0],'Called']=True
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
add_inp_lines=["0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0", "0"]
add_inp_lines[0]="Stack_Fuel_Utilization = "+str(FU)+"\n"
add_inp_lines[1]="Stack_Oxidant_Utilization = "+str(AU)+"\n"
add_inp_lines[2]="Oxidant_Recirculation = "+str(Arec)+"\n"
add_inp_lines[3]="Internal_Reforming = "+str(IR)+"\n"
add_inp_lines[4]="OxygenToCarbon_Ratio = "+str(OCR)+"\n"
add_inp_lines[5]="Average_CurrentDensity = "+str(J*10.0)+"\n"
add_inp_lines[6]="PreReform = "+str(PreReform)+"\n"
add_inp_lines[7]="VGRRate = "+str(VGR)+"\n"
add_inp_lines[8]="VGRTemperature = "+str(VGRTemperature )+"\n"
add_inp_lines[9]="VGRH2OPassRate = "+str(1-H2OCap)+"\n"
add_inp_lines[10]="VGRH2PassRate = "+str(1-H2Cap)+"\n"
add_inp_lines[11]="VGRCO2CaptureRate = "+str(CO2Cap)+"\n"
add_inp_lines[12]="VGRCOConvertRate = "+str(WGS)+"\n"
add_inp_lines[13]="FreshOxidantFlowRateO2 = "+str(AirFresh[0])+"\n"
add_inp_lines[14]="FreshOxidantFlowRateN2 = "+str(AirFresh[1])+"\n"
add_inp_lines[15]="FreshOxidantFlowRateH2O = "+str(AirFresh[2])+"\n"
add_inp_lines[16]="FreshOxidantFlowRateCO2 = "+str(AirFresh[3])+"\n"
add_inp_lines[17]="FreshOxidantFlowRateAr = "+str(AirFresh[4])+"\n"
else:
add_inp_lines=["0","0","0","0","0","0","0","0","0","0","0","0"]
add_inp_lines[0]="Stack_Fuel_Utilization = "+str(FU)+"\n"
add_inp_lines[1]="Stack_Oxidant_Utilization = "+str(AU)+"\n"
add_inp_lines[2]="Oxidant_Recirculation = "+str(Arec)+"\n"
add_inp_lines[3]="Internal_Reforming = "+str(IR)+"\n"
add_inp_lines[4]="OxygenToCarbon_Ratio = "+str(OCR)+"\n"
add_inp_lines[5]="Average_CurrentDensity = "+str(J*10.0)+"\n"
add_inp_lines[6]="PreReform = "+str(PreReform)+"\n"
add_inp_lines[7]="FreshOxidantFlowRateO2 = "+str(AirFresh[0])+"\n"
add_inp_lines[8]="FreshOxidantFlowRateN2 = "+str(AirFresh[1])+"\n"
add_inp_lines[9]="FreshOxidantFlowRateH2O = "+str(AirFresh[2])+"\n"
add_inp_lines[10]="FreshOxidantFlowRateCO2 = "+str(AirFresh[3])+"\n"
add_inp_lines[11]="FreshOxidantFlowRateAr = "+str(AirFresh[4])+"\n"
extra_inp_lines = []
for k in range(len(df1)):
if df1['Called'].iloc[k] == False:
line_tmp=str(df1['Name'].iloc[k])+" = "+str(df1['Value'].iloc[k])+"\n"
extra_inp_lines.append(line_tmp)
df1.loc[df1["Name"]==str(df1['Name'].iloc[k]),'Called']=True
outputfilename = path_tmp+'/'+'sofc4rom.dat'
inp_w=open(outputfilename,"w")
inp_w.write("@model="+inputbasefilename+"\n")
inp_w.writelines(lines_inp)
inp_w.writelines(add_inp_lines)
inp_w.writelines(extra_inp_lines)
inp_w.close()
else:
## create failure result SOFC_MP_ROM.dat
indpreprocessorfailed.append(i)
lines=["0", "0", "0"]
lines[0]="#SOFC 2D Simulation Result for Reduced Order Modeling\n"
lines[1]="#FAILED\n"
if Frec<=0:
lines[2]="Calculated fuel recirculation "+str(Frec)+" is not positive\n"
if Frec>0.9:
lines[2]="Calculated fuel recirculation "+str(Frec)+" is larger than 0.9\n"
outputfilename = path_tmp+'/'+'SOFC_MP_ROM.dat'
inp_w=open(outputfilename,"w")
inp_w.writelines(lines)
inp_w.close()
else: # if "preprocessor" not enabled
nCells = 1
StackVoltage = 0.7082
# load 'romSOFCMP2D4ROM.inp'
inputfilename = path_tmp+'/'+'romSOFCMP2D4ROM.inp'
text_file=open(inputfilename,"r")
lines = text_file.readlines()
df0 = pd.DataFrame(np.array([['1a', '1b', '1c']]),columns=['Name', 'Value', 'Called'])
df1 = pd.DataFrame(columns=['Name', 'Value', 'Called'])
for j in range(len(lines)):
if j>0:
str01 = lines[j].split('=')
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
df0['Name']=str01[0]
df0['Value']=float(str01[1])
df0['Called']=False
df1=pd.concat([df1,df0],sort=False,ignore_index=True)
# load inputbasefile
inp_base=open(inputbasefilename,"r")
lines_inp=inp_base.readlines()
for j in range(len(lines_inp)):
str00=lines_inp[j].split('=')
str00[0]=str00[0].rstrip()
str00[0]=str00[0].lstrip()
if str00[0] == 'nCells':
nCells = int(str00[1])
for j in range(len(lines_inp)):
str00=lines_inp[j].split('=')
str00[0]=str00[0].rstrip()
str00[0]=str00[0].lstrip()
for k in range(len(df1)):
if str00[0]==df1['Name'].iloc[k]:
lines_inp[j]=str00[0]+" = "+str(df1['Value'].iloc[k])+"\n"
df1.loc[df1["Name"]==str00[0],'Called']=True
if str00[0]=='StackVoltage':
for k in range(len(df1)):
if df1['Name'].iloc[k]=='Average_CellVoltage':
StackVoltage=nCells*df1['Value'].iloc[k]
lines_inp[j]=str00[0]+" = "+str(StackVoltage)+"\n"
extra_inp_lines = []
for k in range(len(df1)):
if df1['Called'].iloc[k] == False:
line_tmp=str(df1['Name'].iloc[k])+" = "+str(df1['Value'].iloc[k])+"\n"
extra_inp_lines.append(line_tmp)
df1.loc[df1["Name"]==str(df1['Name'].iloc[k]),'Called']=True
outputfilename = path_tmp+'/'+'sofc4rom.dat'
inp_w=open(outputfilename,"w")
inp_w.write("@model="+inputbasefilename+"\n")
inp_w.writelines(lines_inp)
inp_w.writelines(extra_inp_lines)
inp_w.close()
if preprocessor_enabled == True:
print('The following cases failed for preprocessor "'+preprocessor_name+'":')
print(*indpreprocessorfailed)
print('End of code\n')
else:
print('End of code\n')
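# Example usage (a sketch; all paths and file names are hypothetical):
# createcases(work_path='./work', source_path='./src', inputbasefilename='base.dat',
#             preprocessor_enabled=True, preprocessor_name='NGFC_ccs')
# creates ./work/Cases/Case00000, ... each holding romSOFCMP2D4ROM.inp and sofc4rom.dat.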
class runSimu_HPC():
def __init__(self, local_path, HPC_path, numcase, create_HPC_path,
use_scratch, vgr_enabled,
hostname, username, password, port):
self.local_path = local_path # work path on the local machine
self.HPC_path = HPC_path # work path on the HPC
self.create_HPC_path = create_HPC_path # if create HPC_path if not exist
self.use_scratch = use_scratch # if use "scratch" drive
self.vgr_enabled = vgr_enabled # if enable vgr feature
self.numcase = numcase # number of total cases
self.hostname = hostname # address of HPC
self.username = username # account username
self.password = password # account password
self.port = port # default: 22
self.numruncase = None # number of cases sent to HPC
self.indruncase = None # index of cases sent to HPC
def PutCaseonHPC(self):
'''
The function puts all the cases on the HPC
'''
print('############################################################\
\nPut all the cases on the HPC\
\n############################################################')
#cinfo = {'host':'hostname', 'username':'me', 'password':'secret', 'port':2222}
#sftp = pysftp.Connection(**cinfo)
sftp = pysftp.Connection(self.hostname, username=self.username, password=self.password, port=self.port)
#cnopts = pysftp.CnOpts()
#cnopts.hostkeys = None
#sftp = pysftp.Connection(self.hostname, username=self.username, password=self.password, cnopts = cnopts)
localdir = self.local_path + '/Cases'
remotedir = self.HPC_path + '/Cases'
if sftp.exists(self.HPC_path) == True:
if sftp.exists(remotedir) == False: # if destination directories (cases) not exist, copy cases to HPC
sftp.makedirs(remotedir, mode = 777)
if os.name == 'nt':
put_r_windows(sftp, localdir, remotedir, preserve_mtime = True)
else:
sftp.put_r(localdir, remotedir, preserve_mtime = True)
else: # if destination directories (cases) exist, ask before copy
query = query_yes_no('"cases" folder already exists on the HPC, do you want to overwrite it?')
if query == True:
if os.name == 'nt':
put_r_windows(sftp, localdir, remotedir, preserve_mtime = True)
else:
sftp.put_r(localdir, remotedir, preserve_mtime = True)
else:
sftp.close()
pass
elif self.create_HPC_path == True:
print('The remote path does not exist, create directories')
sftp.makedirs(remotedir, mode = 777)
if os.name == 'nt':
put_r_windows(sftp, localdir, remotedir, preserve_mtime = True)
else:
sftp.put_r(localdir, remotedir, preserve_mtime = True)
else:
sys.exit('The remote path does not exist') # sys.exit matches the error handling used elsewhere; error() was undefined
sftp.close()
def SubSimuonHPC(self, NumCores_eachnode = '24', allocation = 'face',
partition = 'short', time_limit = '0:30:00'):
'''
The function submits simulations on the HPC
'''
print('############################################################\
\nSubmit simulations on the HPC\
\n############################################################')
## Step 1: determine which cases are not finished: numruncase and indruncase
# icase_start, icase_end
numcores = NumCores_eachnode
numruncase = self.numcase # numcase = icase_end-icase_start+1
indruncase = []
indfinishedcase = []
for i in range(self.numcase): # may consider icase_start, icase_end
path_tmp = self.local_path+'/Cases/Case'+str(i).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(path_tmp):
#print('Case'+str(i).zfill(5)+' already has the result "SOFC_MP_ROM.dat" on the local machine')
numruncase = numruncase-1
indfinishedcase.append(i)
else:
indruncase.append(i)
print('The following cases already have "SOFC_MP_ROM.dat" on the local machine:')
print(*indfinishedcase)
# update global variables
self.numruncase = numruncase
self.indruncase = indruncase
## Step 2: generate ".batch" files, assign jobs to each node
numnode = int(math.ceil(float(numruncase)/float(numcores)))
numLastnode = numruncase%numcores
if numLastnode == 0: numLastnode = numcores
list_sbatch = []
for i in range(numnode):
if i<numnode-1 or numnode == 1:
ttjobs = numcores
if numcores>numruncase: ttjobs = numLastnode
else:
ttjobs = numLastnode
job_start = i*numcores # may consider icase_start, icase_end
job_end = i*numcores+ttjobs-1 # may consider icase_start, icase_end
# generate individual job (.batch file) for each node
lines=[]
lines.append("#!/bin/csh -f\n")
lines.append("#SBATCH --job-name=" + str(job_start) + "-" + str(job_end) + "\n")
lines.append("#SBATCH --time=" + time_limit + "\n")
lines.append("#SBATCH -N 1\n")
lines.append("#SBATCH -n " + str(ttjobs) + "\n")
lines.append("#SBATCH --output=batchsofc" + str(job_start) + "-" + str(job_end) + ".out\n")
lines.append("#SBATCH -A " + allocation + "\n")
lines.append("#SBATCH -p " + partition + "\n")
lines.append("source /etc/profile.d/modules.csh\n")
lines.append("module purge\n")
lines.append("module load gcc/4.4.7\n")
for j in range(job_start, job_end+1): # only this node's slice of the cases (was range(numruncase), which duplicated every case in every .sbatch file)
icase = indruncase[j]
if self.vgr_enabled == True:
if self.use_scratch == True:
lines.append("(cp -rf " + self.HPC_path +
"/Cases/Case" + str(icase).zfill(5) +
" /scratch/; cd /scratch/Case" +
str(icase).zfill(5) +
"; sofcvgr sofc4rom.dat; cp /scratch/Case" +
str(icase).zfill(5) + "/* " +
self.HPC_path + "/Cases/Case" +
str(icase).zfill(5) + "/ ) &\n")
else:
lines.append("(cd " + self.HPC_path +
"/Cases/Case" + str(icase).zfill(5) +
"; sofcvgr sofc4rom.dat ) &\n")
else:
if self.use_scratch == True:
lines.append("(cp -rf " + self.HPC_path +
"/Cases/Case" + str(icase).zfill(5) +
" /scratch/; cd /scratch/Case" +
str(icase).zfill(5) +
"; sofc sofc4rom.dat; cp /scratch/Case" +
str(icase).zfill(5) + "/* " +
self.HPC_path + "/Cases/Case" +
str(icase).zfill(5) + "/ ) &\n")
else:
lines.append("(cd " + self.HPC_path +
"/Cases/Case" + str(icase).zfill(5) +
"; sofc sofc4rom.dat ) &\n")
lines.append("wait\n")
outputfilename = self.local_path + '/Cases/run' + str(job_start) + "-" + str(job_end) + '.sbatch'
inp_w=open(outputfilename,"w")
inp_w.writelines(lines)
inp_w.close()
# one need to convert \r\n to \n for windows system
if os.name == 'nt':
dos2unix(outputfilename)
# update .sbatch filenames
list_sbatch.append('run' + str(job_start) + "-" + str(job_end) + '.sbatch')
## Step 3: transfer ".batch" files to HPC, submit jobs
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(self.hostname, self.port, self.username, self.password)
sftpClient = sshClient.open_sftp()
for string in list_sbatch:
sourcefile = self.local_path + '/Cases/' + string
destfile = self.HPC_path + '/Cases/' + string
sftpClient.put(sourcefile, destfile)
sftpClient.close() # parentheses added: bare .close was a no-op attribute access
# Step 4: submit simulations
query = query_yes_no('".sbatch" files have been put on the HPC, do you want to submit the simulations?')
if query == True:
command_sbatch = 'cd ' + self.HPC_path + '/Cases'
for string in list_sbatch:
command_sbatch = command_sbatch + '; sbatch ' + string
stdin, stdout, stderr = sshClient.exec_command(command_sbatch)
for line in stdout:
print(line.strip('\n'))
sshClient.close()
else:
sshClient.close()
def CheckSimuStatus(self):
'''
The function checks the simulation status on the HPC
'''
print('############################################################\
\nChecks the simulation status on the HPC\
\n############################################################')
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(self.hostname, self.port, self.username, self.password)
sftpClient = sshClient.open_sftp()
numruncase = self.numruncase
indruncase = self.indruncase
indfinishedcase = []
indfailedcase = []
numfinishedcase = 0
for icase in indruncase:
destfile = self.HPC_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
try:
sftpClient.stat(destfile)
numfinishedcase += 1
indfinishedcase.append(icase)
except IOError:
indfailedcase.append(icase)
print(str(numfinishedcase)+' out of '+str(numruncase)+' cases have been done:')
print(*indfinishedcase)
sftpClient.close()
sshClient.close()
def GetReslfromHPC(self):
'''
The function gets simulation results from the HPC
'''
print('############################################################\
\nGet simulation results from the HPC\
\n############################################################')
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(self.hostname, self.port, self.username, self.password)
sftpClient = sshClient.open_sftp()
numruncase = self.numruncase
indruncase = self.indruncase
query = False
for icase in indruncase:
path_tmp = self.local_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(path_tmp):
query = query_yes_no('certain cases already have "SOFC_MP_ROM.dat" on the local machine, do you want to overwrite it?')
break
indexist = []
indnonexist = []
indexistlocal = [] # results already present locally (used when not overwriting); was referenced without being initialized
if query == True:
for icase in indruncase:
sourcefile = self.HPC_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
destfile = self.local_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
try:
sftpClient.get(sourcefile, destfile)
indexist.append(icase)
except:
indnonexist.append(icase)
else:
for icase in indruncase:
sourcefile = self.HPC_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
destfile = self.local_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(destfile):
indexistlocal.append(icase)
else:
try:
sftpClient.get(sourcefile, destfile)
indexist.append(icase)
except:
indnonexist.append(icase)
print('The following cases do not have "SOFC_MP_ROM.dat" on the HPC (case failed or has not converged yet):')
print(*indnonexist)
print('Get "SOFC_MP_ROM.dat" to the local machine for the following cases:')
print(*indexist)
sftpClient.close()
sshClient.close()
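# Example usage (a sketch; host and credentials are placeholders):
# hpc = runSimu_HPC(local_path='./work', HPC_path='/people/me/work', numcase=100,
#                   create_HPC_path=True, use_scratch=True, vgr_enabled=False,
#                   hostname='hpc.example.org', username='me', password='***', port=22)
# hpc.PutCaseonHPC(); hpc.SubSimuonHPC(); hpc.CheckSimuStatus(); hpc.GetReslfromHPC()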
class runSimu_SubSys():
def __init__(self, work_path, source_path, numcase, vgr_enabled,
hostname, username, password, port):
self.work_path = work_path # work path on the local machine
self.source_path = source_path # source path on the local machine
self.vgr_enabled = vgr_enabled # if enable vgr feature
self.numcase = numcase # number of total cases
self.hostname = hostname # address of sub-system
self.username = username # account username
self.password = password # account password
self.port = port # port of sub-system
self.numruncase = None # number of cases sent to sub-system
self.indruncase = None # index of cases sent to sub-system
def SubSimuonSS(self, MaxSimulIns = 1, time_limit = '1:00:00'):
'''
The function submits simulations on the sub-system
'''
print('############################################################\
\nSubmit simulations on the sub-system\
\n############################################################')
# Start sshClient
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(self.hostname, self.port, self.username, self.password)
sftpClient = sshClient.open_sftp()
RunningCount = 0
RunningInd = []
FinishedCount = 0
FinishedInd = []
FinishedCount_update = 0
time_start = time.time()
while(True):
# Check how many processes in the background
if self.vgr_enabled == False:
command = 'pgrep -c sofc'
else:
command = 'pgrep -c sofcvgr'
stdin, stdout, stderr = sshClient.exec_command(command)
RunningCount = int(stdout.read())
for i in range(self.numcase):
# Check if case i is done or not
if i in FinishedInd:
CaseFinished = True
else:
destfile = self.work_path+'/Cases/Case'+str(i).zfill(5)+'/SOFC_MP_ROM.dat'
try:
sftpClient.stat(destfile)
FinishedCount += 1
FinishedInd.append(i)
if i in RunningInd:
RunningInd.remove(i)
CaseFinished = True
except IOError:
CaseFinished = False
# Run case i if 1: case not done; 2: space in the queue; 3: case not running
if CaseFinished == False and RunningCount < MaxSimulIns and (i not in RunningInd):
if self.vgr_enabled == False:
command = '(cd '+self.work_path+'/Cases/Case'+ str(i).zfill(5) +'; '+self.source_path+'/sofc sofc4rom.dat) &'
sshClient.exec_command(command)
# Add case i to the running case list
RunningInd.append(i)
RunningCount += 1
else:
command = '(cd '+self.work_path+'/Cases/Case'+ str(i).zfill(5) +'; '+self.source_path+'/sofcvgr sofc4rom.dat) &'
sshClient.exec_command(command)
# Add case i to the running case list
RunningInd.append(i)
RunningCount += 1
# Break out for-loop if not space in the queue
if RunningCount >= MaxSimulIns:
break
# Update simulation status
if (FinishedCount-FinishedCount_update) >= 5:
FinishedCount_update = FinishedCount
print("Simulation status:\nRunning: "+str(RunningCount)+"\tFinished: "+str(FinishedCount))
# Break out while-loop if no running case or exceed time
hh, mm, ss = [float(t) for t in time_limit.split(':')] # renamed to avoid shadowing the built-in min
time_limit_sec = hh*3600+mm*60+ss
time_elapsed = time.time()-time_start
if RunningCount == 0:
print("All the simulation Done!")
break
if time_elapsed > time_limit_sec:
print("Exceed time limit, simulation terminated!")
# Kill all the background processes and break while loop
if self.vgr_enabled == False:
command = 'pkill sofc'
else:
command = 'pkill sofcvgr'
stdin, stdout, stderr = sshClient.exec_command(command)
break
# End sshClient
sftpClient.close()
sshClient.close()
def CheckSimuStatus(self):
'''
The function checks the simulation status
'''
print('############################################################\
\nChecks the simulation status\
\n############################################################')
sshClient = paramiko.SSHClient() # create SSHClient instance
sshClient.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # AutoAddPolicy automatically adding the hostname and new host key
sshClient.load_system_host_keys()
sshClient.connect(self.hostname, self.port, self.username, self.password)
sftpClient = sshClient.open_sftp()
indfinishedcase = []
indfailedcase = []
numfinishedcase = 0
for icase in range(self.numcase):
destfile = self.work_path + '/Cases/Case'+str(icase).zfill(5)+'/SOFC_MP_ROM.dat'
try:
sftpClient.stat(destfile)
numfinishedcase += 1
indfinishedcase.append(icase)
except IOError:
indfailedcase.append(icase)
print(str(numfinishedcase)+' out of '+str(self.numcase)+' cases have been done:')
print(*indfinishedcase)
sftpClient.close()
sshClient.close()
class kriging():
def __init__(self, work_path,
allresultsFile = 'allResults.dat',
allresults_infoFile = 'allResults_info.dat',
inkrigingFile = 'inTraining_kriging.dat',
infoFile = 'info_kriging.dat',
outkrigingFile = 'outTraining_kriging.dat',
inpredictionFile = 'inPrediction_kriging.dat',
outpredictionFile = 'outPrediction_kriging.dat',
order = 0):
self.work_path = work_path
self.allresultsFile = work_path + '/' + allresultsFile
self.allresults_infoFile = work_path + '/' + allresults_infoFile
self.inkrigingFile = work_path + '/' + inkrigingFile
self.infoFile = work_path + '/' + infoFile
self.outkrigingFile = work_path + '/' + outkrigingFile
self.inpredictionFile = work_path + '/' + inpredictionFile
self.outpredictionFile = work_path + '/' + outpredictionFile
self.incrossvaliFile = work_path + '/inCrossVali_kriging.dat'
self.outcrossvaliFile = work_path + '/outCrossVali_kriging.dat'
self.order = int(order)
self.Sname = None
self.Yname = None
self.S_norm = None
self.Y_norm = None
self.X_norm = None
self.Xy_norm = None
self.S = None
self.Y = None
self.X = None
self.Xy = None
self.MSE = None
self.S_row = 0
self.Y_row = 0
self.S_col = 0
self.Y_col = 0
self.stdS = None
self.stdY = None
self.meanS = None
self.meanY = None
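# Hypothetical construction sketch: every file-name argument is a default
# relative to work_path, so a model only needs a working directory, e.g.
#
#   rom = kriging('/scratch/rom_demo', order = 1)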
def summarize_SimuResult(self, source_path, indcase, exclude_case = 1, display_detail = False):
'''
The function extracts simulation results
exclude_case = -1: all cases included
exclude_case = 0: exclude failed cases only
exclude_case = 1: exclude both failed and non-converged cases
'''
print('############################################################\
\nSummarize simulation results\
\n############################################################')
## Step 1: load simulation outputs to Y4kriging
numcase4kriging = 0 # number of cases for kriging
indcase4kriging = [] # index of cases for kriging, start from 1
S4kriging = None # simulation inputs for kriging
Y4kriging = None # simulation outputs for kriging
for icase in indcase:
# load SOFC_MP_ROM.dat to df1
strcase = 'Case'+str(icase-1)+'Value'
inputfilename = source_path+'/Cases/Case'+str(icase-1).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
if len(lines) == 0:
continue #print('Empty case')
if lines[1].strip() == '#FAILED':
continue #print('"preprocessor" failed case')
df0 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
df1 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
for j in range(len(lines)):
if j>1: # skip first two lines
str01 = lines[j].split('=')
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
if len(str01) == 1: continue
# convert variables in SOFC_MP_ROM.dat to xxx_xxx format
str_tmp = str01[0].strip().split()
str_tmp = '_'.join(str_tmp)
df0['Name']=str_tmp
df0[strcase]=float(str01[1])
if j==2:
df1["Name"]=df0["Name"]
df1[strcase]=df0[strcase]
else:
df1=pd.concat([df1,df0],sort=False, ignore_index=True)
# exclude failed or non-converged cases
if int(df1.loc[0, strcase]) >= exclude_case: # scalar .loc access; int() on a Series is deprecated
numcase4kriging += 1
indcase4kriging.append(icase)
if numcase4kriging == 1:
Y4kriging = df1
else:
Y4kriging = pd.concat([Y4kriging, df1[strcase]], sort=False, axis=1)
## Step 2: load simulation inputs to S4kriging
inputfilename = source_path+'/LHS.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
for j in range(len(lines)):
if j == 1:
list_tmp = lines[j].strip().split()
list_tmp = list_tmp[2:] # 0: case; 1: No.
df2 = pd.DataFrame(list_tmp,columns=['Name'])
if j > 1:
list_tmp = lines[j].strip().split()
strcase = 'Case'+str(int(list_tmp[0])-1)+'Value'
list_tmp = list_tmp[1:] # 0: case No.
df2[strcase] = list_tmp
S4kriging = df2
## Step 3: display simulation input and output
if exclude_case == 1:
print('Converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
elif exclude_case == 0:
print('Converged and non-converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
else:
print('Simulation results are summarized from '+ str(numcase4kriging)+' cases:')
print(*indcase4kriging)
print('\nSelect from the following input variables for training:')
for i in range(S4kriging.index.size):
print(i+1, ':', S4kriging.loc[i, 'Name'], end = '\t\n')
print('\nSelect from the following output variables for training:')
for i in range(Y4kriging.index.size):
print(i+1, ':', Y4kriging.loc[i, 'Name'], end = '\t\n')
if display_detail == True:
print('\n')
print(S4kriging)
print('\n')
print(Y4kriging)
## Step 4: create allResults.dat
indS = list(S4kriging.index)
indY = list(Y4kriging.index)
indS = [x+1 for x in indS]
indY = [x+1 for x in indY]
if len(indcase4kriging) == 0 or len(indS) == 0 or len(indY) == 0:
print('Error: No data available for training')
with open(self.allresultsFile, 'w') as f:
for i in indS:
f.write(S4kriging.loc[i-1, 'Name'] + '\t')
for i in indY:
f.write(Y4kriging.loc[i-1, 'Name'] + '\t')
f.write('\n')
for i in indcase4kriging:
strcase = 'Case'+str(i-1)+'Value'
for j in indS:
f.write('{:11.4E}\t'.format(float(S4kriging.loc[j-1, strcase])))
for j in indY:
f.write('{:11.4E}\t'.format(float(Y4kriging.loc[j-1, strcase])))
f.write('\n')
with open(self.allresults_infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
def file_read(self, FileName):
'''
This function loads the kriginginputFile,
infoFile and predictioninputFile
'''
namearray = []
valuearray = []
with open(FileName) as f:
i = 0
for line in f.readlines():
if i == 0:
namearray = line.strip().split()
else:
linestr = line.strip().split()
linenum = [float(lineele) for lineele in linestr]
valuearray.append(linenum)
i += 1
return namearray, np.array(valuearray)
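# file_read expects a one-line header of names followed by whitespace-
# separated numeric rows; a hypothetical input illustrating the contract:
#
#   Average_CellVoltage   SystemPressure
#    8.0000E-01            1.0000E+00
#    7.5000E-01            2.0000E+00
#
#   names, values = self.file_read(self.inkrigingFile)
#   # names  -> ['Average_CellVoltage', 'SystemPressure']
#   # values -> numpy array of shape (2, 2)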
def cal_obj(self, theta, finalized = False, order = 0):
# Copy to local
theta = copy.deepcopy(theta)
[S_row, Y_row, S_col, Y_col] = [self.S_row, self.Y_row, self.S_col, self.Y_col]
[S_norm, Y_norm] = [self.S_norm, self.Y_norm]
# calculate F
if order == 0:
F = np.full([S_row, 1], 1.0)
else:
F = np.full([S_row, S_col+1], 1.0)
for i in range(S_col):
for j in range(S_row):
F[j, i+1] = S_norm[j, i]
# Calculate R
R = np.empty([S_row, S_row])
R_tmp = 0.0
multiple_sites = 0.0
for i in range(S_row):
for j in range(S_row):
for k in range(S_col):
R_tmp = R_tmp+theta[k]*(S_norm[i, k]-S_norm[j, k]) *(S_norm[i, k]-S_norm[j, k])
# Check if "multiple sites" exists or not
if S_norm[i, k] == S_norm[j, k] and i != j:
for k_multiple_sites in range(S_col):
multiple_sites = multiple_sites + np.abs(S_norm[i, k_multiple_sites] - S_norm[j, k_multiple_sites])
if multiple_sites == 0:
sys.exit('Code terminated: multiple sites found')
R[i, j] = np.exp(-R_tmp)
R_tmp = 0.0
#print('R: ', R)
# Cholesky decomposition
C = la.cholesky(R)
#print('C: ', C)
# calculate F hat
Ft = la.solve(C, F)
#Ft, resid_tmp, rank_tmp, sigma_tmp = \
#la.lstsq(C, F, rcond = None)
#print('Ft: ', Ft)
# calculate Y hat
Yt = la.solve(C, Y_norm)
#Yt, resid_tmp, rank_tmp, sigma_tmp = \
#la.lstsq(C, Y_norm, rcond = None)
#print('Yt: ', Yt)
#print('Yt size', Yt.shape)
# QR factorization
Q, G = la.qr(Ft, 'reduced')
#Q, G = scipy.linalg.qr(Ft, mode = 'economic')
#print('Q: ', Q)
#print('G: ', G)
# calculate beta
beta = la.solve(G, np.matmul(Q.T, Yt))
#beta, resid_tmp, rank_tmp, sigma_tmp = \
#la.lstsq(G, np.matmul(Q.T, Yt), rcond = None)
#print('beta: ', beta)
# calculate rho, sigma
rho = Yt-np.matmul(Ft, beta)
#print('rho: ', rho)
sigma2_tmp0 = np.full([1, Y_col], Y_row)
sigma2_tmp = np.sum(rho*rho, axis = 0)/sigma2_tmp0
#print('sigma2_tmp: ', sigma2_tmp)
# calculate diag, detR
diag = np.power(np.diag(C), 2./float(S_row))
detR = np.prod(diag)
#print('diag: ', diag)
#print('detR: ', detR)
# calculate obj
obj = np.sum(sigma2_tmp)*detR
if finalized == False:
#print('obj: ', obj)
#print('theta: ', theta)
return obj
else:
gamma = np.matmul(rho.T, la.inv(C))
sigma2 = (self.stdY*self.stdY)*sigma2_tmp
#print('theta: ', theta)
#print('beta: ', beta)
#print('sigma2: ', sigma2_tmp)
#print('G: ', G)
#print('Ft: ', Ft)
#print('gamma: ', gamma)
#print('C: ', C)
return obj, beta, sigma2, G, Ft, gamma, C
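# cal_obj evaluates the standard concentrated likelihood of a kriging model
# with a Gaussian correlation kernel:
#
#   R[i, j] = exp( -sum_k theta[k] * (S_norm[i, k] - S_norm[j, k])**2 )
#   obj     = sum(sigma2_tmp) * det(R)**(1.0/S_row)
#
# where sigma2_tmp comes from the generalized least-squares residuals rho of
# Y_norm ~ F*beta; minimizing obj over theta fits the correlation lengths.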
def variables(self):
print('input variables:')
for i in range(len(self.Sname)):
print(i+1, ':', self.Sname[i], end = '\t\n')
print('\noutput variables:')
for i in range(len(self.Yname)):
print(i+1, ':', self.Yname[i], end = '\t\n')
def variable_options(self, display = False):
names_input = [
"Average_CellVoltage",
"Average_CurrentDensity",
"BackEnvironmentT",
"BottomEnvironmentT",
"CellFuelFlowRate",
"CellOxidantFlowRate",
"FrontEnvironmentT",
"Fuel_Utilization",
"FuelH2",
"FuelH2O",
"FuelCO",
"FuelCO2",
"FuelCH4",
"FuelN2",
"FuelTemperature",
"FuelTOnTop",
"FuelRecyclePercent",
"FuelHTXEffectiveness",
"FuelNGTemperature",
"FuelNGHTXDeltaT",
"Internal_Reforming",
"nCells",
"Oxidant_Recirculation",
"OxidantRecyclePercent",
"OxygenToCarbon_Ratio",
"OxidantO2",
"OxidantN2",
"OxidantH2O",
"OxidantCO2",
"OxidantAr",
"OxidantTemperature",
"OxidantTOnTop",
"PreReform",
"SideEnvironmentT",
"Simulation_Option",
"Stack_Fuel_Utilization",
"Stack_Oxidant_Utilization",
"StackFuelFlowRate",
"StackFuelFlowRateH2O",
"StackFuelFlowRateCO",
"StackFuelFlowRateCO2",
"StackFuelFlowRateCH4",
"StackFuelFlowRateH2",
"StackFuelFlowRateN2",
"StackOxidantFlowRate",
"StackOxidantFlowRateO2",
"StackOxidantFlowRateN2",
"StackOxidantFlowRateH2O",
"StackOxidantFlowRateCO2",
"StackOxidantFlowRateAr",
"StackVoltage",
"SystemPressure",
"TopEnvironmentT",
"VGRRate",
"VGRTemperature",
"VGRH2OPassRate",
"VGRH2PassRate",
"VGRCO2CaptureRate",
"VGRCOConvertRate"
]
units_input = [
"V",
"A/m^2",
"C",
"C",
"mol/s",
"mol/s",
"C",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"%",
"-",
"C",
"C",
"-",
"-",
"-",
"%",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"-",
"C",
"-",
"-",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"V",
"atm",
"C",
"-",
"C",
"-",
"-",
"-",
"-"
]
names_output = [
'SimulationStatus',
'Stack_Voltage',
'Avg_cell_voltage',
'Stack_Current',
'Avg_current_density',
'Max_current_density',
'Min_current_density',
'Avg_Cell_Temperature',
'Max_Cell_Temperature',
'Min_Cell_Temperature',
'Delta_Cell_Temperature',
'Outlet_Fuel_Temperature',
'Delta_Fuel_Temperature',
'Outlet_Air_Temperature',
'Delta_Air_Temperature',
'Air_Heat_Exchanger_Effectiveness',
'Fuel_Utilization',
'Air_Utilization',
'Outlet_Fuel_Flowrate',
'Outlet_Fuel_H2',
'Outlet_Fuel_H2O',
'Outlet_Fuel_CO',
'Outlet_Fuel_CO2',
'Outlet_Fuel_CH4',
'Outlet_Fuel_N2',
'Outlet_Air_Flowrate',
'Outlet_Air_O2',
'Outlet_Air_N2',
'Outlet_Air_H2O',
'Outlet_Air_CO2',
'Outlet_Air_Ar',
'Total_Power',
'Air_Enthalpy_Change',
'Fuel_Enthalpy_Change',
'External_Heat',
'Electrical_Efficiency',
'Stack_Efficiency',
'Air_Inlet_Temperature',
'FSI_Temperature',
'FSI_Flowrate',
'FSI_H2_MF',
'FSI_H2O_MF',
'FSI_CO_MF',
'FSI_CO2_MF',
'FSI_CH4_MF',
'FSI_N2_MF',
'Fuel_Temperature_after_Mix',
'Fuel_Temperature_before_Gibbs_Reactor',
'Fuel_Heat_Exchanger_Effectiveness'
]
units_output = [
'-',
'V',
'V',
'A',
'A/m2',
'A/m2',
'A/m2',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'W',
'W',
'W',
'W',
'-',
'-',
'K',
'K',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'K',
'K',
'-'
]
if display == True:
print('Options of input variable:')
for i in range(len(names_input)):
print(i+1, ':', names_input[i]+', ['+units_input[i]+']', end = '\t\n')
print('Options of output variable:')
for i in range(len(names_output)):
print(i+1, ':', names_output[i]+', ['+units_output[i]+']', end = '\t\n')
return names_input, units_input, names_output, units_output
def grid(self, x, y, z, resX = 100, resY = 100):
'''
The function converts 3-column data to a matplotlib grid
'''
xi = np.linspace(min(x), max(x), resX)
yi = np.linspace(min(y), max(y), resY)
X, Y = np.meshgrid(xi, yi)
# matplotlib.mlab.griddata was removed in matplotlib 3.x; the equivalent
# interpolation comes from scipy (imported locally here as an assumption)
from scipy.interpolate import griddata
Z = griddata((np.asarray(x), np.asarray(y)), np.asarray(z), (X, Y), method = 'linear')
return X, Y, Z
def training(self):
'''
The function trains the Kriging model
(regression model with polynomials of order 0, 1, 2)
'''
print('############################################################\
\nTrain the Kriging model (order ', self.order, ')\
\n############################################################')
# # Step 0: check if outkriging.dat existing
# if os.path.exists(self.outkrigingFile):
# query = query_yes_no('kriging results already exist on the local machine, do you want to overwrite it?')
# if query == False: return
# Step 1: Load the training data S, Y
print('Step 1: Load the training data S, Y')
SYname, SYvalue = self.file_read(self.inkrigingFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [S_row, Y_row, S_col, Y_col]
S = copy.deepcopy(SYvalue[:, :S_col])
Y = copy.deepcopy(SYvalue[:, S_col:])
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
# Step 2: Normalize S, Y
print('Step 2: Normalize S, Y')
meanS = np.mean(S, axis = 0)
meanY = np.mean(Y, axis = 0)
stdS = np.std(S, axis = 0, ddof = 1) #calculate standard deviation of normal distribution
stdY = np.std(Y, axis = 0, ddof = 1)
stdS[stdS == 0] = 1
stdY[stdY == 0] = 1
S_norm = (S - np.tile(meanS, [S_row, 1]))/np.tile(stdS, [S_row, 1])
Y_norm = (Y - np.tile(meanY, [Y_row, 1]))/np.tile(stdY, [Y_row, 1])
# copy from local to global
self.S_norm = S_norm
self.Y_norm = Y_norm
self.S = S
self.Y = Y
[self.stdS, self.stdY] = [stdS, stdY]
self.Sname = Sname
self.Yname = Yname
# Step 3: Initial Regression model
print('Step 3: Regression model')
theta1 = np.ones(S_col)*10.0
lo = np.ones(S_col)*0.1
up = np.ones(S_col)*20.0
print('\tDesign variable: ')
print('\tlower bound: ', lo, ', upper bound: ', up, ', initial theta: ', theta1)
#call cal_obj (1st)
obj = self.cal_obj(theta1, False, self.order)
print('\tInitial: obj: ', obj)
# Step 4: Loop optimizing the regression model
if S_col <= 2:
kmax = 2
elif S_col <= 4:
kmax = copy.deepcopy(S_col)
else:
kmax = 4
p = np.array(range(0, S_col))+1
D = np.power(2, p/(float(S_col)+2.))
#print('p: ', p)
#print('D: ', D)
for i_opt in range(kmax):
# EXPLORE
theta1_org = copy.deepcopy(theta1)
atbd = None
theta_theta = copy.deepcopy(theta1)
for k in range(S_col):
if theta1[k] == lo[k]:
atbd = 1
theta_theta[k] = theta1[k]*np.power(D[k], 0.5)
elif theta1[k] == up[k]:
atbd = 1
theta_theta[k] = theta1[k]/np.power(D[k], 0.5)
else:
atbd = 0
if up[k] >= theta1[k]*D[k]:
theta_theta[k] = theta1[k]*D[k]
else:
theta_theta[k] = up[k]
#call cal_obj (2nd)
obj_tmp = self.cal_obj(theta_theta, False, self.order)
if obj_tmp < obj:
obj = copy.deepcopy(obj_tmp)
theta1 = copy.deepcopy(theta_theta)
else:
if atbd == 0:
if lo[k] >= theta1[k]/D[k]:
theta_theta[k] = lo[k]
else:
theta_theta[k] = theta1[k]/D[k]
#call cal_obj (3rd)
obj_tmp = self.cal_obj(theta_theta, False, self.order)
if obj_tmp < obj:
obj = copy.deepcopy(obj_tmp)
theta1 = copy.deepcopy(theta_theta)
print('\t', i_opt+1, ' iteration - Finish EXPLORE - obj: ', obj_tmp)
# MOVE
v = theta_theta/theta1_org
k = np.sum(v == 1)
if k == S_col:
for i in range(S_col):
D[i] = np.power(D[S_col-i-1], 0.2)
rept = 1
while rept == 1:
for i in range(S_col):
if lo[i] >= theta1[i]*v[i]:
move_tmp = lo[i]
else:
move_tmp = theta1[i]*v[i]
if up[i] >= move_tmp:
theta_theta[i] = move_tmp
else:
theta_theta[i] = up[i]
#call cal_obj (4th)
obj_tmp = self.cal_obj(theta_theta, False, self.order)
if obj_tmp < obj:
obj = copy.deepcopy(obj_tmp)
theta1 = copy.deepcopy(theta_theta)
v = v*v
#print('v new: ', v)
else:
rept = 0
for i in range(S_col):
if theta_theta[i] == lo[i] or theta_theta[i] == up[i]:
rept = 0
print('\t - Finish MOVE - obj: ', obj_tmp)
#update D
D_tmp = np.power(D, 0.25)
#print('D: ', D)
#print('D_tmp', D_tmp)
D[:(S_col-1)] = D_tmp[1:]
D[S_col-1] = D_tmp[0]
#print('D: ', D)
# Step 5: Final Regression Model
obj, beta, sigma2, G, Ft, gamma, C = self.cal_obj(theta1, True, self.order)
print('\tFinal: obj: ', obj, ', theta: ', theta1)
# Step 6: Write the trained model
print('Step 4: Write the trained model')
with open(self.outkrigingFile, 'w') as f:
f.write('S_row\n')
f.write(str(S_row) + '\n')
f.write('S_col\n')
f.write(str(S_col) + '\n')
f.write('Y_row\n')
f.write(str(Y_row) + '\n')
f.write('Y_col\n')
f.write(str(Y_col) + '\n')
f.write('meanS\n')
for value in meanS:
f.write(str(value) + ' ')
f.write('\n' + '\n')
f.write('meanY\n')
for value in meanY:
f.write(str(value) + ' ')
f.write('\n' + '\n')
f.write('stdS\n')
for value in stdS:
f.write(str(value) + ' ')
f.write('\n' + '\n')
f.write('stdY\n')
for value in stdY:
f.write(str(value) + ' ')
f.write('\n' + '\n')
f.write('theta\n')
for value in theta1:
f.write(str(value) + ' ')
f.write('\n' + '\n')
f.write('beta\n')
[row, col] = beta.shape
for i in range(row):
for j in range(col-1):
f.write(str(beta[i, j]) + ' ')
f.write(str(beta[i, col-1]) + '\n')
f.write('\n')
f.write('sigma2\n')
for i in range(len(sigma2.T)):
f.write(str(sigma2[0,i]) + ' ')
f.write('\n' + '\n')
f.write('G\n')
[row, col] = G.shape
for i in range(row):
for j in range(col-1):
f.write(str(G[i, j]) + ' ')
f.write(str(G[i, col-1]) + '\n')
f.write('\n')
f.write('Ft\n')
[row, col] = Ft.shape
for i in range(row):
for j in range(col-1):
f.write(str(Ft[i, j]) + ' ')
f.write(str(Ft[i, col-1]) + '\n')
f.write('\n')
f.write('gamma\n')
[row, col] = gamma.shape
for i in range(row):
for j in range(col-1):
f.write(str(gamma[i, j]) + ' ')
f.write(str(gamma[i, col-1]) + '\n')
f.write('\n')
f.write('C\n')
[row, col] = C.shape
for i in range(row):
for j in range(col-1):
f.write(str(C[i, j]) + ' ')
f.write(str(C[i, col-1]) + '\n')
f.write('\n')
print('End of code\n')
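# Hypothetical training sketch (file names come from the __init__ defaults):
#
#   rom = kriging('/scratch/rom_demo', order = 0)
#   rom.training()      # reads inTraining_kriging.dat and info_kriging.dat
#   rom.variables()     # list the trained input/output variable names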
def prediction(self):
'''
This function predicts the outputs and MSEs
based on the trained kriging model
(regression model with polynomials of order 0, 1, 2)
'''
print('############################################################\
\nPredict Based on the trained kriging model (order ', self.order, ')\
\n############################################################')
# # Step 0: check if outprediction.dat existing
# if os.path.exists(self.outpredictionFile):
# query = query_yes_no('prediction results already exist on the local machine, do you want to overwrite it?')
# if query == False: return
# Step 1: Load the training data S, Y and prediction data Sp
print('Step 1: Load the training data S, Y and prediction input data X')
SYname, SYvalue = self.file_read(self.inkrigingFile)
Xname, Xvalue = self.file_read(self.inpredictionFile)
# Step 2: Load the trained model (outkrigingFile)
print('Step 2: Load the trained model (outkrigingFile)')
with open(self.outkrigingFile) as f:
i = 0
for line in f.readlines():
if i == 2-1:
linestr = line
S_row = int(linestr)
#print(type(S_row))
#print(S_row)
if i == 4-1:
linestr = line
S_col = int(linestr)
#print(type(S_col))
#print(S_col)
if i == 6-1:
linestr = line
Y_row = int(linestr)
if i == 8-1:
linestr = line
Y_col = int(linestr)
i += 1
countFt = 0
countgamma = 0
countC = 0
countbeta = 0
countG = 0
countsigma2 = 0
if self.order == 0:
# load outkriging file with order 0: especially G, beta
with open(self.outkrigingFile) as f:
i = 0
for line in f.readlines():
if i == 10-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
meanS = np.array(linenum)
#print(meanS)
#print(type(meanS))
if i == 13-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
meanY = np.array(linenum)
if i == 16-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
stdS = np.array(linenum)
#print(stdS)
#print(type(stdS))
if i == 19-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
stdY = np.array(linenum)
if i == 22-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
theta1_opt = np.array(linenum)
#print(theta1_opt)
#print(type(theta1_opt))
if i == 25-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
beta = np.array(linenum)
if i == 28-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
sigma2 = np.array(linenum)
if i == 31-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
G = np.array(linenum)
#print(G)
if i == 34-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
Ft = np.array(linenum)
countFt += 1
if i == 34-1+countFt and countFt < S_row:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
Ft = np.append(Ft, linenum)
countFt += 1
if i == 34-1+countFt+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
gamma = np.array(linenum)
countgamma += 1
if i == 34-1+countFt+2+countgamma and countgamma < Y_col:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
gamma = np.append(gamma, linenum, axis = 0)
countgamma += 1
if i == 34-1+countFt+2+countgamma+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
C = np.array(linenum)
countC += 1
if i == 34-1+countFt+2+countgamma+2+countC and countC < S_row:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
C = np.append(C, linenum, axis = 0)
countC += 1
i += 1
theta1_opt = np.reshape(theta1_opt, (1, theta1_opt.size))
beta = np.reshape(beta, (1, beta.size))
sigma2 = np.reshape(sigma2, (1, sigma2.size))
G = np.reshape(G, (G.size, 1))
Ft = np.reshape(Ft, (Ft.size, 1))
gamma = np.reshape(gamma, (countgamma, int(gamma.size/countgamma)))
C = np.reshape(C, (countC, int(C.size/countC)))
elif self.order == 1:
# load outkriging file with order 1: especially G, beta
with open(self.outkrigingFile) as f:
i = 0
for line in f.readlines():
if i == 10-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
meanS = np.array(linenum)
if i == 13-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
meanY = np.array(linenum)
if i == 16-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
stdS = np.array(linenum)
if i == 19-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
stdY = np.array(linenum)
if i == 22-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
theta1_opt = np.array(linenum)
if i == 25-1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
beta = np.array(linenum)
countbeta += 1
if i == 25-1+countbeta and countbeta < S_col+1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
beta = np.append(beta, linenum, axis = 0)
countbeta += 1
if i == 25-1+countbeta+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
sigma2 = np.array(linenum)
countsigma2 += 1
if i == 25-1+countbeta+2+countsigma2+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
G = np.array(linenum)
countG += 1
if i == 25-1+countbeta+2+countsigma2+2+countG and countG < S_col+1:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
G = np.append(G, linenum, axis = 0)
countG += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
Ft = np.array(linenum)
countFt += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2+countFt and countFt < S_row:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
Ft = np.append(Ft, linenum, axis = 0)
countFt += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2+countFt+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
gamma = np.array(linenum)
countgamma += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2+countFt+2+countgamma and countgamma < Y_col:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
gamma = np.append(gamma, linenum, axis = 0)
countgamma += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2+countFt+2+countgamma+2:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
C = np.array(linenum)
countC += 1
if i == 25-1+countbeta+2+countsigma2+2+countG+2+countFt+2+countgamma+2+countC and countC < S_row:
linestr = line.strip().split(' ')
linenum = [float(lineele) for lineele in linestr]
C = np.append(C, linenum, axis = 0)
countC += 1
i += 1
theta1_opt = np.reshape(theta1_opt, (1, theta1_opt.size))
beta = np.reshape(beta, (countbeta, int(beta.size/countbeta)))
sigma2 = np.reshape(sigma2, (1, sigma2.size))
G = np.reshape(G, (countG, int(G.size/countG)))
Ft = np.reshape(Ft, (countFt, int(Ft.size/countFt)))
gamma = np.reshape(gamma, (countgamma, int(gamma.size/countgamma)))
C = np.reshape(C, (countC, int(C.size/countC)))
# Design and response sites
S = copy.deepcopy(SYvalue[:, :S_col])
Y = copy.deepcopy(SYvalue[:, S_col:])
X = copy.deepcopy(Xvalue)
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
[X_row, X_col] = X.shape
if X_col != S_col:
sys.exit('Code terminated: # of prediction input variables does not match # of given input variables')
# Step 3: Normalize S, Y, X
S_norm = (S - np.tile(meanS, [S_row, 1]))/np.tile(stdS, [S_row, 1])
Y_norm = (Y - np.tile(meanY, [Y_row, 1]))/np.tile(stdY, [Y_row, 1])
X_norm = (X - np.tile(meanS, [X_row, 1]))/np.tile(stdS, [X_row, 1])
# Step 4: Build regression model
print('Step 3: Regression model')
#Calculate dx
dx = np.empty([X_row*S_row, S_col])
for j in range(S_col):
for i in range(X_row*S_row):
#print(i//S_row)
#print(i%S_row)
dx[i, j] = X_norm[i//S_row, j] - S_norm[i%S_row, j]
#print('dx: ', dx)
#Calculate r
r = np.empty([X_row*S_row, 1])
r_tmp = 0.0
for i in range(X_row*S_row):
for j in range(S_col):
r_tmp = r_tmp - theta1_opt[0, j]*dx[i, j]*dx[i, j]
r[i, 0] = np.exp(r_tmp)
r_tmp = 0.0
r_reshape = np.reshape(r, (X_row, S_row)).T
#print('r: ', r)
#print('r_reshape: ', r_reshape)
#calculate f
if self.order == 0:
f = np.ones([X_row, 1])
elif self.order == 1:
f = np.ones([X_row, S_col+1])
for i in range(S_col):
for j in range(X_row):
f[j, i+1] = X_norm[j, i]
#Calculate prediction Xy
Xy_norm = np.matmul(f, beta) + np.matmul(gamma, r_reshape).T
Xy = np.tile(meanY, [X_row, 1]) + np.tile(stdY, [X_row, 1])*Xy_norm
print('\tFinish Prediction - Xy')
#print('Finish Prediction - Xy: \n', Xy)
#Calculate MSEs
rt = np.matmul(la.inv(C), r_reshape)
#print('rt: ', rt)
u_tmp = np.matmul(Ft.T, rt)-f.T
u = la.solve(G, u_tmp)
#print('u: ', np.sum(u*u, axis = 0))
or1_tmp = 1 + np.sum(u*u, axis = 0) - np.sum(rt*rt, axis = 0)
# print(or1_tmp)
or1_tmp = np.reshape(or1_tmp, (1, or1_tmp.size)).T
# print(or1_tmp)
or1 = np.abs(np.tile(sigma2, [X_row, 1]) * np.tile(or1_tmp, [1, Y_col]))
print('\tFinish MSEs - or1')
#print('Finish MSEs - or1: ', or1)
# print(Xy)
# print(or1)
# Copy to Global
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [S_row, Y_row, S_col, Y_col]
self.S_norm = S_norm
self.Y_norm = Y_norm
self.S = S
self.Y = Y
[self.stdS, self.stdY] = [stdS, stdY]
self.X = X
self.Xy = Xy
self.X_norm = X_norm
self.Xy_norm = Xy_norm
self.MSE = or1
self.Sname = Sname
self.Yname = Yname
# Step 5: Write the predictions
print('Step 4: Write the predictions')
with open(self.outpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
for i in range(Y_col):
f.write('OUT' + str(i+1) + '\t')
for i in range(Y_col):
f.write('MSE' + str(i+1) + '\t')
f.write('\n')
for i in range(X_row):
for j in range(S_col):
f.write('{:11.4E}\t'.format(X[i, j]))
#f.write(str(X[i, j]) + '\t')
for j in range(Y_col):
f.write('{:11.4E}\t'.format(Xy[i, j]))
#f.write(str(Xy[i, j]) +'\t')
for j in range(Y_col):
f.write('{:11.4E}\t'.format(or1[i, j]))
#f.write(str(or1[i, j]) +'\t')
f.write('\n')
print('End of code\n')
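# The predictor assembled above is the usual universal-kriging form,
#
#   Xy_norm = f(X_norm) @ beta + (gamma @ r).T
#   MSE     = |sigma2 * (1 + u.T @ u - rt.T @ rt)|
#
# with r the Gaussian correlation between each prediction point and the
# training sites, rt = inv(C) @ r and u = inv(G) @ (Ft.T @ rt - f.T); the
# normalized prediction is de-normalized with meanY/stdY before writing.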
def buildROM(self, indS = None, indY = None, frac4ROM = 80, filter_enabled = False, z_thres = 5):
'''
The function builds the ROM for the selected output variables
'''
print('############################################################\
\nBuild the ROM\
\n############################################################')
# create inKriging.dat
if os.path.exists(self.allresultsFile) and os.path.exists(self.allresults_infoFile):
## Step 1: load all simulation results
SYname, SYvalue = self.file_read(self.allresultsFile)
infoname, infovalue = self.file_read(self.allresults_infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
if indS is None: indS = list(range(1, S_col+1))
if indY is None: indY = list(range(1, Y_col+1))
## Step 1.5: filter the noise and remove all failed/unconverged cases
if SYname[S_col] == 'SimulationStatus':
cls_enabled = True
else:
cls_enabled = False
if cls_enabled == True:
SYvalue_cov = SYvalue[SYvalue[:, S_col] == 1, :]
else:
SYvalue_cov = SYvalue
if filter_enabled == True:
SY_row_rm = []
for j in indY:
tmp_data = SYvalue_cov[:, S_col+j-1]
while(True):
z = np.abs(stats.zscore(tmp_data, axis = 0))
result = np.where(z > z_thres)
index = list(result[0])
# line removal list
if len(index) == 0: break
SY_row_rm += index
SY_row_rm = list(dict.fromkeys(SY_row_rm))
# replace outliers with mean
tmp_data[SY_row_rm] = np.mean(tmp_data)
# remove rows according to SY_row_rm
SYvalue_new = np.delete(SYvalue_cov, SY_row_rm, axis = 0)
print('Noise filter: trim ' + str(len(SY_row_rm)) + ' rows from a total of ' + str(len(SYvalue_cov)) + ' rows')
else:
SYvalue_new = SYvalue_cov
[S_row, Y_row, S_col, Y_col] = [len(SYvalue_new), len(SYvalue_new), int(infovalue[0,0]), int(infovalue[0,1])]
S = copy.deepcopy(SYvalue_new[:, :S_col])
Y = copy.deepcopy(SYvalue_new[:, S_col:])
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
## Step 2: compute istep, numcrossvali, rndnumberlist
if frac4ROM >= 0:
numtraining = int(S_row*frac4ROM/100.0)
numcrossvali = S_row-numtraining
if numtraining < (2**len(indS)):
print('warning: data set to build the ROM is not large enough')
if numcrossvali > 0:
istep = int((S_row)/numcrossvali)
rndnumberlist =[]
for i in range(1, numcrossvali+1):
rndnumberlist.append(i*istep-1)
else:
rndnumberlist =[]
else:
numtraining = S_row-1000
numcrossvali = S_row-numtraining
rndnumberlist = list(range(numtraining, S_row))
## Step 3: write to inkriging.dat, info.dat and inPrediction_vali.dat
inpredictionFile4vali = self.work_path + '/inPrediction_vali_kriging.dat'
f0 = open(self.outcrossvaliFile, 'w')
f1 = open(self.inkrigingFile, 'w')
f2 = open(inpredictionFile4vali, 'w')
f3 = open(self.incrossvaliFile, 'w')
for i in indS:
f1.write(Sname[i-1] + '\t')
f2.write(Sname[i-1] + '\t')
f3.write(Sname[i-1] + '\t')
for i in indY:
f1.write(Yname[i-1] + '\t')
f3.write(Yname[i-1] + '\t')
f1.write('\n')
f2.write('\n')
f3.write('\n')
for i in range(S_row):
if i in rndnumberlist:
for j in indS:
f2.write('{:11.4E}\t'.format(S[i, j-1]))
f3.write('{:11.4E}\t'.format(S[i, j-1]))
for j in indY:
f3.write('{:11.4E}\t'.format(Y[i, j-1]))
f2.write('\n')
f3.write('\n')
else:
for j in indS:
f1.write('{:11.4E}\t'.format(S[i, j-1]))
for j in indY:
f1.write('{:11.4E}\t'.format(Y[i, j-1]))
f1.write('\n')
f1.close()
f2.close()
f3.close()
# write info.dat
with open(self.infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
## Step 4: perform training and prediction
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_vali_kriging.dat'
self.outpredictionFile = self.work_path + '/outPrediction_vali_kriging.dat'
self.training()
if numcrossvali > 0:
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
## Step 5: write to outCrossVali.dat
Yname_new = []
for i in indY:
name = Yname[i-1]
Yname_new.append(name)
f0.write(name + '\t')
f0.write('\n')
for i in range(len(rndnumberlist)):
for j in range(len(indY)):
tempi = rndnumberlist[i]
tempj = indY[j]-1
f0.write('{:11.4E}\t'.format(self.Xy[i, j]-Y[tempi, tempj]))
f0.write('\n')
f0.close()
## Step 6: write ROM prediction accuracy
int_95 = self.percent2intervl(95) # 95% confidence interval
trainingoutput_file = self.outkrigingFile
trainingoutput_accuracy = trainingoutput_file.replace(".dat", "")+'_acc.dat'
with open(trainingoutput_accuracy, 'w') as f:
f.write('ROM Accuracy (95% confidence interval): \n')
for i in range(len(Yname_new)):
f.write(Yname_new[i])
f.write('\t' + str(int_95[i]) + '\n')
elif os.path.exists(self.inkrigingFile) and os.path.exists(self.infoFile):
self.training()
print('End of code\n')
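# Hypothetical end-to-end sketch: summarize converged cases, then build a
# ROM that holds out 20% of the rows for cross validation.
#
#   rom = kriging('/scratch/rom_demo', order = 1)
#   rom.summarize_SimuResult('/scratch/rom_demo', range(1, 257))
#   rom.buildROM(indS = [1, 2], indY = [2, 5], frac4ROM = 80)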
def percent2intervl(self, percentage, var = None):
print('############################################################\
\nPercentage to Confidence Interval\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence interval
interval_all = np.zeros((len(Yname),),dtype=np.float64)
for i in range(len(Yname)):
err = np.sort(ERR[:, i])
N = len(err)
n = (N-1)*percentage/100.0 + 1
if n == 1:
interval = err[0]
elif n == N:
interval = err[N-1]
else:
k = int(n)
d = n-k
interval = err[k-1]+d*(err[k]-err[k-1])
interval_all[i] = interval
if var == None:
print('For "' + str(Yname[i]) + '":'
+ '[' + Yunit[i] + ']'
+' \n\t'
+ str(percentage) + '% confidence interval is '
+ '\u00B1' + '{:11.4E}\t'.format(interval))
elif Yname[i] == var:
print('For "' + str(Yname[i]) + '":'
+ '[' + Yunit[i] + ']'
+' \n\t'
+ str(percentage) + '% confidence interval is '
+ '\u00B1' + '{:11.4E}\t'.format(interval))
elif var not in Yname:
print('The given variable cannot be found')
print('End of code\n')
return(interval_all)
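# Worked example of the interpolation above: for N = 5 sorted errors
# err = [-2, -1, 0, 1, 3] and percentage = 90, n = 4*0.9 + 1 = 4.6, so
# k = 4, d = 0.6 and interval = err[3] + 0.6*(err[4] - err[3]) = 2.2.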
def intervl2percent(self, interval, var = None):
print('############################################################\
\nConfidence Interval to Percentage\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence percentage
percentage_all = np.zeros((len(Yname),),dtype=np.float64)
for i in range(len(Yname)):
if var == Yname[i]:
err = np.sort(ERR[:, i])
N = len(err)
if interval <= err[0]:
percentage = 0
elif interval >= err[N-1]:
percentage = 1
else:
result = np.where(err>interval)
index = result[0]
k = index[0]
percentage = ((interval-err[k-1])/(err[k]-err[k-1])+k-1)/float(N-1)
percentage_all[i] = percentage
print('For "' + str(Yname[i]) + '": '
+ '[' + Yunit[i] + ']'
+ '\n\t\u00B1' + str(interval)
+ ' interval has a confidence of ' + str(round(percentage*100, 2)) + '%')
elif var not in Yname:
print('The given variable cannot be found')
print('End of code\n')
return(percentage_all)
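# Hypothetical usage: what confidence does a +/-0.005 V error band give for
# the cross-validated stack voltage?
#
#   rom.intervl2percent(0.005, var = 'Stack_Voltage')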
def plot_contour_2D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 2D contours of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets; 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate inPrediction4contour.dat
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_kriging.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_kriging.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
plt.figure(figsize=(17.5,6))
plt.subplot(1, 2, 1)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x1, y1, z1, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x1, y1, z1, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
#plt.colorbar().set_label(label='a label',size=15,weight='bold')
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.subplot(1, 2, 2)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x2, y2, z2, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x2, y2, z2, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption == True:
figurename = '2D_contour.png'
plt.savefig(figurename)
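# Hypothetical plotting sketch (names must exist in Sname/Yname):
#
#   rom.plot_contour_2D('Average_CurrentDensity', 'Fuel_Utilization',
#                       'Stack_Voltage', pltoption = 1)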
def plot_contour_3D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 3D surfaces of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets; 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate inPrediction4contour.dat
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_kriging.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_kriging.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(18.5,6))
ax = fig.add_subplot(1, 2, 1, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x1, y1, z1, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
ax = fig.add_subplot(1, 2, 2, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x2, y2, z2, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption == True:
figurename = '3D_contour.png'
plt.savefig(figurename)
def plot_box(self, xvariable, yvariable, saveoption = False):
'''
The function draws a box plot; it can help with sensitivity studies
'''
# convert to pandas DataFrames
S = pd.DataFrame(data = self.S, columns = self.Sname, dtype = 'float')
Y = pd.DataFrame(data = self.Y, columns = self.Yname, dtype = 'float')
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_output.index(yvariable)
yunit = units_output[tempindex]
# generate box plot data
x = S[[xvariable]]
y = Y[[yvariable]]
min_x = min(x.values)
max_x = max(x.values)
x = round((x-min_x)/((max_x-min_x)/9), 0)*((max_x-min_x)/9)+min_x
x = round(x, 2)
#xy = pd.concat([x, y], axis = 1, sort = False)
#print(x.sort_values(by = ['Average_CurrentDensity']))
#print(xy)
# box plot
plt.figure(figsize=(18.5,6))
sns.set_context("paper", font_scale=3)
sns.set_style('ticks')
bplot = sns.boxplot(y=y[yvariable], x=x[xvariable],
color = 'yellow', width = 0.5)
bplot = sns.swarmplot(y=y[yvariable], x=x[xvariable],
color = 'black', alpha = 0.5)
sns.axes_style()
bplot.axes.set_title('Design-response sites', fontsize = 25)
bplot.set_xlabel(xvariable+', ['+xunit+']', fontsize = 25)
bplot.set_ylabel(yvariable+', ['+yunit+']', fontsize = 25)
bplot.tick_params(labelsize = 25)
plt.show()
# save option
if saveoption == True:
figurename = 'boxplot.png'
plt.savefig(figurename)
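# Hypothetical sketch: bin the current density into 10 levels (as done
# above) and box-plot the voltage response against it.
#
#   rom.plot_box('Average_CurrentDensity', 'Stack_Voltage')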
def Generate_inprediction(self, numsample = None, listmin = None, listmax = None):
'''
The function generates the prediction input file by Latin hypercube sampling if it does not already exist
'''
print('############################################################\
\nGenerate prediction input\
\n############################################################')
# find input variable list Sname
SYname, SYvalue = self.file_read(self.inkrigingFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_col, Y_col] = [int(infovalue[0,0]), int(infovalue[0,1])]
Sname = copy.deepcopy(SYname[:S_col])
# check if exists
filename = self.inpredictionFile
Create_handle = True
if os.path.exists(filename):
query = query_yes_no('Prediction input file already exists on the local machine, do you want to overwrite it?')
Create_handle = query
if Create_handle == True:
numvar = len(Sname)
listvar = Sname
if len(listmin) != numvar or len(listmax) != numvar:
sys.exit('Code terminated: the lengths of variables/minimums/maximums not match')
# LHS sampling
xlimits = np.transpose(np.vstack((listmin, listmax)))
sampling = LHS(xlimits = xlimits)
LHSvalue = sampling(numsample)
# write prediction input
with open(filename, 'w') as f:
for name in Sname:
f.write(name + '\t')
f.write('\n')
for i in range(numsample):
for j in range(numvar):
f.write('{:11.4E}\t'.format(LHSvalue[i, j]))
f.write('\n')
print("Created prediciton input file")
print('End of code\n')
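# Hypothetical sampling sketch: 500 Latin-hypercube points over two inputs
# (bounds are illustrative only):
#
#   rom.Generate_inprediction(numsample = 500,
#                             listmin = [2000.0, 0.5],
#                             listmax = [6000.0, 0.9])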
class DNN():
def __init__(self, work_path,
allresultsFile = 'allResults.dat',
allresults_infoFile = 'allResults_info.dat',
intrainingFile = 'inTraining_DNN.dat',
infoFile = 'info_DNN.dat',
outtrainingFile = 'outTraining_DNN.dat',
inpredictionFile = 'inPrediction_DNN.dat',
outpredictionFile = 'outPrediction_DNN.dat',
incrossvaliFile = 'inCrossVali_DNN.dat',
outcrossvaliFile = 'outCrossVali_DNN.dat'):
self.work_path = work_path
self.allresultsFile = work_path + '/' + allresultsFile
self.allresults_infoFile = work_path + '/' + allresults_infoFile
self.intrainingFile = work_path + '/' + intrainingFile
self.infoFile = work_path + '/' + infoFile
self.outtrainingFile = work_path + '/' + outtrainingFile
self.inpredictionFile = work_path + '/' + inpredictionFile
self.outpredictionFile = work_path + '/' + outpredictionFile
self.incrossvaliFile = work_path + '/' + incrossvaliFile
self.outcrossvaliFile = work_path + '/' + outcrossvaliFile
self.Sname = None
self.Yname = None
self.S_norm = None
self.Y_norm = None
self.X_norm = None
self.Xy_norm = None
self.S = None
self.Y = None
self.X = None
self.Xy = None
self.MSE = None
self.S_row = 0
self.Y_row = 0
self.S_col = 0
self.Y_col = 0
self.stdS = None
self.stdY = None
self.meanS = None
self.meanY = None
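# Hypothetical construction sketch: the DNN ROM mirrors the kriging file
# layout; only the file-name defaults differ.
#
#   dnn = DNN('/scratch/rom_demo')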
#%% The DNN function for ROM, save the trained DNN
def DNNROM(self,maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,input_num,output_num,DNN_save_file):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs = maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
# Network Parameters
n_hidden_1 = 32#64
n_hidden_2 = 200#400
n_hidden_3 = 200#400
n_hidden_4 = 256#512
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], 0, 0.1,seed=seed)),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], 0, 0.1,seed=seed)),
'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], 0, 0.1,seed=seed)),
'h4': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_4], 0, 0.1,seed=seed)),
'out': tf.Variable(tf.random_normal([n_hidden_4, n_classes], 0, 0.1,seed=seed))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1], 0, 0.1,seed=seed)),
'b2': tf.Variable(tf.random_normal([n_hidden_2], 0, 0.1,seed=seed)),
'b3': tf.Variable(tf.random_normal([n_hidden_3], 0, 0.1,seed=seed)),
'b4': tf.Variable(tf.random_normal([n_hidden_4], 0, 0.1,seed=seed)),
'out': tf.Variable(tf.random_normal([n_classes], 0, 0.1,seed=seed))
}
# Create model
def multilayer_perceptron(x):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.sigmoid(layer_1)
tf.summary.histogram("weights",weights['h1'])
tf.summary.histogram("layer", layer_1)
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.sigmoid(layer_2)
layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
layer_3 = tf.nn.sigmoid(layer_3)
layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
layer_4 = tf.nn.sigmoid(layer_4)
out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
saver = tf.train.Saver()
#tf.reset_default_graph()
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
#count cost convergence for validation
count_converge[epoch]=val_c
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 == 0 :
print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c))
#for validation set if no improvement then break
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#for k,v in zip(variables_names, values):
# print(k, v)
# for v in values:
# print(v)
sess.close()
tf.reset_default_graph()
return(test_p1, values)
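# Note on the loop above: every 2000 epochs the validation cost is compared
# against the previous checkpoint; training stops early when it no longer
# improves, otherwise it runs to maxiteration, and tf.train.Saver writes the
# model to DNN_save_file at each checkpoint.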
#%% The DNN function for ROM, save the trained DNN
def DNNROM2(self,maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,input_num,output_num,DNN_save_file, DNNsize):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs = maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
# Network Parameters
DNNlayers=len(DNNsize)
print('Number of layers = ',DNNlayers)
if DNNlayers>10:
print('Number of layers must be <= 10')
return
if DNNlayers>=1: n_hidden_1 = DNNsize[0]#64
if DNNlayers>=2: n_hidden_2 = DNNsize[1]#400
if DNNlayers>=3: n_hidden_3 = DNNsize[2]#400
if DNNlayers>=4: n_hidden_4 = DNNsize[3]#512
if DNNlayers>=5: n_hidden_5 = DNNsize[4]#512
if DNNlayers>=6: n_hidden_6 = DNNsize[5]#512
if DNNlayers>=7: n_hidden_7 = DNNsize[6]#512
if DNNlayers>=8: n_hidden_8 = DNNsize[7]#512
if DNNlayers>=9: n_hidden_9 = DNNsize[8]#512
if DNNlayers>=10: n_hidden_10 = DNNsize[9]#512
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
#tf.compat.v1.disable_eager_execution()
        # Store layers weight & bias: one 'h<k>'/'b<k>' entry per hidden layer
        # plus an 'out' entry. All weights are created first, then all biases,
        # so the tf.Variable creation order (and hence the checkpoint variable
        # names) stays fixed for any DNNlayers.
        layer_dims = [n_input] + list(DNNsize)
        weights = {'h%d' % k: tf.Variable(tf.random.normal([layer_dims[k-1], layer_dims[k]], 0, 0.1, seed=seed))
                   for k in range(1, DNNlayers+1)}
        weights['out'] = tf.Variable(tf.random.normal([layer_dims[DNNlayers], n_classes], 0, 0.1, seed=seed))
        biases = {'b%d' % k: tf.Variable(tf.random.normal([layer_dims[k]], 0, 0.1, seed=seed))
                  for k in range(1, DNNlayers+1)}
        biases['out'] = tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
# Create model
        def multilayer_perceptron(x):
            # Hidden layers with sigmoid activation; the output layer is linear
            #print(DNNlayers)
            layer = x
            for k in range(1, DNNlayers+1):
                layer = tf.nn.sigmoid(tf.add(tf.matmul(layer, weights['h%d' % k]), biases['b%d' % k]))
            out_layer = tf.matmul(layer, weights['out']) + biases['out']
            return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
saver = tf.train.Saver()
#tf.reset_default_graph()
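        # Threading note: inter_op_parallelism_threads=0 lets TensorFlow size
        # the pool for independent ops itself, while intra_op_parallelism_threads=28
        # caps the threads used inside a single op (e.g. one large matmul).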
config = tf.ConfigProto(device_count={"CPU": 1}, # limit to num_cpu_core CPU usage
inter_op_parallelism_threads = 0,
intra_op_parallelism_threads = 28,
)
init = tf.global_variables_initializer()
start=time.time()
with tf.Session(config=config) as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
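                # Note: total_len counts the full (train+validation) set while
                # batches are sliced from X_train, and range(total_batch-1)
                # skips the last slice, so each epoch visits somewhat less than
                # the full training split.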
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
#count cost convergence for validation
count_converge[epoch]=val_c
if epoch %2000 == 0 :
end=time.time()
                    print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c),' training time (s/2000 epochs) = ','{:.5f}'.format(end-start))
start=time.time()
#for validation set if no improvement then break
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
#saver.save(sess, r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\DNN')
saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#for k,v in zip(variables_names, values):
# print(k, v)
# for v in values:
# print(v)
sess.close()
tf.reset_default_graph()
return(test_p1, values)
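    # Usage note (illustrative): the trainer above returns (test_p1, values),
    # i.e. predictions for testX_nrm1 plus the trained weight/bias arrays, and
    # writes a checkpoint to DNN_save_file that the restore functions below can
    # reload, provided they rebuild the graph with a matching DNNsize.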
#%% The DNN function for ROM, load in a trained DNN, and continue training
def DNNROM_restore(self,maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,input_num,output_num,DNN_load_file,DNN_save_file):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs = maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
# Network Parameters
n_hidden_1 = 32#64
n_hidden_2 = 200#400
n_hidden_3 = 200#400
n_hidden_4 = 256#512
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
        weights = {
            'h1': tf.Variable(tf.random.normal([n_input, n_hidden_1], 0, 0.1, seed=seed)),
            'h2': tf.Variable(tf.random.normal([n_hidden_1, n_hidden_2], 0, 0.1, seed=seed)),
            'h3': tf.Variable(tf.random.normal([n_hidden_2, n_hidden_3], 0, 0.1, seed=seed)),
            'h4': tf.Variable(tf.random.normal([n_hidden_3, n_hidden_4], 0, 0.1, seed=seed)),
            'out': tf.Variable(tf.random.normal([n_hidden_4, n_classes], 0, 0.1, seed=seed))
        }
        biases = {
            'b1': tf.Variable(tf.random.normal([n_hidden_1], 0, 0.1, seed=seed)),
            'b2': tf.Variable(tf.random.normal([n_hidden_2], 0, 0.1, seed=seed)),
            'b3': tf.Variable(tf.random.normal([n_hidden_3], 0, 0.1, seed=seed)),
            'b4': tf.Variable(tf.random.normal([n_hidden_4], 0, 0.1, seed=seed)),
            'out': tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
        }
# Create model
def multilayer_perceptron(x):
            # Hidden layers with sigmoid activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.sigmoid(layer_1)
tf.summary.histogram("weights",weights['h1'])
tf.summary.histogram("layer", layer_1)
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.sigmoid(layer_2)
layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
layer_3 = tf.nn.sigmoid(layer_3)
layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
layer_4 = tf.nn.sigmoid(layer_4)
out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
saver = tf.train.Saver()
#tf.train.latest_checkpoint(r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\checkpoint')
#init = tf.global_variables_initializer()
with tf.Session() as sess:
saver.restore(sess, DNN_load_file)
#sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
#count cost convergence for validation
count_converge[epoch]=val_c
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 == 0 :print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c))
#for validation set if no improvement then break
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#for k,v in zip(variables_names, values):
# print(k, v)
# for v in values:
# print(v)
sess.close()
tf.reset_default_graph()
return(test_p1,values)
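    # DNNROM_restore assumes the fixed 4-layer (32/200/200/256) architecture;
    # for checkpoints trained with a custom DNNsize use DNNROM_restore2 below.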
#%% The DNN function for ROM, load in a trained DNN, and continue training
def DNNROM_restore2(self,maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,input_num,output_num,DNN_load_file,DNN_save_file, DNNsize):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs = maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
DNNlayers=len(DNNsize)
print('Number of layers = ',DNNlayers)
        if DNNlayers>10:
            print('Number of layers must be <= 10')
            return()
        # Hidden-layer widths come directly from DNNsize; they must match the
        # architecture stored in DNN_load_file for saver.restore to succeed.
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
#tf.compat.v1.disable_eager_execution()
        # Store layers weight & bias: one 'h<k>'/'b<k>' entry per hidden layer
        # plus an 'out' entry. All weights are created first, then all biases,
        # so the tf.Variable creation order (and hence the checkpoint variable
        # names) stays fixed for any DNNlayers.
        layer_dims = [n_input] + list(DNNsize)
        weights = {'h%d' % k: tf.Variable(tf.random.normal([layer_dims[k-1], layer_dims[k]], 0, 0.1, seed=seed))
                   for k in range(1, DNNlayers+1)}
        weights['out'] = tf.Variable(tf.random.normal([layer_dims[DNNlayers], n_classes], 0, 0.1, seed=seed))
        biases = {'b%d' % k: tf.Variable(tf.random.normal([layer_dims[k]], 0, 0.1, seed=seed))
                  for k in range(1, DNNlayers+1)}
        biases['out'] = tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
# Create model
        def multilayer_perceptron(x):
            # Hidden layers with sigmoid activation; the output layer is linear
            #print(DNNlayers)
            layer = x
            for k in range(1, DNNlayers+1):
                layer = tf.nn.sigmoid(tf.add(tf.matmul(layer, weights['h%d' % k]), biases['b%d' % k]))
            out_layer = tf.matmul(layer, weights['out']) + biases['out']
            return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
saver = tf.train.Saver()
#tf.train.latest_checkpoint(r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\checkpoint')
#init = tf.global_variables_initializer()
start=time.time()
with tf.Session() as sess:
saver.restore(sess, DNN_load_file)
#sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
#count cost convergence for validation
count_converge[epoch]=val_c
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 == 0 :
end=time.time()
                    print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c),' training time (s/2000 epochs) = ','{:.5f}'.format(end-start))
start=time.time()
#for validation set if no improvement then break
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
# for k,v in zip(variables_names, values):
# print(k, v)
# for v in values:
# print(v)
sess.close()
tf.reset_default_graph()
return(test_p1,values)
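    # Example call (hypothetical checkpoint names; DNNsize must equal the value
    # used when DNN_load_file was written, otherwise saver.restore raises a
    # variable/shape mismatch):
    #   test_pred, values = self.DNNROM_restore2(
    #       50000, trainX_nrm, trainY_nrm, testX_nrm1,
    #       trainX_nrm.shape[1], trainY_nrm.shape[1],
    #       'DNN_ckpt', 'DNN_ckpt', [32, 200, 200, 256])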
#%% The DNN function for ROM, load in a trained DNN, and do prediction
def DNNROM_prediction(self,testX_nrm1,input_num,output_num,DNN_load_file):
#split_size = int(trainX_nrm.shape[0]*0.8)
#X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
#y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
#learning_rate = 0.001
#training_epochs = 0
#batch_size = int(X_train.shape[0]/3)
#total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM predicting start ...")
#print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
#print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for class training data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
# Network Parameters
n_hidden_1 = 32#64
n_hidden_2 = 200#400
n_hidden_3 = 200#400
n_hidden_4 = 256#512
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
# y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
        weights = {
            'h1': tf.Variable(tf.random.normal([n_input, n_hidden_1], 0, 0.1, seed=seed)),
            'h2': tf.Variable(tf.random.normal([n_hidden_1, n_hidden_2], 0, 0.1, seed=seed)),
            'h3': tf.Variable(tf.random.normal([n_hidden_2, n_hidden_3], 0, 0.1, seed=seed)),
            'h4': tf.Variable(tf.random.normal([n_hidden_3, n_hidden_4], 0, 0.1, seed=seed)),
            'out': tf.Variable(tf.random.normal([n_hidden_4, n_classes], 0, 0.1, seed=seed))
        }
        biases = {
            'b1': tf.Variable(tf.random.normal([n_hidden_1], 0, 0.1, seed=seed)),
            'b2': tf.Variable(tf.random.normal([n_hidden_2], 0, 0.1, seed=seed)),
            'b3': tf.Variable(tf.random.normal([n_hidden_3], 0, 0.1, seed=seed)),
            'b4': tf.Variable(tf.random.normal([n_hidden_4], 0, 0.1, seed=seed)),
            'out': tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
        }
# Create model
def multilayer_perceptron(x):
            # Hidden layers with sigmoid activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.sigmoid(layer_1)
tf.summary.histogram("weights",weights['h1'])
tf.summary.histogram("layer", layer_1)
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.sigmoid(layer_2)
layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
layer_3 = tf.nn.sigmoid(layer_3)
layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
layer_4 = tf.nn.sigmoid(layer_4)
out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
return out_layer
# Construct model
pred = multilayer_perceptron(x)
#cost = tf.reduce_mean(tf.square(pred-y))
#optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Run the graph in the session
#predict = np.array([])
#count_converge= [0] * training_epochs
#prev_cost=10000000.
saver = tf.train.Saver()
#tf.train.latest_checkpoint(r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\checkpoint')
#init = tf.global_variables_initializer()
with tf.Session() as sess:
saver.restore(sess, DNN_load_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
sess.close()
tf.reset_default_graph()
return(test_p1)
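    # DNNROM_prediction rebuilds the same fixed 32/200/200/256 graph that
    # produced DNN_load_file; saver.restore matches variables by name, so a
    # checkpoint from a different architecture will not load.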
#%% DNN classification one layer, train DNN classifier, and save DNN
def DNNCls(self,maxiteration,trainX_nrm,trainY_nrm,input_num_units,DNNcls_save_file):
hidden_num_units = 500
output_num_units = 2
seed=88
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
print("DNN classification training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
# print("prediction for testing data set size ", testX_nrm.shape[0]," * ",testX_nrm.shape[1])
# define placeholders
xc = tf.placeholder(tf.float32, [None, input_num_units])
yc = tf.placeholder(tf.float32, [None, output_num_units])
# set remaining variables
epochs = maxiteration
batch_size = int(X_train.shape[0]/2) #1500
learning_rate = 0.001
### define weights and biases of the neural network
        weights = {
            'hidden': tf.Variable(tf.random.uniform([input_num_units, hidden_num_units], -1, 1, seed=seed)),
            #'hidden': tf.Variable(tf.random.normal([input_num_units, hidden_num_units], 0, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([hidden_num_units, output_num_units], 0, 0.1, seed=seed))
        }
        biases = {
            #'hidden': tf.Variable(tf.random.normal([hidden_num_units], seed=seed)),
            'hidden': tf.Variable(tf.random.uniform([hidden_num_units], -1, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([output_num_units], seed=seed))
        }
#
hidden_layer = tf.add(tf.matmul(xc, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.sigmoid(hidden_layer)
tf.summary.histogram("weights_hidden",weights['hidden'])
tf.summary.histogram("biases_hidden",biases['hidden'])
tf.summary.histogram("layer_hidden", hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']
tf.summary.histogram("weights_output",weights['output'])
tf.summary.histogram("biases_output",biases['output'])
tf.summary.histogram("layer_output", output_layer)
#
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output_layer, labels=yc))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
pred=output_layer
#tf.summary.scalar('cost',cost)
init = tf.global_variables_initializer()
#write this after all the summary
#merged = tf.summary.merge_all()
#writer = tf.summary.FileWriter(cwd)
        # convert output scalars to one-hot vectors, see https://stackoverflow.com/questions/43543594/label-scalar-into-one-hot-in-tensorr-flow-code
def dense_to_one_hot(labels_dense, num_classes=2):
"""Convert class labels from scalars to one-hot vectors"""
num_labels = labels_dense.shape[0]
#index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
for ii in range(num_labels):
labels_one_hot[ii,int(labels_dense[ii])]=1
#labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
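        # e.g. dense_to_one_hot(np.array([0, 1, 1])) ->
        #      [[1., 0.], [0., 1.], [0., 1.]]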
prev_cost=0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
avg_cost = 0
total_batch = int(X_train.shape[0]/batch_size)
for i in range(total_batch):
batch_x = X_train[i*batch_size:(i+1)*batch_size,]
batch_y = y_train[i*batch_size:(i+1)*batch_size,]
batch_y = dense_to_one_hot(batch_y)
_, c = sess.run([optimizer, cost], feed_dict = {xc: batch_x, yc: batch_y})
avg_cost += c / total_batch
#write tensorboard summary
#summary_avg_cost = tf.Summary()
#summary_avg_cost.value.add(tag="avg_cost", simple_value=avg_cost)
#writer.add_summary(summary_avg_cost, epoch)
#writer.add_summary(summary, epoch)
                # find predictions on the validation set; argmax returns the
                # index of the predicted category (works for >2 classes too)
pred_temp = tf.equal(tf.argmax(output_layer, 1), tf.argmax(yc, 1))
# pred_temp2= tf.argmax(output_layer, 1)
accuracy = tf.reduce_mean(tf.cast(pred_temp, "float"))
val_acc=accuracy.eval({xc: val_x, yc: dense_to_one_hot(val_y)})
# test_acc=accuracy.eval({xc: testX_nrm, yc: dense_to_one_hot(testY_nrm)})
#print ("Validation Accuracy:", accuracy.eval({x: val_x, y: dense_to_one_hot(val_y)}))
if epoch == epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 :print ('Epoch:', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost),
" Validation accuracy:", val_acc," ")
if epoch %2000 ==0 and val_acc<=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_acc
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
saver.save(sess, DNNcls_save_file)
sess.close()
tf.reset_default_graph()
return(val_acc, values)
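    # Note: DNNCls trains a single-hidden-layer softmax classifier on 0/1
    # labels and returns (validation accuracy, trained variables); the
    # checkpoint written to DNNcls_save_file is what DNNCls_restore and
    # DNNCls_prediction later reload by variable name.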
#%% DNN classification one layer, load in a trained DNN, and continue training
def DNNCls_restore(self,maxiteration,trainX_nrm,trainY_nrm,input_num_units,DNNcls_load_file,DNNcls_save_file):
# input_num_units = 55
hidden_num_units = 500
output_num_units = 2
seed=88
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
print("DNN classification training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
# print("prediction for testing data set size ", testX_nrm.shape[0]," * ",testX_nrm.shape[1])
# define placeholders
xc = tf.placeholder(tf.float32, [None, input_num_units])
yc = tf.placeholder(tf.float32, [None, output_num_units])
# set remaining variables
epochs = maxiteration
batch_size = int(X_train.shape[0]/2) #1500
learning_rate = 0.001
### define weights and biases of the neural network
        weights = {
            'hidden': tf.Variable(tf.random.uniform([input_num_units, hidden_num_units], -1, 1, seed=seed)),
            #'hidden': tf.Variable(tf.random.normal([input_num_units, hidden_num_units], 0, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([hidden_num_units, output_num_units], 0, 0.1, seed=seed))
        }
        biases = {
            #'hidden': tf.Variable(tf.random.normal([hidden_num_units], seed=seed)),
            'hidden': tf.Variable(tf.random.uniform([hidden_num_units], -1, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([output_num_units], seed=seed))
        }
#
hidden_layer = tf.add(tf.matmul(xc, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.sigmoid(hidden_layer)
tf.summary.histogram("weights_hidden",weights['hidden'])
tf.summary.histogram("biases_hidden",biases['hidden'])
tf.summary.histogram("layer_hidden", hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']
tf.summary.histogram("weights_output",weights['output'])
tf.summary.histogram("biases_output",biases['output'])
tf.summary.histogram("layer_output", output_layer)
#
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output_layer, labels=yc))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
pred=output_layer
#tf.summary.scalar('cost',cost)
#init = tf.global_variables_initializer()
#write this after all the summary
#merged = tf.summary.merge_all()
#writer = tf.summary.FileWriter(cwd)
        # convert output scalars to one-hot vectors, see https://stackoverflow.com/questions/43543594/label-scalar-into-one-hot-in-tensorr-flow-code
def dense_to_one_hot(labels_dense, num_classes=2):
"""Convert class labels from scalars to one-hot vectors"""
num_labels = labels_dense.shape[0]
#index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
for ii in range(num_labels):
labels_one_hot[ii,int(labels_dense[ii])]=1
#labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
prev_cost=0
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, DNNcls_load_file)
#sess.run(init)
for epoch in range(epochs):
avg_cost = 0
total_batch = int(X_train.shape[0]/batch_size)
for i in range(total_batch):
batch_x = X_train[i*batch_size:(i+1)*batch_size,]
batch_y = y_train[i*batch_size:(i+1)*batch_size,]
batch_y = dense_to_one_hot(batch_y)
_, c = sess.run([optimizer, cost], feed_dict = {xc: batch_x, yc: batch_y})
avg_cost += c / total_batch
#write tensorboard summary
#summary_avg_cost = tf.Summary()
#summary_avg_cost.value.add(tag="avg_cost", simple_value=avg_cost)
#writer.add_summary(summary_avg_cost, epoch)
#writer.add_summary(summary, epoch)
                # find predictions on the validation set; argmax returns the
                # index of the predicted category (works for >2 classes too)
pred_temp = tf.equal(tf.argmax(output_layer, 1), tf.argmax(yc, 1))
# pred_temp2= tf.argmax(output_layer, 1)
accuracy = tf.reduce_mean(tf.cast(pred_temp, "float"))
val_acc=accuracy.eval({xc: val_x, yc: dense_to_one_hot(val_y)})
# test_acc=accuracy.eval({xc: testX_nrm, yc: dense_to_one_hot(testY_nrm)})
#print ("Validation Accuracy:", accuracy.eval({x: val_x, y: dense_to_one_hot(val_y)}))
if epoch == epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 :
print ('Epoch:', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost)," Validation accuracy:", val_acc," ")
if epoch %2000 ==0 and val_acc<=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_acc
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
saver.save(sess, DNNcls_save_file)
sess.close()
tf.reset_default_graph()
return(values)
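    # Unlike DNNCls, the restore variant returns only the trained variables;
    # evaluate validation accuracy separately if it is needed.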
#%% DNN classification one layer, load in a trained DNN, and do prediction for classification
def DNNCls_prediction(self,testX_nrm,input_num_units,DNNcls_load_file):
# input_num_units = 55
hidden_num_units = 500
output_num_units = 2
seed=88
# split_size = int(trainX_nrm.shape[0]*0.8)
# X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
# y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
print("DNN classification prediction start ...")
# print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
# print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size ", testX_nrm.shape[0]," * ",testX_nrm.shape[1])
# define placeholders
xc = tf.placeholder(tf.float32, [None, input_num_units])
yc = tf.placeholder(tf.float32, [None, output_num_units])
# set remaining variables
# epochs = 5000
# batch_size = int(X_train.shape[0]/2) #1500
# learning_rate = 0.001
### define weights and biases of the neural network
        weights = {
            'hidden': tf.Variable(tf.random.uniform([input_num_units, hidden_num_units], -1, 1, seed=seed)),
            #'hidden': tf.Variable(tf.random.normal([input_num_units, hidden_num_units], 0, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([hidden_num_units, output_num_units], 0, 0.1, seed=seed))
        }
        biases = {
            #'hidden': tf.Variable(tf.random.normal([hidden_num_units], seed=seed)),
            'hidden': tf.Variable(tf.random.uniform([hidden_num_units], -1, 1, seed=seed)),
            'output': tf.Variable(tf.random.normal([output_num_units], seed=seed))
        }
#
hidden_layer = tf.add(tf.matmul(xc, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.sigmoid(hidden_layer)
tf.summary.histogram("weights_hidden",weights['hidden'])
tf.summary.histogram("biases_hidden",biases['hidden'])
tf.summary.histogram("layer_hidden", hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']
tf.summary.histogram("weights_output",weights['output'])
tf.summary.histogram("biases_output",biases['output'])
tf.summary.histogram("layer_output", output_layer)
#
# cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output_layer, labels=yc))
# optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
pred=output_layer
#tf.summary.scalar('cost',cost)
#init = tf.global_variables_initializer()
#write this after all the summary
#merged = tf.summary.merge_all()
#writer = tf.summary.FileWriter(cwd)
        # convert output scalars to one-hot vectors, see https://stackoverflow.com/questions/43543594/label-scalar-into-one-hot-in-tensorr-flow-code
def dense_to_one_hot(labels_dense, num_classes=2):
"""Convert class labels from scalars to one-hot vectors"""
num_labels = labels_dense.shape[0]
#index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
for ii in range(num_labels):
labels_one_hot[ii,int(labels_dense[ii])]=1
#labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
# prev_cost=0
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, DNNcls_load_file)
#sess.run(init)
test_p1=sess.run(pred, feed_dict={xc: testX_nrm})
test_p0=sess.run(tf.argmax(test_p1,1))
#saver.save(sess, r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\ClsDNN')
sess.close()
tf.reset_default_graph()
return(test_p0)
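    # test_p0 holds the argmax class index (0 or 1) per test row, e.g.
    # (hypothetical checkpoint path):
    #   labels = self.DNNCls_prediction(testX_nrm, testX_nrm.shape[1], 'ClsDNN_ckpt')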
def summarize_SimuResult(self, source_path, indcase, exclude_case = 1, display_detail = False):
'''
The function extracts simulation results
exclude_case = -1: all cases included
exclude_case = 0: exclude failed cases only
exclude_case = 1: exclude both failed and non-converged cases
'''
print('############################################################\
\nSummarize simulation results\
\n############################################################')
## Step 1: load simulation outputs to Y4kriging
numcase4kriging = 0 # number of cases for kriging
indcase4kriging = [] # index of cases for kriging, start from 1
S4kriging = None # simulation inputs for kriging
Y4kriging = None # simulation outputs for kriging
for icase in indcase:
# load SOFC_MP_ROM.dat to df1
strcase = 'Case'+str(icase-1)+'Value'
inputfilename = source_path+'/Cases/Case'+str(icase-1).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
if len(lines) == 0:
continue #print('Empty case')
if lines[1].strip() == '#FAILED':
continue #print('"preprocessor" failed case')
df0 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
df1 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
for j in range(len(lines)):
if j>1: # skip first two lines
str01 = lines[j].split('=')
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
if len(str01) == 1: continue
# convert variables in SOFC_MP_ROM.dat to xxx_xxx format
str_tmp = str01[0].strip().split()
str_tmp = '_'.join(str_tmp)
df0['Name']=str_tmp
df0[strcase]=float(str01[1])
if j==2:
df1["Name"]=df0["Name"]
df1[strcase]=df0[strcase]
else:
df1=pd.concat([df1,df0],sort=False, ignore_index=True)
# exclude failed or non-converged cases
                    if int(df1.loc[0, strcase]) >= exclude_case:
numcase4kriging += 1
indcase4kriging.append(icase)
if numcase4kriging == 1:
Y4kriging = df1
else:
Y4kriging = pd.concat([Y4kriging, df1[strcase]], sort=False, axis=1)
## Step 2: load simulation inputs to S4kriging
inputfilename = source_path+'/LHS.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
for j in range(len(lines)):
if j == 1:
list_tmp = lines[j].strip().split()
list_tmp = list_tmp[2:] # 0: case; 1: No.
df2 = pd.DataFrame(list_tmp,columns=['Name'])
if j > 1:
list_tmp = lines[j].strip().split()
strcase = 'Case'+str(int(list_tmp[0])-1)+'Value'
list_tmp = list_tmp[1:] # 0: case No.
df2[strcase] = list_tmp
S4kriging = df2
## Step 3: display simulation input and output
if exclude_case == 1:
print('Converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
elif exclude_case == 0:
print('Converged and non-converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
else:
print('Simulation results are summarized from '+ str(numcase4kriging)+' cases:')
print(*indcase4kriging)
print('\nSelect from the following input variables for training:')
for i in range(S4kriging.index.size):
print(i+1, ':', S4kriging.loc[i, 'Name'], end = '\t\n')
print('\nSelect from the following output variables for training:')
for i in range(Y4kriging.index.size):
print(i+1, ':', Y4kriging.loc[i, 'Name'], end = '\t\n')
        if display_detail:
print('\n')
print(S4kriging)
print('\n')
print(Y4kriging)
## Step 4: create allResults.dat
indS = list(S4kriging.index)
indY = list(Y4kriging.index)
indS = [x+1 for x in indS]
indY = [x+1 for x in indY]
        if len(indcase4kriging) == 0 or len(indS) == 0 or len(indY) == 0:
            print('Error: No data available for training')
            return
with open(self.allresultsFile, 'w') as f:
for i in indS:
f.write(S4kriging.loc[i-1, 'Name'] + '\t')
for i in indY:
f.write(Y4kriging.loc[i-1, 'Name'] + '\t')
f.write('\n')
for i in indcase4kriging:
strcase = 'Case'+str(i-1)+'Value'
for j in indS:
f.write('{:11.4E}\t'.format(float(S4kriging.loc[j-1, strcase])))
for j in indY:
f.write('{:11.4E}\t'.format(float(Y4kriging.loc[j-1, strcase])))
f.write('\n')
with open(self.allresults_infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
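        # The two files written above are the hand-off to buildROM():
        #   allresultsFile      - tab-separated header of input+output names,
        #                         then one %11.4E-formatted row per kept case
        #   allresults_infoFile - 'input_col\toutput_col' header plus the counts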
def file_read(self, FileName):
'''
This function loads the kriginginputFile,
infoFile and predictioninputFile
'''
namearray = []
valuearray = []
with open(FileName) as f:
i = 0
for line in f.readlines():
if i == 0:
namearray = line.strip().split()
else:
linestr = line.strip().split()
linenum = [float(lineele) for lineele in linestr]
valuearray.append(linenum)
i += 1
return namearray, np.array(valuearray)
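    # Typical use (first row holds names, numeric rows follow):
    #   names, values = self.file_read(self.allresultsFile)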
def variables(self):
print('input variables:')
for i in range(len(self.Sname)):
print(i+1, ':', self.Sname[i], end = '\t\n')
print('\noutput variables:')
for i in range(len(self.Yname)):
print(i+1, ':', self.Yname[i], end = '\t\n')
def variable_options(self, display = False):
names_input = [
"Average_CellVoltage",
"Average_CurrentDensity",
"BackEnvironmentT",
"BottomEnvironmentT",
"CellFuelFlowRate",
"CellOxidantFlowRate",
"FrontEnvironmentT",
"Fuel_Utilization",
"FuelH2",
"FuelH2O",
"FuelCO",
"FuelCO2",
"FuelCH4",
"FuelN2",
"FuelTemperature",
"FuelTOnTop",
"FuelRecyclePercent",
"FuelHTXEffectiveness",
"FuelNGTemperature",
"FuelNGHTXDeltaT",
"Internal_Reforming",
"nCells",
"Oxidant_Recirculation",
"OxidantRecyclePercent",
"OxygenToCarbon_Ratio",
"OxidantO2",
"OxidantN2",
"OxidantH2O",
"OxidantCO2",
"OxidantAr",
"OxidantTemperature",
"OxidantTOnTop",
"PreReform",
"SideEnvironmentT",
"Simulation_Option",
"Stack_Fuel_Utilization",
"Stack_Oxidant_Utilization",
"StackFuelFlowRate",
"StackFuelFlowRateH2O",
"StackFuelFlowRateCO",
"StackFuelFlowRateCO2",
"StackFuelFlowRateCH4",
"StackFuelFlowRateH2",
"StackFuelFlowRateN2",
"StackOxidantFlowRate",
"StackOxidantFlowRateO2",
"StackOxidantFlowRateN2",
"StackOxidantFlowRateH2O",
"StackOxidantFlowRateCO2",
"StackOxidantFlowRateAr",
"StackVoltage",
"SystemPressure",
"TopEnvironmentT",
"VGRRate",
"VGRTemperature",
"VGRH2OPassRate",
"VGRH2PassRate",
"VGRCO2CaptureRate",
"VGRCOConvertRate"
]
units_input = [
"V",
"A/m^2",
"C",
"C",
"mol/s",
"mol/s",
"C",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"%",
"-",
"C",
"C",
"-",
"-",
"-",
"%",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"-",
"C",
"-",
"-",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"V",
"atm",
"C",
"-",
"C",
"-",
"-",
"-",
"-"
]
names_output = [
'SimulationStatus',
'Stack_Voltage',
'Avg_cell_voltage',
'Stack_Current',
'Avg_current_density',
'Max_current_density',
'Min_current_density',
'Avg_Cell_Temperature',
'Max_Cell_Temperature',
'Min_Cell_Temperature',
'Delta_Cell_Temperature',
'Outlet_Fuel_Temperature',
'Delta_Fuel_Temperature',
'Outlet_Air_Temperature',
'Delta_Air_Temperature',
'Air_Heat_Exchanger_Effectiveness',
'Fuel_Utilization',
'Air_Utilization',
'Outlet_Fuel_Flowrate',
'Outlet_Fuel_H2',
'Outlet_Fuel_H2O',
'Outlet_Fuel_CO',
'Outlet_Fuel_CO2',
'Outlet_Fuel_CH4',
'Outlet_Fuel_N2',
'Outlet_Air_Flowrate',
'Outlet_Air_O2',
'Outlet_Air_N2',
'Outlet_Air_H2O',
'Outlet_Air_CO2',
'Outlet_Air_Ar',
'Total_Power',
'Air_Enthalpy_Change',
'Fuel_Enthalpy_Change',
'External_Heat',
'Electrical_Efficiency',
'Stack_Efficiency',
'Air_Inlet_Temperature',
'FSI_Temperature',
'FSI_Flowrate',
'FSI_H2_MF',
'FSI_H2O_MF',
'FSI_CO_MF',
'FSI_CO2_MF',
'FSI_CH4_MF',
'FSI_N2_MF',
'Fuel_Temperature_after_Mix',
'Fuel_Temperature_before_Gibbs_Reactor',
'Fuel_Heat_Exchanger_Effectiveness'
]
units_output = [
'-',
'V',
'V',
'A',
'A/m2',
'A/m2',
'A/m2',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'W',
'W',
'W',
'W',
'-',
'-',
'K',
'K',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'K',
'K',
'-'
]
        if display:
print('Options of input variable:')
for i in range(len(names_input)):
print(i+1, ':', names_input[i]+', ['+units_input[i]+']', end = '\t\n')
print('Options of output variable:')
for i in range(len(names_output)):
print(i+1, ':', names_output[i]+', ['+units_output[i]+']', end = '\t\n')
return names_input, units_input, names_output, units_output
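    # The four parallel lists pair each variable with its unit, e.g.
    # names_input[0] 'Average_CellVoltage' is measured in units_input[0] 'V'.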
def buildROM(self, indS = None, indY = None, frac4ROM = 80, filter_enabled = False, z_thres = 5):
'''
        The function builds the ROM for the selected input/output variables
'''
print('############################################################\
\nBuild the ROM\
\n############################################################')
if not os.path.exists(self.allresultsFile) or not os.path.exists(self.allresults_infoFile):
sys.exit('Code terminated: essential files missing')
## Step -1: train the classifier
SYname, SYvalue = self.file_read(self.allresultsFile)
infoname, infovalue = self.file_read(self.allresults_infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
        if indS is None: indS = list(range(1, S_col+1))
        if indY is None: indY = list(range(1, Y_col+1))
indS_index = [i-1 for i in indS]
indY_index = [i-1 for i in indY]
if SYname[S_col] == 'SimulationStatus':
cls_enabled = True
else:
cls_enabled = False
        if cls_enabled:
if 1 in indY: indY.remove(1) # remove SimulationStatus
if 0 in indY_index: indY_index.remove(0)
for i in range(S_row):
if SYvalue[i, S_col] == -1: SYvalue[i, S_col] = 0
temp = SYvalue[:, 0:S_col+1]
S_train_cls = temp[:, indS_index]
Y_train_cls = temp[:, S_col]
meanS_cls = S_train_cls.mean(axis=0)
stdS_cls = S_train_cls.std(axis=0)
S_train_nrm_cls = (S_train_cls-meanS_cls)/stdS_cls
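# z-score normalization with statistics over the full data set; note that in
# prediction() the classifier is fed inputs normalized with the ROM training
# statistics instead, which implicitly assumes the two sets of statistics are close.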
Y_train_cls = Y_train_cls.astype(int)
maxiteration = 50000
trainingoutput_file = self.outtrainingFile
DNNcls_load_file = trainingoutput_file.replace(".dat", "")+'_cls'
DNNcls_save_file = DNNcls_load_file
# Initial training
acc_val, cls_values = self.DNNCls(maxiteration, S_train_nrm_cls, Y_train_cls, len(indS), DNNcls_save_file)
print("Classifier accuracy: ", acc_val)
# Restore DNN, continue training
#cls_values = self.DNNCls_restore(maxiteration, S_train_nrm_cls, Y_train_cls, len(indS), DNNcls_load_file, DNNcls_save_file)
## Step 0: filter the noise and remove all failed/unconverged cases
if cls_enabled:
SYvalue_cov = SYvalue[SYvalue[:, S_col] == 1, :]
else:
SYvalue_cov = SYvalue
if filter_enabled:
SY_row_rm = []
for j in indY:
tmp_data = SYvalue_cov[:, S_col+j-1]
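# Repeatedly z-score the column and replace entries with |z| > z_thres by the
# column mean until no outliers remain; the flagged rows are deleted afterwards.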
while True:
z = np.abs(stats.zscore(tmp_data, axis = 0))
result = np.where(z > z_thres)
index = list(result[0])
# line removal list
if len(index) == 0: break
SY_row_rm += index
SY_row_rm = list(dict.fromkeys(SY_row_rm))
# replace outliers with mean
tmp_data[SY_row_rm] = np.mean(tmp_data)
# remove the flagged rows according to SY_row_rm
SYvalue_new = np.delete(SYvalue_cov, SY_row_rm, axis = 0)
print('Noise filter: trim ' + str(len(SY_row_rm)) + ' rows from a total of ' + str(len(SYvalue_cov)) + ' rows')
else:
SYvalue_new = SYvalue_cov
## Step 1: load all simulation results
[S_row, Y_row, S_col, Y_col] = [len(SYvalue_new), len(SYvalue_new), int(infovalue[0,0]), int(infovalue[0,1])]
S = copy.deepcopy(SYvalue_new[:, :S_col])
Y = copy.deepcopy(SYvalue_new[:, S_col:])
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
## Step 2: compute istep, numcrossvali, rndnumberlist
if frac4ROM >= 0:
numtraining = int(S_row*frac4ROM/100.0)
numcrossvali = S_row-numtraining
if numtraining < (2**len(indS)):
print('warning: data set to build the ROM is not large enough')
if numcrossvali > 0:
istep = int((S_row)/numcrossvali)
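# Hold out every istep-th sample for cross-validation (rndnumberlist); the
# remaining samples (restlist) form the training set.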
rndnumberlist = []
restlist = list(range(S_row))
for i in range(1, numcrossvali+1):
rndnumberlist.append(i*istep-1)
restlist = [i for i in restlist if i not in rndnumberlist]
else:
sys.exit('Code terminated: the fraction of training dataset cannot be 100%')
else:
numtraining = S_row-1000
numcrossvali = S_row-numtraining
rndnumberlist = list(range(numtraining, S_row))
restlist = list(range(numtraining))
## Step 3: write info.dat, inTraining.dat and inCrossVali.dat
with open(self.infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
f1 = open(self.intrainingFile, 'w')
f3 = open(self.incrossvaliFile, 'w')
for i in indS:
f1.write(Sname[i-1] + '\t')
f3.write(Sname[i-1] + '\t')
for i in indY:
f1.write(Yname[i-1] + '\t')
f3.write(Yname[i-1] + '\t')
f1.write('\n')
f3.write('\n')
for i in range(S_row):
if i in rndnumberlist:
for j in indS:
f3.write('{:11.4E}\t'.format(S[i, j-1]))
for j in indY:
f3.write('{:11.4E}\t'.format(Y[i, j-1]))
f3.write('\n')
else:
for j in indS:
f1.write('{:11.4E}\t'.format(S[i, j-1]))
for j in indY:
f1.write('{:11.4E}\t'.format(Y[i, j-1]))
f1.write('\n')
f1.close()
f3.close()
## Step 4: perform training and prediction
temp = S[restlist, :]
S_train = temp[:, indS_index]
temp = S[rndnumberlist, :]
S_vali = temp[:, indS_index]
temp = Y[restlist, :]
Y_train = temp[:, indY_index]
temp = Y[rndnumberlist, :]
Y_vali = temp[:, indY_index]
meanS=S_train.mean(axis=0)
stdS=S_train.std(axis=0)
meanY=Y_train.mean(axis=0)
stdY=Y_train.std(axis=0)
S_train_nrm=(S_train-meanS)/stdS
Y_train_nrm=(Y_train-meanY)/stdY
S_vali_nrm=(S_vali-meanS)/stdS
maxiteration = 50000
trainingoutput_file = self.outtrainingFile
DNN_load_file = trainingoutput_file.replace(".dat", "")
DNN_save_file = DNN_load_file
DNNsize = [32, 200, 200, 256]
# Initial training
Y_vali_nrm_pre, model_values = self.DNNROM2(maxiteration,
S_train_nrm, Y_train_nrm, S_vali_nrm,
len(indS), len(indY), DNN_save_file, DNNsize)
# Restore DNN, continue training
# Y_vali_nrm_pre, model_values = self.DNNROM_restore2(maxiteration, S_train_nrm, Y_train_nrm, S_vali_nrm, len(indS), len(indY), DNN_load_file, DNN_save_file, DNNsize)
# Load a DNN, and prediction
#Y_vali_nrm_load_pre = self.DNNROM_prediction(S_vali_nrm, len(indS), len(indY), DNN_load_file)
## Step 5: save built ROM
trainingoutput_file = self.outtrainingFile
trainingoutput_file_cls = trainingoutput_file.replace(".dat", "")+'_cls.dat'
if cls_enabled:
w1, w2, b1, b2 = cls_values
with open(trainingoutput_file_cls, 'w') as f:
for name, values in [('w1', w1), ('w2', w2)]:
f.write(name + '\n')
[row, col] = values.shape
for i in range(row):
for j in range(col-1):
f.write(str(values[i, j]) + ' ')
f.write(str(values[i, col-1]) + '\n')
f.write('\n')
for name, values in [('b1', b1), ('b2', b2)]:
f.write(name + '\n')
for v in values:
f.write(str(v) + '\n')
f.write('\n')
f.write('end\n')
w1, w2, w3, w4, w5, b1, b2, b3, b4, b5 = model_values
with open(self.outtrainingFile, 'w') as f:
for name, values in [('w1', w1), ('w2', w2), ('w3', w3), ('w4', w4), ('w5', w5)]:
f.write(name + '\n')
[row, col] = values.shape
for i in range(row):
for j in range(col-1):
f.write(str(values[i, j]) + ' ')
f.write(str(values[i, col-1]) + '\n')
f.write('\n')
for name, values in [('b1', b1), ('b2', b2), ('b3', b3), ('b4', b4), ('b5', b5),
('meanS', meanS), ('meanY', meanY), ('stdS', stdS), ('stdY', stdY)]:
f.write(name + '\n')
for v in values:
f.write(str(v) + '\n')
f.write('\n')
f.write('end\n')
## Step 6: write to outCrossVali.dat
Y_vali_pre = Y_vali_nrm_pre*stdY+meanY
f0 = open(self.outcrossvaliFile, 'w')
for i in indY:
name = Yname[i-1]
f0.write(name + '\t')
f0.write('\n')
for i in range(len(rndnumberlist)):
for j in range(len(indY)):
f0.write('{:11.4E}\t'.format(Y_vali_pre[i,j]-Y_vali[i, j]))
f0.write('\n')
f0.close()
## Step 7: update global variables
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [len(restlist), len(restlist), len(indS), len(indY)]
self.S_norm = S_train_nrm
self.Y_norm = Y_train_nrm
self.S = S_train
self.Y = Y_train
[self.stdS, self.stdY, self.meanS, self.meanY] = [stdS, stdY, meanS, meanY]
Sname_new = [ Sname[i] for i in indS_index]
Yname_new = [ Yname[i] for i in indY_index]
self.Sname = Sname_new
self.Yname = Yname_new
## Step 8: write classifier accuracy and ROM prediction accuracy
int_95 = self.percent2intervl(95) # 95% confidence interval
trainingoutput_file = self.outtrainingFile
trainingoutput_accuracy = trainingoutput_file.replace(".dat", "")+'_acc.dat'
with open(trainingoutput_accuracy, 'w') as f:
if cls_enabled:
f.write('Classifier Accuracy: \n')
f.write(str(acc_val) + '\n')
f.write('ROM Accuracy (95% confidence interval): \n')
for i in range(len(Yname_new)):
f.write(Yname_new[i])
f.write('\t' + str(int_95[i]) + '\n')
print('End of code\n')
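# Minimal usage sketch (hypothetical names: `rom` stands for an instance of this
# class; the index lists, sample counts and bounds are placeholders):
#   rom.buildROM(indS=[1, 2], indY=[2, 3], frac4ROM=80)
#   rom.Generate_inprediction(numsample=100, listmin=[...], listmax=[...])
#   rom.prediction()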
def Generate_inprediction(self, numsample = None, listmin = None, listmax = None):
'''
The function generates the prediction input file by Latin Hypercube Sampling if it does not already exist
'''
print('############################################################\
\nGenerate prediction input\
\n############################################################')
# find input variable list Sname
SYname, SYvalue = self.file_read(self.intrainingFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_col, Y_col] = [int(infovalue[0,0]), int(infovalue[0,1])]
Sname = copy.deepcopy(SYname[:S_col])
# check if exists
filename = self.inpredictionFile
Create_handle = True
if os.path.exists(filename):
query = query_yes_no('Prediction input file already exists on the local machine, do you want to overwrite it?')
Create_handle = query
if Create_handle:
numvar = len(Sname)
listvar = Sname
if len(listmin) != numvar or len(listmax) != numvar:
sys.exit('Code terminated: the lengths of the variable/minimum/maximum lists do not match')
# LHS sampling
xlimits = np.transpose(np.vstack((listmin, listmax)))
sampling = LHS(xlimits = xlimits)
LHSvalue = sampling(numsample)
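# LHS is assumed to be smt.sampling_methods.LHS: constructed with a
# (numvar x 2) array of [min, max] bounds, it returns a (numsample x numvar)
# array of space-filling samples when called.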
# write prediction input
with open(filename, 'w') as f:
for name in Sname:
f.write(name + '\t')
f.write('\n')
for i in range(numsample):
for j in range(numvar):
f.write('{:11.4E}\t'.format(LHSvalue[i, j]))
f.write('\n')
print("Created prediciton input file")
print('End of code\n')
def prediction(self):
'''
This function predicts the outputs and MSEs
based on the trained model
'''
print('############################################################\
\nPredict Based on the trained model\
\n############################################################')
# # Step 0: check if outprediction.dat existing
# if os.path.exists(self.outpredictionFile):
# query = query_yes_no('prediction results already exist on the local machine, do you want to overwrite it?')
# if query == False: return
# Step 1: Load the training data S, Y and prediction data X
print('Step 1: Load the training data S, Y and prediction input data X')
SYname, SYvalue = self.file_read(self.intrainingFile)
Xname, Xvalue = self.file_read(self.inpredictionFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
# Step 1.5: Load the trained classifier
trainingoutput_file = self.outtrainingFile
if not os.path.exists(trainingoutput_file):
sys.exit('Code terminated: trained model missing')
trainingoutput_file_cls = trainingoutput_file.replace(".dat", "")+'_cls.dat'
cls_enabled = os.path.exists(trainingoutput_file_cls)
if not cls_enabled:
print('the trained model has no classifier; continuing without one')
if cls_enabled:
# parse the saved classifier into named blocks ('w1', 'w2', 'b1', 'b2')
with open(trainingoutput_file_cls) as f:
lines = f.readlines()
blocks = {}
current = None
for line in lines:
stripped = line.strip()
if not stripped or stripped == 'end':
continue
try:
row = [float(tok) for tok in stripped.split()]
blocks[current].append(row)
except ValueError:
current = stripped
blocks[current] = []
w1_cls = np.array(blocks['w1'])
w2_cls = np.array(blocks['w2'])
b1_cls = np.array(blocks['b1']).ravel()
b2_cls = np.array(blocks['b2']).ravel()
# Step 2: Load the trained model (outtrainingFile)
print('Step 2: Load the trained model (outtrainingFile)')
with open(self.outtrainingFile) as f:
lines = f.readlines()
# parse the saved ROM into named blocks (w1..w5, b1..b5, meanS, meanY, stdS, stdY)
blocks = {}
current = None
for line in lines:
stripped = line.strip()
if not stripped or stripped == 'end':
continue
try:
row = [float(tok) for tok in stripped.split()]
blocks[current].append(row)
except ValueError:
current = stripped
blocks[current] = []
w1 = np.array(blocks['w1'])
w2 = np.array(blocks['w2'])
w3 = np.array(blocks['w3'])
w4 = np.array(blocks['w4'])
w5 = np.array(blocks['w5'])
b1 = np.array(blocks['b1']).ravel()
b2 = np.array(blocks['b2']).ravel()
b3 = np.array(blocks['b3']).ravel()
b4 = np.array(blocks['b4']).ravel()
b5 = np.array(blocks['b5']).ravel()
meanS = np.array(blocks['meanS']).ravel()
meanY = np.array(blocks['meanY']).ravel()
stdS = np.array(blocks['stdS']).ravel()
stdY = np.array(blocks['stdY']).ravel()
# Step 3: Normalize S, Y, X
S = copy.deepcopy(SYvalue[:, :S_col])
Y = copy.deepcopy(SYvalue[:, S_col:])
X = copy.deepcopy(Xvalue)
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
[X_row, X_col] = X.shape
if X_col != S_col:
sys.exit('Code terminated: # of prediction input variables \
does not match # of given input variables')
S_nrm = (S - np.tile(meanS, [S_row, 1]))/np.tile(stdS, [S_row, 1])
Y_nrm = (Y - np.tile(meanY, [Y_row, 1]))/np.tile(stdY, [Y_row, 1])
X_nrm = (X - np.tile(meanS, [X_row, 1]))/np.tile(stdS, [X_row, 1])
# Step 3.5: perform prediction of SimulationStatus
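# Two-layer classifier evaluated by hand: a sigmoid hidden layer (w1_cls,
# b1_cls) followed by a linear output layer (w2_cls, b2_cls) producing one
# logit per class.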
if cls_enabled:
for j in range(X_row):
inputX_cls = X_nrm[j, :]
m1ba_cls = 1.0/(1.0 + np.exp(-(np.matmul(inputX_cls, w1_cls) + b1_cls)))
outputX_cls = np.matmul(m1ba_cls, w2_cls) + b2_cls
if j == 0:
Xy_cls = outputX_cls
else:
Xy_cls = np.vstack((Xy_cls, outputX_cls))
# convert the logits to class labels 0/1 via argmax
Xy_cls = np.argmax(Xy_cls, 1)
# print(len(Xy_cls))
# print(sum(Xy_cls))
# DNNcls_load_file = trainingoutput_file.replace(".dat", "")+'_cls'
# SimuStatus = self.DNNCls_prediction(X_nrm, S_col, DNNcls_load_file)
# print('Compare two methods of predictions:')
# print((Xy_cls==SimuStatus).all())
# Step 4: perform prediction
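# Manual feed-forward through the ROM: four sigmoid hidden layers (w1..w4,
# b1..b4) and a linear output layer (w5, b5); inputs/outputs are z-score
# normalized, so predictions are de-normalized with stdY and meanY.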
for j in range(X_row):
inputX = X_nrm[j, :]
a1 = 1.0/(1.0 + np.exp(-(np.matmul(inputX, w1) + b1)))
a2 = 1.0/(1.0 + np.exp(-(np.matmul(a1, w2) + b2)))
a3 = 1.0/(1.0 + np.exp(-(np.matmul(a2, w3) + b3)))
a4 = 1.0/(1.0 + np.exp(-(np.matmul(a3, w4) + b4)))
outputX_nrm = np.matmul(a4, w5) + b5
outputX = outputX_nrm*stdY + meanY
if j == 0:
Xy_nrm = outputX_nrm
Xy = outputX
else:
Xy_nrm = np.vstack((Xy_nrm, outputX_nrm))
Xy = np.vstack((Xy, outputX))
print('\tFinish Prediction - Xy')
# Copy to Global
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [S_row, Y_row, S_col, Y_col]
self.S_norm = S_nrm
self.Y_norm = Y_nrm
self.S = S
self.Y = Y
[self.stdS, self.stdY] = [stdS, stdY]
self.X = X
self.Xy = Xy
self.X_norm = X_nrm
self.Xy_norm = Xy_nrm
self.Sname = Sname
self.Yname = Yname
# Step 5: Write the predictions
print('Step 5: Write the predictions')
with open(self.outpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
if cls_enabled:
f.write('SimulationStatus\t')
for i in range(Y_col):
f.write(Yname[i] + '\t')
f.write('\n')
for i in range(X_row):
# write input variables
for j in range(S_col):
f.write('{:11.4E}\t'.format(X[i, j]))
# write simulation status
if cls_enabled:
f.write('{:11.4E}\t'.format(Xy_cls[i]))
# write output variables
for j in range(Y_col):
f.write('{:11.4E}\t'.format(Xy[i, j]))
f.write('\n')
print('End of code\n')
def percent2intervl(self, percentage, var = None):
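'''
The function converts a confidence percentage into a +/- error interval for
each output variable, using linear interpolation on the sorted
cross-validation errors (the same scheme as a linearly interpolated percentile)
'''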
print('############################################################\
\nPercentage to Confidence Interval\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence interval
interval_all = np.zeros((len(Yname),), dtype=np.float64)
if var is not None and var not in Yname:
print('The given variable cannot be found')
for i in range(len(Yname)):
err = np.sort(ERR[:, i])
N = len(err)
n = (N-1)*percentage/100.0 + 1
if n == 1:
interval = err[0]
elif n == N:
interval = err[N-1]
else:
k = int(n)
d = n-k
interval = err[k-1]+d*(err[k]-err[k-1])
interval_all[i] = interval
if var is None or Yname[i] == var:
print('For "' + str(Yname[i]) + '": '
+ '[' + Yunit[i] + ']'
+ '\n\t'
+ str(percentage) + '% confidence interval is '
+ '\u00B1' + '{:11.4E}\t'.format(interval))
print('End of code\n')
return interval_all
def intervl2percent(self, interval, var = None):
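'''
The function converts a +/- error interval into a confidence percentage for
the given output variable by inverting the interpolation used in percent2intervl
'''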
print('############################################################\
\nConfidence Interval to Percentage\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence percentage
percentage_all = np.zeros((len(Yname),), dtype=np.float64)
if var is not None and var not in Yname:
print('The given variable cannot be found')
for i in range(len(Yname)):
if var == Yname[i]:
err = np.sort(ERR[:, i])
N = len(err)
if interval <= err[0]:
percentage = 0
elif interval >= err[N-1]:
percentage = 1
else:
result = np.where(err > interval)
index = result[0]
k = index[0]
percentage = ((interval-err[k-1])/(err[k]-err[k-1])+k-1)/float(N-1)
percentage_all[i] = percentage
print('For "' + str(Yname[i]) + '": '
+ '[' + Yunit[i] + ']'
+ '\n\t\u00B1' + str(interval)
+ ' interval has a confidence of ' + str(round(percentage*100, 2)) + '%')
print('End of code\n')
return percentage_all
def plot_contour_2D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 2D contours of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets; 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate inPrediction4contour.dat
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_DNN.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_DNN.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
plt.figure(figsize=(17.5,6))
plt.subplot(1, 2, 1)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x1, y1, z1, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x1, y1, z1, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
#plt.colorbar().set_label(label='a label',size=15,weight='bold')
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.subplot(1, 2, 2)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x2, y2, z2, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x2, y2, z2, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption:
figurename = '2D_contour.png'
plt.savefig(figurename)
def plot_contour_3D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 3D surfaces of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets; 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate inPrediction4contour.dat
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_kriging.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_kriging.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(18.5,6))
ax = fig.add_subplot(1, 2, 1, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x1, y1, z1, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
ax = fig.add_subplot(1, 2, 2, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x2, y2, z2, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption:
figurename = '3D_contour.png'
plt.savefig(figurename)
def plot_box(self, xvariable, yvariable, saveoption = False):
'''
The function draws a box plot, which can help with sensitivity studies
'''
# convert to pandas DataFrames
S = pd.DataFrame(data = self.S, columns = self.Sname, dtype = 'float')
Y = pd.DataFrame(data = self.Y, columns = self.Yname, dtype = 'float')
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_output.index(yvariable)
yunit = units_output[tempindex]
# generate box plot data
x = S[[xvariable]]
y = Y[[yvariable]]
min_x = min(x.values)
max_x = max(x.values)
x = round((x-min_x)/((max_x-min_x)/9), 0)*((max_x-min_x)/9)+min_x
x = round(x, 2)
#xy = pd.concat([x, y], axis = 1, sort = False)
#print(x.sort_values(by = ['Average_CurrentDensity']))
#print(xy)
# box plot
plt.figure(figsize=(18.5,6))
sns.set_context("paper", font_scale=3)
sns.set_style('ticks')
bplot = sns.boxplot(y=y[yvariable], x=x[xvariable],
color = 'yellow', width = 0.5)
bplot = sns.swarmplot(y=y[yvariable], x=x[xvariable],
color = 'black', alpha = 0.5)
sns.axes_style()
bplot.axes.set_title('Design-response sites', fontsize = 25)
bplot.set_xlabel(xvariable+', ['+xunit+']', fontsize = 25)
bplot.set_ylabel(yvariable+', ['+yunit+']', fontsize = 25)
bplot.tick_params(labelsize = 25)
plt.show()
# save option
if saveoption:
figurename = 'boxplot.png'
plt.savefig(figurename)
class PhyDNN():
def __init__(self, work_path,
allresultsFile = 'allResults.dat',
allresults_infoFile = 'allResults_info.dat',
intrainingFile = 'inTraining_Phy.dat',
infoFile = 'info_Phy.dat',
outtrainingFile = 'outTraining_Phy.dat',
inpredictionFile = 'inPrediction_Phy.dat',
outpredictionFile = 'outPrediction_Phy.dat',
incrossvaliFile = 'inCrossVali_Phy.dat',
outcrossvaliFile = 'outCrossVali_Phy.dat'):
self.work_path = work_path
self.allresultsFile = work_path + '/' + allresultsFile
self.allresults_infoFile = work_path + '/' + allresults_infoFile
self.intrainingFile = work_path + '/' + intrainingFile
self.infoFile = work_path + '/' + infoFile
self.outtrainingFile = work_path + '/' + outtrainingFile
self.inpredictionFile = work_path + '/' + inpredictionFile
self.outpredictionFile = work_path + '/' + outpredictionFile
self.incrossvaliFile = work_path + '/' + incrossvaliFile
self.outcrossvaliFile = work_path + '/' + outcrossvaliFile
self.Sname = None
self.Yname = None
self.S_norm = None
self.Y_norm = None
self.X_norm = None
self.Xy_norm = None
self.S = None
self.Y = None
self.X = None
self.Xy = None
self.MSE = None
self.S_row = 0
self.Y_row = 0
self.S_col = 0
self.Y_col = 0
self.stdS = None
self.stdY = None
self.meanS = None
self.meanY = None
def NGFC_ccs(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize):
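'''
Physics-based mass-balance calculation for an NGFC system with carbon capture
(ported from VB/C# code, judging by the comments). Argument meanings inferred
from the calculations below: J = current density, FU = fuel utilization,
AU = air utilization, OCR = oxygen-to-carbon ratio, IR = internal reformation
fraction, Arec = air recirculation fraction, PreReform = pre-reformed CH4
fraction, cellsize = cell area (cm2)
'''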
Nspecies = 11
MW_fuel = np.arange(Nspecies,dtype=np.float64) ## molecular weights
NG_fin = np.arange(Nspecies,dtype=np.float64) ## hardcoded inlet fuel (NG) species
NG_mfin = np.arange(Nspecies,dtype=np.float64) ## fuel species from NG_fin[] turned into fractions
std_ain = np.arange(Nspecies,dtype=np.float64) ## standard air in
splt_ain = np.arange(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.arange(Nspecies,dtype=np.float64) ##recirculation fuel species? what unit?
mix_refin = np.arange(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox = np.arange(Nspecies,dtype=np.float64) ##intermediate fuel species, assuming complete oxidation?
mix_refout=np.arange(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.arange(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2; no CH4. Updated in the iteration loop
stack_mix = np.arange(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 1: all higher hydrocarbons reformed away
pref_CH4 = np.arange(Nspecies,dtype=np.float64) ##After PreReformer step 2: CH4 partially reformed, by the PreReform fraction
##this leads to output SOFC_Fin[]
cell_ref = np.arange(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.arange(Nspecies,dtype=np.float64) ##
cell_exit = np.arange(Nspecies,dtype=np.float64)
cell_exhaust = np.arange(Nspecies,dtype=np.float64)
NG_in = np.arange(Nspecies,dtype=np.float64)
vartemp = np.arange(Nspecies,dtype=np.float64)
tester = np.arange(Nspecies,dtype=np.float64)
pref_CH4OLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD = np.arange(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.arange(Nspecies,dtype=np.float64)
stack_amix = np.arange(Nspecies,dtype=np.float64)
stack_arecirc = np.zeros(Nspecies,dtype=np.float64) # must start at zero: read in the iteration loop before it is first assigned
stack_arecircOLD = np.zeros(Nspecies,dtype=np.float64)
cell_aexit = np.arange(Nspecies,dtype=np.float64)
cell_aexhaust = np.arange(Nspecies,dtype=np.float64)
SOFC_Ain = np.arange(5,dtype=np.float64)
Fresh_Ain = np.arange(5,dtype=np.float64)
stack_fin = np.arange(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#//why Const_Convert=3600 * 2.20462 / 1000, making it SLPM (should only *60)? 3600 = seconds per hour, NOT molar volume=22.4 (liters/mole).
#// 2.20462=1/0.454, from kilograms to lbs. /1000 converts grams to kilograms because MW_fuel[] values are in grams?
#//
#// but FU_REF1 and FU_REF2 are both very local, only to calculate FU_REF
#// FU_ stands for fuel utilization?
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF: dimensionless - the effective fuel utilization,
#// FU_REF = FU * NG_flowrate * (fueleqv - 2 * 0.44 * ExtReform * Sum(NG_mfin[]*MW_fuel[]) / (0.4 * MW_fuel[O2])) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i]
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //what is fueloxid? the fraction of fuel that is oxidized?
# //CPOX: partial oxidation?
fueloxid = 0
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.arange(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
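# clamp the O2 balance at zero so the stream never carries a negative flow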
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.arange(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0
mix_refout[Index_C2H6] = 0
mix_refout[Index_C3H8] = 0
mix_refout[Index_C4H10] = 0
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1 = 0.0
Steam2 = 0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0
Frec = 0.05
OCRValue = 0.0
#%
itermax = 5000
for iter in range(1,itermax):
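# Fixed-point iteration: the fuel and air recirculation streams are
# re-estimated each pass; stack_recircOLD/stack_arecircOLD hold the previous
# iterate so the loop can test convergence against ERRTOLER further below.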
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # first iteration: initialize the recirculation guesses
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 # initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i]
AddedSteam = 0 # initial condition set to zero
Frec = 0.05 # initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] # initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
Steam2 = 0
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam
else: # second and subsequent iterations
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
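# //Sign convention: a negative cell_use[] entry means the species is produced.
# //H2 and CO are consumed at fuel utilization FU (applied per pass to the fresh
# //feed stack_fin[]), producing H2O and CO2 one-for-one.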
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
cell_exhaust[i] = cell_exit[i] - stack_recirc[i]
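# //Frec splits the anode exit stream: the fraction Frec is recirculated back to
# //the stack inlet and the remainder leaves as cell exhaust.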
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
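# //O/C bookkeeping: H2O and CO each carry one O atom, CO2 two; C atoms come from
# //CO, CO2, and CH4. For a pure CH4+steam mix this reduces to OCR = H2O/CH4.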
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recirc[i] = stack_recircOLD[i]
else:
ERRSUM = 0
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
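# //ERRTOTAL is the Euclidean norm of the change in both recirculation streams
# //between successive iterations; the loop stops once it falls below ERRTOLER.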
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
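# //Const_Convert took the streams from mol/s to lb-mol/hr on the way in, so
# //dividing by it here returns all per-cell flows to mol/s.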
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
cell_exhaust[i] /= Const_Convert
cell_aexhaust[i] /= Const_Convert
cell_exit[i] /= Const_Convert
cell_aexit[i] /= Const_Convert
pref_CH4[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR #; //they do equal
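# //CalcR is the closed-form recirculation ratio from the O/C balance: the O atoms
# //that recirculation must supply, (ccNG * OCR - ooNG), divided by the O atoms
# //generated electrochemically per cell, (cellsize * J / 1000) / (2F).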
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
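# //O2 balance at the cathode mixing point: stack O2 = fresh O2 + Arec * exit O2,
# //with exit O2 = stack O2 - consumed; solving gives the closed form above.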
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
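# //Inert species (Ar, H2O, CO2, N2) are neither created nor consumed on the air
# //side, so their stack-inlet flow is simply fresh/(1 - Arec).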
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (mol/s)",pref_CH4)
# print("Air cell outlet (U) (mol/s)",cell_aexit)
# print("Fuel cell outlet (Q) (mol/s)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec > 0.9 or Frec <= 0:
succs = 0
else:
succs = 1
#return(SOFC_Ain,stack_ain,stack_fin*Const_Convert,stack_recirc,stack_mix,pref_CH4,cell_exit,Frec,succs)
#return(stack_fin,stack_ain/Const_Convert,Frec,succs)
#return(stack_fin,SOFC_Ain,Fresh_Ain,Frec,succs)
return(cell_exit, cell_aexit, pref_CH4, succs)
def NGFC_nocc(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize):
Nspecies = 11
MW_fuel = np.zeros(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.zeros(Nspecies,dtype=np.float64) ##hardcoded fuel species, in
NG_mfin = np.zeros(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.zeros(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.zeros(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.zeros(Nspecies,dtype=np.float64) ##recirculation fuel species? what unit?
mix_refin = np.zeros(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.zeros(Nspecies,dtype=np.float64) ##intermediate fuel species assuming all completely oxidized?
mix_refout=np.zeros(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.zeros(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2. NO CH4. In iteration loop
stack_mix = np.zeros(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.zeros(Nspecies,dtype=np.float64) ##After PreReformer step 1: taking care of higher hydrocarbons: all higher hydrocarbons gone
pref_CH4 = np.zeros(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.zeros(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.zeros(Nspecies,dtype=np.float64) ##
cell_exit = np.zeros(Nspecies,dtype=np.float64)
NG_in = np.zeros(Nspecies,dtype=np.float64)
vartemp = np.zeros(Nspecies,dtype=np.float64)
tester = np.zeros(Nspecies,dtype=np.float64)
pref_CH4OLD = np.zeros(Nspecies,dtype=np.float64)
stack_recircOLD = np.zeros(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.zeros(Nspecies,dtype=np.float64)
stack_amix = np.zeros(Nspecies,dtype=np.float64)
stack_arecirc = np.zeros(Nspecies,dtype=np.float64)
stack_arecircOLD = np.zeros(Nspecies,dtype=np.float64)
cell_aexit = np.zeros(Nspecies,dtype=np.float64)
cell_aexhaust = np.zeros(Nspecies,dtype=np.float64)
cell_exhaust = np.zeros(Nspecies,dtype=np.float64)
SOFC_Ain = np.zeros(5,dtype=np.float64)
Fresh_Ain = np.zeros(5,dtype=np.float64)
stack_fin = np.zeros(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 1
splt_ain[Index_Ar] = 1
splt_ain[Index_CO2] = 1
splt_ain[Index_O2] = 1
splt_ain[Index_N2] = 1
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#//why Const_Convert=3600 * 2.20462 / 1000, making it SLPM (should only *60)? 3600=seconds per hour, NOT mole volume=22.4 (liter/mole).
#// 2.20462=1/0.454, from kilogram to lbs. /1000 is to make it kilogram because MW_fuel[] are in gram?
#//
#// but FU_REF1 and FU_REF2 are both very local, only to calculate FU_REF
#// FU_ stands for fuel utilization?
Const_Convert = 3600 * 2.20462 / 1000
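# //(Arithmetic check: 3600 s/hr, and 2.20462/1000 = 1/453.59 lb-mol per mol,
# //so Const_Convert maps mol/s to lb-mol/hr.)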
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molecular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF = no unit
#// the effective FU?
#// 0.44 * ExtReform * Sum(NG_mfin[]*MW_fuel[])
#// fueleqv - -------------------------------------------
#// 0.4 MW_fuel[O2]
#// = FU * NG*Flowrate * (--------------------------------------------------------)
#// fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# print(FU_REF1,FU_REF2,FU_REF3,FU_REF,FU)
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i]
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//skip i=4 since zb+4=3=Index_O2, already set above
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_ain[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#//ref_ain[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //what is fueloxid? the fraction of fuel oxidized?
# //CPOX: partial oxidation?
fueloxid = 0
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
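# //fueloxid = available O2 divided by the O2 demand for complete oxidation of the
# //hydrocarbon feed (2 per CH4, 3.5 per C2H6, 5 per C3H8, 6.5 per C4H10), scaled
# //by 1/ExtReform.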
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.zeros(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? It all goes to CO and H2, consuming H2O
mix_refout = np.zeros(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0
mix_refout[Index_C2H6] = 0
mix_refout[Index_C3H8] = 0
mix_refout[Index_C4H10] = 0
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0
Frec = 0.05
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax+1):
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
Steam2 = 0
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
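# //Quick H balance for the ethane path: C2H6 (6 H) + 2 H2O (4 H) -> 5 H2 (10 H);
# //the CO and H2 increments above follow directly from the three reactions listed.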
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_exhaust[i] = cell_exit[i] - stack_recirc[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recirc[i] = stack_recircOLD[i]
else:
ERRSUM = 0
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
cell_exhaust[i] /= Const_Convert
cell_aexhaust[i] /= Const_Convert
cell_exit[i] /= Const_Convert
cell_aexit[i] /= Const_Convert
pref_CH4[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR #; //they do equal
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (mol/s)",pref_CH4)
# print("Air cell outlet (U) (mol/s)",cell_aexit)
# print("Fuel cell outlet (Q) (mol/s)",cell_exit)
#The outputs used for SOFC-MP ROM
if Frec > 0.9 or Frec <= 0:
succs = 0
else:
succs = 1
#return(stack_ain/Const_Convert,stack_fin,Frec,succs)
#return(stack_fin, SOFC_Ain, Fresh_Ain, Frec, succs)
return(cell_exit, cell_aexit, pref_CH4, succs)
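# A minimal usage sketch (illustrative values from the commented defaults above;
# the instance name `rom` is hypothetical):
#   cell_exit, cell_aexit, pref_CH4, succs = rom.NGFC_nocc(
#       J=400, FU=0.9, AU=0.378, OCR=2.6, IR=0.6, Arec=0.5,
#       PreReform=0.2, cellsize=550)
# succs == 1 indicates a physically meaningful recirculation ratio (0 < Frec <= 0.9).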
def IGFC_ccs(self, J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc):
Nspecies = 11
MW_fuel = np.zeros(Nspecies,dtype=np.float64) ##molecular weight
NG_fin = np.zeros(Nspecies,dtype=np.float64) ##hardcoded fuel species, in
NG_mfin = np.zeros(Nspecies,dtype=np.float64) ##fuel species from NG_fin[] turned to fractions
std_ain = np.zeros(Nspecies,dtype=np.float64) ##standard air in
splt_ain = np.zeros(Nspecies,dtype=np.float64) ##air separation split? why not sum==1?
ref_ain = np.zeros(Nspecies,dtype=np.float64) ##recirculation fuel species? what unit?
mix_refin = np.zeros(Nspecies,dtype=np.float64) ##goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox=np.zeros(Nspecies,dtype=np.float64) ##intermediate fuel species assuming all completely oxidized?
mix_refout=np.zeros(Nspecies,dtype=np.float64) ##fuel output after hydrocarbon reforming? ExtReform part of NG
stack_recirc = np.zeros(Nspecies,dtype=np.float64) ##contains only H2O, Ar, CO2, N2, CO, and H2. NO CH4. In iteration loop
stack_mix = np.zeros(Nspecies,dtype=np.float64) ##= stack_fin[] + stack_recirc[]
pref_HH = np.zeros(Nspecies,dtype=np.float64) ##After PreReformer step 1: taking care of higher hydrocarbons: all higher hydrocarbons gone
pref_CH4 = np.zeros(Nspecies,dtype=np.float64) ##After PreReformer step 2: taking care of PreReforming: only CH4, by PreReform
##this leads to output SOFC_Fin[]
cell_ref = np.zeros(Nspecies,dtype=np.float64) ##an assumed fuel composition at the stack inlet in the iteration loop. No more CH4.
cell_use = np.zeros(Nspecies,dtype=np.float64) ##
cell_exit = np.zeros(Nspecies,dtype=np.float64)
NG_in = np.zeros(Nspecies,dtype=np.float64)
vartemp = np.zeros(Nspecies,dtype=np.float64)
tester = np.zeros(Nspecies,dtype=np.float64)
pref_CH4OLD = np.zeros(Nspecies,dtype=np.float64)
stack_recircOLD = np.zeros(Nspecies,dtype=np.float64)
stack_recircOLD[:]=0.0
##air part
stack_ain = np.zeros(Nspecies,dtype=np.float64)
stack_amix = np.zeros(Nspecies,dtype=np.float64)
stack_arecirc = np.zeros(Nspecies,dtype=np.float64)
stack_arecircOLD = np.zeros(Nspecies,dtype=np.float64)
cell_aexit = np.zeros(Nspecies,dtype=np.float64)
cell_aexhaust = np.zeros(Nspecies,dtype=np.float64)
cell_exhaust = np.zeros(Nspecies,dtype=np.float64)
SOFC_Ain = np.zeros(5,dtype=np.float64)
Fresh_Ain = np.zeros(5,dtype=np.float64)
stack_fin = np.zeros(Nspecies,dtype=np.float64) ##The NG part before PreReformer: sum of two parts, pure NG (IR part) and mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition by igfc case (default: conventional)
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='conventional':
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='enhanced':
NG_fin[Index_H2O] = 0.0006
NG_fin[Index_Ar] = 0.0009
NG_fin[Index_CO2] = 0.2423
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0064
NG_fin[Index_CH4] = 0.1022
NG_fin[Index_CO] = 0.3415
NG_fin[Index_H2] = 0.3062
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='catalytic':
NG_fin[Index_H2O] = 0.0004
NG_fin[Index_Ar] = 0.0003
NG_fin[Index_CO2] = 0.3465
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0069
NG_fin[Index_CH4] = 0.3159
NG_fin[Index_CO] = 0.0914
NG_fin[Index_H2] = 0.2386
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
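# The three igfc options select representative gasifier syngas compositions
# (conventional, enhanced, or catalytic gasification); any other value falls
# through to the conventional default assigned above.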
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
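# Unlike the NGFC splits, this split models an air separation unit: ~96.9% of the
# O2 and small Ar/N2 slips report to the oxygen product stream.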
#%
zb = -1 # shift Brian's 1-based indexing to 0-based
#%
# (0) Initial Calculations |
#-- Define useful parameters
IR = 1.0
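# IGFC case: internal reformation is forced to 100%, so ExtReform = 0 and the
# external-reformer branch below is effectively bypassed.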
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#//why Const_Convert=3600 * 2.20462 / 1000, making it SLPM (should only *60)? 3600=seconds per hour, NOT mole volume=22.4 (liter/mole).
#// 2.20462=1/0.454, from kilogram to lbs. /1000 is to make it kilogram because MW_fuel[] are in gram?
#//
#// but FU_REF1 and FU_REF2 are both very local, only to calculate FU_REF
#// FU_ stands for fuel utilization?
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molecular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#//FU_REF = no unit
#// the effective FU?
#// 0.44 * ExtReform * Sum(NG_mfin[]*MW_fuel[])
#// fueleqv - -------------------------------------------
#// 0.4 MW_fuel[O2]
#// = FU * NG*Flowrate * (--------------------------------------------------------)
#// fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i]
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#//what does it do?
for i in range(1,Nspecies+1):
if i != 4: #//skip i=4 since zb+4=3=Index_O2, already set above
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_ain[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#//ref_ain[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //what is fueloxid? the fraction of fuel oxidized?
# //CPOX: partial oxidation?
fueloxid = 0
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.zeros(Nspecies,dtype=np.float64)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? It all goes to CO and H2, consuming H2O
mix_refout = np.zeros(Nspecies,dtype=np.float64)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0
mix_refout[Index_C2H6] = 0
mix_refout[Index_C3H8] = 0
mix_refout[Index_C4H10] = 0
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0
Frec = 0.05
OCRValue=0.0
#%
itermax=5000
for iter in range(1,itermax+1):
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
Steam2 = 0
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
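# Atom-balance check for the step above: per mol C2H6, the reaction
# C2H6 + 2 H2O -> 2 CO + 5 H2 removes 2 mol H2O and adds 2 mol CO and 5 mol H2,
# exactly the 2*, 2* and 5* C2H6 coefficients used here; C, H and O all balance
# (C: 2->2, H: 6+4->10, O: 2->2), and likewise for the C3H8 and C4H10 terms.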
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
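# Sign convention in cell_use[]: positive entries are consumed in the stack,
# negative entries are produced. H2O and CO2 carry negative "use" (they are
# produced by H2 and CO oxidation), scaled by FU because only that fraction of
# the fresh equivalent fuel reacts per pass. For example, with a fresh feed of
# 1 lb-mol/hr H2 and 1 lb-mol/hr CH4 at FU = 0.8:
#   cell_use[Index_H2]  = (1 + 3*1) * 0.8 =  3.2  (H2 consumed, incl. reformed CH4)
#   cell_use[Index_H2O] = -3.2                    (H2O produced)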
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_exhaust[i] = cell_exit[i] - stack_recirc[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
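# The two lines above compute the oxygen-to-carbon atom ratio of the
# prereformed stream: O atoms (1 per H2O and CO, 2 per CO2) over C atoms
# (1 per CO, CO2 and CH4). A sketch of the same ratio as a helper
# (hypothetical name, kept as a comment so module behavior is unchanged):
# def o_to_c(comp):
#     return (comp[Index_H2O] + comp[Index_CO] + 2.0 * comp[Index_CO2]) / (comp[Index_CO] + comp[Index_CO2] + comp[Index_CH4])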
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the first-pass recirculation as the baseline for the convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
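# Convergence metric: ERRTOTAL is the Euclidean norm of the change in both
# recirculation streams between successive passes,
#   ERRTOTAL = sqrt( sum_i [ (r_i - r_i_old)^2 + (a_i - a_i_old)^2 ] ),
# and the loop stops once it falls below ERRTOLER (1e-8, in lb-mol/hr).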
# //' *** END ITERATIVE LOOP ***
#%
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
cell_exhaust[i] /= Const_Convert
cell_aexhaust[i] /= Const_Convert
cell_exit[i] /= Const_Convert
cell_aexit[i] /= Const_Convert
pref_CH4[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly from the O-atom balance
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
Frec = CalcR # the directly calculated ratio matches the converged recirculation fraction
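# Why CalcR reproduces Frec: at convergence, the O atoms the mixed feed needs
# beyond what the fresh fuel supplies (OCR*ccNG - ooNG) must arrive via
# recirculation, and that surplus equals Frec times the O-atom flow produced
# electrochemically, ooFromCurrent = I/(2F) with I = cellsize*J/1000 (A).
# Hence CalcR = (ccNG*OCR - ooNG) / ooFromCurrent.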
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
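# Recirculation algebra for the air side: if a fraction Arec of the cathode
# exit is recirculated, each non-reacting species (Ar, H2O, CO2, N2) satisfies
#   stack = fresh + Arec * stack   =>   stack = fresh / (1 - Arec),
# the geometric-series limit fresh * (1 + Arec + Arec^2 + ...). O2 is the
# exception: o2_stack nets out the O2 consumed by the current before recycle.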
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (mol/s)",pref_CH4)
# print("Air cell outlet (U) (mol/s)",cell_aexit)
# print("Fuel cell outlet (Q) (mol/s)",cell_exit)
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
#return(stack_fin,stack_ain/Const_Convert,Frec,succs)
#return(stack_fin,SOFC_Ain,Fresh_Ain,Frec,succs)
return(cell_exit, cell_aexit, pref_CH4, succs)
def NGFC_ccs_vgr(self, J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize):
Nspecies = 11
MW_fuel = np.zeros(Nspecies) ## molecular weights (g/mol)
NG_fin = np.zeros(Nspecies) ## hardcoded inlet fuel feed composition
NG_mfin = np.zeros(Nspecies) ## NG_fin[] normalized to mole fractions
std_ain = np.zeros(Nspecies) ## standard air in
splt_ain = np.zeros(Nspecies) ## air separation split? why not sum==1?
ref_ain = np.zeros(Nspecies) ## reformer air in? what unit?
mix_refin = np.zeros(Nspecies) ## goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox = np.zeros(Nspecies) ## intermediate fuel species assuming complete oxidation?
mix_refout = np.zeros(Nspecies) ## fuel output after hydrocarbon reforming; the ExtReform part of NG
stack_recirc = np.zeros(Nspecies) ## contains only H2O, Ar, CO2, N2, CO, and H2 -- no CH4; updated in the iteration loop
stack_mix = np.zeros(Nspecies) ## = stack_fin[] + stack_recirc[]
pref_HH = np.zeros(Nspecies) ## after PreReformer step 1: all higher hydrocarbons gone
pref_CH4 = np.zeros(Nspecies) ## after PreReformer step 2: CH4 partially reformed, by ratio PreReform
## this leads to output SOFC_Fin[]
cell_ref = np.zeros(Nspecies) ## assumed fuel composition at the stack inlet in the iteration loop; no more CH4
cell_use = np.zeros(Nspecies) ##
cell_exit = np.zeros(Nspecies)
NG_in = np.zeros(Nspecies)
vartemp = np.zeros(Nspecies) ## unused
tester = np.zeros(Nspecies) ## unused
pref_CH4OLD = np.zeros(Nspecies) ## unused
stack_recircOLD = np.zeros(Nspecies)
## air part
stack_ain = np.zeros(Nspecies)
stack_amix = np.zeros(Nspecies)
stack_arecirc = np.zeros(Nspecies) ## zero-initialized: read in iteration 1 before it is first assigned
stack_arecircOLD = np.zeros(Nspecies)
cell_aexit = np.zeros(Nspecies)
cell_aexhaust = np.zeros(Nspecies)
cell_exhaust = np.zeros(Nspecies)
recirc_VGR0 = np.zeros(Nspecies)
recirc_VGR1 = np.zeros(Nspecies)
recirc_VGR2 = np.zeros(Nspecies)
recirc_VGR3 = np.zeros(Nspecies)
SOFC_Ain = np.zeros(5)
Fresh_Ain = np.zeros(5)
stack_fin = np.zeros(Nspecies) ## the NG part before the PreReformer: pure NG (IR part) plus mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition (NG)
NG_fin[Index_H2O] = 0
NG_fin[Index_Ar] = 0
NG_fin[Index_CO2] = 74.0729157
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 118.516665
NG_fin[Index_CH4] = 6896.18846
NG_fin[Index_CO] = 0
NG_fin[Index_H2] = 0
NG_fin[Index_C2H6] = 237.03333
NG_fin[Index_C3H8] = 51.851041
NG_fin[Index_C4H10] = 29.6291663
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # offset to map Brian's 1-based indices onto Python's 0-based arrays
#%
# (0) Initial Calculations |
#-- Define useful parameters
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// why Const_Convert = 3600 * 2.20462 / 1000? 3600 = seconds per hour and
#// 2.20462/1000 = 1/453.6 converts g-mol to lb-mol, so mol/s becomes lb-mol/hr
#// (it is not SLPM and has nothing to do with 22.4 liter/mole).
#// FU_REF1 and FU_REF2 are both local, used only to calculate FU_REF
#// FU_ stands for fuel utilization
Const_Convert = 3600 * 2.20462 / 1000
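# Worked check: 1 mol/s * Const_Convert = 3600 * 2.20462 / 1000 = 7.9366,
# i.e. 1 mol/s is 7.9366 lb-mol/hr. All flows inside the iteration loop are
# therefore carried in lb-mol/hr and divided by Const_Convert again on exit
# to report mol/s.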
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#// FU_REF (dimensionless) works out to the effective fuel utilization:
#//   FU_REF = FU * NG_flowrate * (fueleqv - (2*0.44/0.4) * ExtReform
#//            * sum(NG_mfin[i]*MW_fuel[i]) / MW_fuel[Index_O2]) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#// scale the remaining air-separation species to the O2 stream using the split fractions and standard-air ratios:
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid = fraction of the hydrocarbon fuel that is partially oxidized
# //CPOX = catalytic partial oxidation
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
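# The divisors 2, 3.5, 5 and 6.5 are the mol O2 required per mol fuel for
# complete oxidation (C + H/4: CH4 -> 1 + 4/4 = 2, C2H6 -> 2 + 6/4 = 3.5,
# C3H8 -> 5, C4H10 -> 6.5), so fueloxid is the fraction of the externally
# reformed hydrocarbons that the available O2_flowrate can partially oxidize
# in the CPOX step.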
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.zeros(Nspecies)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.zeros(Nspecies)
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0;
mix_refout[Index_C2H6] = 0;
mix_refout[Index_C3H8] = 0;
mix_refout[Index_C4H10] = 0;
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0
Frec = 0.05
OCRValue = 0.0
#%
itermax = 5000
for iter in range(1, itermax + 1): # run up to itermax passes, matching the original 1-based VB loop
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iter == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
# stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
recirc_VGR3[i]=stack_fin[i]*0.05
for i in range(Nspecies):
stack_mix[i]=stack_fin[i]+stack_recirc[i]+recirc_VGR3[i]
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]+recirc_VGR3[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
#cell_ref[Index_H2O] = pref_CH4[Index_H2O]-pref_CH4[Index_CH4]-2*pref_CH4[Index_C2H6]-3*pref_CH4[Index_C3H8]-4*pref_CH4[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (7a) Calculate the new VGR recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
recirc_VGR0[i]=cell_exit[i]-stack_recirc[i]
recirc_VGR1[i]=recirc_VGR0[i]
WGSmol=WGS*recirc_VGR1[Index_CO]
recirc_VGR1[Index_H2O] = recirc_VGR1[Index_H2O] - WGSmol
recirc_VGR1[Index_CO2] = recirc_VGR1[Index_CO2] + WGSmol
recirc_VGR1[Index_CO] = recirc_VGR1[Index_CO] - WGSmol
recirc_VGR1[Index_H2] = recirc_VGR1[Index_H2] + WGSmol
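# Water-gas shift applied to the vented/recycled stream: a fraction WGS of the
# CO reacts as CO + H2O -> CO2 + H2, so WGSmol is subtracted from CO and H2O
# and added to CO2 and H2, leaving the total mole flow unchanged.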
for i in range(Nspecies):
recirc_VGR2[i]=recirc_VGR1[i]
VGRH2O=recirc_VGR1[Index_H2O]*H2OCap
VGRCO2=recirc_VGR1[Index_CO2]*CO2Cap
VGRH2=recirc_VGR1[Index_H2]*H2Cap
recirc_VGR2[Index_H2O]=recirc_VGR2[Index_H2O]-VGRH2O
recirc_VGR2[Index_CO2]=recirc_VGR2[Index_CO2]-VGRCO2
recirc_VGR2[Index_H2]=recirc_VGR2[Index_H2]-VGRH2
for i in range(Nspecies):
recirc_VGR3[i]=recirc_VGR2[i]*VGR
cell_exhaust[i] = recirc_VGR2[i] - recirc_VGR3[i]
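# Capture and recycle bookkeeping: H2OCap, CO2Cap and H2Cap are the fractions
# of each species removed from the shifted stream (recirc_VGR1 -> recirc_VGR2);
# of what remains, a fraction VGR is recycled to the stack inlet (recirc_VGR3)
# and the balance leaves as cell_exhaust.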
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iter == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i] # save the first-pass recirculation as the baseline for the convergence check
else:
ERRSUM = 0;
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
#%
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
cell_exhaust[i] /= Const_Convert
cell_aexhaust[i] /= Const_Convert
cell_exit[i] /= Const_Convert
cell_aexit[i] /= Const_Convert
pref_CH4[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly from the O-atom balance
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
#Frec = CalcR #; // the direct O-atom balance does not hold with VGR recycle
CalcR = Frec # report the converged recirculation fraction instead
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (mol/s)",pref_CH4)
# print("Air cell outlet (U) (mol/s)",cell_aexit)
# print("Fuel cell outlet (Q) (mol/s)",cell_exit)
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
#return(stack_fin,stack_ain/Const_Convert,Frec,succs)
#return(stack_fin,SOFC_Ain,Fresh_Ain,Frec,succs)
return(cell_exit, cell_aexit, pref_CH4, succs)
def IGFC_ccs_vgr(self, J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc):
Nspecies = 11
MW_fuel = np.zeros(Nspecies) ## molecular weights (g/mol)
NG_fin = np.zeros(Nspecies) ## hardcoded inlet fuel feed composition
NG_mfin = np.zeros(Nspecies) ## NG_fin[] normalized to mole fractions
std_ain = np.zeros(Nspecies) ## standard air in
splt_ain = np.zeros(Nspecies) ## air separation split? why not sum==1?
ref_ain = np.zeros(Nspecies) ## reformer air in? what unit?
mix_refin = np.zeros(Nspecies) ## goes to Reformer, see the graph. Comes from three sources: part of NG, Steam, and air after split.
mix_cpox = np.zeros(Nspecies) ## intermediate fuel species assuming complete oxidation?
mix_refout = np.zeros(Nspecies) ## fuel output after hydrocarbon reforming; the ExtReform part of NG
stack_recirc = np.zeros(Nspecies) ## contains only H2O, Ar, CO2, N2, CO, and H2 -- no CH4; updated in the iteration loop
stack_mix = np.zeros(Nspecies) ## = stack_fin[] + stack_recirc[]
pref_HH = np.zeros(Nspecies) ## after PreReformer step 1: all higher hydrocarbons gone
pref_CH4 = np.zeros(Nspecies) ## after PreReformer step 2: CH4 partially reformed, by ratio PreReform
## this leads to output SOFC_Fin[]
cell_ref = np.zeros(Nspecies) ## assumed fuel composition at the stack inlet in the iteration loop; no more CH4
cell_use = np.zeros(Nspecies) ##
cell_exit = np.zeros(Nspecies)
NG_in = np.zeros(Nspecies)
vartemp = np.zeros(Nspecies) ## unused
tester = np.zeros(Nspecies) ## unused
pref_CH4OLD = np.zeros(Nspecies) ## unused
stack_recircOLD = np.zeros(Nspecies)
## air part
stack_ain = np.zeros(Nspecies)
stack_amix = np.zeros(Nspecies)
stack_arecirc = np.zeros(Nspecies) ## zero-initialized: read in iteration 1 before it is first assigned
stack_arecircOLD = np.zeros(Nspecies)
cell_aexit = np.zeros(Nspecies)
cell_aexhaust = np.zeros(Nspecies)
cell_exhaust = np.zeros(Nspecies)
recirc_VGR0 = np.zeros(Nspecies)
recirc_VGR1 = np.zeros(Nspecies)
recirc_VGR2 = np.zeros(Nspecies)
recirc_VGR3 = np.zeros(Nspecies)
SOFC_Ain = np.zeros(5)
Fresh_Ain = np.zeros(5)
stack_fin = np.zeros(Nspecies) ## the NG part before the PreReformer: pure NG (IR part) plus mix_refout (ExtReform part)
#% Read Independent Variables
# J=400
# FU=0.9
# AU=0.378
# OCR=2.6
# IR=0.6
# Arec=0.5
# PreReform=0.2
# cellsize = 550 # cell area (cm2)
#% Assign General Fixed Values
R=8.3145
F=96485
Pi=3.14159265359
#% index
Index_H2O = 0
Index_Ar = 1
Index_CO2 = 2
Index_O2 = 3
Index_N2 = 4
Index_CH4 = 5
Index_CO = 6
Index_H2 = 7
Index_C2H6 = 8
Index_C3H8 = 9
Index_C4H10 = 10
#%
# Molecular Weights
MW_fuel[Index_H2O] = 18.01488 # H2O
MW_fuel[Index_Ar] = 39.948 # Ar
MW_fuel[Index_CO2] = 44.009 # CO2
MW_fuel[Index_O2] = 31.998 # O2
MW_fuel[Index_N2] = 28.0134 # N2
MW_fuel[Index_CH4] = 16.04276 # CH4
MW_fuel[Index_CO] = 28.01 # CO
MW_fuel[Index_H2] = 2.01588 # H2
MW_fuel[Index_C2H6] = 30.07 # C2H6
MW_fuel[Index_C3H8] = 44.1 # C3H8
MW_fuel[Index_C4H10] = 58.12 # C4H10
#%
#-- Define Fixed Assumptions for Operation
max_steam = 0.99 #-- Maximum fuel recirculation fraction
#%
#-- Define the inlet fuel feed composition; the igfc argument selects the preset (default: conventional)
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='conventional':
NG_fin[Index_H2O] = 0.0013
NG_fin[Index_Ar] = 0.0008
NG_fin[Index_CO2] = 0.2043
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.006
NG_fin[Index_CH4] = 0.0583
NG_fin[Index_CO] = 0.3774
NG_fin[Index_H2] = 0.3519
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='enhanced':
NG_fin[Index_H2O] = 0.0006
NG_fin[Index_Ar] = 0.0009
NG_fin[Index_CO2] = 0.2423
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0064
NG_fin[Index_CH4] = 0.1022
NG_fin[Index_CO] = 0.3415
NG_fin[Index_H2] = 0.3062
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
if igfc=='catalytic':
NG_fin[Index_H2O] = 0.0004
NG_fin[Index_Ar] = 0.0003
NG_fin[Index_CO2] = 0.3465
NG_fin[Index_O2] = 0
NG_fin[Index_N2] = 0.0069
NG_fin[Index_CH4] = 0.3159
NG_fin[Index_CO] = 0.0914
NG_fin[Index_H2] = 0.2386
NG_fin[Index_C2H6] = 0.0
NG_fin[Index_C3H8] = 0.0
NG_fin[Index_C4H10] = 0.0
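# The three IGFC feed presets above are syngas mole fractions for conventional,
# enhanced and catalytic gasification; whichever is selected is renormalized to
# fractions summing to 1 in the (1F) block below, so the absolute scale of the
# entries does not matter.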
#%
#-- Define the standard air composition
std_ain[Index_H2O] = 0.0104
std_ain[Index_Ar] = 0.0094
std_ain[Index_CO2] = 0.0003
std_ain[Index_O2] = 0.2077
std_ain[Index_N2] = 0.7722
std_ain[Index_CH4] = 0
std_ain[Index_CO] = 0
std_ain[Index_H2] = 0
std_ain[Index_C2H6] = 0
std_ain[Index_C3H8] = 0
std_ain[Index_C4H10] = 0
#%
#-- Define the air separation splits
splt_ain[Index_H2O] = 0
splt_ain[Index_Ar] = 0.0673
splt_ain[Index_CO2] = 0
splt_ain[Index_O2] = 0.9691
splt_ain[Index_N2] = 0.0005
splt_ain[Index_CH4] = 0
splt_ain[Index_CO] = 0
splt_ain[Index_H2] = 0
splt_ain[Index_C2H6] = 0
splt_ain[Index_C3H8] = 0
splt_ain[Index_C4H10] = 0
#%
zb = -1 # offset to map Brian's 1-based indices onto Python's 0-based arrays
#%
# (0) Initial Calculations |
#-- Define useful parameters
IR = 1.0 # IGFC case: force full internal reformation, overriding the IR argument
ExtReform = 1.0 - IR #-- External reformation fraction
Stoichs = 1.0 / AU #-- Stoichs air
current = J * cellsize / 1000 # '-- Current (A)
#-- Calculate the air and fuel needs
fuelneed = current / 2 / F #-- H2 equiv (mol/s)
airneed = current / 4 / F # '-- O2 (mol/s)
#-- Define iteration parameters
itermax = 5000 # Total allowed iterations
ERRTOTAL = 100 # ' Error value
ERRTOLER = 1e-8 # ' Error convergence target
#-- Define calculation flags
Flag1 = 1 # ' 0=no output, 1=write output to spreadsheet
#%
# (1F) External Reformer Calculations |
#-- Fuel composition
NG_fin_sum = 0
for i in range(Nspecies):
NG_fin_sum += NG_fin[i]
#%
for i in range(Nspecies):
# print(i,NG_fin[i],NG_fin_sum,NG_fin[i]/NG_fin_sum)
#a=NG_fin[i]/NG_fin_sum
NG_mfin[i]=NG_fin[i]/NG_fin_sum
#print(NG_mfin[i],i)
#NG_mfin=NG_fin/NG_fin_sum
fueleqv = NG_mfin[Index_H2] + NG_mfin[Index_CO] + 4 * NG_mfin[Index_CH4] + 7 * NG_mfin[Index_C2H6] + 10 * NG_mfin[Index_C3H8] + 13 * NG_mfin[Index_C4H10]
NG_flowrate = fuelneed / fueleqv #//fuelneed=mol/s, so NG_flowrate = mol/s
#// why Const_Convert = 3600 * 2.20462 / 1000? 3600 = seconds per hour and
#// 2.20462/1000 = 1/453.6 converts g-mol to lb-mol, so mol/s becomes lb-mol/hr
#// (it is not SLPM and has nothing to do with 22.4 liter/mole).
#// FU_REF1 and FU_REF2 are both local, used only to calculate FU_REF
#// FU_ stands for fuel utilization
Const_Convert = 3600 * 2.20462 / 1000
FU_REF1 = NG_flowrate * Const_Convert * fueleqv # //equivalent fuel in lbs/h
#//FU_REF2: sum (molecular weight * composition) * flowrate
FU_REF2 = 0.0;
for i in range(Nspecies):
FU_REF2 = FU_REF2 + NG_mfin[i] * MW_fuel[i]
#//what is 2.0? 0.44? and 0.4?
#// 0.44 related to CO2 molucular weight 44?
#// 0.4 ??
FU_REF2 = 2.0 * NG_flowrate * Const_Convert * FU_REF2 * 0.44 * ExtReform / 0.4 / MW_fuel[Index_O2]
FU_REF3 = fuelneed / FU * Const_Convert
#// FU_REF (dimensionless) works out to the effective fuel utilization:
#//   FU_REF = FU * NG_flowrate * (fueleqv - (2*0.44/0.4) * ExtReform
#//            * sum(NG_mfin[i]*MW_fuel[i]) / MW_fuel[Index_O2]) / fuelneed
FU_REF = (FU_REF1 - FU_REF2) / FU_REF3
# SOFCMP2D4ROM.debugwrite.WriteLine("FU_REF = (FU_REF1 - FU_REF2) / FU_REF3: " + FU_REF.ToString() + "=" + FU_REF1.ToString() + "-" + FU_REF2.ToString() + "/" + FU_REF3.ToString());
#//NG_in[] = NG_mfin[] mass composition * flowrate * C / FU_REF?
for i in range(Nspecies):
NG_in[i] = NG_mfin[i] * (NG_flowrate * Const_Convert) / FU_REF # //in lbs/h unit?
#//NG_massflow: sum(inlet * molecular weight)
NG_massflow = 0
for i in range(Nspecies):
NG_massflow += NG_in[i] * MW_fuel[i];
#//'-- Reformer air composition
O2_flowrate = (NG_massflow * 0.44 * ExtReform * 1 / 0.4) / MW_fuel[Index_O2]
ref_ain[Index_O2] = O2_flowrate
#// scale the remaining air-separation species to the O2 stream using the split fractions and standard-air ratios:
for i in range(1,Nspecies+1):
if i != 4: #//zb+4=3=Index_O2
ref_ain[zb + i] = splt_ain[zb + i] * (ref_ain[Index_O2] / splt_ain[Index_O2]) / std_ain[Index_O2] * std_ain[zb + i]
#//basically ref_air[]= splt_ain[] * (std_ain[]/std_ain[O2]) * (ref_ain[O2]/splt_ain[O2]) or
#ref_air[]= ref_ain[O2] * (splt_ain[]/splt_ain[O2]) * (std_ain[]/std_ain[O2])
# //'-- Reformer Mix
#//debugging8
c1 = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform
c2 = ref_ain[Index_H2O]
c3 = (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# SOFCMP2D4ROM.debugwrite.WriteLine("For water: original " + c1.ToString() + " air separator " + c2.ToString() + " added " + c3.ToString());
#//end of debugging8
mix_refin[Index_H2O] = NG_mfin[Index_H2O] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[Index_H2O] + (NG_flowrate * Const_Convert) / FU_REF * ExtReform
# //VB code: mix_refin(zb + 1) = NG_mfin(zb + 1) * (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform + ref_ain(zb + 1) + (NG_flowrate * 3600# * 2.20462 / 1000#) / FU_REF * ExtReform
# //i=1 is for H2O, already done
# //the below makes more sense than the one with H2O. See the question to Brian
# //
for i in range(2,Nspecies+1):
mix_refin[zb + i] = NG_mfin[zb + i] * (NG_flowrate * Const_Convert) / FU_REF * ExtReform + ref_ain[zb + i] # //unit=lbs/h?
# //'-- After CPOX
# //fueloxid = fraction of the hydrocarbon fuel that is partially oxidized
# //CPOX = catalytic partial oxidation
fueloxid = 0;
if ExtReform == 0:
fueloxid = 0
else:
# //NG_in[] already with proper flow rate unit, so we can simply +
# // CratCH4: C=1, H=1/4, so CH4=1+4/4=2
# // CratC2H6: 2*1 + 6/4 = 3.5
# // C3H8: =3*1+8/4=5
# // C4H10: 4*1+10/4=6.5
# /*old code, using Ctot, not necessary at all
# Ctot = NG_in[Index_CH4] + NG_in[Index_C2H6] + NG_in[Index_C3H8] + NG_in[Index_C4H10]
# CratCH4 = NG_in[Index_CH4] / Ctot
# CratC2H6 = NG_in[Index_C2H6] / Ctot
# CratC2H8 = NG_in[Index_C3H8] / Ctot
# double CratC4H10 = NG_in[Index_C4H10] / Ctot;
# fueloxid = O2_flowrate / (2 * CratCH4 + 3.5 * CratC2H6 + 5 * CratC2H8 + 6.5 * CratC4H10) / (Ctot * ExtReform)
# */
fueloxid = O2_flowrate / (2 * NG_in[Index_CH4] + 3.5 * NG_in[Index_C2H6] + 5 * NG_in[Index_C3H8] + 6.5 * NG_in[Index_C4H10]) / ExtReform
#% GetMix_CPoxFromMix_Refin(mix_refin, out mix_cpox, out mix_refout, fueloxid)
mix_cpox = np.zeros(Nspecies)
mix_cpox[Index_H2O] = mix_refin[Index_H2O] + (2 * mix_refin[Index_CH4] + 3 * mix_refin[Index_C2H6] + 4 * mix_refin[Index_C3H8] + 5 * mix_refin[Index_C4H10]) * fueloxid;
mix_cpox[Index_CO2] = mix_refin[Index_CO2] + (mix_refin[Index_CH4] + 2 * mix_refin[Index_C2H6] + 3 * mix_refin[Index_C3H8] + 4 * mix_refin[Index_C4H10]) * fueloxid
mix_cpox[Index_Ar] = mix_refin[Index_Ar]
mix_cpox[Index_N2] = mix_refin[Index_N2]
mix_cpox[Index_CO] = mix_refin[Index_CO]
mix_cpox[Index_H2] = mix_refin[Index_H2]
mix_cpox[Index_CH4] = mix_refin[Index_CH4] * (1 - fueloxid)
mix_cpox[Index_C2H6] = mix_refin[Index_C2H6] * (1 - fueloxid)
mix_cpox[Index_C3H8] = mix_refin[Index_C3H8] * (1 - fueloxid)
mix_cpox[Index_C4H10] = mix_refin[Index_C4H10] * (1 - fueloxid)
mix_cpox[Index_O2] = (2 * (mix_refin[Index_CH4] - mix_cpox[Index_CH4]) + 3.5 * (mix_refin[Index_C2H6] - mix_cpox[Index_C2H6]) + 5 * (mix_refin[Index_C3H8] - mix_cpox[Index_C3H8]) + 6.5 * (mix_refin[Index_C4H10] - mix_cpox[Index_C4H10])) - mix_refin[Index_O2]
mix_cpox[Index_O2] = max(mix_cpox[Index_O2], 0)
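# Note: the O2 line above computes (O2 required by the oxidized fraction)
# minus (O2 in the feed), so after the clamp mix_cpox[O2] is zero whenever
# the feed covers the demand.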
# //'-- Reformer Exit (get rid of higher hydrocarbons)
# //'-------------------------------------------------
# //Kevin, why CH4 = 0? All go to CO and H2 and H2O
mix_refout = np.zeros(Nspecies, dtype=np.float64)  # was np.arange, but every entry is assigned below
# //No change species
mix_refout[Index_Ar] = mix_cpox[Index_Ar]
mix_refout[Index_CO2] = mix_cpox[Index_CO2]
mix_refout[Index_O2] = mix_cpox[Index_O2]
mix_refout[Index_N2] = mix_cpox[Index_N2]
# //the actual reformer, see the equations below
# // CH4 + H2O -> CO + 3H2
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
mix_refout[Index_H2O] = mix_cpox[Index_H2O] - (mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10])
mix_refout[Index_CO] = mix_cpox[Index_CO] + mix_cpox[Index_CH4] + 2 * mix_cpox[Index_C2H6] + 3 * mix_cpox[Index_C3H8] + 4 * mix_cpox[Index_C4H10] # //added mix_cpox[Index_CO]=0
mix_refout[Index_H2] = mix_cpox[Index_H2] + 3 * mix_cpox[Index_CH4] + 5 * mix_cpox[Index_C2H6] + 7 * mix_cpox[Index_C3H8] + 9 * mix_cpox[Index_C4H10] #//added mix_cpox[Index_H2]=0
# //SOFCMP2D4ROM.debugwrite.WriteLine("mix_refout[Index_H2]=" + mix_refout[Index_H2].ToString()); proven work!
# //0-out all species with C
mix_refout[Index_CH4] = 0
mix_refout[Index_C2H6] = 0
mix_refout[Index_C3H8] = 0
mix_refout[Index_C4H10] = 0
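# Sanity check: C and H atoms balance across the four reforming reactions
# above; only H2O is consumed while CO and H2 are produced.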
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("IR=" + IR.ToString() + " ExtReform=" + ExtReform.ToString() + " PreReform=" + PreReform.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t mix_refout[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + mix_refout[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + mix_refout[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + mix_refout[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + mix_refout[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + mix_refout[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + mix_refout[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + mix_refout[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + mix_refout[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + mix_refout[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + mix_refout[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + mix_refout[Index_C4H10].ToString("E4"));
# //'-- Mix to SOFC
# //'--------------
# //Kevin: or going to Pre-Reformer?
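# Cell fuel inlet = externally reformed stream + the (1 - ExtReform) fraction
# of fresh fuel that bypasses the external reformer.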
for i in range(Nspecies):
stack_fin[i] = mix_refout[i] + NG_mfin[i] * (NG_flowrate * Const_Convert / FU_REF) * (1.0 - ExtReform)
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_fin[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_fin[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_fin[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_fin[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_fin[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_fin[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_fin[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_fin[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_fin[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_fin[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_fin[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_fin[Index_C4H10].ToString("E4"));
#%//'-------------------------------------------------------------------------------------------
# //'| (1A) Air Inlet |
# //'-------------------------------------------------------------------------------------------
air_flowrate = airneed / std_ain[Index_O2]
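# airneed appears to be in mol/s; Stoichs scales it to the stack air feed,
# and 3600 * 2.20462 / 1000 (the same factor as Const_Convert) converts
# mol/s to lb-mol/h.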
for i in range(Nspecies):
stack_ain[i] = Stoichs * air_flowrate * 3600 * std_ain[i] * 2.20462 / 1000
# // *** START ITERATIVE LOOP ***
# double Steam1, Steam2;
Steam1=0.0
Steam2=0.0
# //double Frec; //fuel recirculation ratio
AddedSteam = 0
Frec = 0.05
OCRValue=0.0
#%
itermax=5000
for iteration in range(1, itermax):  # note: runs itermax-1 passes; 'iter' renamed to avoid shadowing the builtin
# //'-------------------------------------------------------------------------------------------
# //'| [2] Calculate the fuel inlet composition to get OCR ratio |
# //'-------------------------------------------------------------------------------------------
if iteration == 1: # // This is the first iteration needing initialization
for i in range(Nspecies):
stack_recirc[i] = stack_fin[i] * 0.05 #; // ' Initial condition set to 5% of fuel inlet
# stack_mix[i] = stack_fin[i] + stack_recirc[i] #;
recirc_VGR3[i]=stack_fin[i]*0.05
for i in range(Nspecies):
stack_mix[i]=stack_fin[i]+stack_recirc[i]+recirc_VGR3[i]
AddedSteam = 0 #; // ' Initial condition set to zero
Frec = 0.05 #; // ' Initial condition set to 5%
cell_exit[Index_H2O] = stack_fin[Index_H2O] #; // ' Initial condition set to fuel inlet
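# Oxygen-to-carbon (OCR) bookkeeping: C atoms = CO2 + CH4 + CO + 2*C2H6
# + 3*C3H8 + 4*C4H10 and O atoms on hand = 2*CO2 + CO + H2O, so Steam1 is
# the O (as steam) still needed to reach the OCR target.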
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O]) #;
Steam2 = 0;
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam;
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O];
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1;
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam;
else: # //Else ' This is the second + iteration
Steam1 = OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])- (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_recirc[Index_H2O])
if cell_exit[Index_H2O] == 0:
Steam2 = max_steam
else:
Steam2 = (OCR * (stack_mix[Index_CO2] + stack_mix[Index_CH4] + stack_mix[Index_CO] + 2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - (2 * stack_mix[Index_CO2] + stack_mix[Index_CO] + stack_fin[Index_H2O])) / cell_exit[Index_H2O]
if Steam2 > max_steam:
Frec = max_steam
else:
Frec = Steam2
if Steam2 < max_steam:
AddedSteam = 0
else:
AddedSteam = Steam1
for i in range(Nspecies):
stack_mix[i] = stack_fin[i] + stack_recirc[i]+recirc_VGR3[i]
stack_mix[Index_H2O] = stack_mix[Index_H2O] + AddedSteam # //need to ask Brian
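# Frec doubles as the anode recirculation fraction: it is Steam2 (the
# recirculation needed to reach OCR) capped at max_steam, and AddedSteam
# tops up H2O only when that cap is hit.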
# //'MsgBox "Steam1: " & Steam1 & "Steam2: " & Steam2 & "AddedSteam: " & AddedSteam
# //'
# //'-------------------------------------------------------------------------------------------
# //'| [3] Calculate the fuel inlet composition after prereforming higher hydrocarbons |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - NOT THIS ONE
# // C2H6 + 2H2O -> 2CO + 5H2
# // C3H8 + 3H2O -> 3CO + 7H2
# // C4H10 + 4H2O -> 4CO + 9H2
pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_Ar] = stack_mix[Index_Ar]
pref_HH[Index_CO2] = stack_mix[Index_CO2]
pref_HH[Index_O2] = stack_mix[Index_O2]
pref_HH[Index_N2] = stack_mix[Index_N2]
pref_HH[Index_CH4] = stack_mix[Index_CH4]
pref_HH[Index_CO] = stack_mix[Index_CO] + (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10])
pref_HH[Index_H2] = stack_mix[Index_H2] + (5 * stack_mix[Index_C2H6] + 7 * stack_mix[Index_C3H8] + 9 * stack_mix[Index_C4H10])
pref_HH[Index_C2H6] = 0
pref_HH[Index_C3H8] = 0
pref_HH[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (4) Calculate the fuel inlet composition after prereforming CH4 |
# //'-------------------------------------------------------------------------------------------
# // CH4 + H2O -> CO + 3H2 - only by ratio=PreReform
pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4]
pref_CH4[Index_Ar] = pref_HH[Index_Ar]
pref_CH4[Index_CO2] = pref_HH[Index_CO2]
pref_CH4[Index_O2] = pref_HH[Index_O2]
pref_CH4[Index_N2] = pref_HH[Index_N2]
pref_CH4[Index_CH4] = pref_HH[Index_CH4] * (1 - PreReform)
pref_CH4[Index_CO] = pref_HH[Index_CO] + PreReform * pref_HH[Index_CH4]
pref_CH4[Index_H2] = pref_HH[Index_H2] + 3 * PreReform * pref_HH[Index_CH4]
pref_CH4[Index_C2H6] = pref_HH[Index_C2H6]
pref_CH4[Index_C3H8] = pref_HH[Index_C3H8]
pref_CH4[Index_C4H10] = pref_HH[Index_C4H10]
# //'-------------------------------------------------------------------------------------------
# //'| (5) Reform the CH4 in stack |
# //'-------------------------------------------------------------------------------------------
# //Question: why cell_ref[H2O]!=pref_CH4[H2O]?
# // pref_HH[Index_H2O] = stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]);
# // pref_CH4[Index_H2O] = pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * pref_HH[Index_CH4];
# // =stack_mix[Index_H2O] - (2 * stack_mix[Index_C2H6] + 3 * stack_mix[Index_C3H8] + 4 * stack_mix[Index_C4H10]) - PreReform * stack_mix[Index_CH4];
# // There is a difference between - PreReform * stack_mix[Index_CH4] and - stack_mix[Index_CH4]
# //Explanation: whether CH4 is reformed in PreReformer or in the stack, it consumes the same amount of water
# // cell_use[Index_H2O]=pref_CH4[Index_H2O]-((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - PreReform * pref_HH[Index_CH4] - ((1-PreReform) * pref_HH[Index_CH4])
# // =pref_HH[Index_H2O] - pref_HH[Index_CH4]
cell_ref[Index_H2O] = stack_mix[Index_H2O] - stack_mix[Index_CH4] - 2 * stack_mix[Index_C2H6] - 3 * stack_mix[Index_C3H8] - 4 * stack_mix[Index_C4H10]
# cell_ref[Index_H2O] = pref_CH4[Index_H2O]-pref_CH4[Index_CH4]-2*pref_CH4[Index_C2H6]-3*pref_CH4[Index_C3H8]-4*pref_CH4[Index_C4H10]
cell_ref[Index_Ar] = pref_CH4[Index_Ar]
cell_ref[Index_CO2] = pref_CH4[Index_CO2]
cell_ref[Index_O2] = pref_CH4[Index_O2]
cell_ref[Index_N2] = pref_CH4[Index_N2]
cell_ref[Index_CH4] = 0
cell_ref[Index_CO] = pref_CH4[Index_CO] + pref_CH4[Index_CH4] + 2 * pref_CH4[Index_C2H6] + 3 * pref_CH4[Index_C3H8] + 4 * pref_CH4[Index_C4H10]
cell_ref[Index_H2] = pref_CH4[Index_H2] + 3 * pref_CH4[Index_CH4] + 5 * pref_CH4[Index_C2H6] + 7 * pref_CH4[Index_C3H8] + 9 * pref_CH4[Index_C4H10]
cell_ref[Index_C2H6] = 0
cell_ref[Index_C3H8] = 0
cell_ref[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (6) Calculate the fuel outlet composition |
# //'-------------------------------------------------------------------------------------------
# //FU: per-pass value, because applying on stack_fin[] which are fresh
cell_use[Index_H2O] = -(stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_Ar] = 0
cell_use[Index_CO2] = -(stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_O2] = 0
cell_use[Index_N2] = 0
cell_use[Index_CH4] = 0
cell_use[Index_CO] = (stack_fin[Index_CO] + stack_fin[Index_CH4] + 2 * stack_fin[Index_C2H6] + 3 * stack_fin[Index_C3H8] + 4 * stack_fin[Index_C4H10]) * FU
cell_use[Index_H2] = (stack_fin[Index_H2] + 3 * stack_fin[Index_CH4] + 5 * stack_fin[Index_C2H6] + 7 * stack_fin[Index_C3H8] + 9 * stack_fin[Index_C4H10]) * FU
cell_use[Index_C2H6] = 0
cell_use[Index_C3H8] = 0
cell_use[Index_C4H10] = 0
# //'-------------------------------------------------------------------------------------------
# //'| (7) Calculate the new recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
cell_exit[i] = cell_ref[i] - cell_use[i]
stack_recirc[i] = cell_exit[i] * Frec
#print(cell_ref,"cell_ref")
#print(cell_use,"cell_use")
# //'-------------------------------------------------------------------------------------------
# //'| (7a) Calculate the new VGR recirc composition |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
recirc_VGR0[i]=cell_exit[i]-stack_recirc[i]
recirc_VGR1[i]=recirc_VGR0[i]
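# Water-gas shift in the VGR loop: CO + H2O -> CO2 + H2, applied to the
# fraction WGS of the available CO.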
WGSmol=WGS*recirc_VGR1[Index_CO]
recirc_VGR1[Index_H2O] = recirc_VGR1[Index_H2O] - WGSmol
recirc_VGR1[Index_CO2] = recirc_VGR1[Index_CO2] + WGSmol
recirc_VGR1[Index_CO] = recirc_VGR1[Index_CO] - WGSmol
recirc_VGR1[Index_H2] = recirc_VGR1[Index_H2] + WGSmol
for i in range(Nspecies):
recirc_VGR2[i]=recirc_VGR1[i]
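# H2OCap, CO2Cap and H2Cap are capture fractions removed from the stream;
# of what remains, the fraction VGR is recirculated and the rest is exhausted.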
VGRH2O=recirc_VGR1[Index_H2O]*H2OCap
VGRCO2=recirc_VGR1[Index_CO2]*CO2Cap
VGRH2=recirc_VGR1[Index_H2]*H2Cap
recirc_VGR2[Index_H2O]=recirc_VGR2[Index_H2O]-VGRH2O
recirc_VGR2[Index_CO2]=recirc_VGR2[Index_CO2]-VGRCO2
recirc_VGR2[Index_H2]=recirc_VGR2[Index_H2]-VGRH2
for i in range(Nspecies):
recirc_VGR3[i]=recirc_VGR2[i]*VGR
cell_exhaust[i] = recirc_VGR2[i] - recirc_VGR3[i]
# //'-------------------------------------------------------------------------------------------
# //'| (9) Calculate the new air composition with recirculation |
# //'-------------------------------------------------------------------------------------------
for i in range(Nspecies):
stack_amix[i] = stack_ain[i] + stack_arecirc[i]
cell_aexit[i] = stack_amix[i]
cell_aexit[Index_O2] = stack_amix[Index_O2] - stack_ain[Index_O2] * AU
for i in range(Nspecies):
stack_arecirc[i] = cell_aexit[i] * Arec
cell_aexhaust[i] = cell_aexit[i] - stack_arecirc[i]
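# Air side: only O2 is depleted (by AU per unit of fresh O2 fed); a fraction
# Arec of the cathode exit is recirculated and the remainder is exhausted.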
# //NOT YET write the following: Frec, stack_mix[i] = stack_fin[i] + stack_recirc[i];
# SOFCMP2D4ROM.debugwrite.WriteLine("Iteration " + iter.ToString() + " of " + itermax.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t Frec=" + Frec.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("\t cell_ref[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + cell_ref[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + cell_ref[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + cell_ref[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + cell_ref[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + cell_ref[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + cell_ref[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + cell_ref[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + cell_ref[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + cell_ref[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + cell_ref[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + cell_ref[Index_C4H10].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t stack_recirc[]:");
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2O:\t" + stack_recirc[Index_H2O].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t Ar:\t" + stack_recirc[Index_Ar].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO2:\t" + stack_recirc[Index_CO2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t O2:\t" + stack_recirc[Index_O2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t N2:\t" + stack_recirc[Index_N2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CH4:\t" + stack_recirc[Index_CH4].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t CO:\t" + stack_recirc[Index_CO].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t H2:\t" + stack_recirc[Index_H2].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C2H6:\t" + stack_recirc[Index_C2H6].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C3H8:\t" + stack_recirc[Index_C3H8].ToString("E4"));
# SOFCMP2D4ROM.debugwrite.WriteLine("\t\t C4H10:\t" + stack_recirc[Index_C4H10].ToString("E4"));
oo = pref_CH4[Index_H2O] + pref_CH4[Index_CO] + pref_CH4[Index_CO2] * 2.0
cc = pref_CH4[Index_CO] + pref_CH4[Index_CO2] + pref_CH4[Index_CH4]
OCRValue = oo / cc
# SOFCMP2D4ROM.debugwrite.WriteLine("OCR value " + OCR.ToString() + " vs. calculated " + OCRValue.ToString());
# //'-------------------------------------------------------------------------------------------
# //'| Check for convergence |
# //'-------------------------------------------------------------------------------------------
if iteration == 1:
ERRTOTAL = 100
for i in range(Nspecies):
stack_recircOLD[i] = stack_recirc[i]  # save the first iterate (the original reversed this assignment, which would overwrite stack_recirc with the uninitialized OLD values)
stack_arecircOLD[i] = stack_arecirc[i]  # also save the air-side iterate, mirroring the else branch
else:
ERRSUM = 0
for i in range(Nspecies):
ERRSUM = ERRSUM + pow(stack_recirc[i] - stack_recircOLD[i], 2)
ERRSUM = ERRSUM + pow(stack_arecirc[i] - stack_arecircOLD[i], 2)
stack_recircOLD[i] = stack_recirc[i]
stack_arecircOLD[i] = stack_arecirc[i]
ERRTOTAL = math.sqrt(ERRSUM)
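# Convergence metric: the 2-norm of the change in the fuel- and air-side
# recirculation streams between successive iterations.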
#print("Iteration=",iter,": Frec=",Frec,"; OCR=",OCRValue,"; Error=",ERRTOTAL,"; Target error=",ERRTOLER)
if ERRTOTAL < ERRTOLER:
break
# //' *** END ITERATIVE LOOP ***
# } //iter
#%
# SOFCMP2D4ROM.debugwrite.WriteLine("DONE Iterations");
# //' *** END ITERATIVE LOOP ***
# //MsgBox "Iterations Required: " & iter
# //convert to mole/s
for i in range(Nspecies):
stack_fin[i] /= Const_Convert
cell_exhaust[i] /= Const_Convert
cell_aexhaust[i] /= Const_Convert
cell_exit[i] /= Const_Convert
cell_aexit[i] /= Const_Convert
pref_CH4[i] /= Const_Convert
#%
# //'-------------------------------------------------------------------------------------------
# //'| Final Results for SOFC-MP: 1-cell gas flow rates in mol/s |
# //'-------------------------------------------------------------------------------------------
# //'-- Air
SOFC_Ain[0] = stack_amix[Index_O2] / Const_Convert #; //' O2
SOFC_Ain[1] = stack_amix[Index_N2] / Const_Convert #; //' N2
SOFC_Ain[2] = stack_amix[Index_H2O] / Const_Convert #; //' H2O
SOFC_Ain[3] = stack_amix[Index_CO2] / Const_Convert #; //' CO2
SOFC_Ain[4] = stack_amix[Index_Ar] / Const_Convert #; //' Ar'
# //Calculating Frec directly
FaradayEC = 96487.0
ooFromCurrent = (cellsize * J * 0.001) / (2.0 * FaradayEC) #; //this is for O atom
ooNG = stack_fin[Index_H2O] + stack_fin[Index_CO2] * 2.0 + stack_fin[Index_O2] * 2.0 + stack_fin[Index_CO]
ccNG = stack_fin[Index_CO2] + stack_fin[Index_CH4] + stack_fin[Index_CO] + 2.0 * stack_fin[Index_C2H6] + 3.0 * stack_fin[Index_C3H8] + 4.0 * stack_fin[Index_C4H10]
CalcR = (ccNG * OCR - ooNG) / ooFromCurrent
#Frec = CalcR #; //the two should agree, but the direct formula does not hold with VGR,
CalcR = Frec #   so CalcR is overridden with the iterated Frec
# SOFCMP2D4ROM.debugwrite.WriteLine("calcR=" + CalcR.ToString());
# //calculating air side
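# Back out the air flows from the cell current: fresh O2 = electrochemical O2
# consumption / AU, and the stack-level (recirculating) flows carry the
# 1/(1 - Arec) factors below.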
o2Consumed4Current = (cellsize * J * 0.001) / (4.0 * FaradayEC) #; //this is for O2
o2_fresh = o2Consumed4Current / AU
o2_stack = (o2_fresh - Arec * o2Consumed4Current) / (1.0 - Arec)
fresh_factor = o2_fresh / std_ain[Index_O2]
ar_fresh = fresh_factor * std_ain[Index_Ar]
h2o_fresh = fresh_factor * std_ain[Index_H2O]
co2_fresh = fresh_factor * std_ain[Index_CO2]
n2_fresh = fresh_factor * std_ain[Index_N2]
ar_stack = ar_fresh / (1.0 - Arec)
h2o_stack = h2o_fresh / (1.0 - Arec)
co2_stack = co2_fresh / (1.0 - Arec)
n2_stack = n2_fresh / (1.0 - Arec)
Fresh_Ain[0] = o2_fresh
Fresh_Ain[1] = n2_fresh
Fresh_Ain[2] = h2o_fresh
Fresh_Ain[3] = co2_fresh
Fresh_Ain[4] = ar_fresh
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, ROMdemo() result (O2, N2, H2O, CO2, Ar)="
# + SOFC_Ain[0].ToString() + ","
# + SOFC_Ain[1].ToString() + ","
# + SOFC_Ain[2].ToString() + ","
# + SOFC_Ain[3].ToString() + ","
# + SOFC_Ain[4].ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result stack (O2, N2, H2O, CO2, Ar)="
# + o2_stack.ToString() + ","
# + n2_stack.ToString() + ","
# + h2o_stack.ToString() + ","
# + co2_stack.ToString() + ","
# + ar_stack.ToString());
# SOFCMP2D4ROM.debugwrite.WriteLine("Air side, calculated result fresh (O2, N2, H2O, CO2, Ar)="
# + o2_fresh.ToString() + ","
# + n2_fresh.ToString() + ","
# + h2o_fresh.ToString() + ","
# + co2_fresh.ToString() + ","
# + ar_fresh.ToString());
# }
#% Print outputs
# print("Fresh air in (J)",Fresh_Ain)
# print("Stack air in (T)",SOFC_Ain)
# print("Fuel in (F)",stack_fin)
# print("Fuel recy (R) (lb-mol/hr)",stack_recirc)
# print("Air recy (V) (lb-mol/hr)",stack_arecirc)
# The outputs used for SOFC-MP ROM
# print("Fuel cell inlet (P) (mol/s)",pref_CH4)
# print("Air cell outlet (U) (mol/s)",cell_aexit)
# print("Fuel cell outlet (Q) (mol/s)",cell_exit)
if Frec>0.9 or Frec<=0:
succs=0
else:
succs=1
#return(stack_fin,stack_ain/Const_Convert,ref_ain,stack_amix/Const_Convert,Frec,succs)
#return(stack_fin,SOFC_Ain,Fresh_Ain,Frec,succs)
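# Return per-cell streams in mol/s (fuel exit, air exit, pre-reformed fuel
# inlet) plus a success flag; Frec outside (0, 0.9] is treated as failure.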
return(cell_exit, cell_aexit, pref_CH4, succs)
def DNNROM_4cls(self, maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,testX_nrm2,input_num,output_num,DNNsize):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs= maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for class training data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
print("prediction for final testing data set size ", testX_nrm2.shape[0]," * ",testX_nrm2.shape[1])
# Network Parameters
DNNlayers=len(DNNsize)
print('Number of layers = ',DNNlayers)
if DNNlayers>10:
print('Number of layers must be <= 10')
return()
# Hidden-layer widths (up to 10 layers) come directly from DNNsize
layer_sizes = [input_num] + list(DNNsize)
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
#tf.compat.v1.disable_eager_execution()
# Store layers weight & bias
# Build the per-layer weight/bias variables in a loop; this reproduces the
# original 10 hand-written branches (one per supported depth) exactly.
weights = {}
biases = {}
for k in range(1, DNNlayers + 1):
    weights['h%d' % k] = tf.Variable(tf.random.normal([layer_sizes[k - 1], layer_sizes[k]], 0, 0.1, seed=seed))
    biases['b%d' % k] = tf.Variable(tf.random.normal([layer_sizes[k]], 0, 0.1, seed=seed))
weights['out'] = tf.Variable(tf.random.normal([layer_sizes[DNNlayers], n_classes], 0, 0.1, seed=seed))
biases['out'] = tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
# Create model
def multilayer_perceptron(x):
    # Hidden layers use sigmoid activations (the original comment said ReLU);
    # the output layer is linear. The loop replaces the original per-depth
    # if-chains without changing the graph that is built.
    print(DNNlayers)
    layer = x
    for k in range(1, DNNlayers + 1):
        layer = tf.nn.sigmoid(tf.add(tf.matmul(layer, weights['h%d' % k]), biases['b%d' % k]))
    out_layer = tf.matmul(layer, weights['out']) + biases['out']
    return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
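# Mean-squared-error loss with Adam: despite the n_classes naming, this
# network is a regression ROM, not a classifier.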
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
#saver = tf.train.Saver()
#tf.reset_default_graph()
config = tf.ConfigProto(device_count={"CPU": 1}, # limit to num_cpu_core CPU usage
inter_op_parallelism_threads = 0,
intra_op_parallelism_threads = 28,
)
init = tf.global_variables_initializer()
start=time.time()
with tf.Session(config=config) as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
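# Note: total_len counts the full training set while the batches index
# X_train (the 80% split), and range(total_batch-1) drops the last batch,
# so part of the data goes unused in each epoch.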
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
test_p2=sess.run(pred, feed_dict={x: testX_nrm2})
#count cost convergence for validation
count_converge[epoch]=val_c
if epoch %2000 == 0 :
end=time.time()
print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c),' training time (s/2000 epochs) = ','{:.5f}'.format(end-start))
start=time.time()
#for validation set if no improvement then break
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
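# Early stopping: every 2000 epochs the validation cost is compared with its
# value 2000 epochs earlier; training stops on the first non-improvement.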
#saver.save(sess, r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\DNN')
#saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
test_p2=sess.run(pred, feed_dict={x: testX_nrm2})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#for k,v in zip(variables_names, values):
# print(k, v)
# for v in values:
# print(v)
sess.close()
tf.reset_default_graph()
return(test_p1,test_p2, values)
def DNNCls(self, maxiteration,trainX_nrm,trainY_nrm,testX_nrm,testY_nrm,input_num_units):
hidden_num_units = 500
output_num_units = 2
seed=88
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
print("DNN classification training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for final testing data set size ", testX_nrm.shape[0]," * ",testX_nrm.shape[1])
# define placeholders
xc = tf.placeholder(tf.float32, [None, input_num_units])
yc = tf.placeholder(tf.float32, [None, output_num_units])
# set remaining variables
epochs = maxiteration
batch_size = int(X_train.shape[0]/2) #1500
learning_rate = 0.001
### define weights and biases of the neural network
weights = {
'hidden': tf.Variable(tf.random_uniform([input_num_units, hidden_num_units],-1,1,seed=seed)),
#'hidden': tf.Variable(tf.random_normal([input_num_units, hidden_num_units], 0, 1,seed=seed)),
'output': tf.Variable(tf.random_normal([hidden_num_units, output_num_units],0, 0.1, seed=seed))
}
biases = {
#'hidden': tf.Variable(tf.random_normal([hidden_num_units], seed=seed)),
'hidden': tf.Variable(tf.random_uniform([hidden_num_units], -1,1,seed=seed)),
'output': tf.Variable(tf.random_normal([output_num_units], seed=seed))
}
#
hidden_layer = tf.add(tf.matmul(xc, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.sigmoid(hidden_layer)
tf.summary.histogram("weights_hidden",weights['hidden'])
tf.summary.histogram("biases_hidden",biases['hidden'])
tf.summary.histogram("layer_hidden", hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']
tf.summary.histogram("weights_output",weights['output'])
tf.summary.histogram("biases_output",biases['output'])
tf.summary.histogram("layer_output", output_layer)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output_layer, labels=yc))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
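# Two-class softmax cross-entropy with Adam; validation accuracy (not the
# loss) drives the early-stopping check below.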
pred=output_layer
init = tf.global_variables_initializer()
#write this after all the summary
#merged = tf.summary.merge_all()
#writer = tf.summary.FileWriter(cwd)
#saver = tf.train.Saver()
# convert output scalars to one-hot vectors https://stackoverflow.com/questions/43543594/label-scalar-into-one-hot-in-tensorr-flow-code
def dense_to_one_hot(labels_dense, num_classes=2):
"""Convert class labels from scalars to one-hot vectors"""
num_labels = labels_dense.shape[0]
#index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
for ii in range(num_labels):
labels_one_hot[ii,int(labels_dense[ii])]=1
#labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
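# Example (num_classes=2): dense_to_one_hot(np.array([0., 1., 1.])) returns
# [[1, 0], [0, 1], [0, 1]].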
prev_cost=0
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
avg_cost = 0
total_batch = int(X_train.shape[0]/batch_size)
for i in range(total_batch):
batch_x = X_train[i*batch_size:(i+1)*batch_size,]
batch_y = y_train[i*batch_size:(i+1)*batch_size,]
batch_y = dense_to_one_hot(batch_y)
_, c = sess.run([optimizer, cost], feed_dict = {xc: batch_x, yc: batch_y})
avg_cost += c / total_batch
#write tensorboard summary
#summary_avg_cost = tf.Summary()
#summary_avg_cost.value.add(tag="avg_cost", simple_value=avg_cost)
#writer.add_summary(summary_avg_cost, epoch)
#writer.add_summary(summary, epoch)
#find predictions on val set: argmax picks the category index (works for more than 2 classes)
pred_temp = tf.equal(tf.argmax(output_layer, 1), tf.argmax(yc, 1))
accuracy = tf.reduce_mean(tf.cast(pred_temp, "float"))
val_acc=accuracy.eval({xc: val_x, yc: dense_to_one_hot(val_y)})
test_acc=accuracy.eval({xc: testX_nrm, yc: dense_to_one_hot(testY_nrm)})
#print ("Validation Accuracy:", accuracy.eval({x: val_x, y: dense_to_one_hot(val_y)}))
if epoch %2000 ==0 :print ('Epoch:', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost)," Validation accuracy:", val_acc," Test accuracy:",test_acc)
if epoch == epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 and val_acc<=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_acc
test_p1=sess.run(pred, feed_dict={xc: testX_nrm})
test_p0=sess.run(tf.argmax(test_p1,1))
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#saver.save(sess, DNNcls_save_file)
sess.close()
tf.reset_default_graph()
return(val_acc,test_acc,test_p0, values)
def DNNROM(self, maxiteration,trainX_nrm,trainY_nrm,testX_nrm1,input_num,output_num,DNNsize):
split_size = int(trainX_nrm.shape[0]*0.8)
X_train, val_x = trainX_nrm[:split_size],trainX_nrm[split_size:]
y_train, val_y = trainY_nrm[:split_size], trainY_nrm[split_size:]
learning_rate = 0.001
training_epochs = maxiteration
batch_size = int(X_train.shape[0]/3)
total_len=trainX_nrm.shape[0]
seed=88
print("DNN ROM training start ...")
print("training data set size ", X_train.shape[0]," * ",X_train.shape[1])
print("validation data set size", val_x.shape[0]," * ",val_x.shape[1])
print("prediction for testing data set size", testX_nrm1.shape[0]," * ",testX_nrm1.shape[1])
# Network Parameters
DNNlayers=len(DNNsize)
print('Number of layers = ',DNNlayers)
if DNNlayers>10:
print('Number of layers must be <= 10')
return()
# Hidden-layer widths (up to 10 layers) come directly from DNNsize
layer_sizes = [input_num] + list(DNNsize)
n_input = input_num
n_classes = output_num
# tf Graph input
x = tf.placeholder("float", [None, n_input],name="x")
y = tf.placeholder("float", [None, n_classes])
#tf.compat.v1.disable_eager_execution()
# Store layers weight & bias
# Build the per-layer weight/bias variables in a loop; this reproduces the
# original 10 hand-written branches (one per supported depth) exactly.
weights = {}
biases = {}
for k in range(1, DNNlayers + 1):
    weights['h%d' % k] = tf.Variable(tf.random.normal([layer_sizes[k - 1], layer_sizes[k]], 0, 0.1, seed=seed))
    biases['b%d' % k] = tf.Variable(tf.random.normal([layer_sizes[k]], 0, 0.1, seed=seed))
weights['out'] = tf.Variable(tf.random.normal([layer_sizes[DNNlayers], n_classes], 0, 0.1, seed=seed))
biases['out'] = tf.Variable(tf.random.normal([n_classes], 0, 0.1, seed=seed))
# Create model
def multilayer_perceptron(x):
    # Hidden layers use sigmoid activations (the original comment said ReLU);
    # the output layer is linear. The loop replaces the original per-depth
    # if-chains without changing the graph that is built.
    print(DNNlayers)
    layer = x
    for k in range(1, DNNlayers + 1):
        layer = tf.nn.sigmoid(tf.add(tf.matmul(layer, weights['h%d' % k]), biases['b%d' % k]))
    out_layer = tf.matmul(layer, weights['out']) + biases['out']
    return out_layer
# Construct model
pred = multilayer_perceptron(x)
cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
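# Same regression setup as DNNROM_4cls, but with a single test set and the
# same 2000-epoch early-stopping rule.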
# Run the graph in the session
predict = np.array([])
count_converge= [0] * training_epochs
prev_cost=10000000.
#saver = tf.train.Saver()
#tf.reset_default_graph()
config = tf.ConfigProto(device_count={"CPU": 1}, # expose a single CPU device to the session
inter_op_parallelism_threads = 0, # 0: let TensorFlow size the inter-op thread pool
intra_op_parallelism_threads = 28, # cap per-op parallelism at 28 threads
)
init = tf.global_variables_initializer()
start=time.time()
with tf.Session(config=config) as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(total_len/batch_size)
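# note: the loop below runs total_batch-1 batches (the last batch and any remainder
# are skipped), while avg_cost is still normalized by total_batch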
for i in range(total_batch-1):
batch_x = X_train[i*batch_size:(i+1)*batch_size]
batch_y = y_train[i*batch_size:(i+1)*batch_size]
_, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch==training_epochs-1:
predict = np.append(predict, p)
# print ('epoch', (epoch+1), 'cost =', '{:.5f}'.format(avg_cost))
val_c, val_p=sess.run([cost, pred], feed_dict={x: val_x, y: val_y})
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
# record the validation cost to track convergence
count_converge[epoch]=val_c
if epoch %2000 == 0 :
end=time.time()
print ('epoch ',(epoch+1),' training cost =','{:.5f}'.format(avg_cost),' validation cost =', '{:.5f}'.format(val_c),' training time (s/2000epochs)= ','{:.5f}'.format(end-start))
start=time.time()
# early stopping: break if the validation cost has not improved since the last check
if epoch == training_epochs-1:
print('break the loop at maximum iteration')
if epoch %2000 ==0 and val_c>=prev_cost:
break
#print("val cost increase !!!")
if epoch %2000 ==0:
prev_cost=val_c
#saver.save(sess, r'E:\SOFC\ARPA-E\Work2020\codes\DNN_rom\DNN')
#saver.save(sess, DNN_save_file)
test_p1=sess.run(pred, feed_dict={x: testX_nrm1})
variables_names =[v.name for v in tf.trainable_variables()]
values = sess.run(variables_names)
#for k,v in zip(variables_names, values):
# print(k, v)
sess.close()
tf.reset_default_graph()
return(test_p1, values)
def summarize_SimuResult(self, source_path, indcase, exclude_case = 1, display_detail = False):
'''
The function extracts simulation results
exclude_case = -1: all cases included
exclude_case = 0: exclude failed cases only
exclude_case = 1: exclude both failed and non-converged cases
'''
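# Example usage (hypothetical path and case range; indcase is 1-based):
#   self.summarize_SimuResult('/path/to/study', indcase = list(range(1, 65)), exclude_case = 1)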
print('############################################################\
\nSummarize simulation results\
\n############################################################')
## Step 1: load simulation outputs to Y4kriging
numcase4kriging = 0 # number of cases for kriging
indcase4kriging = [] # index of cases for kriging, start from 1
S4kriging = None # simulation inputs for kriging
Y4kriging = None # simulation outputs for kriging
for icase in indcase:
# load SOFC_MP_ROM.dat to df1
strcase = 'Case'+str(icase-1)+'Value'
inputfilename = source_path+'/Cases/Case'+str(icase-1).zfill(5)+'/SOFC_MP_ROM.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
if len(lines) == 0:
continue #print('Empty case')
if lines[1].strip() == '#FAILED':
continue #print('"preprocessor" failed case')
df0 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
df1 = pd.DataFrame(np.array([['1a', '1b']]),columns=['Name', strcase])
for j in range(len(lines)):
if j>1: # skip first two lines
str01 = lines[j].split('=')
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
if len(str01) == 1: continue
# convert variables in SOFC_MP_ROM.dat to xxx_xxx format
str_tmp = str01[0].strip().split()
str_tmp = '_'.join(str_tmp)
df0['Name']=str_tmp
df0[strcase]=float(str01[1])
if j==2:
df1["Name"]=df0["Name"]
df1[strcase]=df0[strcase]
else:
df1=pd.concat([df1,df0],sort=False, ignore_index=True)
# exclude failed or non-converged cases
if int(df1.loc[0, strcase]) >= exclude_case: # first record in SOFC_MP_ROM.dat is the SimulationStatus flag
numcase4kriging += 1
indcase4kriging.append(icase)
if numcase4kriging == 1:
Y4kriging = df1
else:
Y4kriging = pd.concat([Y4kriging, df1[strcase]], sort=False, axis=1)
## Step 2: load simulation inputs to S4kriging
inputfilename = source_path+'/LHS.dat'
if os.path.exists(inputfilename):
text_input=open(inputfilename,"r")
lines=text_input.readlines()
for j in range(len(lines)):
if j == 1:
list_tmp = lines[j].strip().split()
list_tmp = list_tmp[2:] # 0: case; 1: No.
df2 = pd.DataFrame(list_tmp,columns=['Name'])
if j > 1:
list_tmp = lines[j].strip().split()
strcase = 'Case'+str(int(list_tmp[0])-1)+'Value'
list_tmp = list_tmp[1:] # 0: case No.
df2[strcase] = list_tmp
S4kriging = df2
## Step 3: display simulation input and output
if exclude_case == 1:
print('Converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
elif exclude_case == 0:
print('Converged and non-converged simulation results are summarized from '+ str(numcase4kriging)+' cases:')
else:
print('Simulation results are summarized from '+ str(numcase4kriging)+' cases:')
print(*indcase4kriging)
print('\nSelect from the following input variables for training:')
for i in range(S4kriging.index.size):
print(i+1, ':', S4kriging.loc[i, 'Name'], end = '\t\n')
print('\nSelect from the following output variables for training:')
for i in range(Y4kriging.index.size):
print(i+1, ':', Y4kriging.loc[i, 'Name'], end = '\t\n')
if display_detail == True:
print('\n')
print(S4kriging)
print('\n')
print(Y4kriging)
## Step 4: create allResults.dat
indS = list(S4kriging.index)
indY = list(Y4kriging.index)
indS = [x+1 for x in indS]
indY = [x+1 for x in indY]
if len(indcase4kriging) == 0 or len(indS) == 0 or len(indY) == 0:
print('Error: No data available for training')
with open(self.allresultsFile, 'w') as f:
for i in indS:
f.write(S4kriging.loc[i-1, 'Name'] + '\t')
for i in indY:
f.write(Y4kriging.loc[i-1, 'Name'] + '\t')
f.write('\n')
for i in indcase4kriging:
strcase = 'Case'+str(i-1)+'Value'
for j in indS:
f.write('{:11.4E}\t'.format(float(S4kriging.loc[j-1, strcase])))
for j in indY:
f.write('{:11.4E}\t'.format(float(Y4kriging.loc[j-1, strcase])))
f.write('\n')
with open(self.allresults_infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
def file_read(self, FileName):
'''
This function loads the kriginginputFile,
infoFile and predictioninputFile
'''
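# Expected file layout: one header line of variable names followed by rows of
# whitespace-separated numeric values, e.g. (values hypothetical):
#   Average_CurrentDensity   Stack_Fuel_Utilization
#   4.0000E+03   8.0000E-01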
namearray = []
valuearray = []
with open(FileName) as f:
i = 0
for line in f.readlines():
if i == 0:
namearray = line.strip().split()
else:
linestr = line.strip().split()
linenum = [float(lineele) for lineele in linestr]
valuearray.append(linenum)
i += 1
return namearray, np.array(valuearray)
def variables(self):
print('input variables:')
for i in range(len(self.Sname)):
print(i+1, ':', self.Sname[i], end = '\t\n')
print('\noutput variables:')
for i in range(len(self.Yname)):
print(i+1, ':', self.Yname[i], end = '\t\n')
def variable_options(self, display = False):
names_input = [
"Average_CellVoltage",
"Average_CurrentDensity",
"BackEnvironmentT",
"BottomEnvironmentT",
"CellFuelFlowRate",
"CellOxidantFlowRate",
"FrontEnvironmentT",
"Fuel_Utilization",
"FuelH2",
"FuelH2O",
"FuelCO",
"FuelCO2",
"FuelCH4",
"FuelN2",
"FuelTemperature",
"FuelTOnTop",
"FuelRecyclePercent",
"FuelHTXEffectiveness",
"FuelNGTemperature",
"FuelNGHTXDeltaT",
"Internal_Reforming",
"nCells",
"Oxidant_Recirculation",
"OxidantRecyclePercent",
"OxygenToCarbon_Ratio",
"OxidantO2",
"OxidantN2",
"OxidantH2O",
"OxidantCO2",
"OxidantAr",
"OxidantTemperature",
"OxidantTOnTop",
"PreReform",
"SideEnvironmentT",
"Simulation_Option",
"Stack_Fuel_Utilization",
"Stack_Oxidant_Utilization",
"StackFuelFlowRate",
"StackFuelFlowRateH2O",
"StackFuelFlowRateCO",
"StackFuelFlowRateCO2",
"StackFuelFlowRateCH4",
"StackFuelFlowRateH2",
"StackFuelFlowRateN2",
"StackOxidantFlowRate",
"StackOxidantFlowRateO2",
"StackOxidantFlowRateN2",
"StackOxidantFlowRateH2O",
"StackOxidantFlowRateCO2",
"StackOxidantFlowRateAr",
"StackVoltage",
"SystemPressure",
"TopEnvironmentT",
"VGRRate",
"VGRTemperature",
"VGRH2OPassRate",
"VGRH2PassRate",
"VGRCO2CaptureRate",
"VGRCOConvertRate"
]
units_input = [
"V",
"A/m^2",
"C",
"C",
"mol/s",
"mol/s",
"C",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"%",
"-",
"C",
"C",
"-",
"-",
"-",
"%",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"C",
"C",
"-",
"C",
"-",
"-",
"-",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"mol/s",
"V",
"atm",
"C",
"-",
"C",
"-",
"-",
"-",
"-"
]
names_output = [
'SimulationStatus',
'Stack_Voltage',
'Avg_cell_voltage',
'Stack_Current',
'Avg_current_density',
'Max_current_density',
'Min_current_density',
'Avg_Cell_Temperature',
'Max_Cell_Temperature',
'Min_Cell_Temperature',
'Delta_Cell_Temperature',
'Outlet_Fuel_Temperature',
'Delta_Fuel_Temperature',
'Outlet_Air_Temperature',
'Delta_Air_Temperature',
'Air_Heat_Exchanger_Effectiveness',
'Fuel_Utilization',
'Air_Utilization',
'Outlet_Fuel_Flowrate',
'Outlet_Fuel_H2',
'Outlet_Fuel_H2O',
'Outlet_Fuel_CO',
'Outlet_Fuel_CO2',
'Outlet_Fuel_CH4',
'Outlet_Fuel_N2',
'Outlet_Air_Flowrate',
'Outlet_Air_O2',
'Outlet_Air_N2',
'Outlet_Air_H2O',
'Outlet_Air_CO2',
'Outlet_Air_Ar',
'Total_Power',
'Air_Enthalpy_Change',
'Fuel_Enthalpy_Change',
'External_Heat',
'Electrical_Efficiency',
'Stack_Efficiency',
'Air_Inlet_Temperature',
'FSI_Temperature',
'FSI_Flowrate',
'FSI_H2_MF',
'FSI_H2O_MF',
'FSI_CO_MF',
'FSI_CO2_MF',
'FSI_CH4_MF',
'FSI_N2_MF',
'Fuel_Temperature_after_Mix',
'Fuel_Temperature_before_Gibbs_Reactor',
'Fuel_Heat_Exchanger_Effectiveness'
]
units_output = [
'-',
'V',
'V',
'A',
'A/m2',
'A/m2',
'A/m2',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'K',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'W',
'W',
'W',
'W',
'-',
'-',
'K',
'K',
'mol/s',
'-',
'-',
'-',
'-',
'-',
'-',
'K',
'K',
'-'
]
if display == True:
print('Options of input variable:')
for i in range(len(names_input)):
print(i+1, ':', names_input[i]+', ['+units_input[i]+']', end = '\t\n')
print('Options of output variable:')
for i in range(len(names_output)):
print(i+1, ':', names_output[i]+', ['+units_output[i]+']', end = '\t\n')
return names_input, units_input, names_output, units_output
def buildROM(self, frac4ROM = 80, preprocessor_name = None, igfc = None,
filter_enabled = True, z_thres = 5, inputbasefilename = None):
'''
The function builds the ROM for the selected input/output variables
'''
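# Example usage (hypothetical settings; 'NGFC_ccs' is one of the supported "preprocessor" names):
#   self.buildROM(frac4ROM = 80, preprocessor_name = 'NGFC_ccs', filter_enabled = True, z_thres = 5)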
print('############################################################\
\nBuild the ROM\
\n############################################################')
if not os.path.exists(self.allresultsFile) or not os.path.exists(self.allresults_infoFile):
sys.exit('Code terminated: essential files missing')
################## Step 1: train the classifier ##################
SYname, SYvalue = self.file_read(self.allresultsFile)
infoname, infovalue = self.file_read(self.allresults_infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
Svalue = copy.deepcopy(SYvalue[:, :S_col])
Yvalue = copy.deepcopy(SYvalue[:, S_col:])
## 1.1 determine indS, indY
indS = list(range(1, S_col+1))
indY = []
for i in range(Y_col):
Y_tmp = Yvalue[:, i]
if len(np.unique(Y_tmp))>5:
indY.append(i+1)
indS_index = [i-1 for i in indS]
indY_index = [i-1 for i in indY]
## 1.2 determine if enabling classifier or not
if Yname[0] == 'SimulationStatus':
cls_enabled = True
else:
cls_enabled = False
## 1.3-1.11: call "preprocessor", train the classifier, etc.
if cls_enabled == True:
## 1.3 split dataset into 3 sets
if frac4ROM >= 0:
size_tmp1 = int(S_row*frac4ROM/100.0)
size_tmp2 = int(size_tmp1*50.0/100.0)
size_tmp3 = int(S_row*(1-frac4ROM/100.0))
else:
size_tmp1 = int(S_row*0.8)
size_tmp2 = int(size_tmp1*50.0/100.0)
size_tmp3 = int(S_row*0.2)
## 1.4 change all SimulationStatus = -1 to 0
for i in range(S_row):
if Yvalue[i, 0] == -1: Yvalue[i, 0] = 0
Sname_4cls = [ Sname[i] for i in indS_index]
Yname_4cls = [ Yname[i] for i in indY_index]
S_4cls_ROM_train_tmp = Svalue[:size_tmp2, :]
Y_4cls_ROM_train_tmp = Yvalue[:size_tmp2, :]
S_4cls_ROM_train_tmp = S_4cls_ROM_train_tmp[Y_4cls_ROM_train_tmp[:, 0] == 1, :]
Y_4cls_ROM_train_tmp = Y_4cls_ROM_train_tmp[Y_4cls_ROM_train_tmp[:, 0] == 1, :]
S_4cls_ROM_train = S_4cls_ROM_train_tmp[:, indS_index]
Y_4cls_ROM_train = Y_4cls_ROM_train_tmp[:, indY_index]
S_4cls_ROM_vali_tmp = Svalue[size_tmp2:size_tmp1, :]
Y_4cls_ROM_vali_tmp = Yvalue[size_tmp2:size_tmp1, :]
S_4cls_ROM_vali_cls_train = S_4cls_ROM_vali_tmp[:, indS_index]
Y_4cls_ROM_vali = Y_4cls_ROM_vali_tmp[:, indY_index]
Y_4cls_cls_train = Y_4cls_ROM_vali_tmp[:, 0]
S_4cls_vali = Svalue[S_row-size_tmp3:, indS_index]
Y_4cls_vali = Yvalue[S_row-size_tmp3:, 0]
## 1.5 normalize dataset
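# z-score normalization; the mean/std are computed on the ROM training split only and reused for the other splits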
meanS=S_4cls_ROM_train.mean(axis=0)
stdS=S_4cls_ROM_train.std(axis=0)
meanY=Y_4cls_ROM_train.mean(axis=0)
stdY=Y_4cls_ROM_train.std(axis=0)
S_4cls_ROM_train_nrm=(S_4cls_ROM_train-meanS)/stdS
Y_4cls_ROM_train_nrm=(Y_4cls_ROM_train-meanY)/stdY
S_4cls_ROM_vali_cls_train_nrm=(S_4cls_ROM_vali_cls_train-meanS)/stdS
S_4cls_vali_nrm=(S_4cls_vali-meanS)/stdS
## 1.6 call DNN rom
maxiteration = 50000
DNNsize = [64, 200, 200, 256]
Y_4cls_ROM_vali_cls_train_nrm_pred, Y_4cls_vali_nrm_pred, cls_ROM_values = self.DNNROM_4cls(maxiteration, S_4cls_ROM_train_nrm, Y_4cls_ROM_train_nrm, S_4cls_ROM_vali_cls_train_nrm, S_4cls_vali_nrm, len(indS), len(indY), DNNsize)
## 1.7 call preprocessor
succs_cls_training = np.zeros((S_4cls_ROM_vali_cls_train_nrm.shape[0],1),dtype=np.float64)
succs_cls_testing = np.zeros((S_4cls_vali_nrm.shape[0],1),dtype=np.float64)
# load inputbasefilename (base.dat or input000.dat)
if inputbasefilename != None:
text_file=open(inputbasefilename,"r")
lines = text_file.readlines()
df2 = pd.DataFrame(np.array([['1a', '1b', '1c']]),columns=['Name', 'Value', 'Updated'])
df3 = pd.DataFrame(columns=['Name', 'Value', 'Updated']) # currently, "Updated" feature not active
for j in range(len(lines)):
str01 = lines[j].split('=')
if len(str01) == 2:
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
try:
df2['Name']=str01[0]
df2['Value']=float(str01[1])
df2['Updated']=False
df3=pd.concat([df3,df2],sort=False,ignore_index=True)
except:
pass
# find index of preprocessor inputs
try:
index1 = Sname_4cls.index("Average_CurrentDensity")
except:
index1 = -1
try:
J_fix = df3.loc[df3["Name"]=="Average_CurrentDensity","Value"].iloc[0]/10.0
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index2 = Sname_4cls.index("Stack_Fuel_Utilization")
except:
index2 = -1
try:
FU_fix = df3.loc[df3["Name"]=="Stack_Fuel_Utilization","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index3 = Sname_4cls.index("Stack_Oxidant_Utilization")
except:
index3 = -1
try:
AU_fix = df3.loc[df3["Name"]=="Stack_Oxidant_Utilization","Value"].iloc[0]/10.0
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index4 = Sname_4cls.index("OxygenToCarbon_Ratio")
except:
index4 = -1
try:
OCR_fix = df3.loc[df3["Name"]=="OxygenToCarbon_Ratio","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index5 = Sname_4cls.index("Internal_Reforming")
except:
index5 = -1
try:
IR_fix = df3.loc[df3["Name"]=="Internal_Reforming","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index6 = Sname_4cls.index("Oxidant_Recirculation")
except:
index6 = -1
try:
Arec_fix = df3.loc[df3["Name"]=="Oxidant_Recirculation","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index7= Sname_4cls.index("PreReform")
except:
index7 = -1
try:
PreReform_fix = df3.loc[df3["Name"]=="PreReform","Value"].iloc[0]
except:
# sys.exit('Code terminated: "preprocessor" input not defined')
PreReform_fix=0.2 #[]
try:
index8= Sname_4cls.index("cellsize")
except:
index8 = -1
try:
cellsize_fix = df3.loc[df3["Name"]=="cellsize","Value"].iloc[0]
except:
# sys.exit('Code terminated: "preprocessor" input not defined')
cellsize_fix=550 #[cm2]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
try:
index9 = Sname_4cls.index("VGRRate")
except:
index9 = -1
try:
VGR_fix = df3.loc[df3["Name"]=="VGRRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index11 = Sname_4cls.index("VGRH2OPassRate")
except:
index11 = -1
try:
H2OCap_fix = 1-df3.loc[df3["Name"]=="VGRH2OPassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index12 = Sname_4cls.index("VGRCO2CaptureRate")
except:
index12 = -1
try:
CO2Cap_fix = df3.loc[df3["Name"]=="VGRCO2CaptureRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index13 = Sname_4cls.index("VGRH2PassRate")
except:
index13 = -1
try:
H2Cap_fix = 1-df3.loc[df3["Name"]=="VGRH2PassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index14 = Sname_4cls.index("VGRCOConvertRate")
except:
index14 = -1
try:
WGS_fix = df3.loc[df3["Name"]=="VGRCOConvertRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# find value of preprocessor inputs
for i in range(S_4cls_ROM_vali_cls_train_nrm.shape[0]):
if index1 == -1:
J = J_fix
else:
J = S_4cls_ROM_vali_cls_train[i,index1]/10.0 # mA/cm2
if index2 == -1:
FU = FU_fix
else:
FU = S_4cls_ROM_vali_cls_train[i,index2]
if index3 == -1:
AU = AU_fix
else:
AU = S_4cls_ROM_vali_cls_train[i,index3]
if index4 == -1:
OCR = OCR_fix
else:
OCR = S_4cls_ROM_vali_cls_train[i,index4]
if index5 == -1:
IR = IR_fix
else:
IR = S_4cls_ROM_vali_cls_train[i,index5]
if index6 == -1:
Arec = Arec_fix
else:
Arec = S_4cls_ROM_vali_cls_train[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = S_4cls_ROM_vali_cls_train[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = S_4cls_ROM_vali_cls_train[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
if index9 == -1:
VGR = VGR_fix
else:
VGR = S_4cls_ROM_vali_cls_train[i,index9]
if index11 == -1:
H2OCap = H2OCap_fix
else:
H2OCap = 1-S_4cls_ROM_vali_cls_train[i,index11]
if index12 == -1:
CO2Cap = CO2Cap_fix
else:
CO2Cap = S_4cls_ROM_vali_cls_train[i,index12]
if index13 == -1:
H2Cap = H2Cap_fix
else:
H2Cap = 1-S_4cls_ROM_vali_cls_train[i,index13]
if index14 == -1:
WGS = WGS_fix
else:
WGS = S_4cls_ROM_vali_cls_train[i,index14]
if i%1000 == 0: print(i," cls_training")
if preprocessor_name == None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
succs_cls_training[i,0] = succ
mean_succs = succs_cls_training.mean(axis=0)
std_succs = succs_cls_training.std(axis=0)
succs_cls_training_nrm = (succs_cls_training-mean_succs)/std_succs
for i in range(S_4cls_vali_nrm.shape[0]):
if index1 == -1:
J = J_fix
else:
J = S_4cls_vali[i,index1]/10.0 # mA/cm2
if index2 == -1:
FU = FU_fix
else:
FU = S_4cls_vali[i,index2]
if index3 == -1:
AU = AU_fix
else:
AU = S_4cls_vali[i,index3]
if index4 == -1:
OCR = OCR_fix
else:
OCR = S_4cls_vali[i,index4]
if index5 == -1:
IR = IR_fix
else:
IR = S_4cls_vali[i,index5]
if index6 == -1:
Arec = Arec_fix
else:
Arec = S_4cls_vali[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = S_4cls_vali[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = S_4cls_vali[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
if index9 == -1:
VGR = VGR_fix
else:
VGR = S_4cls_vali[i,index9]
if index11 == -1:
H2OCap = H2OCap_fix
else:
H2OCap = 1-S_4cls_vali[i,index11]
if index12 == -1:
CO2Cap = CO2Cap_fix
else:
CO2Cap = S_4cls_vali[i,index12]
if index13 == -1:
H2Cap = H2Cap_fix
else:
H2Cap = 1-S_4cls_vali[i,index13]
if index14 == -1:
WGS = WGS_fix
else:
WGS = S_4cls_vali[i,index14]
if i%1000 == 0: print(i," cls_testing")
if preprocessor_name == None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
succs_cls_testing[i,0] = succ
mean_succs=succs_cls_testing.mean(axis=0)
std_succs=succs_cls_testing.std(axis=0)
succs_cls_testing_nrm=(succs_cls_testing-mean_succs)/std_succs
## 1.8 prepare classification data
data_cls_training_y = Y_4cls_cls_train
data_cls_training_x = np.concatenate((S_4cls_ROM_vali_cls_train_nrm,Y_4cls_ROM_vali_cls_train_nrm_pred),axis=1)
data_cls_testing_x = np.concatenate((S_4cls_vali_nrm, Y_4cls_vali_nrm_pred),axis=1)
data_cls_testing_y = Y_4cls_vali
## 1.9 perform classification with all inputs + all outputs + mbm decision
data_cls_training_x_with_mbm = np.concatenate((data_cls_training_x,succs_cls_training_nrm),axis=1)
data_cls_testing_x_with_mbm = np.concatenate((data_cls_testing_x,succs_cls_testing_nrm),axis=1)
maxiteration = 50000
acc_val_mbm,acc_test_mbm,test_prediction_mbm, cls_values = self.DNNCls(maxiteration, data_cls_training_x_with_mbm, data_cls_training_y, data_cls_testing_x_with_mbm, data_cls_testing_y, len(indS)+len(indY)+1)
## 1.10 show classifier accuracy
print('Classifier accuracy with vali-data: ', acc_val_mbm)
print('Classifier accuracy with test-data: ', acc_test_mbm)
# print(test_prediction_mbm)
## 1.11 write classifier as text file
trainingoutput_file = self.outtrainingFile
trainingoutput_file_cls = trainingoutput_file.replace(".dat", "")+'_cls.dat'
trainingoutput_file_cls_ROM = trainingoutput_file.replace(".dat", "")+'_cls_ROM.dat'
print('length of cls_values: ', len(cls_values))
w1,w2,b1,b2 = cls_values
with open(trainingoutput_file_cls, 'w') as f:
f.write('w1\n')
values_tmp = np.copy(w1)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w2\n')
values_tmp = np.copy(w2)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('b1\n')
values_tmp = np.copy(b1)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b2\n')
values_tmp = np.copy(b2)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('end\n')
print('length of cls_ROM_values: ', len(cls_ROM_values))
w1,w2,w3,w4,w5,b1,b2,b3,b4,b5 = cls_ROM_values
with open(trainingoutput_file_cls_ROM, 'w') as f:
f.write('w1\n')
values_tmp = np.copy(w1)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w2\n')
values_tmp = np.copy(w2)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w3\n')
values_tmp = np.copy(w3)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w4\n')
values_tmp = np.copy(w4)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w5\n')
values_tmp = np.copy(w5)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('b1\n')
values_tmp = np.copy(b1)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b2\n')
values_tmp = np.copy(b2)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b3\n')
values_tmp = np.copy(b3)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b4\n')
values_tmp = np.copy(b4)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b5\n')
values_tmp = np.copy(b5)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('meanS\n')
values_tmp = np.copy(meanS)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('meanY\n')
values_tmp = np.copy(meanY)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('stdS\n')
values_tmp = np.copy(stdS)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('stdY\n')
values_tmp = np.copy(stdY)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('end\n')
################## Step 2: train the ROM ##################
## 2.1 determine indS, indY and determine if enabling ROM training
indS = list(range(1, S_col+1))
indY = []
Yname_4indY = ["Outlet_Fuel_Flowrate", "Outlet_Fuel_H2",
"Outlet_Fuel_H2O", "Outlet_Fuel_CO",
"Outlet_Fuel_CO2", "Outlet_Fuel_CH4",
"Outlet_Fuel_N2", "Outlet_Air_Flowrate",
"Outlet_Air_O2", "Outlet_Air_N2",
"Outlet_Air_H2O", "Outlet_Air_CO2",
"Outlet_Air_Ar", "FSI_Flowrate", "FSI_H2_MF",
"FSI_H2O_MF", "FSI_CO_MF", "FSI_CO2_MF",
"FSI_CH4_MF", "FSI_N2_MF"]
ROM_enabled = False
for i in range(Y_col):
Yname_tmp = Yname[i]
if Yname_tmp in Yname_4indY:
indY.append(i+1)
if len(indY) == len(Yname_4indY):
ROM_enabled = True # if any element in Yname_4indY is missing, disable ROM training
else:
print('a required output variable is missing; ROM training disabled')
indS_index = [i-1 for i in indS]
indY_index = [i-1 for i in indY]
## 2.2-2.9: call preprocessor, prepare training data, train the ROM model, etc.
if ROM_enabled == True:
## 2.2 prepare training data (simulation results)
if cls_enabled == True: # filter non-converged
SYvalue_cov = SYvalue[SYvalue[:, S_col] == 1, :]
else:
SYvalue_cov = SYvalue
if filter_enabled == True: # filter noise
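# iterative z-score filter: for each output column, flag rows with |z| > z_thres,
# replace them with the column mean, and repeat until no outliers remain;
# all flagged rows are then dropped from the dataset below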
SY_row_rm = []
for j in indY:
tmp_data = SYvalue_cov[:, S_col+j-1]
while(True):
z = np.abs(stats.zscore(tmp_data, axis = 0))
result = np.where(z > z_thres)
index = list(result[0])
# line removal list
if len(index) == 0: break
SY_row_rm += index
SY_row_rm = list(dict.fromkeys(SY_row_rm))
# replace outliers with mean
tmp_data[SY_row_rm] = np.mean(tmp_data)
# remove rows according to SY_row_rm
SYvalue_new = np.delete(SYvalue_cov, SY_row_rm, axis = 0)
print('Noise filter: trim ' + str(len(SY_row_rm)) + ' rows from a total of ' + str(len(SYvalue_cov)) + ' rows')
else:
SYvalue_new = SYvalue_cov
[S_row, Y_row, S_col, Y_col] = [len(SYvalue_new), len(SYvalue_new), int(infovalue[0,0]), int(infovalue[0,1])]
Svalue_new = copy.deepcopy(SYvalue_new[:, :S_col])
Yvalue_new = copy.deepcopy(SYvalue_new[:, S_col:])
# compute istep, numcrossvali, rndnumberlist
if frac4ROM >= 0:
numtraining = int(S_row*frac4ROM/100.0)
numcrossvali = S_row-numtraining
if numtraining < (2**len(indS)):
print('warning: "frac4ROM" is too low')
if numcrossvali > 0:
istep = int((S_row)/numcrossvali)
rndnumberlist =[]
restlist = list(range(S_row))
for i in range(1, numcrossvali+1):
rndnumberlist.append(i*istep-1)
restlist = [i for i in restlist if i not in rndnumberlist]
else:
sys.exit('Code terminated: the fraction of training dataset cannot be 100%')
else:
numtraining = S_row-1000
numcrossvali = S_row-numtraining
rndnumberlist = list(range(numtraining, S_row))
restlist = list(range(numtraining))
# split to training and validation data
Sname_4ROM = [ Sname[i] for i in indS_index]
Yname_4ROM = [ Yname[i] for i in indY_index]
temp = Svalue_new[restlist, :]
S_4ROM_train = temp[:, indS_index]
temp = Svalue_new[rndnumberlist, :]
S_4ROM_vali = temp[:, indS_index]
temp = Yvalue_new[restlist, :]
Y_4ROM_train = temp[:, indY_index]
temp = Yvalue_new[rndnumberlist, :]
Y_4ROM_vali = temp[:, indY_index]
## 2.3 prepare training data ("preprocessor" results)
preprocessor_result_train = np.zeros((len(restlist),len(indY)),dtype=np.float64)
preprocessor_result_vali = np.zeros((len(rndnumberlist),len(indY)),dtype=np.float64)
# load inputbasefilename (base.dat or input000.dat)
if inputbasefilename != None:
text_file=open(inputbasefilename,"r")
lines = text_file.readlines()
df2 = pd.DataFrame(np.array([['1a', '1b', '1c']]),columns=['Name', 'Value', 'Updated'])
df3 = pd.DataFrame(columns=['Name', 'Value', 'Updated']) # currently, "Updated" feature not active
for j in range(len(lines)):
str01 = lines[j].split('=')
if len(str01) == 2:
str01[0]=str01[0].rstrip()
str01[0]=str01[0].lstrip()
try:
df2['Name']=str01[0]
df2['Value']=float(str01[1])
df2['Updated']=False
df3=pd.concat([df3,df2],sort=False,ignore_index=True)
except:
pass
# find index of preprocessor inputs
try:
index1 = Sname_4ROM.index("Average_CurrentDensity")
except:
index1 = -1
try:
J_fix = df3.loc[df3["Name"]=="Average_CurrentDensity","Value"].iloc[0]/10.0
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index2 = Sname_4ROM.index("Stack_Fuel_Utilization")
except:
index2 = -1
try:
FU_fix = df3.loc[df3["Name"]=="Stack_Fuel_Utilization","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index3 = Sname_4ROM.index("Stack_Oxidant_Utilization")
except:
index3 = -1
try:
AU_fix = df3.loc[df3["Name"]=="Stack_Oxidant_Utilization","Value"].iloc[0]/10.0
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index4 = Sname_4ROM.index("OxygenToCarbon_Ratio")
except:
index4 = -1
try:
OCR_fix = df3.loc[df3["Name"]=="OxygenToCarbon_Ratio","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index5 = Sname_4ROM.index("Internal_Reforming")
except:
index5 = -1
try:
IR_fix = df3.loc[df3["Name"]=="Internal_Reforming","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index6 = Sname_4ROM.index("Oxidant_Recirculation")
except:
index6 = -1
try:
Arec_fix = df3.loc[df3["Name"]=="Oxidant_Recirculation","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index7= Sname_4ROM.index("PreReform")
except:
index7 = -1
try:
PreReform_fix = df3.loc[df3["Name"]=="PreReform","Value"].iloc[0]
except:
# sys.exit('Code terminated: "preprocessor" input not defined')
PreReform_fix=0.2 #[]
try:
index8= Sname_4ROM.index("cellsize")
except:
index8 = -1
try:
cellsize_fix = df3.loc[df3["Name"]=="cellsize","Value"].iloc[0]
except:
# sys.exit('Code terminated: "preprocessor" input not defined')
cellsize_fix=550 #[cm2]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
try:
index9 = Sname_4ROM.index("VGRRate")
except:
index9 = -1
try:
VGR_fix = df3.loc[df3["Name"]=="VGRRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index11 = Sname_4ROM.index("VGRH2OPassRate")
except:
index11 = -1
try:
H2OCap_fix = 1-df3.loc[df3["Name"]=="VGRH2OPassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index12 = Sname_4ROM.index("VGRCO2CaptureRate")
except:
index12 = -1
try:
CO2Cap_fix = df3.loc[df3["Name"]=="VGRCO2CaptureRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index13 = Sname_4ROM.index("VGRH2PassRate")
except:
index13 = -1
try:
H2Cap_fix = 1-df3.loc[df3["Name"]=="VGRH2PassRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index14 = Sname_4ROM.index("VGRCOConvertRate")
except:
index14 = -1
try:
WGS_fix = df3.loc[df3["Name"]=="VGRCOConvertRate","Value"].iloc[0]
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# call preprocessor for training data
for i in range(S_4ROM_train.shape[0]):
if index1 == -1:
J = J_fix
else:
J = S_4ROM_train[i,index1]/10.0 # mA/cm2
if index2 == -1:
FU = FU_fix
else:
FU = S_4ROM_train[i,index2]
if index3 == -1:
AU = AU_fix
else:
AU = S_4ROM_train[i,index3]
if index4 == -1:
OCR = OCR_fix
else:
OCR = S_4ROM_train[i,index4]
if index5 == -1:
IR = IR_fix
else:
IR = S_4ROM_train[i,index5]
if index6 == -1:
Arec = Arec_fix
else:
Arec = S_4ROM_train[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = S_4ROM_train[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = S_4ROM_train[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
if index9 == -1:
VGR = VGR_fix
else:
VGR = S_4ROM_train[i,index9]
if index11 == -1:
H2OCap = H2OCap_fix
else:
H2OCap = 1-S_4ROM_train[i,index11]
if index12 == -1:
CO2Cap = CO2Cap_fix
else:
CO2Cap = S_4ROM_train[i,index12]
if index13 == -1:
H2Cap = H2Cap_fix
else:
H2Cap = 1-S_4ROM_train[i,index13]
if index14 == -1:
WGS = WGS_fix
else:
WGS = S_4ROM_train[i,index14]
if preprocessor_name == None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
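# map the "preprocessor" species arrays onto the ROM output order (Yname_4indY):
# fuel outlet: total flow, H2, H2O, CO, CO2, CH4, N2; air outlet: total flow, O2, N2, H2O, CO2, Ar;
# fuel inlet (FSI): total flow and mole fractions in the same species order as the fuel outlet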
preprocessor_result_train[i,0] = np.sum(FuelOut)
preprocessor_result_train[i,1] = FuelOut[7]/np.sum(FuelOut)
preprocessor_result_train[i,2] = FuelOut[0]/np.sum(FuelOut)
preprocessor_result_train[i,3] = FuelOut[6]/np.sum(FuelOut)
preprocessor_result_train[i,4] = FuelOut[2]/np.sum(FuelOut)
preprocessor_result_train[i,5] = FuelOut[5]/np.sum(FuelOut)
preprocessor_result_train[i,6] = FuelOut[4]/np.sum(FuelOut)
preprocessor_result_train[i,7] = np.sum(AirOut)
preprocessor_result_train[i,8] = AirOut[3]/np.sum(AirOut)
preprocessor_result_train[i,9] = AirOut[4]/np.sum(AirOut)
preprocessor_result_train[i,10] = AirOut[0]/np.sum(AirOut)
preprocessor_result_train[i,11] = AirOut[2]/np.sum(AirOut)
preprocessor_result_train[i,12] = AirOut[1]/np.sum(AirOut)
preprocessor_result_train[i,13] = np.sum(FuelIn)
preprocessor_result_train[i,14] = FuelIn[7]/np.sum(FuelIn)
preprocessor_result_train[i,15] = FuelIn[0]/np.sum(FuelIn)
preprocessor_result_train[i,16] = FuelIn[6]/np.sum(FuelIn)
preprocessor_result_train[i,17] = FuelIn[2]/np.sum(FuelIn)
preprocessor_result_train[i,18] = FuelIn[5]/np.sum(FuelIn)
preprocessor_result_train[i,19] = FuelIn[4]/np.sum(FuelIn)
# # plot preprocessor results vs simulation results
# tempy1 = Y_4ROM_train[i,:].flatten()
# tempy2 = preprocessor_result_train[i,:].flatten()
# tempx = list(range(1, len(indY)+1))
# fig, ax = plt.subplots(figsize=(8,6))
# ax.plot(tempx, tempy1, 'ro-', linewidth = 2,
# markersize = 12, label = 'Simulation')
# ax.plot(tempx, tempy2, 'bd--', linewidth = 2,
# markersize = 12, label = 'Preprocessor')
# plt.legend(loc='upper left')
# ax.set(title = 'Results comparison of case '+str(i))
# FigureName = self.work_path + '/Case ' + str(i) +'.png'
# plt.savefig(FigureName)
# plt.show()
# call preprocessor for validation data
for i in range(S_4ROM_vali.shape[0]):
if index1 == -1:
J = J_fix
else:
J = S_4ROM_vali[i,index1]/10.0 # mA/cm2
if index2 == -1:
FU = FU_fix
else:
FU = S_4ROM_vali[i,index2]
if index3 == -1:
AU = AU_fix
else:
AU = S_4ROM_vali[i,index3]
if index4 == -1:
OCR = OCR_fix
else:
OCR = S_4ROM_vali[i,index4]
if index5 == -1:
IR = IR_fix
else:
IR = S_4ROM_vali[i,index5]
if index6 == -1:
Arec = Arec_fix
else:
Arec = S_4ROM_vali[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = S_4ROM_vali[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = S_4ROM_vali[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
if index9 == -1:
VGR = VGR_fix
else:
VGR = S_4ROM_vali[i,index9]
if index11 == -1:
H2OCap = H2OCap_fix
else:
H2OCap = 1-S_4ROM_vali[i,index11]
if index12 == -1:
CO2Cap = CO2Cap_fix
else:
CO2Cap = S_4ROM_vali[i,index12]
if index13 == -1:
H2Cap = H2Cap_fix
else:
H2Cap = 1-S_4ROM_vali[i,index13]
if index14 == -1:
WGS = WGS_fix
else:
WGS = S_4ROM_vali[i,index14]
if preprocessor_name == None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
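# same species-to-column mapping as in the training loop above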
preprocessor_result_vali[i,0] = np.sum(FuelOut)
preprocessor_result_vali[i,1] = FuelOut[7]/np.sum(FuelOut)
preprocessor_result_vali[i,2] = FuelOut[0]/np.sum(FuelOut)
preprocessor_result_vali[i,3] = FuelOut[6]/np.sum(FuelOut)
preprocessor_result_vali[i,4] = FuelOut[2]/np.sum(FuelOut)
preprocessor_result_vali[i,5] = FuelOut[5]/np.sum(FuelOut)
preprocessor_result_vali[i,6] = FuelOut[4]/np.sum(FuelOut)
preprocessor_result_vali[i,7] = np.sum(AirOut)
preprocessor_result_vali[i,8] = AirOut[3]/np.sum(AirOut)
preprocessor_result_vali[i,9] = AirOut[4]/np.sum(AirOut)
preprocessor_result_vali[i,10] = AirOut[0]/np.sum(AirOut)
preprocessor_result_vali[i,11] = AirOut[2]/np.sum(AirOut)
preprocessor_result_vali[i,12] = AirOut[1]/np.sum(AirOut)
preprocessor_result_vali[i,13] = np.sum(FuelIn)
preprocessor_result_vali[i,14] = FuelIn[7]/np.sum(FuelIn)
preprocessor_result_vali[i,15] = FuelIn[0]/np.sum(FuelIn)
preprocessor_result_vali[i,16] = FuelIn[6]/np.sum(FuelIn)
preprocessor_result_vali[i,17] = FuelIn[2]/np.sum(FuelIn)
preprocessor_result_vali[i,18] = FuelIn[5]/np.sum(FuelIn)
preprocessor_result_vali[i,19] = FuelIn[4]/np.sum(FuelIn)
## 2.4 prepare training data (differences between simulation and preprocessor results)
err_4ROM_train = preprocessor_result_train - Y_4ROM_train
err_4ROM_vali = preprocessor_result_vali - Y_4ROM_vali
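# the DNN ROM is trained on the residual (preprocessor minus simulation);
# at prediction time the predicted residual is subtracted from the preprocessor output (see step 2.8)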
meanS=S_4ROM_train.mean(axis=0)
stdS=S_4ROM_train.std(axis=0)
meanY=Y_4ROM_train.mean(axis=0)
stdY=Y_4ROM_train.std(axis=0)
meanerr=err_4ROM_train.mean(axis=0)
stderr=err_4ROM_train.std(axis=0)
S_4ROM_train_nrm=(S_4ROM_train-meanS)/stdS
S_4ROM_vali_nrm=(S_4ROM_vali-meanS)/stdS
Y_4ROM_train_nrm=(Y_4ROM_train-meanY)/stdY
err_4ROM_train_nrm=(err_4ROM_train-meanerr)/stderr
## 2.5 write to info.dat, intraining.dat and inCrossVali.dat
with open(self.infoFile, 'w') as f:
f.write('input_col\toutput_col\n')
f.write(str(len(indS))+'\t'+str(len(indY))+'\n')
f1 = open(self.intrainingFile, 'w')
f3 = open(self.incrossvaliFile, 'w')
for i in range(len(indS)):
f1.write(Sname_4ROM[i] + '\t')
f3.write(Sname_4ROM[i] + '\t')
for i in range(len(indY)):
f1.write(Yname_4ROM[i] + '\t')
f3.write(Yname_4ROM[i] + '\t')
f1.write('\n')
f3.write('\n')
for i in range(len(restlist)):
for j in range(len(indS)):
f1.write('{:11.4E}\t'.format(S_4ROM_train[i, j]))
for j in range(len(indY)):
f1.write('{:11.4E}\t'.format(Y_4ROM_train[i, j]))
f1.write('\n')
for i in range(len(rndnumberlist)):
for j in range(len(indS)):
f3.write('{:11.4E}\t'.format(S_4ROM_vali[i, j]))
for j in range(len(indY)):
f3.write('{:11.4E}\t'.format(Y_4ROM_vali[i, j]))
f3.write('\n')
f1.close()
f3.close()
# # write simulation results and "preprocessor" results
# traininginput_file = self.intrainingFile
# traininginput_file_simu = traininginput_file.replace(".dat", "")+'_simu.dat'
# traininginput_file_wrap = traininginput_file.replace(".dat", "")+'_wrap.dat'
# f1 = open(traininginput_file_simu, 'w')
# f3 = open(traininginput_file_wrap, 'w')
# for i in range(len(indS)):
# f1.write(Sname_4ROM[i] + '\t')
# f3.write(Sname_4ROM[i] + '\t')
# for i in range(len(indY)):
# f1.write(Yname_4ROM[i] + '\t')
# f3.write(Yname_4ROM[i] + '\t')
# f1.write('\n')
# f3.write('\n')
# for i in range(len(restlist)):
# for j in range(len(indS)):
# f1.write('{:11.4E}\t'.format(S_4ROM_train[i, j]))
# f3.write('{:11.4E}\t'.format(S_4ROM_train[i, j]))
# for j in range(len(indY)):
# f1.write('{:11.4E}\t'.format(Y_4ROM_train[i, j]))
# f3.write('{:11.4E}\t'.format(preprocessor_result_train[i, j]))
# f1.write('\n')
# f3.write('\n')
# f1.close()
# f3.close()
## 2.6 perform training and prediction
maxiteration = 50000
DNNsize = [32, 200, 200, 256]
err_4ROM_vali_nrm_pre, ROM_values = self.DNNROM(maxiteration, S_4ROM_train_nrm, err_4ROM_train_nrm, S_4ROM_vali_nrm, len(indS), len(indY), DNNsize)
## 2.7 save the built ROM model
print('length of ROM_values: ', len(ROM_values))
w1,w2,w3,w4,w5,b1,b2,b3,b4,b5 = ROM_values
with open(self.outtrainingFile, 'w') as f:
f.write('w1\n')
values_tmp = np.copy(w1)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w2\n')
values_tmp = np.copy(w2)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w3\n')
values_tmp = np.copy(w3)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w4\n')
values_tmp = np.copy(w4)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('w5\n')
values_tmp = np.copy(w5)
[row, col] = values_tmp.shape
for i in range(row):
for j in range(col-1):
f.write(str(values_tmp[i, j]) + ' ')
f.write(str(values_tmp[i, col-1]) + '\n')
f.write('\n')
f.write('b1\n')
values_tmp = np.copy(b1)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b2\n')
values_tmp = np.copy(b2)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b3\n')
values_tmp = np.copy(b3)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b4\n')
values_tmp = np.copy(b4)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('b5\n')
values_tmp = np.copy(b5)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('meanS\n')
values_tmp = np.copy(meanS)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('meanY\n')
values_tmp = np.copy(meanY)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('stdS\n')
values_tmp = np.copy(stdS)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('stdY\n')
values_tmp = np.copy(stdY)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('meanerr\n')
values_tmp = np.copy(meanerr)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('stderr\n')
values_tmp = np.copy(stderr)
row = len(values_tmp)
for i in range(row):
f.write(str(values_tmp[i]) + '\n')
f.write('\n')
f.write('end\n')
## 2.8 write to outCrossVali.dat
err_4ROM_vali_pre = err_4ROM_vali_nrm_pre*stderr+meanerr
Y_4ROM_vali_pre = preprocessor_result_vali-err_4ROM_vali_pre
f0 = open(self.outcrossvaliFile, 'w')
for i in range(len(indY)):
name = Yname_4ROM[i]
f0.write(name + '\t')
f0.write('\n')
for i in range(len(rndnumberlist)):
for j in range(len(indY)):
f0.write('{:11.4E}\t'.format(Y_4ROM_vali_pre[i,j]-Y_4ROM_vali[i, j]))
f0.write('\n')
f0.close()
## 2.9 update global variables
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [len(restlist), len(restlist), len(indS), len(indY)]
self.S_norm = S_4ROM_train_nrm
self.Y_norm = Y_4ROM_train_nrm
self.S = S_4ROM_train
self.Y = Y_4ROM_train
[self.stdS, self.stdY, self.meanS, self.meanY] = [stdS, stdY, meanS, meanY]
self.Sname = Sname_4ROM
self.Yname = Yname_4ROM
################## Step 3: write accuracy ##################
int_95 = self.percent2intervl(95) # 95% confidence interval
trainingoutput_file = self.outtrainingFile
trainingoutput_accuracy = trainingoutput_file.replace(".dat", "")+'_acc.dat'
with open(trainingoutput_accuracy, 'w') as f:
if cls_enabled == True:
f.write('Classifier Accuracy: \n')
f.write(str(acc_test_mbm) + '\n')
if ROM_enabled == True:
f.write('ROM Accuracy (95% confidence interval): \n')
for i in range(len(Yname_4ROM)):
f.write(Yname_4ROM[i])
f.write('\t' + str(int_95[i]) + '\n')
print('End of code\n')
def Generate_inprediction(self, numsample = None, listmin = None, listmax = None):
'''
The function generates the prediction input file by Latin Hypercube Sampling if it does not already exist
'''
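# Example usage (hypothetical sample count and bounds; listmin/listmax lengths must match the number of input variables):
#   self.Generate_inprediction(numsample = 1000, listmin = [3000.0, 0.5], listmax = [6000.0, 0.9])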
print('############################################################\
\nGenerate prediction input\
\n############################################################')
# find input variable list Sname
SYname, SYvalue = self.file_read(self.intrainingFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_col, Y_col] = [int(infovalue[0,0]), int(infovalue[0,1])]
Sname = copy.deepcopy(SYname[:S_col])
# check if exists
filename = self.inpredictionFile
Create_handle = True
if os.path.exists(filename):
query = query_yes_no('Prediction input file already exists on the local machine, do you want to overwrite it?')
Create_handle = query
if Create_handle == True:
numvar = len(Sname)
listvar = Sname
if len(listmin) != numvar or len(listmax) != numvar:
sys.exit('Code terminated: the lengths of variables/minimums/maximums do not match')
# LHS sampling
xlimits = np.transpose(np.vstack((listmin, listmax)))
sampling = LHS(xlimits = xlimits)
LHSvalue = sampling(numsample)
# write prediction input
with open(filename, 'w') as f:
for name in Sname:
f.write(name + '\t')
f.write('\n')
for i in range(numsample):
for j in range(numvar):
f.write('{:11.4E}\t'.format(LHSvalue[i, j]))
f.write('\n')
print("Created prediciton input file")
print('End of code\n')
def prediction(self, preprocessor_name = None, igfc = None):
'''
This function predicts the outputs and MSEs
based on the trained model
'''
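# Example usage (hypothetical):
#   self.prediction(preprocessor_name = 'NGFC_ccs')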
print('############################################################\
\nPredict Based on the trained model\
\n############################################################')
# # Step 0: check if outprediction.dat existing
# if os.path.exists(self.outpredictionFile):
# query = query_yes_no('prediction results already exist on the local machine, do you want to overwrite it?')
# if query == False: return
############# Step 1: Load the training data S, Y and prediction data X #############
print('Step 1: Load the training data S, Y and prediction input data X')
SYname, SYvalue = self.file_read(self.intrainingFile)
Xname, Xvalue = self.file_read(self.inpredictionFile)
infoname, infovalue = self.file_read(self.infoFile)
[S_row, Y_row, S_col, Y_col] = [len(SYvalue), len(SYvalue), int(infovalue[0,0]), int(infovalue[0,1])]
S = copy.deepcopy(SYvalue[:, :S_col])
Y = copy.deepcopy(SYvalue[:, S_col:])
X = copy.deepcopy(Xvalue)
Sname = copy.deepcopy(SYname[:S_col])
Yname = copy.deepcopy(SYname[S_col:])
[X_row, X_col] = X.shape
if X_col != S_col:
sys.exit('Code terminated: # of prediction input variables \
does not match # of given input variables')
############# Step 2: Load the trained models for classifier #############
trainingoutput_file = self.outtrainingFile
if not os.path.exists(trainingoutput_file):
sys.exit('Code terminated: trained model missing')
trainingoutput_file_cls = trainingoutput_file.replace(".dat", "")+'_cls.dat'
trainingoutput_file_cls_ROM = trainingoutput_file.replace(".dat", "")+'_cls_ROM.dat'
if os.path.exists(trainingoutput_file_cls) or os.path.exists(trainingoutput_file_cls_ROM):
cls_enabled = True
else:
cls_enabled = False
print('trained model has no classifier, continue')
if cls_enabled == True:
with open(trainingoutput_file_cls) as f:
lines = f.readlines()
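# two-pass parse: first locate the start/end line indices of each named block (w1, w2, b1, b2),
# then read the numeric values between those markers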
i = 0
for line in lines:
linestr = line.strip().split(' ')
if linestr[0] == 'w1':
w1_s_cls = i+1
if linestr[0] == 'w2':
w2_s_cls = i+1
w1_e_cls = i-2
if linestr[0] == 'b1':
b1_s_cls = i+1
w2_e_cls = i-2
if linestr[0] == 'b2':
b2_s_cls = i+1
b1_e_cls = i-2
if linestr[0] == 'end':
b2_e_cls = i-2
i += 1
i = 0
for line in lines:
linestr = line.strip().split(' ')
if i == w1_s_cls:
linenum = [float(lineele) for lineele in linestr]
w1_cls = np.array(linenum)
w1_row_cls = w1_e_cls-w1_s_cls+1
w1_col_cls = len(w1_cls)
if i > w1_s_cls and i <= w1_e_cls:
linenum = [float(lineele) for lineele in linestr]
w1_cls = np.append(w1_cls, linenum)
if i == w2_s_cls:
linenum = [float(lineele) for lineele in linestr]
w2_cls = np.array(linenum)
w2_row_cls = w2_e_cls-w2_s_cls+1
w2_col_cls = len(w2_cls)
if i > w2_s_cls and i <= w2_e_cls:
linenum = [float(lineele) for lineele in linestr]
w2_cls = np.append(w2_cls, linenum)
if i == b1_s_cls:
linenum = [float(lineele) for lineele in linestr]
b1_cls = np.array(linenum)
if i > b1_s_cls and i <= b1_e_cls:
linenum = [float(lineele) for lineele in linestr]
b1_cls = np.append(b1_cls, linenum)
if i == b2_s_cls:
linenum = [float(lineele) for lineele in linestr]
b2_cls = np.array(linenum)
if i > b2_s_cls and i <= b2_e_cls:
linenum = [float(lineele) for lineele in linestr]
b2_cls = np.append(b2_cls, linenum)
i += 1
w1_cls = np.reshape(w1_cls, (w1_row_cls, w1_col_cls))
w2_cls = np.reshape(w2_cls, (w2_row_cls, w2_col_cls))
with open(trainingoutput_file_cls_ROM) as f:
lines = f.readlines()
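# same two-pass marker parsing for the deeper cls_ROM file (w1-w5, b1-b5, plus meanS/meanY/stdS/stdY)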
i = 0
for line in lines:
linestr = line.strip().split(' ')
if linestr[0] == 'w1':
w1_s = i+1
if linestr[0] == 'w2':
w2_s = i+1
w1_e = i-2
if linestr[0] == 'w3':
w3_s = i+1
w2_e = i-2
if linestr[0] == 'w4':
w4_s = i+1
w3_e = i-2
if linestr[0] == 'w5':
w5_s = i+1
w4_e = i-2
if linestr[0] == 'b1':
b1_s = i+1
w5_e = i-2
if linestr[0] == 'b2':
b2_s = i+1
b1_e = i-2
if linestr[0] == 'b3':
b3_s = i+1
b2_e = i-2
if linestr[0] == 'b4':
b4_s = i+1
b3_e = i-2
if linestr[0] == 'b5':
b5_s = i+1
b4_e = i-2
if linestr[0] == 'meanS':
meanS_s = i+1
b5_e = i-2
if linestr[0] == 'meanY':
meanY_s = i+1
meanS_e = i-2
if linestr[0] == 'stdS':
stdS_s = i+1
meanY_e = i-2
if linestr[0] == 'stdY':
stdY_s = i+1
stdS_e = i-2
if linestr[0] == 'end':
stdY_e = i-2
i += 1
i = 0
for line in lines:
linestr = line.strip().split(' ')
if i == w1_s:
linenum = [float(lineele) for lineele in linestr]
w1 = np.array(linenum)
w1_row = w1_e-w1_s+1
w1_col = len(w1)
if i > w1_s and i <= w1_e:
linenum = [float(lineele) for lineele in linestr]
w1 = np.append(w1, linenum)
if i == w2_s:
linenum = [float(lineele) for lineele in linestr]
w2 = np.array(linenum)
w2_row = w2_e-w2_s+1
w2_col = len(w2)
if i > w2_s and i <= w2_e:
linenum = [float(lineele) for lineele in linestr]
w2 = np.append(w2, linenum)
if i == w3_s:
linenum = [float(lineele) for lineele in linestr]
w3 = np.array(linenum)
w3_row = w3_e-w3_s+1
w3_col = len(w3)
if i > w3_s and i <= w3_e:
linenum = [float(lineele) for lineele in linestr]
w3 = np.append(w3, linenum)
if i == w4_s:
linenum = [float(lineele) for lineele in linestr]
w4 = np.array(linenum)
w4_row = w4_e-w4_s+1
w4_col = len(w4)
if i > w4_s and i <= w4_e:
linenum = [float(lineele) for lineele in linestr]
w4 = np.append(w4, linenum)
if i == w5_s:
linenum = [float(lineele) for lineele in linestr]
w5 = np.array(linenum)
w5_row = w5_e-w5_s+1
w5_col = len(w5)
if i > w5_s and i <= w5_e:
linenum = [float(lineele) for lineele in linestr]
w5 = np.append(w5, linenum)
if i == b1_s:
linenum = [float(lineele) for lineele in linestr]
b1 = np.array(linenum)
if i > b1_s and i <= b1_e:
linenum = [float(lineele) for lineele in linestr]
b1 = np.append(b1, linenum)
if i == b2_s:
linenum = [float(lineele) for lineele in linestr]
b2 = np.array(linenum)
if i > b2_s and i <= b2_e:
linenum = [float(lineele) for lineele in linestr]
b2 = np.append(b2, linenum)
if i == b3_s:
linenum = [float(lineele) for lineele in linestr]
b3 = np.array(linenum)
if i > b3_s and i <= b3_e:
linenum = [float(lineele) for lineele in linestr]
b3 = np.append(b3, linenum)
if i == b4_s:
linenum = [float(lineele) for lineele in linestr]
b4 = np.array(linenum)
if i > b4_s and i <= b4_e:
linenum = [float(lineele) for lineele in linestr]
b4 = np.append(b4, linenum)
if i == b5_s:
linenum = [float(lineele) for lineele in linestr]
b5 = np.array(linenum)
if i > b5_s and i <= b5_e:
linenum = [float(lineele) for lineele in linestr]
b5 = np.append(b5, linenum)
if i == meanS_s:
linenum = [float(lineele) for lineele in linestr]
meanS = np.array(linenum)
if i > meanS_s and i <= meanS_e:
linenum = [float(lineele) for lineele in linestr]
meanS = np.append(meanS, linenum)
if i == meanY_s:
linenum = [float(lineele) for lineele in linestr]
meanY = np.array(linenum)
if i > meanY_s and i <= meanY_e:
linenum = [float(lineele) for lineele in linestr]
meanY = np.append(meanY, linenum)
if i == stdS_s:
linenum = [float(lineele) for lineele in linestr]
stdS = np.array(linenum)
if i > stdS_s and i <= stdS_e:
linenum = [float(lineele) for lineele in linestr]
stdS = np.append(stdS, linenum)
if i == stdY_s:
linenum = [float(lineele) for lineele in linestr]
stdY = np.array(linenum)
if i > stdY_s and i <= stdY_e:
linenum = [float(lineele) for lineele in linestr]
stdY = np.append(stdY, linenum)
i += 1
del w1_s, w1_e, w2_s, w2_e, w3_s, w3_e, w4_s, w4_e, w5_s, w5_e, \
b1_s, b1_e, b2_s, b2_e, b3_s, b3_e, b4_s, b4_e, b5_s, b5_e, \
meanS_s, meanS_e, meanY_s, meanY_e, stdS_s, stdS_e, stdY_s, stdY_e
w1 = np.reshape(w1, (w1_row, w1_col))
w2 = np.reshape(w2, (w2_row, w2_col))
w3 = np.reshape(w3, (w3_row, w3_col))
w4 = np.reshape(w4, (w4_row, w4_col))
w5 = np.reshape(w5, (w5_row, w5_col))
############# Step 3: ROM prediction for classifier #############
X_nrm = (X - np.tile(meanS, [X_row, 1]))/np.tile(stdS, [X_row, 1])
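# manual NumPy forward pass through the 5-layer cls_ROM network
# (sigmoid hidden layers, linear output), mirroring the model trained in buildROM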
for j in range(X_row):
inputX = X_nrm[j,:]
m1 = np.matmul(inputX,w1)
m1b = np.add(m1,b1)
m1ba = np.zeros(len(m1b))
for i in range(len(m1b)):
m1ba[i] = 1.0/(1+math.exp(-m1b[i]))
m2 = np.matmul(m1ba,w2)
m2b = np.add(m2,b2)
m2ba = np.zeros(len(m2b))
for i in range(len(m2b)):
m2ba[i] = 1.0/(1+math.exp(-m2b[i]))
m3 = np.matmul(m2ba,w3)
m3b = np.add(m3,b3)
m3ba = np.zeros(len(m3b))
for i in range(len(m3b)):
m3ba[i] = 1.0/(1+math.exp(-m3b[i]))
m4 = np.matmul(m3ba,w4)
m4b = np.add(m4,b4)
m4ba = np.zeros(len(m4b))
for i in range(len(m4b)):
m4ba[i] = 1.0/(1+math.exp(-m4b[i]))
m5 = np.matmul(m4ba,w5)
m5b = np.add(m5,b5)
m5ba = np.zeros(len(m5b))
for i in range(len(m5b)):
m5ba[i] = m5b[i]
outputX_nrm = m5ba
outputX = m5ba*stdY+meanY
if j == 0:
Xy_nrm_4cls = outputX_nrm
Xy_4cls = outputX
else:
Xy_nrm_4cls = np.vstack((Xy_nrm_4cls, outputX_nrm))
Xy_4cls = np.vstack((Xy_4cls, outputX))
############# Step 4: preprocessor prediction (SimulationStatus) for classifier #############
succs_Xy = np.zeros((X.shape[0],1),dtype=np.float64)
try: # find index of preprocessor inputs
index1 = Xname.index("Average_CurrentDensity")
index2 = Xname.index("Stack_Fuel_Utilization")
index3 = Xname.index("Stack_Oxidant_Utilization")
index4 = Xname.index("OxygenToCarbon_Ratio")
index5 = Xname.index("Internal_Reforming")
index6 = Xname.index("Oxidant_Recirculation")
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index7= Xname.index("PreReform")
except:
index7 = -1
PreReform_fix=0.2 #[]
try:
index8= Xname.index("cellsize")
except:
index8 = -1
cellsize_fix=550 #[cm2]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
try:
index9 = Xname.index("VGRRate")
index11 = Xname.index("VGRH2OPassRate")
index12 = Xname.index("VGRCO2CaptureRate")
index13 = Xname.index("VGRH2PassRate")
index14 = Xname.index("VGRCOConvertRate")
except:
sys.exit('Code terminated: "preprocessor" input not defined')
# find value of preprocessor inputs
for i in range(X.shape[0]):
J = X[i,index1]/10.0 # mA/cm2
FU = X[i,index2]
AU = X[i,index3]
OCR = X[i,index4]
IR = X[i,index5]
Arec = X[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = X[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = X[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
VGR = X[i,index9]
H2OCap = 1-X[i,index11]
CO2Cap = X[i,index12]
H2Cap = 1-X[i,index13]
WGS = X[i,index14]
if preprocessor_name is None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
succs_Xy[i,0] = succ
mean_succs = succs_Xy.mean(axis=0)
std_succs = succs_Xy.std(axis=0)
succs_Xy_nrm = (succs_Xy-mean_succs)/std_succs
############# Step 5: perform prediction of SimulationStatus #############
X_nrm_4cls = np.concatenate((X_nrm, Xy_nrm_4cls, succs_Xy_nrm),axis=1)
for j in range(X_row):
inputX_cls = X_nrm_4cls[j,:]
m1_cls = np.matmul(inputX_cls,w1_cls)
m1b_cls = np.add(m1_cls,b1_cls)
m1ba_cls = np.zeros(len(m1b_cls))
for i in range(len(m1b_cls)):
m1ba_cls[i] = 1.0/(1+math.exp(-m1b_cls[i]))
m2_cls = np.matmul(m1ba_cls,w2_cls)
m2b_cls = np.add(m2_cls,b2_cls)
m2ba_cls = np.zeros(len(m2b_cls))
for i in range(len(m2b_cls)):
m2ba_cls[i] = m2b_cls[i]
outputX_cls = m2ba_cls
if j == 0:
Xy_cls = outputX_cls
else:
Xy_cls = np.vstack((Xy_cls, outputX_cls))
# convert class scores to predicted labels (0/1) via argmax
Xy_cls = np.argmax(Xy_cls, 1)
############# Step 6: Load the trained model for ROM #############
print('Step 6: Load the trained model (outtrainingFile)')
with open(self.outtrainingFile) as f:
lines = f.readlines()
i = 0
for line in lines:
linestr = line.strip().split(' ')
if linestr[0] == 'w1':
w1_s = i+1
if linestr[0] == 'w2':
w2_s = i+1
w1_e = i-2
if linestr[0] == 'w3':
w3_s = i+1
w2_e = i-2
if linestr[0] == 'w4':
w4_s = i+1
w3_e = i-2
if linestr[0] == 'w5':
w5_s = i+1
w4_e = i-2
if linestr[0] == 'b1':
b1_s = i+1
w5_e = i-2
if linestr[0] == 'b2':
b2_s = i+1
b1_e = i-2
if linestr[0] == 'b3':
b3_s = i+1
b2_e = i-2
if linestr[0] == 'b4':
b4_s = i+1
b3_e = i-2
if linestr[0] == 'b5':
b5_s = i+1
b4_e = i-2
if linestr[0] == 'meanS':
meanS_s = i+1
b5_e = i-2
if linestr[0] == 'meanY':
meanY_s = i+1
meanS_e = i-2
if linestr[0] == 'stdS':
stdS_s = i+1
meanY_e = i-2
if linestr[0] == 'stdY':
stdY_s = i+1
stdS_e = i-2
if linestr[0] == 'meanerr':
meanerr_s = i+1
stdY_e = i-2
if linestr[0] == 'stderr':
stderr_s = i+1
meanerr_e = i-2
if linestr[0] == 'end':
stderr_e = i-2
i += 1
i = 0
for line in lines:
linestr = line.strip().split(' ')
if i == w1_s:
linenum = [float(lineele) for lineele in linestr]
w1 = np.array(linenum)
w1_row = w1_e-w1_s+1
w1_col = len(w1)
if i > w1_s and i <= w1_e:
linenum = [float(lineele) for lineele in linestr]
w1 = np.append(w1, linenum)
if i == w2_s:
linenum = [float(lineele) for lineele in linestr]
w2 = np.array(linenum)
w2_row = w2_e-w2_s+1
w2_col = len(w2)
if i > w2_s and i <= w2_e:
linenum = [float(lineele) for lineele in linestr]
w2 = np.append(w2, linenum)
if i == w3_s:
linenum = [float(lineele) for lineele in linestr]
w3 = np.array(linenum)
w3_row = w3_e-w3_s+1
w3_col = len(w3)
if i > w3_s and i <= w3_e:
linenum = [float(lineele) for lineele in linestr]
w3 = np.append(w3, linenum)
if i == w4_s:
linenum = [float(lineele) for lineele in linestr]
w4 = np.array(linenum)
w4_row = w4_e-w4_s+1
w4_col = len(w4)
if i > w4_s and i <= w4_e:
linenum = [float(lineele) for lineele in linestr]
w4 = np.append(w4, linenum)
if i == w5_s:
linenum = [float(lineele) for lineele in linestr]
w5 = np.array(linenum)
w5_row = w5_e-w5_s+1
w5_col = len(w5)
if i > w5_s and i <= w5_e:
linenum = [float(lineele) for lineele in linestr]
w5 = np.append(w5, linenum)
if i == b1_s:
linenum = [float(lineele) for lineele in linestr]
b1 = np.array(linenum)
if i > b1_s and i <= b1_e:
linenum = [float(lineele) for lineele in linestr]
b1 = np.append(b1, linenum)
if i == b2_s:
linenum = [float(lineele) for lineele in linestr]
b2 = np.array(linenum)
if i > b2_s and i <= b2_e:
linenum = [float(lineele) for lineele in linestr]
b2 = np.append(b2, linenum)
if i == b3_s:
linenum = [float(lineele) for lineele in linestr]
b3 = np.array(linenum)
if i > b3_s and i <= b3_e:
linenum = [float(lineele) for lineele in linestr]
b3 = np.append(b3, linenum)
if i == b4_s:
linenum = [float(lineele) for lineele in linestr]
b4 = np.array(linenum)
if i > b4_s and i <= b4_e:
linenum = [float(lineele) for lineele in linestr]
b4 = np.append(b4, linenum)
if i == b5_s:
linenum = [float(lineele) for lineele in linestr]
b5 = np.array(linenum)
if i > b5_s and i <= b5_e:
linenum = [float(lineele) for lineele in linestr]
b5 = np.append(b5, linenum)
if i == meanS_s:
linenum = [float(lineele) for lineele in linestr]
meanS = np.array(linenum)
if i > meanS_s and i <= meanS_e:
linenum = [float(lineele) for lineele in linestr]
meanS = np.append(meanS, linenum)
if i == meanY_s:
linenum = [float(lineele) for lineele in linestr]
meanY = np.array(linenum)
if i > meanY_s and i <= meanY_e:
linenum = [float(lineele) for lineele in linestr]
meanY = np.append(meanY, linenum)
if i == stdS_s:
linenum = [float(lineele) for lineele in linestr]
stdS = np.array(linenum)
if i > stdS_s and i <= stdS_e:
linenum = [float(lineele) for lineele in linestr]
stdS = np.append(stdS, linenum)
if i == stdY_s:
linenum = [float(lineele) for lineele in linestr]
stdY = np.array(linenum)
if i > stdY_s and i <= stdY_e:
linenum = [float(lineele) for lineele in linestr]
stdY = np.append(stdY, linenum)
# two more variables meanerr, stderr
if i == meanerr_s:
linenum = [float(lineele) for lineele in linestr]
meanerr = np.array(linenum)
if i > meanerr_s and i <= meanerr_e:
linenum = [float(lineele) for lineele in linestr]
meanerr = np.append(meanerr, linenum)
if i == stderr_s:
linenum = [float(lineele) for lineele in linestr]
stderr = np.array(linenum)
if i > stderr_s and i <= stderr_e:
linenum = [float(lineele) for lineele in linestr]
stderr = np.append(stderr, linenum)
i += 1
w1 = np.reshape(w1, (w1_row, w1_col))
w2 = np.reshape(w2, (w2_row, w2_col))
w3 = np.reshape(w3, (w3_row, w3_col))
w4 = np.reshape(w4, (w4_row, w4_col))
w5 = np.reshape(w5, (w5_row, w5_col))
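# The trained-model file parsed above is a simple text format: a marker line
# ("w1", ..., "b5", "meanS", "meanY", "stdS", "stdY", "meanerr", "stderr",
# then "end") followed by whitespace-separated numeric rows. A compact reader
# for one block (a sketch; `read_block` is a helper defined here, with
# `lines` being the stripped file lines):
#   def read_block(lines, start, end):
#       rows = [[float(v) for v in lines[k].split()] for k in range(start, end + 1)]
#       return np.array(rows)
#   # e.g. w1 = read_block(lines, w1_s, w1_e) replaces one if-chain per marker.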
############# Step 7: perform prediction of other variables #############
# Normalize S, Y, X again
S_nrm = (S - np.tile(meanS, [S_row, 1]))/np.tile(stdS, [S_row, 1])
Y_nrm = (Y - np.tile(meanY, [Y_row, 1]))/np.tile(stdY, [Y_row, 1])
X_nrm = (X - np.tile(meanS, [X_row, 1]))/np.tile(stdS, [X_row, 1])
for j in range(X_row):
inputX = X_nrm[j,:]
m1 = np.matmul(inputX,w1)
m1b = np.add(m1,b1)
m1ba = np.zeros(len(m1b))
for i in range(len(m1b)):
m1ba[i] = 1.0/(1+math.exp(-m1b[i]))
m2 = np.matmul(m1ba,w2)
m2b = np.add(m2,b2)
m2ba = np.zeros(len(m2b))
for i in range(len(m2b)):
m2ba[i] = 1.0/(1+math.exp(-m2b[i]))
m3 = np.matmul(m2ba,w3)
m3b = np.add(m3,b3)
m3ba = np.zeros(len(m3b))
for i in range(len(m3b)):
m3ba[i] = 1.0/(1+math.exp(-m3b[i]))
m4 = np.matmul(m3ba,w4)
m4b = np.add(m4,b4)
m4ba = np.zeros(len(m4b))
for i in range(len(m4b)):
m4ba[i] = 1.0/(1+math.exp(-m4b[i]))
m5 = np.matmul(m4ba,w5)
m5b = np.add(m5,b5)
m5ba = np.zeros(len(m5b))
for i in range(len(m5b)):
m5ba[i] = m5b[i]
outputX_nrm = m5ba
outputX = m5ba*stderr+meanerr
if j == 0:
err_nrm = outputX_nrm
err = outputX
else:
err_nrm = np.vstack((err_nrm, outputX_nrm))
err = np.vstack((err, outputX))
############# Step 8: preprocessor prediction for ROM #############
preprocessor_result = np.zeros((X.shape[0], 20),dtype=np.float64)
# find index of preprocessor inputs
try:
index1 = Xname.index("Average_CurrentDensity")
index2 = Xname.index("Stack_Fuel_Utilization")
index3 = Xname.index("Stack_Oxidant_Utilization")
index4 = Xname.index("OxygenToCarbon_Ratio")
index5 = Xname.index("Internal_Reforming")
index6 = Xname.index("Oxidant_Recirculation")
except:
sys.exit('Code terminated: "preprocessor" input not defined')
try:
index7= Xname.index("PreReform")
except:
index7 = -1
PreReform_fix=0.2 #[]
try:
index8= Xname.index("cellsize")
except:
index8 = -1
cellsize_fix=550 #[cm2]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
try:
index9 = Xname.index("VGRRate")
index11 = Xname.index("VGRH2OPassRate")
index12 = Xname.index("VGRCO2CaptureRate")
index13 = Xname.index("VGRH2PassRate")
index14 = Xname.index("VGRCOConvertRate")
except:
sys.exit('Code terminated: "preprocessor" input not defined')
for i in range(X.shape[0]):
J = X[i,index1]/10.0 # mA/cm2
FU = X[i,index2]
AU = X[i,index3]
OCR = X[i,index4]
IR = X[i,index5]
Arec = X[i,index6]
if index7 == -1:
PreReform = PreReform_fix
else:
PreReform = X[i,index7]
if index8 == -1:
cellsize = cellsize_fix # cm2
else:
cellsize = X[i,index8]
if preprocessor_name == 'NGFC_ccs_vgr' or preprocessor_name == 'IGFC_ccs_vgr':
VGR = X[i,index9]
H2OCap = 1-X[i,index11]
CO2Cap = X[i,index12]
H2Cap = 1-X[i,index13]
WGS = X[i,index14]
if preprocessor_name is None or preprocessor_name == 'NGFC_ccs':
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'NGFC_nocc':
FuelOut, AirOut, FuelIn,succ=self.NGFC_nocc(J,FU,AU,OCR,IR,Arec,PreReform,cellsize)
elif preprocessor_name == 'IGFC_ccs': # IGFC: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs(J,FU,AU,OCR,IR,Arec,PreReform,cellsize,igfc)
elif preprocessor_name == 'NGFC_ccs_vgr': # NGFC CCS VGR
FuelOut, AirOut, FuelIn,succ=self.NGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize)
elif preprocessor_name == 'IGFC_ccs_vgr': # IGFC VGR: conventional, Enhanced, Catalytic
FuelOut, AirOut, FuelIn,succ=self.IGFC_ccs_vgr(J,FU,AU,OCR,IR,Arec,PreReform,VGR,H2OCap,CO2Cap,H2Cap,WGS,cellsize,igfc)
else:
sys.exit('Code terminated: the selected "preprocessor" cannot be found')
preprocessor_result[i,0] = np.sum(FuelOut)
preprocessor_result[i,1] = FuelOut[7]
preprocessor_result[i,2] = FuelOut[0]
preprocessor_result[i,3] = FuelOut[6]
preprocessor_result[i,4] = FuelOut[2]
preprocessor_result[i,5] = FuelOut[5]
preprocessor_result[i,6] = FuelOut[4]
preprocessor_result[i,7] = np.sum(AirOut)
preprocessor_result[i,8] = AirOut[3]
preprocessor_result[i,9] = AirOut[4]
preprocessor_result[i,10] = AirOut[0]
preprocessor_result[i,11] = AirOut[2]
preprocessor_result[i,12] = AirOut[1]
preprocessor_result[i,13] = np.sum(FuelIn)
preprocessor_result[i,14] = FuelIn[7]
preprocessor_result[i,15] = FuelIn[0]
preprocessor_result[i,16] = FuelIn[6]
preprocessor_result[i,17] = FuelIn[2]
preprocessor_result[i,18] = FuelIn[5]
preprocessor_result[i,19] = FuelIn[4]
############# Step 9: Final prediction for ROM #############
Xy = preprocessor_result - err
Xy_nrm = (Xy - np.tile(meanY, [X_row, 1]))/np.tile(stdY, [X_row, 1])
# Copy to Global
[self.S_row, self.Y_row, self.S_col, self.Y_col] = [S_row, Y_row, S_col, Y_col]
self.S_norm = S_nrm
self.Y_norm = Y_nrm
self.S = S
self.Y = Y
[self.stdS, self.stdY] = [stdS, stdY]
self.X = X
self.Xy = Xy
self.X_norm = X_nrm
self.Xy_norm = Xy_nrm
self.Sname = Sname
self.Yname = Yname
############# Step 10: Write the predictions #############
print('Step 10: Write the predictions')
with open(self.outpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
if cls_enabled == True:
f.write('SimulationStatus\t')
for i in range(Y_col):
f.write(Yname[i] + '\t')
f.write('\n')
for i in range(X_row):
# write input variables
for j in range(S_col):
f.write('{:11.4E}\t'.format(X[i, j]))
# write simulation status
if cls_enabled == True:
f.write('{:11.4E}\t'.format(Xy_cls[i]))
# write output variables
for j in range(Y_col):
f.write('{:11.4E}\t'.format(Xy[i, j]))
f.write('\n')
print('End of code\n')
def percent2intervl(self, percentage, var = None):
print('############################################################\
\nPercentage to Confidence Interval\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence interval
interval_all = np.zeros((len(Yname),),dtype=np.float64)
for i in range(len(Yname)):
err = np.sort(ERR[:, i])
N = len(err)
n = (N-1)*percentage/100.0 + 1
if n == 1:
interval = err[0]
elif n == N:
interval = err[N-1]
else:
k = int(n)
d = n-k
interval = err[k-1]+d*(err[k]-err[k-1])
interval_all[i] = interval
if var is None:
print('For "' + str(Yname[i]) + '":'
+ '[' + Yunit[i] + ']'
+' \n\t'
+ str(percentage) + '% confidence interval is '
+ '\u00B1' + '{:11.4E}\t'.format(interval))
elif Yname[i] == var:
print('For "' + str(Yname[i]) + '":'
+ '[' + Yunit[i] + ']'
+' \n\t'
+ str(percentage) + '% confidence interval is '
+ '\u00B1' + '{:11.4E}\t'.format(interval))
elif var not in Yname:
print('The given variable cannot be found')
print('End of code\n')
return(interval_all)
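# The interpolation above is the "linear" percentile rule: for N sorted
# errors err[0..N-1], the one-based rank is n = (N-1)*percentage/100 + 1 and
# the interval is err[k-1] + d*(err[k] - err[k-1]) with k = floor(n), d = n-k.
# This matches NumPy's default percentile, so the per-variable computation
# reduces (as a sketch) to:
#   interval = np.percentile(ERR[:, i], percentage)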
def intervl2percent(self, interval, var = None):
print('############################################################\
\nConfidence Interval to Percentage\
\n############################################################')
# load cross validation results
Yname, ERR = self.file_read(self.outcrossvaliFile)
# find the units
names_input, units_input, names_output, units_output = self.variable_options()
Yunit = []
for i in range(len(Yname)):
tempindex = names_output.index(Yname[i])
tempunit = units_output[tempindex]
Yunit.append(tempunit)
# compute confidence percentage
percentage_all = np.zeros((len(Yname),),dtype=np.float64)
for i in range(len(Yname)):
if var == Yname[i]:
err = np.sort(ERR[:, i])
N = len(err)
if interval <= err[0]:
percentage = 0
elif interval >= err[N-1]:
percentage = 1
else:
result = np.where(err>interval)
index = result[0]
k = index[0]
percentage = ((interval-err[k-1])/(err[k]-err[k-1])+k-1)/float(N-1)
percentage_all[i] = percentage
print('For "' + str(Yname[i]) + '": '
+ '[' + Yunit[i] + ']'
+ '\n\t\u00B1' + str(interval)
+ ' interval has a confidence of ' + str(round(percentage*100, 2)) + '%')
elif var not in Yname:
print('The given variable cannot be found')
print('End of code\n')
return(percentage_all)
def plot_contour_2D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 2D contour of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets, 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate a temporary prediction input file for the contour plot
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_DNN.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_DNN.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
plt.figure(figsize=(17.5,6))
plt.subplot(1, 2, 1)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x1, y1, z1, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x1, y1, z1, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
#plt.colorbar().set_label(label='a label',size=15,weight='bold')
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.subplot(1, 2, 2)
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
C = plt.tricontour(x2, y2, z2, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x2, y2, z2, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.xlim((min(min(x1), min(x2)), max(max(x1), max(x2))))
plt.ylim((min(min(y1), min(y2)), max(max(y1), max(y2))))
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
plt.figure(figsize=(8,6))
C = plt.tricontour(x, y, z, 10, linewidths = 0.5, colors = 'k')
Cf = plt.tricontourf(x, y, z, 20, alpha = 0.75)
#plt.clabel(C, inline = True, fontsize = 10)
plt.colorbar(orientation = 'vertical', shrink = 1).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption == True:
figurename = '2D_contour.png'
plt.savefig(figurename)
def plot_contour_3D(self, xvariable, yvariable, zvariable,
pltoption = 0, saveoption = False):
'''
The function plots 3D surfaces of designs and responses
pltoption = 0: plot both training and prediction sets; 1: plot only training sets, 2: plot only prediction sets
'''
# check if the given variables are in the list
if (xvariable not in self.Sname) or (yvariable not in self.Sname) or (zvariable not in self.Yname):
sys.exit('Code terminated: variable index out of bound')
v1 = self.Sname.index(xvariable)+1
v2 = self.Sname.index(yvariable)+1
v3 = self.Yname.index(zvariable)+1
option = int(pltoption)
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_input.index(yvariable)
yunit = units_input[tempindex]
tempindex = names_output.index(zvariable)
zunit = units_output[tempindex]
# Generate a temporary prediction input file for the contour plot
if option == 0 or option == 2:
Xname, Xvalue = self.file_read(self.inpredictionFile)
Xvalue_mean = np.mean(Xvalue, axis = 0)
[X_row, X_col] = Xvalue.shape
inpredictionFile_orig = self.inpredictionFile
outpredictionFile_orig = self.outpredictionFile
self.inpredictionFile = self.work_path + '/inPrediction_contour_kriging.dat'
self.outpredictionFile = self.work_path + '/outPrediction_contour_kriging.dat'
with open(self.inpredictionFile, 'w') as f:
for name in Xname:
f.write(name + '\t')
f.write('\n')
for i in range(X_row):
for j in range(X_col):
if (j+1) == v1 or (j+1) == v2:
f.write('{:11.4E}\t'.format(Xvalue[i, j]))
else:
f.write('{:11.4E}\t'.format(Xvalue_mean[j]))
f.write('\n')
self.prediction()
os.remove(self.inpredictionFile)
os.remove(self.outpredictionFile)
self.inpredictionFile = inpredictionFile_orig
self.outpredictionFile = outpredictionFile_orig
if option == 0: # Default: plot both training and prediction sets
x1 = self.S[:, v1-1]
y1 = self.S[:, v2-1]
z1 = self.Y[:, v3-1]
x2 = self.X[:, v1-1]
y2 = self.X[:, v2-1]
z2 = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(18.5,6))
ax = fig.add_subplot(1, 2, 1, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x1, y1, z1, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
ax = fig.add_subplot(1, 2, 2, projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x2, y2, z2, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 1: # plot training sets
x = self.S[:, v1-1]
y = self.S[:, v2-1]
z = self.Y[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Training sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
elif option == 2: # plot prediction sets
x = self.X[:, v1-1]
y = self.X[:, v2-1]
z = self.Xy[:, v3-1]
xname = self.Sname[v1-1]
yname = self.Sname[v2-1]
zname = self.Yname[v3-1]
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection = '3d')
ax.tick_params(labelsize=12)
surf = ax.plot_trisurf(x, y, z, color = 'k', cmap = plt.get_cmap('rainbow'))
fig.colorbar(surf, orientation = 'vertical', shrink = 0.8).ax.tick_params(labelsize=12)
plt.xlabel(xname+', ['+xunit+']', fontsize = 12)
plt.ylabel(yname+', ['+yunit+']', fontsize = 12)
plt.title('Prediction sets: '+zname+', ['+zunit+']', fontsize = 12)
plt.show()
# save option
if saveoption == True:
figurename = '3D_contour.png'
plt.savefig(figurename)
def plot_box(self, xvariable, yvariable, saveoption = False):
'''
The function generates a box plot, which can help to perform sensitivity studies
'''
# convert to pandas DataFrames
S = pd.DataFrame(data = self.S, columns = self.Sname, dtype = 'float')
Y = pd.DataFrame(data = self.Y, columns = self.Yname, dtype = 'float')
# find the units for x,y,z variables
names_input, units_input, names_output, units_output = self.variable_options()
tempindex = names_input.index(xvariable)
xunit = units_input[tempindex]
tempindex = names_output.index(yvariable)
yunit = units_output[tempindex]
# generate box plot data
x = S[[xvariable]]
y = Y[[yvariable]]
min_x = min(x.values)
max_x = max(x.values)
x = round((x-min_x)/((max_x-min_x)/9), 0)*((max_x-min_x)/9)+min_x
x = round(x, 2)
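# The two lines above snap x onto 10 evenly spaced levels between its min and
# max (step = range/9): round((x - min)/step) gives a level index that is
# mapped back to data units, so the box plot shows at most 10 x-categories.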
#xy = pd.concat([x, y], axis = 1, sort = False)
#print(x.sort_values(by = ['Average_CurrentDensity']))
#print(xy)
# box plot
plt.figure(figsize=(18.5,6))
sns.set_context("paper", font_scale=3)
sns.set_style('ticks')
bplot = sns.boxplot(y=y[yvariable], x=x[xvariable],
color = 'yellow', width = 0.5)
bplot = sns.swarmplot(y=y[yvariable], x=x[xvariable],
color = 'black', alpha = 0.5)
sns.axes_style()
bplot.axes.set_title('Design-response sites', fontsize = 25)
bplot.set_xlabel(xvariable+', ['+xunit+']', fontsize = 25)
bplot.set_ylabel(yvariable+', ['+yunit+']', fontsize = 25)
bplot.tick_params(labelsize = 25)
plt.show()
# save option
if saveoption == True:
figurename = 'boxplot.png'
plt.savefig(figurename)
| 50.858788 | 287 | 0.504572 | 99,818 | 839,170 | 4.043629 | 0.01639 | 0.015381 | 0.023512 | 0.025598 | 0.947684 | 0.939793 | 0.934167 | 0.927433 | 0.920905 | 0.915246 | 0 | 0.052556 | 0.342382 | 839,170 | 16,499 | 288 | 50.86187 | 0.678847 | 0.267295 | 0 | 0.898191 | 0 | 0.000176 | 0.052607 | 0.006889 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007852 | false | 0.004146 | 0.002647 | 0 | 0.013057 | 0.022938 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
825943ce4ce97340635af970c0bb7e59a94fdc72 | 30,091 | py | Python | model.py | cjshui/WADN | fcb4afed33bfd3d5d54d0542e49b11d6ebb21d09 | ["MIT"] | 8 | 2021-07-26T22:47:33.000Z | 2022-01-05T20:18:15.000Z | model.py | cjshui/WADN | fcb4afed33bfd3d5d54d0542e49b11d6ebb21d09 | ["MIT"] | 1 | 2021-09-17T18:17:56.000Z | 2021-09-17T18:17:56.000Z | model.py | choderalab/sams_dunbrack | fcb4afed33bfd3d5d54d0542e49b11d6ebb21d09 | ["MIT"] | null | null | null |
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd
from sklearn.metrics import confusion_matrix
from module import L2ProjFunction, GradientReversalLayer
import utils
########## Some components ##########
class MLPNet(nn.Module):
def __init__(self, configs):
"""
MLP network with ReLU
"""
super().__init__()
self.input_dim = configs["input_dim"]
self.num_hidden_layers = len(configs["hidden_layers"])
self.num_neurons = [self.input_dim] + configs["hidden_layers"]
# Parameters of hidden, fully-connected layers
self.hiddens = nn.ModuleList(
[
nn.Linear(self.num_neurons[i], self.num_neurons[i + 1])
for i in range(self.num_hidden_layers)
]
)
self.final = nn.Linear(self.num_neurons[-1], configs["output_dim"])
self.dropout = nn.Dropout(p=configs["drop_rate"]) # drop probability
self.process_final = configs["process_final"]
def forward(self, x):
for hidden in self.hiddens:
x = F.relu(hidden(self.dropout(x)))
if self.process_final:
return F.relu(self.final(self.dropout(x)))
else:
# no dropout or transform
return self.final(x)
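# Usage sketch for MLPNet (the configuration values below are hypothetical;
# only the keys come from this module):
#   configs = {"input_dim": 16, "hidden_layers": [64, 32],
#              "output_dim": 4, "drop_rate": 0.1, "process_final": False}
#   net = MLPNet(configs)
#   logits = net(torch.randn(8, 16))   # -> tensor of shape (8, 4)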
class ConvNet(nn.Module):
def __init__(self, configs):
"""
Feature extractor for the image (digits) datasets
"""
super().__init__()
self.channels = configs["channels"] # number of channels
self.num_conv_layers = len(configs["conv_layers"])
self.num_channels = [self.channels] + configs["conv_layers"]
# Parameters of hidden, cpcpcp, feature learning component.
self.convs = nn.ModuleList(
[
nn.Conv2d(self.num_channels[i], self.num_channels[i + 1], kernel_size=3)
for i in range(self.num_conv_layers)
]
)
self.dropout = nn.Dropout(p=configs["drop_rate"]) # drop probability
def forward(self, x):
dropout = self.dropout
for conv in self.convs:
x = F.max_pool2d(F.relu(conv(dropout(x))), 2, 2, ceil_mode=True)
x = x.view(x.size(0), -1) # flatten
return x
class MLPNet_digits(nn.Module):
def __init__(self, configs):
"""
MLP network with ReLU
"""
super().__init__()
self.input_dim = configs["input_dim"]
self.num_hidden_layers = len(configs["hidden_layers"])
self.num_neurons = [self.input_dim] + configs["hidden_layers"]
# Parameters of hidden, fully-connected layers
self.hiddens = nn.ModuleList(
[
nn.Linear(self.num_neurons[i], self.num_neurons[i + 1])
for i in range(self.num_hidden_layers)
]
)
self.final = nn.Linear(self.num_neurons[-1], configs["output_dim"])
self.dropout = nn.Dropout(p=configs["drop_rate"]) # drop probability
self.process_final = configs["process_final"]
def forward(self, x):
for hidden in self.hiddens:
x = F.relu(hidden(self.dropout(x)))
latent_x = x
if self.process_final:
return latent_x, F.relu(self.final(self.dropout(x)))
else:
# no dropout or transform
return latent_x, self.final(x)
class WarnBase_digits(nn.Module):
def __init__(self, configs):
"""
Domain AggRegation Network.
"""
super().__init__()
self.num_src_domains = configs["num_src_domains"]
# define the number of classes
self.num_class = configs["num_src_classes"]
self.fea_dim = configs["feauture_dim"]
# Gradient reversal layer.
self.grl = GradientReversalLayer.apply
# self.mode = configs["mode"]
self.mu = configs["mu"]
self.gp_coef = configs["gp_coef"]
self.sem_coef = configs["sem_coef"]
self.gamma = configs["gamma"]
# option about semantic matching
self.semantic = True
# define the confusion matrix for every source domain
self.C = np.zeros([self.num_class, self.num_class, self.num_src_domains])
# define the label re-weights alpha (T-task times num_classes)
self.lam = np.ones([self.num_src_domains]) / self.num_src_domains
# defining the source centroids (num_src_domains x num_class x fea_dim)
self.src_centroid = torch.zeros([self.num_src_domains, self.num_class, self.fea_dim])
self.tar_centroid = torch.zeros([self.num_class, self.fea_dim])
self.decay = 0.3
# mse loss for semantic losses
self.MSELoss = nn.MSELoss(reduction="none")
# define the target prediction output distribution (per-class prediction counts)
self.tar_pred = np.zeros([self.num_class])
def forward(self, sinputs, soutputs, tinputs, alpha, src_truth_label):
"""
:param sinputs: A list of k inputs from k source domains.
:param soutputs: A list of k outputs from k source domains.
:param tinputs: Input from the target domain.
:estimated_tar_dis: Estimated target label distribution (this is different from target prediction distribution)
:return: tuple(aggregated loss, domain weights)
"""
# Compute features
s_features = []
s_semantic = []
for dom_idx in range(self.num_src_domains):
s_features.append(self.feature_net(sinputs[dom_idx]))
s_semantic.append(self.class_net(s_features[dom_idx])[0])
t_features = self.feature_net(tinputs)
t_semantic = self.class_net(t_features)[0]
# Classification probabilities on k source domains
logprobs = []
for dom_idx in range(self.num_src_domains):
with torch.no_grad():
# source prediction error
src_pred = torch.argmax(self.class_net(s_features[dom_idx])[1], 1).cpu().numpy()
tar_pred = torch.argmax(self.class_net(t_features)[1], 1).cpu().numpy()
src_true = soutputs[dom_idx].cpu().numpy()
# un-normalized
self.C[:, :, dom_idx] = confusion_matrix(
src_true, src_pred, labels=list(range(self.num_class))
)
for cls_idx in range(self.num_class):
self.tar_pred[cls_idx] = np.count_nonzero(tar_pred == cls_idx)
logprobs.append(F.log_softmax(self.class_net(s_features[dom_idx])[1], dim=1))
# weighted prediction loss
cls_losses = torch.stack(
[
F.nll_loss(logprobs[dom_idx], soutputs[dom_idx], weight=alpha[dom_idx, :])
for dom_idx in range(self.num_src_domains)
]
)
# Domain critic outputs (Wasserstein-based approach)
sdomains, tdomains = [], []
batch_size = tinputs.shape[0]
src_alpha = []
src_alpha_weights = torch.ones(
[self.num_src_domains, batch_size],
requires_grad=False,
dtype=torch.float32,
device=tinputs.device,
)
for dom_idx in range(self.num_src_domains):
for cls_idx in range(self.num_class):
src_alpha_weights[dom_idx, soutputs[dom_idx] == cls_idx] = alpha[dom_idx, cls_idx]
src_alpha.append(src_alpha_weights)
for dom_idx in range(self.num_src_domains):
# weighted src adversarial loss
sdomains.append(
torch.mul(
self.domain_nets[dom_idx](self.grl(s_features[dom_idx])),
torch.unsqueeze(src_alpha_weights[dom_idx], -1),
)
)
tdomains.append(self.domain_nets[dom_idx](self.grl(t_features)))
# slabels = torch.ones([batch_size, 1], requires_grad=False,
# dtype=torch.float32, device=tinputs.device)
# tlabels = torch.zeros([batch_size, 1], requires_grad=False,
# dtype=torch.float32, device=tinputs.device)
# domain loss is the Wasserstein loss (currently w/o the gradient penalty)
domain_losses = torch.stack(
[torch.mean(sdomains[i]) - torch.mean(tdomains[i]) for i in range(self.num_src_domains)]
)
# Defining the domain regularization loss (gradient penalty)
domain_gradient = []
for tsk in range(self.num_src_domains):
src_rand = s_features[tsk]
epsilon = np.random.rand()
interpolated = epsilon * src_rand + (1 - epsilon) * t_features
inter_f = self.domain_nets[tsk](interpolated)
# The following computes the penalty enforcing the Lipschitz constraint
penalty_coefficient = 10.0
# torch.norm can be unstable? https://github.com/pytorch/pytorch/issues/2534
# f_gradient_norm = torch.norm(torch.autograd.grad(torch.sum(inter_f), interpolated)[0], dim=1)
f_gradient = torch.autograd.grad(
torch.sum(inter_f), interpolated, create_graph=True, retain_graph=True
)[0]
f_gradient_norm = torch.sqrt(torch.sum(f_gradient ** 2, dim=1) + 1e-10)
domain_gradient_penalty = penalty_coefficient * torch.mean((f_gradient_norm - 1.0) ** 2)
domain_gradient.append(domain_gradient_penalty)
domain_gradient = torch.stack(domain_gradient)
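# The block above is the WGAN-GP gradient penalty (Gulrajani et al., 2017):
# the critic's gradient norm is pushed toward 1 at points interpolated
# between source and target features, enforcing the 1-Lipschitz constraint
# that the Wasserstein estimate in `domain_losses` relies on.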
# semantic loss (depending on the tar_reweighted loss)
tar_pred_cuda = torch.tensor(tar_pred).to(alpha.device)
src_semantic = []
for tsk in range(self.num_src_domains):
tar_y_estimated = alpha[tsk, :] * src_truth_label[tsk, :]
semantic_loss = self.update_center(
tsk, s_semantic[tsk], t_semantic, soutputs[tsk], tar_pred_cuda, tar_y_estimated
)
src_semantic.append(semantic_loss)
src_semantic = torch.stack(src_semantic)
return self._aggregation(cls_losses, domain_losses, domain_gradient, src_semantic)
def _aggregation(self, cls_losses, domain_losses, domain_gradient, src_semantic):
"""
Aggregate the losses into a scalar
"""
losses_tuple = (cls_losses, domain_losses, domain_gradient, src_semantic)
mu = self.mu
gp_coef = self.gp_coef
sem_coef = self.sem_coef
train_loss = cls_losses + mu * (
domain_losses + gp_coef * domain_gradient + sem_coef * src_semantic
)
convex_loss = (cls_losses + 0.01 * src_semantic).detach()
return train_loss, self.C, self.tar_pred, convex_loss, losses_tuple
def update_center(self, tsk, src_fea, tar_fea, s_true, t_pseudo, tar_y_estimated):
self.src_centroid = self.src_centroid.to(src_fea.device)
self.tar_centroid = self.tar_centroid.to(src_fea.device)
# get feature size (batch_size X dimension)
n, d = src_fea.shape
# get labels
s_labels, t_labels = s_true, t_pseudo
# image number in each class
ones = torch.ones_like(s_labels, dtype=torch.float)
zeros = torch.zeros(self.num_class).to(src_fea.device)
# samples per class
s_n_classes = zeros.scatter_add(0, s_labels, ones)
t_n_classes = zeros.scatter_add(0, t_labels, ones)
# class counts cannot be 0 when calculating centroids
ones = torch.ones_like(s_n_classes)
s_n_classes = torch.max(s_n_classes, ones)
t_n_classes = torch.max(t_n_classes, ones)
# calculating centroids, sum and divide
zeros = torch.zeros(self.num_class, d).to(src_fea.device)
s_sum_feature = zeros.scatter_add(0, torch.transpose(s_labels.repeat(d, 1), 1, 0), src_fea)
t_sum_feature = zeros.scatter_add(0, torch.transpose(t_labels.repeat(d, 1), 1, 0), tar_fea)
current_s_centroid = torch.div(s_sum_feature, s_n_classes.view(self.num_class, 1))
current_t_centroid = torch.div(t_sum_feature, t_n_classes.view(self.num_class, 1))
# Moving Centroid
decay = self.decay
src_centroid = (1 - decay) * self.src_centroid[tsk, :, :] + decay * current_s_centroid
tar_centroid = (1 - decay) * self.tar_centroid + decay * current_t_centroid
# *** version 1 ***
s_loss = torch.mean(torch.pow(src_centroid - tar_centroid, 2), dim=1)
semantic_loss = torch.sum(torch.mul(tar_y_estimated, s_loss))
# *** version 2: code from MSTN ***
# s_loss = self.MSELoss(src_centroid, tar_centroid)
# semantic_loss = torch.sum(torch.mm(torch.unsqueeze(tar_y_estimated, 0), s_loss)) / n
self.src_centroid[tsk, :, :] = src_centroid.detach()
self.tar_centroid = tar_centroid.detach()
return semantic_loss
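# `update_center` builds per-class feature centroids with scatter_add and
# tracks them with an exponential moving average (decay = 0.3). A minimal
# standalone sketch of the centroid step (shapes are hypothetical):
#   feats = torch.randn(5, 3)                  # 5 samples, 3-dim features
#   labels = torch.tensor([0, 1, 0, 2, 1])     # 3 classes
#   sums = torch.zeros(3, 3).scatter_add(0, labels.repeat(3, 1).T, feats)
#   counts = torch.zeros(3).scatter_add(0, labels, torch.ones(5)).clamp(min=1)
#   centroids = sums / counts.unsqueeze(1)     # class-wise mean features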
def inference(self, x):
x = self.feature_net(x)
x = self.class_net(x)[1]
return F.log_softmax(x, dim=1)
########## Models ##########
# DARN and MDAN
class DarnBase(nn.Module):
def __init__(self, configs):
"""
Domain AggRegation Network.
"""
super().__init__()
self.num_src_domains = configs["num_src_domains"]
# Gradient reversal layer.
self.grl = GradientReversalLayer.apply
self.mode = mode = configs["mode"]
self.mu = configs["mu"]
self.gamma = configs["gamma"]
if mode == "L2":
self.proj = L2ProjFunction.apply
else:
self.proj = None
def forward(self, sinputs, soutputs, tinputs):
"""
:param sinputs: A list of k inputs from k source domains.
:param soutputs: A list of k outputs from k source domains.
:param tinputs: Input from the target domain.
:return: tuple(aggregated loss, domain weights)
"""
# Compute features
s_features = []
for i in range(self.num_src_domains):
s_features.append(self.feature_net(sinputs[i]))
t_features = self.feature_net(tinputs)
# Classification probabilities on k source domains.
logprobs = []
for i in range(self.num_src_domains):
logprobs.append(F.log_softmax(self.class_net(s_features[i]), dim=1))
train_losses = torch.stack(
[F.nll_loss(logprobs[i], soutputs[i]) for i in range(self.num_src_domains)]
)
# Domain classification accuracies.
sdomains, tdomains = [], []
for i in range(self.num_src_domains):
sdomains.append(self.domain_nets[i](self.grl(s_features[i])))
tdomains.append(self.domain_nets[i](self.grl(t_features)))
batch_size = tinputs.shape[0]
slabels = torch.ones(
[batch_size, 1], requires_grad=False, dtype=torch.float32, device=tinputs.device
)
tlabels = torch.zeros(
[batch_size, 1], requires_grad=False, dtype=torch.float32, device=tinputs.device
)
domain_losses = torch.stack(
[
F.binary_cross_entropy_with_logits(sdomains[i], slabels)
+ F.binary_cross_entropy_with_logits(tdomains[i], tlabels)
for i in range(self.num_src_domains)
]
)
return self._aggregation(train_losses, domain_losses)
def _aggregation(self, train_losses, domain_losses):
"""
Aggregate the losses into a scalar
"""
mu, alpha = self.mu, None
if self.num_src_domains == 1: # dann
loss = train_losses + mu * domain_losses
else:
mode, gamma = self.mode, self.gamma
if mode == "dynamic": # mdan
g = (train_losses + mu * domain_losses) * gamma
loss = torch.logsumexp(g, dim=0) / gamma
elif mode == "L2": # darn
g = gamma * (train_losses + mu * domain_losses)
alpha = self.proj(g)
loss = torch.dot(g, alpha) + torch.norm(alpha)
alpha = alpha.cpu().detach().numpy()
else:
raise NotImplementedError("Unknown aggregation mode %s" % mode)
return loss, alpha
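# Aggregation note: in "dynamic" mode (MDAN), logsumexp(gamma*g)/gamma is a
# smooth upper bound on max(g) that approaches the hardest-domain loss as
# gamma grows; in "L2" mode (DARN), gamma*g is projected to obtain domain
# weights alpha, which forward() returns alongside the aggregated loss.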
def inference(self, x):
x = self.feature_net(x)
x = self.class_net(x)
return F.log_softmax(x, dim=1)
class WarnBase(nn.Module):
def __init__(self, configs):
"""
Domain AggRegation Network.
"""
super().__init__()
self.num_src_domains = configs["num_src_domains"]
# define the number of classes
self.num_class = configs["num_src_classes"]
self.fea_dim = configs["feauture_dim"]
# Gradient reversal layer.
self.grl = GradientReversalLayer.apply
# self.mode = configs["mode"]
self.mu = configs["mu"]
self.gp_coef = configs["gp_coef"]
self.sem_coef = configs["sem_coef"]
self.gamma = configs["gamma"]
# option about semantic matching
self.semantic = True
# define the confusion matrix for every source domain
self.C = np.zeros([self.num_class, self.num_class, self.num_src_domains])
# define the label re-weights alpha (T-task times num_classes)
self.lam = np.ones([self.num_src_domains]) / self.num_src_domains
# defining the source centroids (num_src_domains x num_class x fea_dim)
self.src_centroid = torch.zeros([self.num_src_domains, self.num_class, self.fea_dim])
self.tar_centroid = torch.zeros([self.num_class, self.fea_dim])
self.decay = 0.3
# mse loss for semantic losses
self.MSELoss = nn.MSELoss(reduction="none")
# define the target prediction output distribution (per-class prediction counts)
self.tar_pred = np.zeros([self.num_class])
def forward(self, sinputs, soutputs, tinputs, alpha, src_truth_label):
"""
:param sinputs: A list of k inputs from k source domains.
:param soutputs: A list of k outputs from k source domains.
:param tinputs: Input from the target domain.
:estimated_tar_dis: Estimated target label distribution (this is different from target prediction distribution)
:return: tuple(aggregated loss, domain weights)
"""
# Compute features
s_features = []
for dom_idx in range(self.num_src_domains):
s_features.append(self.feature_net(sinputs[dom_idx]))
t_features = self.feature_net(tinputs)
# Classification probabilities on k source domains
logprobs = []
for dom_idx in range(self.num_src_domains):
with torch.no_grad():
# source prediction error
src_pred = torch.argmax(self.class_net(s_features[dom_idx]), 1).cpu().numpy()
tar_pred = torch.argmax(self.class_net(t_features), 1).cpu().numpy()
src_true = soutputs[dom_idx].cpu().numpy()
# un-normalized
self.C[:, :, dom_idx] = confusion_matrix(
src_true, src_pred, labels=list(range(self.num_class))
)
for cls_idx in range(self.num_class):
self.tar_pred[cls_idx] = np.count_nonzero(tar_pred == cls_idx)
logprobs.append(F.log_softmax(self.class_net(s_features[dom_idx]), dim=1))
# weighted prediction loss
cls_losses = torch.stack(
[
F.nll_loss(logprobs[dom_idx], soutputs[dom_idx], weight=alpha[dom_idx, :])
for dom_idx in range(self.num_src_domains)
]
)
# Domain critic outputs (Wasserstein-based approach)
sdomains, tdomains = [], []
batch_size = tinputs.shape[0]
src_alpha = []
src_alpha_weights = torch.ones(
[self.num_src_domains, batch_size],
requires_grad=False,
dtype=torch.float32,
device=tinputs.device,
)
for dom_idx in range(self.num_src_domains):
for cls_idx in range(self.num_class):
src_alpha_weights[dom_idx, soutputs[dom_idx] == cls_idx] = alpha[dom_idx, cls_idx]
src_alpha.append(src_alpha_weights)
for dom_idx in range(self.num_src_domains):
# weighted src adversarial loss
sdomains.append(
torch.mul(
self.domain_nets[dom_idx](self.grl(s_features[dom_idx])),
torch.unsqueeze(src_alpha_weights[dom_idx], -1),
)
)
tdomains.append(self.domain_nets[dom_idx](self.grl(t_features)))
# slabels = torch.ones([batch_size, 1], requires_grad=False,
# dtype=torch.float32, device=tinputs.device)
# tlabels = torch.zeros([batch_size, 1], requires_grad=False,
# dtype=torch.float32, device=tinputs.device)
# domain loss is the Wasserstein loss (currently w/o the gradient penalty)
domain_losses = torch.stack(
[torch.mean(sdomains[i]) - torch.mean(tdomains[i]) for i in range(self.num_src_domains)]
)
# Defining the domain regularization loss (gradient penalty)
domain_gradient = []
for tsk in range(self.num_src_domains):
src_rand = s_features[tsk]
epsilon = np.random.rand()
interpolated = epsilon * src_rand + (1 - epsilon) * t_features
inter_f = self.domain_nets[tsk](interpolated)
# The following computes the penalty enforcing the Lipschitz constraint
penalty_coefficient = 10.0
# torch.norm can be unstable? https://github.com/pytorch/pytorch/issues/2534
# f_gradient_norm = torch.norm(torch.autograd.grad(torch.sum(inter_f), interpolated)[0], dim=1)
f_gradient = torch.autograd.grad(
torch.sum(inter_f), interpolated, create_graph=True, retain_graph=True
)[0]
f_gradient_norm = torch.sqrt(torch.sum(f_gradient ** 2, dim=1) + 1e-10)
domain_gradient_penalty = penalty_coefficient * torch.mean((f_gradient_norm - 1.0) ** 2)
domain_gradient.append(domain_gradient_penalty)
domain_gradient = torch.stack(domain_gradient)
# semantic loss (depending on the tar_reweighted loss)
src_semantic = []
tar_pred_cuda = torch.tensor(tar_pred).to(alpha.device)
for tsk in range(self.num_src_domains):
tar_y_estimated = alpha[tsk, :] * src_truth_label[tsk, :]
semantic_loss = self.update_center(
tsk, s_features[tsk], t_features, soutputs[tsk], tar_pred_cuda, tar_y_estimated
)
src_semantic.append(semantic_loss)
src_semantic = torch.stack(src_semantic)
return self._aggregation(cls_losses, domain_losses, domain_gradient, src_semantic)
def _aggregation(self, cls_losses, domain_losses, domain_gradient, src_semantic):
"""
Aggregate the losses into a scalar
"""
losses_tuple = (cls_losses, domain_losses, domain_gradient, src_semantic)
mu = self.mu
gp_coef = self.gp_coef
sem_coef = self.sem_coef
train_loss = cls_losses + mu * (
domain_losses + gp_coef * domain_gradient + sem_coef * src_semantic
)
# for amazon
# convex_loss = (cls_losses + sem_coef * mu * src_semantic).detach()
# convex_loss = (cls_losse + mu*src_semantic).detach()
convex_loss = (cls_losses + 0.1 * src_semantic).detach()
return train_loss, self.C, self.tar_pred, convex_loss, losses_tuple
def update_center(self, tsk, src_fea, tar_fea, s_true, t_pseudo, tar_y_estimated):
self.src_centroid = self.src_centroid.to(src_fea.device)
self.tar_centroid = self.tar_centroid.to(src_fea.device)
# get feature size (batch_size X dimension)
n, d = src_fea.shape
# get labels
s_labels, t_labels = s_true, t_pseudo
# image number in each class
ones = torch.ones_like(s_labels, dtype=torch.float)
zeros = torch.zeros(self.num_class).to(src_fea.device)
# samples per class
s_n_classes = zeros.scatter_add(0, s_labels, ones)
t_n_classes = zeros.scatter_add(0, t_labels, ones)
# class counts cannot be 0 when calculating centroids
ones = torch.ones_like(s_n_classes)
s_n_classes = torch.max(s_n_classes, ones)
t_n_classes = torch.max(t_n_classes, ones)
# calculating centroids, sum and divide
zeros = torch.zeros(self.num_class, d).to(src_fea.device)
s_sum_feature = zeros.scatter_add(0, torch.transpose(s_labels.repeat(d, 1), 1, 0), src_fea)
t_sum_feature = zeros.scatter_add(0, torch.transpose(t_labels.repeat(d, 1), 1, 0), tar_fea)
current_s_centroid = torch.div(s_sum_feature, s_n_classes.view(self.num_class, 1))
current_t_centroid = torch.div(t_sum_feature, t_n_classes.view(self.num_class, 1))
# Moving Centroid
decay = self.decay
src_centroid = (1 - decay) * self.src_centroid[tsk, :, :] + decay * current_s_centroid
tar_centroid = (1 - decay) * self.tar_centroid + decay * current_t_centroid
# *** version 1 ***
s_loss = torch.mean(torch.pow(src_centroid - tar_centroid, 2), dim=1)
semantic_loss = torch.sum(torch.mul(tar_y_estimated, s_loss))
# *** version 2: code from MSTN ***
# s_loss = self.MSELoss(src_centroid, tar_centroid)
# semantic_loss = torch.sum(torch.mm(torch.unsqueeze(tar_y_estimated, 0), s_loss)) / n
self.src_centroid[tsk, :, :] = src_centroid.detach()
self.tar_centroid = tar_centroid.detach()
return semantic_loss
def inference(self, x):
x = self.feature_net(x)
x = self.class_net(x)
return F.log_softmax(x, dim=1)
class WarnMLP(WarnBase):
def __init__(self, configs):
"""
DARN with MLP
"""
super().__init__(configs)
fea_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["hidden_layers"][:-1],
"output_dim": configs["hidden_layers"][-1],
"drop_rate": configs["drop_rate"],
"process_final": True,
}
self.feature_net = MLPNet(fea_configs)
self.class_net = nn.Linear(configs["hidden_layers"][-1], configs["num_classes"])
self.domain_nets = nn.ModuleList(
[nn.Linear(configs["hidden_layers"][-1], 1) for _ in range(self.num_src_domains)]
)
class DarnMLP(DarnBase):
def __init__(self, configs):
"""
DARN with MLP
"""
super().__init__(configs)
fea_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["hidden_layers"][:-1],
"output_dim": configs["hidden_layers"][-1],
"drop_rate": configs["drop_rate"],
"process_final": True,
}
self.feature_net = MLPNet(fea_configs)
self.class_net = nn.Linear(configs["hidden_layers"][-1], configs["num_classes"])
self.domain_nets = nn.ModuleList(
[nn.Linear(configs["hidden_layers"][-1], 1) for _ in range(self.num_src_domains)]
)
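# Construction sketch for the MLP variants (the configuration values below
# are hypothetical; the keys follow this module):
#   configs = {"num_src_domains": 3, "input_dim": 100,
#              "hidden_layers": [500, 100], "num_classes": 2,
#              "drop_rate": 0.5, "mode": "L2", "mu": 0.1, "gamma": 10.0}
#   model = DarnMLP(configs)
#   # forward() takes lists of per-source batches plus a target batch and
#   # returns (aggregated loss, domain weights alpha).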
class DarnConv(DarnBase):
def __init__(self, configs):
"""
WARN with convolution feature extractor
"""
super().__init__(configs)
self.feature_net = ConvNet(configs)
cls_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["cls_fc_layers"],
"output_dim": configs["num_classes"],
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.class_net = MLPNet(cls_configs)
dom_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["dom_fc_layers"],
"output_dim": 1,
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.domain_nets = nn.ModuleList([MLPNet(dom_configs) for _ in range(self.num_src_domains)])
class WarnConv(WarnBase):
def __init__(self, configs):
"""
WARN with convolution feature extractor
"""
super().__init__(configs)
self.feature_net = ConvNet(configs)
cls_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["cls_fc_layers"],
"output_dim": configs["num_classes"],
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.class_net = MLPNet(cls_configs)
dom_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["dom_fc_layers"],
"output_dim": 1,
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.domain_nets = nn.ModuleList([MLPNet(dom_configs) for _ in range(self.num_src_domains)])
class WarnConv_digits(WarnBase_digits):
def __init__(self, configs):
"""
WARN with convolution feature extractor
"""
super().__init__(configs)
self.feature_net = ConvNet(configs)
cls_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["cls_fc_layers"],
"output_dim": configs["num_classes"],
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.class_net = MLPNet_digits(cls_configs)
dom_configs = {
"input_dim": configs["input_dim"],
"hidden_layers": configs["dom_fc_layers"],
"output_dim": 1,
"drop_rate": configs["drop_rate"],
"process_final": False,
}
self.domain_nets = nn.ModuleList([MLPNet(dom_configs) for _ in range(self.num_src_domains)])
| 37.897985 | 119 | 0.60769 | 3,756 | 30,091 | 4.608892 | 0.084398 | 0.03518 | 0.033794 | 0.041245 | 0.913177 | 0.90249 | 0.888279 | 0.875975 | 0.869216 | 0.862226 | 0 | 0.007469 | 0.283606 | 30,091 | 794 | 120 | 37.897985 | 0.795565 | 0.175833 | 0 | 0.738683 | 0 | 0 | 0.049035 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05144 | false | 0 | 0.016461 | 0 | 0.123457 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
8294f337a16c4e8e99f6d28b161d0954a2445e4a | 1,665 | py | Python | setup.py | vyathakavilocana/AIatNCStateSpring2021SafetyPathGeneratorProjectRepository | ae095115bde5dbf1d7f9ebcbefac6b4446f04bd5 | ["MIT"] | null | null | null | setup.py | vyathakavilocana/AIatNCStateSpring2021SafetyPathGeneratorProjectRepository | ae095115bde5dbf1d7f9ebcbefac6b4446f04bd5 | ["MIT"] | 5 | 2021-05-02T19:49:44.000Z | 2021-05-02T20:02:47.000Z | setup.py | vyathakavilocana/AIatNCStateSpring2021SafetyPathGeneratorProjectRepository | ae095115bde5dbf1d7f9ebcbefac6b4446f04bd5 | ["MIT"] | null | null | null |
from setuptools import find_packages, setup
setup(
name='src',
packages=find_packages(),
version='0.1.0',
description='This Spring 2021 AI at NC State project repository contains all of the machine learning prototyping code for the Safety Path Generator project. Acquiring tabular de-identified crime data from local college cities in North and South Carolina, it seeks to harness the power of applied AI models to provide a heat map of where incidences of crime per type are occurring at any given time and place, alert users of those areas, and thereby provide them with a path to take to avoid certain areas at certain points of the day where their likelihood of becoming victims of a particular type of crime is higher.',
author='AI at NC State (Pratham Chhabria, Swathi Dinkaran, Srisheel Gunnisetti) and Clemson AI Club (Jeremy Wang, Jeremy Spooner)',
license='MIT',
)
| 151.363636 | 1,281 | 0.581982 | 635 | 1,665 | 1.908661 | 0.166929 | 0.19802 | 0.29703 | 0.386139 | 0.441419 | 0.441419 | 0.393564 | 0.393564 | 0.393564 | 0.393564 | 0 | 0.004664 | 0.098499 | 1,665 | 10 | 1,282 | 166.5 | 0.639574 | 0 | 0 | 0 | 0 | 0.222222 | 0.901502 | 0.445045 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
82cac140b00129de3fa0a26c767d6a2672c9bcd7 | 10,800 | py | Python | sams_dunbrack/analysis/plot_cluster.py | jiayeguo/sams_dunbrack | 9f8bcffdabd1fcbd59c398e52763c22dcd1868df | ["MIT"] | 1 | 2019-07-25T18:46:33.000Z | 2019-07-25T18:46:33.000Z | sams_dunbrack/analysis/plot_cluster.py | jiayeguo/sams_dunbrack | 9f8bcffdabd1fcbd59c398e52763c22dcd1868df | ["MIT"] | 1 | 2021-09-17T18:17:56.000Z | 2021-09-17T18:17:56.000Z | sams_dunbrack/analysis/plot_cluster.py | choderalab/sams_dunbrack | 9f8bcffdabd1fcbd59c398e52763c22dcd1868df | ["MIT"] | null | null | null |
from netCDF4 import Dataset
import mdtraj as md
from openmmtools import states
import matplotlib
matplotlib.use("TkAgg")
import matplotlib.pyplot as plt
import numpy as np
def plot_history():
# get state, log weights and gamma history from traj.nc
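# Reading the sampler history could look like the sketch below; the variable
# names inside traj.nc are assumptions, not confirmed by this file:
#   ncfile = Dataset('traj.nc', 'r')
#   state_history = ncfile.variables['states'][:]
#   logZ_history = ncfile.variables['logZ'][:]
#   gamma_history = ncfile.variables['gamma'][:]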
#clusters = [19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 17, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 10, 10, 10, 4, 4, 4, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 5, 5, 5, 5, 5, 12, 5, 5, 12, 5, 12, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 3, 3, 3, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 9, 16, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 6, 0, 6, 0, 6, 6, 6, 6, 0, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 8, 15, 15, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 2, 8, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18]
#clusters = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 2, 2, 2, 8, 8, 8, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 7, 2, 2, 2, 7, 2, 2, 2, 7, 7, 2, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 9, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 11, 11, 11, 4, 4, 4, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 3, 3, 11, 11, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 13, 13, 5, 6, 5, 5, 13, 5, 5, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 6, 6, 6, 6, 13, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 1, 14, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 12, 1, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12]
clusters = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 11, 5, 11, 5, 5, 5, 11, 5, 5, 5, 11, 11, 5, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 2, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 10, 2, 2, 10, 10, 10, 10, 10, 10, 10, 2, 10, 10, 2, 2, 2, 2, 2, 2, 2, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 6, 0, 6, 6, 0, 0, 0, 0, 0, 0, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 9, 3, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7, 9, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
#define colors
c = dict()
c[0] = '#FF0000' #red
c[1] = '#FF8C00' #orange
c[2] = '#FFD700' #yellow
c[3] = '#32CD32' #green
c[4] = '#48D1CC' #teal
c[5] = '#0000FF' #blue
c[6] = '#8A2BE2' #blue-violet
c[7] = '#FF1493' #pink
c[8] = '#393E46' #dark
fig,ax = plt.subplots()
x = list(range(1, len(clusters) + 1))
ax.scatter(x, clusters, color=c[5])
ax.set_ylim(-1,14)
ax.set_yticks(np.arange(0, 14, 1, dtype=int))
plt.show()
return
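# A minimal sketch (an assumption, not the original workflow) of how the
# state history could be read from traj.nc instead of hard-coding `clusters`;
# the netCDF variable name 'states' is hypothetical:
# import netCDF4
# with netCDF4.Dataset('traj.nc') as nc:
#     clusters = nc.variables['states'][:].astype(int).tolist()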
plot_history()
| 300 | 3,477 | 0.413148 | 3,125 | 10,800 | 1.42656 | 0.02912 | 0.113055 | 0.166891 | 0.218932 | 0.87528 | 0.873262 | 0.859803 | 0.858681 | 0.857111 | 0.850381 | 0 | 0.528048 | 0.296852 | 10,800 | 35 | 3,478 | 308.571429 | 0.058994 | 0.635185 | 0 | 0 | 0 | 0 | 0.01732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.206897 | 0 | 0.275862 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
7da271363da9aa7abda6f408f83066bdc193a041 | 32,243 | py | Python | core/domain/blog_validators_test.py | juanapatankar/oppia | c8155452634825ad0bb7ce0e5b0daafece86e206 | [
"Apache-2.0"
] | null | null | null | core/domain/blog_validators_test.py | juanapatankar/oppia | c8155452634825ad0bb7ce0e5b0daafece86e206 | [
"Apache-2.0"
] | null | null | null | core/domain/blog_validators_test.py | juanapatankar/oppia | c8155452634825ad0bb7ce0e5b0daafece86e206 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2021 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for core.domain.blog_validators."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
import datetime
from core.domain import blog_services
from core.domain import prod_validation_jobs_one_off
from core.platform import models
from core.tests import test_utils
datastore_services = models.Registry.import_datastore_services()
(blog_models, user_models) = models.Registry.import_models([
models.NAMES.blog, models.NAMES.user])
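# Registry.import_models returns the storage-model modules for the requested
# NAMES entries; blog_models and user_models are used below to fetch and
# mutate datastore entities directly.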
class BlogPostModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(BlogPostModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup('abc@gmail.com', 'abc')
self.author_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.author_id_1 = self.get_user_id_from_email('abc@gmail.com')
self.blog_post_1 = blog_services.create_new_blog_post(self.author_id)
self.blog_post_id_1 = self.blog_post_1.id
self.blog_post_model_1 = (
blog_models.BlogPostModel.get_by_id(self.blog_post_id_1))
self.blog_post_2 = blog_services.create_new_blog_post(self.author_id_1)
self.blog_post_id_2 = self.blog_post_2.id
self.blog_post_model_2 = (
blog_models.BlogPostModel.get_by_id(self.blog_post_id_2))
self.blog_post_summary_model = (
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_1))
self.job_class = (
prod_validation_jobs_one_off.BlogPostModelAuditOneOffJob)
def test_standard_operation(self):
expected_output = [
u'[u\'fully-validated BlogPostModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.blog_post_model_1.created_on = (
self.blog_post_model_1.last_updated + datetime.timedelta(
days=1))
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
expected_output = [(
u'[u\'failed validation check for time field relation check '
'of BlogPostModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.blog_post_model_1.id,
self.blog_post_model_1.created_on,
self.blog_post_model_1.last_updated
), u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_repeated_title(self):
self.blog_post_model_1.title = 'Sample Title'
self.blog_post_model_2.title = 'Sample Title'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_model_2.update_timestamps()
self.blog_post_model_2.put()
self.blog_post_summary_model.title = 'Sample Title'
self.blog_post_summary_model.update_timestamps()
self.blog_post_summary_model.put()
blog_post_summary_model_2 = (
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_2))
blog_post_summary_model_2.title = 'Sample Title'
blog_post_summary_model_2.update_timestamps()
blog_post_summary_model_2.put()
expected_output = [
(
u'[u\'failed validation check for unique title for blog post '
'of BlogPostModel\', '
'[u"Entity id %s: title %s matches with title '
'blog post models with ids [\'%s\']",'
' u"Entity id %s: title %s matches'
' with title blog post models with ids [\'%s\']"]]' % (
self.blog_post_id_1, self.blog_post_model_1.title,
self.blog_post_id_2, self.blog_post_id_2,
self.blog_post_model_1.title, self.blog_post_id_1)
)
]
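# literal_eval=True makes run_job_and_check_output compare the evaluated
# output structures rather than the raw strings, so the ordering of the ids
# inside the messages does not affect the comparison.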
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=True)
def test_model_with_repeated_url_fragment(self):
self.blog_post_model_1.url_fragment = 'sample-url'
self.blog_post_model_2.url_fragment = 'sample-url'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_model_2.update_timestamps()
self.blog_post_model_2.put()
expected_output = [
(
u'[u\'failed validation check for unique url fragment for '
'blog post of BlogPostModel\', '
'[u"Entity id %s: url fragment %s matches with url fragment'
' of blog post models with ids [\'%s\']",'
' u"Entity id %s: url fragment %s matches with url'
' fragment of blog post models with ids [\'%s\']"]]' % (
self.blog_post_id_1,
self.blog_post_model_1.url_fragment,
self.blog_post_id_2, self.blog_post_id_2,
self.blog_post_model_1.url_fragment,
self.blog_post_id_1)
)
]
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=True)
def test_missing_summary_model_failure(self):
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_1).delete()
expected_output = [
(
u'[u\'failed validation check for blog_post_summary_model_ids '
'field check of BlogPostModel\', '
'[u"Entity id %s: based on field blog_post_summary_model_ids '
'having value %s, expected model BlogPostSummaryModel with id'
' %s but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1)
), u'[u\'fully-validated BlogPostModel\', 1]'
]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_rights_model_failure(self):
blog_models.BlogPostRightsModel.get_by_id(self.blog_post_id_1).delete()
expected_output = [
(
u'[u\'failed validation check for blog_post_rights_model_ids'
' field check of BlogPostModel\', '
'[u"Entity id %s: based on field blog_post_rights_model_ids '
'having value %s, expected model BlogPostRightsModel with id %s'
' but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1)
), (
u'[u\'failed validation check for domain object check of '
'BlogPostModel\', [u"Entity id %s: Entity fails domain '
'validation with the error \'NoneType\' object has no '
'attribute \'blog_post_is_published\'"]]' % self.blog_post_id_1
), u'[u\'fully-validated BlogPostModel\', 1]'
]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_with_missing_thumbnail_filename(self):
expected_output = [
u'[u\'fully-validated BlogPostModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_with_missing_title(self):
expected_output = [
u'[u\'fully-validated BlogPostModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_with_missing_url_fragment(self):
expected_output = [
u'[u\'fully-validated BlogPostModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_with_missing_thumbnail_filename(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_model_1.title = 'Sample Title'
self.blog_post_model_1.tags = ['tag']
self.blog_post_model_1.url_fragment = 'sample-title'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_summary_model.title = 'Sample Title'
self.blog_post_summary_model.update_timestamps()
self.blog_post_summary_model.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Expected thumbnail filename '
'to be a string, received: None.\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_with_missing_title(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_model_1.title = ''
self.blog_post_model_1.tags = ['tag']
self.blog_post_model_1.url_fragment = 'sample-title'
self.blog_post_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Title '
'should not be empty\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_with_missing_url_fragment(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_model_1.title = 'sample-title'
self.blog_post_model_1.tags = ['tag']
self.blog_post_model_1.url_fragment = ''
self.blog_post_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_summary_model.title = 'sample-title'
self.blog_post_summary_model.update_timestamps()
self.blog_post_summary_model.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Blog Post URL Fragment '
'field should not be empty.\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_with_missing_content(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_model_1.title = 'sample-title'
self.blog_post_model_1.tags = ['tag']
self.blog_post_model_1.url_fragment = 'sample-title'
self.blog_post_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_summary_model.title = 'sample-title'
self.blog_post_summary_model.update_timestamps()
self.blog_post_summary_model.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Content can not be '
'empty\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_author_user_model_failure(self):
user_models.UserSettingsModel.get_by_id(self.author_id).delete()
expected_output = [
(
u'[u\'failed validation check for author_id '
'field check of BlogPostModel\', '
'[u"Entity id %s: based on field author_id having '
'value %s, expected model UserSettingsModel with id %s '
'but it doesn\'t exist"]]') % (
self.blog_post_id_1, self.author_id, self.author_id),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_different_title_for_blog_post_summary(self):
self.blog_post_model_1.title = 'sample'
self.blog_post_model_1.update_timestamps()
self.blog_post_model_1.put()
self.blog_post_summary_model.title = 'sample-title'
self.blog_post_summary_model.update_timestamps()
self.blog_post_summary_model.put()
expected_output = [
(
u'[u\'failed validation check for Same Title for blog post'
' and blog post summary of BlogPostModel\', '
'[u"Title for blog post with Entity id %s'
' does not match with title of corresponding'
' blog post summary model"]]' % (self.blog_post_id_1)
),
u'[u\'fully-validated BlogPostModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=True)
class BlogPostSummaryModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(BlogPostSummaryModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup('abc@gmail.com', 'abc')
self.author_id_1 = self.get_user_id_from_email('abc@gmail.com')
self.author_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.blog_post_1 = blog_services.create_new_blog_post(self.author_id)
self.blog_post_id_1 = self.blog_post_1.id
self.blog_post_summary_model_1 = (
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_1))
self.blog_post_2 = blog_services.create_new_blog_post(self.author_id_1)
self.blog_post_id_2 = self.blog_post_2.id
self.blog_post_summary_model_2 = (
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_2))
self.job_class = (
prod_validation_jobs_one_off.BlogPostSummaryModelAuditOneOffJob)
def test_standard_operation(self):
expected_output = [
u'[u\'fully-validated BlogPostSummaryModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.blog_post_summary_model_1.created_on = (
self.blog_post_summary_model_1.last_updated +
datetime.timedelta(days=1))
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
expected_output = [(
u'[u\'failed validation check for time field relation check '
'of BlogPostSummaryModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.blog_post_summary_model_1.id,
self.blog_post_summary_model_1.created_on,
self.blog_post_summary_model_1.last_updated
), u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_author_user_model_failure(self):
user_models.UserSettingsModel.get_by_id(self.author_id).delete()
expected_output = [
(
u'[u\'failed validation check for author_id '
'field check of BlogPostSummaryModel\', '
'[u"Entity id %s: based on field author_id having '
'value %s, expected model UserSettingsModel with id %s '
'but it doesn\'t exist"]]') % (
self.blog_post_id_1, self.author_id, self.author_id),
u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_summary_with_missing_title(self):
expected_output = [
u'[u\'fully-validated BlogPostSummaryModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_summary_with_missing_thumbnail_filename(self):
expected_output = [u'[u\'fully-validated BlogPostSummaryModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_rights_model_failure(self):
blog_models.BlogPostRightsModel.get_by_id(
self.blog_post_id_1).delete()
expected_output = [
(
u'[u\'failed validation check for blog_post_rights_model_ids'
' field check of BlogPostSummaryModel\', '
'[u"Entity id %s: based on field blog_post_rights_model_ids '
'having value %s, expected model BlogPostRightsModel with id %s'
' but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1)
), (
u'[u\'failed validation check for domain object check of '
'BlogPostSummaryModel\', [u"Entity id %s: Entity fails domain '
'validation with the error \'NoneType\' object has no '
'attribute \'blog_post_is_published\'"]]' % self.blog_post_id_1
), u'[u\'fully-validated BlogPostSummaryModel\', 1]'
]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_blog_post_model_failure(self):
blog_models.BlogPostModel.get_by_id(self.blog_post_id_1).delete()
expected_output = [
(
u'[u\'failed validation check for blog_post_model_ids '
'field check of BlogPostSummaryModel\', '
'[u"Entity id %s: based on field blog_post_model_ids having '
'value %s, expected model BlogPostModel with id %s '
'but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1)
), u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_private_blog_post_summary_with_missing_url_fragment(self):
expected_output = [
u'[u\'fully-validated BlogPostSummaryModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_summary_with_missing_thumbnail_filename(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_summary_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_summary_model_1.title = 'Sample Title'
self.blog_post_summary_model_1.tags = ['tag']
self.blog_post_summary_model_1.url_fragment = 'sample-title'
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostSummaryModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Expected thumbnail filename '
'to be a string, received: None.\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_summary_with_missing_title(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_summary_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_summary_model_1.title = ''
self.blog_post_summary_model_1.tags = ['tag']
self.blog_post_summary_model_1.url_fragment = 'sample-title'
self.blog_post_summary_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostSummaryModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Title '
'should not be empty\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_summary_with_missing_url_fragment(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_summary_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_summary_model_1.title = 'sample-title'
self.blog_post_summary_model_1.tags = ['tag']
self.blog_post_summary_model_1.url_fragment = ''
self.blog_post_summary_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostSummaryModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Blog Post URL Fragment '
'field should not be empty.\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_public_blog_post_summary_with_missing_summary(self):
blog_post_rights = blog_services.get_blog_post_rights(
self.blog_post_summary_model_1.id, strict=False)
blog_post_rights.blog_post_is_published = True
blog_services.save_blog_post_rights(blog_post_rights)
self.blog_post_summary_model_1.title = 'sample-title'
self.blog_post_summary_model_1.tags = ['tag']
self.blog_post_summary_model_1.url_fragment = 'sample-title'
self.blog_post_summary_model_1.thumbnail_filename = 'thumbnail.svg'
self.blog_post_summary_model_1.summary = ''
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'BlogPostSummaryModel\', [u\'Entity id %s: Entity fails '
'domain validation with the error Summary can not be '
'empty\']]' % self.blog_post_id_1
),
u'[u\'fully-validated BlogPostSummaryModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_repeated_title(self):
self.blog_post_summary_model_1.title = 'Sample Title'
self.blog_post_summary_model_2.title = 'Sample Title'
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
self.blog_post_summary_model_2.update_timestamps()
self.blog_post_summary_model_2.put()
expected_output = [
(
u'[u\'failed validation check for unique title for blog post '
'of BlogPostSummaryModel\', '
'[u"Entity id %s: title %s matches with title '
'blog post summary models with ids [\'%s\']",'
' u"Entity id %s: title %s matches'
' with title blog post summary models with ids [\'%s\']"]]'
% (
self.blog_post_id_2, self.blog_post_summary_model_1.title,
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_summary_model_2.title, self.blog_post_id_2
)
)]
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=True)
def test_model_with_repeated_url_fragment(self):
self.blog_post_summary_model_1.url_fragment = 'sample-url'
self.blog_post_summary_model_2.url_fragment = 'sample-url'
self.blog_post_summary_model_1.update_timestamps()
self.blog_post_summary_model_1.put()
self.blog_post_summary_model_2.update_timestamps()
self.blog_post_summary_model_2.put()
expected_output = [
(
u'[u\'failed validation check for unique url fragment for '
'blog post of BlogPostSummaryModel\', '
'[u"Entity id %s: url fragment %s matches with url fragment'
' of blog post summary models with ids [\'%s\']",'
' u"Entity id %s: url fragment %s matches with url'
' fragment of blog post summary models with ids [\'%s\']"]]' % (
self.blog_post_id_1,
self.blog_post_summary_model_1.url_fragment,
self.blog_post_id_2, self.blog_post_id_2,
self.blog_post_summary_model_1.url_fragment,
self.blog_post_id_1)
)
]
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=True)
class BlogPostRightsModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(BlogPostRightsModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup('abc@gmail.com', 'abc')
self.author_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.author_id_1 = self.get_user_id_from_email('abc@gmail.com')
self.blog_post_1 = blog_services.create_new_blog_post(self.author_id)
self.blog_post_id_1 = self.blog_post_1.id
self.blog_post_rights_model_1 = (
blog_models.BlogPostRightsModel.get_by_id(self.blog_post_id_1))
self.blog_post_2 = blog_services.create_new_blog_post(self.author_id_1)
self.blog_post_id_2 = self.blog_post_2.id
self.blog_post_rights_model_2 = (
blog_models.BlogPostRightsModel.get_by_id(self.blog_post_id_2))
self.job_class = (
prod_validation_jobs_one_off.BlogPostRightsModelAuditOneOffJob)
def test_standard_operation(self):
expected_output = [
u'[u\'fully-validated BlogPostRightsModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.blog_post_rights_model_1.created_on = (
self.blog_post_rights_model_1.last_updated +
datetime.timedelta(days=1))
self.blog_post_rights_model_1.update_timestamps()
self.blog_post_rights_model_1.put()
expected_output = [(
u'[u\'failed validation check for time field relation check '
'of BlogPostRightsModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.blog_post_rights_model_1.id,
self.blog_post_rights_model_1.created_on,
self.blog_post_rights_model_1.last_updated
), u'[u\'fully-validated BlogPostRightsModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_blog_post_model_failure(self):
blog_models.BlogPostModel.get_by_id(self.blog_post_id_1).delete()
expected_output = [
(
u'[u\'failed validation check for blog_post_model_ids '
'field check of BlogPostRightsModel\', '
'[u"Entity id %s: based on field blog_post_model_ids having '
'value %s, expected model BlogPostModel with id %s '
'but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1)
), u'[u\'fully-validated BlogPostRightsModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_summary_model_failure(self):
blog_models.BlogPostSummaryModel.get_by_id(self.blog_post_id_1).delete()
expected_output = [
u'[u\'failed validation check for blog_post_summary_model_ids '
'field check of BlogPostRightsModel\', '
'[u"Entity id %s: based on field blog_post_summary_model_ids '
'having value %s, expected model BlogPostSummaryModel with id %s '
'but it doesn\'t exist"]]' % (
self.blog_post_id_1, self.blog_post_id_1,
self.blog_post_id_1),
u'[u\'fully-validated BlogPostRightsModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_editor_user_model_failure(self):
user_models.UserSettingsModel.get_by_id(self.author_id).delete()
expected_output = [
(
u'[u\'failed validation check for editor_ids '
'field check of BlogPostRightsModel\', '
'[u"Entity id %s: based on field editor_ids having '
'value %s, expected model UserSettingsModel with id %s '
'but it doesn\'t exist"]]') % (
self.blog_post_id_1, self.author_id, self.author_id),
u'[u\'fully-validated BlogPostRightsModel\', 1]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
| 48.195815 | 80 | 0.645908 | 4,155 | 32,243 | 4.633454 | 0.051504 | 0.132973 | 0.137752 | 0.081031 | 0.930812 | 0.925151 | 0.922969 | 0.90775 | 0.90001 | 0.88796 | 0 | 0.010514 | 0.265515 | 32,243 | 668 | 81 | 48.267964 | 0.802424 | 0.021958 | 0 | 0.74199 | 0 | 0 | 0.143909 | 0.010694 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062395 | false | 0 | 0.015177 | 0 | 0.082631 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7dc56bcb62fd45134688f10ca0fef45a5f899d9a | 23,474 | py | Python | pynetdicom/tests/test_service_relevant_patient.py | jogerh/pynetdicom | 3ca25f67c32d7cc0d1fe6afe3f3ef333a37bfe72 | [
"MIT"
] | null | null | null | pynetdicom/tests/test_service_relevant_patient.py | jogerh/pynetdicom | 3ca25f67c32d7cc0d1fe6afe3f3ef333a37bfe72 | [
"MIT"
] | null | null | null | pynetdicom/tests/test_service_relevant_patient.py | jogerh/pynetdicom | 3ca25f67c32d7cc0d1fe6afe3f3ef333a37bfe72 | [
"MIT"
] | 1 | 2021-08-09T03:47:41.000Z | 2021-08-09T03:47:41.000Z | """Tests for the RelevantPatientInformationQueryServiceClass."""
from io import BytesIO
import os
import time
import pytest
from pydicom.dataset import Dataset
from pydicom.uid import ExplicitVRLittleEndian
from pynetdicom import AE, evt, debug_logger
from pynetdicom.dimse_primitives import C_FIND
from pynetdicom.service_class import (
RelevantPatientInformationQueryServiceClass
)
from pynetdicom.sop_class import (
GeneralRelevantPatientInformationQuery,
BreastImagingRelevantPatientInformationQuery,
CardiacRelevantPatientInformationQuery,
)
#debug_logger()
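# (Uncommenting the debug_logger() call above enables pynetdicom's verbose
# DIMSE/ACSE logging while the tests run.)

# Illustrative only - a sketch, not used by the tests below: a minimal
# conforming EVT_C_FIND handler for this service class yields
# (status, identifier) pairs. As the tests in this file show, the Relevant
# Patient Information model permits a single Pending match, after which the
# service sends Success itself. The PatientName value here is hypothetical.
def _example_handle_find(event):
    ds = Dataset()
    ds.PatientName = 'Doe^John'
    yield 0xFF00, ds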
class TestRelevantPatientServiceClass(object):
"""Test the RelevantPatientInformationQueryServiceClass"""
def setup(self):
"""Run prior to each test"""
self.query = Dataset()
self.query.QueryRetrieveLevel = "PATIENT"
self.query.PatientName = '*'
self.ae = None
def teardown(self):
"""Clear any active threads"""
if self.ae:
self.ae.shutdown()
def test_bad_req_identifier(self):
"""Test SCP handles a bad request identifier"""
def handle(event):
try:
for elem in event.identifier.iterall():
pass
except:
yield 0xC310, None
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(
GeneralRelevantPatientInformationQuery,
ExplicitVRLittleEndian
)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
req = C_FIND()
req.MessageID = 1
req.AffectedSOPClassUID = GeneralRelevantPatientInformationQuery
req.Priority = 2
req.Identifier = BytesIO(b'\x08\x00\x01\x00\x40\x40\x00\x00\x00\x00\x00\x08\x00\x49')
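# (These raw Identifier bytes are deliberately malformed, so decoding and
# iterating the dataset inside the handler raises, exercising the handler's
# 0xC310 failure path.)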
assoc._reactor_checkpoint.clear()
assoc.dimse.send_msg(req, 1)
with pytest.warns(UserWarning):
cx_id, rsp = assoc.dimse.get_msg(True)
assoc._reactor_checkpoint.set()
assert rsp.Status == 0xC310
assoc.release()
scp.shutdown()
def test_handler_status_dataset(self):
"""Test handler yielding a Dataset status"""
def handle(event):
status = Dataset()
status.Status = 0xFF00
yield status, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(
GeneralRelevantPatientInformationQuery,
ExplicitVRLittleEndian
)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
scp.shutdown()
def test_handler_status_dataset_multi(self):
"""Test handler yielding a Dataset status with other elements"""
def handle(event):
status = Dataset()
status.Status = 0xFF00
status.ErrorComment = "Test"
status.OffendingElement = 0x00010001
yield status, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
assert status.ErrorComment == 'Test'
assert status.OffendingElement == 0x00010001
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
scp.shutdown()
def test_handler_status_int(self):
"""Test handler yielding an int status"""
def handle(event):
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
scp.shutdown()
def test_handler_status_unknown(self):
"""Test SCP handles handler yielding a unknown status"""
def handle(event):
yield 0xFFF0, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFFF0
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_handler_status_invalid(self):
"""Test SCP handles handler yielding a invalid status"""
def handle(event):
yield 'Failed', self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xC002
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_handler_status_none(self):
"""Test SCP handles handler not yielding a status"""
def handle(event):
yield None, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xC002
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_handler_exception(self):
"""Test SCP handles handler yielding an exception"""
def handle(event):
raise ValueError
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xC311
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_handler_bad_identifier(self):
"""Test SCP handles a bad handler identifier"""
def handle(event):
yield 0xFF00, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xC312
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_pending_cancel(self):
"""Test handler yielding pending then cancel status"""
# Note: success should be second, cancel should get ignored
def handle(event):
yield 0xFF00, self.query
yield 0xFE00, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
assert identifier == self.query
status, identifier = next(result)
assert status.Status == 0x0000
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_pending_success(self):
"""Test handler yielding pending then success status"""
def handle(event):
yield 0xFF00, self.query
yield 0x0000, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
assert identifier == self.query
status, identifier = next(result)
assert status.Status == 0x0000
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_pending_failure(self):
"""Test handler yielding pending then failure status"""
def handle(event):
yield 0xFF00, self.query
yield 0xA700, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
assert identifier == self.query
status, identifier = next(result)
assert status.Status == 0x0000
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_cancel(self):
"""Test handler yielding cancel status"""
def handle(event):
yield 0xFE00, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFE00
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_failure(self):
"""Test handler yielding failure status"""
def handle(event):
yield 0xA700, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xA700
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_success(self):
"""Test handler yielding success status"""
def handle(event):
yield 0x0000, None
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0x0000
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_no_response(self):
"""Test handler yielding success status"""
def handle(event):
pass
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0x0000
assert identifier is None
pytest.raises(StopIteration, next, result)
assoc.release()
scp.shutdown()
def test_scp_handler_context(self):
"""Test handler event's context attribute"""
attrs = {}
def handle(event):
attrs['context'] = event.context
attrs['identifier'] = event.identifier
attrs['request'] = event.request
attrs['assoc'] = event.assoc
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
assert assoc.is_released
cx = attrs['context']
assert cx.context_id == 1
assert cx.abstract_syntax == GeneralRelevantPatientInformationQuery
assert cx.transfer_syntax == '1.2.840.10008.1.2'
scp.shutdown()
def test_scp_handler_assoc(self):
"""Test handler event's assoc attribute"""
attrs = {}
def handle(event):
attrs['context'] = event.context
attrs['identifier'] = event.identifier
attrs['request'] = event.request
attrs['assoc'] = event.assoc
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
scp_assoc = attrs['assoc']
assert scp_assoc == scp.active_associations[0]
scp_assoc.release()
assert scp_assoc.is_released
scp.shutdown()
def test_scp_handler_request(self):
"""Test handler event's request attribute"""
attrs = {}
def handle(event):
attrs['context'] = event.context
attrs['identifier'] = event.identifier
attrs['request'] = event.request
attrs['assoc'] = event.assoc
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
assert assoc.is_released
req = attrs['request']
assert isinstance(req, C_FIND)
scp.shutdown()
def test_scp_handler_identifier(self):
"""Test handler event's identifier property"""
attrs = {}
def handle(event):
attrs['context'] = event.context
attrs['identifier'] = event.identifier
attrs['request'] = event.request
attrs['assoc'] = event.assoc
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(self.query, GeneralRelevantPatientInformationQuery)
status, identifier = next(result)
assert status.Status == 0xFF00
status, identifier = next(result)
assert status.Status == 0x0000
assoc.release()
assert assoc.is_released
ds = attrs['identifier']
assert ds.PatientName == '*'
scp.shutdown()
def test_scp_handler_aborts_before(self):
"""Test handler aborts before any yields"""
def handle(event):
event.assoc.abort()
yield 0xFF00, self.query
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(
self.query, GeneralRelevantPatientInformationQuery
)
status, identifier = next(result)
assert status == Dataset()
assert identifier is None
time.sleep(0.1)
assert assoc.is_aborted
scp.shutdown()
def test_scp_handler_aborts_before_solo(self):
"""Test handler aborts before any yields"""
def handle(event):
event.assoc.abort()
handlers = [(evt.EVT_C_FIND, handle)]
self.ae = ae = AE()
ae.add_supported_context(GeneralRelevantPatientInformationQuery)
ae.add_requested_context(GeneralRelevantPatientInformationQuery)
scp = ae.start_server(('', 11112), block=False, evt_handlers=handlers)
ae.acse_timeout = 5
ae.dimse_timeout = 5
assoc = ae.associate('localhost', 11112)
assert assoc.is_established
result = assoc.send_c_find(
self.query, GeneralRelevantPatientInformationQuery
)
status, identifier = next(result)
assert status == Dataset()
assert identifier is None
time.sleep(0.1)
assert assoc.is_aborted
scp.shutdown()
| 35.299248 | 93 | 0.646034 | 2,419 | 23,474 | 6.115337 | 0.072757 | 0.017846 | 0.017846 | 0.054485 | 0.870141 | 0.848847 | 0.830596 | 0.81667 | 0.802271 | 0.78625 | 0 | 0.030163 | 0.262759 | 23,474 | 664 | 94 | 35.35241 | 0.824627 | 0.050013 | 0 | 0.823301 | 0 | 0.001942 | 0.019803 | 0.002526 | 0 | 0 | 0.014977 | 0 | 0.15534 | 1 | 0.08932 | false | 0.003884 | 0.019417 | 0 | 0.11068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7df0b98e6dd136b14be4b3feae782f341ad14680 | 97 | py | Python | YOLO/Stronger-yolo-pytorch/trainers/__init__.py | ForrestPi/ObjectDetection | 54e0821e73f67be5360c36f01229a123c34ab3b3 | [
"MIT"
] | 12 | 2020-03-25T01:24:22.000Z | 2021-09-18T06:40:16.000Z | YOLO/Stronger-yolo-pytorch/trainers/__init__.py | ForrestPi/ObjectDetection | 54e0821e73f67be5360c36f01229a123c34ab3b3 | [
"MIT"
] | 1 | 2020-04-22T07:52:36.000Z | 2020-04-22T07:52:36.000Z | YOLO/Stronger-yolo-pytorch/trainers/__init__.py | ForrestPi/ObjectDetection | 54e0821e73f67be5360c36f01229a123c34ab3b3 | [
"MIT"
] | 4 | 2020-03-25T01:24:26.000Z | 2020-09-20T11:29:09.000Z | from .trainer_voc import Trainer as Trainer_VOC
from .trainer_coco import Trainer as Trainer_COCO | 48.5 | 49 | 0.865979 | 16 | 97 | 5 | 0.375 | 0.275 | 0.375 | 0.55 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113402 | 97 | 2 | 49 | 48.5 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
81d6cbb26ea9e2cea85206aaee5084e7862d9ccd | 16,995 | py | Python | FallingStars.py | burleyinnersbm07/python_fallingStars | 827db2855ece429f2523b837281104b2e8e40db3 | [
"MIT"
] | null | null | null | FallingStars.py | burleyinnersbm07/python_fallingStars | 827db2855ece429f2523b837281104b2e8e40db3 | [
"MIT"
] | null | null | null | FallingStars.py | burleyinnersbm07/python_fallingStars | 827db2855ece429f2523b837281104b2e8e40db3 | [
"MIT"
] | null | null | null | # A simple program that resembles the falling of stars or snow on a screen
# Coded in Python 2.7.10 with PyGame
# by Brett Burley-Inners :: 11/7/2015
import pygame, time, random, sys
pygame.init()
# Default dimensions of the game window (px)
display_width = 1280
display_height = 720
# Create a canvas to display the game on
gameScreen = pygame.display.set_mode((display_width, display_height))
# Title of the game Window
pygame.display.set_caption('Falling Stars')
# Class that creates a star object that falls toward the bottom of the screen
class Star:
def __init__(self, starSize, xCoordinate, yCoordinate, starColor, fallSpeed, fallDirection):
self.starSize = starSize
self.xCoordinate = xCoordinate
self.yCoordinate = yCoordinate
self.starColor = starColor
self.fallSpeed = fallSpeed
self.fallDirection = fallDirection
def fall(self):
self.yCoordinate += self.fallSpeed
self.xCoordinate += self.fallDirection
pygame.draw.rect(gameScreen, self.starColor, [self.xCoordinate, self.yCoordinate, self.starSize, self.starSize])
if self.yCoordinate > display_height:
fallingStars.remove(self)
# Star variant that rises from the bottom of the screen
class upStar:
def __init__(self, starSize, xCoordinate, yCoordinate, starColor, fallSpeed, fallDirection):
self.starSize = starSize
self.xCoordinate = xCoordinate
self.yCoordinate = yCoordinate
self.starColor = starColor
self.fallSpeed = fallSpeed
self.fallDirection = fallDirection
def fall(self):
self.yCoordinate -= self.fallSpeed
self.xCoordinate += self.fallDirection
pygame.draw.rect(gameScreen, self.starColor, [self.xCoordinate, self.yCoordinate, self.starSize, self.starSize])
if self.yCoordinate < 0:
fallingStars.remove(self)
# Star variant that drifts leftward off the screen
class lStar:
def __init__(self, starSize, xCoordinate, yCoordinate, starColor, fallSpeed, fallDirection):
self.starSize = starSize
self.xCoordinate = xCoordinate
self.yCoordinate = yCoordinate
self.starColor = starColor
self.fallSpeed = fallSpeed
self.fallDirection = fallDirection
def fall(self):
self.yCoordinate += self.fallDirection
self.xCoordinate -= self.fallSpeed
pygame.draw.rect(gameScreen, self.starColor, [self.xCoordinate, self.yCoordinate, self.starSize, self.starSize])
if self.xCoordinate < 0:
fallingStars.remove(self)
# Star variant that drifts rightward off the screen
class rStar:
def __init__(self, starSize, xCoordinate, yCoordinate, starColor, fallSpeed, fallDirection):
self.starSize = starSize
self.xCoordinate = xCoordinate
self.yCoordinate = yCoordinate
self.starColor = starColor
self.fallSpeed = fallSpeed
self.fallDirection = fallDirection
def fall(self):
self.yCoordinate += self.fallDirection
self.xCoordinate += self.fallSpeed
pygame.draw.rect(gameScreen, self.starColor, [self.xCoordinate, self.yCoordinate, self.starSize, self.starSize])
if self.xCoordinate > display_width:
fallingStars.remove(self)
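# Editorial sketch (not used by the loop below): the four classes above differ
# only in the axis the speed is applied to and the edge that removes the star,
# so a single parameterized class could replace them. DirectionalStar, dx and
# dy are names introduced here for illustration.
class DirectionalStar:
    def __init__(self, starSize, xCoordinate, yCoordinate, starColor, fallSpeed, fallDirection, dx, dy):
        # (dx, dy) picks the travel axis: (0, 1) falls, (0, -1) rises,
        # (-1, 0) drifts left, (1, 0) drifts right
        self.starSize = starSize
        self.xCoordinate = xCoordinate
        self.yCoordinate = yCoordinate
        self.starColor = starColor
        self.fallSpeed = fallSpeed
        self.fallDirection = fallDirection
        self.dx = dx
        self.dy = dy
    def fall(self):
        # speed moves along (dx, dy); the sideways jitter moves on the other axis
        self.xCoordinate += self.fallSpeed * self.dx + self.fallDirection * abs(self.dy)
        self.yCoordinate += self.fallSpeed * self.dy + self.fallDirection * abs(self.dx)
        pygame.draw.rect(gameScreen, self.starColor, [self.xCoordinate, self.yCoordinate, self.starSize, self.starSize])
        # drop the star once it crosses the edge it travels toward (matches the classes above)
        if ((self.dy > 0 and self.yCoordinate > display_height) or
                (self.dy < 0 and self.yCoordinate < 0) or
                (self.dx > 0 and self.xCoordinate > display_width) or
                (self.dx < 0 and self.xCoordinate < 0)):
            fallingStars.remove(self)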
# Colors
white = (255, 255, 255)
darkGray = (50, 50, 50)
darkerGray = (25, 25, 25)
darkestGray = (10, 10, 10)
lightGray = (150, 150, 150)
rLightGray = (200, 200, 200)
rrLightGray = (220, 220, 220)
black = (0, 0, 0)
red = (245, 0, 0)
darkRed = (150, 0, 0)
green = (0, 235, 0)
darkGreen = (0, 150, 0)
lightBlue = (55, 210, 225)
blue = (0, 0, 215)
darkBlue = (0, 0, 115)
pink = (225, 55, 135)
# List of colors
colorList = []
colorList.append(darkerGray)
colorList.append(darkestGray)
colorList.append(lightGray)
colorList.append(rLightGray)
colorList.append(rrLightGray)
colorList.append(lightBlue)
# Clock and FPS stuff
clock = pygame.time.Clock()
# List to maintain star objects
fallingStars = []
# variables for the while loop... 1's and 0's work too
starFall = True
makeStars = True
# Main loop for the falling star effect
while starFall:
# refresh rate of gameScreen (times per second)
clock.tick(60)
# make the 'close'/'x' button work
for event in pygame.event.get():
if event.type == pygame.QUIT:
starFall = False
sys.exit()
# background color, drawn before the stars each time
gameScreen.fill(darkGray)
# keep making the stars...
if makeStars:
# stars going down
fallingStars.append(Star(random.randrange(1, 20), random.randrange(1, display_width), -5, colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-3, 3)))
fallingStars.append(Star(random.randrange(1, 20), random.randrange(1, display_width), -5, colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
        # (duplicate commented-out spawn lines removed; append more Star objects here for a denser field)
# stars going up
fallingStars.append(upStar(random.randrange(1, 20), random.randrange(1, display_width), display_height + 5, colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-3, 3)))
fallingStars.append(upStar(random.randrange(1, 20), random.randrange(1, display_width), display_height + 5, colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
        # (duplicate commented-out upStar spawn lines removed)
#stars going left
fallingStars.append(lStar(random.randrange(1, 20), display_width + 5, random.randrange(1, display_height), colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
fallingStars.append(lStar(random.randrange(1, 20), display_width + 5, random.randrange(1, display_height), colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
        # (duplicate commented-out spawn lines removed; several were mis-pasted upStar calls)
#stars going right
fallingStars.append(rStar(random.randrange(1, 20), -5, random.randrange(1, display_height), colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
fallingStars.append(rStar(random.randrange(1, 20), -5, random.randrange(1, display_height), colorList[random.randrange(0, 6)], random.randrange(1, 10), random.randrange(-2, 2)))
        # (duplicate and stale commented-out spawn lines removed)
# for every star object in the list, run the fall function (make 'em "move")
for i in fallingStars:
i.fall()
#print(len(fallingStars))
# if the list is too big, remove the first item
# for the computer's sake
if len(fallingStars) > 10000:
del fallingStars[0]
# draw the screen
pygame.display.update()
# That's all, folks!
| 61.133094 | 202 | 0.702677 | 2,244 | 16,995 | 5.267825 | 0.080214 | 0.380678 | 0.247695 | 0.118687 | 0.874122 | 0.874122 | 0.874122 | 0.871415 | 0.871415 | 0.871415 | 0 | 0.058868 | 0.145396 | 16,995 | 277 | 203 | 61.353791 | 0.755026 | 0.639423 | 0 | 0.436364 | 0 | 0 | 0.002153 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072727 | false | 0 | 0.009091 | 0 | 0.118182 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
81e0f651aaa77e06f40bf063f4d14c9eb456065f | 9,177 | py | Python | tests/web_api_tests.py | vnaydionov/card-proxy | 1bd8464c91ba5bf18571c691194501f0f5874dfc | [
"MIT"
] | 3 | 2016-12-19T00:09:33.000Z | 2021-12-07T08:24:50.000Z | tests/web_api_tests.py | vnaydionov/card-proxy | 1bd8464c91ba5bf18571c691194501f0f5874dfc | [
"MIT"
] | 1 | 2016-07-17T11:09:21.000Z | 2016-07-18T08:51:16.000Z | tests/web_api_tests.py | vnaydionov/card-proxy | 1bd8464c91ba5bf18571c691194501f0f5874dfc | [
"MIT"
] | 4 | 2015-05-19T07:54:57.000Z | 2021-03-14T06:40:36.000Z | # -*- coding: utf-8 -*-
import os
import sys
import unittest
from proxy_web_api import get_resp_field, call_proxy
from utils import generate_random_card_data, generate_random_number
import logger
log = logger.get_logger('/tmp/web_api_tests-%s.log' % os.environ['USER'])
SERVER_URI = 'http://localhost:17117/'
def log_func_context(func):
def inner(*args, **kwargs):
log.debug('---- Start [%s] ----', func.func_name)
result = func(*args, **kwargs)
        log.debug('---- End [%s] ----', func.func_name)
return result
return inner
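# Editorial note: a functools.wraps variant would preserve each test's name and
# docstring for unittest reporting, e.g.:
#   import functools
#   def log_func_context(func):
#       @functools.wraps(func)
#       def inner(*args, **kwargs):
#           ...
#       return inner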
class TestBaseWebApi(unittest.TestCase):
'''
tokenize_card, detokenize_card, remove_card, etc.
'''
# TODO add error test
def __init__(self, *args, **kwargs):
super(TestBaseWebApi, self).__init__(*args, **kwargs)
self.server_uri = SERVER_URI
@log_func_context
def test_debug_get(self):
status, resp, f_time = call_proxy(self.server_uri,
'debug_method', 'GET')
self.assertEqual(status, 'success')
@log_func_context
def test_debug_post(self):
status, resp, f_time = call_proxy(self.server_uri,
'debug_method', 'POST')
self.assertEqual(status, 'success')
@log_func_context
def test_check_kek_get(self):
status, resp, f_time = call_proxy(self.server_uri,
'check_kek', 'GET')
self.assertEqual('true', get_resp_field(resp, 'check_kek'))
self.assertEqual(status, 'success')
@log_func_context
def test_dek_status_get(self):
status, resp, f_time = call_proxy(self.server_uri,
'dek_status', 'GET')
self.assertEqual(status, 'success')
@log_func_context
def test_get_token_with_cvn_get(self):
card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET', card_data)
self.assertEqual(status, 'success')
self.assertTrue(get_resp_field(resp, 'card_token'))
self.assertTrue(get_resp_field(resp, 'cvn_token'))
self.assertTrue(get_resp_field(resp, 'pan_masked'))
self.assertFalse(get_resp_field(resp, 'pan'))
self.assertFalse(get_resp_field(resp, 'cvn'))
@log_func_context
def test_get_token_with_cvn_post(self):
card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST', card_data)
self.assertEqual(status, 'success')
self.assertTrue(get_resp_field(resp, 'card_token'))
self.assertTrue(get_resp_field(resp, 'cvn_token'))
self.assertTrue(get_resp_field(resp, 'pan_masked'))
self.assertFalse(get_resp_field(resp, 'pan'))
self.assertFalse(get_resp_field(resp, 'cvn'))
@log_func_context
def test_get_token_without_cvn_get(self):
card_data = generate_random_card_data(mode='without_cvn')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET', card_data)
self.assertEqual(status, 'success')
self.assertTrue(get_resp_field(resp, 'card_token'))
self.assertTrue(get_resp_field(resp, 'pan_masked'))
self.assertFalse(get_resp_field(resp, 'pan'))
@log_func_context
def test_get_token_without_cvn_post(self):
card_data = generate_random_card_data(mode='without_cvn')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST', card_data)
self.assertEqual(status, 'success')
self.assertTrue(get_resp_field(resp, 'card_token'))
self.assertTrue(get_resp_field(resp, 'pan_masked'))
self.assertFalse(get_resp_field(resp, 'pan'))
@log_func_context
def test_get_token_multiple_duplicate_get(self):
card_data = generate_random_card_data(mode='full')
orig_pan = card_data.pan
orig_cvn = card_data.cvn
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET', card_data)
orig_card_token = get_resp_field(resp, 'card_token')
orig_cvn_token = get_resp_field(resp, 'cvn_token')
for _ in range(5):
card_data.pan = orig_pan
card_data.cvn = orig_cvn
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET', card_data)
dup_card_token = get_resp_field(resp, 'card_token')
dup_cvn_token = get_resp_field(resp, 'cvn_token')
self.assertEqual(orig_card_token, dup_card_token)
self.assertNotEqual(orig_cvn_token, dup_cvn_token)
@log_func_context
def test_get_token_multiple_duplicate_post(self):
card_data = generate_random_card_data(mode='full')
orig_pan = card_data.pan
orig_cvn = card_data.cvn
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST', card_data)
orig_card_token = get_resp_field(resp, 'card_token')
orig_cvn_token = get_resp_field(resp, 'cvn_token')
for _ in range(5):
card_data.pan = orig_pan
card_data.cvn = orig_cvn
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST', card_data)
dup_card_token = get_resp_field(resp, 'card_token')
dup_cvn_token = get_resp_field(resp, 'cvn_token')
self.assertEqual(orig_card_token, dup_card_token)
self.assertNotEqual(orig_cvn_token, dup_cvn_token)
@log_func_context
def test_get_card_get(self):
source_card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET', source_card_data)
card_token = get_resp_field(resp, 'card_token')
cvn_token = get_resp_field(resp, 'cvn_token')
status, resp, f_time = call_proxy(
self.server_uri, 'detokenize_card', 'GET',
card_token, cvn_token)
self.assertEqual(status, 'success')
self.assertEqual(source_card_data.pan, get_resp_field(resp, 'pan'))
self.assertEqual(source_card_data.cvn, get_resp_field(resp, 'cvn'))
# self.assertEqual(int(source_card_data.expire_year),
# int(get_resp_field(resp, 'expire_year')))
# self.assertEqual(int(source_card_data.expire_month),
# int(get_resp_field(resp, 'expire_month')))
@log_func_context
def test_get_card_post(self):
source_card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST',
source_card_data)
card_token = get_resp_field(resp, 'card_token')
cvn_token = get_resp_field(resp, 'cvn_token')
status, resp, f_time = call_proxy(
self.server_uri, 'detokenize_card', 'POST',
card_token, cvn_token)
self.assertEqual(status, 'success')
self.assertEqual(source_card_data.pan, get_resp_field(resp, 'pan'))
self.assertEqual(source_card_data.cvn, get_resp_field(resp, 'cvn'))
# self.assertEqual(int(source_card_data.expire_year),
# int(get_resp_field(resp, 'expire_year')))
# self.assertEqual(int(source_card_data.expire_month),
# int(get_resp_field(resp, 'expire_month')))
@log_func_context
def test_remove_card_get(self):
source_card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'GET',
source_card_data)
card_token = get_resp_field(resp, 'card_token')
cvn_token = get_resp_field(resp, 'cvn_token')
status, resp, f_time = call_proxy(
self.server_uri, 'remove_card', 'GET',
card_token, cvn_token)
self.assertEqual(status, 'success')
@log_func_context
def test_remove_card_post(self):
source_card_data = generate_random_card_data(mode='full')
status, resp, f_time = call_proxy(self.server_uri,
'tokenize_card', 'POST',
source_card_data)
card_token = get_resp_field(resp, 'card_token')
cvn_token = get_resp_field(resp, 'cvn_token')
status, resp, f_time = call_proxy(
self.server_uri, 'remove_card', 'POST',
card_token, cvn_token)
self.assertEqual(status, 'success')
if __name__ == '__main__':
sys.argv.append('-v')
unittest.main()
# vim:ts=4:sts=4:sw=4:tw=85:et:
| 43.7 | 83 | 0.612619 | 1,137 | 9,177 | 4.540018 | 0.093228 | 0.07594 | 0.097637 | 0.127083 | 0.881441 | 0.881441 | 0.875436 | 0.87253 | 0.87253 | 0.822162 | 0 | 0.001971 | 0.281138 | 9,177 | 209 | 84 | 43.909091 | 0.780506 | 0.062112 | 0 | 0.74269 | 1 | 0 | 0.099452 | 0.002915 | 0 | 0 | 0 | 0.004785 | 0.216374 | 1 | 0.099415 | false | 0 | 0.040936 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f20946bdb39ef0691b4091a089e0a77953aa2543 | 126 | py | Python | frappymongouser/__init__.py | ilfrich/frappy-py-mongo-user-store | 7b6c99bdc8dc812a207fa648e3090a2011d430e1 | [
"Apache-2.0"
] | null | null | null | frappymongouser/__init__.py | ilfrich/frappy-py-mongo-user-store | 7b6c99bdc8dc812a207fa648e3090a2011d430e1 | [
"Apache-2.0"
] | null | null | null | frappymongouser/__init__.py | ilfrich/frappy-py-mongo-user-store | 7b6c99bdc8dc812a207fa648e3090a2011d430e1 | [
"Apache-2.0"
] | null | null | null | from frappymongouser.user_store import User, UserStore
from frappymongouser.user_token_store import UserToken, UserTokenStore
| 42 | 70 | 0.888889 | 15 | 126 | 7.266667 | 0.6 | 0.348624 | 0.422018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 126 | 2 | 71 | 63 | 0.939655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1ee6fd334903434941fb9d21f0e156985fa77e7d | 24,375 | py | Python | tests/beem/test_cli.py | emre/beem | d23629bc92960ce0a7eabbfe66c545d89ea1138a | [
"MIT"
] | null | null | null | tests/beem/test_cli.py | emre/beem | d23629bc92960ce0a7eabbfe66c545d89ea1138a | [
"MIT"
] | null | null | null | tests/beem/test_cli.py | emre/beem | d23629bc92960ce0a7eabbfe66c545d89ea1138a | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from builtins import str
from builtins import super
import unittest
import mock
import click
from click.testing import CliRunner
from pprint import pprint
from beem import Steem, exceptions
from beem.account import Account
from beem.amount import Amount
from beemgraphenebase.account import PrivateKey
from beem.cli import cli, balance
from beem.instance import set_shared_steem_instance, shared_steem_instance
from beembase.operationids import getOperationNameForId
from beem.nodelist import NodeList
wif = "5Jt2wTfhUt5GkZHV1HYVfkEaJ6XnY8D2iA4qjtK9nnGXAhThM3w"
posting_key = "5Jh1Gtu2j4Yi16TfhoDmg8Qj3ULcgRi7A49JXdfUUTVPkaFaRKz"
memo_key = "5KPbCuocX26aMxN9CDPdUex4wCbfw9NoT5P7UhcqgDwxXa47bit"
pub_key = "STX52xMqKegLk4tdpNcUXU9Rw5DtdM9fxf3f12Gp55v1UjLX3ELZf"
class Testcases(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.nodelist = NodeList()
cls.nodelist.update_nodes()
cls.nodelist.update_nodes(steem_instance=Steem(node=cls.nodelist.get_nodes(normal=True, appbase=True), num_retries=10))
# stm = shared_steem_instance()
# stm.config.refreshBackup()
runner = CliRunner()
result = runner.invoke(cli, ['-o', 'set', 'default_vote_weight', '100'])
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['-o', 'set', 'default_account', 'beem'])
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['-o', 'set', 'nodes', str(cls.nodelist.get_testnet())])
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['createwallet', '--wipe'], input="test\ntest\n")
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['addkey'], input="test\n" + wif + "\n")
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['addkey'], input="test\n" + posting_key + "\n")
if result.exit_code != 0:
raise AssertionError(str(result))
result = runner.invoke(cli, ['addkey'], input="test\n" + memo_key + "\n")
if result.exit_code != 0:
raise AssertionError(str(result))
@classmethod
def tearDownClass(cls):
stm = shared_steem_instance()
stm.config.recover_with_latest_backup()
def test_balance(self):
runner = CliRunner()
runner.invoke(cli, ['set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['balance', 'beem', 'beem1'])
self.assertEqual(result.exit_code, 0)
def test_interest(self):
runner = CliRunner()
runner.invoke(cli, ['set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['interest', 'beem', 'beem1'])
self.assertEqual(result.exit_code, 0)
def test_config(self):
runner = CliRunner()
result = runner.invoke(cli, ['config'])
self.assertEqual(result.exit_code, 0)
def test_addkey(self):
runner = CliRunner()
result = runner.invoke(cli, ['createwallet', '--wipe'], input="test\ntest\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['addkey'], input="test\n" + wif + "\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['addkey'], input="test\n" + posting_key + "\n")
self.assertEqual(result.exit_code, 0)
def test_parsewif(self):
runner = CliRunner()
result = runner.invoke(cli, ['parsewif'], input=wif + "\nexit\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['parsewif', '--unsafe-import-key', wif])
self.assertEqual(result.exit_code, 0)
def test_delkey(self):
runner = CliRunner()
result = runner.invoke(cli, ['delkey', '--confirm', pub_key], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['addkey'], input="test\n" + wif + "\n")
self.assertEqual(result.exit_code, 0)
def test_listkeys(self):
runner = CliRunner()
result = runner.invoke(cli, ['listkeys'])
self.assertEqual(result.exit_code, 0)
def test_listaccounts(self):
runner = CliRunner()
result = runner.invoke(cli, ['listaccounts'])
self.assertEqual(result.exit_code, 0)
def test_info(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['info'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', 'beem'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', '100'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', '--', '-1'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', pub_key])
self.assertEqual(result.exit_code, 0)
def test_info2(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['info', '--', '-1:1'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', 'gtg'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['info', "@gtg/witness-gtg-log"])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_changepassword(self):
runner = CliRunner()
result = runner.invoke(cli, ['changewalletpassphrase'], input="test\ntest\ntest\n")
self.assertEqual(result.exit_code, 0)
def test_walletinfo(self):
runner = CliRunner()
result = runner.invoke(cli, ['walletinfo'])
self.assertEqual(result.exit_code, 0)
def test_set(self):
runner = CliRunner()
result = runner.invoke(cli, ['-o', 'set', 'set_default_vote_weight', '100'])
self.assertEqual(result.exit_code, 0)
def test_upvote(self):
runner = CliRunner()
result = runner.invoke(cli, ['-o', 'upvote', '@test/abcd'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-o', 'upvote', '@test/abcd', '100'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-o', 'upvote', '--weight', '100', '@test/abcd'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_downvote(self):
runner = CliRunner()
result = runner.invoke(cli, ['-o', 'downvote', '@test/abcd'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-o', 'downvote', '@test/abcd', '100'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-o', 'downvote', '--weight', '100', '@test/abcd'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_transfer(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['transfer', 'beem1', '1', 'SBD', 'test'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_powerdownroute(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['powerdownroute', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_convert(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['convert', '1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_powerup(self):
runner = CliRunner()
result = runner.invoke(cli, ['powerup', '1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_powerdown(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'powerdown', '1e3'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', 'powerdown', '0'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_updatememokey(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'updatememokey'], input="test\ntest\ntest\n")
self.assertEqual(result.exit_code, 0)
def test_permissions(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['permissions', 'beem'])
self.assertEqual(result.exit_code, 0)
def test_follower(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['follower', 'beem1'])
self.assertEqual(result.exit_code, 0)
def test_following(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['following', 'beem'])
self.assertEqual(result.exit_code, 0)
def test_muter(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['muter', 'beem1'])
self.assertEqual(result.exit_code, 0)
def test_muting(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['muting', 'beem'])
self.assertEqual(result.exit_code, 0)
def test_allow_disallow(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'allow', '--account', 'beem', '--permission', 'posting', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', 'disallow', '--account', 'beem', '--permission', 'posting', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_witnesses(self):
runner = CliRunner()
result = runner.invoke(cli, ['witnesses'])
self.assertEqual(result.exit_code, 0)
def test_votes(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['votes', '--direction', 'out', 'test'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['votes', '--direction', 'in', 'test'])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_approvewitness(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-o', 'approvewitness', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_disapprovewitness(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-o', 'disapprovewitness', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_newaccount(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'newaccount', 'beem3'], input="test\ntest\ntest\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', 'newaccount', '--fee', '6 STEEM', 'beem3'], input="test\ntest\ntest\n")
self.assertEqual(result.exit_code, 0)
def test_importaccount(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['importaccount', '--roles', '["owner", "active", "posting", "memo"]', 'beem2'], input="test\numybjvCafrt8LdoCjEimQiQ4\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['delkey', '--confirm', 'STX7mLs2hns87f7kbf3o2HBqNoEaXiTeeU89eVF6iUCrMQJFzBsPo'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['delkey', '--confirm', 'STX7rUmnpnCp9oZqMQeRKDB7GvXTM9KFvhzbA3AKcabgTBfQZgHZp'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['delkey', '--confirm', 'STX6qGWHsCpmHbphnQbS2yfhvhJXDUVDwnsbnrMZkTqfnkNEZRoLP'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['delkey', '--confirm', 'STX8Wvi74GYzBKgnUmiLvptzvxmPtXfjGPJL8QY3rebecXaxGGQyV'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_orderbook(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['orderbook'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['orderbook', '--show-date'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['orderbook', '--chart'])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_buy(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['-d', '-x', 'buy', '1', 'STEEM', '2.2'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'buy', '1', 'STEEM'], input="y\ntest\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'buy', '1', 'SBD', '2.2'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'buy', '1', 'SBD'], input="y\ntest\n")
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_sell(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['-d', '-x', 'sell', '1', 'STEEM', '2.2'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'sell', '1', 'SBD', '2.2'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'sell', '1', 'STEEM'], input="y\ntest\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', '-x', 'sell', '1', 'SBD'], input="y\ntest\n")
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_cancel(self):
runner = CliRunner()
result = runner.invoke(cli, ['-d', 'cancel', '5'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_openorders(self):
runner = CliRunner()
result = runner.invoke(cli, ['openorders'])
self.assertEqual(result.exit_code, 0)
def test_resteem(self):
runner = CliRunner()
result = runner.invoke(cli, ['-o', 'resteem', '@test/abcde'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_follow_unfollow(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'follow', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['-d', 'unfollow', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_mute_unmute(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['mute', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['unfollow', 'beem1'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_witnesscreate(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
result = runner.invoke(cli, ['-d', 'witnesscreate', 'beem', pub_key], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_witnessupdate(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['-o', 'nextnode'])
runner.invoke(cli, ['-o', 'witnessupdate', 'gtg', '--maximum_block_size', 65000, '--account_creation_fee', 0.1, '--sbd_interest_rate', 0, '--url', 'https://google.de', '--signing_key', wif])
self.assertEqual(result.exit_code, 0)
def test_profile(self):
runner = CliRunner()
result = runner.invoke(cli, ['setprofile', 'url', 'https://google.de'], input="test\n")
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['delprofile', 'url'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_claimreward(self):
runner = CliRunner()
result = runner.invoke(cli, ['-d', 'claimreward'], input="test\n")
result = runner.invoke(cli, ['-d', 'claimreward', '--claim_all_steem'], input="test\n")
result = runner.invoke(cli, ['-d', 'claimreward', '--claim_all_sbd'], input="test\n")
result = runner.invoke(cli, ['-d', 'claimreward', '--claim_all_vests'], input="test\n")
self.assertEqual(result.exit_code, 0)
def test_power(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['power'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_nextnode(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['-o', 'nextnode'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_pingnode(self):
runner = CliRunner()
result = runner.invoke(cli, ['pingnode'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pingnode', '--raw'])
self.assertEqual(result.exit_code, 0)
def test_updatenodes(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['updatenodes', '--test'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_currentnode(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['currentnode'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['currentnode', '--url'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['currentnode', '--version'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_ticker(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['ticker'])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_pricehistory(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['pricehistory'])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_pending(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['pending', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pending', '--post', '--comment', '--curation', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pending', '--post', '--comment', '--curation', '--permlink', '--days', '1', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pending', '--post', '--comment', '--curation', '--author', '--days', '1', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pending', '--post', '--comment', '--curation', '--author', '--title', '--days', '1', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['pending', '--post', '--comment', '--curation', '--author', '--permlink', '--length', '30', '--days', '1', 'holger80'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_rewards(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['rewards', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['rewards', '--post', '--comment', '--curation', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['rewards', '--post', '--comment', '--curation', '--permlink', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['rewards', '--post', '--comment', '--curation', '--author', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['rewards', '--post', '--comment', '--curation', '--author', '--title', 'holger80'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['rewards', '--post', '--comment', '--curation', '--author', '--permlink', '--length', '30', 'holger80'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
def test_curation(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['curation', "@gtg/witness-gtg-log"])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_verify(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes(normal=False, appbase=True)])
result = runner.invoke(cli, ['verify', '--trx', '3', '25304468'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['verify', '--trx', '5', '25304468'])
self.assertEqual(result.exit_code, 0)
result = runner.invoke(cli, ['verify', '--trx', '0'])
self.assertEqual(result.exit_code, 0)
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
def test_tradehistory(self):
runner = CliRunner()
runner.invoke(cli, ['-o', 'set', 'nodes', self.nodelist.get_nodes()])
result = runner.invoke(cli, ['tradehistory'])
runner.invoke(cli, ['-o', 'set', 'nodes', str(self.nodelist.get_testnet())])
self.assertEqual(result.exit_code, 0)
| 48.555777 | 198 | 0.611774 | 2,871 | 24,375 | 5.092302 | 0.075235 | 0.139535 | 0.174419 | 0.165185 | 0.827633 | 0.819904 | 0.807114 | 0.768057 | 0.724555 | 0.706566 | 0 | 0.015816 | 0.201067 | 24,375 | 501 | 199 | 48.652695 | 0.734929 | 0.002297 | 0 | 0.552511 | 0 | 0 | 0.154672 | 0.021262 | 0 | 0 | 0 | 0 | 0.257991 | 1 | 0.134703 | false | 0.004566 | 0.050228 | 0 | 0.187215 | 0.004566 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1eeea613b9a2e81903de6d59210f6cac5912bfe0 | 48,616 | py | Python | jutil.py | jskDr/jamespy | 729c496732d8ec2d6ba25d6b97ef2aa02065c18c | [
"MIT"
] | null | null | null | jutil.py | jskDr/jamespy | 729c496732d8ec2d6ba25d6b97ef2aa02065c18c | [
"MIT"
] | null | null | null | jutil.py | jskDr/jamespy | 729c496732d8ec2d6ba25d6b97ef2aa02065c18c | [
"MIT"
] | null | null | null | """
some utility which I made.
Editor - Sungjin Kim, 2015-4-17
"""
#Common library
from sklearn import linear_model, svm, cross_validation, grid_search, metrics
import matplotlib.pyplot as plt
import numpy as np
import time
#import subprocess
import pandas as pd
import itertools
import random
#My personal library
import jchem
import jpyx
from maml.gp import gaussian_process as gp
def _sleast_r0( a = '1000', ln = 10):
"It returns 0 filled string with the length of ln."
if ln > len(a):
return '0'*(ln - len(a)) + a
else:
return a[-ln:]
def sleast( a = '1000', ln = 10):
"It returns 0 filled string with the length of ln."
if ln > len(a):
return a + '0'*(ln - len(a))
else:
return a
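# e.g. sleast( '1000', 10) -> '1000000000' (right-padded), while
# _sleast_r0( '1000', 10) -> '0000001000' (left-padded)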
def int_bp( b_ch):
"map '0' --> -1, '1' --> -1"
b_int = int( b_ch)
return 1 - 2 * b_int
def prange( pat, st, ed, ic=1):
ar = []
for ii in range( st, ed, ic):
ar.extend( map( lambda jj: ii + jj, pat))
return filter( lambda x: x < ed, ar)
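# e.g. prange( [0, 1], 0, 10, 5) -> [0, 1, 5, 6]: the pattern [0, 1] is
# repeated at strides of 5 and clipped to values below ed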
class Timer:
def __enter__(self):
self.start = time.clock()
return self
def __exit__(self, *args):
self.end = time.clock()
self.interval = self.end - self.start
print( 'Elapsed time: {}sec'.format(self.interval))
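# A minimal usage sketch for the Timer context manager above; _timer_demo and
# its workload are editorial additions, not part of the original module.
def _timer_demo():
    with Timer() as t:
        sum( xrange( 1000000))  # placeholder workload; elapsed time prints on exit
    return t.interval  # the elapsed seconds are also kept on the instance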
def mlr( RM, yE, disp = True, graph = True):
clf = linear_model.LinearRegression()
clf.fit( RM, yE)
mlr_show( clf, RM, yE, disp = disp, graph = graph)
def mlr3( RM, yE, disp = True, graph = True):
clf = linear_model.LinearRegression()
clf.fit( RM, yE)
mlr_show3( clf, RM, yE, disp = disp, graph = graph)
def mlr3_coef( RM, yE, disp = True, graph = True):
clf = linear_model.LinearRegression()
clf.fit( RM, yE)
mlr_show3( clf, RM, yE, disp = disp, graph = graph)
return clf.coef_, clf.intercept_
def mlr4_coef( RM, yE, disp = True, graph = True):
clf = linear_model.LinearRegression()
clf.fit( RM, yE)
mlr_show4( clf, RM, yE, disp = disp, graph = graph)
return clf.coef_, clf.intercept_
def mlr_ridge( RM, yE, alpha = 0.5, disp = True, graph = True):
clf = linear_model.Ridge( alpha = alpha)
clf.fit( RM, yE)
mlr_show( clf, RM, yE, disp = disp, graph = graph)
def mlr3_coef_ridge( RM, yE, alpha = 0.5, disp = True, graph = True):
"""
Return regression coefficients and intercept
"""
clf = linear_model.Ridge( alpha = alpha)
clf.fit( RM, yE)
mlr_show( clf, RM, yE, disp = disp, graph = graph)
return clf.coef_, clf.intercept_
def ann_pre( RM, yE, disp = True, graph = True):
"""
    For ANN, separate pre- and post-processing functions are used,
    whereas for MLR a single function (mlr) completes all processing.
    The ANN training itself is performed by a shell command in between.
"""
jchem.gen_input_files_valid( RM, yE, RM)
def ann_post( yv, disp = True, graph = True):
"""
    After ann_pre and the shell command, ann_post can be used.
"""
df_ann = pd.read_csv( 'ann_out.csv')
yv_ann = np.mat( df_ann['out'].tolist()).T
r_sqr, RMSE = ann_show( yv, yv_ann, disp = disp, graph = graph)
return r_sqr, RMSE
def ann_post_range( range_tr, range_val, yv, disp = True, graph = True):
"""
    After ann_pre and the shell command, ann_post can be used.
"""
df_ann = pd.read_csv( 'ann_out.csv')
yv_ann = np.mat( df_ann['out'].tolist()).T
print "Traning:"
ann_show( yv[range_tr, 0], yv_ann[range_tr, 0], disp = disp, graph = graph)
print "Validation:"
r_sqr, RMSE = ann_show( yv[range_val, 0] , yv_ann[range_val, 0], disp = disp, graph = graph)
return r_sqr, RMSE
def _ann_show_r0( yEv, yEv_calc, disp = True, graph = True):
r_sqr, RMSE = jchem.estimate_accuracy( yEv, yEv_calc, disp = disp)
if graph:
plt.scatter( yEv.tolist(), yEv_calc.tolist())
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Target')
plt.ylabel('Prediction')
plt.show()
return r_sqr, RMSE
def _regress_show_r0( yEv, yEv_calc, disp = True, graph = True, plt_title = None):
    # if the output is a vector and the original is a matrix,
# the output is translated to a matrix.
if len( np.shape(yEv)) == 2 and len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
r_sqr, RMSE = jchem.estimate_accuracy( yEv, yEv_calc, disp = disp)
if graph:
plt.scatter( yEv.tolist(), yEv_calc.tolist())
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Target')
plt.ylabel('Prediction')
if plt_title:
plt.title( plt_title)
plt.show()
return r_sqr, RMSE
def regress_show( yEv, yEv_calc, disp = True, graph = True, plt_title = None):
    # if the output is a vector and the original is a matrix,
# the output is translated to a matrix.
if len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
if len( np.shape(yEv)) == 1:
yEv = np.mat( yEv).T
r_sqr, RMSE = jchem.estimate_accuracy( yEv, yEv_calc, disp = disp)
if graph:
#plt.scatter( yEv.tolist(), yEv_calc.tolist())
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz) # Change ms
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
if plt_title:
plt.title( plt_title)
else:
plt.title( '$r^2$ = {0:.2e}, RMSE = {1:.2e}'.format( r_sqr, RMSE))
plt.show()
return r_sqr, RMSE
def regress_show3( yEv, yEv_calc, disp = True, graph = True, plt_title = None):
    # if the output is a vector and the original is a matrix,
# the output is translated to a matrix.
r_sqr, RMSE, MAE = jchem.estimate_score3( yEv, yEv_calc, disp = disp)
if graph:
#plt.scatter( yEv.tolist(), yEv_calc.tolist())
plt.figure()
ms_sz = max(min( 6000 / yEv.shape[0], 8), 3)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz) # Change ms
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
if plt_title:
plt.title( plt_title)
else:
plt.title( '$r^2$ = {0:.2e}, RMSE = {1:.2e}, MAE = {2:.2e}'.format( r_sqr, RMSE, MAE))
plt.show()
return r_sqr, RMSE, MAE
def regress_show4( yEv, yEv_calc, disp = True, graph = True, plt_title = None):
    # if the output is a vector and the original is a matrix,
# the output is translated to a matrix.
r_sqr, RMSE, MAE, DAE = estimate_accuracy4( yEv, yEv_calc, disp = disp)
if graph:
#plt.scatter( yEv.tolist(), yEv_calc.tolist())
plt.figure()
ms_sz = max(min( 6000 / yEv.shape[0], 8), 3)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz) # Change ms
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
if plt_title:
plt.title( plt_title)
else:
            plt.title( r'$r^2$={0:.1e},$\sigma$={1:.1e},MAE={2:.1e},DAE={3:.1e}'.format( r_sqr, RMSE, MAE, DAE))
plt.show()
return r_sqr, RMSE, MAE, DAE
def cv_show( yEv, yEv_calc, disp = True, graph = True, grid_std = None):
    # if the output is a vector and the original is a matrix,
# the output is translated to a matrix.
if len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
if len( np.shape(yEv)) == 1:
yEv = np.mat( yEv).T
r_sqr, RMSE = jchem.estimate_accuracy( yEv, yEv_calc, disp = disp)
if graph:
#plt.scatter( yEv.tolist(), yEv_calc.tolist())
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz) # Change ms
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
if grid_std:
plt.title( '($r^2$, std) = ({0:.2e}, {1:.2e}), RMSE = {2:.2e}'.format( r_sqr, grid_std, RMSE))
else:
plt.title( '$r^2$ = {0:.2e}, RMSE = {1:.2e}'.format( r_sqr, RMSE))
plt.show()
return r_sqr, RMSE
ann_show = regress_show
def mlr_show( clf, RMv, yEv, disp = True, graph = True):
yEv_calc = clf.predict( RMv)
if len( np.shape(yEv)) == 2 and len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
r_sqr, RMSE = jchem.estimate_accuracy( yEv, yEv_calc, disp = disp)
if graph:
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz)
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
plt.title( '$r^2$ = {0:.2e}, RMSE = {1:.2e}'.format( r_sqr, RMSE))
plt.show()
return r_sqr, RMSE
def estimate_accuracy4(yEv, yEv_calc, disp = False):
"""
It was originally located in jchem. However now it is allocated here
since the functionality is more inline with jutil than jchem.
"""
r_sqr = metrics.r2_score( yEv, yEv_calc)
RMSE = np.sqrt( metrics.mean_squared_error( yEv, yEv_calc))
MAE = metrics.mean_absolute_error( yEv, yEv_calc)
DAE = metrics.median_absolute_error( yEv, yEv_calc)
if disp:
print "r^2={0:.2e}, RMSE={1:.2e}, MAE={2:.2e}, DAE={3:.2e}".format( r_sqr, RMSE, MAE, DAE)
return r_sqr, RMSE, MAE, DAE
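# A minimal usage sketch for estimate_accuracy4 (the toy vectors below are
# hypothetical and only illustrate the four returned metrics):
def _example_estimate_accuracy4():
    yEv = np.mat( [1.0, 2.0, 3.0, 4.0]).T
    yEv_calc = np.mat( [1.1, 1.9, 3.2, 3.8]).T
    # prints r^2, RMSE, MAE and DAE for the toy prediction above
    return estimate_accuracy4( yEv, yEv_calc, disp = True)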
def mlr_show3( clf, RMv, yEv, disp = True, graph = True):
yEv_calc = clf.predict( RMv)
if len( np.shape(yEv)) == 2 and len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
r_sqr, RMSE, aae = jchem.estimate_accuracy3( yEv, yEv_calc, disp = disp)
if graph:
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz)
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
plt.title( '$r^2$={0:.2e}, RMSE={1:.2e}, AAE={2:.2e}'.format( r_sqr, RMSE, aae))
plt.show()
return r_sqr, RMSE, aae
def mlr_show4( clf, RMv, yEv, disp = True, graph = True):
yEv_calc = clf.predict( RMv)
if len( np.shape(yEv)) == 2 and len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
r_sqr, RMSE, MAE, DAE = estimate_accuracy4( yEv, yEv_calc, disp = disp)
if graph:
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz)
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
#plt.title( '$r^2$={0:.2e}, RMSE={1:.2e}, AAE={2:.2e}'.format( r_sqr, RMSE, aae))
plt.title( '$r^2$={0:.1e},$\sigma$={1:.1e},MAE={2:.1e},DAE={3:.1e}'.format( r_sqr, RMSE, MAE, DAE))
plt.show()
return r_sqr, RMSE, MAE, DAE
def _mlr_val_r0( RM, yE, disp = True, graph = True):
clf = linear_model.LinearRegression()
clf.fit( RM[::2,:], yE[::2,0])
print 'Training result'
mlr_show( clf, RM[::2, :], yE[::2, 0], disp = disp, graph = graph)
print 'Validation result'
mlr_show( clf, RM[1::2, :], yE[1::2, 0], disp = disp, graph = graph)
def mlr_val( RM, yE, disp = True, graph = True, rate = 2, more_train = True, center = None):
"""
    Validation is performed according to the given ratio.
"""
RMt, yEt, RMv, yEv = jchem.get_valid_mode_data( RM, yE, rate = rate, more_train = more_train, center = center)
clf = linear_model.LinearRegression()
clf.fit( RMt, yEt)
print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
return r_sqr, RMSE
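# A usage sketch for the *_val family: roughly 1/rate of the samples are held
# out for validation (selection is currently deterministic, as noted in
# ann_val_pre); RM and yE are whatever descriptor matrix and target column
# are in use.
def _example_mlr_val( RM, yE):
    return mlr_val( RM, yE, disp = True, graph = False, rate = 2)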
def svr_val( RM, yE, C = 1.0, epsilon = 0.1, disp = True, graph = True, rate = 2, more_train = True, center = None):
"""
    Validation is performed according to the given ratio.
"""
RMt, yEt, RMv, yEv = jchem.get_valid_mode_data( RM, yE, rate = rate, more_train = more_train, center = center)
clf = svm.SVR( C = C, epsilon = epsilon)
clf.fit( RMt, yEt.A1)
print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_ridge( RM, yE, rate = 2, more_train = True, center = None, alpha = 0.5, disp = True, graph = True):
"""
    Validation is performed according to the given ratio.
"""
RMt, yEt, RMv, yEv = jchem.get_valid_mode_data( RM, yE, rate = rate, more_train = more_train, center = center)
print "Ridge: alpha = {}".format( alpha)
clf = linear_model.Ridge( alpha = alpha)
clf.fit( RMt, yEt)
print 'Weight value'
#print clf.coef_.flatten()
plt.plot( clf.coef_.flatten())
plt.grid()
plt.xlabel('Tap')
plt.ylabel('Weight')
plt.title('Linear Regression Weights')
plt.show()
print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_avg_2( RM, yE, disp = False, graph = False):
"""
    Validation is performed according to the given ratio.
"""
r_sqr_list, RMSE_list = [], []
vseq_list = []
org_seq = range( len( yE))
for v_seq in itertools.combinations( org_seq, 2):
t_seq = filter( lambda x: x not in v_seq, org_seq)
RMt, yEt = RM[ t_seq, :], yE[ t_seq, 0]
RMv, yEv = RM[ v_seq, :], yE[ v_seq, 0]
#RMt, yEt, RMv, yEv = jchem.get_valid_mode_data( RM, yE, rate = rate, more_train = more_train, center = center)
clf = linear_model.LinearRegression()
clf.fit( RMt, yEt)
#print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
#print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
"""
#This is blocked since vseq_list is returned.
if r_sqr < 0:
print 'v_seq:', v_seq, '--> r_sqr = ', r_sqr
"""
r_sqr_list.append( r_sqr)
RMSE_list.append( RMSE)
vseq_list.append( v_seq)
print "average r_sqr = {0}, average RMSE = {1}".format( np.average( r_sqr_list), np.average( RMSE_list))
    return r_sqr_list, RMSE_list, vseq_list
def gen_rand_seq( ln, rate):
vseq = choose( ln, int( ln / rate))
org_seq = range( ln)
tseq = filter( lambda x: x not in vseq, org_seq)
return tseq, vseq
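# A small sketch of gen_rand_seq: with ln = 10 and rate = 5, two indices go to
# the validation sequence and the remaining eight to the training sequence.
def _example_gen_rand_seq():
    tseq, vseq = gen_rand_seq( 10, 5)
    print 'train indices:', tseq
    print 'validation indices:', vseq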
def mlr_val_vseq( RM, yE, v_seq, disp = True, graph = True):
"""
Validation is performed using vseq indexed values.
"""
org_seq = range( len( yE))
t_seq = filter( lambda x: x not in v_seq, org_seq)
RMt, yEt = RM[ t_seq, :], yE[ t_seq, 0]
RMv, yEv = RM[ v_seq, :], yE[ v_seq, 0]
clf = linear_model.LinearRegression()
clf.fit( RMt, yEt)
print 'Weight value'
#print clf.coef_.flatten()
plt.plot( clf.coef_.flatten())
plt.grid()
plt.xlabel('Tap')
plt.ylabel('Weight')
plt.title('Linear Regression Weights')
plt.show()
if disp: print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
if disp: print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
#if r_sqr < 0:
# print 'v_seq:', v_seq, '--> r_sqr = ', r_sqr
return r_sqr, RMSE
def mlr_val_vseq_rand(RM, yE, disp = True, graph = True, rate = 5):
"""
Validation is peformed using vseq indexed values.
vseq is randmly selected with respect to rate.
"""
vseq = choose( len( yE), int(len( yE) / rate));
r_sqr, RMSE = mlr_val_vseq( RM, yE, vseq, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_vseq_ridge_rand( RM, yE, alpha = .5, rate = 2, disp = True, graph = True):
    vseq = choose( len( yE), int(len( yE) / rate))
r_sqr, RMSE = mlr_val_vseq_ridge( RM, yE, vseq, alpha = alpha, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_vseq_lasso_rand( RM, yE, alpha = .5, rate = 2, disp = True, graph = True):
    vseq = choose( len( yE), int(len( yE) / rate))
r_sqr, RMSE = mlr_val_vseq_lasso( RM, yE, vseq, alpha = alpha, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_vseq_MMSE_rand( RM, yE, alpha = .5, rate = 2, disp = True, graph = True):
    vseq = choose( len( yE), int(len( yE) / rate))
r_sqr, RMSE = mlr_val_vseq_MMSE( RM, yE, vseq, alpha = alpha, disp = disp, graph = graph)
return r_sqr, RMSE
def mlr_val_vseq_ridge_rand_profile( RM, yE, alpha = .5, rate = 2, iterN = 10, disp = True, graph = False, hist = True):
r2_rms_list = []
for ii in range( iterN):
        vseq = choose( len( yE), int(len( yE) / rate))
r_sqr, RMSE = mlr_val_vseq_ridge( RM, yE, vseq, alpha = alpha, disp = disp, graph = graph)
r2_rms_list.append( (r_sqr, RMSE))
r2_list, rms_list = zip( *r2_rms_list)
#Showing r2 as histogram
pd_r2 = pd.DataFrame( {'r_sqr': r2_list})
pd_r2.plot( kind = 'hist', alpha = 0.5)
#Showing rms as histogram
pd_rms = pd.DataFrame( {'rms': rms_list})
pd_rms.plot( kind = 'hist', alpha = 0.5)
print "r2: mean = {0}, std = {1}".format( np.mean( r2_list), np.std( r2_list))
print "RMSE: mean = {0}, std = {1}".format( np.mean( rms_list), np.std( rms_list))
return r2_list, rms_list
def mlr_val_vseq_lasso_rand_profile( RM, yE, alpha = .001, rate = 2, iterN = 10, disp = True, graph = False, hist = True):
r2_rms_list = []
for ii in range( iterN):
        vseq = choose( len( yE), int(len( yE) / rate))
r_sqr, RMSE = mlr_val_vseq_lasso( RM, yE, vseq, alpha = alpha, disp = disp, graph = graph)
r2_rms_list.append( (r_sqr, RMSE))
r2_list, rms_list = zip( *r2_rms_list)
#Showing r2 as histogram
pd_r2 = pd.DataFrame( {'r_sqr': r2_list})
pd_r2.plot( kind = 'hist', alpha = 0.5)
#Showing rms as histogram
pd_rms = pd.DataFrame( {'rms': rms_list})
pd_rms.plot( kind = 'hist', alpha = 0.5)
print "r2: mean = {0}, std = {1}".format( np.mean( r2_list), np.std( r2_list))
print "RMSE: mean = {0}, std = {1}".format( np.mean( rms_list), np.std( rms_list))
return r2_list, rms_list
def mlr_val_vseq_ridge( RM, yE, v_seq, alpha = .5, disp = True, graph = True):
"""
    Validation is performed using vseq indexed values.
"""
org_seq = range( len( yE))
t_seq = filter( lambda x: x not in v_seq, org_seq)
RMt, yEt = RM[ t_seq, :], yE[ t_seq, 0]
RMv, yEv = RM[ v_seq, :], yE[ v_seq, 0]
clf = linear_model.Ridge( alpha = alpha)
clf.fit( RMt, yEt)
if disp: print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
if disp: print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
#if r_sqr < 0:
# print 'v_seq:', v_seq, '--> r_sqr = ', r_sqr
return r_sqr, RMSE
def mlr_val_vseq_lasso( RM, yE, v_seq, alpha = .5, disp = True, graph = True):
"""
    Validation is performed using vseq indexed values.
"""
org_seq = range( len( yE))
t_seq = filter( lambda x: x not in v_seq, org_seq)
RMt, yEt = RM[ t_seq, :], yE[ t_seq, 0]
RMv, yEv = RM[ v_seq, :], yE[ v_seq, 0]
clf = linear_model.Lasso( alpha = alpha)
clf.fit( RMt, yEt)
if disp: print 'Training result'
mlr_show( clf, RMt, yEt, disp = disp, graph = graph)
if disp: print 'Validation result'
r_sqr, RMSE = mlr_show( clf, RMv, yEv, disp = disp, graph = graph)
#if r_sqr < 0:
# print 'v_seq:', v_seq, '--> r_sqr = ', r_sqr
return r_sqr, RMSE
def mlr_val_vseq_MMSE( RM, yE, v_seq, alpha = .5, disp = True, graph = True):
"""
    Validation is performed using vseq indexed values.
"""
org_seq = range( len( yE))
t_seq = filter( lambda x: x not in v_seq, org_seq)
RMt, yEt = RM[ t_seq, :], yE[ t_seq, 0]
RMv, yEv = RM[ v_seq, :], yE[ v_seq, 0]
w, RMt_1 = mmse_with_bias( RMt, yEt)
yEt_c = RMt_1*w
print 'Weight values'
#print clf.coef_.flatten()
plt.plot( w.A1)
plt.grid()
plt.xlabel('Tap')
plt.ylabel('Weight')
plt.title('Linear Regression Weights')
plt.show()
RMv_1 = add_bias_xM( RMv)
yEv_c = RMv_1*w
if disp: print 'Training result'
regress_show( yEt, yEt_c, disp = disp, graph = graph)
if disp: print 'Validation result'
r_sqr, RMSE = regress_show( yEv, yEv_c, disp = disp, graph = graph)
#if r_sqr < 0:
# print 'v_seq:', v_seq, '--> r_sqr = ', r_sqr
return r_sqr, RMSE
def _ann_val_pre_r0( RM, yE, disp = True, graph = True):
"""
    In the ANN case, pre- and post-processing steps are used,
    while in the MLR case all processing is completed by one function (mlr).
    The ANN processing itself is performed by a shell command.
"""
jchem.gen_input_files_valid( RM[::2,:], yE[::2,0], RM)
def ann_val_pre( RM, yE, rate = 2, more_train = True, center = None):
"""
    In the ANN case, pre- and post-processing steps are used,
    while in the MLR case all processing is completed by one function (mlr).
    The ANN processing itself is performed by a shell command.
    Any percentage of validation data is now possible.
    Random selection will be included later; currently
    deterministic selection is applied.
"""
RMt, yEt, RMv, yEv = jchem.get_valid_mode_data( RM, yE, rate = rate, more_train = more_train, center = center)
jchem.gen_input_files_valid( RMt, yEt, RM)
def _ann_val_post_r0( yE, disp = True, graph = True):
"""
    After ann_val_pre and the shell command, this post-processing step can be used.
"""
df_ann = pd.read_csv( 'ann_out.csv')
yv_ann = np.mat( df_ann['out'].tolist()).T
    print 'Training result'
ann_show( yE[::2,0], yv_ann[::2,0], disp = disp, graph = graph)
print 'Validation result'
r_sqr, RMSE = ann_show( yE[1::2,0], yv_ann[1::2,0], disp = disp, graph = graph)
return r_sqr, RMSE
def ann_val_post( yE, disp = True, graph = True, rate = 2, more_train = True, center = None):
"""
    After ann_val_pre and the shell command, ann_val_post can be used.
"""
df_ann = pd.read_csv( 'ann_out.csv')
yE_c = np.mat( df_ann['out'].tolist()).T
yEt, yEt_c, yEv, yEv_c = jchem.get_valid_mode_data( yE, yE_c, rate = rate, more_train = more_train, center = center)
    print 'Training result'
ann_show( yEt, yEt_c, disp = disp, graph = graph)
print 'Validation result'
r_sqr, RMSE = ann_show( yEv, yEv_c, disp = disp, graph = graph)
return r_sqr, RMSE
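# A sketch of the full ANN validation workflow described above. The shell
# command name 'fann_run' is only a placeholder assumption; substitute the
# external ANN binary actually used in your environment.
def _example_ann_workflow( RM, yE):
    import os
    ann_val_pre( RM, yE, rate = 2) # writes the input files for the ANN tool
    os.system( 'fann_run') # hypothetical external command producing ann_out.csv
    return ann_val_post( yE, rate = 2) # reads ann_out.csv and shows the results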
def writeparam_txt( fname = 'param.txt', dic = {"num_neurons_hidden": 4, "desired_error": 0.00001}):
"save param.txt with dictionary"
with open(fname, 'w') as f:
print "Saving", fname
for di in dic:
f.write("{} {}\n".format( di, dic[di]))
def choose(N, n):
"""
    Returns n distinct values chosen randomly from 0 to N-1 (without replacement).
"""
x = range( N)
n_list = []
for ii in range( n):
xi = random.choice( x)
n_list.append( xi)
x.remove( xi)
return n_list
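# choose() samples without replacement, i.e. it behaves like the standard
# library's random.sample; a quick sanity check of that property:
def _example_choose():
    picked = choose( 10, 3)
    assert len( picked) == len( set( picked)) == 3 # three distinct indices
    return picked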
def pd_remove_duplist_ID( pdr, dup_l):
pdw = pdr.copy()
for d in dup_l:
for x in d[1:]:
print x, pdw.ID[ x], pdw.Smile[ x]
pdw = pdw[ pdw.ID != pdr.ID[ x]]
return pdw
def pd_remove_faillist_ID( pdr, fail_l):
pdw = pdr.copy()
for x in fail_l:
pdw = pdw[ pdw.ID != pdr.ID[ x]]
return pdw
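# A sketch of the expected list formats (the index values are hypothetical and
# pdr must carry the ID and Smile columns used above): dup_l is a list of
# duplicate groups, e.g. [[0, 5], [2, 7, 9]], where the first index of each
# group is kept and the rest are removed; fail_l is a flat list of row
# indices to remove, e.g. [3, 4].
def _example_pd_remove( pdr):
    pdw = pd_remove_duplist_ID( pdr, [[0, 5], [2, 7, 9]])
    return pd_remove_faillist_ID( pdw, [3, 4])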
def mmse( xM_1, yV):
Rxx = xM_1.T * xM_1
Rxy = xM_1.T * yV
w = np.linalg.pinv( Rxx) * Rxy
return w
def add_bias_xM( xM):
xMT_list = xM.T.tolist()
xMT_list.append( np.ones( xM.shape[0], dtype = int).tolist())
xM_1 = np.mat( xMT_list).T
return xM_1
def mmse_with_bias( xM, yV):
xM_1 = add_bias_xM( xM)
w_1 = mmse( xM_1, yV)
return w_1, xM_1
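# A worked sketch of the closed-form MMSE solution above: with a bias column
# appended, w = pinv(X'X) * X'y, so the noiseless line y = 2x + 1 is recovered
# exactly (toy numbers, for illustration only).
def _example_mmse_with_bias():
    xM = np.mat( [0.0, 1.0, 2.0, 3.0]).T
    yV = 2.0 * xM + 1.0
    w, xM_1 = mmse_with_bias( xM, yV)
    print w.A1 # approximately [2.0, 1.0], i.e. slope then bias
    return w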
def svm_SVR_C( xM, yV, c_l, graph = True):
"""
    SVR is performed repeatedly, once for each C value in the list.
"""
r2_l, sd_l = [], []
for C in c_l:
print 'sklearn.svm.SVR(C={})'.format( C)
clf = svm.SVR( C = C)
clf.fit( xM, yV.A1)
yV_pred = clf.predict(xM)
r2, sd = regress_show( yV, np.mat( yV_pred).T, graph = graph)
for X, x in [[r2_l, r2], [sd_l, sd]]:
X.append( x)
print 'average r2, sd are', np.mean( r2_l), np.mean( sd_l)
if graph:
pdw = pd.DataFrame( { 'log10(C)': np.log10(c_l), 'r2': r2_l, 'sd': sd_l})
pdw.plot( x = 'log10(C)')
return r2_l, sd_l
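# Example sweep over a logarithmic C grid (a sketch; xM and yV are whatever
# descriptor matrix and target vector are in use):
def _example_svm_SVR_C( xM, yV):
    c_l = np.logspace( -1, 2, 4).tolist() # C in {0.1, 1, 10, 100}
    return svm_SVR_C( xM, yV, c_l, graph = False)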
def corr_xy( x_vec, y_vec):
print type( x_vec), type( y_vec)
if type( x_vec) != np.matrixlib.defmatrix.matrix:
molw_x = np.mat( x_vec).T
else:
molw_x = x_vec
if type( y_vec) != np.matrixlib.defmatrix.matrix:
yV = np.mat( y_vec).T
else:
yV = y_vec
print molw_x.shape, yV.shape
normal_molw_x = molw_x / np.linalg.norm( molw_x)
yV0 = yV - np.mean( yV)
normal_yV0 = yV0 / np.linalg.norm( yV0)
return normal_molw_x.T * normal_yV0
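# Note that corr_xy normalizes x without removing its mean while y is
# mean-removed, so the result matches the Pearson correlation only when x is
# already zero-mean; a quick sketch with toy, zero-mean x:
def _example_corr_xy():
    x = [1.0, -2.0, 3.0, -2.0]
    y = [2.0, -4.1, 5.9, -3.8]
    print corr_xy( x, y) # close to 1 for this nearly proportional pair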
def gs_Lasso( xM, yV, alphas_log = (-1, 1, 9)):
print xM.shape, yV.shape
clf = linear_model.Lasso()
    #params = {'alpha': np.logspace(1, -1, 9)}
    params = {'alpha': np.logspace( *alphas_log)}
    kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
    gs = grid_search.GridSearchCV( clf, params, scoring = 'r2', cv = kf5, n_jobs = 1)
gs.fit( xM, yV)
return gs
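# Usage sketch: alphas_log is the (start, stop, num) argument triple handed to
# np.logspace, so (-1, 1, 9) searches nine alphas between 0.1 and 10.
def _example_gs_Lasso( xM, yV):
    gs = gs_Lasso( xM, yV, alphas_log = (-1, 1, 9))
    print gs.best_params_, gs.best_score_
    return gs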
def gs_Lasso_norm( xM, yV, alphas_log = (-1, 1, 9)):
print xM.shape, yV.shape
clf = linear_model.Lasso( normalize = True)
    #params = {'alpha': np.logspace(1, -1, 9)}
    params = {'alpha': np.logspace( *alphas_log)}
    kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
    gs = grid_search.GridSearchCV( clf, params, scoring = 'r2', cv = kf5, n_jobs = -1)
gs.fit( xM, yV)
return gs
def gs_Lasso_kf( xM, yV, alphas_log_l):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log_l[0])
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
print 'Second Lasso Stage'
gs2 = gs_Lasso( xM_in_nz, yV_in, alphas_log_l[1])
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = gs2.score( xM_out_nz, yV_out)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
return score_l
def gs_Lasso_kf_ext( xM, yV, alphas_log_l):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log_l[0])
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
print 'Second Lasso Stage'
gs2 = gs_Lasso( xM_in_nz, yV_in, alphas_log_l[1])
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
# Obtain prediction model by whole data including internal validation data
alpha = gs2.best_params_['alpha']
clf = linear_model.Lasso( alpha = alpha)
clf.fit( xM_in_nz, yV_in)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = clf.score( xM_out_nz, yV_out)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
return score_l
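# For the two-stage routines above, alphas_log_l is a list holding one
# np.logspace triple per Lasso stage, e.g. a coarse first grid and a finer
# second one (the values below are illustrative only):
def _example_gs_Lasso_kf_ext( xM, yV):
    return gs_Lasso_kf_ext( xM, yV, [(-2, 2, 9), (-2, 2, 17)])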
def gs_Ridge( xM, yV, alphas_log = (1, -1, 9), n_folds = 5):
    print xM.shape, yV.shape
    clf = linear_model.Ridge()
    #params = {'alpha': np.logspace(1, -1, 9)}
    params = {'alpha': np.logspace( *alphas_log)}
    kf_n = cross_validation.KFold( xM.shape[0], n_folds=n_folds, shuffle=True)
    gs = grid_search.GridSearchCV( clf, params, scoring = 'r2', cv = kf_n, n_jobs = 1)
    gs.fit( xM, yV)
    return gs
def _cv_LinearRegression_r0( xM, yV):
print xM.shape, yV.shape
clf = linear_model.Ridge()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
cv_scores = cross_validation.cross_val_score( clf, xM, yV, scoring = 'r2', cv = kf5, n_jobs = -1)
return cv_scores
def cv_LinearRegression( xM, yV):
print xM.shape, yV.shape
clf = linear_model.LinearRegression()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
cv_scores = cross_validation.cross_val_score( clf, xM, yV, scoring = 'r2', cv = kf5, n_jobs = -1)
print 'R^2 mean, std -->', np.mean( cv_scores), np.std( cv_scores)
return cv_scores
def cv_LinearRegression_A( xM, yV, s_l):
lr = linear_model.LinearRegression()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM_shuffle = np.concatenate( (xM[ train, :], xM[ test, :]), axis = 0)
# print xM_shuffle.shape
A_all = jpyx.calc_tm_sim_M( xM_shuffle)
A = A_all
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
A_molw = A
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def cv_LinearRegression_Asupervising( xM, yV, s_l):
lr = linear_model.LinearRegression()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM_shuffle = np.concatenate( (xM[ train, :], xM[ test, :]), axis = 0)
#print xM_shuffle.shape
A_all = jpyx.calc_tm_sim_M( xM_shuffle)
A = A_all[ :, :len(train)]
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
A_molw = A
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def cv_LinearRegression_Asupervising_molw( xM, yV, s_l):
lr = linear_model.LinearRegression()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM_shuffle = np.concatenate( (xM[ train, :], xM[ test, :]), axis = 0)
# print xM_shuffle.shape
A_all = jpyx.calc_tm_sim_M( xM_shuffle)
A = A_all[ :, :len(train)]
#print 'A.shape', A.shape
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
A_molw = jchem.add_new_descriptor( A, molw_l)
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
#print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def cv_Ridge_Asupervising_molw( xM, yV, s_l, alpha):
lr = linear_model.Ridge( alpha = alpha)
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM_shuffle = np.concatenate( (xM[ train, :], xM[ test, :]), axis = 0)
# print xM_shuffle.shape
A_all = jpyx.calc_tm_sim_M( xM_shuffle)
A = A_all[ :, :len(train)]
#print 'A.shape', A.shape
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
A_molw = jchem.add_new_descriptor( A, molw_l)
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
#print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def cv_Ridge_Asupervising_2fp( xM1, xM2, yV, s_l, alpha):
lr = linear_model.Ridge( alpha = alpha)
kf5 = cross_validation.KFold( len(s_l), n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM1_shuffle = np.concatenate( (xM1[ train, :], xM1[ test, :]), axis = 0)
xM2_shuffle = np.concatenate( (xM2[ train, :], xM2[ test, :]), axis = 0)
# print xM_shuffle.shape
A1_redundant = jpyx.calc_tm_sim_M( xM1_shuffle)
A1 = A1_redundant[ :, :len(train)]
A2_redundant = jpyx.calc_tm_sim_M( xM2_shuffle)
A2 = A2_redundant[ :, :len(train)]
#print 'A.shape', A.shape
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
molwV = np.mat( molw_l).T
#A_molw = jchem.add_new_descriptor( A, molw_l)
print A1.shape, A2.shape, molwV.shape
# A_molw = np.concatenate( (A1, A2, molwV), axis = 1)
A_molw = np.concatenate( (A1, A2), axis = 1)
print A_molw.shape
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
#print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def gs_Ridge_Asupervising_2fp( xM1, xM2, yV, s_l, alpha_l):
"""
    This 2fp case uses two fingerprints at the same time in order to
    combine their preprocessed versions separately.
"""
r2_l2 = list()
for alpha in alpha_l:
print alpha
r2_l = cv_Ridge_Asupervising_2fp( xM1, xM2, yV, s_l, alpha)
r2_l2.append( r2_l)
return r2_l2
def cv_Ridge_Asupervising_2fp_molw( xM1, xM2, yV, s_l, alpha):
lr = linear_model.Ridge( alpha = alpha)
kf5 = cross_validation.KFold( len(s_l), n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM1_shuffle = np.concatenate( (xM1[ train, :], xM1[ test, :]), axis = 0)
xM2_shuffle = np.concatenate( (xM2[ train, :], xM2[ test, :]), axis = 0)
# print xM_shuffle.shape
A1_redundant = jpyx.calc_tm_sim_M( xM1_shuffle)
A1 = A1_redundant[ :, :len(train)]
A2_redundant = jpyx.calc_tm_sim_M( xM2_shuffle)
A2 = A2_redundant[ :, :len(train)]
#print 'A.shape', A.shape
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
molwV = np.mat( molw_l).T
#A_molw = jchem.add_new_descriptor( A, molw_l)
print A1.shape, A2.shape, molwV.shape
A_molw = np.concatenate( (A1, A2, molwV), axis = 1)
print A_molw.shape
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
#print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def gs_Ridge_Asupervising_2fp_molw( xM1, xM2, yV, s_l, alpha_l):
"""
    This 2fp case uses two fingerprints at the same time in order to
    combine their preprocessed versions separately.
"""
r2_l2 = list()
for alpha in alpha_l:
print alpha
r2_l = cv_Ridge_Asupervising_2fp_molw( xM1, xM2, yV, s_l, alpha)
r2_l2.append( r2_l)
return r2_l2
def gs_Ridge_Asupervising_molw( xM, yV, s_l, alpha_l):
r2_l2 = list()
for alpha in alpha_l:
print alpha
r2_l = cv_Ridge_Asupervising_molw( xM, yV, s_l, alpha)
r2_l2.append( r2_l)
return r2_l2
def gs_Ridge_Asupervising( xM, yV, s_l, alpha_l):
r2_l2 = list()
for alpha in alpha_l:
print alpha
r2_l = cv_Ridge_Asupervising( xM, yV, s_l, alpha)
r2_l2.append( r2_l)
return r2_l2
def cv_Ridge_Asupervising( xM, yV, s_l, alpha):
lr = linear_model.Ridge( alpha = alpha)
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
r2_l = list()
for train, test in kf5:
xM_shuffle = np.concatenate( (xM[ train, :], xM[ test, :]), axis = 0)
# print xM_shuffle.shape
A_all = jpyx.calc_tm_sim_M( xM_shuffle)
A = A_all[ :, :len(train)]
#print 'A.shape', A.shape
s_l_shuffle = [s_l[x] for x in train] #train
s_l_shuffle.extend( [s_l[x] for x in test] ) #test
molw_l = jchem.rdkit_molwt( s_l_shuffle)
A_molw = A
A_molw_train = A_molw[:len(train), :]
A_molw_test = A_molw[len(train):, :]
#print A_molw_train.shape, yV[ train, 0].shape
lr.fit( A_molw_train, yV[ train, 0])
#print A_molw_test.shape, yV[ test, 0].shape
r2_l.append( lr.score( A_molw_test, yV[ test, 0]))
print 'R^2 mean, std -->', np.mean( r2_l), np.std( r2_l)
return r2_l
def gs_RidgeByLasso_kf_ext( xM, yV, alphas_log_l):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
        print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log_l[0])
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
        print 'Second Ridge Stage'
gs2 = gs_Ridge( xM_in_nz, yV_in, alphas_log_l[1])
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
# Obtain prediction model by whole data including internal validation data
alpha = gs2.best_params_['alpha']
clf = linear_model.Ridge( alpha = alpha)
clf.fit( xM_in_nz, yV_in)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = clf.score( xM_out_nz, yV_out)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
return score_l
def gs_SVR( xM, yV, svr_params):
print xM.shape, yV.shape
clf = svm.SVR()
    #params = {'alpha': np.logspace(1, -1, 9)}
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
gs = grid_search.GridSearchCV( clf, svr_params, scoring = 'r2', cv = kf5, n_jobs = -1)
gs.fit( xM, yV.A1)
return gs
def gs_SVRByLasso_kf_ext( xM, yV, alphas_log, svr_params):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
        print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log)
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
        print 'Second SVR Stage'
gs2 = gs_SVR( xM_in_nz, yV_in, svr_params)
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
# Obtain prediction model by whole data including internal validation data
C = gs2.best_params_['C']
gamma = gs2.best_params_['gamma']
epsilon = gs2.best_params_['epsilon']
clf = svm.SVR( C = C, gamma = gamma, epsilon = epsilon)
clf.fit( xM_in_nz, yV_in.A1)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = clf.score( xM_out_nz, yV_out.A1)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
return score_l
def gs_SVRByLasso( xM, yV, alphas_log, svr_params):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score1_l = []
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
        print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log)
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
score1_l.append( gs1.best_score_)
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
        print 'Second SVR Stage'
gs2 = gs_SVR( xM_in_nz, yV_in, svr_params)
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
# Obtain prediction model by whole data including internal validation data
C = gs2.best_params_['C']
gamma = gs2.best_params_['gamma']
epsilon = gs2.best_params_['epsilon']
clf = svm.SVR( C = C, gamma = gamma, epsilon = epsilon)
clf.fit( xM_in_nz, yV_in.A1)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = clf.score( xM_out_nz, yV_out.A1)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
print 'First stage scores', score1_l
print 'Average first stage scores', np.mean( score1_l)
return score_l, score1_l
def gs_ElasticNet( xM, yV, en_params):
print xM.shape, yV.shape
clf = linear_model.ElasticNet()
kf5 = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
gs = grid_search.GridSearchCV( clf, en_params, scoring = 'r2', cv = kf5, n_jobs = -1)
gs.fit( xM, yV)
return gs
def gs_SVRByElasticNet( xM, yV, en_params, svr_params):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score1_l = []
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
        print 'First ElasticNet Stage'
gs1 = gs_ElasticNet( xM_in, yV_in, en_params)
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
score1_l.append( gs1.best_score_)
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
        print 'Second SVR Stage'
gs2 = gs_SVR( xM_in_nz, yV_in, svr_params)
print 'Best score:', gs2.best_score_
print 'Best param:', gs2.best_params_
print gs2.grid_scores_
print 'External Validation Stage'
# Obtain prediction model by whole data including internal validation data
C = gs2.best_params_['C']
gamma = gs2.best_params_['gamma']
epsilon = gs2.best_params_['epsilon']
clf = svm.SVR( C = C, gamma = gamma, epsilon = epsilon)
clf.fit( xM_in_nz, yV_in.A1)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
score = clf.score( xM_out_nz, yV_out.A1)
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
print 'First stage scores', score1_l
print 'Average first stage scores', np.mean( score1_l)
return score_l, score1_l
def gs_GPByLasso( xM, yV, alphas_log):
kf5_ext = cross_validation.KFold( xM.shape[0], n_folds=5, shuffle=True)
score1_l = []
score_l = []
for ix, (tr, te) in enumerate( kf5_ext):
        print 'Fold {} external validation stage ============================'.format( ix + 1)
xM_in = xM[ tr, :]
yV_in = yV[ tr, 0]
        print 'First Lasso Stage'
gs1 = gs_Lasso( xM_in, yV_in, alphas_log)
print 'Best score:', gs1.best_score_
print 'Best param:', gs1.best_params_
print gs1.grid_scores_
score1_l.append( gs1.best_score_)
nz_idx = gs1.best_estimator_.sparse_coef_.indices
xM_in_nz = xM_in[ :, nz_idx]
print 'Second GP Stage'
Xa_in_nz = np.array( xM_in_nz)
ya_in = np.array( yV_in)
xM_out = xM[ te, :]
yV_out = yV[ te, 0]
xM_out_nz = xM_out[:, nz_idx]
Xa_out_nz = np.array( xM_out_nz)
ya_out = np.array( yV_out)
#jgp = gp.GaussianProcess( Xa_in_nz, ya_in, Xa_out_nz, ya_out)
        # the y arrays are indexed with [:, 0] so they are passed as 1-D vectors
jgp = gp.GaussianProcess( Xa_in_nz, ya_in[:,0], Xa_out_nz, ya_out[:,0])
jgp.optimize_noise_and_amp()
jgp.run_gp()
#ya_out_pred = np.mat(jgp.predicted_targets)
ya_out_pred = jgp.predicted_targets
#print ya_out[:,0].shape, jgp.predicted_targets.shape
r2, rmse = regress_show( ya_out[:,0], ya_out_pred)
score = r2
print score
score_l.append( score)
print ''
print 'all scores:', score_l
print 'average scores:', np.mean( score_l)
print 'First stage scores', score1_l
print 'Average first stage scores', np.mean( score1_l)
return score_l, score1_l
def show_gs_alpha( grid_scores):
alphas = np.array([ x[0]['alpha'] for x in grid_scores])
r2_mean = np.array([ x[1] for x in grid_scores])
r2_std = np.array([ np.std(x[2]) for x in grid_scores])
r2_mean_pos = r2_mean + r2_std
r2_mean_neg = r2_mean - r2_std
plt.semilogx( alphas, r2_mean, 'x-', label = 'E[$r^2$]')
plt.semilogx( alphas, r2_mean_pos, ':k', label = 'E[$r^2$]+$\sigma$')
plt.semilogx( alphas, r2_mean_neg, ':k', label = 'E[$r^2$]-$\sigma$')
plt.grid()
plt.legend( loc = 2)
plt.show()
best_idx = np.argmax( r2_mean)
best_r2_mean = r2_mean[ best_idx]
best_r2_std = r2_std[ best_idx]
best_alpha = alphas[ best_idx]
print "Best: r2(alpha = {0}) -> mean:{1}, std:{2}".format( best_alpha, best_r2_mean, best_r2_std)
def count( a_l, a, inverse = False):
"""
    It returns the number of elements equal to the target value
    (or, with inverse = True, the number not equal to it).
    To handle arrays with more than one dimension, the index
    array is converted to a list before counting.
"""
if inverse == False:
x = np.where( np.array( a_l) == a)
else:
x = np.where( np.array( a_l) != a)
return len(x[0].tolist())
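# A small sketch of count(): with inverse = False it counts matches,
# with inverse = True it counts everything else.
def _example_count():
    a_l = [1, 0, 1, 1, 2]
    print count( a_l, 1) # 3
    print count( a_l, 1, inverse = True) # 2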
def show_cdf( data, xlabel_str = None, label_str = ''):
"""
    Show a CDF graph of data, which should be a list or array in 1-D form.
    xlabel_str is the name of the x-axis.
    show() is not called here so that aggregated plots can be controlled later.
"""
data_sorted = np.sort( data)
# calculate the proportional values of samples
p = 1. * np.arange(len(data)) / (len(data) - 1)
plt.plot( data_sorted, p, label = label_str)
if xlabel_str:
plt.xlabel( xlabel_str)
plt.ylabel( 'Cumulative Fraction')
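# Usage sketch for show_cdf: since show() is deliberately left out, several
# CDFs can be layered on one axis before displaying (toy data below):
def _example_show_cdf():
    show_cdf( np.random.randn( 100), xlabel_str = 'value', label_str = 'run A')
    show_cdf( np.random.randn( 100) + 1, label_str = 'run B')
    plt.legend()
    plt.show()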
def mlr_show4_pred( clf, RMv, yEv, disp = True, graph = True):
yEv_calc = clf.predict( RMv)
if len( np.shape(yEv)) == 2 and len( np.shape(yEv_calc)) == 1:
yEv_calc = np.mat( yEv_calc).T
r_sqr, RMSE, MAE, DAE = estimate_accuracy4( yEv, yEv_calc, disp = disp)
if graph:
plt.figure()
ms_sz = max(min( 4000 / yEv.shape[0], 8), 1)
plt.plot( yEv.tolist(), yEv_calc.tolist(), '.', ms = ms_sz)
ax = plt.gca()
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
        # now plot both limits against each other
#ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.plot(lims, lims, '-', color = 'pink')
plt.xlabel('Experiment')
plt.ylabel('Prediction')
#plt.title( '$r^2$={0:.2e}, RMSE={1:.2e}, AAE={2:.2e}'.format( r_sqr, RMSE, aae))
plt.title( '$r^2$={0:.1e},$\sigma$={1:.1e},MAE={2:.1e},DAE={3:.1e}'.format( r_sqr, RMSE, MAE, DAE))
plt.show()
return (r_sqr, RMSE, MAE, DAE), yEv_calc
def mlr4_coef_pred( RM, yE, disp = True, graph = True):
"""
Return: coef_, intercept_, yEp
"""
clf = linear_model.LinearRegression()
clf.fit( RM, yE)
_, yEp = mlr_show4_pred( clf, RM, yE, disp = disp, graph = graph)
return clf.coef_, clf.intercept_, yEp | 28.4637 | 122 | 0.658281 | 8,423 | 48,616 | 3.595275 | 0.05948 | 0.012152 | 0.017964 | 0.022587 | 0.851567 | 0.833603 | 0.820394 | 0.809002 | 0.797708 | 0.784731 | 0 | 0.022408 | 0.184877 | 48,616 | 1,708 | 123 | 28.4637 | 0.741773 | 0.093138 | 0 | 0.726246 | 0 | 0.006585 | 0.084544 | 0.009192 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.009407 | null | null | 0.156162 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4822021002b91c4151214f2905654bffdee2031a | 74,149 | py | Python | disciplinereport/apps/form/migrations/0001_initial.py | ninapavlich/disciplinereport | 02e1a6dbed767fa160517e4b20c1c24e52b37bf2 | [
"MIT"
] | null | null | null | disciplinereport/apps/form/migrations/0001_initial.py | ninapavlich/disciplinereport | 02e1a6dbed767fa160517e4b20c1c24e52b37bf2 | [
"MIT"
] | null | null | null | disciplinereport/apps/form/migrations/0001_initial.py | ninapavlich/disciplinereport | 02e1a6dbed767fa160517e4b20c1c24e52b37bf2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('email', '0001_initial'),
('media', '0001_initial'),
('core', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='FieldEntry',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('value', models.TextField(null=True, blank=True)),
('created_by', models.ForeignKey(related_name='form_fieldentry_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ['form_field__order'],
'abstract': False,
'verbose_name': 'Field Entry',
'verbose_name_plural': 'Field Entries',
},
),
migrations.CreateModel(
name='Form',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('title', models.CharField(help_text=b'The display title for this object.', max_length=255, null=True, verbose_name='Title', blank=True)),
('slug', models.CharField(help_text=b'Auto-generated page slug for this object.', max_length=255, verbose_name='Slug', db_index=True, blank=True)),
('uuid', models.CharField(help_text=b'UUID generated for object; can be used for short URLs', max_length=255, verbose_name='UUID', blank=True)),
('order', models.IntegerField(default=0, help_text=b'')),
('path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='path', blank=True)),
('title_path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='title path', blank=True)),
('path_generated', models.CharField(help_text=b'The URL path to this page, based on page hierarchy and slug.', max_length=255, null=True, verbose_name='generated path', blank=True)),
('path_override', models.CharField(help_text=b'The URL path to this page, defined absolutely.', max_length=255, null=True, verbose_name='path override', blank=True)),
('hierarchy', models.CharField(null=True, max_length=255, blank=True, help_text=b'Administrative Hierarchy', unique=True, verbose_name='hierarchy')),
('temporary_redirect', models.CharField(help_text=b'Temporarily redirect to a different path', max_length=255, verbose_name='Temporary Redirect', blank=True)),
('permanent_redirect', models.CharField(help_text=b'Permanently redirect to a different path', max_length=255, verbose_name='Permanent Redirect', blank=True)),
('publication_date', models.DateTimeField(null=True, verbose_name='Publication Date', blank=True)),
('publication_status', models.IntegerField(default=10, help_text=b'Current publication status', choices=[(10, 'Draft'), (20, 'Needs Review'), (100, 'Published'), (40, 'Unpublished')])),
('publish_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Published' on this date.", null=True, verbose_name='Publish on Date', blank=True)),
('expire_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Expired' on this date.", null=True, verbose_name='Expire on Date', blank=True)),
('page_meta_description', models.CharField(help_text=b'A short description of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Description', blank=True)),
('page_meta_keywords', models.CharField(help_text=b'A short list of keywords of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Page Keywords', blank=True)),
('is_searchable', models.BooleanField(default=True, help_text=b'Allow search engines to index this object and display in sitemap.')),
('in_sitemap', models.BooleanField(default=True, help_text=b'Is in sitemap')),
('noindex', models.BooleanField(default=False, help_text=b'Robots noindex')),
('nofollow', models.BooleanField(default=False, help_text=b'Robots nofollow')),
('sitemap_changefreq', models.CharField(default=b'monthly', help_text=b'How frequently does page content update', max_length=255, verbose_name='Sitemap Change Frequency', choices=[(b'never', 'Never'), (b'yearly', 'Yearly'), (b'monthly', 'Monthly'), (b'weekly', 'Weekly'), (b'daily', 'Daily'), (b'hourly', 'Hourly'), (b'always', 'Always')])),
('sitemap_priority', models.CharField(default=b'0.5', max_length=255, blank=True, help_text=b'Sitemap priority', null=True, verbose_name=b'Sitemap Priority')),
('shareable', models.BooleanField(default=False, help_text=b'Show sharing widget')),
('tiny_url', models.CharField(help_text=b'Tiny URL used for social sharing', max_length=255, null=True, verbose_name='tiny url', blank=True)),
('social_share_type', models.CharField(default=b'article', choices=[(b'article', b'Article'), (b'book', b'Book'), (b'profile', b'Profile'), (b'website', b'Website'), (b'video.movie', b'Video - Movie'), (b'video.episode', b'Video - Episode'), (b'video.tv_show', b'Video - TV Show'), (b'video.other', b'Video - Other'), (b'music.song', b'Music - Song'), (b'music.album', b'Music - Album'), (b'music.radio_station', b'Music - Playlist'), (b'music.radio_station', b'Music - Radio Station')], max_length=255, blank=True, null=True, verbose_name=b'Social type')),
('facebook_author_id', models.CharField(help_text=b'Numeric Facebook ID', max_length=255, null=True, verbose_name=b'Facebook Author ID', blank=True)),
('twitter_author_id', models.CharField(help_text=b'Twitter handle, including "@" e.g. @cgpartners', max_length=255, null=True, verbose_name=b'Twitter Admin ID', blank=True)),
('google_author_id', models.CharField(help_text=b'Google author id, e.g. the AUTHOR_ID in https://plus.google.com/AUTHOR_ID/posts', max_length=255, null=True, verbose_name=b'Google Admin ID', blank=True)),
('content', models.TextField(help_text=b'', null=True, verbose_name='content', blank=True)),
('synopsis', models.TextField(help_text=b'', null=True, verbose_name='synopsis', blank=True)),
                ('form_action', models.CharField(default=b'form-page', help_text=b'Defines whether to display this form on its own page with its own URL, or whether to embed it on another page elsewhere in the site. NOTE: Several of the subsections below only apply if the form action is a standalone form.', max_length=255, choices=[(b'form-page', 'Standalone Form'), (b'embedded-page', 'Form Embedded in Page')])),
('required_logged_in_user', models.BooleanField(default=False, help_text=b'Requires user to log in or create an account before filling out form. NOTE: This should only be turned on if you have enabled user registration on the site.')),
('is_editable', models.BooleanField(default=False, help_text=b'Allows user to update the entry. NOTE: If this is checked, unless you also require a logged in user on the form, anyone with the correct URL can later update the entry. Therefore it is recommended that you use this in conjunction with requiring a logged in user.')),
                ('email_admin_override', models.CharField(help_text=b'Separate email addresses with comma, semicolon or space. Leave blank to send to default email address (support@disciplinereport.com)', max_length=255, null=True, verbose_name='Admins to email on submission', blank=True)),
('email_admin_on_submission', models.BooleanField(default=True, help_text=b'')),
('admin_email', models.EmailField(help_text=b'', max_length=255, null=True, blank=True)),
('email_user_field_slug', models.CharField(help_text=b"Enter the slug of the field that should be used to determine the user's email address", max_length=255, null=True, blank=True)),
('email_user_on_submission', models.BooleanField(default=True, help_text=b'')),
('redirect_url_on_submission', models.CharField(help_text=b'When a form is submitted you may override where the user is redirected.', max_length=255, null=True, blank=True)),
('submission_content', models.TextField(help_text=b'', null=True, blank=True)),
('submit_label', models.CharField(default=b'Submit', help_text=b'Label on the submit button.', max_length=255)),
('form_error_message', models.CharField(help_text=b'Global message to show user when there is an error in the form. NOTE: Individual fields have separate error messages.', max_length=255, null=True, blank=True)),
('form_create_message', models.CharField(help_text=b'Message to show user when they successfully submit the form.', max_length=255, null=True, blank=True)),
('form_update_message', models.CharField(help_text=b'Message to show user when they successfully update the form. NOTE: Form must be editable to allow users to update the form.', max_length=255, null=True, blank=True)),
('extra_css_classes', models.CharField(help_text=b'Adds custom css classes into the form template.', max_length=255, null=True, blank=True)),
('third_party_id', models.CharField(help_text=b'An identifier to integrate the form with another system', max_length=255, null=True, blank=True)),
('created_by', models.ForeignKey(related_name='form_form_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('email_admin_on_submission_category', models.ForeignKey(related_name='email_admin_on_submission_category', blank=True, to='email.EmailCategory', help_text=b'', null=True)),
('email_admin_on_submission_template', models.ForeignKey(related_name='email_admin_on_submission_template', blank=True, to='email.EmailTemplate', help_text=b'', null=True)),
('email_user_on_submission_category', models.ForeignKey(related_name='email_user_on_submission_category', blank=True, to='email.EmailCategory', help_text=b'', null=True)),
('email_user_on_submission_template', models.ForeignKey(related_name='email_user_on_submission_template', blank=True, to='email.EmailTemplate', help_text=b'', null=True)),
('image', models.ForeignKey(related_name='form_form_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Featured image', null=True)),
('modified_by', models.ForeignKey(related_name='form_form_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('published_by', models.ForeignKey(related_name='form_form_published_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('social_share_image', models.ForeignKey(related_name='form_form_social_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Standards for the social share image vary, but an image at least 300x200px should work well.', null=True)),
('submit_template', models.ForeignKey(related_name='template_submit_template', blank=True, to='core.Template', help_text=b'', null=True)),
('template', models.ForeignKey(blank=True, to='core.Template', help_text=b'Template for view', null=True)),
],
options={
'abstract': False,
'verbose_name': 'Forms',
'verbose_name_plural': 'Forms',
},
),
migrations.CreateModel(
name='FormEntry',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('status', models.CharField(default=b'new', max_length=255, choices=[(b'new', 'New'), (b'read', 'Read'), (b'replied', 'Replied'), (b'archived', 'Archived')])),
('created_by', models.ForeignKey(related_name='form_formentry_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('form_schema', models.ForeignKey(blank=True, to='form.Form', null=True)),
('modified_by', models.ForeignKey(related_name='form_formentry_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'abstract': False,
'verbose_name': 'Form Entry',
'verbose_name_plural': 'Form Entries',
},
),
migrations.CreateModel(
name='FormEntryStatus',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('title', models.CharField(help_text=b'The display title for this object.', max_length=255, null=True, verbose_name='Title', blank=True)),
('slug', models.CharField(help_text=b'Auto-generated page slug for this object.', max_length=255, verbose_name='Slug', db_index=True, blank=True)),
('uuid', models.CharField(help_text=b'UUID generated for object; can be used for short URLs', max_length=255, verbose_name='UUID', blank=True)),
('order', models.IntegerField(default=0, help_text=b'')),
('path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='path', blank=True)),
('title_path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='title path', blank=True)),
('path_generated', models.CharField(help_text=b'The URL path to this page, based on page hierarchy and slug.', max_length=255, null=True, verbose_name='generated path', blank=True)),
('path_override', models.CharField(help_text=b'The URL path to this page, defined absolutely.', max_length=255, null=True, verbose_name='path override', blank=True)),
('hierarchy', models.CharField(null=True, max_length=255, blank=True, help_text=b'Administrative Hierarchy', unique=True, verbose_name='hierarchy')),
('temporary_redirect', models.CharField(help_text=b'Temporarily redirect to a different path', max_length=255, verbose_name='Temporary Redirect', blank=True)),
('permanent_redirect', models.CharField(help_text=b'Permanently redirect to a different path', max_length=255, verbose_name='Permanent Redirect', blank=True)),
('publication_date', models.DateTimeField(null=True, verbose_name='Publication Date', blank=True)),
('publication_status', models.IntegerField(default=10, help_text=b'Current publication status', choices=[(10, 'Draft'), (20, 'Needs Review'), (100, 'Published'), (40, 'Unpublished')])),
('publish_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Published' on this date.", null=True, verbose_name='Publish on Date', blank=True)),
('expire_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Expired' on this date.", null=True, verbose_name='Expire on Date', blank=True)),
('page_meta_description', models.CharField(help_text=b'A short description of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Description', blank=True)),
('page_meta_keywords', models.CharField(help_text=b'A short list of keywords of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Page Keywords', blank=True)),
('is_searchable', models.BooleanField(default=True, help_text=b'Allow search engines to index this object and display in sitemap.')),
('in_sitemap', models.BooleanField(default=True, help_text=b'Is in sitemap')),
('noindex', models.BooleanField(default=False, help_text=b'Robots noindex')),
('nofollow', models.BooleanField(default=False, help_text=b'Robots nofollow')),
('sitemap_changefreq', models.CharField(default=b'monthly', help_text=b'How frequently does page content update', max_length=255, verbose_name='Sitemap Change Frequency', choices=[(b'never', 'Never'), (b'yearly', 'Yearly'), (b'monthly', 'Monthly'), (b'weekly', 'Weekly'), (b'daily', 'Daily'), (b'hourly', 'Hourly'), (b'always', 'Always')])),
('sitemap_priority', models.CharField(default=b'0.5', max_length=255, blank=True, help_text=b'Sitemap priority', null=True, verbose_name=b'Sitemap Priority')),
('shareable', models.BooleanField(default=False, help_text=b'Show sharing widget')),
('tiny_url', models.CharField(help_text=b'Tiny URL used for social sharing', max_length=255, null=True, verbose_name='tiny url', blank=True)),
('social_share_type', models.CharField(default=b'article', choices=[(b'article', b'Article'), (b'book', b'Book'), (b'profile', b'Profile'), (b'website', b'Website'), (b'video.movie', b'Video - Movie'), (b'video.episode', b'Video - Episode'), (b'video.tv_show', b'Video - TV Show'), (b'video.other', b'Video - Other'), (b'music.song', b'Music - Song'), (b'music.album', b'Music - Album'), (b'music.radio_station', b'Music - Playlist'), (b'music.radio_station', b'Music - Radio Station')], max_length=255, blank=True, null=True, verbose_name=b'Social type')),
('facebook_author_id', models.CharField(help_text=b'Numeric Facebook ID', max_length=255, null=True, verbose_name=b'Facebook Author ID', blank=True)),
('twitter_author_id', models.CharField(help_text=b'Twitter handle, including "@" e.g. @cgpartners', max_length=255, null=True, verbose_name=b'Twitter Admin ID', blank=True)),
('google_author_id', models.CharField(help_text=b'Google author id, e.g. the AUTHOR_ID in https://plus.google.com/AUTHOR_ID/posts', max_length=255, null=True, verbose_name=b'Google Admin ID', blank=True)),
('content', models.TextField(help_text=b'', null=True, verbose_name='content', blank=True)),
('synopsis', models.TextField(help_text=b'', null=True, verbose_name='synopsis', blank=True)),
('created_by', models.ForeignKey(related_name='form_formentrystatus_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('image', models.ForeignKey(related_name='form_formentrystatus_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Featured image', null=True)),
('modified_by', models.ForeignKey(related_name='form_formentrystatus_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('parent', models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, blank=True, to='form.FormEntryStatus', null=True)),
('published_by', models.ForeignKey(related_name='form_formentrystatus_published_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('social_share_image', models.ForeignKey(related_name='form_formentrystatus_social_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Standards for the social share image vary, but an image at least 300x200px should work well.', null=True)),
('template', models.ForeignKey(blank=True, to='core.Template', help_text=b'Template for view', null=True)),
],
options={
'abstract': False,
'verbose_name': 'Form Entry Status',
'verbose_name_plural': 'Form Entry Statuses',
},
),
migrations.CreateModel(
name='FormEntryTag',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('title', models.CharField(help_text=b'The display title for this object.', max_length=255, null=True, verbose_name='Title', blank=True)),
('slug', models.CharField(help_text=b'Auto-generated page slug for this object.', max_length=255, verbose_name='Slug', db_index=True, blank=True)),
('uuid', models.CharField(help_text=b'UUID generated for object; can be used for short URLs', max_length=255, verbose_name='UUID', blank=True)),
('order', models.IntegerField(default=0, help_text=b'')),
('path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='path', blank=True)),
('title_path', models.CharField(help_text=b'Actual path used based on generated and override path', max_length=255, null=True, verbose_name='title path', blank=True)),
('path_generated', models.CharField(help_text=b'The URL path to this page, based on page hierarchy and slug.', max_length=255, null=True, verbose_name='generated path', blank=True)),
('path_override', models.CharField(help_text=b'The URL path to this page, defined absolutely.', max_length=255, null=True, verbose_name='path override', blank=True)),
('hierarchy', models.CharField(null=True, max_length=255, blank=True, help_text=b'Administrative Hierarchy', unique=True, verbose_name='hierarchy')),
('temporary_redirect', models.CharField(help_text=b'Temporarily redirect to a different path', max_length=255, verbose_name='Temporary Redirect', blank=True)),
('permanent_redirect', models.CharField(help_text=b'Permanently redirect to a different path', max_length=255, verbose_name='Permanent Redirect', blank=True)),
('publication_date', models.DateTimeField(null=True, verbose_name='Publication Date', blank=True)),
('publication_status', models.IntegerField(default=10, help_text=b'Current publication status', choices=[(10, 'Draft'), (20, 'Needs Review'), (100, 'Published'), (40, 'Unpublished')])),
('publish_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Published' on this date.", null=True, verbose_name='Publish on Date', blank=True)),
('expire_on_date', models.DateTimeField(help_text=b"Object state will be set to 'Expired' on this date.", null=True, verbose_name='Expire on Date', blank=True)),
('page_meta_description', models.CharField(help_text=b'A short description of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Description', blank=True)),
('page_meta_keywords', models.CharField(help_text=b'A short list of keywords of the page, used for SEO and not displayed to the user; aim for 150-160 characters.', max_length=2000, verbose_name='Meta Page Keywords', blank=True)),
('is_searchable', models.BooleanField(default=True, help_text=b'Allow search engines to index this object and display in sitemap.')),
('in_sitemap', models.BooleanField(default=True, help_text=b'Is in sitemap')),
('noindex', models.BooleanField(default=False, help_text=b'Robots noindex')),
('nofollow', models.BooleanField(default=False, help_text=b'Robots nofollow')),
('sitemap_changefreq', models.CharField(default=b'monthly', help_text=b'How frequently does page content update', max_length=255, verbose_name='Sitemap Change Frequency', choices=[(b'never', 'Never'), (b'yearly', 'Yearly'), (b'monthly', 'Monthly'), (b'weekly', 'Weekly'), (b'daily', 'Daily'), (b'hourly', 'Hourly'), (b'always', 'Always')])),
('sitemap_priority', models.CharField(default=b'0.5', max_length=255, blank=True, help_text=b'Sitemap priority', null=True, verbose_name=b'Sitemap Priority')),
('shareable', models.BooleanField(default=False, help_text=b'Show sharing widget')),
('tiny_url', models.CharField(help_text=b'Tiny URL used for social sharing', max_length=255, null=True, verbose_name='tiny url', blank=True)),
('social_share_type', models.CharField(default=b'article', choices=[(b'article', b'Article'), (b'book', b'Book'), (b'profile', b'Profile'), (b'website', b'Website'), (b'video.movie', b'Video - Movie'), (b'video.episode', b'Video - Episode'), (b'video.tv_show', b'Video - TV Show'), (b'video.other', b'Video - Other'), (b'music.song', b'Music - Song'), (b'music.album', b'Music - Album'), (b'music.playlist', b'Music - Playlist'), (b'music.radio_station', b'Music - Radio Station')], max_length=255, blank=True, null=True, verbose_name=b'Social type')),
('facebook_author_id', models.CharField(help_text=b'Numeric Facebook ID', max_length=255, null=True, verbose_name=b'Facebook Author ID', blank=True)),
('twitter_author_id', models.CharField(help_text=b'Twitter handle, including "@" e.g. @cgpartners', max_length=255, null=True, verbose_name=b'Twitter Author ID', blank=True)),
('google_author_id', models.CharField(help_text=b'Google author id, e.g. the AUTHOR_ID in https://plus.google.com/AUTHOR_ID/posts', max_length=255, null=True, verbose_name=b'Google Author ID', blank=True)),
('content', models.TextField(help_text=b'', null=True, verbose_name='content', blank=True)),
('synopsis', models.TextField(help_text=b'', null=True, verbose_name='synopsis', blank=True)),
('created_by', models.ForeignKey(related_name='form_formentrytag_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('image', models.ForeignKey(related_name='form_formentrytag_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Featured image', null=True)),
('modified_by', models.ForeignKey(related_name='form_formentrytag_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('published_by', models.ForeignKey(related_name='form_formentrytag_published_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('social_share_image', models.ForeignKey(related_name='form_formentrytag_social_images', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='media.Image', help_text=b'Standards for the social share image vary, but an image at least 300x200px should work well.', null=True)),
('template', models.ForeignKey(blank=True, to='core.Template', help_text=b'Template for view', null=True)),
],
options={
'abstract': False,
'verbose_name': 'Form Entry Tag',
'verbose_name_plural': 'Form Entry Tags',
},
),
migrations.CreateModel(
name='FormField',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('version', models.IntegerField(default=0)),
('created_date', models.DateTimeField(auto_now_add=True, verbose_name='Created Date', null=True)),
('modified_date', models.DateTimeField(auto_now=True, verbose_name='Modified Date', null=True)),
('admin_note', models.TextField(help_text=b'Not publicly visible', null=True, verbose_name='admin note', blank=True)),
('title', models.CharField(help_text=b'The display title for this object.', max_length=255, null=True, verbose_name='Title', blank=True)),
('slug', models.CharField(help_text=b'Auto-generated page slug for this object.', max_length=255, verbose_name='Slug', db_index=True, blank=True)),
('is_required', models.BooleanField(default=False, help_text=b'If this field is required, a value of some sort is needed for the user to submit the form. See the advanced validation options to apply more specific validation parameters.')),
('is_digits', models.BooleanField(default=False, help_text=b'')),
('is_alphanumeric', models.BooleanField(default=False, help_text=b'')),
('min_length', models.IntegerField(help_text=b'', null=True, blank=True)),
('max_length', models.IntegerField(help_text=b'', null=True, blank=True)),
('min_words', models.IntegerField(help_text=b'', null=True, blank=True)),
('max_words', models.IntegerField(help_text=b'', null=True, blank=True)),
('min_value', models.IntegerField(help_text=b'', null=True, blank=True)),
('max_value', models.IntegerField(help_text=b'', null=True, blank=True)),
('min_check', models.IntegerField(help_text=b'', null=True, blank=True)),
('max_check', models.IntegerField(help_text=b'', null=True, blank=True)),
('min_date', models.DateField(help_text=b'', null=True, blank=True)),
('max_date', models.DateField(help_text=b'', null=True, blank=True)),
('min_datetime', models.DateTimeField(help_text=b'', null=True, blank=True)),
('max_datetime', models.DateTimeField(help_text=b'', null=True, blank=True)),
('min_width', models.IntegerField(help_text=b'Applies to image uploads', null=True, blank=True)),
('max_width', models.IntegerField(help_text=b'Applies to image uploads', null=True, blank=True)),
('min_height', models.IntegerField(help_text=b'Applies to image uploads', null=True, blank=True)),
('max_height', models.IntegerField(help_text=b'Applies to image uploads', null=True, blank=True)),
('min_size', models.IntegerField(help_text=b'Applies to image and file uploads, measured in MB; e.g. 5000 is 5GB, 0.5 is 500KB', null=True, blank=True)),
('max_size', models.IntegerField(help_text=b'Applies to image and file uploads, measured in MB; e.g. 5000 is 5GB, 0.5 is 500KB', null=True, blank=True)),
('step_interval', models.DecimalField(help_text=b'If the value is a score, range, or slider, pick the step interval.', null=True, max_digits=9, decimal_places=2, blank=True)),
('pattern', models.CharField(help_text=b'Match a value or validate file types (e.g. .*\\.txt|.*\\.pdf|.*\\.doc)', max_length=255, null=True, blank=True)),
('pattern_error_message', models.CharField(max_length=255, null=True, blank=True)),
('equal_to', models.CharField(help_text=b'Enter form field slug that this field should match', max_length=255, null=True, blank=True)),
('equal_to_error_message', models.CharField(max_length=255, null=True, blank=True)),
('type', models.CharField(help_text=b'Fill in choices field (in advanced options) for select fields. Fill in content field for instructions. WARNING: Only use password and secure file in conjunction with HTTPS.', max_length=255, choices=[(b'text-field', 'Single Line Text Field'), (b'email-field', 'Email Field'), (b'url-field', 'URL Field'), (b'integer-field', 'Integer Field'), (b'number-field', 'Number Field'), (b'text-area', 'Multiple Lines Text Area'), (b'boolean-checkboxes', 'Single Checkbox'), (b'boolean-toggle', 'Toggle'), (b'select-dropdown', 'Select with Dropdown'), (b'select-radio-buttons', 'Select with Radio Buttons'), (b'select-buttons', 'Select with Buttons'), (b'select-image', 'Select Image'), (b'select-multiple-checkboxes', 'Select Multiple with Checkboxes'), (b'select-multiple-autosuggest', 'Select Multiple with Autosuggest'), (b'select-multiple-horizontal', 'Select Multiple with Horizontal Lists'), (b'select-multiple-buttons', 'Select Multiple with Buttons'), (b'select-multiple-images', 'Select Multiple Images'), (b'comma-separated-list', 'List of Items'), (b'file', 'File'), (b'secure-file', 'Secure File'), (b'image', 'Image'), (b'date', 'Date'), (b'time', 'Time'), (b'date-time', 'Date and Time'), (b'score', 'Score'), (b'range', 'Range'), (b'number-slider', 'Number on a Slider'), (b'password', 'Password'), (b'form-instructions', 'Form Instructions'), (b'form-divider', 'Form Divider'), (b'form-step', 'Form Step'), (b'hidden-field', 'Hidden Field'), (b'honeypot-field', 'Honeypot Field')])),
('order', models.IntegerField(default=0, help_text=b'')),
('secondary_label', models.CharField(help_text=b'', max_length=255, null=True, blank=True)),
('placeholder_text', models.CharField(help_text=b'', max_length=255, null=True, blank=True)),
('help_text', models.CharField(help_text=b'', max_length=255, null=True, blank=True)),
('content', models.TextField(help_text=b'Rich-text instructions', null=True, blank=True)),
('choices', models.TextField(help_text=b'Comma separated options where applicable. If an option itself contains commas, surround the option starting with the ` character and ending with the ` character.', null=True, blank=True)),
('default', models.CharField(help_text=b'Default field value', max_length=255, null=True, blank=True)),
('extra_css_classes', models.CharField(help_text=b'Adds custom css classes onto the form field in the template.', max_length=255, null=True, blank=True)),
('icon_right', models.CharField(blank=True, max_length=255, null=True, help_text=b'Add icon to the right side of the field. Preview icons at http://fontawesome.io/icons/', choices=[(b'glass', b'Glass'), (b'music', b'Music'), (b'search', b'Search'), (b'envelope-o', b'Envelope O'), (b'heart', b'Heart'), (b'star', b'Star'), (b'star-o', b'Star o'), (b'user', b'User'), (b'film', b'Film'), (b'th-large', b'Th Large'), (b'th', b'Th'), (b'th-list', b'Th List'), (b'check', b'Check'), (b'remove', b'Remove'), (b'close', b'Close'), (b'times', b'Times'), (b'search-plus', b'Search Plus'), (b'search-minus', b'Search Minus'), (b'power-off', b'Power Off'), (b'signal', b'Signal'), (b'gear', b'Gear'), (b'cog', b'Cog'), (b'trash-o', b'Trash o'), (b'home', b'Home'), (b'file-o', b'File o'), (b'clock-o', b'Clock o'), (b'road', b'Road'), (b'download', b'Download'), (b'arrow-circle-o-down', b'Arrow circle-o Down'), (b'arrow-circle-o-up', b'Arrow circle-o Up'), (b'inbox', b'Inbox'), (b'play-circle-o', b'Play circle o'), (b'rotate-right', b'Rotate Right'), (b'repeat', b'Repeat'), (b'refresh', b'Refresh'), (b'list-alt', b'List Alt'), (b'lock', b'Lock'), (b'flag', b'Flag'), (b'headphones', b'Headphones'), (b'volume-off', b'Volume Off'), (b'volume-down', b'Volume Down'), (b'volume-up', b'Volume Up'), (b'qrcode', b'Qrcode'), (b'barcode', b'Barcode'), (b'tag', b'Tag'), (b'tags', b'Tags'), (b'book', b'Book'), (b'bookmark', b'Bookmark'), (b'print', b'Print'), (b'camera', b'Camera'), (b'font', b'Font'), (b'bold', b'Bold'), (b'italic', b'Italic'), (b'text-height', b'Text Height'), (b'text-width', b'Text Width'), (b'align-left', b'Align Left'), (b'align-center', b'Align Center'), (b'align-right', b'Align Right'), (b'align-justify', b'Align Justify'), (b'list', b'List'), (b'dedent', b'Dedent'), (b'outdent', b'Outdent'), (b'indent', b'Indent'), (b'video-camera', b'Video Camera'), (b'photo', b'Photo'), (b'image', b'Image'), (b'picture-o', b'Picture o'), (b'pencil', b'Pencil'), (b'map-marker', b'Map Marker'), (b'adjust', b'Adjust'), (b'tint', b'Tint'), (b'edit', b'Edit'), (b'pencil-square-o', b'Pencil square o'), (b'share-square-o', b'Share square o'), (b'check-square-o', b'Check square o'), (b'arrows', b'Arrows'), (b'step-backward', b'Step Backward'), (b'fast-backward', b'Fast Backward'), (b'backward', b'Backward'), (b'play', b'Play'), (b'pause', b'Pause'), (b'stop', b'Stop'), (b'forward', b'Forward'), (b'fast-forward', b'Fast Forward'), (b'step-forward', b'Step Forward'), (b'eject', b'Eject'), (b'chevron-left', b'Chevron Left'), (b'chevron-right', b'Chevron Right'), (b'plus-circle', b'Plus Circle'), (b'minus-circle', b'Minus Circle'), (b'times-circle', b'Times Circle'), (b'check-circle', b'Check Circle'), (b'question-circle', b'Question Circle'), (b'info-circle', b'Info Circle'), (b'crosshairs', b'Crosshairs'), (b'times-circle-o', b'Times circle o'), (b'check-circle-o', b'Check circle o'), (b'ban', b'Ban'), (b'arrow-left', b'Arrow Left'), (b'arrow-right', b'Arrow Right'), (b'arrow-up', b'Arrow Up'), (b'arrow-down', b'Arrow Down'), (b'mail-forward', b'Mail Forward'), (b'share', b'Share'), (b'expand', b'Expand'), (b'compress', b'Compress'), (b'plus', b'Plus'), (b'minus', b'Minus'), (b'asterisk', b'Asterisk'), (b'exclamation-circle', b'Exclamation Circle'), (b'gift', b'Gift'), (b'leaf', b'Leaf'), (b'fire', b'Fire'), (b'eye', b'Eye'), (b'eye-slash', b'Eye Slash'), (b'warning', b'Warning'), (b'exclamation-triangle', b'Exclamation Triangle'), (b'plane', b'Plane'), (b'calendar', b'Calendar'), (b'random', b'Random'), (b'comment', 
b'Comment'), (b'magnet', b'Magnet'), (b'chevron-up', b'Chevron Up'), (b'chevron-down', b'Chevron Down'), (b'retweet', b'Retweet'), (b'shopping-cart', b'Shopping Cart'), (b'folder', b'Folder'), (b'folder-open', b'Folder Open'), (b'arrows-v', b'Arrows v'), (b'arrows-h', b'Arrows h'), (b'bar-chart-o', b'Bar chart o'), (b'bar-chart', b'Bar Chart'), (b'twitter-square', b'Twitter Square'), (b'facebook-square', b'Facebook Square'), (b'camera-retro', b'Camera Retro'), (b'key', b'Key'), (b'gears', b'Gears'), (b'cogs', b'Cogs'), (b'comments', b'Comments'), (b'thumbs-o-up', b'Thumbs o Up'), (b'thumbs-o-down', b'Thumbs o Down'), (b'star-half', b'Star Half'), (b'heart-o', b'Heart o'), (b'sign-out', b'Sign Out'), (b'linkedin-square', b'Linkedin Square'), (b'thumb-tack', b'Thumb Tack'), (b'external-link', b'External Link'), (b'sign-in', b'Sign In'), (b'trophy', b'Trophy'), (b'github-square', b'Github Square'), (b'upload', b'Upload'), (b'lemon-o', b'Lemon o'), (b'phone', b'Phone'), (b'square-o', b'Square o'), (b'bookmark-o', b'Bookmark o'), (b'phone-square', b'Phone Square'), (b'twitter', b'Twitter'), (b'facebook-f', b'Facebook f'), (b'facebook', b'Facebook'), (b'github', b'Github'), (b'unlock', b'Unlock'), (b'credit-card', b'Credit Card'), (b'rss', b'Rss'), (b'hdd-o', b'Hdd o'), (b'bullhorn', b'Bullhorn'), (b'bell', b'Bell'), (b'certificate', b'Certificate'), (b'hand-o-right', b'Hand o Right'), (b'hand-o-left', b'Hand o Left'), (b'hand-o-up', b'Hand o Up'), (b'hand-o-down', b'Hand o Down'), (b'arrow-circle-left', b'Arrow circle Left'), (b'arrow-circle-right', b'Arrow circle Right'), (b'arrow-circle-up', b'Arrow circle Up'), (b'arrow-circle-down', b'Arrow circle Down'), (b'globe', b'Globe'), (b'wrench', b'Wrench'), (b'tasks', b'Tasks'), (b'filter', b'Filter'), (b'briefcase', b'Briefcase'), (b'arrows-alt', b'Arrows Alt'), (b'group', b'Group'), (b'users', b'Users'), (b'chain', b'Chain'), (b'link', b'Link'), (b'cloud', b'Cloud'), (b'flask', b'Flask'), (b'cut', b'Cut'), (b'scissors', b'Scissors'), (b'copy', b'Copy'), (b'files-o', b'Files o'), (b'paperclip', b'Paperclip'), (b'save', b'Save'), (b'floppy-o', b'Floppy o'), (b'square', b'Square'), (b'navicon', b'Navicon'), (b'reorder', b'Reorder'), (b'bars', b'Bars'), (b'list-ul', b'List Ul'), (b'list-ol', b'List Ol'), (b'strikethrough', b'Strikethrough'), (b'underline', b'Underline'), (b'table', b'Table'), (b'magic', b'Magic'), (b'truck', b'Truck'), (b'pinterest', b'Pinterest'), (b'pinterest-square', b'Pinterest Square'), (b'google-plus-square', b'Google plus Square'), (b'google-plus', b'Google Plus'), (b'money', b'Money'), (b'caret-down', b'Caret Down'), (b'caret-up', b'Caret Up'), (b'caret-left', b'Caret Left'), (b'caret-right', b'Caret Right'), (b'columns', b'Columns'), (b'unsorted', b'Unsorted'), (b'sort', b'Sort'), (b'sort-down', b'Sort Down'), (b'sort-desc', b'Sort Desc'), (b'sort-up', b'Sort Up'), (b'sort-asc', b'Sort Asc'), (b'envelope', b'Envelope'), (b'linkedin', b'Linkedin'), (b'rotate-left', b'Rotate Left'), (b'undo', b'Undo'), (b'legal', b'Legal'), (b'gavel', b'Gavel'), (b'dashboard', b'Dashboard'), (b'tachometer', b'Tachometer'), (b'comment-o', b'Comment o'), (b'comments-o', b'Comments o'), (b'flash', b'Flash'), (b'bolt', b'Bolt'), (b'sitemap', b'Sitemap'), (b'umbrella', b'Umbrella'), (b'paste', b'Paste'), (b'clipboard', b'Clipboard'), (b'lightbulb-o', b'Lightbulb o'), (b'exchange', b'Exchange'), (b'cloud-download', b'Cloud Download'), (b'cloud-upload', b'Cloud Upload'), (b'user-md', b'User Md'), (b'stethoscope', b'Stethoscope'), (b'suitcase', 
b'Suitcase'), (b'bell-o', b'Bell o'), (b'coffee', b'Coffee'), (b'cutlery', b'Cutlery'), (b'file-text-o', b'File text o'), (b'building-o', b'Building o'), (b'hospital-o', b'Hospital o'), (b'ambulance', b'Ambulance'), (b'medkit', b'Medkit'), (b'fighter-jet', b'Fighter Jet'), (b'beer', b'Beer'), (b'h-square', b'h Square'), (b'plus-square', b'Plus Square'), (b'angle-double-left', b'Angle double Left'), (b'angle-double-right', b'Angle double Right'), (b'angle-double-up', b'Angle double Up'), (b'angle-double-down', b'Angle double Down'), (b'angle-left', b'Angle Left'), (b'angle-right', b'Angle Right'), (b'angle-up', b'Angle Up'), (b'angle-down', b'Angle Down'), (b'desktop', b'Desktop'), (b'laptop', b'Laptop'), (b'tablet', b'Tablet'), (b'mobile-phone', b'Mobile Phone'), (b'mobile', b'Mobile'), (b'circle-o', b'Circle o'), (b'quote-left', b'Quote Left'), (b'quote-right', b'Quote Right'), (b'spinner', b'Spinner'), (b'circle', b'Circle'), (b'mail-reply', b'Mail Reply'), (b'reply', b'Reply'), (b'github-alt', b'Github Alt'), (b'folder-o', b'Folder o'), (b'folder-open-o', b'Folder open o'), (b'smile-o', b'Smile o'), (b'frown-o', b'Frown o'), (b'meh-o', b'Meh o'), (b'gamepad', b'Gamepad'), (b'keyboard-o', b'Keyboard o'), (b'flag-o', b'Flag o'), (b'flag-checkered', b'Flag Checkered'), (b'terminal', b'Terminal'), (b'code', b'Code'), (b'mail-reply-all', b'Mail reply All'), (b'reply-all', b'Reply All'), (b'star-half-empty', b'Star half Empty'), (b'star-half-full', b'Star half Full'), (b'star-half-o', b'Star half o'), (b'location-arrow', b'Location Arrow'), (b'crop', b'Crop'), (b'code-fork', b'Code Fork'), (b'unlink', b'Unlink'), (b'chain-broken', b'Chain Broken'), (b'question', b'Question'), (b'info', b'Info'), (b'exclamation', b'Exclamation'), (b'superscript', b'Superscript'), (b'subscript', b'Subscript'), (b'eraser', b'Eraser'), (b'puzzle-piece', b'Puzzle Piece'), (b'microphone', b'Microphone'), (b'microphone-slash', b'Microphone Slash'), (b'shield', b'Shield'), (b'calendar-o', b'Calendar o'), (b'fire-extinguisher', b'Fire Extinguisher'), (b'rocket', b'Rocket'), (b'maxcdn', b'Maxcdn'), (b'chevron-circle-left', b'Chevron circle Left'), (b'chevron-circle-right', b'Chevron circle Right'), (b'chevron-circle-up', b'Chevron circle Up'), (b'chevron-circle-down', b'Chevron circle Down'), (b'html5', b'Html5'), (b'css3', b'Css3'), (b'anchor', b'Anchor'), (b'unlock-alt', b'Unlock Alt'), (b'bullseye', b'Bullseye'), (b'ellipsis-h', b'Ellipsis h'), (b'ellipsis-v', b'Ellipsis v'), (b'rss-square', b'Rss Square'), (b'play-circle', b'Play Circle'), (b'ticket', b'Ticket'), (b'minus-square', b'Minus Square'), (b'minus-square-o', b'Minus square o'), (b'level-up', b'Level Up'), (b'level-down', b'Level Down'), (b'check-square', b'Check Square'), (b'pencil-square', b'Pencil Square'), (b'external-link-square', b'External link Square'), (b'share-square', b'Share Square'), (b'compass', b'Compass'), (b'toggle-down', b'Toggle Down'), (b'caret-square-o-down', b'Caret square-o Down'), (b'toggle-up', b'Toggle Up'), (b'caret-square-o-up', b'Caret square-o Up'), (b'toggle-right', b'Toggle Right'), (b'caret-square-o-right', b'Caret square-o Right'), (b'euro', b'Euro'), (b'eur', b'Eur'), (b'gbp', b'Gbp'), (b'dollar', b'Dollar'), (b'usd', b'Usd'), (b'rupee', b'Rupee'), (b'inr', b'Inr'), (b'cny', b'Cny'), (b'rmb', b'Rmb'), (b'yen', b'Yen'), (b'jpy', b'Jpy'), (b'ruble', b'Ruble'), (b'rouble', b'Rouble'), (b'rub', b'Rub'), (b'won', b'Won'), (b'krw', b'Krw'), (b'bitcoin', b'Bitcoin'), (b'btc', b'Btc'), (b'file', b'File'), (b'file-text', b'File 
Text'), (b'sort-alpha-asc', b'Sort alpha Asc'), (b'sort-alpha-desc', b'Sort alpha Desc'), (b'sort-amount-asc', b'Sort amount Asc'), (b'sort-amount-desc', b'Sort amount Desc'), (b'sort-numeric-asc', b'Sort numeric Asc'), (b'sort-numeric-desc', b'Sort numeric Desc'), (b'thumbs-up', b'Thumbs Up'), (b'thumbs-down', b'Thumbs Down'), (b'youtube-square', b'Youtube Square'), (b'youtube', b'Youtube'), (b'xing', b'Xing'), (b'xing-square', b'Xing Square'), (b'youtube-play', b'Youtube Play'), (b'dropbox', b'Dropbox'), (b'stack-overflow', b'Stack Overflow'), (b'instagram', b'Instagram'), (b'flickr', b'Flickr'), (b'adn', b'Adn'), (b'bitbucket', b'Bitbucket'), (b'bitbucket-square', b'Bitbucket Square'), (b'tumblr', b'Tumblr'), (b'tumblr-square', b'Tumblr Square'), (b'long-arrow-down', b'Long arrow Down'), (b'long-arrow-up', b'Long arrow Up'), (b'long-arrow-left', b'Long arrow Left'), (b'long-arrow-right', b'Long arrow Right'), (b'apple', b'Apple'), (b'windows', b'Windows'), (b'android', b'Android'), (b'linux', b'Linux'), (b'dribbble', b'Dribbble'), (b'skype', b'Skype'), (b'foursquare', b'Foursquare'), (b'trello', b'Trello'), (b'female', b'Female'), (b'male', b'Male'), (b'gittip', b'Gittip'), (b'gratipay', b'Gratipay'), (b'sun-o', b'Sun o'), (b'moon-o', b'Moon o'), (b'archive', b'Archive'), (b'bug', b'Bug'), (b'vk', b'Vk'), (b'weibo', b'Weibo'), (b'renren', b'Renren'), (b'pagelines', b'Pagelines'), (b'stack-exchange', b'Stack Exchange'), (b'arrow-circle-o-right', b'Arrow circle-o Right'), (b'arrow-circle-o-left', b'Arrow circle-o Left'), (b'toggle-left', b'Toggle Left'), (b'caret-square-o-left', b'Caret square-o Left'), (b'dot-circle-o', b'Dot circle o'), (b'wheelchair', b'Wheelchair'), (b'vimeo-square', b'Vimeo Square'), (b'turkish-lira', b'Turkish Lira'), (b'try', b'Try'), (b'plus-square-o', b'Plus square o'), (b'space-shuttle', b'Space Shuttle'), (b'slack', b'Slack'), (b'envelope-square', b'Envelope Square'), (b'wordpress', b'Wordpress'), (b'openid', b'Openid'), (b'institution', b'Institution'), (b'bank', b'Bank'), (b'university', b'University'), (b'mortar-board', b'Mortar Board'), (b'graduation-cap', b'Graduation Cap'), (b'yahoo', b'Yahoo'), (b'google', b'Google'), (b'reddit', b'Reddit'), (b'reddit-square', b'Reddit Square'), (b'stumbleupon-circle', b'Stumbleupon Circle'), (b'stumbleupon', b'Stumbleupon'), (b'delicious', b'Delicious'), (b'digg', b'Digg'), (b'pied-piper', b'Pied Piper'), (b'pied-piper-alt', b'Pied piper Alt'), (b'drupal', b'Drupal'), (b'joomla', b'Joomla'), (b'language', b'Language'), (b'fax', b'Fax'), (b'building', b'Building'), (b'child', b'Child'), (b'paw', b'Paw'), (b'spoon', b'Spoon'), (b'cube', b'Cube'), (b'cubes', b'Cubes'), (b'behance', b'Behance'), (b'behance-square', b'Behance Square'), (b'steam', b'Steam'), (b'steam-square', b'Steam Square'), (b'recycle', b'Recycle'), (b'automobile', b'Automobile'), (b'car', b'Car'), (b'cab', b'Cab'), (b'taxi', b'Taxi'), (b'tree', b'Tree'), (b'spotify', b'Spotify'), (b'deviantart', b'Deviantart'), (b'soundcloud', b'Soundcloud'), (b'database', b'Database'), (b'file-pdf-o', b'File pdf o'), (b'file-word-o', b'File word o'), (b'file-excel-o', b'File excel o'), (b'file-powerpoint-o', b'File powerpoint o'), (b'file-photo-o', b'File photo o'), (b'file-picture-o', b'File picture o'), (b'file-image-o', b'File image o'), (b'file-zip-o', b'File zip o'), (b'file-archive-o', b'File archive o'), (b'file-sound-o', b'File sound o'), (b'file-audio-o', b'File audio o'), (b'file-movie-o', b'File movie o'), (b'file-video-o', b'File video o'), (b'file-code-o', 
b'File code o'), (b'vine', b'Vine'), (b'codepen', b'Codepen'), (b'jsfiddle', b'Jsfiddle'), (b'life-bouy', b'Life Bouy'), (b'life-buoy', b'Life Buoy'), (b'life-saver', b'Life Saver'), (b'support', b'Support'), (b'life-ring', b'Life Ring'), (b'circle-o-notch', b'Circle o Notch'), (b'ra', b'Ra'), (b'rebel', b'Rebel'), (b'ge', b'Ge'), (b'empire', b'Empire'), (b'git-square', b'Git Square'), (b'git', b'Git'), (b'hacker-news', b'Hacker News'), (b'tencent-weibo', b'Tencent Weibo'), (b'qq', b'Qq'), (b'wechat', b'Wechat'), (b'weixin', b'Weixin'), (b'send', b'Send'), (b'paper-plane', b'Paper Plane'), (b'send-o', b'Send o'), (b'paper-plane-o', b'Paper plane o'), (b'history', b'History'), (b'genderless', b'Genderless'), (b'circle-thin', b'Circle Thin'), (b'header', b'Header'), (b'paragraph', b'Paragraph'), (b'sliders', b'Sliders'), (b'share-alt', b'Share Alt'), (b'share-alt-square', b'Share alt Square'), (b'bomb', b'Bomb'), (b'soccer-ball-o', b'Soccer ball o'), (b'futbol-o', b'Futbol o'), (b'tty', b'Tty'), (b'binoculars', b'Binoculars'), (b'plug', b'Plug'), (b'slideshare', b'Slideshare'), (b'twitch', b'Twitch'), (b'yelp', b'Yelp'), (b'newspaper-o', b'Newspaper o'), (b'wifi', b'Wifi'), (b'calculator', b'Calculator'), (b'paypal', b'Paypal'), (b'google-wallet', b'Google Wallet'), (b'cc-visa', b'Cc Visa'), (b'cc-mastercard', b'Cc Mastercard'), (b'cc-discover', b'Cc Discover'), (b'cc-amex', b'Cc Amex'), (b'cc-paypal', b'Cc Paypal'), (b'cc-stripe', b'Cc Stripe'), (b'bell-slash', b'Bell Slash'), (b'bell-slash-o', b'Bell slash o'), (b'trash', b'Trash'), (b'copyright', b'Copyright'), (b'at', b'At'), (b'eyedropper', b'Eyedropper'), (b'paint-brush', b'Paint Brush'), (b'birthday-cake', b'Birthday Cake'), (b'area-chart', b'Area Chart'), (b'pie-chart', b'Pie Chart'), (b'line-chart', b'Line Chart'), (b'lastfm', b'Lastfm'), (b'lastfm-square', b'Lastfm Square'), (b'toggle-off', b'Toggle Off'), (b'toggle-on', b'Toggle On'), (b'bicycle', b'Bicycle'), (b'bus', b'Bus'), (b'ioxhost', b'Ioxhost'), (b'angellist', b'Angellist'), (b'cc', b'Cc'), (b'shekel', b'Shekel'), (b'sheqel', b'Sheqel'), (b'ils', b'Ils'), (b'meanpath', b'Meanpath'), (b'buysellads', b'Buysellads'), (b'connectdevelop', b'Connectdevelop'), (b'dashcube', b'Dashcube'), (b'forumbee', b'Forumbee'), (b'leanpub', b'Leanpub'), (b'sellsy', b'Sellsy'), (b'shirtsinbulk', b'Shirtsinbulk'), (b'simplybuilt', b'Simplybuilt'), (b'skyatlas', b'Skyatlas'), (b'cart-plus', b'Cart Plus'), (b'cart-arrow-down', b'Cart arrow Down'), (b'diamond', b'Diamond'), (b'ship', b'Ship'), (b'user-secret', b'User Secret'), (b'motorcycle', b'Motorcycle'), (b'street-view', b'Street View'), (b'heartbeat', b'Heartbeat'), (b'venus', b'Venus'), (b'mars', b'Mars'), (b'mercury', b'Mercury'), (b'transgender', b'Transgender'), (b'transgender-alt', b'Transgender Alt'), (b'venus-double', b'Venus Double'), (b'mars-double', b'Mars Double'), (b'venus-mars', b'Venus Mars'), (b'mars-stroke', b'Mars Stroke'), (b'mars-stroke-v', b'Mars stroke v'), (b'mars-stroke-h', b'Mars stroke h'), (b'neuter', b'Neuter'), (b'facebook-official', b'Facebook Official'), (b'pinterest-p', b'Pinterest p'), (b'whatsapp', b'Whatsapp'), (b'server', b'Server'), (b'user-plus', b'User Plus'), (b'user-times', b'User Times'), (b'hotel', b'Hotel'), (b'bed', b'Bed'), (b'viacoin', b'Viacoin'), (b'train', b'Train'), (b'subway', b'Subway'), (b'medium', b'Medium')])),
('icon_left', models.CharField(blank=True, max_length=255, null=True, help_text=b'Add icon to the left side of the field. Preview icons at http://fontawesome.io/icons/', choices=[(b'glass', b'Glass'), (b'music', b'Music'), (b'search', b'Search'), (b'envelope-o', b'Envelope O'), (b'heart', b'Heart'), (b'star', b'Star'), (b'star-o', b'Star o'), (b'user', b'User'), (b'film', b'Film'), (b'th-large', b'Th Large'), (b'th', b'Th'), (b'th-list', b'Th List'), (b'check', b'Check'), (b'remove', b'Remove'), (b'close', b'Close'), (b'times', b'Times'), (b'search-plus', b'Search Plus'), (b'search-minus', b'Search Minus'), (b'power-off', b'Power Off'), (b'signal', b'Signal'), (b'gear', b'Gear'), (b'cog', b'Cog'), (b'trash-o', b'Trash o'), (b'home', b'Home'), (b'file-o', b'File o'), (b'clock-o', b'Clock o'), (b'road', b'Road'), (b'download', b'Download'), (b'arrow-circle-o-down', b'Arrow circle-o Down'), (b'arrow-circle-o-up', b'Arrow circle-o Up'), (b'inbox', b'Inbox'), (b'play-circle-o', b'Play circle o'), (b'rotate-right', b'Rotate Right'), (b'repeat', b'Repeat'), (b'refresh', b'Refresh'), (b'list-alt', b'List Alt'), (b'lock', b'Lock'), (b'flag', b'Flag'), (b'headphones', b'Headphones'), (b'volume-off', b'Volume Off'), (b'volume-down', b'Volume Down'), (b'volume-up', b'Volume Up'), (b'qrcode', b'Qrcode'), (b'barcode', b'Barcode'), (b'tag', b'Tag'), (b'tags', b'Tags'), (b'book', b'Book'), (b'bookmark', b'Bookmark'), (b'print', b'Print'), (b'camera', b'Camera'), (b'font', b'Font'), (b'bold', b'Bold'), (b'italic', b'Italic'), (b'text-height', b'Text Height'), (b'text-width', b'Text Width'), (b'align-left', b'Align Left'), (b'align-center', b'Align Center'), (b'align-right', b'Align Right'), (b'align-justify', b'Align Justify'), (b'list', b'List'), (b'dedent', b'Dedent'), (b'outdent', b'Outdent'), (b'indent', b'Indent'), (b'video-camera', b'Video Camera'), (b'photo', b'Photo'), (b'image', b'Image'), (b'picture-o', b'Picture o'), (b'pencil', b'Pencil'), (b'map-marker', b'Map Marker'), (b'adjust', b'Adjust'), (b'tint', b'Tint'), (b'edit', b'Edit'), (b'pencil-square-o', b'Pencil square o'), (b'share-square-o', b'Share square o'), (b'check-square-o', b'Check square o'), (b'arrows', b'Arrows'), (b'step-backward', b'Step Backward'), (b'fast-backward', b'Fast Backward'), (b'backward', b'Backward'), (b'play', b'Play'), (b'pause', b'Pause'), (b'stop', b'Stop'), (b'forward', b'Forward'), (b'fast-forward', b'Fast Forward'), (b'step-forward', b'Step Forward'), (b'eject', b'Eject'), (b'chevron-left', b'Chevron Left'), (b'chevron-right', b'Chevron Right'), (b'plus-circle', b'Plus Circle'), (b'minus-circle', b'Minus Circle'), (b'times-circle', b'Times Circle'), (b'check-circle', b'Check Circle'), (b'question-circle', b'Question Circle'), (b'info-circle', b'Info Circle'), (b'crosshairs', b'Crosshairs'), (b'times-circle-o', b'Times circle o'), (b'check-circle-o', b'Check circle o'), (b'ban', b'Ban'), (b'arrow-left', b'Arrow Left'), (b'arrow-right', b'Arrow Right'), (b'arrow-up', b'Arrow Up'), (b'arrow-down', b'Arrow Down'), (b'mail-forward', b'Mail Forward'), (b'share', b'Share'), (b'expand', b'Expand'), (b'compress', b'Compress'), (b'plus', b'Plus'), (b'minus', b'Minus'), (b'asterisk', b'Asterisk'), (b'exclamation-circle', b'Exclamation Circle'), (b'gift', b'Gift'), (b'leaf', b'Leaf'), (b'fire', b'Fire'), (b'eye', b'Eye'), (b'eye-slash', b'Eye Slash'), (b'warning', b'Warning'), (b'exclamation-triangle', b'Exclamation Triangle'), (b'plane', b'Plane'), (b'calendar', b'Calendar'), (b'random', b'Random'), (b'comment', 
b'Comment'), (b'magnet', b'Magnet'), (b'chevron-up', b'Chevron Up'), (b'chevron-down', b'Chevron Down'), (b'retweet', b'Retweet'), (b'shopping-cart', b'Shopping Cart'), (b'folder', b'Folder'), (b'folder-open', b'Folder Open'), (b'arrows-v', b'Arrows v'), (b'arrows-h', b'Arrows h'), (b'bar-chart-o', b'Bar chart o'), (b'bar-chart', b'Bar Chart'), (b'twitter-square', b'Twitter Square'), (b'facebook-square', b'Facebook Square'), (b'camera-retro', b'Camera Retro'), (b'key', b'Key'), (b'gears', b'Gears'), (b'cogs', b'Cogs'), (b'comments', b'Comments'), (b'thumbs-o-up', b'Thumbs o Up'), (b'thumbs-o-down', b'Thumbs o Down'), (b'star-half', b'Star Half'), (b'heart-o', b'Heart o'), (b'sign-out', b'Sign Out'), (b'linkedin-square', b'Linkedin Square'), (b'thumb-tack', b'Thumb Tack'), (b'external-link', b'External Link'), (b'sign-in', b'Sign In'), (b'trophy', b'Trophy'), (b'github-square', b'Github Square'), (b'upload', b'Upload'), (b'lemon-o', b'Lemon o'), (b'phone', b'Phone'), (b'square-o', b'Square o'), (b'bookmark-o', b'Bookmark o'), (b'phone-square', b'Phone Square'), (b'twitter', b'Twitter'), (b'facebook-f', b'Facebook f'), (b'facebook', b'Facebook'), (b'github', b'Github'), (b'unlock', b'Unlock'), (b'credit-card', b'Credit Card'), (b'rss', b'Rss'), (b'hdd-o', b'Hdd o'), (b'bullhorn', b'Bullhorn'), (b'bell', b'Bell'), (b'certificate', b'Certificate'), (b'hand-o-right', b'Hand o Right'), (b'hand-o-left', b'Hand o Left'), (b'hand-o-up', b'Hand o Up'), (b'hand-o-down', b'Hand o Down'), (b'arrow-circle-left', b'Arrow circle Left'), (b'arrow-circle-right', b'Arrow circle Right'), (b'arrow-circle-up', b'Arrow circle Up'), (b'arrow-circle-down', b'Arrow circle Down'), (b'globe', b'Globe'), (b'wrench', b'Wrench'), (b'tasks', b'Tasks'), (b'filter', b'Filter'), (b'briefcase', b'Briefcase'), (b'arrows-alt', b'Arrows Alt'), (b'group', b'Group'), (b'users', b'Users'), (b'chain', b'Chain'), (b'link', b'Link'), (b'cloud', b'Cloud'), (b'flask', b'Flask'), (b'cut', b'Cut'), (b'scissors', b'Scissors'), (b'copy', b'Copy'), (b'files-o', b'Files o'), (b'paperclip', b'Paperclip'), (b'save', b'Save'), (b'floppy-o', b'Floppy o'), (b'square', b'Square'), (b'navicon', b'Navicon'), (b'reorder', b'Reorder'), (b'bars', b'Bars'), (b'list-ul', b'List Ul'), (b'list-ol', b'List Ol'), (b'strikethrough', b'Strikethrough'), (b'underline', b'Underline'), (b'table', b'Table'), (b'magic', b'Magic'), (b'truck', b'Truck'), (b'pinterest', b'Pinterest'), (b'pinterest-square', b'Pinterest Square'), (b'google-plus-square', b'Google plus Square'), (b'google-plus', b'Google Plus'), (b'money', b'Money'), (b'caret-down', b'Caret Down'), (b'caret-up', b'Caret Up'), (b'caret-left', b'Caret Left'), (b'caret-right', b'Caret Right'), (b'columns', b'Columns'), (b'unsorted', b'Unsorted'), (b'sort', b'Sort'), (b'sort-down', b'Sort Down'), (b'sort-desc', b'Sort Desc'), (b'sort-up', b'Sort Up'), (b'sort-asc', b'Sort Asc'), (b'envelope', b'Envelope'), (b'linkedin', b'Linkedin'), (b'rotate-left', b'Rotate Left'), (b'undo', b'Undo'), (b'legal', b'Legal'), (b'gavel', b'Gavel'), (b'dashboard', b'Dashboard'), (b'tachometer', b'Tachometer'), (b'comment-o', b'Comment o'), (b'comments-o', b'Comments o'), (b'flash', b'Flash'), (b'bolt', b'Bolt'), (b'sitemap', b'Sitemap'), (b'umbrella', b'Umbrella'), (b'paste', b'Paste'), (b'clipboard', b'Clipboard'), (b'lightbulb-o', b'Lightbulb o'), (b'exchange', b'Exchange'), (b'cloud-download', b'Cloud Download'), (b'cloud-upload', b'Cloud Upload'), (b'user-md', b'User Md'), (b'stethoscope', b'Stethoscope'), (b'suitcase', 
b'Suitcase'), (b'bell-o', b'Bell o'), (b'coffee', b'Coffee'), (b'cutlery', b'Cutlery'), (b'file-text-o', b'File text o'), (b'building-o', b'Building o'), (b'hospital-o', b'Hospital o'), (b'ambulance', b'Ambulance'), (b'medkit', b'Medkit'), (b'fighter-jet', b'Fighter Jet'), (b'beer', b'Beer'), (b'h-square', b'h Square'), (b'plus-square', b'Plus Square'), (b'angle-double-left', b'Angle double Left'), (b'angle-double-right', b'Angle double Right'), (b'angle-double-up', b'Angle double Up'), (b'angle-double-down', b'Angle double Down'), (b'angle-left', b'Angle Left'), (b'angle-right', b'Angle Right'), (b'angle-up', b'Angle Up'), (b'angle-down', b'Angle Down'), (b'desktop', b'Desktop'), (b'laptop', b'Laptop'), (b'tablet', b'Tablet'), (b'mobile-phone', b'Mobile Phone'), (b'mobile', b'Mobile'), (b'circle-o', b'Circle o'), (b'quote-left', b'Quote Left'), (b'quote-right', b'Quote Right'), (b'spinner', b'Spinner'), (b'circle', b'Circle'), (b'mail-reply', b'Mail Reply'), (b'reply', b'Reply'), (b'github-alt', b'Github Alt'), (b'folder-o', b'Folder o'), (b'folder-open-o', b'Folder open o'), (b'smile-o', b'Smile o'), (b'frown-o', b'Frown o'), (b'meh-o', b'Meh o'), (b'gamepad', b'Gamepad'), (b'keyboard-o', b'Keyboard o'), (b'flag-o', b'Flag o'), (b'flag-checkered', b'Flag Checkered'), (b'terminal', b'Terminal'), (b'code', b'Code'), (b'mail-reply-all', b'Mail reply All'), (b'reply-all', b'Reply All'), (b'star-half-empty', b'Star half Empty'), (b'star-half-full', b'Star half Full'), (b'star-half-o', b'Star half o'), (b'location-arrow', b'Location Arrow'), (b'crop', b'Crop'), (b'code-fork', b'Code Fork'), (b'unlink', b'Unlink'), (b'chain-broken', b'Chain Broken'), (b'question', b'Question'), (b'info', b'Info'), (b'exclamation', b'Exclamation'), (b'superscript', b'Superscript'), (b'subscript', b'Subscript'), (b'eraser', b'Eraser'), (b'puzzle-piece', b'Puzzle Piece'), (b'microphone', b'Microphone'), (b'microphone-slash', b'Microphone Slash'), (b'shield', b'Shield'), (b'calendar-o', b'Calendar o'), (b'fire-extinguisher', b'Fire Extinguisher'), (b'rocket', b'Rocket'), (b'maxcdn', b'Maxcdn'), (b'chevron-circle-left', b'Chevron circle Left'), (b'chevron-circle-right', b'Chevron circle Right'), (b'chevron-circle-up', b'Chevron circle Up'), (b'chevron-circle-down', b'Chevron circle Down'), (b'html5', b'Html5'), (b'css3', b'Css3'), (b'anchor', b'Anchor'), (b'unlock-alt', b'Unlock Alt'), (b'bullseye', b'Bullseye'), (b'ellipsis-h', b'Ellipsis h'), (b'ellipsis-v', b'Ellipsis v'), (b'rss-square', b'Rss Square'), (b'play-circle', b'Play Circle'), (b'ticket', b'Ticket'), (b'minus-square', b'Minus Square'), (b'minus-square-o', b'Minus square o'), (b'level-up', b'Level Up'), (b'level-down', b'Level Down'), (b'check-square', b'Check Square'), (b'pencil-square', b'Pencil Square'), (b'external-link-square', b'External link Square'), (b'share-square', b'Share Square'), (b'compass', b'Compass'), (b'toggle-down', b'Toggle Down'), (b'caret-square-o-down', b'Caret square-o Down'), (b'toggle-up', b'Toggle Up'), (b'caret-square-o-up', b'Caret square-o Up'), (b'toggle-right', b'Toggle Right'), (b'caret-square-o-right', b'Caret square-o Right'), (b'euro', b'Euro'), (b'eur', b'Eur'), (b'gbp', b'Gbp'), (b'dollar', b'Dollar'), (b'usd', b'Usd'), (b'rupee', b'Rupee'), (b'inr', b'Inr'), (b'cny', b'Cny'), (b'rmb', b'Rmb'), (b'yen', b'Yen'), (b'jpy', b'Jpy'), (b'ruble', b'Ruble'), (b'rouble', b'Rouble'), (b'rub', b'Rub'), (b'won', b'Won'), (b'krw', b'Krw'), (b'bitcoin', b'Bitcoin'), (b'btc', b'Btc'), (b'file', b'File'), (b'file-text', b'File 
Text'), (b'sort-alpha-asc', b'Sort alpha Asc'), (b'sort-alpha-desc', b'Sort alpha Desc'), (b'sort-amount-asc', b'Sort amount Asc'), (b'sort-amount-desc', b'Sort amount Desc'), (b'sort-numeric-asc', b'Sort numeric Asc'), (b'sort-numeric-desc', b'Sort numeric Desc'), (b'thumbs-up', b'Thumbs Up'), (b'thumbs-down', b'Thumbs Down'), (b'youtube-square', b'Youtube Square'), (b'youtube', b'Youtube'), (b'xing', b'Xing'), (b'xing-square', b'Xing Square'), (b'youtube-play', b'Youtube Play'), (b'dropbox', b'Dropbox'), (b'stack-overflow', b'Stack Overflow'), (b'instagram', b'Instagram'), (b'flickr', b'Flickr'), (b'adn', b'Adn'), (b'bitbucket', b'Bitbucket'), (b'bitbucket-square', b'Bitbucket Square'), (b'tumblr', b'Tumblr'), (b'tumblr-square', b'Tumblr Square'), (b'long-arrow-down', b'Long arrow Down'), (b'long-arrow-up', b'Long arrow Up'), (b'long-arrow-left', b'Long arrow Left'), (b'long-arrow-right', b'Long arrow Right'), (b'apple', b'Apple'), (b'windows', b'Windows'), (b'android', b'Android'), (b'linux', b'Linux'), (b'dribbble', b'Dribbble'), (b'skype', b'Skype'), (b'foursquare', b'Foursquare'), (b'trello', b'Trello'), (b'female', b'Female'), (b'male', b'Male'), (b'gittip', b'Gittip'), (b'gratipay', b'Gratipay'), (b'sun-o', b'Sun o'), (b'moon-o', b'Moon o'), (b'archive', b'Archive'), (b'bug', b'Bug'), (b'vk', b'Vk'), (b'weibo', b'Weibo'), (b'renren', b'Renren'), (b'pagelines', b'Pagelines'), (b'stack-exchange', b'Stack Exchange'), (b'arrow-circle-o-right', b'Arrow circle-o Right'), (b'arrow-circle-o-left', b'Arrow circle-o Left'), (b'toggle-left', b'Toggle Left'), (b'caret-square-o-left', b'Caret square-o Left'), (b'dot-circle-o', b'Dot circle o'), (b'wheelchair', b'Wheelchair'), (b'vimeo-square', b'Vimeo Square'), (b'turkish-lira', b'Turkish Lira'), (b'try', b'Try'), (b'plus-square-o', b'Plus square o'), (b'space-shuttle', b'Space Shuttle'), (b'slack', b'Slack'), (b'envelope-square', b'Envelope Square'), (b'wordpress', b'Wordpress'), (b'openid', b'Openid'), (b'institution', b'Institution'), (b'bank', b'Bank'), (b'university', b'University'), (b'mortar-board', b'Mortar Board'), (b'graduation-cap', b'Graduation Cap'), (b'yahoo', b'Yahoo'), (b'google', b'Google'), (b'reddit', b'Reddit'), (b'reddit-square', b'Reddit Square'), (b'stumbleupon-circle', b'Stumbleupon Circle'), (b'stumbleupon', b'Stumbleupon'), (b'delicious', b'Delicious'), (b'digg', b'Digg'), (b'pied-piper', b'Pied Piper'), (b'pied-piper-alt', b'Pied piper Alt'), (b'drupal', b'Drupal'), (b'joomla', b'Joomla'), (b'language', b'Language'), (b'fax', b'Fax'), (b'building', b'Building'), (b'child', b'Child'), (b'paw', b'Paw'), (b'spoon', b'Spoon'), (b'cube', b'Cube'), (b'cubes', b'Cubes'), (b'behance', b'Behance'), (b'behance-square', b'Behance Square'), (b'steam', b'Steam'), (b'steam-square', b'Steam Square'), (b'recycle', b'Recycle'), (b'automobile', b'Automobile'), (b'car', b'Car'), (b'cab', b'Cab'), (b'taxi', b'Taxi'), (b'tree', b'Tree'), (b'spotify', b'Spotify'), (b'deviantart', b'Deviantart'), (b'soundcloud', b'Soundcloud'), (b'database', b'Database'), (b'file-pdf-o', b'File pdf o'), (b'file-word-o', b'File word o'), (b'file-excel-o', b'File excel o'), (b'file-powerpoint-o', b'File powerpoint o'), (b'file-photo-o', b'File photo o'), (b'file-picture-o', b'File picture o'), (b'file-image-o', b'File image o'), (b'file-zip-o', b'File zip o'), (b'file-archive-o', b'File archive o'), (b'file-sound-o', b'File sound o'), (b'file-audio-o', b'File audio o'), (b'file-movie-o', b'File movie o'), (b'file-video-o', b'File video o'), (b'file-code-o', 
b'File code o'), (b'vine', b'Vine'), (b'codepen', b'Codepen'), (b'jsfiddle', b'Jsfiddle'), (b'life-bouy', b'Life Bouy'), (b'life-buoy', b'Life Buoy'), (b'life-saver', b'Life Saver'), (b'support', b'Support'), (b'life-ring', b'Life Ring'), (b'circle-o-notch', b'Circle o Notch'), (b'ra', b'Ra'), (b'rebel', b'Rebel'), (b'ge', b'Ge'), (b'empire', b'Empire'), (b'git-square', b'Git Square'), (b'git', b'Git'), (b'hacker-news', b'Hacker News'), (b'tencent-weibo', b'Tencent Weibo'), (b'qq', b'Qq'), (b'wechat', b'Wechat'), (b'weixin', b'Weixin'), (b'send', b'Send'), (b'paper-plane', b'Paper Plane'), (b'send-o', b'Send o'), (b'paper-plane-o', b'Paper plane o'), (b'history', b'History'), (b'genderless', b'Genderless'), (b'circle-thin', b'Circle Thin'), (b'header', b'Header'), (b'paragraph', b'Paragraph'), (b'sliders', b'Sliders'), (b'share-alt', b'Share Alt'), (b'share-alt-square', b'Share alt Square'), (b'bomb', b'Bomb'), (b'soccer-ball-o', b'Soccer ball o'), (b'futbol-o', b'Futbol o'), (b'tty', b'Tty'), (b'binoculars', b'Binoculars'), (b'plug', b'Plug'), (b'slideshare', b'Slideshare'), (b'twitch', b'Twitch'), (b'yelp', b'Yelp'), (b'newspaper-o', b'Newspaper o'), (b'wifi', b'Wifi'), (b'calculator', b'Calculator'), (b'paypal', b'Paypal'), (b'google-wallet', b'Google Wallet'), (b'cc-visa', b'Cc Visa'), (b'cc-mastercard', b'Cc Mastercard'), (b'cc-discover', b'Cc Discover'), (b'cc-amex', b'Cc Amex'), (b'cc-paypal', b'Cc Paypal'), (b'cc-stripe', b'Cc Stripe'), (b'bell-slash', b'Bell Slash'), (b'bell-slash-o', b'Bell slash o'), (b'trash', b'Trash'), (b'copyright', b'Copyright'), (b'at', b'At'), (b'eyedropper', b'Eyedropper'), (b'paint-brush', b'Paint Brush'), (b'birthday-cake', b'Birthday Cake'), (b'area-chart', b'Area Chart'), (b'pie-chart', b'Pie Chart'), (b'line-chart', b'Line Chart'), (b'lastfm', b'Lastfm'), (b'lastfm-square', b'Lastfm Square'), (b'toggle-off', b'Toggle Off'), (b'toggle-on', b'Toggle On'), (b'bicycle', b'Bicycle'), (b'bus', b'Bus'), (b'ioxhost', b'Ioxhost'), (b'angellist', b'Angellist'), (b'cc', b'Cc'), (b'shekel', b'Shekel'), (b'sheqel', b'Sheqel'), (b'ils', b'Ils'), (b'meanpath', b'Meanpath'), (b'buysellads', b'Buysellads'), (b'connectdevelop', b'Connectdevelop'), (b'dashcube', b'Dashcube'), (b'forumbee', b'Forumbee'), (b'leanpub', b'Leanpub'), (b'sellsy', b'Sellsy'), (b'shirtsinbulk', b'Shirtsinbulk'), (b'simplybuilt', b'Simplybuilt'), (b'skyatlas', b'Skyatlas'), (b'cart-plus', b'Cart Plus'), (b'cart-arrow-down', b'Cart arrow Down'), (b'diamond', b'Diamond'), (b'ship', b'Ship'), (b'user-secret', b'User Secret'), (b'motorcycle', b'Motorcycle'), (b'street-view', b'Street View'), (b'heartbeat', b'Heartbeat'), (b'venus', b'Venus'), (b'mars', b'Mars'), (b'mercury', b'Mercury'), (b'transgender', b'Transgender'), (b'transgender-alt', b'Transgender Alt'), (b'venus-double', b'Venus Double'), (b'mars-double', b'Mars Double'), (b'venus-mars', b'Venus Mars'), (b'mars-stroke', b'Mars Stroke'), (b'mars-stroke-v', b'Mars stroke v'), (b'mars-stroke-h', b'Mars stroke h'), (b'neuter', b'Neuter'), (b'facebook-official', b'Facebook Official'), (b'pinterest-p', b'Pinterest p'), (b'whatsapp', b'Whatsapp'), (b'server', b'Server'), (b'user-plus', b'User Plus'), (b'user-times', b'User Times'), (b'hotel', b'Hotel'), (b'bed', b'Bed'), (b'viacoin', b'Viacoin'), (b'train', b'Train'), (b'subway', b'Subway'), (b'medium', b'Medium')])),
('inset_text_right', models.CharField(help_text=b'Inset field with content on the right', max_length=255, null=True, blank=True)),
('inset_text_left', models.CharField(help_text=b'Inset field with content on the left', max_length=255, null=True, blank=True)),
('hide', models.BooleanField(default=False, help_text=b'Hide field from form without deleting any data entered by users. Use this instead of deleting a form field.')),
('error_message', models.CharField(help_text=b'Message to display when this field is invalid.', max_length=255, null=True, blank=True)),
('third_party_id', models.CharField(help_text=b'An identifier to integrate the form with another system', max_length=255, null=True, blank=True)),
('created_by', models.ForeignKey(related_name='form_formfield_created_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='form_formfield_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('parent', models.ForeignKey(blank=True, to='form.Form', null=True)),
],
options={
'abstract': False,
'verbose_name': 'Form Field',
'verbose_name_plural': 'Form Fields',
},
),
migrations.AddField(
model_name='formentry',
name='tags',
field=models.ManyToManyField(to='form.FormEntryTag', blank=True),
),
migrations.AddField(
model_name='fieldentry',
name='form_entry',
field=models.ForeignKey(blank=True, to='form.FormEntry', null=True),
),
migrations.AddField(
model_name='fieldentry',
name='form_field',
field=models.ForeignKey(blank=True, to='form.FormField', null=True),
),
migrations.AddField(
model_name='fieldentry',
name='modified_by',
field=models.ForeignKey(related_name='form_fieldentry_modified_by', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
]
| 233.908517 | 17,592 | 0.647898 | 11,451 | 74,149 | 4.122609 | 0.076151 | 0.010507 | 0.031457 | 0.032643 | 0.91351 | 0.90705 | 0.90366 | 0.888112 | 0.873284 | 0.862587 | 0 | 0.00652 | 0.137453 | 74,149 | 316 | 17,593 | 234.648734 | 0.731601 | 0.000283 | 0 | 0.574194 | 0 | 0.074194 | 0.471994 | 0.016431 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.009677 | 0.012903 | 0 | 0.022581 | 0.006452 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
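The FormField definition above embeds a small data convention in its `choices` help text: options are comma-separated, and an option that itself contains commas is wrapped in backtick characters. A minimal parser sketch for that convention follows (a hypothetical helper for illustration only, not part of the migration or of the project's code):

def parse_choices(raw):
    """Split a FormField ``choices`` string on commas, honoring the
    backtick quoting described in the help_text above (assumption:
    backticks only ever delimit whole options)."""
    options, buf, quoted = [], [], False
    for ch in raw:
        if ch == '`':
            quoted = not quoted  # toggle quoted mode; drop the backtick itself
        elif ch == ',' and not quoted:
            options.append(''.join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    if buf:
        options.append(''.join(buf).strip())
    return options

# A quoted option keeps its internal comma; unquoted commas still split.
assert parse_choices('Blue,`Red, dark`,Green') == ['Blue', 'Red, dark', 'Green']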
487754e4d5d82d9c6c8682ebcb9e4cf0a57b904a | 99 | py | Python | {{cookiecutter.project_name}}/apps/token/constants/jwt.py | DemonXD/fast-api-project-template | 10643ab7385f9c220953b297d437a1187401f2c6 | [
"MIT"
] | 50 | 2019-06-25T23:30:35.000Z | 2022-02-14T14:12:41.000Z | {{cookiecutter.project_name}}/apps/token/constants/jwt.py | DemonXD/fast-api-project-template | 10643ab7385f9c220953b297d437a1187401f2c6 | [
"MIT"
] | 2 | 2019-05-22T15:28:12.000Z | 2020-03-15T23:12:28.000Z | {{cookiecutter.project_name}}/apps/token/constants/jwt.py | DemonXD/fast-api-project-template | 10643ab7385f9c220953b297d437a1187401f2c6 | [
"MIT"
] | 8 | 2019-12-24T17:36:48.000Z | 2022-03-01T09:47:11.000Z | # -*- coding: utf-8 -*-
JWT_REGEX = r'^{} [A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$'
| 24.75 | 73 | 0.414141 | 18 | 99 | 2.055556 | 0.555556 | 0.243243 | 0.405405 | 0.486486 | 0.486486 | 0.486486 | 0.486486 | 0.486486 | 0 | 0 | 0 | 0.078652 | 0.10101 | 99 | 3 | 74 | 33 | 0.337079 | 0.212121 | 0 | 0 | 0 | 1 | 0.763158 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
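JWT_REGEX above leaves a `{}` placeholder for the auth scheme so callers can fill it in before compiling. A small usage sketch (the 'Bearer' scheme and the token value are illustrative assumptions, not part of the template):

import re

# Restated from the constants module above; '{}' is filled with the scheme.
JWT_REGEX = r'^{} [A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$'

bearer_jwt = re.compile(JWT_REGEX.format('Bearer'))

# Matches a header value with the three dot-separated base64url JWT segments.
assert bearer_jwt.match('Bearer abc123.def456.ghi789') is not None
assert bearer_jwt.match('Basic abc123.def456.ghi789') is None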
488998ee0840f7220e2957d05cb6158bf345e59b | 3,371 | py | Python | examples/self_supervised/datasets.py | Thiefwerty/catalyst | 58c4e0e3ca3928f7402cfc750fbc9a77e44a2b66 | [
"Apache-2.0"
] | 2,693 | 2019-01-23T19:16:12.000Z | 2022-03-31T02:12:42.000Z | examples/self_supervised/datasets.py | Thiefwerty/catalyst | 58c4e0e3ca3928f7402cfc750fbc9a77e44a2b66 | [
"Apache-2.0"
] | 763 | 2019-01-22T20:12:56.000Z | 2022-03-27T18:36:10.000Z | examples/self_supervised/datasets.py | Thiefwerty/catalyst | 58c4e0e3ca3928f7402cfc750fbc9a77e44a2b66 | [
"Apache-2.0"
] | 445 | 2019-01-23T17:07:09.000Z | 2022-03-30T05:38:45.000Z | from torchvision import datasets, transforms
DATASETS = {
"MNIST": {
"dataset": datasets.MNIST,
"in_size": 28,
"in_channels": 1,
"train_transform": transforms.Compose(
[
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
]
),
"valid_transform": transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
]
),
},
"CIFAR-10": {
"dataset": datasets.CIFAR10,
"in_size": 32,
"in_channels": 3,
"train_transform": transforms.Compose(
[
transforms.RandomApply(
[
transforms.ColorJitter(
brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1
)
],
p=0.8,
),
transforms.RandomGrayscale(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
),
"valid_transform": transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
),
},
"CIFAR-100": {
"dataset": datasets.CIFAR100,
"in_size": 32,
"in_channels": 3,
"train_transform": transforms.Compose(
[
transforms.RandomApply(
[
transforms.ColorJitter(
brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1
)
],
p=0.8,
),
transforms.RandomGrayscale(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
),
"valid_transform": transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
),
},
"STL10": {
"dataset": datasets.STL10,
"in_size": 96,
"in_channels": 3,
"train_transform": transforms.Compose(
[
transforms.RandomApply(
[
transforms.ColorJitter(
brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1
)
],
p=0.8,
),
transforms.RandomGrayscale(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
transforms.Normalize((0.43, 0.42, 0.39), (0.27, 0.26, 0.27)),
]
),
"valid_transform": transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.43, 0.42, 0.39), (0.27, 0.26, 0.27)),
]
),
},
}
| 32.413462 | 89 | 0.426283 | 284 | 3,371 | 5.003521 | 0.18662 | 0.014075 | 0.146376 | 0.202674 | 0.870514 | 0.848698 | 0.848698 | 0.848698 | 0.836031 | 0.765658 | 0 | 0.13106 | 0.443192 | 3,371 | 103 | 90 | 32.728155 | 0.625999 | 0 | 0 | 0.607843 | 0 | 0 | 0.073272 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009804 | 0 | 0.009804 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
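The DATASETS registry above keys each benchmark to its torchvision class, input geometry, and train/valid transforms. A hedged consumption sketch (data root, batch size, and worker count are placeholders; note that STL10's constructor takes split= rather than the train= flag used here, so the CIFAR-10 entry is shown):

from torch.utils.data import DataLoader

cfg = DATASETS["CIFAR-10"]  # assumes the registry above is importable
train_set = cfg["dataset"](
    root="./data", train=True, download=True, transform=cfg["train_transform"]
)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)

# Each batch is [batch, in_channels, in_size, in_size] after the transforms.
images, targets = next(iter(loader))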
6fdb4189567973e56024bb33c350d32523c861b7 | 190 | py | Python | tests/test_import.py | movermeyer/py_smartyparse | 999b4cebe87a88072608a7c738cc71c4f956dde5 | [
"Unlicense"
] | 15 | 2016-02-04T00:12:03.000Z | 2018-10-02T09:56:27.000Z | tests/test_import.py | movermeyer/py_smartyparse | 999b4cebe87a88072608a7c738cc71c4f956dde5 | [
"Unlicense"
] | 1 | 2016-02-04T18:27:55.000Z | 2016-02-04T19:43:06.000Z | tests/test_import.py | movermeyer/py_smartyparse | 999b4cebe87a88072608a7c738cc71c4f956dde5 | [
"Unlicense"
] | 3 | 2016-02-05T12:51:02.000Z | 2018-03-05T01:03:45.000Z | def test():
    import smartyparse
    from smartyparse import parsers
    from smartyparse import SmartyParser
    from smartyparse import ParseHelper

if __name__ == '__main__':
    test() | 23.75 | 40 | 0.731579 | 20 | 190 | 6.55 | 0.55 | 0.343511 | 0.480916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215789 | 190 | 8 | 41 | 23.75 | 0.879195 | 0 | 0 | 0 | 0 | 0 | 0.041885 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.571429 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7
82fc8f731de42134d08209f3d5c97622106b42f5 | 13,181 | py | Python | tests/gold_tests/post/post-continue.test.py | cmcfarlen/trafficserver | 2aa1d3106398eb082e5a454212b0273c63d5f69d | [
"Apache-2.0"
] | 1,351 | 2015-01-03T08:25:40.000Z | 2022-03-31T09:14:08.000Z | tests/gold_tests/post/post-continue.test.py | cmcfarlen/trafficserver | 2aa1d3106398eb082e5a454212b0273c63d5f69d | [
"Apache-2.0"
] | 7,009 | 2015-01-14T16:22:45.000Z | 2022-03-31T17:18:04.000Z | tests/gold_tests/post/post-continue.test.py | cmcfarlen/trafficserver | 2aa1d3106398eb082e5a454212b0273c63d5f69d | [
"Apache-2.0"
] | 901 | 2015-01-11T19:21:08.000Z | 2022-03-18T18:21:33.000Z | '''
'''
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
# ----
# Setup Test
# ----
Test.Summary = '''
Test the Expect header in post
'''
# Require HTTP/2 enabled Curl
Test.SkipUnless(
Condition.HasCurlFeature('http2'),
)
Test.ContinueOnFail = True
# ----
# Setup httpbin Origin Server
# ----
httpbin = Test.MakeHttpBinServer("httpbin")
# ----
# Setup ATS
# ----
ts = Test.MakeATSProcess("ts", select_ports=True, enable_tls=True, enable_cache=False)
ts2 = Test.MakeATSProcess("ts2", select_ports=True, enable_tls=True, enable_cache=False)
# add ssl materials like key, certificates for the server
ts.addDefaultSSLFiles()
ts2.addDefaultSSLFiles()
ts.Disk.remap_config.AddLine(
'map / http://127.0.0.1:{0}'.format(httpbin.Variables.Port)
)
ts.Disk.ssl_multicert_config.AddLine(
'dest_ip=* ssl_cert_name=server.pem ssl_key_name=server.key'
)
ts.Disk.records_config.update({
'proxy.config.ssl.server.cert.path': '{0}'.format(ts.Variables.SSLDir),
'proxy.config.ssl.server.private_key.path': '{0}'.format(ts.Variables.SSLDir),
'proxy.config.diags.debug.enabled': 1,
'proxy.config.diags.debug.tags': 'http',
})
ts2.Disk.remap_config.AddLine(
'map / http://127.0.0.1:{0}'.format(httpbin.Variables.Port)
)
ts2.Disk.ssl_multicert_config.AddLine(
'dest_ip=* ssl_cert_name=server.pem ssl_key_name=server.key'
)
ts2.Disk.records_config.update({
'proxy.config.ssl.server.cert.path': '{0}'.format(ts.Variables.SSLDir),
'proxy.config.ssl.server.private_key.path': '{0}'.format(ts.Variables.SSLDir),
'proxy.config.diags.debug.enabled': 0,
'proxy.config.diags.debug.tags': 'http',
'proxy.config.http.send_100_continue_response': 1
})
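# Unlike ts, ts2 sets proxy.config.http.send_100_continue_response=1, so it
# answers "100 Continue" itself immediately rather than waiting for the origin.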
big_post_body = "0123456789" * 131070
big_post_body_file = open(os.path.join(Test.RunDirectory, "big_post_body"), "w")
big_post_body_file.write(big_post_body)
big_post_body_file.close()
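# 10 bytes * 131070 repetitions = 1,310,700 bytes (~1.3 MB), written to disk so
# curl can post it with -d @big_post_body.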
test_run = Test.AddTestRun("http1.1 POST small body with Expect header")
test_run.Processes.Default.StartBefore(httpbin, ready=When.PortOpen(httpbin.Variables.Port))
test_run.Processes.Default.StartBefore(Test.Processes.ts)
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: 100-continue" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("Expect: 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST large body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: 100-continue" -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("Expect: 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST small body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect:" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/1.1 100 Continue", "Does not have Expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("Expect: 100-continue", "Does not have Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST large body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: " -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/1.1 100 Continue", "Does not have Expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("Expect: 100-continue", "Does not have Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST small body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: 100-continue" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST large body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: 100-continue" -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST small body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: " -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST large body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: " -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts
test_run.Processes.Default.ReturnCode = 0
# Do them all again against the TS that will return 100-continue immediately
test_run = Test.AddTestRun("http1.1 POST small body with Expect header")
test_run.Processes.Default.StartBefore(Test.Processes.ts2)
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: 100-continue" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("Expect: 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST large body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: 100-continue" -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("Expect: 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST small body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect:" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("Expect 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http1.1 POST large body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http1.1 -H "Expect: " -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h1.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/1.1 100 Continue", "Has Expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("Expect 100-continue", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST small body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: 100-continue" -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST large body with Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: 100-continue" -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ContainsExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST small body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: " -d "small body" -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
test_run = Test.AddTestRun("http2 POST large body w/o Expect header")
test_run.Processes.Default.Command = 'curl -v -o /dev/null --http2 -H "Expect: " -d @big_post_body -k https://127.0.0.1:{0}/post'.format(
ts2.Variables.ssl_port)
test_run.Processes.Default.Streams.All = "gold/post-h2.gold"
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("xpect: 100-continue", "Has expect header")
test_run.Processes.Default.Streams.All += Testers.ExcludesExpression("HTTP/2 100", "Has Expect header")
test_run.StillRunningAfter = httpbin
test_run.StillRunningAfter = ts2
test_run.Processes.Default.ReturnCode = 0
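# In all, 16 runs: {HTTP/1.1, HTTP/2} x {small, large body} x {with, without
# Expect} against both the pass-through proxy (ts) and the immediate-100 proxy (ts2).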
| 54.020492 | 151 | 0.76322 | 1,967 | 13,181 | 5.009151 | 0.097102 | 0.093068 | 0.134781 | 0.193748 | 0.87415 | 0.873237 | 0.86735 | 0.861261 | 0.861261 | 0.852329 | 0 | 0.034225 | 0.095592 | 13,181 | 243 | 152 | 54.242798 | 0.792299 | 0.076094 | 0 | 0.825397 | 0 | 0.084656 | 0.343354 | 0.033438 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.005291 | 0 | 0.005291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d21bac637049d04e19a7f7352e858dfe06193631 | 97,065 | py | Python | buidl/buidl/test/test_psbt.py | rugrah/ru | ebe5451709ebcc94e58f4de368fd66cc91c92d21 | [
"Unlicense"
] | null | null | null | buidl/buidl/test/test_psbt.py | rugrah/ru | ebe5451709ebcc94e58f4de368fd66cc91c92d21 | [
"Unlicense"
] | null | null | null | buidl/buidl/test/test_psbt.py | rugrah/ru | ebe5451709ebcc94e58f4de368fd66cc91c92d21 | [
"Unlicense"
] | null | null | null | from unittest import TestCase
from io import BytesIO
from buidl.ecc import PrivateKey
from buidl.hd import HDPrivateKey
from buidl.helper import serialize_binary_path, encode_varstr, SIGHASH_ALL, read_varstr
from buidl.psbt import PSBT, NamedHDPublicKey
from buidl.script import RedeemScript, Script, WitnessScript
from buidl.tx import Tx, TxIn, TxOut
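# The PSBTTest cases below each pin one stage of the PSBT lifecycle
# (create / update / sign / finalize / final_tx) against a known-good hex
# vector, across p2pkh, p2sh, p2wpkh, p2wsh, and the nested p2sh-segwit forms.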
class NamedHDPublicKeyTest(TestCase):
def test_redeem_script_lookup(self):
hex_named_hd = "4f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080"
stream = BytesIO(bytes.fromhex(hex_named_hd))
named_hd = NamedHDPublicKey.parse(read_varstr(stream), stream)
redeem_script_lookup = named_hd.redeem_script_lookup(
max_external=1, max_internal=1
)
want = {
bytes.fromhex("e2e642a0ab2cd9a77ae21e7f66610bc7e6647788"): RedeemScript(
[0, bytes.fromhex("9a9bfaf8ef6c4b061a30e8e162da3458cfa122c6")]
),
bytes.fromhex("df71c379eef82782c8f88b5228a9caf3f1ca3ecb"): RedeemScript(
[0, bytes.fromhex("b0c0277be1a8ee3e709e279d47eda9ed1058e5fc")]
),
bytes.fromhex("fad70562a3a2f5fdaeacfac35da9411b8d42934f"): RedeemScript(
[0, bytes.fromhex("c9bb368409c824f0a900f2f9b935d6de8c8b3ef7")]
),
bytes.fromhex("7d3dc1a56742708417819e201a4c572887e9555c"): RedeemScript(
[0, bytes.fromhex("1d36b1aa0b873fc919d3823e8bd162eba62ecf5d")]
),
}
self.assertEqual(redeem_script_lookup, want)
class PSBTTest(TestCase):
def test_create(self):
tx_in_0 = TxIn(
bytes.fromhex(
"75ddabb27b8845f5247975c8a5ba7c6f336c4570708ebe230caf6db5217ae858"
),
0,
)
tx_in_1 = TxIn(
bytes.fromhex(
"1dea7cd05979072a3578cab271c02244ea8a090bbb46aa680a65ecd027048d83"
),
1,
)
tx_out_0 = TxOut(
149990000,
Script([0, bytes.fromhex("d85c2b71d0060b09c9886aeb815e50991dda124d")]),
)
tx_out_1 = TxOut(
100000000,
Script([0, bytes.fromhex("00aea9a2e5f0f876a588df5546e8742d1d87008f")]),
)
tx_obj = Tx(2, [tx_in_0, tx_in_1], [tx_out_0, tx_out_1], 0)
psbt = PSBT.create(tx_obj)
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAAAAAA="
self.assertEqual(psbt.serialize_base64(), want)
def test_update_p2pkh(self):
psbt_obj = PSBT.parse(
BytesIO(
bytes.fromhex(
"70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac0000000000000000"
)
)
)
hex_named_hd = "4f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080"
stream = BytesIO(bytes.fromhex(hex_named_hd))
named_hd = NamedHDPublicKey.parse(read_varstr(stream), stream)
psbt_obj.tx_obj.testnet = True
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
pubkey_lookup = named_hd.bip44_lookup()
psbt_obj.update(tx_lookup, pubkey_lookup)
want = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a2271800220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c00008001000080000000800000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_sign_p2pkh(self):
hex_psbt = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a2271800220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c00008001000080000000800000000000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPeL2qb9uLkgTKhLHSUUHsxmr2fcGFRBVh6EiBrxHZNTagx3kDXN4yjHsYV5rUYZhpsLCrZYBXzWLWHA4xL3FcCF6CZz1LDGM"
)
self.assertTrue(psbt_obj.sign(hd_priv))
want = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a2271800220202c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c483045022100b98bb5a69a081543e7e6de6b62b3243c8870211c679a8cf568916631494e99d50220631e1f70231286f059f5cdef8d746f7b8986cfec47346bdfea163528250d7d2401220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c00008001000080000000800000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_finalize_p2pkh(self):
hex_psbt = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a2271800220202c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c483045022100b98bb5a69a081543e7e6de6b62b3243c8870211c679a8cf568916631494e99d50220631e1f70231286f059f5cdef8d746f7b8986cfec47346bdfea163528250d7d2401220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c00008001000080000000800000000000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.finalize()
want = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a227180001076b483045022100b98bb5a69a081543e7e6de6b62b3243c8870211c679a8cf568916631494e99d50220631e1f70231286f059f5cdef8d746f7b8986cfec47346bdfea163528250d7d24012102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_final_tx(self):
hex_psbt = "70736274ff0100770100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f0887480100000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000000100fda40102000000000102816f71fa2b62d7235ae316d54cb174053c793d16644064405a8326094518aaa901000000171600148900fe9d1950305978d57ebbc25f722bbf131b53feffffff6e3e62f2e005db1bb2a1f12e5ca2bfbb4f82f2ca023c23b0a10a035cabb38fb60000000017160014ae01dce99edb5398cee5e4dc536173d35a9495a9feffffff0278de16000000000017a914a2be7a5646958a5b53f1c3de5a896f6c0ff5419f8740420f00000000001976a9149a9bfaf8ef6c4b061a30e8e162da3458cfa122c688ac02473044022017506b1a15e0540efe5453fcc9c61dcc4457dd00d22cba5e5b937c56944f96ff02207a1c071a8e890cf69c4adef5154d6556e5b356fc09d74a7c811484de289c2d41012102de6c105c8ed6c54d9f7a166fbe3012fecbf4bb3cecda49a8aad1d0c07784110c0247304402207035217de1a2c587b1aaeb5605b043189d551451697acb74ffc99e5a288f4fde022013b7f33a916f9e05846d333b6ea314f56251e74f243682e0ec45ce9e16c6344d01210205174b405fba1b53a44faf08679d63c871cece6c3b2c343bd2d7c559aa32dfb1a227180001076b483045022100b98bb5a69a081543e7e6de6b62b3243c8870211c679a8cf568916631494e99d50220631e1f70231286f059f5cdef8d746f7b8986cfec47346bdfea163528250d7d24012102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.tx_obj.testnet = True
tx_obj = psbt_obj.final_tx()
want = "0100000001192f88eeabc44ac213604adbb5b699678815d24b718b5940f5b1b1853f088748010000006b483045022100b98bb5a69a081543e7e6de6b62b3243c8870211c679a8cf568916631494e99d50220631e1f70231286f059f5cdef8d746f7b8986cfec47346bdfea163528250d7d24012102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77cffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f0700000000001976a9144df14c8c8873451290c53e95ebd1ee8fe488f0ed88ac00000000"
self.assertEqual(tx_obj.serialize().hex(), want)
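    # The tests above walk the full p2pkh PSBT lifecycle, checking each
    # stage's serialization against a known-good vector:
    #   psbt = PSBT.create(tx_obj)             # blank PSBT from an unsigned tx
    #   psbt.update(tx_lookup, pubkey_lookup)  # attach prev txs + BIP32 paths
    #   psbt.sign(hd_priv)                     # add partial signatures
    #   psbt.finalize()                        # build the final script_sig
    #   tx_obj = psbt.final_tx()               # extract the broadcastable tx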
def test_update_p2sh(self):
hex_psbt = "70736274ff01007501000000015c59ecb919792ecc26e031e9f4a6d4d74afce7b17dfe039002ef82b1f30bb63e0000000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f07000000000017a91481a19f39772bd741501e851e97ddd6a7f1ec194b870000000000000000"
hex_redeem_scripts = [
"47522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae",
"47522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae",
]
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.tx_obj.testnet = True
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
key_1 = bytes.fromhex(
"02043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af"
)
key_2 = bytes.fromhex(
"02043587cf0398242fbc80000000959cb81379545d7a34287f41485a3c08fc6ecf66cb89caff8a4f618b484d6e7d0362f19f492715b6041723d97403f166da0e3246eb614d80635c036a8d2f753393"
)
stream_1 = BytesIO(
encode_varstr(
bytes.fromhex("fbfef36f") + serialize_binary_path("m/44'/1'/0'")
)
)
stream_2 = BytesIO(
encode_varstr(
bytes.fromhex("797dcdac") + serialize_binary_path("m/44'/1'/0'")
)
)
hd_1 = NamedHDPublicKey.parse(key_1, stream_1)
hd_2 = NamedHDPublicKey.parse(key_2, stream_2)
pubkey_lookup = {**hd_1.bip44_lookup(), **hd_2.bip44_lookup()}
redeem_lookup = {}
for hex_redeem_script in hex_redeem_scripts:
redeem_script = RedeemScript.parse(
BytesIO(bytes.fromhex(hex_redeem_script))
)
redeem_lookup[redeem_script.hash160()] = redeem_script
psbt_obj.update(tx_lookup, pubkey_lookup, redeem_lookup)
want = "70736274ff01007501000000015c59ecb919792ecc26e031e9f4a6d4d74afce7b17dfe039002ef82b1f30bb63e0000000000ffffffff0220a10700000000001976a91426d5d464d148454c76f7095fdf03afc8bc8d82c388ac2c9f07000000000017a91481a19f39772bd741501e851e97ddd6a7f1ec194b8700000000000100fda201020000000001024b9f6ab9def1aabadd74f37c61361d4c555c08b3518b0f393e0df037a538058b010000001716001446fe25a61b6afad8e8619854ec65eaa5a3d707c2feffffff03df61643d0f37ca92b9e67d94d7acffb58bf167b3a73692ff2ca1933b51123f0100000017160014a77769eca770c1cafbcfa7bb06e44a7fc3748ef5feffffff0240420f000000000017a914c5bea2bad6a3171dff5fad0b99d2e60fca1d8bee87966f1b000000000017a914f10824ee9939fa638b9cc75e516408dc1d9fe248870247304402205c5f2ed7d4ce4da4913ee08b1413a7f0dadd8c59c6fe9c94fe299c8a7456076102203abb3b6f895938bf489a2473591877c7aa2cc7fddb1ca2e9632294b06d80f3a90121025ab592b2533bc8a4e4b3b52794b5f2318850c004b3dc24099271fb7db080ef820247304402204f57bbd3cc35c15bc7de0a8890c656d5608ab41c731c64413c45730fb0b05a5c0220162c676a55b2ff349cbea7d1908f034443419e30caf20a47beb5f209116cb0c3012102fed02d7c44b8bb82f23948e26e005572ff08fec43d6094daf67d2bc691f4d64d9f271800010447522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c000080010000800000008000000000000000000000010047522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_finalize_p2sh(self):
hex_psbt = "70736274ff0100530100000001e8be6d62ba1983b5d1c65406f87f7d73c2d7200d4075cf52589c53579870542b0000000000ffffffff01583e0f000000000017a91481a19f39772bd741501e851e97ddd6a7f1ec194b87000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080000100fd01010100000000010187a22bb77a836c0a3bbb62e1e04950cffdf6a45489a8d7801b24b18c124d84850100000000ffffffff0340420f000000000017a914c5bea2bad6a3171dff5fad0b99d2e60fca1d8bee8740420f00000000001976a914f0cd79383f13584bdeca184cecd16135b8a79fc288ac10c69b01000000001600146e13971913b9aa89659a9f53d327baa8826f2d750247304402204edcdf923bdddad9b77b17ae0c65817f032b7cb6efd95c0c4101fa48aba17e4e02202158c3a077a0ee0a7bc7e2763a9356470ae3aa4866ae4e62a6f8faa2729b02da0121031dbe3aff7b9ad64e2612b8b15e9f5e4a3130663a526df91abfb7b1bd16de5d6e00000000220202c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c47304402207360ee58276e8135ae1efdf1bbd7b3d87d1c7f072f3141cfe8afa78f3e36cdf7022059462d2e4598e3b441fa2503eb73b6d6b644838d3c9af547f09760b0655ce9380122020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f2473044022038c818f86a2cb1e092c55f2e30c74904c4ebbf80805ba7235369b626444ff7a402202594d8fa4f855be4dbecc148804056c2938218e7fe1a7b805a0d18f2d47a31e801010447522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c0000800100008000000080000000000000000000010047522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.finalize()
want = "70736274ff0100530100000001e8be6d62ba1983b5d1c65406f87f7d73c2d7200d4075cf52589c53579870542b0000000000ffffffff01583e0f000000000017a91481a19f39772bd741501e851e97ddd6a7f1ec194b87000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080000100fd01010100000000010187a22bb77a836c0a3bbb62e1e04950cffdf6a45489a8d7801b24b18c124d84850100000000ffffffff0340420f000000000017a914c5bea2bad6a3171dff5fad0b99d2e60fca1d8bee8740420f00000000001976a914f0cd79383f13584bdeca184cecd16135b8a79fc288ac10c69b01000000001600146e13971913b9aa89659a9f53d327baa8826f2d750247304402204edcdf923bdddad9b77b17ae0c65817f032b7cb6efd95c0c4101fa48aba17e4e02202158c3a077a0ee0a7bc7e2763a9356470ae3aa4866ae4e62a6f8faa2729b02da0121031dbe3aff7b9ad64e2612b8b15e9f5e4a3130663a526df91abfb7b1bd16de5d6e000000000107d90047304402207360ee58276e8135ae1efdf1bbd7b3d87d1c7f072f3141cfe8afa78f3e36cdf7022059462d2e4598e3b441fa2503eb73b6d6b644838d3c9af547f09760b0655ce93801473044022038c818f86a2cb1e092c55f2e30c74904c4ebbf80805ba7235369b626444ff7a402202594d8fa4f855be4dbecc148804056c2938218e7fe1a7b805a0d18f2d47a31e80147522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae00010047522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_update_p2wpkh(self):
hex_psbt = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef00000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.tx_obj.testnet = True
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
key = bytes.fromhex(
"02043587cf0398242fbc80000000959cb81379545d7a34287f41485a3c08fc6ecf66cb89caff8a4f618b484d6e7d0362f19f492715b6041723d97403f166da0e3246eb614d80635c036a8d2f753393"
)
stream = BytesIO(
encode_varstr(
bytes.fromhex("797dcdac") + serialize_binary_path("m/44'/1'/0'")
)
)
hd = NamedHDPublicKey.parse(key, stream)
psbt_obj.update(tx_lookup, hd.bip44_lookup())
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc222060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_sign_p2wpkh(self):
hex_psbt = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc222060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPeZ6mVBLfLQ7HTpmX8QWKrxbqAtk5BAiwEa9t5WjLryMZUo8qD6mNwGjx98NyDLqbqGcBKor6khRgnQG4XTbUPpxu8YdFKCF"
)
self.assertTrue(psbt_obj.sign(hd_priv))
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc222020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f24730440220575870ef714252a26bc4e61a6ee31db0f3896606a4792d11a42ef7d30c9f1b33022007cd28fb8618b704cbcf1cc6292d9be901bf3c99d967b0cace7307532619811e0122060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_finalize_p2wpkh(self):
hex_psbt = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc222020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f24730440220575870ef714252a26bc4e61a6ee31db0f3896606a4792d11a42ef7d30c9f1b33022007cd28fb8618b704cbcf1cc6292d9be901bf3c99d967b0cace7307532619811e0122060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.finalize()
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc201070001086b024730440220575870ef714252a26bc4e61a6ee31db0f3896606a4792d11a42ef7d30c9f1b33022007cd28fb8618b704cbcf1cc6292d9be901bf3c99d967b0cace7307532619811e01210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f2002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_final_tx_p2wpkh(self):
hex_psbt = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef000000000001011f40420f0000000000160014f0cd79383f13584bdeca184cecd16135b8a79fc201070001086b024730440220575870ef714252a26bc4e61a6ee31db0f3896606a4792d11a42ef7d30c9f1b33022007cd28fb8618b704cbcf1cc6292d9be901bf3c99d967b0cace7307532619811e01210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f2002202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c0000800100008000000080010000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.tx_obj.testnet = True
tx_obj = psbt_obj.final_tx()
want = "010000000001015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060000000000ffffffff01583e0f000000000016001427459b7e4317d1c9e1d0f8320d557c6bb08731ef024730440220575870ef714252a26bc4e61a6ee31db0f3896606a4792d11a42ef7d30c9f1b33022007cd28fb8618b704cbcf1cc6292d9be901bf3c99d967b0cace7307532619811e01210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f200000000"
self.assertEqual(tx_obj.serialize().hex(), want)
def test_p2sh_p2wpkh(self):
hex_tx = "01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d7500000000"
tx_obj = Tx.parse(BytesIO(bytes.fromhex(hex_tx)))
psbt_obj = PSBT.create(tx_obj)
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d7500000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
psbt_obj.tx_obj.testnet = True
hex_named_hd = "4f01043587cf0398242fbc80000000959cb81379545d7a34287f41485a3c08fc6ecf66cb89caff8a4f618b484d6e7d0362f19f492715b6041723d97403f166da0e3246eb614d80635c036a8d2f75339310797dcdac2c0000800100008000000080"
stream = BytesIO(bytes.fromhex(hex_named_hd))
named_hd = NamedHDPublicKey.parse(read_varstr(stream), stream)
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
pubkey_lookup = named_hd.bip44_lookup()
redeem_lookup = named_hd.redeem_script_lookup()
psbt_obj.update(tx_lookup, pubkey_lookup, redeem_lookup)
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a914990dd86ae46c3d568535e5e482ac35151836d3cd870104160014f0cd79383f13584bdeca184cecd16135b8a79fc222060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c000080010000800000008000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPeZ6mVBLfLQ7HTpmX8QWKrxbqAtk5BAiwEa9t5WjLryMZUo8qD6mNwGjx98NyDLqbqGcBKor6khRgnQG4XTbUPpxu8YdFKCF"
)
self.assertTrue(psbt_obj.sign(hd_priv))
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a914990dd86ae46c3d568535e5e482ac35151836d3cd8722020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f2483045022100f332008498ada0d5c83717c638b6d9f2bc6b79e657ab1db0bd45538e1390905202203060d6ffa36bb49b3469ea806a03644958926d56dda96701e7eaa3ca5320c49f010104160014f0cd79383f13584bdeca184cecd16135b8a79fc222060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c000080010000800000008000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
psbt_obj.finalize()
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a914990dd86ae46c3d568535e5e482ac35151836d3cd87010717160014f0cd79383f13584bdeca184cecd16135b8a79fc201086c02483045022100f332008498ada0d5c83717c638b6d9f2bc6b79e657ab1db0bd45538e1390905202203060d6ffa36bb49b3469ea806a03644958926d56dda96701e7eaa3ca5320c49f01210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f20000"
self.assertEqual(psbt_obj.serialize().hex(), want)
tx_obj = psbt_obj.final_tx()
want = "010000000001015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060100000017160014f0cd79383f13584bdeca184cecd16135b8a79fc2ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d7502483045022100f332008498ada0d5c83717c638b6d9f2bc6b79e657ab1db0bd45538e1390905202203060d6ffa36bb49b3469ea806a03644958926d56dda96701e7eaa3ca5320c49f01210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f200000000"
self.assertEqual(tx_obj.serialize().hex(), want)
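    # Note the p2sh-p2wpkh shape of the final tx above: the scriptSig carries
    # only the 0014<hash160(pubkey)> redeem-script push, while the signature
    # and pubkey move to the witness.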
def test_update_p2wsh(self):
hex_psbt = "70736274ff01005e01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060200000000ffffffff01583e0f0000000000220020878ce58b26789632a24ec6b62542e5d4e844dee56a7ddce7db41618049c3928c000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080000000"
hex_witness_scripts = [
"47522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae",
"47522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae",
]
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.tx_obj.testnet = True
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
key_1 = bytes.fromhex(
"02043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af"
)
key_2 = bytes.fromhex(
"02043587cf0398242fbc80000000959cb81379545d7a34287f41485a3c08fc6ecf66cb89caff8a4f618b484d6e7d0362f19f492715b6041723d97403f166da0e3246eb614d80635c036a8d2f753393"
)
bin_path = serialize_binary_path("m/44'/1'/0'")
stream_1 = BytesIO(encode_varstr(bytes.fromhex("fbfef36f") + bin_path))
stream_2 = BytesIO(encode_varstr(bytes.fromhex("797dcdac") + bin_path))
hd_1 = NamedHDPublicKey.parse(key_1, stream_1)
hd_2 = NamedHDPublicKey.parse(key_2, stream_2)
pubkey_lookup = {**hd_1.bip44_lookup(), **hd_2.bip44_lookup()}
witness_lookup = {}
for hex_witness_script in hex_witness_scripts:
witness_script = WitnessScript.parse(
BytesIO(bytes.fromhex(hex_witness_script))
)
witness_lookup[witness_script.sha256()] = witness_script
psbt_obj.update(tx_lookup, pubkey_lookup, witness_lookup=witness_lookup)
want = "70736274ff01005e01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060200000000ffffffff01583e0f0000000000220020878ce58b26789632a24ec6b62542e5d4e844dee56a7ddce7db41618049c3928c000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c00008001000080000000800001012b40420f0000000000220020c1b4fff485af1ac26714340af2e13d2e89ad70389332a0756d91a123c7fe7f5d010547522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c0000800100008000000080000000000000000000010147522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_finalize_p2wsh(self):
hex_psbt = "70736274ff01005e01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060200000000ffffffff01583e0f0000000000220020878ce58b26789632a24ec6b62542e5d4e844dee56a7ddce7db41618049c3928c000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c00008001000080000000800001012b40420f0000000000220020c1b4fff485af1ac26714340af2e13d2e89ad70389332a0756d91a123c7fe7f5d220202c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c47304402203f26a975aae04a7ae12c964cdcea318c850351a3072aebbab7902e89957008ea022019f895271f70d1515f9da776d6ac17c21bcbca769d87c1beb4ebbf4c7a56fbc20122020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f247304402204fd654c27002d4c9e53bb001229e3d7587e5be245a81b6f7ead3bf136643af40022060ebf1193a6b3e82615a564f0043e5ae88e661bfdb7fd254c9a30bae8160583901010547522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c0000800100008000000080000000000000000000010147522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
psbt_obj = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
psbt_obj.finalize()
want = "70736274ff01005e01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060200000000ffffffff01583e0f0000000000220020878ce58b26789632a24ec6b62542e5d4e844dee56a7ddce7db41618049c3928c000000004f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c00008001000080000000800001012b40420f0000000000220020c1b4fff485af1ac26714340af2e13d2e89ad70389332a0756d91a123c7fe7f5d0107000108da040047304402203f26a975aae04a7ae12c964cdcea318c850351a3072aebbab7902e89957008ea022019f895271f70d1515f9da776d6ac17c21bcbca769d87c1beb4ebbf4c7a56fbc20147304402204fd654c27002d4c9e53bb001229e3d7587e5be245a81b6f7ead3bf136643af40022060ebf1193a6b3e82615a564f0043e5ae88e661bfdb7fd254c9a30bae816058390147522102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f252ae00010147522102db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29021026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d52ae2202026421c7673552fdad57193e102df96134be00649195b213fec9d07c6d918f418d18797dcdac2c00008001000080000000800100000000000000220202db8b701c3210e1bf6f2a8a9a657acad18be1e8bff3f7435d48f973de8408f29018fbfef36f2c0000800100008000000080010000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
def test_p2sh_p2wsh(self):
hex_tx = "01000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d7500000000"
tx_obj = Tx.parse(BytesIO(bytes.fromhex(hex_tx)))
psbt_obj = PSBT.create(tx_obj)
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d7500000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
psbt_obj.tx_obj.testnet = True
hex_witness_scripts = [
"69532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae"
]
hex_named_hd = "4f01043587cf0398242fbc80000000959cb81379545d7a34287f41485a3c08fc6ecf66cb89caff8a4f618b484d6e7d0362f19f492715b6041723d97403f166da0e3246eb614d80635c036a8d2f75339310797dcdac2c0000800100008000000080"
stream = BytesIO(bytes.fromhex(hex_named_hd))
named_hd = NamedHDPublicKey.parse(read_varstr(stream), stream)
tx_lookup = psbt_obj.tx_obj.get_input_tx_lookup()
pubkey_lookup = named_hd.bip44_lookup()
redeem_lookup = {}
witness_lookup = {}
for hex_witness_script in hex_witness_scripts:
witness_script = WitnessScript.parse(
BytesIO(bytes.fromhex(hex_witness_script))
)
witness_lookup[witness_script.sha256()] = witness_script
redeem_script = RedeemScript([0, witness_script.sha256()])
redeem_lookup[redeem_script.hash160()] = redeem_script
psbt_obj.update(tx_lookup, pubkey_lookup, redeem_lookup, witness_lookup)
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a91423358e259fbcf478331138ceb9619d9a8c8350738701042200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef38010569532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c000080010000800000008000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPeZ6mVBLfLQ7HTpmX8QWKrxbqAtk5BAiwEa9t5WjLryMZUo8qD6mNwGjx98NyDLqbqGcBKor6khRgnQG4XTbUPpxu8YdFKCF"
)
self.assertTrue(psbt_obj.sign(hd_priv))
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a91423358e259fbcf478331138ceb9619d9a8c8350738722020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f24830450221009b79ecffc98bf334ed4e2a1dddb6e18ce1aa54cb3c19d2d4b41b9ee3f87ae1b3022013f67f2e7caeb8a13463a954e054b04ddd7fbef94b77c4cd1fe32658ed5909590101042200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef38010569532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c000080010000800000008000000000000000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
hex_named_hd = "4f01043587cf034d513c1580000000fb406c9fec09b6957a3449d2102318717b0c0d230b657d0ebc6698abd52145eb02eaf3397fea02c5dac747888a9e535eaf3c7e7cb9d5f2da77ddbdd943592a14af10fbfef36f2c0000800100008000000080"
stream = BytesIO(bytes.fromhex(hex_named_hd))
named_hd = NamedHDPublicKey.parse(read_varstr(stream), stream)
psbt_obj.update({}, named_hd.bip44_lookup())
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a91423358e259fbcf478331138ceb9619d9a8c8350738722020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f24830450221009b79ecffc98bf334ed4e2a1dddb6e18ce1aa54cb3c19d2d4b41b9ee3f87ae1b3022013f67f2e7caeb8a13463a954e054b04ddd7fbef94b77c4cd1fe32658ed5909590101042200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef38010569532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c000080010000800000008000000000000000002206031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b18fbfef36f2c000080010000800000008000000000010000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
private_keys = [
PrivateKey.parse("cP88EsR4DgJNeswxecL4sE4Eornf3q1ZoRxoCnk8y9eEkQyxu3D7"),
PrivateKey.parse("cP9BYGBfMbhsN5Lvyza3otuC14oKjqHbgbRXhm7QCF47EgYWQb6S"),
]
self.assertTrue(psbt_obj.sign_with_private_keys(private_keys))
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a91423358e259fbcf478331138ceb9619d9a8c83507387220202c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c47304402206c79809b2534d3c3ebb9f57958c3e1e24c523c33a47bea9d64e3201622dd194d02206042cc6138b85b865493d5d8cce419d5536112060c9fa73d36244bf2df555600012202031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b473044022077adf39dc6639cfa63bee2a05c07facf682009f87af6382c84b00f18b15ae4d602207588712aaf8c9f381273fe7985af86955ac3a090c4a87a37995eb6a7cb8023c90122020247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f24830450221009b79ecffc98bf334ed4e2a1dddb6e18ce1aa54cb3c19d2d4b41b9ee3f87ae1b3022013f67f2e7caeb8a13463a954e054b04ddd7fbef94b77c4cd1fe32658ed5909590101042200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef38010569532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae22060247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f218797dcdac2c00008001000080000000800000000000000000220602c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c18fbfef36f2c000080010000800000008000000000000000002206031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b18fbfef36f2c000080010000800000008000000000010000000000"
self.assertEqual(psbt_obj.serialize().hex(), want)
psbt_obj.finalize()
want = "70736274ff01005201000000015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f9060300000000ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75000000000001012040420f000000000017a91423358e259fbcf478331138ceb9619d9a8c835073870107232200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef380108fd4501050047304402206c79809b2534d3c3ebb9f57958c3e1e24c523c33a47bea9d64e3201622dd194d02206042cc6138b85b865493d5d8cce419d5536112060c9fa73d36244bf2df55560001473044022077adf39dc6639cfa63bee2a05c07facf682009f87af6382c84b00f18b15ae4d602207588712aaf8c9f381273fe7985af86955ac3a090c4a87a37995eb6a7cb8023c9014830450221009b79ecffc98bf334ed4e2a1dddb6e18ce1aa54cb3c19d2d4b41b9ee3f87ae1b3022013f67f2e7caeb8a13463a954e054b04ddd7fbef94b77c4cd1fe32658ed5909590169532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae0000"
self.assertEqual(psbt_obj.serialize().hex(), want)
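# final_tx() should now yield the fully signed, broadcast-ready transaction.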
tx_obj = psbt_obj.final_tx()
want = "010000000001015c89191dc2abf62339e0f114cb4c3bf8fb399d522d112c9afa2dc7a43759f90603000000232200207fcc2ca7381db4bdfd02e1f2b5eb3d72435b8e09bdbd8bfe3d748bf19d78ef38ffffffff01583e0f00000000001600146e13971913b9aa89659a9f53d327baa8826f2d75050047304402206c79809b2534d3c3ebb9f57958c3e1e24c523c33a47bea9d64e3201622dd194d02206042cc6138b85b865493d5d8cce419d5536112060c9fa73d36244bf2df55560001473044022077adf39dc6639cfa63bee2a05c07facf682009f87af6382c84b00f18b15ae4d602207588712aaf8c9f381273fe7985af86955ac3a090c4a87a37995eb6a7cb8023c9014830450221009b79ecffc98bf334ed4e2a1dddb6e18ce1aa54cb3c19d2d4b41b9ee3f87ae1b3022013f67f2e7caeb8a13463a954e054b04ddd7fbef94b77c4cd1fe32658ed5909590169532102c1b6ac6e6a625fee295dc2d580f80aae08b7e76eca54ae88a854e956095af77c21031b31547c895b5e301206740ea9890a0d6d127baeebb7fffb07356527323c915b210247aed77c3def4b8ce74a8db08d7f5fd315f8d96b6cd801729a910c3045d750f253ae00000000"
self.assertEqual(tx_obj.serialize().hex(), want)
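# Invalid PSBT vectors: each entry pairs a malformed base64 blob with the
# exception type that PSBT.parse_base64 is expected to raise for it.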
def test_errors(self):
tests = [
[
"AgAAAAEmgXE3Ht/yhek3re6ks3t4AAwFZsuzrWRkFxPKQhcb9gAAAABqRzBEAiBwsiRRI+a/R01gxbUMBD1MaRpdJDXwmjSnZiqdwlF5CgIgATKcqdrPKAvfMHQOwDkEIkIsgctFg5RXrrdvwS7dlbMBIQJlfRGNM1e44PTCzUbbezn22cONmnCry5st5dyNv+TOMf7///8C09/1BQAAAAAZdqkU0MWZA8W6woaHYOkP1SGkZlqnZSCIrADh9QUAAAAAF6kUNUXm4zuDLEcFDyTT7rk8nAOUi8eHsy4TAA==",
SyntaxError,
],
[
"cHNidP8BAHUCAAAAASaBcTce3/KF6Tet7qSze3gADAVmy7OtZGQXE8pCFxv2AAAAAAD+////AtPf9QUAAAAAGXapFNDFmQPFusKGh2DpD9UhpGZap2UgiKwA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHh7MuEwAAAQD9pQEBAAAAAAECiaPHHqtNIOA3G7ukzGmPopXJRjr6Ljl/hTPMti+VZ+UBAAAAFxYAFL4Y0VKpsBIDna89p95PUzSe7LmF/////4b4qkOnHf8USIk6UwpyN+9rRgi7st0tAXHmOuxqSJC0AQAAABcWABT+Pp7xp0XpdNkCxDVZQ6vLNL1TU/////8CAMLrCwAAAAAZdqkUhc/xCX/Z4Ai7NK9wnGIZeziXikiIrHL++E4sAAAAF6kUM5cluiHv1irHU6m80GfWx6ajnQWHAkcwRAIgJxK+IuAnDzlPVoMR3HyppolwuAJf3TskAinwf4pfOiQCIAGLONfc0xTnNMkna9b7QPZzMlvEuqFEyADS8vAtsnZcASED0uFWdJQbrUqZY3LLh+GFbTZSYG2YVi/jnF6efkE/IQUCSDBFAiEA0SuFLYXc2WHS9fSrZgZU327tzHlMDDPOXMMJ/7X85Y0CIGczio4OFyXBl/saiK9Z9R5E5CVbIBZ8hoQDHAXR8lkqASECI7cr7vCWXRC+B3jv7NYfysb3mk6haTkzgHNEZPhPKrMAAAAAAA==",
IOError,
],
[
"cHNidP8BAP0KAQIAAAACqwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QAAAAAakcwRAIgR1lmF5fAGwNrJZKJSGhiGDR9iYZLcZ4ff89X0eURZYcCIFMJ6r9Wqk2Ikf/REf3xM286KdqGbX+EhtdVRs7tr5MZASEDXNxh/HupccC1AaZGoqg7ECy0OIEhfKaC3Ibi1z+ogpL+////qwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QBAAAAAP7///8CYDvqCwAAAAAZdqkUdopAu9dAy+gdmI5x3ipNXHE5ax2IrI4kAAAAAAAAGXapFG9GILVT+glechue4O/p+gOcykWXiKwAAAAAAAABASAA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHhwEEFgAUhdE1N/LiZUBaNNuvqePdoB+4IwgAAAA=",
ValueError,
],
[
"cHNidP8AAQD9pQEBAAAAAAECiaPHHqtNIOA3G7ukzGmPopXJRjr6Ljl/hTPMti+VZ+UBAAAAFxYAFL4Y0VKpsBIDna89p95PUzSe7LmF/////4b4qkOnHf8USIk6UwpyN+9rRgi7st0tAXHmOuxqSJC0AQAAABcWABT+Pp7xp0XpdNkCxDVZQ6vLNL1TU/////8CAMLrCwAAAAAZdqkUhc/xCX/Z4Ai7NK9wnGIZeziXikiIrHL++E4sAAAAF6kUM5cluiHv1irHU6m80GfWx6ajnQWHAkcwRAIgJxK+IuAnDzlPVoMR3HyppolwuAJf3TskAinwf4pfOiQCIAGLONfc0xTnNMkna9b7QPZzMlvEuqFEyADS8vAtsnZcASED0uFWdJQbrUqZY3LLh+GFbTZSYG2YVi/jnF6efkE/IQUCSDBFAiEA0SuFLYXc2WHS9fSrZgZU327tzHlMDDPOXMMJ/7X85Y0CIGczio4OFyXBl/saiK9Z9R5E5CVbIBZ8hoQDHAXR8lkqASECI7cr7vCWXRC+B3jv7NYfysb3mk6haTkzgHNEZPhPKrMAAAAAAA==",
SyntaxError,
],
[
"cHNidP8BAHUCAAAAASaBcTce3/KF6Tet7qSze3gADAVmy7OtZGQXE8pCFxv2AAAAAAD+////AtPf9QUAAAAAGXapFNDFmQPFusKGh2DpD9UhpGZap2UgiKwA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHh7MuEwAAAQD9pQEBAAAAAAECiaPHHqtNIOA3G7ukzGmPopXJRjr6Ljl/hTPMti+VZ+UBAAAAFxYAFL4Y0VKpsBIDna89p95PUzSe7LmF/////4b4qkOnHf8USIk6UwpyN+9rRgi7st0tAXHmOuxqSJC0AQAAABcWABT+Pp7xp0XpdNkCxDVZQ6vLNL1TU/////8CAMLrCwAAAAAZdqkUhc/xCX/Z4Ai7NK9wnGIZeziXikiIrHL++E4sAAAAF6kUM5cluiHv1irHU6m80GfWx6ajnQWHAkcwRAIgJxK+IuAnDzlPVoMR3HyppolwuAJf3TskAinwf4pfOiQCIAGLONfc0xTnNMkna9b7QPZzMlvEuqFEyADS8vAtsnZcASED0uFWdJQbrUqZY3LLh+GFbTZSYG2YVi/jnF6efkE/IQUCSDBFAiEA0SuFLYXc2WHS9fSrZgZU327tzHlMDDPOXMMJ/7X85Y0CIGczio4OFyXBl/saiK9Z9R5E5CVbIBZ8hoQDHAXR8lkqASECI7cr7vCWXRC+B3jv7NYfysb3mk6haTkzgHNEZPhPKrMAAAAAAQA/AgAAAAH//////////////////////////////////////////wAAAAAA/////wEAAAAAAAAAAANqAQAAAAAAAAAA",
KeyError,
],
[
"cHNidP8CAAFVAgAAAAEnmiMjpd+1H8RfIg+liw/BPh4zQnkqhdfjbNYzO1y8OQAAAAAA/////wGgWuoLAAAAABl2qRT/6cAGEJfMO2NvLLBGD6T8Qn0rRYisAAAAAAABASCVXuoLAAAAABepFGNFIA9o0YnhrcDfHE0W6o8UwNvrhyICA7E0HMunaDtq9PEjjNbpfnFn1Wn6xH8eSNR1QYRDVb1GRjBDAiAEJLWO/6qmlOFVnqXJO7/UqJBkIkBVzfBwtncUaUQtBwIfXI6w/qZRbWC4rLM61k7eYOh4W/s6qUuZvfhhUduamgEBBCIAIHcf0YrUWWZt1J89Vk49vEL0yEd042CtoWgWqO1IjVaBAQVHUiEDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYhA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9Uq4iBgOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RhC0prpnAAAAgAAAAIAEAACAIgYD3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg70QtKa6ZwAAAIAAAACABQAAgAAA",
KeyError,
],
[
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAIBACCVXuoLAAAAABepFGNFIA9o0YnhrcDfHE0W6o8UwNvrhyICA7E0HMunaDtq9PEjjNbpfnFn1Wn6xH8eSNR1QYRDVb1GRjBDAiAEJLWO/6qmlOFVnqXJO7/UqJBkIkBVzfBwtncUaUQtBwIfXI6w/qZRbWC4rLM61k7eYOh4W/s6qUuZvfhhUduamgEBBCIAIHcf0YrUWWZt1J89Vk49vEL0yEd042CtoWgWqO1IjVaBAQVHUiEDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYhA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9Uq4iBgOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RhC0prpnAAAAgAAAAIAEAACAIgYD3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg70QtKa6ZwAAAIAAAACABQAAgAAA",
KeyError,
],
[
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAEBIJVe6gsAAAAAF6kUY0UgD2jRieGtwN8cTRbqjxTA2+uHIQIDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYwQwIgBCS1jv+qppThVZ6lyTu/1KiQZCJAVc3wcLZ3FGlELQcCH1yOsP6mUW1guKyzOtZO3mDoeFv7OqlLmb34YVHbmpoBAQQiACB3H9GK1FlmbdSfPVZOPbxC9MhHdONgraFoFqjtSI1WgQEFR1IhA7E0HMunaDtq9PEjjNbpfnFn1Wn6xH8eSNR1QYRDVb1GIQPeVdHh2sgF4/iljB+/m5TALz26r+En/vykmV8m+CCDvVKuIgYDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYQtKa6ZwAAAIAAAACABAAAgCIGA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9ELSmumcAAACAAAAAgAUAAIAAAA==",
ValueError,
],
[
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAEBIJVe6gsAAAAAF6kUY0UgD2jRieGtwN8cTRbqjxTA2+uHIgIDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUZGMEMCIAQktY7/qqaU4VWepck7v9SokGQiQFXN8HC2dxRpRC0HAh9cjrD+plFtYLisszrWTt5g6Hhb+zqpS5m9+GFR25qaAQIEACIAIHcf0YrUWWZt1J89Vk49vEL0yEd042CtoWgWqO1IjVaBAQVHUiEDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYhA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9Uq4iBgOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RhC0prpnAAAAgAAAAIAEAACAIgYD3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg70QtKa6ZwAAAIAAAACABQAAgAAA",
KeyError,
],
[
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAEBIJVe6gsAAAAAF6kUY0UgD2jRieGtwN8cTRbqjxTA2+uHIgIDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUZGMEMCIAQktY7/qqaU4VWepck7v9SokGQiQFXN8HC2dxRpRC0HAh9cjrD+plFtYLisszrWTt5g6Hhb+zqpS5m9+GFR25qaAQEEIgAgdx/RitRZZm3Unz1WTj28QvTIR3TjYK2haBao7UiNVoECBQBHUiEDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUYhA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9Uq4iBgOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RhC0prpnAAAAgAAAAIAEAACAIgYD3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg70QtKa6ZwAAAIAAAACABQAAgAAA",
KeyError,
],
[
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAEBIJVe6gsAAAAAF6kUY0UgD2jRieGtwN8cTRbqjxTA2+uHIgIDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUZGMEMCIAQktY7/qqaU4VWepck7v9SokGQiQFXN8HC2dxRpRC0HAh9cjrD+plFtYLisszrWTt5g6Hhb+zqpS5m9+GFR25qaAQEEIgAgdx/RitRZZm3Unz1WTj28QvTIR3TjYK2haBao7UiNVoEBBUdSIQOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RiED3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg71SriEGA7E0HMunaDtq9PEjjNbpfnFn1Wn6xH8eSNR1QYRDVb0QtKa6ZwAAAIAAAACABAAAgCIGA95V0eHayAXj+KWMH7+blMAvPbqv4Sf+/KSZXyb4IIO9ELSmumcAAACAAAAAgAUAAIAAAA==",
KeyError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAIAALsCAAAAAarXOTEBi9JfhK5AC2iEi+CdtwbqwqwYKYur7nGrZW+LAAAAAEhHMEQCIFj2/HxqM+GzFUjUgcgmwBW9MBNarULNZ3kNq2bSrSQ7AiBKHO0mBMZzW2OT5bQWkd14sA8MWUL7n3UYVvqpOBV9ugH+////AoDw+gIAAAAAF6kUD7lGNCFpa4LIM68kHHjBfdveSTSH0PIKJwEAAAAXqRQpynT4oI+BmZQoGFyXtdhS5AY/YYdlAAAAAQfaAEcwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMAUgwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gFHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4AAQEgAMLrCwAAAAAXqRS39fr0Dj1ApaRZsds1NfK3L6kh6IcBByMiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEI2gQARzBEAiBi63pVYQenxz9FrEq1od3fb3B1+xJ1lpp/OD7/94S8sgIgDAXbt0cNvy8IVX3TVscyXB7TCRPpls04QJRdsSIo2l8BRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
KeyError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAACBwDaAEcwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMAUgwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gFHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4AAQEgAMLrCwAAAAAXqRS39fr0Dj1ApaRZsds1NfK3L6kh6IcBByMiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEI2gQARzBEAiBi63pVYQenxz9FrEq1od3fb3B1+xJ1lpp/OD7/94S8sgIgDAXbt0cNvy8IVX3TVscyXB7TCRPpls04QJRdsSIo2l8BRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
KeyError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABB9oARzBEAiB0AYrUGACXuHMyPAAVcgs2hMyBI4kQSOfbzZtVrWecmQIgc9Npt0Dj61Pc76M4I8gHBRTKVafdlUTxV8FnkTJhEYwBSDBFAiEA9hA4swjcHahlo0hSdG8BV3KTQgjG0kRUOTzZm98iF3cCIAVuZ1pnWm0KArhbFOXikHTYolqbV2C+ooFvZhkQoAbqAUdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSrgABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEHIyIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAggA2gQARzBEAiBi63pVYQenxz9FrEq1od3fb3B1+xJ1lpp/OD7/94S8sgIgDAXbt0cNvy8IVX3TVscyXB7TCRPpls04QJRdsSIo2l8BRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
KeyError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABB9oARzBEAiB0AYrUGACXuHMyPAAVcgs2hMyBI4kQSOfbzZtVrWecmQIgc9Npt0Dj61Pc76M4I8gHBRTKVafdlUTxV8FnkTJhEYwBSDBFAiEA9hA4swjcHahlo0hSdG8BV3KTQgjG0kRUOTzZm98iF3cCIAVuZ1pnWm0KArhbFOXikHTYolqbV2C+ooFvZhkQoAbqAUdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSrgABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEHIyIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQjaBABHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwFHMEQCIGX0W6WZi1mif/4ae+0BavHx+Q1Us6qPdFCqX1aiUQO9AiB/ckcDrR7blmgLKEtW1P/LiPf7dZ6rvgiqMPKbhROD0gFHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4AIQIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1PtnuylhxDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA",
KeyError,
],
[
"cHNidP8BAHMCAAAAATAa6YblFqHsisW0vGVz0y+DtGXiOtdhZ9aLOOcwtNvbAAAAAAD/////AnR7AQAAAAAAF6kUA6oXrogrXQ1Usl1jEE5P/s57nqKHYEOZOwAAAAAXqRS5IbG6b3IuS/qDtlV6MTmYakLsg4cAAAAAAAEBHwDKmjsAAAAAFgAU0tlLZK4IWH7vyO6xh8YB6Tn5A3wCAwABAAAAAAEAFgAUYunpgv/zTdgjlhAxawkM0qO3R8sAAQAiACCHa62DLx0WgBXtQSMqnqZaGBXZ7xPA74dZ9ktbKyeKZQEBJVEhA7fOI6AcW0vwCmQlN836uzFbZoMyhnR471EwnSvVf4qHUa4A",
KeyError,
],
[
"cHNidP8BAHMCAAAAATAa6YblFqHsisW0vGVz0y+DtGXiOtdhZ9aLOOcwtNvbAAAAAAD/////AnR7AQAAAAAAF6kUA6oXrogrXQ1Usl1jEE5P/s57nqKHYEOZOwAAAAAXqRS5IbG6b3IuS/qDtlV6MTmYakLsg4cAAAAAAAEBHwDKmjsAAAAAFgAU0tlLZK4IWH7vyO6xh8YB6Tn5A3wAAgAAFgAUYunpgv/zTdgjlhAxawkM0qO3R8sAAQAiACCHa62DLx0WgBXtQSMqnqZaGBXZ7xPA74dZ9ktbKyeKZQEBJVEhA7fOI6AcW0vwCmQlN836uzFbZoMyhnR471EwnSvVf4qHUa4A",
KeyError,
],
[
"cHNidP8BAHMCAAAAATAa6YblFqHsisW0vGVz0y+DtGXiOtdhZ9aLOOcwtNvbAAAAAAD/////AnR7AQAAAAAAF6kUA6oXrogrXQ1Usl1jEE5P/s57nqKHYEOZOwAAAAAXqRS5IbG6b3IuS/qDtlV6MTmYakLsg4cAAAAAAAEBHwDKmjsAAAAAFgAU0tlLZK4IWH7vyO6xh8YB6Tn5A3wAAQAWABRi6emC//NN2COWEDFrCQzSo7dHywABACIAIIdrrYMvHRaAFe1BIyqeploYFdnvE8Dvh1n2S1srJ4plIQEAJVEhA7fOI6AcW0vwCmQlN836uzFbZoMyhnR471EwnSvVf4qHUa4A",
KeyError,
],
[
"cHNidP8BAKACAAAAAqsJSaCMWvfEm4IS9Bfi8Vqz9cM9zxU4IagTn4d6W3vkAAAAAAD+////qwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QBAAAAAP7///8CYDvqCwAAAAAZdqkUdopAu9dAy+gdmI5x3ipNXHE5ax2IrI4kAAAAAAAAGXapFG9GILVT+glechue4O/p+gOcykWXiKwAAAAAAAEBItPf9QUAAAAAGXapFNSO0xELlAFMsRS9Mtb00GbcdCVriKwAAQEgAOH1BQAAAAAXqRQ1RebjO4MsRwUPJNPuuTycA5SLx4cBBBYAFIXRNTfy4mVAWjTbr6nj3aAfuCMIACICAurVlmh8qAYEPtw94RbN8p1eklfBls0FXPaYyNAr8k6ZELSmumcAAACAAAAAgAIAAIAAIgIDlPYr6d8ZlSxVh3aK63aYBhrSxKJciU9H2MFitNchPQUQtKa6ZwAAAIABAACAAgAAgAA=",
ValueError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU210gwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gEBAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq8iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohyICAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBAQMEAQAAAAEEIgAgjCNTFzdDtZXftKB7crqOQuN5fadOh/59nXSX47ICiQMBBUdSIQMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3CECOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnNSriIGAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zENkMak8AAACAAAAAgAMAAIAiBgMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3BDZDGpPAAAAgAAAAIACAACAACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
ValueError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU210gwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gEBAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohyICAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBAQMEAQAAAAEEIgAgjCNTFzdDtZXftKB7crqOQuN5fadOh/59nXSX47ICiQABBUdSIQMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3CECOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnNSriIGAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zENkMak8AAACAAAAAgAMAAIAiBgMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3BDZDGpPAAAAgAAAAIACAACAACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
ValueError,
],
[
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU210gwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gEBAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohyICAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBAQMEAQAAAAEEIgAgjCNTFzdDtZXftKB7crqOQuN5fadOh/59nXSX47ICiQMBBUdSIQMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3CECOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnNSrSIGAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zENkMak8AAACAAAAAgAMAAIAiBgMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3BDZDGpPAAAAgAAAAIACAACAACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA=",
ValueError,
],
]
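# Every malformed vector must raise its paired exception during parsing.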
for base64_psbt, error in tests:
with self.assertRaises(error):
PSBT.parse_base64(base64_psbt)
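# Valid PSBT vectors (BIP174-style): parsing and then re-serializing must
# reproduce the input base64 exactly.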
def test_parse(self):
tests = [
"cHNidP8BAHUCAAAAASaBcTce3/KF6Tet7qSze3gADAVmy7OtZGQXE8pCFxv2AAAAAAD+////AtPf9QUAAAAAGXapFNDFmQPFusKGh2DpD9UhpGZap2UgiKwA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHh7MuEwAAAQD9pQEBAAAAAAECiaPHHqtNIOA3G7ukzGmPopXJRjr6Ljl/hTPMti+VZ+UBAAAAFxYAFL4Y0VKpsBIDna89p95PUzSe7LmF/////4b4qkOnHf8USIk6UwpyN+9rRgi7st0tAXHmOuxqSJC0AQAAABcWABT+Pp7xp0XpdNkCxDVZQ6vLNL1TU/////8CAMLrCwAAAAAZdqkUhc/xCX/Z4Ai7NK9wnGIZeziXikiIrHL++E4sAAAAF6kUM5cluiHv1irHU6m80GfWx6ajnQWHAkcwRAIgJxK+IuAnDzlPVoMR3HyppolwuAJf3TskAinwf4pfOiQCIAGLONfc0xTnNMkna9b7QPZzMlvEuqFEyADS8vAtsnZcASED0uFWdJQbrUqZY3LLh+GFbTZSYG2YVi/jnF6efkE/IQUCSDBFAiEA0SuFLYXc2WHS9fSrZgZU327tzHlMDDPOXMMJ/7X85Y0CIGczio4OFyXBl/saiK9Z9R5E5CVbIBZ8hoQDHAXR8lkqASECI7cr7vCWXRC+B3jv7NYfysb3mk6haTkzgHNEZPhPKrMAAAAAAAAA",
"cHNidP8BAHUCAAAAASaBcTce3/KF6Tet7qSze3gADAVmy7OtZGQXE8pCFxv2AAAAAAD+////AtPf9QUAAAAAGXapFNDFmQPFusKGh2DpD9UhpGZap2UgiKwA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHh7MuEwAAAQD9pQEBAAAAAAECiaPHHqtNIOA3G7ukzGmPopXJRjr6Ljl/hTPMti+VZ+UBAAAAFxYAFL4Y0VKpsBIDna89p95PUzSe7LmF/////4b4qkOnHf8USIk6UwpyN+9rRgi7st0tAXHmOuxqSJC0AQAAABcWABT+Pp7xp0XpdNkCxDVZQ6vLNL1TU/////8CAMLrCwAAAAAZdqkUhc/xCX/Z4Ai7NK9wnGIZeziXikiIrHL++E4sAAAAF6kUM5cluiHv1irHU6m80GfWx6ajnQWHAkcwRAIgJxK+IuAnDzlPVoMR3HyppolwuAJf3TskAinwf4pfOiQCIAGLONfc0xTnNMkna9b7QPZzMlvEuqFEyADS8vAtsnZcASED0uFWdJQbrUqZY3LLh+GFbTZSYG2YVi/jnF6efkE/IQUCSDBFAiEA0SuFLYXc2WHS9fSrZgZU327tzHlMDDPOXMMJ/7X85Y0CIGczio4OFyXBl/saiK9Z9R5E5CVbIBZ8hoQDHAXR8lkqASECI7cr7vCWXRC+B3jv7NYfysb3mk6haTkzgHNEZPhPKrMAAAAAAQMEAQAAAAAAAA==",
"cHNidP8BAKACAAAAAqsJSaCMWvfEm4IS9Bfi8Vqz9cM9zxU4IagTn4d6W3vkAAAAAAD+////qwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QBAAAAAP7///8CYDvqCwAAAAAZdqkUdopAu9dAy+gdmI5x3ipNXHE5ax2IrI4kAAAAAAAAGXapFG9GILVT+glechue4O/p+gOcykWXiKwAAAAAAAEA3wIAAAABJoFxNx7f8oXpN63upLN7eAAMBWbLs61kZBcTykIXG/YAAAAAakcwRAIgcLIkUSPmv0dNYMW1DAQ9TGkaXSQ18Jo0p2YqncJReQoCIAEynKnazygL3zB0DsA5BCJCLIHLRYOUV663b8Eu3ZWzASECZX0RjTNXuOD0ws1G23s59tnDjZpwq8ubLeXcjb/kzjH+////AtPf9QUAAAAAGXapFNDFmQPFusKGh2DpD9UhpGZap2UgiKwA4fUFAAAAABepFDVF5uM7gyxHBQ8k0+65PJwDlIvHh7MuEwAAAQEgAOH1BQAAAAAXqRQ1RebjO4MsRwUPJNPuuTycA5SLx4cBBBYAFIXRNTfy4mVAWjTbr6nj3aAfuCMIACICAurVlmh8qAYEPtw94RbN8p1eklfBls0FXPaYyNAr8k6ZELSmumcAAACAAAAAgAIAAIAAIgIDlPYr6d8ZlSxVh3aK63aYBhrSxKJciU9H2MFitNchPQUQtKa6ZwAAAIABAACAAgAAgAA=",
"cHNidP8BAFUCAAAAASeaIyOl37UfxF8iD6WLD8E+HjNCeSqF1+Ns1jM7XLw5AAAAAAD/////AaBa6gsAAAAAGXapFP/pwAYQl8w7Y28ssEYPpPxCfStFiKwAAAAAAAEBIJVe6gsAAAAAF6kUY0UgD2jRieGtwN8cTRbqjxTA2+uHIgIDsTQcy6doO2r08SOM1ul+cWfVafrEfx5I1HVBhENVvUZGMEMCIAQktY7/qqaU4VWepck7v9SokGQiQFXN8HC2dxRpRC0HAh9cjrD+plFtYLisszrWTt5g6Hhb+zqpS5m9+GFR25qaAQEEIgAgdx/RitRZZm3Unz1WTj28QvTIR3TjYK2haBao7UiNVoEBBUdSIQOxNBzLp2g7avTxI4zW6X5xZ9Vp+sR/HkjUdUGEQ1W9RiED3lXR4drIBeP4pYwfv5uUwC89uq/hJ/78pJlfJvggg71SriIGA7E0HMunaDtq9PEjjNbpfnFn1Wn6xH8eSNR1QYRDVb1GELSmumcAAACAAAAAgAQAAIAiBgPeVdHh2sgF4/iljB+/m5TALz26r+En/vykmV8m+CCDvRC0prpnAAAAgAAAAIAFAACAAAA=",
"cHNidP8BAD8CAAAAAf//////////////////////////////////////////AAAAAAD/////AQAAAAAAAAAAA2oBAAAAAAAACg8BAgMEBQYHCAkPAQIDBAUGBwgJCgsMDQ4PAAA=",
"cHNidP8BAJ0BAAAAAnEOp2q0XFy2Q45gflnMA3YmmBgFrp4N/ZCJASq7C+U1AQAAAAD/////GQmU1qizyMgsy8+y+6QQaqBmObhyqNRHRlwNQliNbWcAAAAAAP////8CAOH1BQAAAAAZdqkUtrwsDuVlWoQ9ea/t0MzD991kNAmIrGBa9AUAAAAAFgAUEYjvjkzgRJ6qyPsUHL9aEXbmoIgAAAAATwEEiLIeA55TDKyAAAAAPbyKXJdp8DGxfnf+oVGGAyIaGP0Y8rmlTGyMGsdcvDUC8jBYSxVdHH8c1FEgplPEjWULQxtnxbLBPyfXFCA3wWkQJ1acUDEAAIAAAACAAAAAgAABAR8A4fUFAAAAABYAFDO5gvkbKPFgySC0q5XljOUN2jpKIgIDMJaA8zx9446mpHzU7NZvH1pJdHxv+4gI7QkDkkPjrVxHMEQCIC1wTO2DDFapCTRL10K2hS3M0QPpY7rpLTjnUlTSu0JFAiAthsQ3GV30bAztoITyopHD2i1kBw92v5uQsZXn7yj3cgEiBgMwloDzPH3jjqakfNTs1m8fWkl0fG/7iAjtCQOSQ+OtXBgnVpxQMQAAgAAAAIAAAACAAAAAAAEAAAAAAQEfAOH1BQAAAAAWABQ4j7lEMH63fvRRl9CwskXgefAR3iICAsd3Fh9z0LfHK57nveZQKT0T8JW8dlatH1Jdpf0uELEQRzBEAiBMsftfhpyULg4mEAV2ElQ5F5rojcqKncO6CPeVOYj6pgIgUh9JynkcJ9cOJzybFGFphZCTYeJb4nTqIA1+CIJ+UU0BIgYCx3cWH3PQt8crnue95lApPRPwlbx2Vq0fUl2l/S4QsRAYJ1acUDEAAIAAAACAAAAAgAAAAAAAAAAAAAAiAgLSDKUC7iiWhtIYFb1DqAY3sGmOH7zb5MrtRF9sGgqQ7xgnVpxQMQAAgAAAAIAAAACAAAAAAAQAAAAA",
]
for i, base64_psbt in enumerate(tests):
# parse does all the validation
psbt = PSBT.parse_base64(base64_psbt)
self.assertEqual(psbt.serialize_base64(), base64_psbt)
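# Same round-trip check, but from raw hex through PSBT.parse on a byte stream.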
def test_parse_2(self):
hex_psbt = "70736274ff01009d0100000002710ea76ab45c5cb6438e607e59cc037626981805ae9e0dfd9089012abb0be5350100000000ffffffff190994d6a8b3c8c82ccbcfb2fba4106aa06639b872a8d447465c0d42588d6d670000000000ffffffff0200e1f505000000001976a914b6bc2c0ee5655a843d79afedd0ccc3f7dd64340988ac605af405000000001600141188ef8e4ce0449eaac8fb141cbf5a1176e6a088000000004f010488b21e039e530cac800000003dbc8a5c9769f031b17e77fea1518603221a18fd18f2b9a54c6c8c1ac75cbc3502f230584b155d1c7f1cd45120a653c48d650b431b67c5b2c13f27d7142037c1691027569c503100008000000080000000800001011f00e1f5050000000016001433b982f91b28f160c920b4ab95e58ce50dda3a4a220203309680f33c7de38ea6a47cd4ecd66f1f5a49747c6ffb8808ed09039243e3ad5c47304402202d704ced830c56a909344bd742b6852dccd103e963bae92d38e75254d2bb424502202d86c437195df46c0ceda084f2a291c3da2d64070f76bf9b90b195e7ef28f77201220603309680f33c7de38ea6a47cd4ecd66f1f5a49747c6ffb8808ed09039243e3ad5c1827569c5031000080000000800000008000000000010000000001011f00e1f50500000000160014388fb944307eb77ef45197d0b0b245e079f011de220202c777161f73d0b7c72b9ee7bde650293d13f095bc7656ad1f525da5fd2e10b11047304402204cb1fb5f869c942e0e26100576125439179ae88dca8a9dc3ba08f7953988faa60220521f49ca791c27d70e273c9b14616985909361e25be274ea200d7e08827e514d01220602c777161f73d0b7c72b9ee7bde650293d13f095bc7656ad1f525da5fd2e10b1101827569c5031000080000000800000008000000000000000000000220202d20ca502ee289686d21815bd43a80637b0698e1fbcdbe4caed445f6c1a0a90ef1827569c50310000800000008000000080000000000400000000"
psbt = PSBT.parse(BytesIO(bytes.fromhex(hex_psbt)))
self.assertEqual(psbt.serialize().hex(), hex_psbt)
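# Start from a bare PSBT (unsigned tx only) and update it with utxo, script,
# and key-derivation metadata.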
def test_update_1(self):
psbt = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAAAAAA="
)
transaction_data = [
"0200000001aad73931018bd25f84ae400b68848be09db706eac2ac18298babee71ab656f8b0000000048473044022058f6fc7c6a33e1b31548d481c826c015bd30135aad42cd67790dab66d2ad243b02204a1ced2604c6735b6393e5b41691dd78b00f0c5942fb9f751856faa938157dba01feffffff0280f0fa020000000017a9140fb9463421696b82c833af241c78c17ddbde493487d0f20a270100000017a91429ca74f8a08f81999428185c97b5d852e4063f618765000000",
"0200000000010158e87a21b56daf0c23be8e7070456c336f7cbaa5c8757924f545887bb2abdd7501000000171600145f275f436b09a8cc9a2eb2a2f528485c68a56323feffffff02d8231f1b0100000017a914aed962d6654f9a2b36608eb9d64d2b260db4f1118700c2eb0b0000000017a914b7f5faf40e3d40a5a459b1db3535f2b72fa921e88702483045022100a22edcc6e5bc511af4cc4ae0de0fcd75c7e04d8c1c3a8aa9d820ed4b967384ec02200642963597b9b1bc22c75e9f3e117284a962188bf5e8a74c895089046a20ad770121035509a48eb623e10aace8bfd0212fdb8a8e5af3c94b0b133b95e114cab89e4f7965000000",
]
redeem_script_data = [
"475221029583bf39ae0a609747ad199addd634fa6108559d6c5cd39b4c2183f1ab96e07f2102dab61ff49a14db6a7d02b0cd1fbb78fc4b18312b5b4e54dae4dba2fbfef536d752ae",
"2200208c2353173743b595dfb4a07b72ba8e42e3797da74e87fe7d9d7497e3b2028903",
]
witness_script_data = [
"47522103089dc10c7ac6db54f91329af617333db388cead0c231f723379d1b99030b02dc21023add904f3d6dcf59ddb906b0dee23529b7ffb9ed50e5e86151926860221f0e7352ae"
]
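# Build the lookup tables that update() consumes: previous txs keyed by hash,
# named pubkeys keyed by both SEC serialization and hash160, redeem scripts
# keyed by hash160, and witness scripts keyed by sha256.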
tx_lookup = {}
for hex_tx in transaction_data:
tx_obj = Tx.parse(BytesIO(bytes.fromhex(hex_tx)))
tx_lookup[tx_obj.hash()] = tx_obj
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPd9TeAdPADNnSyH9SSUUbTVeFszDE23Ki6TBB5nCefAdHkK8Fm3qMQR6sHwA56zqRmKmxnHk37JkiFzvncDqoKmPWubu7hDF"
)
pubkey_lookup = {}
for i in range(6):
path = "m/0'/0'/{}'".format(i)
named_pubkey = NamedHDPublicKey.from_hd_priv(hd_priv, path)
pubkey_lookup[named_pubkey.sec()] = named_pubkey
pubkey_lookup[named_pubkey.hash160()] = named_pubkey
redeem_lookup = {}
for hex_redeem_script in redeem_script_data:
redeem_script = RedeemScript.parse(
BytesIO(bytes.fromhex(hex_redeem_script))
)
redeem_lookup[redeem_script.hash160()] = redeem_script
witness_lookup = {}
for hex_witness_script in witness_script_data:
witness_script = WitnessScript.parse(
BytesIO(bytes.fromhex(hex_witness_script))
)
witness_lookup[witness_script.sha256()] = witness_script
psbt.update(tx_lookup, pubkey_lookup, redeem_lookup, witness_lookup)
self.assertTrue(psbt.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHAQQiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEFR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuIgYCOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnMQ2QxqTwAAAIAAAACAAwAAgCIGAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcENkMak8AAACAAAAAgAIAAIAAIgIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1Ptnuylh3EQ2QxqTwAAAIAAAACABAAAgAAiAgJ/Y5l1fS7/VaE2rQLGhLGDi2VW5fG2s0KCqUtrUAUQlhDZDGpPAAAAgAAAAIAFAACAAA=="
self.assertEqual(psbt.serialize_base64(), want)
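# Setting the sighash type on each input is also part of the update step.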
def test_update_2(self):
psbt = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHAQQiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEFR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuIgYCOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnMQ2QxqTwAAAIAAAACAAwAAgCIGAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcENkMak8AAACAAAAAgAIAAIAAIgIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1Ptnuylh3EQ2QxqTwAAAIAAAACABAAAgAAiAgJ/Y5l1fS7/VaE2rQLGhLGDi2VW5fG2s0KCqUtrUAUQlhDZDGpPAAAAgAAAAIAFAACAAA=="
)
psbt.psbt_ins[0].hash_type = SIGHASH_ALL
psbt.psbt_ins[1].hash_type = SIGHASH_ALL
self.assertTrue(psbt.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEDBAEAAAABBCIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQVHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4iBgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8OcxDZDGpPAAAAgAAAAIADAACAIgYDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwQ2QxqTwAAAIAAAACAAgAAgAAiAgOppMN/WZbTqiXbrGtXCvBlA5RJKUJGCzVHU+2e7KWHcRDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA"
self.assertEqual(psbt.serialize_base64(), want)
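# Sign with the m/0'/0'/0' and m/0'/0'/2' children of the root tprv; the
# partial signatures land on the inputs whose scripts match those pubkeys.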
def test_sign_1(self):
psbt = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEDBAEAAAABBCIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQVHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4iBgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8OcxDZDGpPAAAAgAAAAIADAACAIgYDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwQ2QxqTwAAAIAAAACAAgAAgAAiAgOppMN/WZbTqiXbrGtXCvBlA5RJKUJGCzVHU+2e7KWHcRDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA"
)
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPd9TeAdPADNnSyH9SSUUbTVeFszDE23Ki6TBB5nCefAdHkK8Fm3qMQR6sHwA56zqRmKmxnHk37JkiFzvncDqoKmPWubu7hDF"
)
private_keys = [
hd_priv.traverse("m/0'/0'/0'").private_key,
hd_priv.traverse("m/0'/0'/2'").private_key,
]
psbt.sign_with_private_keys(private_keys)
self.assertTrue(psbt.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgf0cwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMAQEDBAEAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHIgIDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtxHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwEBAwQBAAAAAQQiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEFR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuIgYCOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnMQ2QxqTwAAAIAAAACAAwAAgCIGAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcENkMak8AAACAAAAAgAIAAIAAIgIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1Ptnuylh3EQ2QxqTwAAAIAAAACABAAAgAAiAgJ/Y5l1fS7/VaE2rQLGhLGDi2VW5fG2s0KCqUtrUAUQlhDZDGpPAAAAgAAAAIAFAACAAA=="
self.assertEqual(psbt.serialize_base64(), want)
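# The other cosigner's view: sign the same PSBT with the m/0'/0'/1' and
# m/0'/0'/3' children instead.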
def test_sign_2(self):
psbt = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEDBAEAAAABBCIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQVHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4iBgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8OcxDZDGpPAAAAgAAAAIADAACAIgYDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwQ2QxqTwAAAIAAAACAAgAAgAAiAgOppMN/WZbTqiXbrGtXCvBlA5RJKUJGCzVHU+2e7KWHcRDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA"
)
hd_priv = HDPrivateKey.parse(
"tprv8ZgxMBicQKsPd9TeAdPADNnSyH9SSUUbTVeFszDE23Ki6TBB5nCefAdHkK8Fm3qMQR6sHwA56zqRmKmxnHk37JkiFzvncDqoKmPWubu7hDF"
)
private_keys = [
hd_priv.traverse("m/0'/0'/1'").private_key,
hd_priv.traverse("m/0'/0'/3'").private_key,
]
psbt.sign_with_private_keys(private_keys)
self.assertTrue(psbt.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU210gwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gEBAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohyICAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBAQMEAQAAAAEEIgAgjCNTFzdDtZXftKB7crqOQuN5fadOh/59nXSX47ICiQMBBUdSIQMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3CECOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnNSriIGAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zENkMak8AAACAAAAAgAMAAIAiBgMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3BDZDGpPAAAAgAAAAIACAACAACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA="
self.assertEqual(psbt.serialize_base64(), want)
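# combine() merges the two half-signed PSBTs so each input carries both
# partial signatures.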
def test_combine(self):
psbt_1 = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgf0cwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMAQEDBAEAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHIgIDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtxHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwEBAwQBAAAAAQQiACCMI1MXN0O1ld+0oHtyuo5C43l9p06H/n2ddJfjsgKJAwEFR1IhAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcIQI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc1KuIgYCOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnMQ2QxqTwAAAIAAAACAAwAAgCIGAwidwQx6xttU+RMpr2FzM9s4jOrQwjH3IzedG5kDCwLcENkMak8AAACAAAAAgAIAAIAAIgIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1Ptnuylh3EQ2QxqTwAAAIAAAACABAAAgAAiAgJ/Y5l1fS7/VaE2rQLGhLGDi2VW5fG2s0KCqUtrUAUQlhDZDGpPAAAAgAAAAIAFAACAAA=="
)
psbt_2 = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU210gwRQIhAPYQOLMI3B2oZaNIUnRvAVdyk0IIxtJEVDk82ZvfIhd3AiAFbmdaZ1ptCgK4WxTl4pB02KJam1dgvqKBb2YZEKAG6gEBAwQBAAAAAQRHUiEClYO/Oa4KYJdHrRma3dY0+mEIVZ1sXNObTCGD8auW4H8hAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXUq4iBgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfxDZDGpPAAAAgAAAAIAAAACAIgYC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtcQ2QxqTwAAAIAAAACAAQAAgAABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohyICAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zRzBEAiBl9FulmYtZon/+GnvtAWrx8fkNVLOqj3RQql9WolEDvQIgf3JHA60e25ZoCyhLVtT/y4j3+3Weq74IqjDym4UTg9IBAQMEAQAAAAEEIgAgjCNTFzdDtZXftKB7crqOQuN5fadOh/59nXSX47ICiQMBBUdSIQMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3CECOt2QTz1tz1nduQaw3uI1Kbf/ue1Q5ehhUZJoYCIfDnNSriIGAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zENkMak8AAACAAAAAgAMAAIAiBgMIncEMesbbVPkTKa9hczPbOIzq0MIx9yM3nRuZAwsC3BDZDGpPAAAAgAAAAIACAACAACICA6mkw39ZltOqJdusa1cK8GUDlEkpQkYLNUdT7Z7spYdxENkMak8AAACAAAAAgAQAAIAAIgICf2OZdX0u/1WhNq0CxoSxg4tlVuXxtrNCgqlLa1AFEJYQ2QxqTwAAAIAAAACABQAAgAA="
)
psbt_1.combine(psbt_2)
self.assertTrue(psbt_1.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgf0cwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMASICAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXSDBFAiEA9hA4swjcHahlo0hSdG8BV3KTQgjG0kRUOTzZm98iF3cCIAVuZ1pnWm0KArhbFOXikHTYolqbV2C+ooFvZhkQoAbqAQEDBAEAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHIgIDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtxHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwEiAgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc0cwRAIgZfRbpZmLWaJ//hp77QFq8fH5DVSzqo90UKpfVqJRA70CIH9yRwOtHtuWaAsoS1bU/8uI9/t1nqu+CKow8puFE4PSAQEDBAEAAAABBCIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQVHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4iBgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8OcxDZDGpPAAAAgAAAAIADAACAIgYDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwQ2QxqTwAAAIAAAACAAgAAgAAiAgOppMN/WZbTqiXbrGtXCvBlA5RJKUJGCzVHU+2e7KWHcRDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA"
self.assertEqual(psbt_1.serialize_base64(), want)
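# combine() must also merge unknown key-value pairs contributed by each PSBT.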
def test_combine_extra(self):
psbt_1 = PSBT.parse_base64(
"cHNidP8BAD8CAAAAAf//////////////////////////////////////////AAAAAAD/////AQAAAAAAAAAAA2oBAAAAAAAKDwECAwQFBgcICQ8BAgMEBQYHCAkKCwwNDg8ACg8BAgMEBQYHCAkPAQIDBAUGBwgJCgsMDQ4PAAoPAQIDBAUGBwgJDwECAwQFBgcICQoLDA0ODwA="
)
psbt_2 = PSBT.parse_base64(
"cHNidP8BAD8CAAAAAf//////////////////////////////////////////AAAAAAD/////AQAAAAAAAAAAA2oBAAAAAAAKDwECAwQFBgcIEA8BAgMEBQYHCAkKCwwNDg8ACg8BAgMEBQYHCBAPAQIDBAUGBwgJCgsMDQ4PAAoPAQIDBAUGBwgQDwECAwQFBgcICQoLDA0ODwA="
)
psbt_1.combine(psbt_2)
self.assertTrue(psbt_1.validate())
want = "cHNidP8BAD8CAAAAAf//////////////////////////////////////////AAAAAAD/////AQAAAAAAAAAAA2oBAAAAAAAKDwECAwQFBgcICQ8BAgMEBQYHCAkKCwwNDg8KDwECAwQFBgcIEA8BAgMEBQYHCAkKCwwNDg8ACg8BAgMEBQYHCAkPAQIDBAUGBwgJCgsMDQ4PCg8BAgMEBQYHCBAPAQIDBAUGBwgJCgsMDQ4PAAoPAQIDBAUGBwgJDwECAwQFBgcICQoLDA0ODwoPAQIDBAUGBwgQDwECAwQFBgcICQoLDA0ODwA="
self.assertEqual(psbt_1.serialize_base64(), want)
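# Finalize a fully signed PSBT, then extract the network-serialized transaction.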
def test_finalize(self):
psbt = PSBT.parse_base64(
"cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAAiAgKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgf0cwRAIgdAGK1BgAl7hzMjwAFXILNoTMgSOJEEjn282bVa1nnJkCIHPTabdA4+tT3O+jOCPIBwUUylWn3ZVE8VfBZ5EyYRGMASICAtq2H/SaFNtqfQKwzR+7ePxLGDErW05U2uTbovv+9TbXSDBFAiEA9hA4swjcHahlo0hSdG8BV3KTQgjG0kRUOTzZm98iF3cCIAVuZ1pnWm0KArhbFOXikHTYolqbV2C+ooFvZhkQoAbqAQEDBAEAAAABBEdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSriIGApWDvzmuCmCXR60Zmt3WNPphCFWdbFzTm0whg/GrluB/ENkMak8AAACAAAAAgAAAAIAiBgLath/0mhTban0CsM0fu3j8SxgxK1tOVNrk26L7/vU21xDZDGpPAAAAgAAAAIABAACAAAEBIADC6wsAAAAAF6kUt/X69A49QKWkWbHbNTXyty+pIeiHIgIDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtxHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwEiAgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8Oc0cwRAIgZfRbpZmLWaJ//hp77QFq8fH5DVSzqo90UKpfVqJRA70CIH9yRwOtHtuWaAsoS1bU/8uI9/t1nqu+CKow8puFE4PSAQEDBAEAAAABBCIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQVHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4iBgI63ZBPPW3PWd25BrDe4jUpt/+57VDl6GFRkmhgIh8OcxDZDGpPAAAAgAAAAIADAACAIgYDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwQ2QxqTwAAAIAAAACAAgAAgAAiAgOppMN/WZbTqiXbrGtXCvBlA5RJKUJGCzVHU+2e7KWHcRDZDGpPAAAAgAAAAIAEAACAACICAn9jmXV9Lv9VoTatAsaEsYOLZVbl8bazQoKpS2tQBRCWENkMak8AAACAAAAAgAUAAIAA"
)
psbt.finalize()
self.assertTrue(psbt.validate())
want = "cHNidP8BAJoCAAAAAljoeiG1ba8MI76OcHBFbDNvfLqlyHV5JPVFiHuyq911AAAAAAD/////g40EJ9DsZQpoqka7CwmK6kQiwHGyyng1Kgd5WdB86h0BAAAAAP////8CcKrwCAAAAAAWABTYXCtx0AYLCcmIauuBXlCZHdoSTQDh9QUAAAAAFgAUAK6pouXw+HaliN9VRuh0LR2HAI8AAAAAAAEAuwIAAAABqtc5MQGL0l+ErkALaISL4J23BurCrBgpi6vucatlb4sAAAAASEcwRAIgWPb8fGoz4bMVSNSByCbAFb0wE1qtQs1neQ2rZtKtJDsCIEoc7SYExnNbY5PltBaR3XiwDwxZQvufdRhW+qk4FX26Af7///8CgPD6AgAAAAAXqRQPuUY0IWlrgsgzryQceMF9295JNIfQ8gonAQAAABepFCnKdPigj4GZlCgYXJe12FLkBj9hh2UAAAABB9oARzBEAiB0AYrUGACXuHMyPAAVcgs2hMyBI4kQSOfbzZtVrWecmQIgc9Npt0Dj61Pc76M4I8gHBRTKVafdlUTxV8FnkTJhEYwBSDBFAiEA9hA4swjcHahlo0hSdG8BV3KTQgjG0kRUOTzZm98iF3cCIAVuZ1pnWm0KArhbFOXikHTYolqbV2C+ooFvZhkQoAbqAUdSIQKVg785rgpgl0etGZrd1jT6YQhVnWxc05tMIYPxq5bgfyEC2rYf9JoU22p9ArDNH7t4/EsYMStbTlTa5Nui+/71NtdSrgABASAAwusLAAAAABepFLf1+vQOPUClpFmx2zU18rcvqSHohwEHIyIAIIwjUxc3Q7WV37Sge3K6jkLjeX2nTof+fZ10l+OyAokDAQjaBABHMEQCIGLrelVhB6fHP0WsSrWh3d9vcHX7EnWWmn84Pv/3hLyyAiAMBdu3Rw2/LwhVfdNWxzJcHtMJE+mWzThAlF2xIijaXwFHMEQCIGX0W6WZi1mif/4ae+0BavHx+Q1Us6qPdFCqX1aiUQO9AiB/ckcDrR7blmgLKEtW1P/LiPf7dZ6rvgiqMPKbhROD0gFHUiEDCJ3BDHrG21T5EymvYXMz2ziM6tDCMfcjN50bmQMLAtwhAjrdkE89bc9Z3bkGsN7iNSm3/7ntUOXoYVGSaGAiHw5zUq4AIgIDqaTDf1mW06ol26xrVwrwZQOUSSlCRgs1R1Ptnuylh3EQ2QxqTwAAAIAAAACABAAAgAAiAgJ/Y5l1fS7/VaE2rQLGhLGDi2VW5fG2s0KCqUtrUAUQlhDZDGpPAAAAgAAAAIAFAACAAA=="
self.assertEqual(psbt.serialize_base64(), want)
tx_obj = psbt.final_tx()
want = "0200000000010258e87a21b56daf0c23be8e7070456c336f7cbaa5c8757924f545887bb2abdd7500000000da00473044022074018ad4180097b873323c0015720b3684cc8123891048e7dbcd9b55ad679c99022073d369b740e3eb53dcefa33823c8070514ca55a7dd9544f157c167913261118c01483045022100f61038b308dc1da865a34852746f015772934208c6d24454393cd99bdf2217770220056e675a675a6d0a02b85b14e5e29074d8a25a9b5760bea2816f661910a006ea01475221029583bf39ae0a609747ad199addd634fa6108559d6c5cd39b4c2183f1ab96e07f2102dab61ff49a14db6a7d02b0cd1fbb78fc4b18312b5b4e54dae4dba2fbfef536d752aeffffffff838d0427d0ec650a68aa46bb0b098aea4422c071b2ca78352a077959d07cea1d01000000232200208c2353173743b595dfb4a07b72ba8e42e3797da74e87fe7d9d7497e3b2028903ffffffff0270aaf00800000000160014d85c2b71d0060b09c9886aeb815e50991dda124d00e1f5050000000016001400aea9a2e5f0f876a588df5546e8742d1d87008f000400473044022062eb7a556107a7c73f45ac4ab5a1dddf6f7075fb1275969a7f383efff784bcb202200c05dbb7470dbf2f08557dd356c7325c1ed30913e996cd3840945db12228da5f01473044022065f45ba5998b59a27ffe1a7bed016af1f1f90d54b3aa8f7450aa5f56a25103bd02207f724703ad1edb96680b284b56d4ffcb88f7fb759eabbe08aa30f29b851383d20147522103089dc10c7ac6db54f91329af617333db388cead0c231f723379d1b99030b02dc21023add904f3d6dcf59ddb906b0dee23529b7ffb9ed50e5e86151926860221f0e7352ae00000000"
self.assertEqual(tx_obj.serialize().hex(), want)
| 175.842391 | 2,131 | 0.902014 | 2,629 | 97,065 | 33.106124 | 0.157474 | 0.00563 | 0.006549 | 0.006319 | 0.471966 | 0.466244 | 0.458006 | 0.453157 | 0.442955 | 0.434809 | 0 | 0.359502 | 0.063391 | 97,065 | 551 | 2,132 | 176.161525 | 0.597864 | 0.000299 | 0 | 0.459615 | 0 | 0.065385 | 0.826411 | 0.82502 | 0 | 1 | 0 | 0 | 0.094231 | 1 | 0.05 | false | 0 | 0.015385 | 0 | 0.069231 | 0.001923 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d255160f444ee93bbb6ccf3ac73f1605edb91b4c | 151,065 | py | Python | tests/conftest.py | jacopoabbate/datavault-api-python-client | 70c3113b56db77de3835b4210dd7bffb22b34c9f | [
"MIT"
] | null | null | null | tests/conftest.py | jacopoabbate/datavault-api-python-client | 70c3113b56db77de3835b4210dd7bffb22b34c9f | [
"MIT"
] | null | null | null | tests/conftest.py | jacopoabbate/datavault-api-python-client | 70c3113b56db77de3835b4210dd7bffb22b34c9f | [
"MIT"
] | null | null | null | import datetime
from pathlib import Path
import pytest
import responses
from datavault_api_client.data_structures import (
ConcurrentDownloadManifest,
DiscoveredFileInfo,
DownloadDetails,
PartitionDownloadDetails,
)
@pytest.fixture
def mocked_response():
"""A pytest fixture to mock the behaviour of a server sending back a response."""
with responses.RequestsMock() as resp:
yield resp
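# Successful GET of the DataVault /v2/list root: a single '2020' directory node.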
@pytest.fixture
def mocked_top_level_datavault_api(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list",
json=[
{
'name': '2020',
'parent': '/v2/list',
'url': '/v2/list/2020',
'size': 0,
'createdAt': '2020-01-01T00:00:00',
'updatedAt': '2020-12-01T00:00:00',
'writable': False,
'directory': True
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Tue, 01 Dec 2020 16:49:36 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
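# Same root endpoint, but the server answers 400 with a ClientError payload.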
@pytest.fixture
def mocked_top_level_datavault_api_failed_request(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list",
json=[
{
'error': 'ClientError',
}
],
status=400,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Tue, 01 Dec 2020 16:49:36 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
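# The /2020 listing succeeds, but the nested /2020/12 request fails with
# 401 (full authentication required).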
@pytest.fixture
def mocked_datavault_api_with_down_the_line_failed_request(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020",
json=[
{
'name': '12',
'parent': '/v2/list/2020',
'url': '/v2/list/2020/12',
'size': 0,
'createdAt': '2020-12-01T00:00:00',
'updatedAt': '2020-12-02T00:00:00',
'writable': False,
'directory': True
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 13:21:52 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/12",
json=[
{
'error': 'unauthorized',
'error_description': 'Full authentication is required to access this resource',
}
],
status=401,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 13:24:50 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'Cache-Control': 'no-store',
'Pragma': 'no-cache',
'WWW-Authenticate': (
'Bearer realm="resource", error="unauthorized", '
'error_description="Full authentication is required to access this resource"'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
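# The root listing returns the '12' directory twice; the rest of the tree is
# normal down to a single COREREF file. Presumably exercises de-duplication
# of repeated nodes during crawling.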
@pytest.fixture
def mocked_datavault_api_with_repeated_node(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020",
json=[
{
'name': '12',
'parent': '/v2/list/2020',
'url': '/v2/list/2020/12',
'size': 0,
'createdAt': '2020-12-01T00:00:00',
'updatedAt': '2020-12-02T00:00:00',
'writable': False,
'directory': True
},
{
'name': '12',
'parent': '/v2/list/2020',
'url': '/v2/list/2020/12',
'size': 0,
'createdAt': '2020-12-01T00:00:00',
'updatedAt': '2020-12-02T00:00:00',
'writable': False,
'directory': True
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 13:21:52 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/12",
json=[
{
'name': '01',
'parent': '/v2/list/2020/12',
'url': '/v2/list/2020/12/01',
'size': 0,
'createdAt': '2020-12-01T23:21:18',
'updatedAt': '2020-12-02T09:14:31',
'writable': False,
'directory': True
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 14:08:39 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/12/01",
json=[
{
'name': 'S945',
'parent': '/v2/list/2020/12/01',
'url': '/v2/list/2020/12/01/S945',
'size': 0,
'createdAt': '2020-12-01T23:10:48',
'updatedAt': '2020-12-01T23:21:18',
'writable': False,
'directory': True
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 14:16:28 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/12/01/S945",
json=[
{
'name': 'CORE',
'parent': '/v2/list/2020/12/01/S945',
'url': '/v2/list/2020/12/01/S945/CORE',
'size': 0,
'createdAt': '2020-12-01T23:10:48',
'updatedAt': '2020-12-01T23:10:48',
'writable': False,
'directory': True
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 14:18:35 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/12/01/S945/CORE",
json=[
{
'name': 'COREREF_945_20201201.txt.bz2',
'fid': '20201201-S945_CORE_ALL_0_0',
'parent': '/v2/list/2020/12/01/S945/CORE',
'url': '/v2/data/2020/12/01/S945/CORE/20201201-S945_CORE_ALL_0_0',
'size': 15680,
'md5sum': 'c9cc20020def775933be0be9690a9b5a',
'createdAt': '2020-12-01T23:10:48',
'updatedAt': '2020-12-01T23:10:48',
'writable': False,
'directory': False,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
'Date': 'Wed, 02 Dec 2020 14:19:38 GMT',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS, DELETE, PUT',
'Access-Control-Max-Age': '3600',
'Access-Control-Allow-Headers': 'x-request-with, authorization, content-type',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Expose-Headers': (
'Cache-Control, Content-Language, Content-Length, Content-Type, '
'Expires, Last-Modified, Pragma'
),
'X-Content-Type-Options': 'nosniff',
'X-XSS-Protection': '1; mode=block',
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
'Pragma': 'no-cache',
'Expires': '0',
'Strict-Transport-Security': 'max-age=31536000 ; includeSubDomains',
'X-Frame-Options': 'DENY',
},
)
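# Note that the listing for /v2/list/2020 above deliberately returns the same
# "12" directory entry twice; the fixture name suggests this is meant to
# exercise the crawler's handling of repeated nodes during traversal.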
"""Datavault API simulated at the instrument level."""
@pytest.fixture
def mocked_datavault_api_instrument_level(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/16/S367/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_367_20200716.txt.bz2",
"fid": "20200716-S367_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/16/S367/WATCHLIST",
"url": "/v2/data/2020/07/16/S367/WATCHLIST/20200716-S367_WATCHLIST_username_0_0",
"size": 100145874,
"md5sum": "fb34325ec9262adc74c945a9e7c9b465",
"createdAt": "2020-07-17T02:18:08",
"updatedAt": "2020-07-17T02:18:08",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:25:03 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization,"
" content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language,"
" Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
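# A minimal illustration of how the mock behaves once active (assuming, as is
# conventional, that the mocked_response fixture yields an activated
# responses.RequestsMock); the function below is illustrative only and is not
# collected or used by the test suite:
def _example_instrument_level_request():
    import requests

    response = requests.get(
        "https://api.icedatavault.icedataservices.com/v2/list/2020/07/16/S367/WATCHLIST"
    )
    assert response.status_code == 200
    assert response.json()[0]["fid"] == "20200716-S367_WATCHLIST_username_0_0"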
@pytest.fixture
def mocked_files_available_to_download_single_instrument():
files_available_to_download = [
DiscoveredFileInfo(
file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/"
"07/16/S367/WATCHLIST/20200716-S367_WATCHLIST_username_0_0"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=16),
size=100145874,
md5sum="fb34325ec9262adc74c945a9e7c9b465",
),
]
return files_available_to_download
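# The file_name above ("WATCHLIST_367_20200716.txt.bz2") intentionally differs
# from the name in the mocked listing ("WATCHLIST_username_367_20200716.txt.bz2"):
# these fixtures encode the expectation that the username token is stripped
# when a watchlist file is discovered.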
@pytest.fixture
def mocked_download_details_single_instrument():
download_details = DownloadDetails(
file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/"
"S367/WATCHLIST/20200716-S367_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716.txt.bz2"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=16),
size=100145874,
md5sum="fb34325ec9262adc74c945a9e7c9b465",
is_partitioned=True,
)
return download_details
@pytest.fixture
def mocked_file_partitions_single_instrument():
list_of_file_partitions = [
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=0&end=5242880"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=5242881&end=10485760"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=10485761&end=15728640"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=15728641&end=20971520"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=20971521&end=26214400"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=26214401&end=31457280"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_6.txt"
),
partition_index=6,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=31457281&end=36700160"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_7.txt"
),
partition_index=7,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=36700161&end=41943040"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_8.txt"
),
partition_index=8,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=41943041&end=47185920"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_9.txt"
),
partition_index=9,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=47185921&end=52428800"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_10.txt"
),
partition_index=10,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=52428801&end=57671680"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_11.txt"
),
partition_index=11,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=57671681&end=62914560"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_12.txt"
),
partition_index=12,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=62914561&end=68157440"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_13.txt"
),
partition_index=13,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=68157441&end=73400320"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_14.txt"
),
partition_index=14,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=73400321&end=78643200"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_15.txt"
),
partition_index=15,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=78643201&end=83886080"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_16.txt"
),
partition_index=16,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=83886081&end=89128960"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_17.txt"
),
partition_index=17,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=89128961&end=94371840"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_18.txt"
),
partition_index=18,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=94371841&end=99614720"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_19.txt"
),
partition_index=19,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200716.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/16/S367/WATCHLIST/"
"20200716-S367_WATCHLIST_username_0_0?start=99614721&end=100145874"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/16/S367/WATCHLIST", "WATCHLIST_367_20200716_20.txt"
),
partition_index=20,
),
]
return list_of_file_partitions
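# Every partition fixture in this module follows the same 5 MiB
# (5,242,880-byte) chunking scheme: the first range starts at byte 0, each
# subsequent range starts one byte after the previous end, and the last range
# is truncated to the file size. A minimal sketch of that arithmetic (an
# illustrative helper, not part of the package under test):
def _illustrative_partition_ranges(size, chunk_size=5 * 1024 * 1024):
    """Yield the (start, end) byte offsets encoded in the partition URLs above."""
    number_of_partitions = -(-size // chunk_size)  # ceiling division
    for index in range(1, number_of_partitions + 1):
        start = 0 if index == 1 else (index - 1) * chunk_size + 1
        end = min(index * chunk_size, size)
        yield start, end
# For a 100,145,874-byte file this yields (0, 5242880), (5242881, 10485760),
# ..., (99614721, 100145874), matching the twenty partitions above.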
"""Datavault API with single source and a single day."""
@pytest.fixture
def mocked_datavault_api_single_source_single_day(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list",
json=[
{
"name": "2020",
"parent": "/v2/list",
"url": "/v2/list/2020",
"size": 0,
"createdAt": "2020-01-01T00:00:00",
"updatedAt": "2020-07-30T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:19:56 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020",
json=[
{
"name": "07",
"parent": "/v2/list/2020",
"url": "/v2/list/2020/07",
"size": 0,
"createdAt": "2020-07-01T00:00:00",
"updatedAt": "2020-07-30T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:20:44 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07",
json=[
{
"name": "22",
"parent": "/v2/list/2020/07",
"url": "/v2/list/2020/07/22",
"size": 0,
"createdAt": "2020-07-22T22:44:01",
"updatedAt": "2020-07-23T05:10:57",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:22:42 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/22",
json=[
{
"name": "S945",
"parent": "/v2/list/2020/07/22",
"url": "/v2/list/2020/07/22/S945",
"size": 0,
"createdAt": "2020-07-22T22:40:41",
"updatedAt": "2020-07-22T22:44:01",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:23:38 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/22/S945",
json=[
{
"name": "CORE",
"parent": "/v2/list/2020/07/22/S945",
"url": "/v2/list/2020/07/22/S945/CORE",
"size": 0,
"createdAt": "2020-07-22T22:41:41",
"updatedAt": "2020-07-22T22:41:41",
"writable": False,
"directory": True,
},
{
"name": "CROSS",
"parent": "/v2/list/2020/07/22/S945",
"url": "/v2/list/2020/07/22/S945/CROSS",
"size": 0,
"createdAt": "2020-07-22T22:40:41",
"updatedAt": "2020-07-22T22:40:41",
"writable": False,
"directory": True,
},
{
"name": "WATCHLIST",
"parent": "/v2/list/2020/07/22/S945",
"url": "/v2/list/2020/07/22/S945/WATCHLIST",
"size": 0,
"createdAt": "2020-07-22T22:44:01",
"updatedAt": "2020-07-22T22:44:01",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:24:08 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/22/S945/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_945_20200722.txt.bz2",
"fid": "20200722-S945_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/22/S945/WATCHLIST",
"url": "/v2/data/2020/07/22/S945/WATCHLIST/20200722-S945_WATCHLIST_username_0_0",
"size": 61663360,
"md5sum": "78571e930fb12fcfb2fb70feb07c7bcf",
"createdAt": "2020-07-22T22:44:01",
"updatedAt": "2020-07-22T22:44:01",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:25:04 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/22/S945/CORE",
json=[
{
"name": "COREREF_945_20200722.txt.bz2",
"fid": "20200722-S945_CORE_ALL_0_0",
"parent": "/v2/list/2020/07/22/S945/CORE",
"url": "/v2/data/2020/07/22/S945/CORE/20200722-S945_CORE_ALL_0_0",
"size": 17734,
"md5sum": "3548e03c8833b0e2133c80ac3b1dcdac",
"createdAt": "2020-07-22T22:41:41",
"updatedAt": "2020-07-22T22:41:41",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:26:03 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/22/S945/CROSS",
json=[
{
"name": "CROSSREF_945_20200722.txt.bz2",
"fid": "20200722-S945_CROSS_ALL_0_0",
"parent": "/v2/list/2020/07/22/S945/CROSS",
"url": "/v2/data/2020/07/22/S945/CROSS/20200722-S945_CROSS_ALL_0_0",
"size": 32822,
"md5sum": "936c0515dcbc27d2e2fc3ebdcf5f883a",
"createdAt": "2020-07-22T22:40:41",
"updatedAt": "2020-07-22T22:40:41",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Thu, 30 Jul 2020 11:27:03 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
@pytest.fixture
def mocked_files_available_to_download_single_source_single_day():
files_available_to_download = [
DiscoveredFileInfo(
file_name="COREREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/"
"CORE/20200722-S945_CORE_ALL_0_0"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=17734,
md5sum="3548e03c8833b0e2133c80ac3b1dcdac",
),
DiscoveredFileInfo(
file_name="CROSSREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/"
"CROSS/20200722-S945_CROSS_ALL_0_0"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=32822,
md5sum="936c0515dcbc27d2e2fc3ebdcf5f883a",
),
DiscoveredFileInfo(
file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/"
"WATCHLIST/20200722-S945_WATCHLIST_username_0_0"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=61663360,
md5sum="78571e930fb12fcfb2fb70feb07c7bcf",
),
]
return files_available_to_download
@pytest.fixture
def mocked_whole_files_download_details_single_source_single_day():
list_of_download_details = [
DownloadDetails(
file_name="COREREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/CORE/"
"20200722-S945_CORE_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/CORE", "COREREF_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=17734,
md5sum="3548e03c8833b0e2133c80ac3b1dcdac",
is_partitioned=False,
),
DownloadDetails(
file_name="CROSSREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/CROSS/"
"20200722-S945_CROSS_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/CROSS", "CROSSREF_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=32822,
md5sum="936c0515dcbc27d2e2fc3ebdcf5f883a",
is_partitioned=False,
),
DownloadDetails(
file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=61663360,
md5sum="78571e930fb12fcfb2fb70feb07c7bcf",
is_partitioned=True,
),
]
return list_of_download_details
@pytest.fixture
def mocked_whole_files_download_details_single_source_single_day_synchronous_case():
list_of_download_details = [
DownloadDetails(
file_name="COREREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/CORE/"
"20200722-S945_CORE_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/CORE", "COREREF_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=17734,
md5sum="3548e03c8833b0e2133c80ac3b1dcdac",
is_partitioned=None,
),
DownloadDetails(
file_name="CROSSREF_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/CROSS/"
"20200722-S945_CROSS_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/CROSS", "CROSSREF_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=32822,
md5sum="936c0515dcbc27d2e2fc3ebdcf5f883a",
is_partitioned=None,
),
DownloadDetails(
file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=7, day=22),
size=61663360,
md5sum="78571e930fb12fcfb2fb70feb07c7bcf",
is_partitioned=None,
),
]
return list_of_download_details
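# The synchronous-case fixture above mirrors the previous one except that
# is_partitioned is None throughout: in these fixtures the flag is only
# resolved to True/False on the concurrent download path, where large files
# (here, the WATCHLIST file) are split into ranged requests.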
@pytest.fixture
def mocked_partitions_download_details_single_source_single_day():
return [
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=0&end=5242880"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=5242881&end=10485760"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=10485761&end=15728640"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=15728641&end=20971520"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=20971521&end=26214400"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=26214401&end=31457280"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_6.txt"
),
partition_index=6,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=31457281&end=36700160"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_7.txt"
),
partition_index=7,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=36700161&end=41943040"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_8.txt"
),
partition_index=8,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=41943041&end=47185920"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_9.txt"
),
partition_index=9,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=47185921&end=52428800"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_10.txt"
),
partition_index=10,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=52428801&end=57671680"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_11.txt"
),
partition_index=11,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_945_20200722.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/22/S945/WATCHLIST/"
"20200722-S945_WATCHLIST_username_0_0?start=57671681&end=61663360"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/22/S945/WATCHLIST", "WATCHLIST_945_20200722_12.txt"
),
partition_index=12,
),
]
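# The 61,663,360-byte WATCHLIST file yields twelve partitions under the same
# 5 MiB scheme sketched in _illustrative_partition_ranges above, with the
# final range truncated to end=61663360.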
""""Datavault API with single source and multiple days."""
@pytest.fixture
def mocked_datavault_api_single_source_multiple_days(mocked_response):
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list",
json=[
{
"name": "2020",
"parent": "/v2/list",
"url": "/v2/list/2020",
"size": 0,
"createdAt": "2020-01-01T00:00:00",
"updatedAt": "2020-07-30T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:14:00 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, "
"content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language, "
"Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020",
json=[
{
"name": "07",
"parent": "/v2/list/2020",
"url": "/v2/list/2020/07",
"size": 0,
"createdAt": "2020-07-01T00:00:00",
"updatedAt": "2020-07-30T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:15:28 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, "
"content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language,"
" Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07",
json=[
{
"name": "20",
"parent": "/v2/list/2020/07",
"url": "/v2/list/2020/07/20",
"size": 0,
"createdAt": "2020-07-20T22:08:28",
"updatedAt": "2020-07-23T22:02:26",
"writable": False,
"directory": True,
},
{
"name": "17",
"parent": "/v2/list/2020/07",
"url": "/v2/list/2020/07/17",
"size": 0,
"createdAt": "2020-07-17T23:45:36",
"updatedAt": "2020-07-20T07:48:01",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:16:33 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, "
"content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language, "
"Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/20",
json=[
{
"name": "S207",
"parent": "/v2/list/2020/07/20",
"url": "/v2/list/2020/07/20/S207",
"size": 0,
"createdAt": "2020-07-21T06:35:36",
"updatedAt": "2020-07-21T06:41:03",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:19:10 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, "
"content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language, "
"Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/20/S207",
json=[
{
"name": "CORE",
"parent": "/v2/list/2020/07/20/S207",
"url": "/v2/list/2020/07/20/S207/CORE",
"size": 0,
"createdAt": "2020-07-21T06:41:03",
"updatedAt": "2020-07-21T06:41:03",
"writable": False,
"directory": True,
},
{
"name": "CROSS",
"parent": "/v2/list/2020/07/20/S207",
"url": "/v2/list/2020/07/20/S207/CROSS",
"size": 0,
"createdAt": "2020-07-21T06:38:41",
"updatedAt": "2020-07-21T06:38:41",
"writable": False,
"directory": True,
},
{
"name": "WATCHLIST",
"parent": "/v2/list/2020/07/20/S207",
"url": "/v2/list/2020/07/20/S207/WATCHLIST",
"size": 0,
"createdAt": "2020-07-21T06:35:36",
"updatedAt": "2020-07-21T06:35:36",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:21:32 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization,"
" content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language,"
" Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/20/S207/CORE",
json=[
{
"name": "COREREF_207_20200720.txt.bz2",
"fid": "20200720-S207_CORE_ALL_0_0",
"parent": "/v2/list/2020/07/20/S207/CORE",
"url": "/v2/data/2020/07/20/S207/CORE/20200720-S207_CORE_ALL_0_0",
"size": 4548016,
"md5sum": "a46a5f07b6a402d4023ef550df6a12e4",
"createdAt": "2020-07-21T06:41:03",
"updatedAt": "2020-07-21T06:41:03",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:24:37 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization,"
" content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Cache-Control, Content-Language,"
" Content-Length, Content-Type"
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/20/S207/CROSS",
json=[
{
"name": "CROSSREF_207_20200720.txt.bz2",
"fid": "20200720-S207_CROSS_ALL_0_0",
"parent": "/v2/list/2020/07/20/S207/CROSS",
"url": "/v2/data/2020/07/20/S207/CROSS/20200720-S207_CROSS_ALL_0_0",
"size": 14571417,
"md5sum": "6b3dbd152e7dccf4147f62b6ce1c78c3",
"createdAt": "2020-07-21T06:38:41",
"updatedAt": "2020-07-21T06:38:41",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:26:11 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/20/S207/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_207_20200720.txt.bz2",
"fid": "20200720-S207_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/20/S207/WATCHLIST",
"url": "/v2/data/2020/07/20/S207/WATCHLIST/20200720-S207_WATCHLIST_username_0_0",
"size": 70613654,
"md5sum": "ba2c00511520a3cf4b5383ceedb3b41d",
"createdAt": "2020-07-21T06:35:36",
"updatedAt": "2020-07-21T06:35:36",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:27:51 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/17",
json=[
{
"name": "S207",
"parent": "/v2/list/2020/07/17",
"url": "/v2/list/2020/07/17/S207",
"size": 0,
"createdAt": "2020-07-18T07:02:07",
"updatedAt": "2020-07-18T07:07:02",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:30:40 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/17/S207",
json=[
{
"name": "CORE",
"parent": "/v2/list/2020/07/17/S207",
"url": "/v2/list/2020/07/17/S207/CORE",
"size": 0,
"createdAt": "2020-07-18T07:07:02",
"updatedAt": "2020-07-18T07:07:02",
"writable": False,
"directory": True,
},
{
"name": "CROSS",
"parent": "/v2/list/2020/07/17/S207",
"url": "/v2/list/2020/07/17/S207/CROSS",
"size": 0,
"createdAt": "2020-07-18T07:05:13",
"updatedAt": "2020-07-18T07:05:13",
"writable": False,
"directory": True,
},
{
"name": "WATCHLIST",
"parent": "/v2/list/2020/07/17/S207",
"url": "/v2/list/2020/07/17/S207/WATCHLIST",
"size": 0,
"createdAt": "2020-07-18T07:02:07",
"updatedAt": "2020-07-18T07:02:07",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:32:26 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/17/S207/CORE",
json=[
{
"name": "COREREF_207_20200717.txt.bz2",
"fid": "20200717-S207_CORE_ALL_0_0",
"parent": "/v2/list/2020/07/17/S207/CORE",
"url": "/v2/data/2020/07/17/S207/CORE/20200717-S207_CORE_ALL_0_0",
"size": 3910430,
"md5sum": "63958e5bc651b95da410e76a1763dde7",
"createdAt": "2020-07-18T07:07:02",
"updatedAt": "2020-07-18T07:07:02",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:34:45 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/17/S207/CROSS",
json=[
{
"name": "CROSSREF_207_20200717.txt.bz2",
"fid": "20200717-S207_CROSS_ALL_0_0",
"parent": "/v2/list/2020/07/17/S207/CROSS",
"url": "/v2/data/2020/07/17/S207/CROSS/20200717-S207_CROSS_ALL_0_0",
"size": 13816558,
"md5sum": "d1316740714e9b13cf03acf02a23c596",
"createdAt": "2020-07-18T07:05:13",
"updatedAt": "2020-07-18T07:05:13",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:36:58 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/17/S207/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_207_20200717.txt.bz2",
"fid": "20200717-S207_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/17/S207/WATCHLIST",
"url": "/v2/data/2020/07/17/S207/WATCHLIST/20200717-S207_WATCHLIST_username_0_0",
"size": 63958346,
"md5sum": "9be9099186dfd8a7e0012e58fd49a3da",
"createdAt": "2020-07-18T07:02:07",
"updatedAt": "2020-07-18T07:02:07",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Tue, 04 Aug 2020 09:38:30 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
@pytest.fixture
def mocked_files_available_to_download_single_source_multiple_days():
files_available_to_download = [
DiscoveredFileInfo(
file_name="COREREF_207_20200717.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/CORE/"
"20200717-S207_CORE_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=17),
size=3910430,
md5sum="63958e5bc651b95da410e76a1763dde7",
),
DiscoveredFileInfo(
file_name="CROSSREF_207_20200717.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/CROSS/"
"20200717-S207_CROSS_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=17),
size=13816558,
md5sum="d1316740714e9b13cf03acf02a23c596",
),
DiscoveredFileInfo(
file_name="WATCHLIST_207_20200717.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/WATCHLIST/"
"20200717-S207_WATCHLIST_username_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=17),
size=63958346,
md5sum="9be9099186dfd8a7e0012e58fd49a3da",
),
DiscoveredFileInfo(
file_name="COREREF_207_20200720.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CORE/"
"20200720-S207_CORE_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=20),
size=4548016,
md5sum="a46a5f07b6a402d4023ef550df6a12e4",
),
DiscoveredFileInfo(
file_name="CROSSREF_207_20200720.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CROSS/"
"20200720-S207_CROSS_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=20),
size=14571417,
md5sum="6b3dbd152e7dccf4147f62b6ce1c78c3",
),
DiscoveredFileInfo(
file_name="WATCHLIST_207_20200720.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/WATCHLIST/"
"20200720-S207_WATCHLIST_username_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=20),
size=70613654,
md5sum="ba2c00511520a3cf4b5383ceedb3b41d",
),
]
return files_available_to_download
@pytest.fixture
def mocked_download_info_single_source_multiple_days_synchronous():
list_of_download_details = [
DownloadDetails(
file_name='COREREF_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/CORE/'
'20200717-S207_CORE_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/17/S207/CORE/COREREF_207_20200717.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=3910430,
md5sum='63958e5bc651b95da410e76a1763dde7',
is_partitioned=None,
),
DownloadDetails(
file_name='CROSSREF_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/'
'CROSS/20200717-S207_CROSS_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/17/S207/CROSS/CROSSREF_207_20200717.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=13816558,
md5sum='d1316740714e9b13cf03acf02a23c596',
is_partitioned=None,
),
DownloadDetails(
file_name='WATCHLIST_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/WATCHLIST/'
'20200717-S207_WATCHLIST_username_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/17/S207/WATCHLIST/WATCHLIST_207_20200717.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=63958346,
md5sum='9be9099186dfd8a7e0012e58fd49a3da',
is_partitioned=None,
),
DownloadDetails(
file_name='COREREF_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CORE/'
'20200720-S207_CORE_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/20/S207/CORE/COREREF_207_20200720.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=4548016,
md5sum='a46a5f07b6a402d4023ef550df6a12e4',
is_partitioned=None,
),
DownloadDetails(
file_name='CROSSREF_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CROSS/'
'20200720-S207_CROSS_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/20/S207/CROSS/CROSSREF_207_20200720.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=14571417,
md5sum='6b3dbd152e7dccf4147f62b6ce1c78c3',
is_partitioned=None,
),
DownloadDetails(
file_name='WATCHLIST_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/WATCHLIST/'
'20200720-S207_WATCHLIST_username_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/', '2020/07/20/S207/WATCHLIST/WATCHLIST_207_20200720.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=70613654,
md5sum='ba2c00511520a3cf4b5383ceedb3b41d',
is_partitioned=None,
),
]
return list_of_download_details
@pytest.fixture
def mocked_download_info_single_source_multiple_days_concurrent():
list_of_download_details = [
DownloadDetails(
file_name='COREREF_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/CORE/'
'20200717-S207_CORE_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/17/S207/CORE/COREREF_207_20200717.txt.bz2',
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=3910430,
md5sum='63958e5bc651b95da410e76a1763dde7',
is_partitioned=False,
),
DownloadDetails(
file_name='CROSSREF_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/CROSS/'
'20200717-S207_CROSS_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/17/S207/CROSS/CROSSREF_207_20200717.txt.bz2',
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=13816558,
md5sum='d1316740714e9b13cf03acf02a23c596',
is_partitioned=False,
),
DownloadDetails(
file_name='WATCHLIST_207_20200717.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/17/S207/WATCHLIST/'
'20200717-S207_WATCHLIST_username_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/17/S207/WATCHLIST/WATCHLIST_207_20200717.txt.bz2',
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 17, 0, 0),
size=63958346,
md5sum='9be9099186dfd8a7e0012e58fd49a3da',
is_partitioned=True,
),
DownloadDetails(
file_name='COREREF_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CORE/'
'20200720-S207_CORE_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/20/S207/CORE/COREREF_207_20200720.txt.bz2',
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=4548016,
md5sum='a46a5f07b6a402d4023ef550df6a12e4',
is_partitioned=False,
),
DownloadDetails(
file_name='CROSSREF_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/CROSS/'
'20200720-S207_CROSS_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/20/S207/CROSS/CROSSREF_207_20200720.txt.bz2',
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=14571417,
md5sum='6b3dbd152e7dccf4147f62b6ce1c78c3',
is_partitioned=False,
),
DownloadDetails(
file_name='WATCHLIST_207_20200720.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/07/20/S207/WATCHLIST/'
'20200720-S207_WATCHLIST_username_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
'Temp/Data/2020/07/20/S207/WATCHLIST/WATCHLIST_207_20200720.txt.bz2'
),
source_id=207,
reference_date=datetime.datetime(2020, 7, 20, 0, 0),
size=70613654,
md5sum='ba2c00511520a3cf4b5383ceedb3b41d',
is_partitioned=True,
)
]
return set_of_files_available_to_download
"""Datavault API with multiple sources over a single day."""
@pytest.fixture
def mocked_datavault_api_multiple_sources_single_day(mocked_response):
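    # Registers a chain of mocked GET responses emulating the DataVault
    # /v2/list endpoint hierarchy (year -> month -> day -> source -> file
    # type -> files), so that the directory-discovery code under test can
    # walk the whole tree without touching the network.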
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list",
json=[
{
"name": "2020",
"parent": "/v2/list",
"url": "/v2/list/2020",
"size": 0,
"createdAt": "2020-01-01T00:00:00",
"updatedAt": "2020-08-05T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:23:14 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020",
json=[
{
"name": "07",
"parent": "/v2/list/2020",
"url": "/v2/list/2020/07",
"size": 0,
"createdAt": "2020-07-01T00:00:00",
"updatedAt": "2020-07-31T00:00:00",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:33:34 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07",
json=[
{
"name": "21",
"parent": "/v2/list/2020/07",
"url": "/v2/list/2020/07/21",
"size": 0,
"createdAt": "2020-07-21T22:00:49",
"updatedAt": "2020-07-23T21:34:01",
"writable": False,
"directory": True,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:35:25 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21",
json=[
{
"name": "S367",
"parent": "/v2/list/2020/07/21",
"url": "/v2/list/2020/07/21/S367",
"size": 0,
"createdAt": "2020-07-22T00:59:44",
"updatedAt": "2020-07-23T15:28:41",
"writable": False,
"directory": True,
},
{
"name": "S207",
"parent": "/v2/list/2020/07/21",
"url": "/v2/list/2020/07/21/S207",
"size": 0,
"createdAt": "2020-07-22T06:36:31",
"updatedAt": "2020-07-22T06:43:36",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:38:21 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S367",
json=[
{
"name": "CORE",
"parent": "/v2/list/2020/07/21/S367",
"url": "/v2/list/2020/07/21/S367/CORE",
"size": 0,
"createdAt": "2020-07-22T01:00:24",
"updatedAt": "2020-07-23T15:23:11",
"writable": False,
"directory": True,
},
{
"name": "CROSS",
"parent": "/v2/list/2020/07/21/S367",
"url": "/v2/list/2020/07/21/S367/CROSS",
"size": 0,
"createdAt": "2020-07-22T00:59:44",
"updatedAt": "2020-07-23T15:28:41",
"writable": False,
"directory": True,
},
{
"name": "WATCHLIST",
"parent": "/v2/list/2020/07/21/S367",
"url": "/v2/list/2020/07/21/S367/WATCHLIST",
"size": 0,
"createdAt": "2020-07-22T01:00:06",
"updatedAt": "2020-07-22T01:00:06",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:43:26 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S367/CORE",
json=[
{
"name": "COREREF_367_20200721.txt.bz2",
"fid": "20200721-S367_CORE_ALL_0_0",
"parent": "/v2/list/2020/07/21/S367/CORE",
"url": "/v2/data/2020/07/21/S367/CORE/20200721-S367_CORE_ALL_0_0",
"size": 706586,
"md5sum": "e28385e918aa71720235232c9a895b64",
"createdAt": "2020-07-22T01:00:24",
"updatedAt": "2020-07-23T15:23:11",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:46:15 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S367/CROSS",
json=[
{
"name": "CROSSREF_367_20200721.txt.bz2",
"fid": "20200721-S367_CROSS_ALL_0_0",
"parent": "/v2/list/2020/07/21/S367/CROSS",
"url": "/v2/data/2020/07/21/S367/CROSS/20200721-S367_CROSS_ALL_0_0",
"size": 879897,
"md5sum": "fdb7592c8806a28f59c4d4da1e934c43",
"createdAt": "2020-07-22T00:59:44",
"updatedAt": "2020-07-23T15:28:41",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:46:30 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S367/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_367_20200721.txt.bz2",
"fid": "20200721-S367_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/21/S367/WATCHLIST",
"url": "/v2/data/2020/07/21/S367/WATCHLIST/20200721-S367_WATCHLIST_username_0_0",
"size": 82451354,
"md5sum": "62df718ef5eb5f9f1ea3f6ea1f826c30",
"createdAt": "2020-07-22T01:00:06",
"updatedAt": "2020-07-22T01:00:06",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:46:44 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S207",
json=[
{
"name": "CORE",
"parent": "/v2/list/2020/07/21/S207",
"url": "/v2/list/2020/07/21/S207/CORE",
"size": 0,
"createdAt": "2020-07-22T06:43:36",
"updatedAt": "2020-07-22T06:43:36",
"writable": False,
"directory": True,
},
{
"name": "CROSS",
"parent": "/v2/list/2020/07/21/S207",
"url": "/v2/list/2020/07/21/S207/CROSS",
"size": 0,
"createdAt": "2020-07-22T06:41:50",
"updatedAt": "2020-07-22T06:41:50",
"writable": False,
"directory": True,
},
{
"name": "WATCHLIST",
"parent": "/v2/list/2020/07/21/S207",
"url": "/v2/list/2020/07/21/S207/WATCHLIST",
"size": 0,
"createdAt": "2020-07-22T06:36:31",
"updatedAt": "2020-07-22T06:36:31",
"writable": False,
"directory": True,
},
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 10:52:19 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S207/CORE",
json=[
{
"name": "COREREF_207_20200721.txt.bz2",
"fid": "20200721-S207_CORE_ALL_0_0",
"parent": "/v2/list/2020/07/21/S207/CORE",
"url": "/v2/data/2020/07/21/S207/CORE/20200721-S207_CORE_ALL_0_0",
"size": 4590454,
"md5sum": "c1a079841f84676e91b5021afd3f5272",
"createdAt": "2020-07-22T06:43:36",
"updatedAt": "2020-07-22T06:43:36",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 11:00:59 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S207/CROSS",
json=[
{
"name": "CROSSREF_207_20200721.txt.bz2",
"fid": "20200721-S207_CROSS_ALL_0_0",
"parent": "/v2/list/2020/07/21/S207/CROSS",
"url": "/v2/data/2020/07/21/S207/CROSS/20200721-S207_CROSS_ALL_0_0",
"size": 14690557,
"md5sum": "f2683cd87a7b29f3b8776373d56a8456",
"createdAt": "2020-07-22T06:41:50",
"updatedAt": "2020-07-22T06:41:50",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 11:01:25 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)
mocked_response.add(
responses.GET,
url="https://api.icedatavault.icedataservices.com/v2/list/2020/07/21/S207/WATCHLIST",
json=[
{
"name": "WATCHLIST_username_207_20200721.txt.bz2",
"fid": "20200721-S207_WATCHLIST_username_0_0",
"parent": "/v2/list/2020/07/21/S207/WATCHLIST",
"url": "/v2/data/2020/07/21/S207/WATCHLIST/20200721-S207_WATCHLIST_username_0_0",
"size": 72293374,
"md5sum": "36e444a8362e7db52af50ee0f8dc0d2e",
"createdAt": "2020-07-22T06:36:31",
"updatedAt": "2020-07-22T06:36:31",
"writable": False,
"directory": False,
}
],
status=200,
content_type="application/json;charset=UTF-8",
headers={
"Date": "Wed, 05 Aug 2020 11:02:08 GMT",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
"Access-Control-Max-Age": "3600",
"Access-Control-Allow-Headers": "x-request-with, authorization, content-type",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers":
"Cache-Control, Content-Language, Content-Length, Content-Type, "
"Expires, Last-Modified, Pragma",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Cache-Control": "no-cache, no-store, max-age=0, must-revalidate",
"Pragma": "no-cache",
"Expires": "0",
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Frame-Options": "DENY",
},
)

@pytest.fixture
def mocked_files_available_to_download_multiple_sources_single_day():
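    # Note: the mocked API above lists WATCHLIST files as
    # "WATCHLIST_username_<source>_<date>.txt.bz2", while the expected results
    # below use the normalised "WATCHLIST_<source>_<date>.txt.bz2" form; the
    # discovery step is therefore expected to strip the username component.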
set_of_files_available_to_download = [
DiscoveredFileInfo(
file_name="COREREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CORE/"
"20200721-S207_CORE_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=4590454,
md5sum="c1a079841f84676e91b5021afd3f5272",
),
DiscoveredFileInfo(
file_name="COREREF_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/CORE/"
"20200721-S367_CORE_ALL_0_0"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=706586,
md5sum="e28385e918aa71720235232c9a895b64",
),
DiscoveredFileInfo(
file_name="CROSSREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CROSS/"
"20200721-S207_CROSS_ALL_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=14690557,
md5sum="f2683cd87a7b29f3b8776373d56a8456",
),
DiscoveredFileInfo(
file_name="CROSSREF_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/CROSS/"
"20200721-S367_CROSS_ALL_0_0"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=879897,
md5sum="fdb7592c8806a28f59c4d4da1e934c43",
),
DiscoveredFileInfo(
file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=72293374,
md5sum="36e444a8362e7db52af50ee0f8dc0d2e",
),
DiscoveredFileInfo(
file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=82451354,
md5sum="62df718ef5eb5f9f1ea3f6ea1f826c30",
),
]
return set_of_files_available_to_download

@pytest.fixture
def mocked_download_details_multiple_sources_single_day():
download_details = [
DownloadDetails(
file_name="COREREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/"
"S207/CORE/20200721-S207_CORE_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/CORE", "COREREF_207_20200721.txt.bz2"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=4590454,
md5sum="c1a079841f84676e91b5021afd3f5272",
is_partitioned=False,
),
DownloadDetails(
file_name="COREREF_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/"
"S367/CORE/20200721-S367_CORE_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/CORE", "COREREF_367_20200721.txt.bz2"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=706586,
md5sum="e28385e918aa71720235232c9a895b64",
is_partitioned=False,
),
DownloadDetails(
file_name="CROSSREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CROSS/"
"20200721-S207_CROSS_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/CROSS", "CROSSREF_207_20200721.txt.bz2"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=14690557,
md5sum="f2683cd87a7b29f3b8776373d56a8456",
is_partitioned=True,
),
DownloadDetails(
file_name="CROSSREF_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/CROSS/"
"20200721-S367_CROSS_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/CROSS", "CROSSREF_367_20200721.txt.bz2"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=879897,
md5sum="fdb7592c8806a28f59c4d4da1e934c43",
is_partitioned=False,
),
DownloadDetails(
file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721.txt.bz2"
),
source_id=207,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=72293374,
md5sum="36e444a8362e7db52af50ee0f8dc0d2e",
is_partitioned=True,
),
DownloadDetails(
file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721.txt.bz2"
),
source_id=367,
reference_date=datetime.datetime(year=2020, month=7, day=21),
size=82451354,
md5sum="62df718ef5eb5f9f1ea3f6ea1f826c30",
is_partitioned=True,
),
]
return download_details

@pytest.fixture
def mocked_partitions_download_details_multiple_sources_single_day():
return [
PartitionDownloadDetails(
parent_file_name="CROSSREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CROSS/"
"20200721-S207_CROSS_ALL_0_0?start=0&end=5242880"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/CROSS", "CROSSREF_207_20200721_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name="CROSSREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CROSS/"
"20200721-S207_CROSS_ALL_0_0?start=5242881&end=10485760"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/CROSS", "CROSSREF_207_20200721_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name="CROSSREF_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/CROSS/"
"20200721-S207_CROSS_ALL_0_0?start=10485761&end=14690557"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/CROSS", "CROSSREF_207_20200721_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=0&end=5242880"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=5242881&end=10485760"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=10485761&end=15728640"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=15728641&end=20971520"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=20971521&end=26214400"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=26214401&end=31457280"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_6.txt"
),
partition_index=6,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=31457281&end=36700160"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_7.txt"
),
partition_index=7,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=36700161&end=41943040"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_8.txt"
),
partition_index=8,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=41943041&end=47185920"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_9.txt"
),
partition_index=9,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=47185921&end=52428800"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_10.txt"
),
partition_index=10,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=52428801&end=57671680"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_11.txt"
),
partition_index=11,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=57671681&end=62914560"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_12.txt"
),
partition_index=12,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=62914561&end=68157440"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_13.txt"
),
partition_index=13,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_207_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S207/WATCHLIST/"
"20200721-S207_WATCHLIST_username_0_0?start=68157441&end=72293374"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S207/WATCHLIST", "WATCHLIST_207_20200721_14.txt"
),
partition_index=14,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=0&end=5242880"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=5242881&end=10485760"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=10485761&end=15728640"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=15728641&end=20971520"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=20971521&end=26214400"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=26214401&end=31457280"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_6.txt"
),
partition_index=6,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=31457281&end=36700160"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_7.txt"
),
partition_index=7,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=36700161&end=41943040"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_8.txt"
),
partition_index=8,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=41943041&end=47185920"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_9.txt"
),
partition_index=9,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=47185921&end=52428800"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_10.txt"
),
partition_index=10,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=52428801&end=57671680"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_11.txt"
),
partition_index=11,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=57671681&end=62914560"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_12.txt"
),
partition_index=12,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=62914561&end=68157440"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_13.txt"
),
partition_index=13,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=68157441&end=73400320"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_14.txt"
),
partition_index=14,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=73400321&end=78643200"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_15.txt"
),
partition_index=15,
),
PartitionDownloadDetails(
parent_file_name="WATCHLIST_367_20200721.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/07/21/S367/WATCHLIST/"
"20200721-S367_WATCHLIST_username_0_0?start=78643201&end=82451354"
),
file_path=Path(__file__).resolve().parent.joinpath(
"Data/2020/07/21/S367/WATCHLIST", "WATCHLIST_367_20200721_16.txt"
),
partition_index=16,
),
]
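
# Illustrative helper (assumed, not part of the code under test): the ranged
# download URLs above follow a 5 MiB (5,242,880-byte) chunking scheme, and
# this sketch reproduces the (start, end) byte offsets used in the fixtures.
def _expected_partition_ranges(size: int, chunk: int = 5 * 1024 * 1024):
    """Yield 1-indexed (start, end) byte offsets for a file of ``size`` bytes."""
    import math  # local import keeps the illustrative helper self-contained

    number_of_partitions = math.ceil(size / chunk)
    for index in range(1, number_of_partitions + 1):
        start = 0 if index == 1 else (index - 1) * chunk + 1
        end = min(index * chunk, size)
        yield start, end

# For example, list(_expected_partition_ranges(14690557)) yields (0, 5242880),
# (5242881, 10485760), (10485761, 14690557), matching the partitions of
# CROSSREF_207_20200721.txt.bz2 above.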
"""Others."""
@pytest.fixture(scope="session")
def simulated_downloaded_partitions(tmp_path_factory):
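    # Only partitions 1-15 are created here, although the fixtures above list
    # 16 partitions for WATCHLIST_367_20200721.txt.bz2; the missing final
    # partition presumably lets tests exercise an incomplete download.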
path_to_tmp_dir = tmp_path_factory.mktemp("Data")
partition_file_names = [
"WATCHLIST_367_20200721_1.txt",
"WATCHLIST_367_20200721_2.txt",
"WATCHLIST_367_20200721_3.txt",
"WATCHLIST_367_20200721_4.txt",
"WATCHLIST_367_20200721_5.txt",
"WATCHLIST_367_20200721_6.txt",
"WATCHLIST_367_20200721_7.txt",
"WATCHLIST_367_20200721_8.txt",
"WATCHLIST_367_20200721_9.txt",
"WATCHLIST_367_20200721_10.txt",
"WATCHLIST_367_20200721_11.txt",
"WATCHLIST_367_20200721_12.txt",
"WATCHLIST_367_20200721_13.txt",
"WATCHLIST_367_20200721_14.txt",
"WATCHLIST_367_20200721_15.txt",
]
for name in partition_file_names:
f_path = path_to_tmp_dir / name
f_path.touch()
return path_to_tmp_dir

@pytest.fixture()
def mocked_concurrent_download_manifest():
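    # The manifest mirrors the expected post-discovery state: every file is
    # listed in files_reference_data, the small non-partitioned CROSSREF file
    # goes to whole_files_to_download, and the two partitioned files are
    # expanded into 5 MiB chunks under partitions_to_download.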
download_manifest = ConcurrentDownloadManifest(
files_reference_data=[
DownloadDetails(
file_name="COREREF_945_20201218.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/"
"20201218-S945_CORE_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1",
"2020/12/18/CORE/COREREF_945_20201218.txt.bz2",
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=12, day=18),
size=24326963,
md5sum="8fc8fa1402e23f2d552899525b808514",
is_partitioned=True,
),
DownloadDetails(
file_name="CROSSREF_945_20201218.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CROSS/"
"20201218-S945_CROSS_ALL_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1",
"2020/12/18/CROSS/CROSSREF_945_20201218.txt.bz2",
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=12, day=18),
size=35150,
md5sum="13da7cea9a7337cd71fd9aea4f909bc6",
is_partitioned=False,
),
DownloadDetails(
file_name="WATCHLIST_945_20201218.txt.bz2",
download_url=(
"https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/WATCHLIST"
"/20201218-S945_WATCHLIST_username_0_0"
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1",
"2020/12/18/WATCHLIST/WATCHLIST_945_20201218.txt.bz2",
),
source_id=945,
reference_date=datetime.datetime(year=2020, month=12, day=18),
size=51648457,
md5sum="11c5253a7cd1743aea93ec5124fd974d",
is_partitioned=True,
),
],
whole_files_to_download=[
DownloadDetails(
file_name='CROSSREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CROSS/'
'20201218-S945_CROSS_ALL_0_0'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CROSS/"
"CROSSREF_945_20201218.txt.bz2"
),
source_id=945,
reference_date=datetime.datetime(2020, 12, 18, 0, 0),
size=35150,
md5sum='13da7cea9a7337cd71fd9aea4f909bc6',
is_partitioned=False
),
],
partitions_to_download=[
PartitionDownloadDetails(
parent_file_name='COREREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/'
'20201218-S945_CORE_ALL_0_0?start=0&end=5242880'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CORE/"
"COREREF_945_20201218_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name='COREREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/'
'20201218-S945_CORE_ALL_0_0?start=5242881&end=10485760'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CORE/"
"COREREF_945_20201218_2.txt"
),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name='COREREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/'
'20201218-S945_CORE_ALL_0_0?start=10485761&end=15728640'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CORE/"
"COREREF_945_20201218_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name='COREREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/'
'20201218-S945_CORE_ALL_0_0?start=15728641&end=20971520'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CORE/"
"COREREF_945_20201218_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name='COREREF_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/CORE/'
'20201218-S945_CORE_ALL_0_0?start=20971521&end=24326963'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/CORE/"
"COREREF_945_20201218_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=0&end=5242880'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_1.txt"
),
partition_index=1,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=5242881&end=10485760'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_2.txt"),
partition_index=2,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=10485761&end=15728640'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_3.txt"
),
partition_index=3,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=15728641&end=20971520'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_4.txt"
),
partition_index=4,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=20971521&end=26214400'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_5.txt"
),
partition_index=5,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=26214401&end=31457280'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_6.txt"
),
partition_index=6,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=31457281&end=36700160'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_7.txt"
),
partition_index=7,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=36700161&end=41943040'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_8.txt"
),
partition_index=8,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=41943041&end=47185920'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_9.txt"
),
partition_index=9,
),
PartitionDownloadDetails(
parent_file_name='WATCHLIST_945_20201218.txt.bz2',
download_url=(
'https://api.icedatavault.icedataservices.com/v2/data/2020/12/18/S945/'
'WATCHLIST/20201218-S945_WATCHLIST_username_0_0?start=47185921&end=51648457'
),
file_path=Path(__file__).resolve().parent.joinpath(
"static_data/post_processing_scenario_1/2020/12/18/WATCHLIST/"
"WATCHLIST_945_20201218_10.txt"
),
partition_index=10,
),
]
)
return download_manifest
| 43.372093 | 100 | 0.55112 | 15,279 | 151,065 | 5.268015 | 0.025002 | 0.029445 | 0.026339 | 0.048006 | 0.967251 | 0.956193 | 0.94526 | 0.927221 | 0.922338 | 0.917779 | 0 | 0.152981 | 0.31268 | 151,065 | 3,482 | 101 | 43.384549 | 0.622229 | 0.000496 | 0 | 0.770492 | 0 | 0.042155 | 0.463723 | 0.221191 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007026 | false | 0 | 0.001464 | 0.000585 | 0.012881 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9661f02ed3b999e2df1c60fc945194a9d3f93295 | 50,726 | py | Python | spark_fhir_schemas/r4/resources/procedure.py | imranq2/SparkFhirSchemas | 24debae6980fb520fe55aa199bdfd43c0092eb9c | ["Apache-2.0"] | 2 | 2020-10-31T23:25:01.000Z | 2021-06-09T14:12:42.000Z | spark_fhir_schemas/r4/resources/procedure.py | imranq2/SparkFhirSchemas | 24debae6980fb520fe55aa199bdfd43c0092eb9c | ["Apache-2.0"] | null | null | null | spark_fhir_schemas/r4/resources/procedure.py | imranq2/SparkFhirSchemas | 24debae6980fb520fe55aa199bdfd43c0092eb9c | ["Apache-2.0"] | null | null | null |
from typing import Union, List, Optional
from pyspark.sql.types import (
StructType,
StructField,
StringType,
ArrayType,
DataType,
TimestampType,
)

# This file is auto-generated by generate_schema so do not edit it manually


# noinspection PyPep8Naming
class ProcedureSchema:
"""
An action that is or was performed on or for a patient. This can be a physical
intervention like an operation, or less invasive like long term services,
counseling, or hypnotherapy.
"""
# noinspection PyDefaultArgument
@staticmethod
def get_schema(
max_nesting_depth: Optional[int] = 6,
nesting_depth: int = 0,
nesting_list: List[str] = [],
max_recursion_limit: Optional[int] = 2,
include_extension: Optional[bool] = False,
extension_fields: Optional[List[str]] = [
"valueBoolean",
"valueCode",
"valueDate",
"valueDateTime",
"valueDecimal",
"valueId",
"valueInteger",
"valuePositiveInt",
"valueString",
"valueTime",
"valueUnsignedInt",
"valueUri",
"valueUrl",
],
extension_depth: int = 0,
max_extension_depth: Optional[int] = 2,
include_modifierExtension: Optional[bool] = False,
) -> Union[StructType, DataType]:
"""
An action that is or was performed on or for a patient. This can be a physical
intervention like an operation, or less invasive like long term services,
counseling, or hypnotherapy.
resourceType: This is a Procedure resource
id: The logical id of the resource, as used in the URL for the resource. Once
assigned, this value never changes.
meta: The metadata about the resource. This is content that is maintained by the
infrastructure. Changes to the content might not always be associated with
version changes to the resource.
implicitRules: A reference to a set of rules that were followed when the resource was
constructed, and which must be understood when processing the content. Often,
this is a reference to an implementation guide that defines the special rules
along with other profiles etc.
language: The base language in which the resource is written.
text: A human-readable narrative that contains a summary of the resource and can be
used to represent the content of the resource to a human. The narrative need
not encode all the structured data, but is required to contain sufficient
detail to make it "clinically safe" for a human to just read the narrative.
Resource definitions may define what content should be represented in the
narrative to ensure clinical safety.
contained: These resources do not have an independent existence apart from the resource
that contains them - they cannot be identified independently, and nor can they
have their own independent transaction scope.
extension: May be used to represent additional information that is not part of the basic
definition of the resource. To make the use of extensions safe and manageable,
there is a strict set of governance applied to the definition and use of
extensions. Though any implementer can define an extension, there is a set of
requirements that SHALL be met as part of the definition of the extension.
modifierExtension: May be used to represent additional information that is not part of the basic
definition of the resource and that modifies the understanding of the element
that contains it and/or the understanding of the containing element's
descendants. Usually modifier elements provide negation or qualification. To
make the use of extensions safe and manageable, there is a strict set of
governance applied to the definition and use of extensions. Though any
implementer is allowed to define an extension, there is a set of requirements
that SHALL be met as part of the definition of the extension. Applications
processing a resource are required to check for modifier extensions.
Modifier extensions SHALL NOT change the meaning of any elements on Resource
or DomainResource (including cannot change the meaning of modifierExtension
itself).
identifier: Business identifiers assigned to this procedure by the performer or other
systems which remain constant as the resource is updated and is propagated
from server to server.
instantiatesCanonical: The URL pointing to a FHIR-defined protocol, guideline, order set or other
definition that is adhered to in whole or in part by this Procedure.
instantiatesUri: The URL pointing to an externally maintained protocol, guideline, order set or
other definition that is adhered to in whole or in part by this Procedure.
basedOn: A reference to a resource that contains details of the request for this
procedure.
partOf: A larger event of which this particular procedure is a component or step.
status: A code specifying the state of the procedure. Generally, this will be the in-
progress or completed state.
statusReason: Captures the reason for the current state of the procedure.
category: A code that classifies the procedure for searching, sorting and display
purposes (e.g. "Surgical Procedure").
code: The specific procedure that is performed. Use text if the exact nature of the
procedure cannot be coded (e.g. "Laparoscopic Appendectomy").
subject: The person, animal or group on which the procedure was performed.
encounter: The Encounter during which this Procedure was created or performed or to which
the creation of this record is tightly associated.
performedDateTime: Estimated or actual date, date-time, period, or age when the procedure was
performed. Allows a period to support complex procedures that span more than
one date, and also allows for the length of the procedure to be captured.
performedPeriod: Estimated or actual date, date-time, period, or age when the procedure was
performed. Allows a period to support complex procedures that span more than
one date, and also allows for the length of the procedure to be captured.
performedString: Estimated or actual date, date-time, period, or age when the procedure was
performed. Allows a period to support complex procedures that span more than
one date, and also allows for the length of the procedure to be captured.
performedAge: Estimated or actual date, date-time, period, or age when the procedure was
performed. Allows a period to support complex procedures that span more than
one date, and also allows for the length of the procedure to be captured.
performedRange: Estimated or actual date, date-time, period, or age when the procedure was
performed. Allows a period to support complex procedures that span more than
one date, and also allows for the length of the procedure to be captured.
recorder: Individual who recorded the record and takes responsibility for its content.
asserter: Individual who is making the procedure statement.
performer: Limited to "real" people rather than equipment.
location: The location where the procedure actually happened. E.g. a newborn at home, a
tracheostomy at a restaurant.
reasonCode: The coded reason why the procedure was performed. This may be a coded entity
of some type, or may simply be present as text.
reasonReference: The justification of why the procedure was performed.
bodySite: Detailed and structured anatomical location information. Multiple locations
are allowed - e.g. multiple punch biopsies of a lesion.
outcome: The outcome of the procedure - did it resolve the reasons for the procedure
being performed?
report: This could be a histology result, pathology report, surgical report, etc.
complication: Any complications that occurred during the procedure, or in the immediate
post-performance period. These are generally tracked separately from the
notes, which will typically describe the procedure itself rather than any
'post procedure' issues.
complicationDetail: Any complications that occurred during the procedure, or in the immediate
post-performance period.
followUp: If the procedure required specific follow up - e.g. removal of sutures. The
follow up may be represented as a simple note or could potentially be more
complex, in which case the CarePlan resource can be used.
note: Any other notes and comments about the procedure.
focalDevice: A device that is implanted, removed or otherwise manipulated (calibration,
battery replacement, fitting a prosthesis, attaching a wound-vac, etc.) as a
focal portion of the Procedure.
usedReference: Identifies medications, devices and any other substance used as part of the
procedure.
usedCode: Identifies coded items that were used as part of the procedure.
"""
from spark_fhir_schemas.r4.simple_types.id import idSchema
from spark_fhir_schemas.r4.complex_types.meta import MetaSchema
from spark_fhir_schemas.r4.simple_types.uri import uriSchema
from spark_fhir_schemas.r4.simple_types.code import codeSchema
from spark_fhir_schemas.r4.complex_types.narrative import NarrativeSchema
from spark_fhir_schemas.r4.complex_types.resourcelist import ResourceListSchema
from spark_fhir_schemas.r4.complex_types.extension import ExtensionSchema
from spark_fhir_schemas.r4.complex_types.identifier import IdentifierSchema
from spark_fhir_schemas.r4.simple_types.canonical import canonicalSchema
from spark_fhir_schemas.r4.complex_types.reference import ReferenceSchema
from spark_fhir_schemas.r4.complex_types.codeableconcept import (
CodeableConceptSchema,
)
from spark_fhir_schemas.r4.complex_types.period import PeriodSchema
from spark_fhir_schemas.r4.complex_types.age import AgeSchema
from spark_fhir_schemas.r4.complex_types.range import RangeSchema
from spark_fhir_schemas.r4.complex_types.procedure_performer import (
Procedure_PerformerSchema,
)
from spark_fhir_schemas.r4.complex_types.annotation import AnnotationSchema
from spark_fhir_schemas.r4.complex_types.procedure_focaldevice import (
Procedure_FocalDeviceSchema,
)
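        # Guard against unbounded recursion: FHIR complex types reference one
        # another (e.g. Reference -> Identifier -> Reference), so once this
        # type has been visited max_recursion_limit times, or the nesting
        # budget is exhausted, the schema is truncated to a bare {id} struct.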
if (
max_recursion_limit
and nesting_list.count("Procedure") >= max_recursion_limit
) or (max_nesting_depth and nesting_depth >= max_nesting_depth):
return StructType([StructField("id", StringType(), True)])
# add my name to recursion list for later
my_nesting_list: List[str] = nesting_list + ["Procedure"]
schema = StructType(
[
# This is a Procedure resource
StructField("resourceType", StringType(), True),
# The logical id of the resource, as used in the URL for the resource. Once
# assigned, this value never changes.
StructField(
"id",
idSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The metadata about the resource. This is content that is maintained by the
# infrastructure. Changes to the content might not always be associated with
# version changes to the resource.
StructField(
"meta",
MetaSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# A reference to a set of rules that were followed when the resource was
# constructed, and which must be understood when processing the content. Often,
# this is a reference to an implementation guide that defines the special rules
# along with other profiles etc.
StructField(
"implicitRules",
uriSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The base language in which the resource is written.
StructField(
"language",
codeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# A human-readable narrative that contains a summary of the resource and can be
# used to represent the content of the resource to a human. The narrative need
# not encode all the structured data, but is required to contain sufficient
# detail to make it "clinically safe" for a human to just read the narrative.
# Resource definitions may define what content should be represented in the
# narrative to ensure clinical safety.
StructField(
"text",
NarrativeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
            # These resources do not have an independent existence apart from the resource
            # that contains them - they cannot be identified independently, nor can they
            # have their own independent transaction scope.
StructField(
"contained",
ArrayType(
ResourceListSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# May be used to represent additional information that is not part of the basic
# definition of the resource. To make the use of extensions safe and manageable,
# there is a strict set of governance applied to the definition and use of
# extensions. Though any implementer can define an extension, there is a set of
# requirements that SHALL be met as part of the definition of the extension.
StructField(
"extension",
ArrayType(
ExtensionSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# May be used to represent additional information that is not part of the basic
# definition of the resource and that modifies the understanding of the element
# that contains it and/or the understanding of the containing element's
# descendants. Usually modifier elements provide negation or qualification. To
# make the use of extensions safe and manageable, there is a strict set of
# governance applied to the definition and use of extensions. Though any
# implementer is allowed to define an extension, there is a set of requirements
# that SHALL be met as part of the definition of the extension. Applications
# processing a resource are required to check for modifier extensions.
#
# Modifier extensions SHALL NOT change the meaning of any elements on Resource
# or DomainResource (including cannot change the meaning of modifierExtension
# itself).
StructField(
"modifierExtension",
ArrayType(
ExtensionSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Business identifiers assigned to this procedure by the performer or other
# systems which remain constant as the resource is updated and is propagated
# from server to server.
StructField(
"identifier",
ArrayType(
IdentifierSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The URL pointing to a FHIR-defined protocol, guideline, order set or other
# definition that is adhered to in whole or in part by this Procedure.
StructField(
"instantiatesCanonical",
ArrayType(
canonicalSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The URL pointing to an externally maintained protocol, guideline, order set or
# other definition that is adhered to in whole or in part by this Procedure.
StructField(
"instantiatesUri",
ArrayType(
uriSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# A reference to a resource that contains details of the request for this
# procedure.
StructField(
"basedOn",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# A larger event of which this particular procedure is a component or step.
StructField(
"partOf",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# A code specifying the state of the procedure. Generally, this will be the in-
# progress or completed state.
StructField(
"status",
codeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Captures the reason for the current state of the procedure.
StructField(
"statusReason",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# A code that classifies the procedure for searching, sorting and display
# purposes (e.g. "Surgical Procedure").
StructField(
"category",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The specific procedure that is performed. Use text if the exact nature of the
# procedure cannot be coded (e.g. "Laparoscopic Appendectomy").
StructField(
"code",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The person, animal or group on which the procedure was performed.
StructField(
"subject",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The Encounter during which this Procedure was created or performed or to which
# the creation of this record is tightly associated.
StructField(
"encounter",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Estimated or actual date, date-time, period, or age when the procedure was
# performed. Allows a period to support complex procedures that span more than
# one date, and also allows for the length of the procedure to be captured.
StructField("performedDateTime", TimestampType(), True),
# Estimated or actual date, date-time, period, or age when the procedure was
# performed. Allows a period to support complex procedures that span more than
# one date, and also allows for the length of the procedure to be captured.
StructField(
"performedPeriod",
PeriodSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Estimated or actual date, date-time, period, or age when the procedure was
# performed. Allows a period to support complex procedures that span more than
# one date, and also allows for the length of the procedure to be captured.
StructField("performedString", StringType(), True),
# Estimated or actual date, date-time, period, or age when the procedure was
# performed. Allows a period to support complex procedures that span more than
# one date, and also allows for the length of the procedure to be captured.
StructField(
"performedAge",
AgeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Estimated or actual date, date-time, period, or age when the procedure was
# performed. Allows a period to support complex procedures that span more than
# one date, and also allows for the length of the procedure to be captured.
StructField(
"performedRange",
RangeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
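            # Note: performedDateTime/performedPeriod/performedString/performedAge/
            # performedRange are the flattened variants of the FHIR performed[x]
            # choice type; at most one of them is populated for a given row.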
# Individual who recorded the record and takes responsibility for its content.
StructField(
"recorder",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Individual who is making the procedure statement.
StructField(
"asserter",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Limited to "real" people rather than equipment.
StructField(
"performer",
ArrayType(
Procedure_PerformerSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The location where the procedure actually happened. E.g. a newborn at home, a
# tracheostomy at a restaurant.
StructField(
"location",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# The coded reason why the procedure was performed. This may be a coded entity
# of some type, or may simply be present as text.
StructField(
"reasonCode",
ArrayType(
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The justification of why the procedure was performed.
StructField(
"reasonReference",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Detailed and structured anatomical location information. Multiple locations
# are allowed - e.g. multiple punch biopsies of a lesion.
StructField(
"bodySite",
ArrayType(
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The outcome of the procedure - did it resolve the reasons for the procedure
# being performed?
StructField(
"outcome",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# This could be a histology result, pathology report, surgical report, etc.
StructField(
"report",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Any complications that occurred during the procedure, or in the immediate
# post-performance period. These are generally tracked separately from the
# notes, which will typically describe the procedure itself rather than any
# 'post procedure' issues.
StructField(
"complication",
ArrayType(
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Any complications that occurred during the procedure, or in the immediate
# post-performance period.
StructField(
"complicationDetail",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# If the procedure required specific follow up - e.g. removal of sutures. The
# follow up may be represented as a simple note or could potentially be more
# complex, in which case the CarePlan resource can be used.
StructField(
"followUp",
ArrayType(
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Any other notes and comments about the procedure.
StructField(
"note",
ArrayType(
AnnotationSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# A device that is implanted, removed or otherwise manipulated (calibration,
# battery replacement, fitting a prosthesis, attaching a wound-vac, etc.) as a
# focal portion of the Procedure.
StructField(
"focalDevice",
ArrayType(
Procedure_FocalDeviceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Identifies medications, devices and any other substance used as part of the
# procedure.
StructField(
"usedReference",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# Identifies coded items that were used as part of the procedure.
StructField(
"usedCode",
ArrayType(
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
]
)
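        # Note: when extensions are excluded, the two blocks below replace the fully
        # structured "extension"/"modifierExtension" array fields with plain
        # StringType columns, keeping the column names stable while avoiding the
        # potentially very deep Extension schema.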
if not include_extension:
schema.fields = [
c
if c.name != "extension"
else StructField("extension", StringType(), True)
for c in schema.fields
]
if not include_modifierExtension:
schema.fields = [
c
if c.name != "modifierExtension"
else StructField("modifierExtension", StringType(), True)
for c in schema.fields
]
return schema
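# Usage sketch (assuming the enclosing class is ProcedureSchema, as the package
# layout of spark_fhir_schemas.r4 suggests) - a caller could build a bounded Spark
# schema roughly like this:
#
#   schema = ProcedureSchema.get_schema(
#       max_nesting_depth=3,        # hard cap on nested complex types
#       nesting_depth=0,
#       nesting_list=[],
#       max_recursion_limit=2,      # cap on Procedure nesting inside itself
#       include_extension=False,    # collapse extension arrays to strings
#       extension_fields=None,
#       extension_depth=0,
#       max_extension_depth=2,
#       include_modifierExtension=False,
#   )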
| 52.511387 | 105 | 0.545795 | 4,553 | 50,726 | 5.844937 | 0.099495 | 0.070795 | 0.044529 | 0.068541 | 0.890087 | 0.88554 | 0.884037 | 0.856268 | 0.846798 | 0.843755 | 0 | 0.002664 | 0.415428 | 50,726 | 965 | 106 | 52.565803 | 0.894783 | 0.283701 | 0 | 0.780488 | 0 | 0 | 0.01762 | 0.000594 | 0 | 0 | 0 | 0 | 0.001435 | 1 | 0.001435 | false | 0 | 0.02726 | 0 | 0.032999 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9662e4834aaa43043581c02c44e2d373e711e3d4 | 9,376 | py | Python | mpf/tests/test_Tilt.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | mpf/tests/test_Tilt.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | mpf/tests/test_Tilt.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | from mpf.tests.MpfTestCase import MpfTestCase
from unittest.mock import MagicMock
class TestTilt(MpfTestCase):
def getConfigFile(self):
return 'config.yaml'
def getMachinePath(self):
return 'tests/machine_files/tilt/'
def get_platform(self):
return 'smart_virtual'
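    # Note: these overrides point MpfTestCase at the tilt test machine config and
    # run it on the smart_virtual platform, which simulates ball devices so no
    # physical hardware is required.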
def _tilted(self, **kwargs):
del kwargs
self._is_tilted = True
def test_simple_tilt(self):
self._is_tilted = False
self.machine.events.add_handler("tilt", self._tilted)
self.machine.ball_controller.num_balls_known = 0
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.machine.switch_controller.process_switch('s_ball_switch2', 1)
self.advance_time_and_run(2)
self.assertEqual(None, self.machine.game)
self.assertEqual(2, self.machine.ball_controller.num_balls_known)
self.assertEqual(2, self.machine.ball_devices.bd_trough.balls)
self.machine.switch_controller.process_switch('s_start', 1)
self.machine.switch_controller.process_switch('s_start', 0)
self.advance_time_and_run(10)
        # flipper activated
self.assertTrue(self.machine.flippers.f_test._enabled)
self.assertTrue(self.machine.mode_controller.is_active('tilt'))
self.assertNotEqual(None, self.machine.game)
# scoring should work
self.post_event("test_scoring")
self.assertPlayerVarEqual(100, "score")
self.assertFalse(self._is_tilted)
self.machine.switch_controller.process_switch('s_tilt', 1)
self.machine.switch_controller.process_switch('s_tilt', 0)
self.advance_time_and_run(1)
self.assertTrue(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.assertEqual(True, self.machine.game.tilted)
        # flipper deactivated
self.assertFalse(self.machine.flippers.f_test._enabled)
# scoring should no longer work
self.assertPlayerVarEqual(100, "score")
self.post_event("test_scoring")
self.assertPlayerVarEqual(100, "score")
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.advance_time_and_run(1)
self.assertEqual(False, self.machine.game.tilted)
def test_tilt_event(self):
self._is_tilted = False
self.machine.events.add_handler("tilt", self._tilted)
self.machine.ball_controller.num_balls_known = 0
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.machine.switch_controller.process_switch('s_ball_switch2', 1)
self.advance_time_and_run(2)
self.assertEqual(None, self.machine.game)
self.assertEqual(2, self.machine.ball_controller.num_balls_known)
self.assertEqual(2, self.machine.ball_devices.bd_trough.balls)
self.machine.switch_controller.process_switch('s_start', 1)
self.machine.switch_controller.process_switch('s_start', 0)
self.advance_time_and_run(10)
self.assertTrue(self.machine.mode_controller.is_active('tilt'))
self.assertNotEqual(None, self.machine.game)
self.assertFalse(self._is_tilted)
self.machine.events.post("tilt_event")
self.advance_time_and_run(1)
self.machine.events.post("tilt_event")
self.advance_time_and_run(1)
self.machine.events.post("tilt_event")
self.advance_time_and_run(1)
self.assertTrue(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.assertEqual(True, self.machine.game.tilted)
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.advance_time_and_run(1)
self.assertEqual(False, self.machine.game.tilted)
def test_simple_tilt_ball_not_on_pf_yet(self):
self._is_tilted = False
self.machine.events.add_handler("tilt", self._tilted)
self.machine.ball_controller.num_balls_known = 0
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.machine.switch_controller.process_switch('s_ball_switch2', 1)
self.advance_time_and_run(2)
self.assertEqual(None, self.machine.game)
self.assertEqual(2, self.machine.ball_controller.num_balls_known)
self.assertEqual(2, self.machine.ball_devices.bd_trough.balls)
self.machine.switch_controller.process_switch('s_start', 1)
self.machine.switch_controller.process_switch('s_start', 0)
self.advance_time_and_run(1)
self.assertTrue(self.machine.mode_controller.is_active('tilt'))
self.assertNotEqual(None, self.machine.game)
self.assertFalse(self._is_tilted)
self.machine.switch_controller.process_switch('s_tilt', 1)
self.machine.switch_controller.process_switch('s_tilt', 0)
self.advance_time_and_run(.1)
self.assertTrue(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.assertEqual(True, self.machine.game.tilted)
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.advance_time_and_run(1)
self.assertEqual(False, self.machine.game.tilted)
def test_tilt_warning(self):
self._is_tilted = False
self.machine.events.add_handler("tilt", self._tilted)
self.machine.ball_controller.num_balls_known = 0
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.machine.switch_controller.process_switch('s_ball_switch2', 1)
self.advance_time_and_run(2)
self.assertEqual(None, self.machine.game)
self.assertEqual(2, self.machine.ball_controller.num_balls_known)
self.assertEqual(2, self.machine.ball_devices.bd_trough.balls)
self.machine.switch_controller.process_switch('s_start', 1)
self.machine.switch_controller.process_switch('s_start', 0)
self.advance_time_and_run(10)
self.assertTrue(self.machine.mode_controller.is_active('tilt'))
self.assertNotEqual(None, self.machine.game)
self.assertFalse(self._is_tilted)
# multiple hits in 300ms window
self.machine.switch_controller.process_switch('s_tilt_warning', 1)
self.machine.switch_controller.process_switch('s_tilt_warning', 0)
self.advance_time_and_run(.1)
self.machine.switch_controller.process_switch('s_tilt_warning', 1)
self.machine.switch_controller.process_switch('s_tilt_warning', 0)
self.advance_time_and_run(.1)
self.machine.switch_controller.process_switch('s_tilt_warning', 1)
self.machine.switch_controller.process_switch('s_tilt_warning', 0)
self.advance_time_and_run(1)
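        # the three rapid hits above land inside the 300ms window, so they are
        # presumably collapsed into a single warning (hits within the multiple-hit
        # window count once) - not enough to tilt yet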
self.assertFalse(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.machine.switch_controller.process_switch('s_tilt_warning', 1)
self.machine.switch_controller.process_switch('s_tilt_warning', 0)
self.advance_time_and_run(1)
self.assertFalse(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.machine.switch_controller.process_switch('s_tilt_warning', 1)
self.machine.switch_controller.process_switch('s_tilt_warning', 0)
self.advance_time_and_run(1)
self.assertTrue(self._is_tilted)
self.assertNotEqual(None, self.machine.game)
self.assertEqual(True, self.machine.game.tilted)
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.advance_time_and_run(1)
self.assertNotEqual(None, self.machine.game)
        # wait for the 5s settle time since the last s_tilt_warning hit; about 2s
        # have already elapsed above, so 3.5s more takes us past the threshold
self.advance_time_and_run(3.5)
self.assertEqual(False, self.machine.game.tilted)
def test_slam_tilt(self):
self._is_tilted = False
self.machine.events.add_handler("tilt", self._tilted)
self.machine.ball_controller.num_balls_known = 0
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.machine.switch_controller.process_switch('s_ball_switch2', 1)
self.advance_time_and_run(2)
self.assertEqual(None, self.machine.game)
self.assertEqual(2, self.machine.ball_controller.num_balls_known)
self.assertEqual(2, self.machine.ball_devices.bd_trough.balls)
self.machine.switch_controller.process_switch('s_start', 1)
self.machine.switch_controller.process_switch('s_start', 0)
self.advance_time_and_run(10)
        # flipper activated
self.assertTrue(self.machine.flippers.f_test._enabled)
self.assertTrue(self.machine.mode_controller.is_active('tilt'))
self.assertNotEqual(None, self.machine.game)
self.assertFalse(self._is_tilted)
self.machine.switch_controller.process_switch('s_slam_tilt', 1)
self.machine.switch_controller.process_switch('s_slam_tilt', 0)
self.advance_time_and_run(1)
self.assertNotEqual(None, self.machine.game)
        # flipper deactivated
self.assertFalse(self.machine.flippers.f_test._enabled)
self.machine.switch_controller.process_switch('s_ball_switch1', 1)
self.advance_time_and_run(1)
self.assertEqual(None, self.machine.game)
# test that it does not crash outside the game
self.post_event("tilt_reset_warnings")
self.advance_time_and_run()
| 40.943231 | 74 | 0.710431 | 1,228 | 9,376 | 5.125407 | 0.088762 | 0.17477 | 0.11074 | 0.175882 | 0.922148 | 0.910391 | 0.907372 | 0.907372 | 0.907372 | 0.890372 | 0 | 0.015199 | 0.186007 | 9,376 | 228 | 75 | 41.122807 | 0.809486 | 0.026451 | 0 | 0.868263 | 0 | 0 | 0.07052 | 0.002742 | 0 | 0 | 0 | 0 | 0.359281 | 1 | 0.053892 | false | 0 | 0.011976 | 0.017964 | 0.08982 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
969a9ca45f52c99540c26207b8b63f79a91571a5 | 99 | py | Python | src/api/errors.py | ericdaat/notflix | 0d8697e13f28d658d6777b7c854e4fd0b207ca11 | [
"MIT"
] | null | null | null | src/api/errors.py | ericdaat/notflix | 0d8697e13f28d658d6777b7c854e4fd0b207ca11 | [
"MIT"
] | 1 | 2022-01-20T16:48:50.000Z | 2022-01-20T16:48:50.000Z | src/api/errors.py | ericdaat/notflix | 0d8697e13f28d658d6777b7c854e4fd0b207ca11 | [
"MIT"
] | null | null | null | from flask import jsonify
def page_not_found(e):
return jsonify(error="Page not found"), 404
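# Usage sketch: this handler is presumably registered on the Flask app elsewhere
# in the package, e.g.:
#
#   app.register_error_handler(404, page_not_found)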
| 16.5 | 47 | 0.737374 | 16 | 99 | 4.4375 | 0.75 | 0.197183 | 0.338028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 0.171717 | 99 | 5 | 48 | 19.8 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0.141414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
96a161fe47e8f871c24506407344b5ce1e132b9d | 8,582 | py | Python | test/programytest/security/linking/test_accountlinker_mongo.py | cdoebler1/AIML2 | ee692ec5ea3794cd1bc4cc8ec2a6b5e5c20a0d6a | [
"MIT"
] | 345 | 2016-11-23T22:37:04.000Z | 2022-03-30T20:44:44.000Z | test/programytest/security/linking/test_accountlinker_mongo.py | sofi2305/Nik | e8bb4a6614c16c334cd0df3a16b30a9daac0070d | [
"MIT"
] | 275 | 2016-12-07T10:30:28.000Z | 2022-02-08T21:28:33.000Z | test/programytest/security/linking/test_accountlinker_mongo.py | sofi2305/Nik | e8bb4a6614c16c334cd0df3a16b30a9daac0070d | [
"MIT"
] | 159 | 2016-11-28T18:59:30.000Z | 2022-03-20T18:02:44.000Z | import unittest
from unittest.mock import patch
import programytest.storage.engines as Engines
from programy.security.linking.accountlinker import BasicAccountLinkerService
from programy.storage.stores.nosql.mongo.config import MongoStorageConfiguration
from programy.storage.stores.nosql.mongo.engine import MongoStorageEngine
from programy.storage.stores.nosql.mongo.dao.link import Link
from programytest.security.linking.accounlinker_asserts import AccountLinkerAsserts
class MongoAccountLinkerServiceTests(AccountLinkerAsserts):
def setUp(self):
config = MongoStorageConfiguration()
config.drop_all_first = True
self.storage_engine = MongoStorageEngine(config)
self.storage_engine.initialise()
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_init(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assertIsNotNone(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_generate_key(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generate_key(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_generate_expirary(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generate_expirary(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_happy_path(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_happy_path(mgr)
def patch_add_user(self, userid, clientid):
return None
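    # Note: the patch_* functions in this class are stand-ins for Mongo store
    # methods; each @patch decorator below swaps one store call for a failing stub
    # so the service's error-handling branches can be exercised.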
    @unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
    @patch('programy.storage.stores.nosql.mongo.store.users.MongoUserStore.add_user', patch_add_user)
    def test_link_user_to_client_add_user_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_link_user_to_client_add_user_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_user_client_link_already_exists(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_user_client_link_already_exists(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_provided_key_not_matched(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_provided_key_not_matched(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_generated_key_not_matched(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generated_key_not_matched(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_generated_key_expired(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generated_key_expired(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_lockout_after_max_retries(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_lockout_after_max_retries(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_unlink_user_from_client(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_client(mgr)
def patch_remove_user(self, userid, clientid):
return False
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.users.MongoUserStore.remove_user', patch_remove_user)
def test_unlink_user_from_client_remove_user_fails1(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_client_fails(mgr)
def patch_remove_link(self, userid):
return False
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.links.MongoLinkStore.remove_link', patch_remove_link)
def test_unlink_user_from_client_remove_user_fails2(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_client_fails(mgr)
def patch_unlink_accounts(self, userid):
return False
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.linkedaccounts.MongoLinkedAccountStore.unlink_accounts', patch_unlink_accounts)
def test_unlink_user_from_client_remove_user_fails3(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_client_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_unlink_user_from_all_clients(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_all_clients(mgr)
def patch_remove_user_from_all_clients(self, userid):
return False
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
    @patch('programy.storage.stores.nosql.mongo.store.users.MongoUserStore.remove_user_from_all_clients', patch_remove_user_from_all_clients)
def test_unlink_user_from_all_clients_remove_user_from_all_clients_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_all_clients_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.links.MongoLinkStore.remove_link', patch_remove_link)
def test_unlink_user_from_all_clients_remove_link_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_all_clients_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.linkedaccounts.MongoLinkedAccountStore.unlink_accounts', patch_unlink_accounts)
def test_unlink_user_from_all_clients_unlink_accounts_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_unlink_user_from_all_clients_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_generate_link(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generate_link(mgr)
def patch_create_link(self, userid, provided_key, generated_key, expires):
return None
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.links.MongoLinkStore.create_link', patch_create_link)
def test_generate_link_create_link_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_generate_link_create_link_fails(mgr)
def patch_get_link(self, userid):
return None
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.links.MongoLinkStore.get_link', patch_get_link)
def test_reset_link_get_link_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_reset_link_get_link_fails(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
def test_link_accounts(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_link_accounts_success(mgr)
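    # Note: the patch_get_link below shadows the one defined earlier in the class
    # body, but the decorator on test_reset_link_get_link_fails already bound the
    # first version when its def statement executed, so both stubs keep their
    # intended behavior.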
def patch_get_link(self, userid):
link = Link("userid1", "abcdefg", "xxxxxxxxxx", expires=None, expired=True, retry_count=0)
link.expired = True
return link
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.links.MongoLinkStore.get_link', patch_get_link)
def test_link_accounts_link_expired(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_link_accounts_failure(mgr)
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.users.MongoUserStore.add_user', patch_add_user)
def test_link_accounts_add_user_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_link_accounts_failure(mgr)
def patch_link_accounts(self, userid, linked_userid):
return None
@unittest.skipIf(Engines.mongo is False, Engines.mongo_disabled)
@patch('programy.storage.stores.nosql.mongo.store.linkedaccounts.MongoLinkedAccountStore.link_accounts', patch_link_accounts)
def test_link_accounts_link_accounts_fails(self):
mgr = BasicAccountLinkerService(self.storage_engine)
self.assert_link_accounts_failure(mgr)
| 46.896175 | 140 | 0.777791 | 1,060 | 8,582 | 5.977358 | 0.091509 | 0.090909 | 0.072443 | 0.142045 | 0.844223 | 0.803346 | 0.776673 | 0.752367 | 0.705177 | 0.681818 | 0 | 0.000679 | 0.141692 | 8,582 | 182 | 141 | 47.153846 | 0.85949 | 0 | 0 | 0.524138 | 0 | 0 | 0.114309 | 0.111512 | 0 | 0 | 0 | 0 | 0.186207 | 1 | 0.241379 | false | 0 | 0.055172 | 0.055172 | 0.365517 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
73db93e32cfc911467cd93df1e7d19304df98f9a | 38,581 | py | Python | nyoka/tests/testScoreWithAdapaSklearn.py | nimeshgit/nyoka | 43bf049825922213eeb3e6a8f39864f9b75d01d5 | [
"Apache-2.0"
] | null | null | null | nyoka/tests/testScoreWithAdapaSklearn.py | nimeshgit/nyoka | 43bf049825922213eeb3e6a8f39864f9b75d01d5 | [
"Apache-2.0"
] | 2 | 2021-08-25T16:16:45.000Z | 2022-02-10T05:28:52.000Z | nyoka/tests/testScoreWithAdapaSklearn.py | nimeshgit/nyoka | 43bf049825922213eeb3e6a8f39864f9b75d01d5 | [
"Apache-2.0"
] | null | null | null | import sys, os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
sys.path.append(BASE_DIR)
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler, Imputer, LabelEncoder, LabelBinarizer,\
Binarizer, MinMaxScaler, MaxAbsScaler, RobustScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier
from sklearn.svm import SVC, SVR, LinearSVC, LinearSVR, OneClassSVM
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn_pandas import DataFrameMapper
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor, \
RandomForestClassifier, RandomForestRegressor, IsolationForest
from sklearn.linear_model import LinearRegression, LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier, MLPRegressor
from nyoka import skl_to_pmml
from nyoka import PMML44 as pml
import unittest
import ast
import numpy
from adapaUtilities import AdapaUtility
from dataUtilities import DataUtility
class TestCases(unittest.TestCase):
@classmethod
def setUpClass(self):
print("******* Unit Test for sklearn *******")
self.data_utility = DataUtility()
self.adapa_utility = AdapaUtility()
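    # Note: each test below follows the same round trip - fit a scikit-learn
    # Pipeline, export it to PMML with skl_to_pmml, upload the PMML to the
    # ADAPA/Zementis scoring server, score the held-out CSV there, and assert that
    # the server's predictions (and probabilities, where applicable) match the
    # local pipeline's output.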
def test_01_linear_regression(self):
print("\ntest 01 (linear regression without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = LinearRegression()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test01sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_02_linear_regression_with_scaler(self):
print("\ntest 02 (linear regression with preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = LinearRegression()
        pipeline_obj = Pipeline([
            ("scaler", StandardScaler()),  # assumed step, implied by the "with preprocessing" label
            ("model", model)
        ])
pipeline_obj.fit(X,y)
file_name = 'test02sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_03_logistic_regression_with_scaler(self):
print("\ntest 03 (logistic regression with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = LogisticRegression()
pipeline_obj = Pipeline([
("mapper", DataFrameMapper([
(["sepal length (cm)", "sepal width (cm)"], MinMaxScaler()),
(["petal length (cm)", "petal width (cm)"], None)
])
),
("model", model)
])
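        # the mapper scales the two sepal columns with MinMaxScaler and passes the
        # two petal columns through unchanged before they reach the logistic model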
pipeline_obj.fit(X,y)
file_name = 'test03sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_04_logistic_regression_with_scaler(self):
print("\ntest 04 (logistic regression with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = LogisticRegression()
        pipeline_obj = Pipeline([
            ("scaler", StandardScaler()),  # assumed step, implied by the "with preprocessing" label
            ("model", model)
        ])
pipeline_obj.fit(X,y)
file_name = 'test04sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_05_logistic_regression(self):
print("\ntest 05 (logistic regression without preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = LogisticRegression()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test05sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_06_logistic_regression(self):
print("\ntest 06 (logistic regression without preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = LogisticRegression()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test06sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_07_ridge_classifier(self):
print("\ntest 07 (Ridge Classifier) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = RidgeClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test07sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = model._predict_proba_lr(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_08_ridge_classifier(self):
print("\ntest 08 (Ridge Classifier) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = RidgeClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test08sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = model._predict_proba_lr(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_09_sgd_classifier(self):
print("\ntest 09 (SGD Classifier with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = SGDClassifier(loss="log")
pipeline_obj = Pipeline([
("scaler", StandardScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test09sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_10_sgd_classifier(self):
print("\ntest 10 (SGD Classifier with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = SGDClassifier(loss="log")
pipeline_obj = Pipeline([
("scaler", StandardScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test10sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_11_lda(self):
print("\ntest 11 (LDA with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = LinearDiscriminantAnalysis()
pipeline_obj = Pipeline([
("scaler", MaxAbsScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test11sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_12_lda(self):
print("\ntest 12 (LDA with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = LinearDiscriminantAnalysis()
pipeline_obj = Pipeline([
("scaler", StandardScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test12sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_13_linearsvc(self):
print("\ntest 13 (LinearSVC with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = LinearSVC()
pipeline_obj = Pipeline([
("scaler", StandardScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test13sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.decision_function(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_14_linearsvc(self):
print("\ntest 14 (LinearSVC with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = LinearSVC()
        pipeline_obj = Pipeline([
            ("scaler", StandardScaler()),  # assumed step, implied by the "with preprocessing" label
            ("model", model)
        ])
pipeline_obj.fit(X,y)
file_name = 'test14sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = model._predict_proba_lr(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_15_linearsvr(self):
print("\ntest 15 (linear svr without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = LinearSVR()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test15sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_16_linearsvr(self):
print("\ntest 16 (linear svr with preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = LinearSVR()
pipeline_obj = Pipeline([
("scaler", MinMaxScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test16sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_17_decisiontreeclassifier(self):
print("\ntest 17 (decision tree classifier with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = DecisionTreeClassifier()
pipeline_obj = Pipeline([
("scaler", Binarizer()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test17sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_18_decisiontreeclassifier(self):
print("\ntest 18 (decision tree classifier with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = DecisionTreeClassifier()
pipeline_obj = Pipeline([
("scaler", Binarizer()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test18sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_19_decisiontreeclassifier(self):
print("\ntest 19 (decision tree classifier without preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = DecisionTreeClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test19sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_20_decisiontreeclassifier(self):
print("\ntest 20 (decision tree classifier without preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = DecisionTreeClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test20sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_21_svr(self):
print("\ntest 21 (SVR without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = SVR()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test21sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_22_gaussian_nb(self):
print("\ntest 22 (GaussianNB without preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = GaussianNB()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test22sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_23_gaussian_nb(self):
print("\ntest 23 (GaussianNB without preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = GaussianNB()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test23sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_24_gaussian_nb(self):
print("\ntest 24 (GaussianNB with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = GaussianNB()
pipeline_obj = Pipeline([
('scaler', StandardScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test24sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_25_random_forest_regressor(self):
print("\ntest 25 (random forest regressor without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = RandomForestRegressor()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test25sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
@unittest.skip("")
def test_26_random_forest_classifier(self):
print("\ntest 26 (random forest classifier with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = RandomForestClassifier()
pipeline_obj = Pipeline([
('scaler',MinMaxScaler()),
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test26sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_27_random_forest_classifier(self):
print("\ntest 27 (random forest classifier with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = RandomForestClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test27sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_28_gradient_boosting_classifier(self):
print("\ntest 28 (gradient boosting classifier with preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = GradientBoostingClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test28sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_29_gradient_boosting_classifier(self):
print("\ntest 29 (gradient boosting classifier with preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = GradientBoostingClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test29sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_30_gradient_boosting_regressor(self):
print("\ntest 30 (gradient boosting regressor without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = GradientBoostingRegressor()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test30sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
@unittest.skip("")
def test_31_knn_classifier(self):
print("\ntest 31 (knn classifier without preprocessing) [binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = KNeighborsClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test31sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_32_knn_classifier(self):
print("\ntest 32 (knn classifier without preprocessing) [multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = KNeighborsClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test32sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_33_knn_regressor(self):
print("\ntest 33 (knn regressor without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = KNeighborsRegressor()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test33sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_34_kmeans(self):
print("\ntest 34 (kmeans without preprocessing\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = KMeans(n_clusters=2)
pipeline_obj = Pipeline([
("model", model)
])
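# note: KMeans.fit accepts y only for API compatibility and ignores it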
pipeline_obj.fit(X,y)
file_name = 'test34sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
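# a clustering model yields no probabilities; transform returns each sample's distance to the cluster centers for the comparison below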
model_prob = pipeline_obj.transform(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@unittest.skip("")
def test_35_isolation_forest(self):
print("\ntest 34 (Isolation Forest\n")
detection_map = {
'true': -1,
'false': 1
}
X = numpy.array([
[1,2,3,4],
[2,1,3,4],
[3,2,1,4],
[3,2,4,1],
[4,3,2,1],
[2,4,3,1]
], dtype=numpy.float32)
test_data = numpy.array([[0,4,0,7],[4,0,4,7]])
features = ['a','b','c','d']
model = IsolationForest(n_estimators=40,contamination=0)
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X)
file_name = 'test35sklearn.pmml'
skl_to_pmml(pipeline_obj, features, '', file_name)
model_pred = pipeline_obj.predict(test_data)
model_scores = model.score_samples(test_data)
model_name = self.adapa_utility.upload_to_zserver(file_name)
z_predictions = self.adapa_utility.score_in_zserver(model_name,'nyoka/tests/test_forest.csv','ANOMALY')
cnt = 0
for idx, value in enumerate(z_predictions):
score, is_anomaly = value.split(",")
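# each zserver row is "score,isAnomaly"; flip the score's sign to match sklearn's score_samples convention, then compare to six decimals to absorb round-off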
score = -1 * float(score)
if "{:.6f}".format(score) != "{:.6f}".format(model_scores[idx]) or model_pred[idx] != detection_map[is_anomaly]:
cnt += 1
self.assertEqual(cnt,0)
@unittest.skip("")
def test_36_one_class_svm(self):
print("\ntest 36 (One Class SVM\n")
detection_map = {
'true': -1,
'false': 1
}
df = pd.read_csv("nyoka/tests/train_ocsvm.csv")
df_test = pd.read_csv("nyoka/tests/test_ocsvm.csv")
features = df.columns
model = OneClassSVM(nu=0.1)
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(df)
file_name = 'test36sklearn.pmml'
skl_to_pmml(pipeline_obj, features, '', file_name)
model_pred = pipeline_obj.predict(df_test)
model_scores = pipeline_obj.decision_function(df_test)
model_name = self.adapa_utility.upload_to_zserver(file_name)
z_predictions = self.adapa_utility.score_in_zserver(model_name,'nyoka/tests/test_ocsvm.csv','ANOMALY')
cnt = 0
for idx, value in enumerate(z_predictions):
score, is_anomaly = value.split(",")
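# OneClassSVM scores already share sklearn's sign convention, so compare against decision_function directly (again to six decimals)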
score = float(score)
if "{:.6f}".format(score) != "{:.6f}".format(model_scores[idx]) or model_pred[idx] != detection_map[is_anomaly]:
cnt += 1
self.assertEqual(cnt,0)
def test_37_mlp_regressor(self):
print("\ntest 37 (mlp regressor without preprocessing)\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_regression()
model = MLPRegressor()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test37sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, _ = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
def test_38_mlp_classifier(self):
print("\ntest 38 (mlp classifier without preprocessing)[multi-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_multi_class_classification()
model = MLPClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test38sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
def test_39_mlp_classifier(self):
print("\ntest 39 (mlp classifier without preprocessing)[binary-class]\n")
X, X_test, y, features, target, test_file = self.data_utility.get_data_for_binary_classification()
model = MLPClassifier()
pipeline_obj = Pipeline([
("model", model)
])
pipeline_obj.fit(X,y)
file_name = 'test39sklearn.pmml'
skl_to_pmml(pipeline_obj, features, target, file_name)
model_name = self.adapa_utility.upload_to_zserver(file_name)
predictions, probabilities = self.adapa_utility.score_in_zserver(model_name, test_file)
model_pred = pipeline_obj.predict(X_test)
model_prob = pipeline_obj.predict_proba(X_test)
self.assertEqual(self.adapa_utility.compare_predictions(predictions, model_pred), True)
self.assertEqual(self.adapa_utility.compare_probability(probabilities, model_prob), True)
@classmethod
def tearDownClass(cls):
print("\n******* Finished *******\n")
if __name__ == '__main__':
unittest.main(warnings='ignore')
| 47.107448 | 124 | 0.68977 | 4,662 | 38,581 | 5.378807 | 0.060489 | 0.079837 | 0.091881 | 0.062211 | 0.864731 | 0.844194 | 0.834304 | 0.825969 | 0.824254 | 0.819788 | 0 | 0.00958 | 0.212669 | 38,581 | 819 | 125 | 47.107448 | 0.81594 | 0 | 0 | 0.717765 | 0 | 0 | 0.091934 | 0.004277 | 0 | 0 | 0 | 0 | 0.095989 | 1 | 0.058739 | false | 0 | 0.034384 | 0 | 0.094556 | 0.058739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
73e7c4f288e8a6d04aa55d7a6ccdd3dc70db86a0 | 196 | py | Python | sagemaker/generate_docker_image_tag.py | Intrical-AI/aws-sagemaker-deploy | 69b5928a23f63864b02366eb76cd57111339cffe | ["Apache-2.0"] | null | null | null | sagemaker/generate_docker_image_tag.py | Intrical-AI/aws-sagemaker-deploy | 69b5928a23f63864b02366eb76cd57111339cffe | ["Apache-2.0"] | null | null | null | sagemaker/generate_docker_image_tag.py | Intrical-AI/aws-sagemaker-deploy | 69b5928a23f63864b02366eb76cd57111339cffe | ["Apache-2.0"] | null | null | null |
def generate_docker_image_tag(registry_uri, bento_name, bento_version):
# image_tag = f"{bento_name}-{bento_version}".lower()
image_tag = "latest"
return f"{registry_uri}:{image_tag}"
| 39.2 | 71 | 0.734694 | 28 | 196 | 4.714286 | 0.5 | 0.242424 | 0.212121 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127551 | 196 | 4 | 72 | 49 | 0.77193 | 0.260204 | 0 | 0 | 1 | 0 | 0.223776 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fb40313d13048eaae29404d6076e384ab8566721 | 2,828 | py | Python | ftc/migrations/0008_auto_20201002_1408.py | drkane/find-that-charity | 25f778cfa1429e465bc19a6465b09f0473cfe113 | ["MIT"] | 14 | 2018-09-14T11:51:26.000Z | 2021-02-28T22:00:29.000Z | ftc/migrations/0008_auto_20201002_1408.py | drkane/find-that-charity | 25f778cfa1429e465bc19a6465b09f0473cfe113 | ["MIT"] | 89 | 2018-01-26T22:20:43.000Z | 2022-01-20T14:16:25.000Z | ftc/migrations/0008_auto_20201002_1408.py | drkane/find-that-charity | 25f778cfa1429e465bc19a6465b09f0473cfe113 | ["MIT"] | 7 | 2019-01-31T11:23:17.000Z | 2022-03-09T07:42:08.000Z |
# Generated by Django 3.1.1 on 2020-10-02 13:08
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("ftc", "0007_auto_20201001_1656"),
]
operations = [
migrations.AddField(
model_name="organisation",
name="geo_ctry",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_cty",
field=models.CharField(blank=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_lat",
field=models.FloatField(blank=True, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_laua",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_lep1",
field=models.CharField(blank=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_lep2",
field=models.CharField(blank=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_long",
field=models.FloatField(blank=True, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_lsoa11",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_msoa11",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_oa11",
field=models.CharField(blank=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_pcon",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_rgn",
field=models.CharField(blank=True, db_index=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_ttwa",
field=models.CharField(blank=True, max_length=9, null=True),
),
migrations.AddField(
model_name="organisation",
name="geo_ward",
field=models.CharField(blank=True, max_length=9, null=True),
),
]
| 33.666667 | 87 | 0.566478 | 293 | 2,828 | 5.300341 | 0.1843 | 0.162267 | 0.207341 | 0.2434 | 0.869285 | 0.869285 | 0.869285 | 0.839665 | 0.839665 | 0.839665 | 0 | 0.026235 | 0.312588 | 2,828 | 83 | 88 | 34.072289 | 0.772634 | 0.015912 | 0 | 0.727273 | 1 | 0 | 0.110392 | 0.00827 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012987 | 0 | 0.051948 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
fb4f0a9894dcd5a05e7c3f0043ea239adaca7656 | 26,560 | py | Python | powerdns_client/api/zonecryptokey_api.py | nrfta/python-powerdns-client | 57dd0460995a5407c6f5c963553b4df0f4859667 | ["MIT"] | 1 | 2021-04-05T21:37:17.000Z | 2021-04-05T21:37:17.000Z | powerdns_client/api/zonecryptokey_api.py | nrfta/python-powerdns-client | 57dd0460995a5407c6f5c963553b4df0f4859667 | ["MIT"] | null | null | null | powerdns_client/api/zonecryptokey_api.py | nrfta/python-powerdns-client | 57dd0460995a5407c6f5c963553b4df0f4859667 | ["MIT"] | 1 | 2021-12-18T04:33:58.000Z | 2021-12-18T04:33:58.000Z |
# coding: utf-8
"""
PowerDNS Authoritative HTTP API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 0.0.13
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from powerdns_client.api_client import ApiClient
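# A minimal usage sketch, not part of the generated client. It assumes the standard
# swagger-codegen layout (Configuration exported from the package root) and that your
# PowerDNS API key goes in the 'X-API-Key' header; adjust host, key, and zone to your setup:
#
#   from powerdns_client import ApiClient, Configuration
#   config = Configuration()
#   config.api_key['X-API-Key'] = 'changeme'
#   api = ZonecryptokeyApi(ApiClient(config))
#   keys = api.list_cryptokeys('localhost', 'example.org.')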
class ZonecryptokeyApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_cryptokey(self, server_id, zone_id, cryptokey, **kwargs): # noqa: E501
"""Creates a Cryptokey # noqa: E501
This method adds a new key to a zone. The key can either be generated or imported by supplying the content parameter. If content, bits and algo are null, a key will be generated based on the default-ksk-algorithm and default-ksk-size settings for a KSK and the default-zsk-algorithm and default-zsk-size options for a ZSK. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_cryptokey(server_id, zone_id, cryptokey, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: (required)
:param Cryptokey cryptokey: Add a Cryptokey (required)
:return: Cryptokey
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_cryptokey_with_http_info(server_id, zone_id, cryptokey, **kwargs) # noqa: E501
else:
(data) = self.create_cryptokey_with_http_info(server_id, zone_id, cryptokey, **kwargs) # noqa: E501
return data
def create_cryptokey_with_http_info(self, server_id, zone_id, cryptokey, **kwargs): # noqa: E501
"""Creates a Cryptokey # noqa: E501
This method adds a new key to a zone. The key can either be generated or imported by supplying the content parameter. If content, bits and algo are null, a key will be generated based on the default-ksk-algorithm and default-ksk-size settings for a KSK and the default-zsk-algorithm and default-zsk-size options for a ZSK. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_cryptokey_with_http_info(server_id, zone_id, cryptokey, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: (required)
:param Cryptokey cryptokey: Add a Cryptokey (required)
:return: Cryptokey
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['server_id', 'zone_id', 'cryptokey'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_cryptokey" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'server_id' is set
if ('server_id' not in params or
params['server_id'] is None):
raise ValueError("Missing the required parameter `server_id` when calling `create_cryptokey`") # noqa: E501
# verify the required parameter 'zone_id' is set
if ('zone_id' not in params or
params['zone_id'] is None):
raise ValueError("Missing the required parameter `zone_id` when calling `create_cryptokey`") # noqa: E501
# verify the required parameter 'cryptokey' is set
if ('cryptokey' not in params or
params['cryptokey'] is None):
raise ValueError("Missing the required parameter `cryptokey` when calling `create_cryptokey`") # noqa: E501
collection_formats = {}
path_params = {}
if 'server_id' in params:
path_params['server_id'] = params['server_id'] # noqa: E501
if 'zone_id' in params:
path_params['zone_id'] = params['zone_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'cryptokey' in params:
body_params = params['cryptokey']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader'] # noqa: E501
return self.api_client.call_api(
'/servers/{server_id}/zones/{zone_id}/cryptokeys', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Cryptokey', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_cryptokey(self, server_id, zone_id, cryptokey_id, **kwargs): # noqa: E501
"""This method deletes a key specified by cryptokey_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_cryptokey(server_id, zone_id, cryptokey_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:param str cryptokey_id: The id value of the Cryptokey (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, **kwargs) # noqa: E501
else:
(data) = self.delete_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, **kwargs) # noqa: E501
return data
def delete_cryptokey_with_http_info(self, server_id, zone_id, cryptokey_id, **kwargs): # noqa: E501
"""This method deletes a key specified by cryptokey_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:param str cryptokey_id: The id value of the Cryptokey (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['server_id', 'zone_id', 'cryptokey_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_cryptokey" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'server_id' is set
if ('server_id' not in params or
params['server_id'] is None):
raise ValueError("Missing the required parameter `server_id` when calling `delete_cryptokey`") # noqa: E501
# verify the required parameter 'zone_id' is set
if ('zone_id' not in params or
params['zone_id'] is None):
raise ValueError("Missing the required parameter `zone_id` when calling `delete_cryptokey`") # noqa: E501
# verify the required parameter 'cryptokey_id' is set
if ('cryptokey_id' not in params or
params['cryptokey_id'] is None):
raise ValueError("Missing the required parameter `cryptokey_id` when calling `delete_cryptokey`") # noqa: E501
collection_formats = {}
path_params = {}
if 'server_id' in params:
path_params['server_id'] = params['server_id'] # noqa: E501
if 'zone_id' in params:
path_params['zone_id'] = params['zone_id'] # noqa: E501
if 'cryptokey_id' in params:
path_params['cryptokey_id'] = params['cryptokey_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader'] # noqa: E501
return self.api_client.call_api(
'/servers/{server_id}/zones/{zone_id}/cryptokeys/{cryptokey_id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_cryptokey(self, server_id, zone_id, cryptokey_id, **kwargs): # noqa: E501
"""Returns all data about the CryptoKey, including the privatekey. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_cryptokey(server_id, zone_id, cryptokey_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:param str cryptokey_id: The id value of the CryptoKey (required)
:return: Cryptokey
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, **kwargs) # noqa: E501
else:
(data) = self.get_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, **kwargs) # noqa: E501
return data
def get_cryptokey_with_http_info(self, server_id, zone_id, cryptokey_id, **kwargs): # noqa: E501
"""Returns all data about the CryptoKey, including the privatekey. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:param str cryptokey_id: The id value of the CryptoKey (required)
:return: Cryptokey
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['server_id', 'zone_id', 'cryptokey_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_cryptokey" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'server_id' is set
if ('server_id' not in params or
params['server_id'] is None):
raise ValueError("Missing the required parameter `server_id` when calling `get_cryptokey`") # noqa: E501
# verify the required parameter 'zone_id' is set
if ('zone_id' not in params or
params['zone_id'] is None):
raise ValueError("Missing the required parameter `zone_id` when calling `get_cryptokey`") # noqa: E501
# verify the required parameter 'cryptokey_id' is set
if ('cryptokey_id' not in params or
params['cryptokey_id'] is None):
raise ValueError("Missing the required parameter `cryptokey_id` when calling `get_cryptokey`") # noqa: E501
collection_formats = {}
path_params = {}
if 'server_id' in params:
path_params['server_id'] = params['server_id'] # noqa: E501
if 'zone_id' in params:
path_params['zone_id'] = params['zone_id'] # noqa: E501
if 'cryptokey_id' in params:
path_params['cryptokey_id'] = params['cryptokey_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader'] # noqa: E501
return self.api_client.call_api(
'/servers/{server_id}/zones/{zone_id}/cryptokeys/{cryptokey_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Cryptokey', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_cryptokeys(self, server_id, zone_id, **kwargs): # noqa: E501
"""Get all CryptoKeys for a zone, except the privatekey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cryptokeys(server_id, zone_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:return: list[Cryptokey]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_cryptokeys_with_http_info(server_id, zone_id, **kwargs) # noqa: E501
else:
(data) = self.list_cryptokeys_with_http_info(server_id, zone_id, **kwargs) # noqa: E501
return data
def list_cryptokeys_with_http_info(self, server_id, zone_id, **kwargs): # noqa: E501
"""Get all CryptoKeys for a zone, except the privatekey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_cryptokeys_with_http_info(server_id, zone_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: The id of the zone to retrieve (required)
:return: list[Cryptokey]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['server_id', 'zone_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_cryptokeys" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'server_id' is set
if ('server_id' not in params or
params['server_id'] is None):
raise ValueError("Missing the required parameter `server_id` when calling `list_cryptokeys`") # noqa: E501
# verify the required parameter 'zone_id' is set
if ('zone_id' not in params or
params['zone_id'] is None):
raise ValueError("Missing the required parameter `zone_id` when calling `list_cryptokeys`") # noqa: E501
collection_formats = {}
path_params = {}
if 'server_id' in params:
path_params['server_id'] = params['server_id'] # noqa: E501
if 'zone_id' in params:
path_params['zone_id'] = params['zone_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader'] # noqa: E501
return self.api_client.call_api(
'/servers/{server_id}/zones/{zone_id}/cryptokeys', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Cryptokey]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def modify_cryptokey(self, server_id, zone_id, cryptokey_id, cryptokey, **kwargs): # noqa: E501
"""This method (de)activates a key from zone_name specified by cryptokey_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.modify_cryptokey(server_id, zone_id, cryptokey_id, cryptokey, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: (required)
:param str cryptokey_id: Cryptokey to manipulate (required)
:param Cryptokey cryptokey: the Cryptokey (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.modify_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, cryptokey, **kwargs) # noqa: E501
else:
(data) = self.modify_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, cryptokey, **kwargs) # noqa: E501
return data
def modify_cryptokey_with_http_info(self, server_id, zone_id, cryptokey_id, cryptokey, **kwargs): # noqa: E501
"""This method (de)activates a key from zone_name specified by cryptokey_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.modify_cryptokey_with_http_info(server_id, zone_id, cryptokey_id, cryptokey, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str server_id: The id of the server to retrieve (required)
:param str zone_id: (required)
:param str cryptokey_id: Cryptokey to manipulate (required)
:param Cryptokey cryptokey: the Cryptokey (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['server_id', 'zone_id', 'cryptokey_id', 'cryptokey'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method modify_cryptokey" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'server_id' is set
if ('server_id' not in params or
params['server_id'] is None):
raise ValueError("Missing the required parameter `server_id` when calling `modify_cryptokey`") # noqa: E501
# verify the required parameter 'zone_id' is set
if ('zone_id' not in params or
params['zone_id'] is None):
raise ValueError("Missing the required parameter `zone_id` when calling `modify_cryptokey`") # noqa: E501
# verify the required parameter 'cryptokey_id' is set
if ('cryptokey_id' not in params or
params['cryptokey_id'] is None):
raise ValueError("Missing the required parameter `cryptokey_id` when calling `modify_cryptokey`") # noqa: E501
# verify the required parameter 'cryptokey' is set
if ('cryptokey' not in params or
params['cryptokey'] is None):
raise ValueError("Missing the required parameter `cryptokey` when calling `modify_cryptokey`") # noqa: E501
collection_formats = {}
path_params = {}
if 'server_id' in params:
path_params['server_id'] = params['server_id'] # noqa: E501
if 'zone_id' in params:
path_params['zone_id'] = params['zone_id'] # noqa: E501
if 'cryptokey_id' in params:
path_params['cryptokey_id'] = params['cryptokey_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'cryptokey' in params:
body_params = params['cryptokey']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader'] # noqa: E501
return self.api_client.call_api(
'/servers/{server_id}/zones/{zone_id}/cryptokeys/{cryptokey_id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 44.119601 | 344 | 0.626242 | 3,247 | 26,560 | 4.890668 | 0.058516 | 0.045844 | 0.026448 | 0.030856 | 0.965491 | 0.963287 | 0.962531 | 0.951134 | 0.947166 | 0.946411 | 0 | 0.01488 | 0.283923 | 26,560 | 601 | 345 | 44.193012 | 0.820075 | 0.345369 | 0 | 0.808511 | 1 | 0 | 0.22828 | 0.037666 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033435 | false | 0 | 0.012158 | 0 | 0.094225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fbaf51fa7560d935e5348c149272760b16ef362f | 68 | py | Python | mamba/infrastructure/__init__.py | jaimegildesagredo/mamba | f7cdb231b5eec036edba05752ae90d174751aa10 | ["MIT"] | null | null | null | mamba/infrastructure/__init__.py | jaimegildesagredo/mamba | f7cdb231b5eec036edba05752ae90d174751aa10 | ["MIT"] | null | null | null | mamba/infrastructure/__init__.py | jaimegildesagredo/mamba | f7cdb231b5eec036edba05752ae90d174751aa10 | ["MIT"] | null | null | null |
import sys
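# sys.version_info compares element-wise as a tuple, so (3, x, ...) >= (3, 0) holds on any Python 3 interpreter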
def is_python3():
return sys.version_info >= (3, 0)
| 13.6 | 37 | 0.661765 | 11 | 68 | 3.909091 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.205882 | 68 | 4 | 38 | 17 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
fbb9aa2c582ce1d5ded56464c4412041a6b97546 | 42,232 | py | Python | goutdotcom/history/migrations/0001_initial.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | ["MIT"] | null | null | null | goutdotcom/history/migrations/0001_initial.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | ["MIT"] | null | null | null | goutdotcom/history/migrations/0001_initial.py | Spiewart/goutdotcom | 0916155732a72fcb8c8a2fb0f4dd81efef618af8 | ["MIT"] | null | null | null |
# Generated by Django 3.1.7 on 2022-01-08 23:46
from django.db import migrations, models
import django.utils.timezone
import django_extensions.db.fields
import multiselectfield.db.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Alcohol',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you drink alcohol?', null=True, verbose_name='alcohol')),
('number', models.IntegerField(blank=True, default=False, help_text='How many drinks do you have per week?', null=True)),
('wine', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you drink wine?', null=True)),
('beer', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you drink beer?', null=True)),
('liquor', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you drink liquor?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='AllopurinolHypersensitivity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had a side effect or reaction to allopurinol?', null=True, verbose_name='Allopurinol Hypersensitivity')),
('rash', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had a rash side effect due to allopurinol?', null=True, verbose_name='Allopurinol Rash')),
('transaminitis', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had elevated liver function tests as a side effect of allopurinol?', null=True, verbose_name='Allopurinol Transaminitis')),
('cytopenia', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had low blood counts as a side effect of allopurinol?', null=True, verbose_name='Allopurinol Cytopenia')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Angina',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], help_text="Do you get <a href='https://www.heart.org/en/health-topics/heart-attack/angina-chest-pain' target='_blank'>angina</a>?", null=True, verbose_name='Angina (cardiac chest pain)')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Anticoagulation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('date', models.DateField(blank=True, help_text='When did you start this medication?', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Are you on <a href='https://en.wikipedia.org/wiki/Anticoagulant' target='_blank'>anticoagulation</a> (blood thinners)?", null=True, verbose_name='Anticoagulation')),
('apixaban', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on apixaban / Eliquis?', null=True)),
('clopidogrel', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on clopidogrel / Plavix?', null=True)),
('dabigatran', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on dabigatran / Pradaxa?', null=True)),
('enoxaparin', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on enoxaparin / Lovenox?', null=True)),
('rivaroxaban', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on rivaroxaban / Xarelto?', null=True)),
('warfarin', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on warfarin / Coumadin?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Bleed',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('number', models.IntegerField(blank=True, default=1, help_text='How many have you had?', null=True)),
('date', models.DateField(blank=True, help_text='When was it? The most recent if multiple.', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you ever had a major bleed (<a href='https://en.wikipedia.org/wiki/Gastrointestinal_bleeding' target='_blank'>gastrointestinal bleeding</a> (GI), <a href='https://en.wikipedia.org/wiki/Peptic_ulcer_disease' target='_blank'>peptic ulcer disease</a>, brain (CNS))?", null=True, verbose_name='major bleed')),
('GIB', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you ever had <a href='https://en.wikipedia.org/wiki/Gastrointestinal_bleeding' target='_blank'>gastrointestinal bleeding</a>?", null=True)),
('GIB_date', models.DateTimeField(blank=True, default=django.utils.timezone.now, help_text='When was the last time you has a gastrointestinal bleed?', null=True)),
('CNS', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you had an intracranial bleed?', null=True)),
('CNS_date', models.DateTimeField(blank=True, default=django.utils.timezone.now, help_text='When was the last time you had an intracranial bleed?', null=True)),
('transfusion', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Did you require a transfusion?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='CHF',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('systolic', models.BooleanField(blank=True, choices=[(True, 'Systolic'), (False, 'Diastolic')], help_text="Do you have systolic (reduced <a href='https://en.wikipedia.org/wiki/Ejection_fraction' target='_blank'>ejection fraction</a>) heart failure?", null=True, verbose_name='Systolic or diastolic heart failure')),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Do you have CHF (<a href='https://en.wikipedia.org/wiki/Heart_failure' target='_blank'>congestive heart failure</a>)?", null=True, verbose_name='Congestive Heart Failure (CHF)')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='CKD',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('stage', models.IntegerField(choices=[(1, 'I'), (2, 'II'), (3, 'III'), (4, 'IV'), (5, 'V')], default=None, help_text="What <a href='https://www.kidney.org/sites/default/files/01-10-7278_HBG_CKD_Stages_Flyer_GFR.gif' target='_blank'>stage</a> is your CKD?", null=True, verbose_name='CKD stage')),
('dialysis', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], help_text="Are you on <a href='https://en.wikipedia.org/wiki/Hemodialysis' target='_blank'>dialysis</a>?", null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Do you have CKD (<a href='https://en.wikipedia.org/wiki/Chronic_kidney_disease' target='_blank'>chronic kidney disease</a>)?", null=True, verbose_name='Chronic Kidney Disease (CKD)')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ColchicineInteractions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('date', models.DateField(blank=True, help_text='When did you start this medication?', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Are you on any medications that could <a href='https://www.rxlist.com/colchicine-drug.htm#interactions' target='_blank'>interact</a> with colchicine (common ones are simvastatin, atorvastatin, and oral <a href='https://en.wikipedia.org/wiki/Antifungal' target='_blank'>antifungals</a>)?", null=True, verbose_name='Colchicine Medication Interactions')),
('clarithromycin', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on clarithromycin?', null=True)),
('simvastatin', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on simvastatin?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Cyclosporine',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have a history?', null=True)),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('date', models.DateField(blank=True, help_text='When did you start this medication?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Diabetes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('type', models.IntegerField(blank=True, choices=[(1, 'One'), (2, 'Two')], help_text="Do you have <a href='https://en.wikipedia.org/wiki/Type_1_diabetes' target='_blank'>type I</a> or <a href='https://en.wikipedia.org/wiki/Type_2_diabetes' target='_blank'>type II</a> diabetes?", null=True, verbose_name='Type 1 or type 2 diabetes?')),
('insulin', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Are you on <a href='https://en.wikipedia.org/wiki/Insulin' target='_blank'>insulin</a>?", null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Do you have <a href='https://en.wikipedia.org/wiki/Diabetes' target='_blank'>diabetes</a>?", null=True, verbose_name='Diabetes')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Diuretics',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have a history?', null=True)),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('date', models.DateField(blank=True, help_text='When did you start this medication?', null=True)),
('hydrochlorothiazide', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on hydrochlorothiazide?', null=True)),
('furosemide', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on Lasix / furosemide?', null=True)),
('bumetanide', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on Bumex / bumetanide?', null=True)),
('torsemide', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on torsemide?', null=True)),
('metolazone', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on metolazone?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Erosions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have erosions on your x-rays?', null=True, verbose_name='Erosions')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='FebuxostatHypersensitivity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had a side effect or reaction to febuxostat?', null=True, verbose_name='Febuxostat Hypersensitivity')),
('rash', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had a rash side effect due to febuxostat?', null=True, verbose_name='Febuxostat Rash')),
('transaminitis', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had elevated liver function tests as a side effect of febuxostat?', null=True, verbose_name='Febuxostat Transaminitis')),
('cytopenia', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you ever had low blood counts as a side effect of febuxostat?', null=True, verbose_name='Febuxostat Cytopenia')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Fructose',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you eat a lot of fructose such as the sugar found in soda/pop, processed candies, or juices?', null=True, verbose_name='fructose')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Gout',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('family_member', multiselectfield.db.fields.MultiSelectField(choices=[('Father', 'Father'), ('Mother', 'Mother'), ('Sister', 'Sister'), ('Brother', 'Brother'), ('Uncle', 'Uncle'), ('Aunt', 'Aunt'), ('Son', 'Son'), ('Daughter', 'Daughter'), ('Grandpa', 'Grandpa'), ('Grandma', 'Grandma')], default=True, help_text='Which family members have a history of gout?', max_length=68, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have a family history of gout?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='HeartAttack',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('number', models.IntegerField(blank=True, default=1, help_text='How many have you had?', null=True)),
('date', models.DateField(blank=True, help_text='When was it? The most recent if multiple.', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you ever had a <a href='https://en.wikipedia.org/wiki/Myocardial_infarction' target='_blank'>heart attack</a>?", null=True, verbose_name='heart attack')),
('stent', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you had one or more <a href='https://en.wikipedia.org/wiki/Stent' target='_blank'>stent</a> placed?", null=True, verbose_name='stent')),
('stent_date', models.DateTimeField(blank=True, default=django.utils.timezone.now, help_text='When was the last time you had a stent placed?', null=True)),
('cabg', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you had <a href='https://en.wikipedia.org/wiki/Coronary_artery_bypass_surgery' target='_blank'>bypass</a>?", null=True, verbose_name='cabg')),
('cabg_date', models.DateTimeField(blank=True, default=django.utils.timezone.now, help_text='When did you have a bypass?', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Hypertension',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('medication', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Are you on <a href='https://www.heart.org/en/health-topics/high-blood-pressure/changes-you-can-make-to-manage-high-blood-pressure/types-of-blood-pressure-medications' target='_blank'>medications</a> for high blood pressure?", null=True, verbose_name='Blood pressure medications')),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], help_text="Do you have <a href='https://en.wikipedia.org/wiki/Hypertension' target='_blank'>hypertension</a>?", null=True, verbose_name='Hypertension (high blood pressure)')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Hyperuricemia',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have a history of elevated levels (> 9.0 mg/dL) of uric acid in your blood?', null=True, verbose_name='Hyperuricemia')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='IBD',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Do you have <a href='https://en.wikipedia.org/wiki/Inflammatory_bowel_disease' target='_blank'>IBD</a> (inflammatory bowel disease, i.e. Crohn's disease or ulcerative colitis)?", null=True, verbose_name='Inflammatory Bowel Disease')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='OrganTransplant',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('organ', multiselectfield.db.fields.MultiSelectField(choices=[('Heart', 'Heart'), ('Kidney', 'Kidney'), ('Liver', 'Liver'), ('Lung', 'Lung'), ('Pancreas', 'Pancreas'), ('Face', 'Face')], default='', help_text='Which organ did you have transplanted?', max_length=37, null=True, verbose_name='Organ(s) transplanted')),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Have you had an organ transplant?', null=True, verbose_name='Organ transplant')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Osteoporosis',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Do you have <a href='https://en.wikipedia.org/wiki/Osteoporosis' target='_blank'>osteoporosis</a>?", null=True, verbose_name='Osteoporosis')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='PVD',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], help_text="Do you have <a href='https://en.wikipedia.org/wiki/Peripheral_artery_disease' target='_blank'>peripheral vascular disease</a>?", null=True, verbose_name='Peripheral Vascular Disease')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Shellfish',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you eat a lot of shellfish?', null=True, verbose_name='shellfish')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Stroke',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('number', models.IntegerField(blank=True, default=1, help_text='How many have you had?', null=True)),
('date', models.DateField(blank=True, help_text='When was it? The most recent if multiple.', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you ever had a <a href='https://en.wikipedia.org/wiki/Stroke' target='_blank'>stroke</a>?", null=True, verbose_name='stroke')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Tophi',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Do you have gouty tophi?', null=True, verbose_name='Tophi')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='UrateKidneyStones',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Have you had urate <a href='https://en.wikipedia.org/wiki/Kidney_stone_disease' target='_blank'>kidney stones</a>?", null=True, verbose_name='Urate Kidney Stones')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='XOIInteractions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('last_modified', models.CharField(blank=True, choices=[('ContraindicationsProfile', 'ContraindicationsProfile'), ('FlareAid', 'FlareAid'), ('Flare', 'Flare'), ('FamilyProfile', 'FamilyProfile'), ('MedicalProfile', 'MedicalProfile'), ('SocialProfile', 'SocialProfile'), ('ULT', 'ULT'), ('ULTAid', 'ULTAid')], max_length=75, null=True)),
('date', models.DateField(blank=True, help_text='When did you start this medication?', null=True)),
('value', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text="Are you on <a href='https://en.wikipedia.org/wiki/Mercaptopurine' target='_blank'>mercaptopurine</a> (6-MP, Purixan) or <a href='https://en.wikipedia.org/wiki/Azathioprine' target='_blank'>azathioprine</a> (AZA, Imuran)?", null=True)),
('six_mp', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on 6-mercaptopurine / 6-MP?', null=True)),
('azathioprine', models.BooleanField(blank=True, choices=[(True, 'Yes'), (False, 'No')], default=False, help_text='Are you on azathioprine / Imuran?', null=True)),
],
options={
'abstract': False,
},
),
]
| 100.075829 | 473 | 0.635797 | 4,381 | 42,232 | 6.013924 | 0.077836 | 0.048848 | 0.051239 | 0.061487 | 0.851558 | 0.836224 | 0.807303 | 0.780506 | 0.762288 | 0.760011 | 0 | 0.002916 | 0.187843 | 42,232 | 421 | 474 | 100.313539 | 0.765241 | 0.001066 | 0 | 0.683575 | 1 | 0.062802 | 0.323148 | 0.049591 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.004831 | 0.009662 | 0 | 0.019324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fbbb7734f6bead1b0baa566121ee18dbd0915d30 | 22,967 | py | Python | gan_topics.py | DarthSid95/RumiGANs | 9f7876e89caa0d39bd563947ab9c41f4e3745021 | [
"MIT"
] | 26 | 2020-10-31T06:00:22.000Z | 2022-02-13T19:30:49.000Z | gan_topics.py | DarthSid95/RumiGANs | 9f7876e89caa0d39bd563947ab9c41f4e3745021 | [
"MIT"
] | 3 | 2021-03-01T05:43:03.000Z | 2021-07-10T13:08:18.000Z | gan_topics.py | DarthSid95/RumiGANs | 9f7876e89caa0d39bd563947ab9c41f4e3745021 | [
"MIT"
] | 5 | 2021-04-12T10:59:20.000Z | 2021-06-04T08:52:51.000Z | from __future__ import print_function
import os, sys, time, argparse
from datetime import date
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import math
from absl import app
from absl import flags
import json
from gan_data import *
from gan_src import *
# import tensorflow_probability as tfp
# tfd = tfp.distributions
from matplotlib.backends.backend_pgf import PdfPages
'''
GAN_topic is the overarching class file: the corresponding parent classes are instantiated here, the calling functions for them are set up, and the files and folders for results etc. are created. Data reading is also done from here. Display functions, architectures, etc. may be modified here if needed (by overloading the parent classes).
'''
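# A minimal usage sketch (illustrative only: the flag names shown here are
# assumptions; the actual flag set is parsed elsewhere in this repo). Each
# topic class below is driven by a parsed FLAGS dictionary, roughly:
#
#   FLAGS_dict = {'data': 'mnist', 'loss': 'base', 'device': '/GPU:0', ...}
#   gan = GAN_Base(FLAGS_dict)
#   gan.initial_setup()
#   gan.get_data()
#   gan.create_models()
#   gan.create_load_checkpoint()
#   gan.train()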
'''***********************************************************************************
********** GAN Baseline setup ********************************************************
***********************************************************************************'''
class GAN_Base(GAN_SRC, GAN_DATA_Base):
def __init__(self,FLAGS_dict):
''' Set up the GAN_SRC class - defines all fundamental ops and metric functions'''
GAN_SRC.__init__(self,FLAGS_dict)
''' Set up the GAN_DATA class'''
GAN_DATA_Base.__init__(self)
def initial_setup(self):
''' Initial setup function: define function names '''
self.gen_func = 'self.gen_func_'+self.data+'()'
self.gen_model = 'self.generator_model_'+self.data+'()'
self.disc_model = 'self.discriminator_model_'+self.data+'()'
self.loss_func = 'self.loss_'+self.loss+'()'
self.dataset_func = 'self.dataset_'+self.data+'(self.train_data, self.batch_size)'
self.show_result_func = 'self.show_result_'+self.data+'(images = predictions, num_epoch=epoch, show = False, save = True, path = path)'
self.FID_func = 'self.FID_'+self.data+'()'
''' Define dataset and tf.data function; batch sizing is done here '''
# self.get_data()
# self.create_models()
# self.create_optimizer()
# self.create_load_checkpoint()
def get_data(self):
# with tf.device('/CPU'):
self.train_data = eval(self.gen_func)
self.num_batches = int(np.floor((self.train_data.shape[0] * self.reps)/self.batch_size))
''' Set PRINT and SAVE iters if 0'''
self.print_step = tf.constant(max(int(self.num_batches/10),1),dtype='int64')
self.save_step = tf.constant(max(int(self.num_batches/2),1),dtype='int64')
self.train_dataset = eval(self.dataset_func)
self.train_dataset_size = self.train_data.shape[0]
print(" Batch Size {}, Final Num Batches {}, Print Step {}, Save Step {}".format(self.batch_size, self.num_batches,self.print_step, self.save_step))
def create_models(self):
with tf.device(self.device):
self.total_count = tf.Variable(0,dtype='int64')
self.generator = eval(self.gen_model)
self.discriminator = eval(self.disc_model)
if self.res_flag == 1:
with open(self.run_loc+'/'+self.run_id+'_Models.txt','a') as fh:
# Pass the file handle in as a lambda function to make it callable
fh.write("\n\n GENERATOR MODEL: \n\n")
self.generator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
fh.write("\n\n DISCRIMINATOR MODEL: \n\n")
self.discriminator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
print("Model Successfully made")
print(self.generator.summary())
print(self.discriminator.summary())
return
def create_load_checkpoint(self):
self.checkpoint = tf.train.Checkpoint(G_optimizer = self.G_optimizer,
D_optimizer = self.D_optimizer,
generator = self.generator,
discriminator = self.discriminator,
total_count = self.total_count)
self.manager = tf.train.CheckpointManager(self.checkpoint, self.checkpoint_dir, max_to_keep=10)
self.checkpoint_prefix = os.path.join(self.checkpoint_dir, "ckpt")
if self.resume:
try:
self.checkpoint.restore(tf.train.latest_checkpoint(self.checkpoint_dir))
except:
print("Checkpoint loading Failed. It could be a model mismatch. H5 files will be loaded instead")
try:
self.generator = tf.keras.models.load_model(self.checkpoint_dir+'/model_generator.h5')
self.discriminator = tf.keras.models.load_model(self.checkpoint_dir+'/model_discriminator.h5')
except:
print("H5 file loading also failed. Please Check the LOG_FOLDER and RUN_ID flags")
print("Model restored...")
print("Starting at Iteration - "+str(self.total_count.numpy()))
print("Starting at Epoch - "+str(int((self.total_count.numpy() * self.batch_size_big) / (self.train_data.shape[0])) + 1))
return
def train(self):
start = int((self.total_count.numpy() * self.batch_size) / (self.train_data.shape[0])) + 1
for epoch in range(start,self.num_epochs):
if self.pbar_flag:
bar = self.pbar(epoch)
start = time.time()
batch_count = tf.Variable(0,dtype='int64')
start_time = 0
for image_batch in self.train_dataset:
# print(image_batch.shape)
self.total_count.assign_add(1)
batch_count.assign_add(1)
start_time = time.time()
with tf.device(self.device):
self.train_step(image_batch)
self.eval_metrics()
train_time = time.time()-start_time
if self.pbar_flag:
bar.postfix[0] = f'{batch_count.numpy():6.0f}'
bar.postfix[1] = f'{self.D_loss.numpy():2.4e}'
bar.postfix[2] = f'{self.G_loss.numpy():2.4e}'
bar.update(self.batch_size.numpy())
if (batch_count.numpy() % self.print_step.numpy()) == 0 or self.total_count <= 2:
if self.res_flag:
self.res_file.write("Epoch {:>3d} Batch {:>3d} in {:>2.4f} sec; D_loss - {:>2.4f}; G_loss - {:>2.4f} \n".format(epoch,batch_count.numpy(),train_time,self.D_loss.numpy(),self.G_loss.numpy()))
self.print_batch_outputs(epoch)
# Save the model every SAVE_ITERS iterations
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
if self.save_all:
self.checkpoint.save(file_prefix = self.checkpoint_prefix)
else:
self.manager.save()
if self.pbar_flag:
bar.close()
del bar
tf.print('Time for epoch {} is {} sec'.format(epoch, time.time()-start))
self.generator.save(self.checkpoint_dir + '/model_generator.h5', overwrite = True)
self.discriminator.save(self.checkpoint_dir + '/model_discriminator.h5', overwrite = True)
def print_batch_outputs(self,epoch):
if self.total_count.numpy() <= 2:
self.generate_and_save_batch(epoch)
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
self.generate_and_save_batch(epoch)
def test(self):
for i in range(self.num_test_images):
path = self.impath+'_Testing_'+str(self.total_count.numpy())+'_TestCase_'+str(i)+'.png'
label = 'TEST SAMPLES AT ITERATION '+str(self.total_count.numpy())
size_figure_grid = self.num_to_print
test_batch_size = size_figure_grid*size_figure_grid
noise = tf.random.normal([self.batch_size, self.noise_dims],self.noise_mean, self.noise_stddev)
images = self.generator(noise, training=False)
if self.data != 'celeba':
images = (images + 1.0)/2.0
self.save_image_batch(images = images,label = label, path = path)
# self.impath += '_Testing_'
# for img_batch in self.train_dataset:
# self.reals = img_batch
# self.generate_and_save_batch(0)
# return
'''***********************************************************************************
********** Conditional GAN (cGAN-PD, ACGAN, TACGAN) setup ****************************
***********************************************************************************'''
class GAN_CondGAN(GAN_SRC, GAN_DATA_CondGAN):
def __init__(self,FLAGS_dict):
''' Set up the GAN_SRC class - defines all GAN architectures'''
GAN_SRC.__init__(self,FLAGS_dict)
''' Set up the GAN_DATA class'''
GAN_DATA_CondGAN.__init__(self)
# eval('GAN_DATA_'+FLAGS.topic+'.__init__(self,data)')
def initial_setup(self):
''' Initial setup function: define function names '''
self.gen_func = 'self.gen_func_'+self.data+'()'
self.gen_model = 'self.generator_model_'+self.data+'()'
self.disc_model = 'self.discriminator_model_'+self.data+'()'
self.loss_func = 'self.loss_'+self.loss+'()'
self.dataset_func = 'self.dataset_'+self.data+'(self.train_data, self.train_labels, self.batch_size)'
# self.show_result_func = 'self.show_result_'+self.data+'(images = predictions, num_epoch=epoch, show = False, save = True, path = path)'
self.FID_func = 'self.FID_'+self.data+'()'
if self.loss == 'FS':
self.gen_model = 'self.generator_model_'+self.data+'_'+self.latent_kind+'()'
self.disc_model = 'self.discriminator_model_'+self.data+'_'+self.latent_kind+'()'
self.EncDec_func = 'self.encoder_model_'+self.data+'_'+self.latent_kind+'()'
self.DEQ_func = 'self.discriminator_ODE()'
''' Define dataset and tf.data function; batch sizing is done here '''
# self.get_data()
# self.create_models()
# self.create_optimizer()
# self.create_load_checkpoint()
def get_data(self):
# with tf.device('/CPU'):
self.train_data, self.train_labels = eval(self.gen_func)
self.num_batches = int(np.floor((self.train_data.shape[0])/self.batch_size))
''' Set PRINT and SAVE iters if 0'''
self.print_step = tf.constant(max(int(self.num_batches/10),1),dtype='int64')
self.save_step = tf.constant(max(int(self.num_batches/2),1),dtype='int64')
self.train_dataset = eval(self.dataset_func)
print("Dataset created - this is it")
print(self.train_dataset)
self.train_dataset_size = self.train_data.shape[0]
print(" Batch Size {}, Final Num Batches {}, Print Step {}, Save Step {}".format(self.batch_size,
self.num_batches,self.print_step, self.save_step))
def get_noise(self,noise_case,batch_size):
noise = tf.random.normal([batch_size, self.noise_dims], mean = self.noise_mean, stddev = self.noise_stddev)
if noise_case == 'test':
if self.data in ['mnist', 'cifar10']:
if self.testcase in ['single', 'few']:
noise_labels = self.number*np.ones((batch_size,1)).astype('int32')
elif self.testcase in ['sharp']:
noise_labels = np.expand_dims(np.random.choice([1,2,4,5,7,9], batch_size), axis = 1).astype('int32')
elif self.testcase in ['even']:
noise_labels = np.expand_dims(np.random.choice([0,2,4,6,8], batch_size), axis = 1).astype('int32')
elif self.testcase in ['odd']:
noise_labels = np.expand_dims(np.random.choice([1,3,5,7,9], batch_size), axis = 1).astype('int32')
elif self.testcase in ['animals']:
noise_labels = np.expand_dims(np.random.choice([2,3,4,5,6,7], batch_size), axis = 1).astype('int32')
elif self.data in ['celeba']:
if self.testcase in ['male', 'fewmale', 'bald', 'hat']:
noise_labels = np.ones((batch_size,1)).astype('int32')
elif self.testcase in ['female', 'fewfemale']:
noise_labels = np.zeros((batch_size,1)).astype('int32')
if noise_case == 'train':
noise_labels = np.random.randint(0, self.num_classes, batch_size)
if self.data == 'celeba':
noise_labels = np.expand_dims(noise_labels, axis = 1)
return noise, noise_labels
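# Illustrative call (not in the original file): for a 'test' draw of 25
# samples, get_noise returns a (25, noise_dims) Gaussian batch plus int32
# labels chosen to match self.testcase, e.g.:
#
#   noise, noise_labels = self.get_noise('test', 25)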
def create_models(self):
with tf.device(self.device):
self.total_count = tf.Variable(0,dtype='int64')
self.generator = eval(self.gen_model)
self.discriminator = eval(self.disc_model)
if self.res_flag == 1:
with open(self.run_loc+'/'+self.run_id+'_Models.txt','a') as fh:
# Pass the file handle in as a lambda function to make it callable
fh.write("\n\n GENERATOR MODEL: \n\n")
self.generator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
fh.write("\n\n DISCRIMINATOR MODEL: \n\n")
self.discriminator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
print("Model Successfully made")
print(self.generator.summary())
print(self.discriminator.summary())
return
def create_load_checkpoint(self):
self.checkpoint = tf.train.Checkpoint(G_optimizer = self.G_optimizer,
D_optimizer = self.D_optimizer,
generator = self.generator,
discriminator = self.discriminator,
total_count = self.total_count)
self.manager = tf.train.CheckpointManager(self.checkpoint, self.checkpoint_dir, max_to_keep=10)
self.checkpoint_prefix = os.path.join(self.checkpoint_dir, "ckpt")
if self.resume:
try:
self.checkpoint.restore(tf.train.latest_checkpoint(self.checkpoint_dir))
except:
print("Checkpoint loading Failed. It could be a model mismatch. H5 files will be loaded instead")
try:
self.generator = tf.keras.models.load_model(self.checkpoint_dir+'/model_generator.h5')
self.discriminator = tf.keras.models.load_model(self.checkpoint_dir+'/model_discriminator.h5')
except:
print("H5 file loading also failed. Please Check the LOG_FOLDER and RUN_ID flags")
print("Model restored...")
print("Starting at Iteration - "+str(self.total_count.numpy()))
print("Starting at Epoch - "+str(int((self.total_count.numpy() * self.batch_size_big) / (self.train_data.shape[0])) + 1))
return
def train(self):
start = int((self.total_count.numpy() * self.batch_size) / (self.train_data.shape[0])) + 1
for epoch in range(start,self.num_epochs):
if self.pbar_flag:
bar = self.pbar(epoch)
start = time.time()
batch_count = tf.Variable(0, dtype='int64')
start_time = 0
for image_batch,labels_batch in self.train_dataset:
self.total_count.assign_add(1)
batch_count.assign_add(1)
start_time = time.time()
with tf.device(self.device):
self.train_step(image_batch,labels_batch)
self.eval_metrics()
train_time = time.time()-start_time
if self.pbar_flag:
bar.postfix[0] = f'{batch_count.numpy():6.0f}'
bar.postfix[1] = f'{self.D_loss.numpy():2.4e}'
bar.postfix[2] = f'{self.G_loss.numpy():2.4e}'
bar.update(self.batch_size.numpy())
if (batch_count.numpy() % self.print_step.numpy()) == 0 or self.total_count <= 2:
if self.res_flag:
self.res_file.write("Epoch {:>3d} Batch {:>3d} in {:>2.4f} sec; D_loss - {:>2.4f}; G_loss - {:>2.4f} \n".format(epoch,batch_count.numpy(),train_time,self.D_loss.numpy(),self.G_loss.numpy()))
self.print_batch_outputs(epoch)
# Save the model every SAVE_ITERS iterations
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
if self.save_all:
self.checkpoint.save(file_prefix = self.checkpoint_prefix)
else:
self.manager.save()
if (self.total_count.numpy() % 1000) == 0:
self.test()
if self.pbar_flag:
bar.close()
del bar
tf.print('Time for epoch {} is {} sec'.format(epoch, time.time()-start))
self.generator.save(self.checkpoint_dir + '/model_generator.h5', overwrite = True)
self.discriminator.save(self.checkpoint_dir + '/model_discriminator.h5', overwrite = True)
def print_batch_outputs(self,epoch):
if self.total_count.numpy() <= 2:
self.generate_and_save_batch(epoch)
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
self.generate_and_save_batch(epoch)
def test(self):
for i in range(10):
path = self.impath+'_Testing_'+str(self.total_count.numpy())+'_TestCase_'+str(i)+'.png'
label = 'TEST SAMPLES AT ITERATION '+str(self.total_count.numpy())
size_figure_grid = self.num_to_print
test_batch_size = size_figure_grid*size_figure_grid
noise, noise_labels = self.get_noise('test',test_batch_size)
if self.label_style == 'base':
# in 'base' label mode, the ACGAN generator takes one-hot labels
noise_labels = tf.one_hot(np.squeeze(noise_labels), depth = self.num_classes)
images = self.generator([noise,noise_labels] , training=False)
if self.data != 'celeba':
images = (images + 1.0)/2.0
self.save_image_batch(images = images,label = label, path = path)
'''***********************************************************************************
********** GAN RumiGAN setup *********************************************************
***********************************************************************************'''
class GAN_RumiGAN(GAN_SRC, GAN_DATA_RumiGAN):
def __init__(self,FLAGS_dict):
''' Set up the GAN_SRC class - defines all GAN architectures'''
GAN_SRC.__init__(self,FLAGS_dict)
''' Set up the GAN_DATA class'''
GAN_DATA_RumiGAN.__init__(self)
def initial_setup(self):
''' Initial setup function: define function names '''
self.gen_func = 'self.gen_func_'+self.data+'()'
self.gen_model = 'self.generator_model_'+self.data+'()'
self.disc_model = 'self.discriminator_model_'+self.data+'()'
self.loss_func = 'self.loss_'+self.loss+'()'
self.dataset_func = 'self.dataset_'+self.data+'(self.train_data_pos, self.train_data_neg, self.batch_size)'
self.show_result_func = 'self.show_result_'+self.data+'(images = predictions, num_epoch=epoch, show = False, save = True, path = path)'
self.FID_func = 'self.FID_'+self.data+'()'
''' Define dataset and tf.data function; batch sizing is done here '''
# self.get_data()
# self.create_models()
# self.create_optimizer()
# self.create_load_checkpoint()
def get_data(self):
with tf.device('/CPU'):
self.train_data_pos, self.train_data_neg = eval(self.gen_func)
self.max_data_size = max(self.train_data_pos.shape[0],self.train_data_neg.shape[0])
self.num_batches = int(np.floor(self.max_data_size/self.batch_size))
''' Set PRINT and SAVE iters if 0'''
self.print_step = tf.constant(max(int(self.num_batches/10),1),dtype='int64')
self.save_step = tf.constant(max(int(self.num_batches/2),1),dtype='int64')
self.train_dataset_pos, self.train_dataset_neg = eval(self.dataset_func)
self.train_dataset_size = self.max_data_size
print(" Batch Size {}, Final Num Batches {}, Print Step {}, Save Step {}".format(self.batch_size,
self.num_batches,self.print_step, self.save_step))
def create_models(self):
with tf.device(self.device):
self.total_count = tf.Variable(0,dtype='int64')
self.generator = eval(self.gen_model)
self.discriminator = eval(self.disc_model)
if self.res_flag == 1:
with open(self.run_loc+'/'+self.run_id+'_Models.txt','a') as fh:
# Pass the file handle in as a lambda function to make it callable
fh.write("\n\n GENERATOR MODEL: \n\n")
self.generator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
fh.write("\n\n DISCRIMINATOR MODEL: \n\n")
self.discriminator.summary(line_length=80, print_fn=lambda x: fh.write(x + '\n'))
print("Model Successfully made")
print(self.generator.summary())
print(self.discriminator.summary())
return
def create_load_checkpoint(self):
self.checkpoint = tf.train.Checkpoint(G_optimizer = self.G_optimizer,
D_optimizer = self.D_optimizer,
generator = self.generator,
discriminator = self.discriminator,
total_count = self.total_count)
self.manager = tf.train.CheckpointManager(self.checkpoint, self.checkpoint_dir, max_to_keep=10)
self.checkpoint_prefix = os.path.join(self.checkpoint_dir, "ckpt")
if self.resume:
try:
self.checkpoint.restore(tf.train.latest_checkpoint(self.checkpoint_dir))
except:
print("Checkpoint loading Failed. It could be a model mismatch. H5 files will be loaded instead")
try:
self.generator = tf.keras.models.load_model(self.checkpoint_dir+'/model_generator.h5')
self.discriminator = tf.keras.models.load_model(self.checkpoint_dir+'/model_discriminator.h5')
except:
print("H5 file loading also failed. Please Check the LOG_FOLDER and RUN_ID flags")
print("Model restored...")
print("Starting at Iteration - "+str(self.total_count.numpy()))
print("Starting at Epoch - "+str(int((self.total_count.numpy() * self.batch_size_big) / (self.train_data.shape[0])) + 1))
return
def train(self):
start = int((self.total_count.numpy() * self.batch_size) / (max(self.train_data_pos.shape[0],self.train_data_neg.shape[0]))) + 1
for epoch in range(start,self.num_epochs):
if self.pbar_flag:
bar = self.pbar(epoch)
start = time.time()
batch_count = tf.Variable(0,dtype='int64')
start_time = 0
for image_batch_pos,image_batch_neg in zip(self.train_dataset_pos,self.train_dataset_neg):
self.total_count.assign_add(1)
batch_count.assign_add(self.Dloop)
start_time = time.time()
with tf.device(self.device):
self.train_step(image_batch_pos,image_batch_neg)
self.eval_metrics()
train_time = time.time()-start_time
if self.pbar_flag:
bar.postfix[0] = f'{batch_count.numpy():6.0f}'
bar.postfix[1] = f'{self.D_loss.numpy():2.4e}'
bar.postfix[2] = f'{self.G_loss.numpy():2.4e}'
bar.update(self.batch_size.numpy())
if (batch_count.numpy() % self.print_step.numpy()) == 0 or self.total_count <= 2:
if self.res_flag:
self.res_file.write("Epoch {:>3d} Batch {:>3d} in {:>2.4f} sec; D_loss - {:>2.4f}; G_loss - {:>2.4f} \n".format(epoch,batch_count.numpy(),train_time,self.D_loss.numpy(),self.G_loss.numpy()))
self.print_batch_outputs(epoch)
# Save the model every SAVE_ITERS iterations
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
if self.save_all:
self.checkpoint.save(file_prefix = self.checkpoint_prefix)
else:
self.manager.save()
if self.pbar_flag:
bar.close()
del bar
tf.print('Time for epoch {} is {} sec'.format(epoch, time.time()-start))
self.generator.save(self.checkpoint_dir + '/model_generator.h5', overwrite = True)
self.discriminator.save(self.checkpoint_dir + '/model_discriminator.h5', overwrite = True)
def print_batch_outputs(self,epoch):
if self.total_count.numpy() <= 2 and 'g' not in self.data:
predictions = self.reals_pos[0:self.num_to_print*self.num_to_print]
if self.data!='celeba':
predictions = (predictions + 1.0)/(2.0)
path = self.impath + 'pos.png'
label = 'POSITIVE CLASS SAMPLES'
self.save_image_batch(images = predictions,label = label, path = path)
# eval(self.show_result_func)
predictions = self.reals_neg[0:self.num_to_print*self.num_to_print]
if self.data!='celeba':
predictions = (predictions + 1.0)/(2.0)
path = self.impath + 'negs.png'
label = "NEGATIVE CLASS SAMPLES"
self.save_image_batch(images = predictions,label = label, path = path)
# eval(self.show_result_func)
if self.total_count.numpy() <= 2:
self.generate_and_save_batch(epoch)
if (self.total_count.numpy() % self.save_step.numpy()) == 0:
self.generate_and_save_batch(epoch)
def test(self):
for i in range(self.num_test_images):
path = self.impath+'_Testing_'+str(self.total_count.numpy())+'_TestCase_'+str(i)+'.png'
label = 'TEST SAMPLES AT ITERATION '+str(self.total_count.numpy())
size_figure_grid = self.num_to_print
test_batch_size = size_figure_grid*size_figure_grid
noise = tf.random.normal([self.batch_size, self.noise_dims],self.noise_mean, self.noise_stddev)
images = self.generator(noise, training=False)
if self.data != 'celeba':
images = (images + 1.0)/2.0
self.save_image_batch(images = images,label = label, path = path) | 40.577739 | 327 | 0.681761 | 3,369 | 22,967 | 4.435738 | 0.093203 | 0.017264 | 0.0356 | 0.033057 | 0.879082 | 0.86958 | 0.862353 | 0.856799 | 0.843081 | 0.829162 | 0 | 0.013156 | 0.146123 | 22,967 | 566 | 328 | 40.577739 | 0.748865 | 0.064832 | 0 | 0.780051 | 0 | 0.012788 | 0.159936 | 0.030189 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063939 | false | 0 | 0.033248 | 0 | 0.122762 | 0.148338 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f7ba0fa86e08d5a31bc9f999c6d3d532222e1834 | 60 | py | Python | test/executor/testModule3.py | hysds/sciflo | f706288405c8eee59a2f883bab3dcb5229615367 | [
"Apache-2.0"
] | null | null | null | test/executor/testModule3.py | hysds/sciflo | f706288405c8eee59a2f883bab3dcb5229615367 | [
"Apache-2.0"
] | null | null | null | test/executor/testModule3.py | hysds/sciflo | f706288405c8eee59a2f883bab3dcb5229615367 | [
"Apache-2.0"
] | 1 | 2019-02-07T01:08:34.000Z | 2019-02-07T01:08:34.000Z | import random
def getRandom():
return random.random()
| 10 | 26 | 0.7 | 7 | 60 | 6 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 60 | 5 | 27 | 12 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
f7bfc601d258d048bfc29d4dbdddaf9baeff1bd7 | 31,785 | py | Python | tests/services/pids/test_pids_service.py | lnielsen/invenio-rdm-records | c8f2c857f28ecb8a478637c585a7d61f318a2b5c | [
"MIT"
] | null | null | null | tests/services/pids/test_pids_service.py | lnielsen/invenio-rdm-records | c8f2c857f28ecb8a478637c585a7d61f318a2b5c | [
"MIT"
] | null | null | null | tests/services/pids/test_pids_service.py | lnielsen/invenio-rdm-records | c8f2c857f28ecb8a478637c585a7d61f318a2b5c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (C) 2021 CERN
#
# Invenio-RDM-Records is free software; you can redistribute it
# and/or modify it under the terms of the MIT License; see LICENSE file for
# more details.
"""PID related tests for Invenio RDM Records.
These tests cover both the PIDsService and the pid-related behaviour of the RDMService.
"""
import pytest
from invenio_pidstore.errors import PIDDoesNotExistError
from invenio_pidstore.models import PIDStatus
from marshmallow import ValidationError
from invenio_rdm_records.proxies import current_rdm_records
@pytest.fixture()
def mock_public_doi(mocker):
def public_doi(self, *args, **kwargs):
# success
pass
mocker.patch("invenio_rdm_records.services.pids.providers.datacite." +
"DataCiteRESTClient.public_doi", public_doi)
@pytest.fixture()
def mock_hide_doi(mocker):
def hide_doi(self, *args, **kwargs):
# success
pass
mocker.patch("invenio_rdm_records.services.pids.providers.datacite." +
"DataCiteRESTClient.hide_doi", hide_doi)
#
# Reserve & Discard
#
def test_resolve_pid(running_app, es_clear, minimal_record):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
# create the draft
draft = service.create(superuser_identity, minimal_record)
# publish the record
record = service.publish(draft.id, superuser_identity)
doi = record["pids"]["doi"]["identifier"]
# test resolution
resolved_record = service.pids.resolve(
id_=doi,
identity=superuser_identity,
scheme="doi"
)
assert resolved_record.id == record.id
assert resolved_record["pids"]["doi"]["identifier"] == doi
def test_resolve_non_existing_pid(running_app, es_clear, minimal_record):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
# create the draft
draft = service.create(superuser_identity, minimal_record)
# publish the record
service.publish(draft.id, superuser_identity)
# test resolution
fake_doi = "10.4321/client.12345-abdce"
with pytest.raises(PIDDoesNotExistError):
service.pids.resolve(
id_=fake_doi,
identity=superuser_identity,
scheme="doi"
)
def test_reserve_pid(running_app, es_clear, minimal_record):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
# create the draft
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
# get the reserved doi from the draft
doi = draft["pids"]["doi"]["identifier"]
# FIXME: remove all occurrences of _ methods, create methods in manager
provider = service.pids.pid_manager._get_provider("doi", "datacite")
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
def test_discard_existing_pid(running_app, es_clear, minimal_record):
# Note: discard is only performed on pids in status NEW; for pids in status
# RESERVED or REGISTERED, the invalidate function must be used
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
# create the draft
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
# get the reserved doi from the draft
doi = draft["pids"]["doi"]["identifier"]
provider = service.pids.pid_manager._get_provider("doi", "datacite")
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
draft = service.pids.discard(draft.id, superuser_identity, "doi")
assert not draft["pids"].get("doi")
with pytest.raises(PIDDoesNotExistError):
pid = provider.get(pid_value=doi)
def test_discard_non_existing_pid(running_app, es_clear, minimal_record):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
# create the draft
draft = service.create(superuser_identity, minimal_record)
with pytest.raises(PIDDoesNotExistError):
service.pids.discard(draft.id, superuser_identity, "doi")
def test_oai_pid_default_created(running_app, es_clear, minimal_record):
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
minimal_record["pids"] = {}
# create the draft
draft = service.create(superuser_identity, minimal_record)
# publish the record
record = service.publish(draft.id, superuser_identity)
published_oai = record.to_dict()["pids"]["oai"]
assert published_oai["identifier"]
assert published_oai["provider"] == "oai"
assert "client" not in published_oai
#
# Workflows
#
# Use cases list:
#
# | Creation
# |--------------------------------------------------|-----------------------------------| # noqa
# | Draft creation from scratch (no pid) | basic_flow | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Publish with no pid (creation of mandatory ones) | basic_flow | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Do not allow duplicates | duplicates | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Fail on empty (invalid) value for external pid | creation_invalid_external_payload | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
#
# | Reservation
# |--------------------------------------------------|-----------------------------------| # noqa
# | Reserve pid | reserve_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Fail to reserve with already existing managed | reserve_fail_existing_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Fail to reserve with already existing external | reserve_fail_existing_external | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
#
# | Update on drafts (prefix test_pids_drafts)
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from external to managed on a draft | updates_external_to_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from external to no pid on a draft | updates_external_to_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from managed to external on a draft | updates_managed_to_external | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from managed to no pid on a draft | updates_managed_to_no_pid | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from no pid to external on a draft | updates_no_pid_to_external | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from no pid to managed on a draft | updates_no_pid_to_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
#
# | Update on records
# | Note that cases with no function assigned are not testable because doi is mandatory and # noqa
# | one will always be assigned on publishing.
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from external to managed on a record | updates_flow_external_to_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from external to no pid on a record | updates_flow_external_to_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from managed to external on a record | updates_managed_to_external_fail | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from managed to no pid on a record | updates_managed_to_no_pid_fail | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from no pid to external on a record | | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Update from no pid to managed on a record | | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
#
# | Publishing
# |--------------------------------------------------|-----------------------------------| # noqa
# | Publish with a managed pid (from reserve) | publish_managed | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Publish with an external pid | publish_external | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
#
# | Deletion
# |--------------------------------------------------|-----------------------------------| # noqa
# | Delete a draft with a managed pid | delete_managed_pid_from_draft | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Delete a draft with an external pid | delete_external_pid_from_draft | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Delete an edit (draft) with a managed pid | delete_managed_pid_from_record | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
# | Delete an edit (draft) with an external pid | delete_external_pid_from_record | # noqa
# |--------------------------------------------------|-----------------------------------| # noqa
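# Illustrative sketch (not part of the original suite): the managed-DOI
# lifecycle that the workflow tests below exercise, condensed into a single
# helper. `service`, `identity` and `data` stand for the objects the fixtures
# in this module provide (records service, superuser identity, minimal record).
def _example_managed_doi_lifecycle(service, identity, data):
    draft = service.create(identity, data)  # draft starts with no pid
    draft = service.pids.create(draft.id, identity, "doi")  # reserve -> NEW
    record = service.publish(draft.id, identity)  # publish -> RESERVED
    # registration with the provider is asynchronous; the pid later becomes
    # REGISTERED once the task completes.
    return record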
# Creation
def test_pids_basic_flow(running_app, es_clear, minimal_record,
mock_public_doi):
    # external doi and mandatory assignment when pids is empty
    # are tested at the resources level
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
minimal_record["pids"] = {}
# create the draft
draft = service.create(superuser_identity, minimal_record)
assert draft["pids"] == {}
# publish the record with a managed PID
record = service.publish(draft.id, superuser_identity)
published_doi = record["pids"]["doi"]
assert published_doi["identifier"]
assert published_doi["provider"] == "datacite" # default
provider = service.pids.pid_manager._get_provider("doi", "datacite")
pid = provider.get(pid_value=published_doi["identifier"])
    assert pid.status == PIDStatus.REGISTERED  # mock_public_doi completes registration eagerly
def test_pids_duplicates(running_app, es_clear, minimal_record):
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create an external pid for an already existing NEW managed one
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
data = minimal_record.copy()
data["pids"]["doi"] = {
"identifier": doi,
"provider": "external"
}
duplicated_draft = service.create(superuser_identity, data)
error_msg = {
'field': 'pids.doi',
'messages': [
f'doi:{doi} already exists.',
'The prefix \'10.1234\' is administrated locally.',
]
}
assert error_msg in duplicated_draft.errors
# create an external pid for an already existing RESERVED managed one
record = service.publish(draft.id, superuser_identity)
duplicated_draft = service.create(superuser_identity, data)
error_msg = {
'field': 'pids.doi',
'messages': [
f'doi:{doi} already exists.',
'The prefix \'10.1234\' is administrated locally.',
]
}
assert error_msg in duplicated_draft.errors
# create an external pid for an already existing external one
data = minimal_record.copy()
doi = "10.4321/test.1234"
data["pids"]["doi"] = {"identifier": doi, "provider": "external"}
draft = service.create(superuser_identity, data)
record = service.publish(draft.id, superuser_identity)
duplicated_draft = service.create(superuser_identity, data)
error_msg = {
'field': 'pids.doi',
'messages': [f'doi:{doi} already exists.']
}
assert error_msg in duplicated_draft.errors
# create a managed pid for an already existing external one
draft = service.create(superuser_identity, minimal_record)
doi = draft["pids"]["doi"]["identifier"]
data = minimal_record.copy()
data["pids"]["doi"] = {"identifier": doi, "provider": "external"}
duplicated_draft = service.create(superuser_identity, data)
error_msg = {
'field': 'pids.doi',
'messages': [f'doi:{doi} already exists.']
}
assert error_msg in duplicated_draft.errors
def test_pids_creation_invalid_external_payload(
running_app, es_clear, minimal_record
):
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
data = minimal_record.copy()
data["pids"]["doi"] = {
"identifier": "",
"provider": "external",
}
draft = service.create(superuser_identity, data)
assert draft.errors == [
{'field': 'pids.doi', 'messages': ['Missing DOI for required field.']}
]
# Reservation
def test_pids_reserve_managed(running_app, es_clear, minimal_record):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
# "reserve" pid
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
def test_pids_reserve_fail_existing_managed(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
# "reserve" pid (first assignation)
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
# reserve again
with pytest.raises(ValidationError):
service.pids.create(draft.id, superuser_identity, "doi")
def test_pids_reserve_fail_existing_external(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
data = minimal_record.copy()
data["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
    draft = service.create(superuser_identity, data)
    # fail to reserve: an external doi is already set
with pytest.raises(ValidationError):
service.pids.create(draft.id, superuser_identity, "doi")
# Update on drafts
def test_pids_drafts_updates_external_to_managed(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
data = minimal_record.copy()
data["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
    draft = service.create(superuser_identity, data)
with pytest.raises(PIDDoesNotExistError): # pid should not exist
provider.get(
pid_value=draft["pids"]["doi"]["identifier"],
pid_provider="external"
)
# remove and reserve a managed one
draft["pids"].pop("doi")
draft = service.update_draft(
id_=draft.id, identity=superuser_identity, data=draft.data)
assert not draft["pids"].get("doi")
    # managed pids need to be created (reserved) first
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert provider.get(pid_value=doi).status == PIDStatus.NEW
def test_pids_drafts_updates_managed_to_external(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert provider.get(pid_value=doi).status == PIDStatus.NEW
    # remove the doi: requires an explicit delete action (the X button in the UI)
draft = service.pids.discard(draft.id, superuser_identity, "doi")
# replace by external
draft["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
draft = service.update_draft(
id_=draft.id, identity=superuser_identity, data=draft.data)
assert draft["pids"]["doi"]["identifier"] == "10.4321/dummy.1234"
assert draft["pids"]["doi"]["provider"] == "external"
with pytest.raises(PIDDoesNotExistError): # pid should not exist
provider.get(
pid_value=draft["pids"]["doi"]["identifier"],
pid_provider="external"
)
with pytest.raises(PIDDoesNotExistError): # original doi was also deleted
provider.get(pid_value=doi)
def test_pids_drafts_updates_managed_to_no_pid(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert provider.get(pid_value=doi).status == PIDStatus.NEW
    # remove the doi: requires an explicit delete action (the X button in the UI)
draft = service.pids.discard(draft.id, superuser_identity, "doi")
assert not draft["pids"].get("doi")
with pytest.raises(PIDDoesNotExistError): # original doi was also deleted
provider.get(pid_value=doi)
def test_pids_drafts_updates_no_pid_to_external(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
assert draft["pids"] == {}
# add external
draft["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
draft = service.update_draft(
id_=draft.id, identity=superuser_identity, data=draft.data)
assert draft["pids"]["doi"]["identifier"] == "10.4321/dummy.1234"
assert draft["pids"]["doi"]["provider"] == "external"
with pytest.raises(PIDDoesNotExistError): # pid should not exist
provider.get(
pid_value=draft["pids"]["doi"]["identifier"],
pid_provider="external"
)
def test_pids_drafts_updates_no_pid_to_managed(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
assert draft["pids"] == {}
# add managed
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert provider.get(pid_value=doi).status == PIDStatus.NEW
# Update on records
def _create_and_publish_external(service, provider, identity, data):
"""Creates a draft with a managed doi and publishes it."""
# create the draft
data["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
draft = service.create(identity, data)
# publish and check the doi is in pidstore
record = service.publish(draft.id, identity)
pid = provider.get(pid_value="10.4321/dummy.1234")
assert pid.status == PIDStatus.REGISTERED
return record
def _create_and_publish_managed(service, provider, identity, data):
"""Creates a draft with a managed doi and publishes it."""
# create the draft
draft = service.create(identity, data)
# "reserve" pid if not given
draft = service.pids.create(draft.id, identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
# publish and check the doi is in pidstore
record = service.publish(draft.id, identity)
assert provider.get(pid_value=doi).status == PIDStatus.RESERVED
return record
def test_pids_records_updates_external_to_managed(
running_app, es_clear, minimal_record, identity_simple
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
record = _create_and_publish_external(
service, provider, superuser_identity, minimal_record)
# create draft
draft = service.edit(record.id, superuser_identity)
    # removing an external pid is allowed
old_doi = draft["pids"].pop("doi")
draft = service.update_draft(
id_=draft.id, identity=superuser_identity, data=draft.data)
assert not draft["pids"].get("doi")
# add a new managed doi
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.NEW
# publish with managed doi
record = service.publish(draft.id, superuser_identity)
pid = provider.get(pid_value=doi)
assert pid.status == PIDStatus.RESERVED
# the old external should be completely deleted
    with pytest.raises(PIDDoesNotExistError):
        provider.get(
            pid_value=old_doi["identifier"],
            pid_provider=old_doi["provider"]
        )
def test_pids_records_updates_managed_to_external_fail(
running_app, es_clear, minimal_record, authenticated_identity,
mock_hide_doi
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
record = _create_and_publish_managed(
service, provider, authenticated_identity, minimal_record)
# create draft
draft = service.edit(record.id, authenticated_identity)
# fail to remove doi due to lack of permissions (validation error)
with pytest.raises(ValidationError):
service.pids.discard(draft.id, authenticated_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert doi
assert provider.get(pid_value=doi).status == PIDStatus.RESERVED
def test_pids_records_updates_managed_to_no_pid_fail(
running_app, es_clear, minimal_record, authenticated_identity
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
record = _create_and_publish_managed(
service, provider, authenticated_identity, minimal_record)
# create draft
draft = service.edit(record.id, authenticated_identity)
# fail to remove doi due to lack of permissions (validation error)
with pytest.raises(ValidationError):
service.pids.discard(draft.id, authenticated_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert doi
assert provider.get(pid_value=doi).status == PIDStatus.RESERVED
# Publishing
def test_pids_publish_managed(running_app, es_clear, minimal_record):
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
doi = draft["pids"]["doi"]["identifier"]
assert provider.get(pid_value=doi).status == PIDStatus.NEW
# publish
record = service.publish(draft.id, superuser_identity)
# registration is async
assert provider.get(pid_value=doi).status == PIDStatus.RESERVED
def test_pids_publish_external(running_app, es_clear, minimal_record):
superuser_identity = running_app.superuser_identity
service = current_rdm_records.records_service
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create the draft
data = minimal_record.copy()
data["pids"]["doi"] = {
"identifier": "10.4321/dummy.1234",
"provider": "external"
}
draft = service.create(superuser_identity, data)
with pytest.raises(PIDDoesNotExistError): # pid should not exist
provider.get(
pid_value=draft["pids"]["doi"]["identifier"],
pid_provider="external"
)
# publish
record = service.publish(draft.id, superuser_identity)
pid = provider.get(
pid_value=record["pids"]["doi"]["identifier"],
pid_provider="external"
)
assert pid.pid_value == record["pids"]["doi"]["identifier"]
    # external pids are marked as registered in pidstore at publish time
assert pid.status == PIDStatus.REGISTERED
# Deletion
def test_pids_delete_external_pid_from_draft(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create draft
data = minimal_record.copy()
data["pids"] = {
"doi": {"identifier": "10.4321/dummy.1234", "provider": "external"}
}
draft = service.create(superuser_identity, data)
# delete draft
assert service.delete_draft(draft.id, superuser_identity)
with pytest.raises(PIDDoesNotExistError): # pid should not exist
provider.get(
pid_value=data["pids"]["doi"]["identifier"],
pid_provider="external"
)
def test_pids_delete_managed_pid_from_draft(
running_app, es_clear, minimal_record
):
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create draft and doi
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
pid = provider.get(pid_value=draft["pids"]["doi"]["identifier"])
assert pid.status == PIDStatus.NEW
assert pid.pid_value == draft["pids"]["doi"]["identifier"]
# delete draft
assert service.delete_draft(draft.id, superuser_identity)
with pytest.raises(PIDDoesNotExistError): # pid should not exist
        provider.get(pid_value=pid.pid_value)
def test_pids_delete_external_pid_from_record(
running_app, es_clear, minimal_record
):
    # Deletes a draft created from a published record; the pid must survive.
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create draft
data = minimal_record.copy()
data["pids"] = {
"doi": {"identifier": "10.4321/dummy.1234", "provider": "external"}
}
draft = service.create(superuser_identity, data)
# publish
record = service.publish(draft.id, superuser_identity)
pid = provider.get(
pid_value=record["pids"]["doi"]["identifier"],
pid_provider=record["pids"]["doi"]["provider"]
)
assert pid.status == PIDStatus.REGISTERED
assert pid.pid_value == record["pids"]["doi"]["identifier"]
# create new draft
draft = service.edit(record.id, superuser_identity)
pid = provider.get(
pid_value=draft["pids"]["doi"]["identifier"],
pid_provider=draft["pids"]["doi"]["provider"]
)
assert pid.status == PIDStatus.REGISTERED
assert pid.pid_value == draft["pids"]["doi"]["identifier"]
# delete draft (should not delete pid since it is part of an active record)
assert service.delete_draft(draft.id, superuser_identity)
pid = provider.get(
pid_value=record["pids"]["doi"]["identifier"],
pid_provider=record["pids"]["doi"]["provider"]
)
assert pid.status == PIDStatus.REGISTERED
assert pid.pid_value == record["pids"]["doi"]["identifier"]
def test_pids_delete_managed_pid_from_record(
running_app, es_clear, minimal_record
):
    # Deletes a draft created from a published record; the pid must survive.
service = current_rdm_records.records_service
superuser_identity = running_app.superuser_identity
provider = service.pids.pid_manager._get_provider("doi", "datacite")
# create draft and managed doi
draft = service.create(superuser_identity, minimal_record)
draft = service.pids.create(draft.id, superuser_identity, "doi")
# publish
record = service.publish(draft.id, superuser_identity)
pid = provider.get(pid_value=record["pids"]["doi"]["identifier"])
assert pid.status == PIDStatus.RESERVED
assert pid.pid_value == record["pids"]["doi"]["identifier"]
# create new draft
draft = service.edit(record.id, superuser_identity)
pid = provider.get(pid_value=draft["pids"]["doi"]["identifier"])
assert pid.status == PIDStatus.RESERVED
assert pid.pid_value == draft["pids"]["doi"]["identifier"]
# delete draft (should not delete pid since it is part of an active record)
assert service.delete_draft(draft.id, superuser_identity)
pid = provider.get(pid_value=record["pids"]["doi"]["identifier"])
assert pid.status == PIDStatus.RESERVED
assert pid.pid_value == record["pids"]["doi"]["identifier"]
#
# Versioning
#
def test_pids_versioning():
# TODO: implement
# versioning flow
# create draft and publish
# concept doi + doi
# new version + publish
# concept doi still the same, doi is different
pass
| 39.880803 | 102 | 0.631682 | 3,528 | 31,785 | 5.475907 | 0.068878 | 0.109995 | 0.045758 | 0.035406 | 0.849837 | 0.836068 | 0.810239 | 0.786014 | 0.755267 | 0.730473 | 0 | 0.005705 | 0.183829 | 31,785 | 796 | 103 | 39.930905 | 0.738995 | 0.276955 | 0 | 0.768116 | 0 | 0 | 0.105541 | 0.008274 | 0 | 0 | 0 | 0.001256 | 0.132505 | 1 | 0.068323 | false | 0.006211 | 0.010352 | 0 | 0.082816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f7f64265fd2a202daf0cd6a120015d70562e8ab7 | 29,414 | py | Python | pyaff4/encryptedstream_test.py | aff4/python-aff4 | 94a3583475c07ad92147f70ff8a19e9e36f12aa9 | [
"Apache-2.0"
] | 34 | 2017-10-21T16:12:58.000Z | 2022-02-18T00:37:08.000Z | pyaff4/encryptedstream_test.py | aff4/python-aff4 | 94a3583475c07ad92147f70ff8a19e9e36f12aa9 | [
"Apache-2.0"
] | 23 | 2017-11-06T17:01:04.000Z | 2021-12-26T14:09:38.000Z | pyaff4/encryptedstream_test.py | aff4/python-aff4 | 94a3583475c07ad92147f70ff8a19e9e36f12aa9 | [
"Apache-2.0"
] | 17 | 2019-02-11T00:47:02.000Z | 2022-03-14T02:52:04.000Z | # Copyright 2019 Schatz Forensic Pty Ltd All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
#
# Author: Bradley L Schatz bradley@evimetry.com
from __future__ import unicode_literals
import tempfile
from future import standard_library
standard_library.install_aliases()
import os
import unittest
from pyaff4 import aff4_image
from pyaff4 import data_store
from pyaff4 import lexicon
from pyaff4 import rdfvalue
from pyaff4 import zip
from pyaff4 import container
from pyaff4 import keybag
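# Illustrative sketch (not part of the original suite): the write half of the
# encrypted-stream round trip that every test below performs. The arguments
# are assumed to be set up the way the tests do it (a resolver, volume and
# image URNs, a password and a bytes payload).
def _example_encrypted_write(resolver, volume_urn, image_urn, password, payload):
    kb = keybag.PasswordWrappedKeyBag.create(password)
    with aff4_image.AFF4Image.NewAFF4Image(
            resolver, image_urn, volume_urn,
            type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
        image.setKeyBag(kb)  # store the wrapped key with the stream
        image.setKey(kb.unwrap_key(password))  # unwrap the key for this session
        image.Write(payload)  # chunks are encrypted transparently on write
    return kb  # the same keybag is needed to read the stream back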
class AFF4EncryptedStreamTest(unittest.TestCase):
filename = tempfile.gettempdir() + u"/aff4_encryptedstream_test.zip"
filename_urn = rdfvalue.URN.FromFileName(filename)
image_name = "image.dd"
    def setUp(self):
        try:
            os.unlink(self.filename)
        except (IOError, OSError):
            pass
    def tearDown(self):
        try:
            os.unlink(self.filename)
        except (IOError, OSError):
            pass
#@unittest.skip
def testSmallWriteNoEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.chunk_size = 5
image.chunks_per_segment = 2
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcd")
self.assertEquals(b"abcd", image.Read(4))
                    image.SeekRead(0, 0)
self.assertEquals(b"abcd", image.Read(5))
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(4, image.Size())
self.assertEqual(b"abcd", image.ReadAll())
#@unittest.skip
def testChunkSizeWriteNoEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.chunk_size = 5
image.chunks_per_segment = 2
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcda")
self.assertEquals(b"abcda", image.Read(5))
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(5, image.Size())
self.assertEqual(b"abcda", image.ReadAll())
#@unittest.skip
def testChunkSizePlusOneWriteNoEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.chunk_size = 5
image.chunks_per_segment = 2
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcdaa")
self.assertEquals(b"abcdaa", image.Read(6))
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(6, image.Size())
self.assertEqual(b"abcdaa", image.ReadAll())
#@unittest.skip
def testBevySizeWriteNoEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.chunk_size = 5
image.chunks_per_segment = 2
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcdeabcde")
                    image.SeekRead(5, 0)
self.assertEqual(b"abcde", image.Read(5))
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(10, image.Size())
self.assertEqual(b"abcdeabcde", image.ReadAll())
#@unittest.skip
def testBevySizePlusOneWriteNoEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.chunk_size = 5
image.chunks_per_segment = 2
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcdeabcdea")
image.SeekRead(5, 0)
self.assertEqual(b"abcdea", image.Read(6))
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
                resolver.Set(lexicon.transient_graph, self.image_urn_2,
                             lexicon.AFF4_STORED, rdfvalue.URN(zip_file.urn))
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(11, image.Size())
self.assertEqual(b"abcdeabcdea", image.ReadAll())
#@unittest.skip
def testSmallWriteEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = False
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b"abcd")
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = False
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(4, image.Size())
self.assertEqual(b"abcd", image.ReadAll())
#@unittest.skip
def testChunkSizeWriteEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
txt = b'a' * 512
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = False
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(txt)
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = False
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(512, image.Size())
self.assertEqual(txt, image.ReadAll())
#@unittest.skip
def testChunkSizePlusOneWriteEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
txt = b'a' * 512 + b'b'
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = False
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(txt)
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = False
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(513, image.Size())
self.assertEqual(txt, image.ReadAll())
#@unittest.skip
def testBevySizeWriteEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
txt = b'a' * 512 * 1024
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(txt)
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = False
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(512*1024, image.Size())
self.assertEqual(txt, image.ReadAll())
#@unittest.skip
def testBevySizePlusOneWriteEncryption(self):
version = container.Version(0, 1, "pyaff4")
kb = keybag.PasswordWrappedKeyBag.create("secret")
txt = b'a' * 512 * 1024 + b'b'
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = False
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(txt)
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = False
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(512*1024+1, image.Size())
self.assertEqual(txt, image.ReadAll())
#@unittest.skip
def testAppendOfEncryptedOutOfOrder(self):
version = container.Version(0, 1, "pyaff4")
print(self.filename)
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
                    image.SeekWrite(512 * 1024 + 2, 0)
image.Write(b'b' * 512)
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("random"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.SeekWrite(0, 0)
image.Write(b'b')
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(1024*512+2+512, image.Size())
                    contents = image.ReadAll()
                    expected = b'b' + b'\0' * (512 * 1024 - 1) + b'\0' * 2 + b'b' * 512
                    self.assertEquals(expected, contents)
#@unittest.skip
def testAppendOfEncryptedSingleChunkPlusOne(self):
version = container.Version(0, 1, "pyaff4")
print(self.filename)
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b'a' * 512)
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("random"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.SeekWrite(512, 0)
image.Write(b'b')
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(513, image.Size())
self.assertEquals(b'a'*512 + b'b', image.ReadAll())
#@unittest.skip
def testAppendOfEncryptedSingleChunk(self):
version = container.Version(0, 1, "pyaff4")
print(self.filename)
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b'a' * 512)
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("random"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b'b')
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(512, image.Size())
self.assertEquals(b'b' + b'a'*511, image.ReadAll())
#@unittest.skip
def testAppendOfEncryptedSubChunk(self):
version = container.Version(0, 1, "pyaff4")
print(self.filename)
kb = keybag.PasswordWrappedKeyBag.create("secret")
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("truncate"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b'a' * 2)
with data_store.MemoryDataStore() as resolver:
resolver.Set(lexicon.transient_graph, self.filename_urn, lexicon.AFF4_STREAM_WRITE_MODE,
rdfvalue.XSDString("random"))
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
self.volume_urn = zip_file.urn
self.image_urn = self.volume_urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with aff4_image.AFF4Image.NewAFF4Image(
resolver, self.image_urn_2, self.volume_urn, type=lexicon.AFF4_ENCRYPTEDSTREAM_TYPE) as image:
image.DEBUG = True
image.setKeyBag(kb)
image.setKey(kb.unwrap_key("secret"))
image.Write(b'b')
with data_store.MemoryDataStore() as resolver:
with zip.ZipFile.NewZipFile(resolver, version, self.filename_urn) as zip_file:
image_urn = zip_file.urn.Append(self.image_name)
self.image_urn_2 = self.image_urn.Append("2")
with resolver.AFF4FactoryOpen(self.image_urn_2) as image:
image.setKeyBag(kb)
image.DEBUG = True
image.setKey(kb.unwrap_key("secret"))
self.assertEquals(2, image.Size())
self.assertEquals(b'ba', image.ReadAll())
if __name__ == '__main__':
#logging.getLogger().setLevel(logging.DEBUG)
unittest.main()
| 46.98722 | 114 | 0.597539 | 3,305 | 29,414 | 5.129803 | 0.061422 | 0.077504 | 0.080689 | 0.049074 | 0.892002 | 0.870473 | 0.867465 | 0.863808 | 0.862864 | 0.862864 | 0 | 0.020441 | 0.308119 | 29,414 | 625 | 115 | 47.0624 | 0.812638 | 0.029374 | 0 | 0.843299 | 0 | 0 | 0.025106 | 0.001052 | 0 | 0 | 0 | 0 | 0.070103 | 1 | 0.03299 | false | 0.037113 | 0.028866 | 0 | 0.070103 | 0.008247 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
79153de5163626c15c6cb7bf7ad8d808e645e901 | 4,790 | py | Python | src/tests/python_tests/particle_system_data_tests.py | Whitemane/fluid-engine-dev | 93c3e942182cd73d54b74b7c2a283854e79911be | [
"MIT"
] | 1 | 2018-04-16T13:09:03.000Z | 2018-04-16T13:09:03.000Z | src/tests/python_tests/particle_system_data_tests.py | kentbarber/fluid-engine-dev | fb2256badb80c04702db536b63b14754699038ca | [
"MIT"
] | null | null | null | src/tests/python_tests/particle_system_data_tests.py | kentbarber/fluid-engine-dev | fb2256badb80c04702db536b63b14754699038ca | [
"MIT"
] | null | null | null | """
Copyright (c) 2018 Doyub Kim
I am making my contributions/submissions to this project solely in my personal
capacity and am not conveying any rights to any intellectual property of any
third parties.
"""
import pyjet
import unittest
import numpy as np
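# Illustrative sketch (not part of the original suite): the basic
# ParticleSystemData workflow that the tests below exercise.
def _example_particle_data():
    ps = pyjet.ParticleSystemData2()
    ps.resize(12)  # allocate storage for 12 particles
    idx = ps.addScalarData(2.0)  # add a per-particle scalar channel filled with 2.0
    return np.array(ps.scalarDataAt(idx))  # view the channel as a numpy array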
class ParticleSystemData2Tests(unittest.TestCase):
def testInit(self):
ps = pyjet.ParticleSystemData2()
self.assertEqual(ps.numberOfParticles, 0)
ps2 = pyjet.ParticleSystemData2(100)
self.assertEqual(ps2.numberOfParticles, 100)
def testResize(self):
ps = pyjet.ParticleSystemData2()
ps.resize(12)
self.assertEqual(ps.numberOfParticles, 12)
def testAddScalarData(self):
ps = pyjet.ParticleSystemData2()
ps.resize(12)
a0 = ps.addScalarData(2.0)
a1 = ps.addScalarData(9.0)
self.assertEqual(ps.numberOfParticles, 12)
self.assertEqual(a0, 0)
self.assertEqual(a1, 1)
as0 = np.array(ps.scalarDataAt(a0))
for val in as0:
self.assertEqual(val, 2.0)
as1 = np.array(ps.scalarDataAt(a1))
for val in as1:
self.assertEqual(val, 9.0)
def testAddVectorData(self):
ps = pyjet.ParticleSystemData2()
ps.resize(12)
a0 = ps.addVectorData((2.0, 4.0))
a1 = ps.addVectorData((9.0, -2.0))
self.assertEqual(ps.numberOfParticles, 12)
self.assertEqual(a0, 3)
self.assertEqual(a1, 4)
as0 = np.array(ps.vectorDataAt(a0))
for val in as0:
self.assertEqual(val.tolist(), [2.0, 4.0])
as1 = np.array(ps.vectorDataAt(a1))
for val in as1:
self.assertEqual(val.tolist(), [9.0, -2.0])
def testAddParticles(self):
ps = pyjet.ParticleSystemData2()
ps.resize(12)
ps.addParticles([(1.0, 2.0), (4.0, 5.0)],
[(7.0, 8.0), (8.0, 7.0)],
[(5.0, 4.0), (2.0, 1.0)])
self.assertEqual(ps.numberOfParticles, 14)
p = np.array(ps.positions)
v = np.array(ps.velocities)
f = np.array(ps.forces)
self.assertEqual([1.0, 2.0], p[12].tolist())
self.assertEqual([4.0, 5.0], p[13].tolist())
self.assertEqual([7.0, 8.0], v[12].tolist())
self.assertEqual([8.0, 7.0], v[13].tolist())
self.assertEqual([5.0, 4.0], f[12].tolist())
self.assertEqual([2.0, 1.0], f[13].tolist())
class ParticleSystemData3Tests(unittest.TestCase):
def testInit(self):
ps = pyjet.ParticleSystemData3()
self.assertEqual(ps.numberOfParticles, 0)
ps2 = pyjet.ParticleSystemData3(100)
self.assertEqual(ps2.numberOfParticles, 100)
def testResize(self):
ps = pyjet.ParticleSystemData3()
ps.resize(12)
self.assertEqual(ps.numberOfParticles, 12)
def testAddScalarData(self):
ps = pyjet.ParticleSystemData3()
ps.resize(12)
a0 = ps.addScalarData(2.0)
a1 = ps.addScalarData(9.0)
self.assertEqual(ps.numberOfParticles, 12)
self.assertEqual(a0, 0)
self.assertEqual(a1, 1)
as0 = np.array(ps.scalarDataAt(a0))
for val in as0:
self.assertEqual(val, 2.0)
as1 = np.array(ps.scalarDataAt(a1))
for val in as1:
self.assertEqual(val, 9.0)
def testAddVectorData(self):
ps = pyjet.ParticleSystemData3()
ps.resize(12)
a0 = ps.addVectorData((2.0, 4.0, -1.0))
a1 = ps.addVectorData((9.0, -2.0, 5.0))
self.assertEqual(ps.numberOfParticles, 12)
self.assertEqual(a0, 3)
self.assertEqual(a1, 4)
as0 = np.array(ps.vectorDataAt(a0))
for val in as0:
self.assertEqual(val.tolist(), [2.0, 4.0, -1.0])
as1 = np.array(ps.vectorDataAt(a1))
for val in as1:
self.assertEqual(val.tolist(), [9.0, -2.0, 5.0])
def testAddParticles(self):
ps = pyjet.ParticleSystemData3()
ps.resize(12)
ps.addParticles([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
[(7.0, 8.0, 9.0), (8.0, 7.0, 6.0)],
[(5.0, 4.0, 3.0), (2.0, 1.0, 3.0)])
self.assertEqual(ps.numberOfParticles, 14)
p = np.array(ps.positions)
v = np.array(ps.velocities)
f = np.array(ps.forces)
self.assertEqual([1.0, 2.0, 3.0], p[12].tolist())
self.assertEqual([4.0, 5.0, 6.0], p[13].tolist())
self.assertEqual([7.0, 8.0, 9.0], v[12].tolist())
self.assertEqual([8.0, 7.0, 6.0], v[13].tolist())
self.assertEqual([5.0, 4.0, 3.0], f[12].tolist())
self.assertEqual([2.0, 1.0, 3.0], f[13].tolist())
def main():
pyjet.Logging.mute()
unittest.main()
if __name__ == '__main__':
main()
| 29.567901 | 78 | 0.571399 | 646 | 4,790 | 4.224458 | 0.139319 | 0.219861 | 0.046171 | 0.124588 | 0.851594 | 0.844632 | 0.814951 | 0.719678 | 0.704287 | 0.655918 | 0 | 0.090857 | 0.273904 | 4,790 | 161 | 79 | 29.751553 | 0.69379 | 0.041754 | 0 | 0.672414 | 0 | 0 | 0.001746 | 0 | 0 | 0 | 0 | 0 | 0.344828 | 1 | 0.094828 | false | 0 | 0.025862 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f70c3267c713a80d0e3f5d9f83fe64fcabba8b3f | 3,941 | py | Python | python/317_shortest_distance_from_all_buildings.py | liaison/LeetCode | 8b10a1f6bbeb3ebfda99248994f7c325140ee2fd | [
"MIT"
] | 17 | 2016-03-01T22:40:53.000Z | 2021-04-19T02:15:03.000Z | python/317_shortest_distance_from_all_buildings.py | liaison/LeetCode | 8b10a1f6bbeb3ebfda99248994f7c325140ee2fd | [
"MIT"
] | null | null | null | python/317_shortest_distance_from_all_buildings.py | liaison/LeetCode | 8b10a1f6bbeb3ebfda99248994f7c325140ee2fd | [
"MIT"
] | 3 | 2019-03-07T03:48:43.000Z | 2020-04-05T01:11:36.000Z |
class SolutionTLE:
def shortestDistance(self, grid: List[List[int]]) -> int:
buildings = []
rows, cols = len(grid), len(grid[0])
for row in range(rows):
for col in range(cols):
if grid[row][col] == 1:
buildings.append((row, col))
def bfs(start):
row, col = start
visited = set()
queue = deque([(row, col, 0)])
distance = {}
while queue:
curr_row, curr_col, steps = queue.popleft()
for offset_row, offset_col in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
next_row, next_col = curr_row + offset_row, curr_col + offset_col
if next_row < 0 or next_row >= rows \
or next_col < 0 or next_col >= cols:
continue
if grid[next_row][next_col] == 0:
if (next_row, next_col) not in visited:
visited.add((next_row, next_col))
distance[(next_row, next_col)] = steps + 1
queue.append((next_row, next_col, steps + 1))
return distance
total_distance = {}
for start in buildings:
distances = bfs(start)
for land, min_distance in distances.items():
if land not in total_distance:
total_distance[land] = (0, 0)
curr_count, curr_distance = total_distance[land]
total_distance[land] = (curr_count + 1, curr_distance + min_distance)
total_buildings = len(buildings)
min_distance_sum = float('inf')
for count, min_distance in total_distance.values():
if count == total_buildings:
min_distance_sum = min(min_distance_sum, min_distance)
return min_distance_sum if min_distance_sum != float('inf') else -1
class SolutionArray:
    """Same BFS approach, but visited cells are tracked with a 2-D boolean
    array instead of a set, which lowers the per-step overhead of each BFS.
    """
    def shortestDistance(self, grid: List[List[int]]) -> int:
buildings = []
rows, cols = len(grid), len(grid[0])
for row in range(rows):
for col in range(cols):
if grid[row][col] == 1:
buildings.append((row, col))
def bfs(start):
row, col = start
visited = [[False]*cols for _ in range(rows)]
queue = deque([(row, col, 0)])
distance = {}
while queue:
curr_row, curr_col, steps = queue.popleft()
for offset_row, offset_col in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
next_row, next_col = curr_row + offset_row, curr_col + offset_col
if next_row < 0 or next_row >= rows \
or next_col < 0 or next_col >= cols:
continue
if grid[next_row][next_col] == 0:
if not visited[next_row][next_col]:
visited[next_row][next_col] = True
distance[(next_row, next_col)] = steps + 1
queue.append((next_row, next_col, steps + 1))
return distance
total_distance = {}
for start in buildings:
distances = bfs(start)
for land, min_distance in distances.items():
if land not in total_distance:
total_distance[land] = (0, 0)
curr_count, curr_distance = total_distance[land]
total_distance[land] = (curr_count + 1, curr_distance + min_distance)
total_buildings = len(buildings)
min_distance_sum = float('inf')
for count, min_distance in total_distance.values():
if count == total_buildings:
min_distance_sum = min(min_distance_sum, min_distance)
return min_distance_sum if min_distance_sum != float('inf') else -1
| 36.831776 | 85 | 0.511038 | 458 | 3,941 | 4.176856 | 0.115721 | 0.103502 | 0.069002 | 0.08782 | 0.93884 | 0.916884 | 0.916884 | 0.916884 | 0.916884 | 0.916884 | 0 | 0.016618 | 0.389241 | 3,941 | 106 | 86 | 37.179245 | 0.778147 | 0 | 0 | 0.9 | 0 | 0 | 0.00305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f729c48d7065946b20b2e2dc9ba72d301fe58164 | 8,819 | py | Python | petibmpy/createxdmf.py | mesnardo/petibmpy | 3ab67cba8d170dcffb4ac7b6b35abd04145dbaf9 | [
"BSD-3-Clause"
] | 1 | 2020-08-08T13:37:28.000Z | 2020-08-08T13:37:28.000Z | petibmpy/createxdmf.py | mesnardo/petibmpy | 3ab67cba8d170dcffb4ac7b6b35abd04145dbaf9 | [
"BSD-3-Clause"
] | null | null | null | petibmpy/createxdmf.py | mesnardo/petibmpy | 3ab67cba8d170dcffb4ac7b6b35abd04145dbaf9 | [
"BSD-3-Clause"
] | null | null | null | """Module to create a XDMF file for a PetIBM field variable."""
import sys
import pathlib

from lxml import etree

from .grid import read_grid_hdf5


def write_xdmf(outpath, datadir, gridpath, name,
               nstart=None, nt=None, nsave=None,
               states=None, times=None):
    """Write an XDMF file to read the solution of a PetIBM variable.

    Parameters
    ----------
    outpath : pathlib.Path object
        Path of the XDMF file to create.
    datadir : pathlib.Path object
        Data directory.
    gridpath : pathlib.Path object
        Path of the file containing the gridline coordinates.
    name : string
        Name of the field variable.
    nstart : integer (optional)
        Starting time step; default: None.
    nt : integer (optional)
        Number of time steps; default: None.
    nsave : integer (optional)
        Frequency of saving in number of time steps; default: None.
    states : list of integers (optional)
        The list of time-step indices to consider in the XDMF file;
        default: None.
    times : list of floats (optional)
        The list of time values; default: None.

    """
    # Initialize XDMF file.
    xdmf = etree.Element('Xdmf', Version='2.2')
    info = etree.SubElement(xdmf, 'Information',
                            Name='MetaData',
                            Value='ID-23454')
    domain = etree.SubElement(xdmf, 'Domain')
    grid_time_series = etree.SubElement(domain, 'Grid',
                                        Name='TimeSeries',
                                        GridType='Collection',
                                        CollectionType='Temporal')
    # Read grid to get dimension and number of points.
    grid = read_grid_hdf5(gridpath, name)
    dim = len(grid)
    topology_type = '{}DRectMesh'.format(dim)
    geometry_type = 'VXVY' + (dim == 3) * 'VZ'
    components = ('x', 'y', 'z')[:dim]
    gridsize = [len(line) for line in grid]
    number_of_elements = ' '.join(str(n) for n in gridsize[::-1])
    precision = '8'
    # Get time-step indices and time values.
    if states is None:
        states = list(range(nstart, nstart + nt + 1, nsave))
    # Generate the time series.
    for i, state in enumerate(states):
        grid = etree.SubElement(grid_time_series, 'Grid',
                                Name='Grid',
                                GridType='Uniform')
        if times is not None:
            time_value = '{:.6f}'.format(times[i])
        else:
            time_value = '{:0>7}'.format(state)
        time = etree.SubElement(grid, 'Time',
                                Value=time_value)
        topology = etree.SubElement(grid, 'Topology',
                                    TopologyType=topology_type,
                                    NumberOfElements=number_of_elements)
        geometry = etree.SubElement(grid, 'Geometry',
                                    GeometryType=geometry_type)
        # Create XDMF block for the grid. (Use a loop for code re-use.)
        for component, n in zip(components, gridsize):
            dataitem = etree.SubElement(geometry, 'DataItem',
                                        Dimensions=str(n),
                                        NumberType='Float',
                                        Precision=precision,
                                        Format='HDF')
            dataitem.text = ':/'.join([str(gridpath), name + '/' + component])
        # Create XDMF block for the scalar field variable.
        attribute = etree.SubElement(grid, 'Attribute',
                                     Name=name,
                                     AttributeType='Scalar',
                                     Center='Node')
        dataitem = etree.SubElement(attribute, 'DataItem',
                                    Dimensions=number_of_elements,
                                    NumberType='Float',
                                    Precision=precision,
                                    Format='HDF')
        filepath = datadir / '{:0>7}.h5'.format(state)
        dataitem.text = ':/'.join([str(filepath), name])
    # Write XDMF file.
    tree = etree.ElementTree(xdmf)
    tree.write(str(outpath), pretty_print=True, xml_declaration=True)
    return
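# Hedged usage sketch (added for illustration; the paths and the variable name
# 'p' are assumptions, not part of the original module):
#
#     import pathlib
#     datadir = pathlib.Path('solution')
#     write_xdmf(pathlib.Path('p.xmf'), datadir, pathlib.Path('grid.h5'), 'p',
#                nstart=0, nt=1000, nsave=100)
#
# With these arguments, the XDMF file indexes solution/0000000.h5 through
# solution/0001000.h5 (every 100 steps) so XDMF-aware viewers can read the
# time series.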
def write_xdmf_multi(outpath, config,
                     nstart=None, nt=None, nsave=None,
                     states=None, times=None):
    """Write an XDMF file to read the solution of multiple PetIBM variables.

    Parameters
    ----------
    outpath : pathlib.Path object
        Path of the XDMF file to create.
    config : dictionary
        Should contain two keys: 'grid' and 'data'.
        The value mapped to 'grid' is the path of the HDF5 grid file.
        The value mapped to 'data' is a dictionary.
        Each item of the 'data' dictionary maps the name of a variable
        to add to the XDMF file to the path of the directory that
        contains the numerical solution for that variable.
    nstart : integer (optional)
        Starting time step; default: None.
    nt : integer (optional)
        Number of time steps; default: None.
    nsave : integer (optional)
        Frequency of saving in number of time steps; default: None.
    states : list of integers (optional)
        The list of time-step indices to consider in the XDMF file;
        default: None.
    times : list of floats (optional)
        The list of time values; default: None.

    """
    # Initialize XDMF file.
    xdmf = etree.Element('Xdmf', Version='2.2')
    info = etree.SubElement(xdmf, 'Information',
                            Name='MetaData',
                            Value='ID-23454')
    domain = etree.SubElement(xdmf, 'Domain')
    grid_time_series = etree.SubElement(domain, 'Grid',
                                        Name='TimeSeries',
                                        GridType='Collection',
                                        CollectionType='Temporal')
    # Read grid to get dimension and number of points.
    master_name = list(config['data'].keys())[0]
    gridpath = config['grid']
    grid = read_grid_hdf5(gridpath, master_name)
    dim = len(grid)
    topology_type = '{}DRectMesh'.format(dim)
    geometry_type = 'VXVY' + (dim == 3) * 'VZ'
    components = ('x', 'y', 'z')[:dim]
    gridsize = [len(line) for line in grid]
    number_of_elements = ' '.join(str(n) for n in gridsize[::-1])
    precision = '8'
    # Get time-step indices and time values.
    if states is None:
        states = list(range(nstart, nstart + nt + 1, nsave))
    # Generate the time series.
    for i, state in enumerate(states):
        grid = etree.SubElement(grid_time_series, 'Grid',
                                Name='Grid',
                                GridType='Uniform')
        if times is not None:
            time_value = '{:.6f}'.format(times[i])
        else:
            time_value = '{:0>7}'.format(state)
        time = etree.SubElement(grid, 'Time',
                                Value=time_value)
        topology = etree.SubElement(grid, 'Topology',
                                    TopologyType=topology_type,
                                    NumberOfElements=number_of_elements)
        geometry = etree.SubElement(grid, 'Geometry',
                                    GeometryType=geometry_type)
        # Create XDMF block for the grid. (Use a loop for code re-use.)
        for component, n in zip(components, gridsize):
            dataitem = etree.SubElement(geometry, 'DataItem',
                                        Dimensions=str(n),
                                        NumberType='Float',
                                        Precision=precision,
                                        Format='HDF')
            dataitem.text = ':/'.join([str(gridpath),
                                       master_name + '/' + component])
        # Create XDMF block for each scalar field variable.
        for name, datadir in config['data'].items():
            attribute = etree.SubElement(grid, 'Attribute',
                                         Name=name,
                                         AttributeType='Scalar',
                                         Center='Node')
            dataitem = etree.SubElement(attribute, 'DataItem',
                                        Dimensions=number_of_elements,
                                        NumberType='Float',
                                        Precision=precision,
                                        Format='HDF')
            filepath = datadir / '{:0>7}.h5'.format(state)
            dataitem.text = ':/'.join([str(filepath), name])
    # Write XDMF file.
    tree = etree.ElementTree(xdmf)
    tree.write(str(outpath), pretty_print=True, xml_declaration=True)
    return
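# Hedged usage sketch (added for illustration; paths and variable names are
# assumptions): each entry of config['data'] maps a variable name to the
# directory holding its per-time-step HDF5 files, matching the docstring above.
#
#     import pathlib
#     config = {'grid': pathlib.Path('grid.h5'),
#               'data': {'u': pathlib.Path('solution/u'),
#                        'v': pathlib.Path('solution/v')}}
#     write_xdmf_multi(pathlib.Path('fields.xmf'), config,
#                      nstart=0, nt=1000, nsave=100)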
| 44.540404 | 78 | 0.52591 | 905 | 8,819 | 5.069613 | 0.185635 | 0.065388 | 0.041412 | 0.014821 | 0.839799 | 0.828684 | 0.809503 | 0.809503 | 0.809503 | 0.809503 | 0 | 0.007093 | 0.376573 | 8,819 | 197 | 79 | 44.766497 | 0.827392 | 0.263409 | 0 | 0.887097 | 0 | 0 | 0.067527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016129 | false | 0 | 0.032258 | 0 | 0.064516 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e3a42e4edf80ff572efff8363a6e2d93417b591d | 1,125 | py | Python | run_deepneo.py | kaistomics/DeepNeo-tcr | e3bd7edcfb8f0465394283ce0d26f5e9359733cb | [
"MIT"
] | null | null | null | run_deepneo.py | kaistomics/DeepNeo-tcr | e3bd7edcfb8f0465394283ce0d26f5e9359733cb | [
"MIT"
] | null | null | null | run_deepneo.py | kaistomics/DeepNeo-tcr | e3bd7edcfb8f0465394283ce0d26f5e9359733cb | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os, sys

mhc_class = sys.argv[1]
predtype = sys.argv[2]
Inputname = sys.argv[3]
Resultname = sys.argv[4]

if mhc_class == "class1" and predtype == 'tcr':
    os.system('THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 '
              + 'python cnn.py '
              + '../data/tcr1-pan.pkl.gz '
              + Inputname + ' '
              + Resultname)
    print("\nThe running is completed!\n")

if mhc_class == "class1" and predtype == 'mhc':
    os.system('THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 python cnn.py ../data/mhc1-pan.pkl.gz ' + Inputname + ' ' + Resultname)
    print("\nThe running is completed!\n")

if mhc_class == "class2" and predtype == 'mhc':
    os.system('THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 python cnn.py ../data/mhc2-pan.pkl.gz ' + Inputname + ' ' + Resultname)
    print("\nThe running is completed!\n")

if mhc_class == "class2" and predtype == 'tcr':
    os.system('THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 '
              + 'python cnn.py '
              + '../data/tcr2-pan.pkl.gz '
              + Inputname + ' '
              + Resultname)
    print("\nThe running is completed!\n")
| 37.5 | 132 | 0.653333 | 159 | 1,125 | 4.540881 | 0.295597 | 0.055402 | 0.055402 | 0.105263 | 0.853186 | 0.853186 | 0.822715 | 0.822715 | 0.822715 | 0.822715 | 0 | 0.025834 | 0.174222 | 1,125 | 29 | 133 | 38.793103 | 0.751346 | 0.014222 | 0 | 0.48 | 0 | 0.08 | 0.476965 | 0.278229 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04 | null | null | 0.16 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e3d90ecf4953be9a30c3c7879950ff5a43e4ff9c | 5,164 | py | Python | fba/data/datasets/cse.py | hukkelas/full_body_anonymization | c61745b137c84ffb742ef6ab2f4721db4acf22b7 | [
"MIT"
] | 27 | 2022-01-06T20:15:24.000Z | 2022-03-29T11:54:49.000Z | fba/data/datasets/cse.py | hukkelas/full_body_anonymization | c61745b137c84ffb742ef6ab2f4721db4acf22b7 | [
"MIT"
] | 2 | 2022-03-17T06:04:23.000Z | 2022-03-25T08:50:57.000Z | fba/data/datasets/cse.py | hukkelas/full_body_anonymization | c61745b137c84ffb742ef6ab2f4721db4acf22b7 | [
"MIT"
] | 2 | 2022-01-07T13:16:59.000Z | 2022-01-16T02:10:50.000Z | import pickle
from typing import Callable, Optional, Union

from fba import logger
import torchvision
import torch
import pathlib
import numpy as np

from .build import DATASET_REGISTRY
from fba.utils.utils import cache_embed_stats


@DATASET_REGISTRY.register_module
class CocoCSE(torch.utils.data.Dataset):

    def __init__(self,
                 dirpath: Union[str, pathlib.Path],
                 transform: Optional[Callable],
                 **kwargs):
        dirpath = pathlib.Path(dirpath)
        self.dirpath = dirpath
        if transform is None:
            self.transform = lambda x: x
        else:
            self.transform = transform
        assert self.dirpath.is_dir(), \
            f"Did not find dataset at: {dirpath}"
        self.image_paths, self.embedding_paths = self._load_impaths()
        self.embed_map = torch.from_numpy(np.load(self.dirpath.joinpath("embed_map.npy")))
        cache_embed_stats(self.embed_map)
        logger.info(
            f"Dataset loaded from: {dirpath}. Number of samples:{len(self)}")

    def _load_impaths(self):
        image_dir = self.dirpath.joinpath("images")
        image_paths = list(image_dir.glob("*.png"))
        image_paths.sort()
        embedding_paths = [
            self.dirpath.joinpath("embedding", x.stem + ".npy") for x in image_paths
        ]
        return image_paths, embedding_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        im = torchvision.io.read_image(str(self.image_paths[idx]))
        vertices, mask, border = np.split(np.load(self.embedding_paths[idx]), 3, axis=-1)
        vertices = torch.from_numpy(vertices.squeeze()).long()
        mask = torch.from_numpy(mask.squeeze()).float()
        border = torch.from_numpy(border.squeeze()).float()[None]
        E_mask = 1 - mask - border
        batch = {
            "img": im,
            "vertices": vertices,
            "mask": mask,
            "embed_map": self.embed_map,
            "border": border,
            "E_mask": E_mask
        }
        return self.transform(batch)


@DATASET_REGISTRY.register_module
class CocoCSEWithFace(CocoCSE):

    def __init__(self,
                 dirpath: Union[str, pathlib.Path],
                 transform: Optional[Callable],
                 **kwargs):
        super().__init__(dirpath, transform, **kwargs)
        with open(self.dirpath.joinpath("face_boxes_XYXY.pickle"), "rb") as fp:
            self.face_boxes = pickle.load(fp)

    def __getitem__(self, idx):
        item = super().__getitem__(idx)
        item["boxes_XYXY"] = self.face_boxes[self.image_paths[idx].name]
        return item


@DATASET_REGISTRY.register_module
class CocoCSESemantic(torch.utils.data.Dataset):

    def __init__(self,
                 dirpath: Union[str, pathlib.Path],
                 transform: Optional[Callable],
                 **kwargs):
        dirpath = pathlib.Path(dirpath)
        self.dirpath = dirpath
        if transform is None:
            self.transform = lambda x: x
        else:
            self.transform = transform
        assert self.dirpath.is_dir(), \
            f"Did not find dataset at: {dirpath}"
        self.image_paths, self.embedding_paths = self._load_impaths()
        self.vertx2cat = torch.from_numpy(np.load(self.dirpath.parent.joinpath("vertx2cat.npy")))
        self.embed_map = torch.from_numpy(np.load(self.dirpath.joinpath("embed_map.npy")))
        logger.info(
            f"Dataset loaded from: {dirpath}. Number of samples:{len(self)}")

    def _load_impaths(self):
        image_dir = self.dirpath.joinpath("images")
        image_paths = list(image_dir.glob("*.png"))
        image_paths.sort()
        embedding_paths = [
            self.dirpath.joinpath("embedding", x.stem + ".npy") for x in image_paths
        ]
        return image_paths, embedding_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        im = torchvision.io.read_image(str(self.image_paths[idx]))
        vertices, mask, border = np.split(np.load(self.embedding_paths[idx]), 3, axis=-1)
        vertices = torch.from_numpy(vertices.squeeze()).long()
        mask = torch.from_numpy(mask.squeeze()).float()
        border = torch.from_numpy(border.squeeze()).float()[None]
        batch = {
            "img": im,
            "vertices": vertices,
            "mask": mask,
            "border": border,
            "vertx2cat": self.vertx2cat,
            "embed_map": self.embed_map,
        }
        return self.transform(batch)


@DATASET_REGISTRY.register_module
class CocoCSESemanticWithFace(CocoCSESemantic):

    def __init__(self,
                 dirpath: Union[str, pathlib.Path],
                 transform: Optional[Callable],
                 **kwargs):
        super().__init__(dirpath, transform, **kwargs)
        with open(self.dirpath.joinpath("face_boxes_XYXY.pickle"), "rb") as fp:
            self.face_boxes = pickle.load(fp)

    def __getitem__(self, idx):
        item = super().__getitem__(idx)
        item["boxes_XYXY"] = self.face_boxes[self.image_paths[idx].name]
        return item
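# Hedged usage sketch (added for illustration; the dataset path is an
# assumption). Each item is a dict keyed by "img", "vertices", "mask",
# "border", "embed_map" (plus "E_mask" or "boxes_XYXY" depending on the class):
#
#     dataset = CocoCSE("data/coco_cse/train", transform=None)
#     batch = dataset[0]
#     print(batch["img"].shape, batch["mask"].shape)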
| 34.891892 | 97 | 0.610573 | 600 | 5,164 | 5.021667 | 0.173333 | 0.062064 | 0.041819 | 0.0385 | 0.864587 | 0.828742 | 0.828742 | 0.795885 | 0.795885 | 0.757385 | 0 | 0.002391 | 0.271108 | 5,164 | 147 | 98 | 35.129252 | 0.798087 | 0 | 0 | 0.822581 | 0 | 0 | 0.081348 | 0.008522 | 0 | 0 | 0 | 0 | 0.016129 | 1 | 0.096774 | false | 0 | 0.072581 | 0.016129 | 0.266129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
541c976f06845bf9eeb676a64a4b79b8e881e8c7 | 12,937 | py | Python | tests/typer_tests/variant_typer_tests/test_type_simple_vars.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | 1 | 2020-08-08T01:08:01.000Z | 2020-08-08T01:08:01.000Z | tests/typer_tests/variant_typer_tests/test_type_simple_vars.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | null | null | null | tests/typer_tests/variant_typer_tests/test_type_simple_vars.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | null | null | null | from unittest import TestCase
from mykrobe.variants.schema.models import Variant
from mykrobe.variants.schema.models import VariantCall

from mykrobe.typing import VariantTyper
from mykrobe.typing import ProbeCoverage
from mykrobe.typing import SequenceProbeCoverage
from mykrobe.typing import VariantProbeCoverage


class VariantTyperTest(TestCase):

    def setUp(self):
        self.vt = VariantTyper(expected_depths=[100])

    def tearDown(self):
        pass

    def test_wt_vars(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=100,
                                           median_depth=100,
                                           k_count=100,
                                           klen=31)
        alternate_coverages = [ProbeCoverage(min_depth=100,
                                             percent_coverage=3,
                                             median_depth=100,
                                             k_count=3,
                                             klen=31)]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt.type([v1])
        assert call['genotype'] == [0, 0]
        assert call["info"].get('expected_depths') == [100]

    def test_alt_vars(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=3,
                                           median_depth=100,
                                           k_count=3,
                                           klen=31)
        alternate_coverages = [ProbeCoverage(min_depth=100,
                                             percent_coverage=100,
                                             median_depth=100,
                                             k_count=100,
                                             klen=31)]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt.type([v1])
        assert call['genotype'] == [1, 1]

    def test_mixed_vars(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=100,
                                           median_depth=50,
                                           k_count=50,
                                           klen=31)
        alternate_coverages = [ProbeCoverage(min_depth=100,
                                             percent_coverage=100,
                                             median_depth=50,
                                             k_count=50,
                                             klen=31)]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt.type(v1)
        assert call['genotype'] == [0, 1]

    def test_mixed_vars2(self):
        reference_coverage = ProbeCoverage(min_depth=11,
                                           percent_coverage=100,
                                           median_depth=42,
                                           k_count=42,
                                           klen=31)
        alternate_coverages = [ProbeCoverage(min_depth=94,
                                             percent_coverage=100,
                                             median_depth=102,
                                             k_count=94,
                                             klen=31)]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt.type(v1)
        assert call['genotype'] == [0, 1]


class VariantTyperWithContamination(TestCase):

    def setUp(self):
        self.vt_no_contaim = VariantTyper(
            expected_depths=[100],
            contamination_depths=[])
        # TODO: add contamination type
        # self.vt_contaim = VariantTyper(
        #     expected_depths=[80],
        #     contamination_depths=[20])

    def tearDown(self):
        pass

    def test_simple_case(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=100,
                                           median_depth=80,
                                           k_count=80,
                                           klen=31)
        alternate_coverages = [ProbeCoverage(min_depth=100,
                                             percent_coverage=100,
                                             median_depth=20,
                                             k_count=40,
                                             klen=31)]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt_no_contaim.type(v1)
        assert call['genotype'] == [0, 1]
        # call = self.vt_contaim.type(v1)
        # assert call['genotype'] == [0, 0]


class TestVariantTyperWithMultipleAlternateCoverages(TestCase):

    def setUp(self):
        # TODO: test should pass on the kc model also
        self.vt_no_contaim = VariantTyper(
            expected_depths=[100],
            contamination_depths=[],
            model="median_depth")

    def tearDown(self):
        pass

    def test_simple_case(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=70,
                                           median_depth=80,
                                           k_count=80,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=100,
                             percent_coverage=70,
                             median_depth=20,
                             k_count=20,
                             klen=31)
        alt2 = ProbeCoverage(min_depth=100,
                             percent_coverage=100,
                             median_depth=80,
                             k_count=80,
                             klen=31)
        alternate_coverages = [alt1, alt2]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        assert v1._choose_best_alternate_coverage() == alt2
        call = self.vt_no_contaim.type(v1)
        assert call['genotype'] == [1, 1]


class TestVariantTyperWithMultipleProbeCoverages(TestCase):

    def setUp(self):
        self.vt_no_contaim = VariantTyper(
            expected_depths=[100],
            contamination_depths=[])

    def tearDown(self):
        pass

    def test_simple_case(self):
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=80,
                                           median_depth=80,
                                           k_count=80,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=100,
                             percent_coverage=50,
                             median_depth=20,
                             k_count=20,
                             klen=31)
        alt2 = ProbeCoverage(min_depth=100,
                             percent_coverage=40,
                             median_depth=80,
                             k_count=30,
                             klen=31)
        alternate_coverages = [alt1, alt2]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        reference_coverage = ProbeCoverage(min_depth=100,
                                           percent_coverage=80,
                                           median_depth=80,
                                           k_count=20,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=100,
                             percent_coverage=50,
                             median_depth=20,
                             k_count=20,
                             klen=31)
        alt2 = ProbeCoverage(min_depth=100,
                             percent_coverage=100,
                             median_depth=80,
                             k_count=100,
                             klen=31)
        alternate_coverages = [alt1, alt2]
        v2 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt_no_contaim.type([v1, v2])
        assert call['genotype'] == [1, 1]


class TestVariantTyperWithLowMinimum(TestCase):

    def setUp(self):
        self.vt_no_contaim = VariantTyper(
            expected_depths=[100],
            contamination_depths=[])
        self.vt2_no_contaim = VariantTyper(
            expected_depths=[1],
            contamination_depths=[])

    def tearDown(self):
        pass

    def test_2(self):
        reference_coverage = ProbeCoverage(min_depth=131,
                                           percent_coverage=95.2381,
                                           median_depth=155,
                                           k_count=131,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=1,
                             percent_coverage=100,
                             median_depth=1,
                             k_count=1,
                             klen=31)
        alternate_coverages = [alt1]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt_no_contaim.type(v1)
        assert call['genotype'] == [0, 0]

    def test_3(self):
        reference_coverage = ProbeCoverage(min_depth=2,
                                           percent_coverage=59.52,
                                           median_depth=2,
                                           k_count=60,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=1,
                             percent_coverage=83.33,
                             median_depth=1,
                             k_count=83,
                             klen=31)
        alternate_coverages = [alt1]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = self.vt2_no_contaim.type(v1)
        assert call['genotype'] == [1, 1]
        assert call["info"]["conf"] < 150

    def test_4(self):
        vt = VariantTyper(
            expected_depths=[6],
            contamination_depths=[],
            confidence_threshold=3)
        reference_coverage = ProbeCoverage(min_depth=1,
                                           percent_coverage=100,
                                           median_depth=2,
                                           k_count=2,
                                           klen=31)
        alt1 = ProbeCoverage(min_depth=1,
                             percent_coverage=100,
                             median_depth=1,
                             k_count=1,
                             klen=31)
        alternate_coverages = [alt1]
        v1 = VariantProbeCoverage(var_name="A123T",
                                  reference_coverages=[reference_coverage],
                                  alternate_coverages=alternate_coverages)
        call = vt.type(v1)
        assert call['genotype'] == [0, 1]
        print(call["info"])
        assert call["info"]["conf"] < 100
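# Hedged run sketch (added for illustration): these unittest-style tests can be
# run from the repository root with pytest, e.g.
#
#     python -m pytest tests/typer_tests/variant_typer_tests/test_type_simple_vars.py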
| 42.140065 | 75 | 0.426374 | 959 | 12,937 | 5.50365 | 0.111575 | 0.112543 | 0.09947 | 0.077302 | 0.830618 | 0.785146 | 0.730201 | 0.715801 | 0.695529 | 0.669193 | 0 | 0.065504 | 0.507923 | 12,937 | 306 | 76 | 42.277778 | 0.763588 | 0.017315 | 0 | 0.744186 | 0 | 0 | 0.01464 | 0 | 0 | 0 | 0 | 0 | 0.054264 | 1 | 0.077519 | false | 0.01938 | 0.027132 | 0 | 0.124031 | 0.003876 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
58217c4f2fda0c71d23f0bc09b84ad582b4fd19c | 82,820 | py | Python | test_api.py | j-woodlee/twitter-investor-parser | 204b1c9f79f2d3d0d343ba48b6d3cb89561a7a63 | [
"MIT"
] | null | null | null | test_api.py | j-woodlee/twitter-investor-parser | 204b1c9f79f2d3d0d343ba48b6d3cb89561a7a63 | [
"MIT"
] | null | null | null | test_api.py | j-woodlee/twitter-investor-parser | 204b1c9f79f2d3d0d343ba48b6d3cb89561a7a63 | [
"MIT"
] | null | null | null | def tim_ferris():
dic = {"users": [{"id": 34929992, "id_str": "34929992", "name": "Amer Delic", "screen_name": "AmerDelic", "location": "Austin, TX", "url": None, "description": "Former tennis player. Current golf hack. Unprofessional runner/cyclist/pickle ball player. Food/candy enthusiast. \ud83c\udde7\ud83c\udde6-\ud83c\uddfa\ud83c\uddf8. Refugee. #Illini #DTWD #Cubs", "protected": False, "followers_count": 15051, "friends_count": 632, "listed_count": 646, "created_at": "Fri Apr 24 13:55:10 +0000 2009", "favourites_count": 22671, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 12063, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/699397698244349952/pqf22r_8_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/699397698244349952/pqf22r_8_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/34929992/1529956717", "profile_link_color": "91D2FA", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 171515161, "id_str": "171515161", "name": "Beckley Foundation | Psychedelic Research", "screen_name": "BeckleyResearch", "location": "Beckley, Oxford", "url": "http://www.beckleyfoundation.org", "description": "Initiating and funding #research into #psychedelics such as #LSD, #Psilocybin, and #DMT, as well as #Cannabis to support evidence-based drug policy reform.", "protected": False, "followers_count": 36755, "friends_count": 2617, "listed_count": 608, "created_at": "Tue Jul 27 14:41:01 +0000 2010", "favourites_count": 3097, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 16423, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "CBE5EE", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme3/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme3/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1011887798377484288/P_brk5S1_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1011887798377484288/P_brk5S1_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/171515161/1565710400", "profile_link_color": "5EA891", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "E3E2DE", "profile_text_color": "634047", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 56753730, "id_str": "56753730", "name": "Fred Barrett", "screen_name": 
"FredBarrettPhD", "location": "Baltimore, MD", "url": "http://www.hopkinsmedicine.org/profiles/results/directory/profile/10000707/Frederick-Barrett", "description": "@HopkinsMedicine @JHPsychedelics Zymurgist Aikidoka Affective Neuropsychopharmacologist. He/him/VMO/BLM. No relation to the supreme court nominee.", "protected": False, "followers_count": 1430, "friends_count": 937, "listed_count": 21, "created_at": "Tue Jul 14 17:18:21 +0000 2009", "favourites_count": 8255, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": False, "statuses_count": 1736, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "709397", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1268149747463917569/ZIWMB8wR_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1268149747463917569/ZIWMB8wR_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/56753730/1601648149", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "86A4A6", "profile_sidebar_fill_color": "A0C5C7", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 85694915, "id_str": "85694915", "name": "Erin Brockovich", "screen_name": "ErinBrockovich", "location": "Agoura Hills, California ", "url": "http://www.brockovich.com", "description": "I am the *real* Erin Brockovich. Mother and consumer advocate. \u201cSuperman's Not Coming\u201d is out from @PantheonBooks 8/25. 
Be the hero you've been waiting for.", "protected": False, "followers_count": 66535, "friends_count": 898, "listed_count": 725, "created_at": "Tue Oct 27 23:46:45 +0000 2009", "favourites_count": 837, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 2242, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "B2DFDA", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme13/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme13/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/575771265916518401/01PptHrC_normal.jpeg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/575771265916518401/01PptHrC_normal.jpeg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/85694915/1594834443", "profile_link_color": "93A644", "profile_sidebar_border_color": "EEEEEE", "profile_sidebar_fill_color": "FFFFFF", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 19617105, "id_str": "19617105", "name": "Air New Zealand\u2708\ufe0f", "screen_name": "FlyAirNZ", "location": "", "url": "http://airnewzealand.com", "description": "The official Air New Zealand Twitter account \u2708 We're listening 24/7. Please call 0800 737 000 for immediate assistance.", "protected": False, "followers_count": 678537, "friends_count": 21365, "listed_count": 3008, "created_at": "Tue Jan 27 21:18:45 +0000 2009", "favourites_count": 18144, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 67408, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1242966746429935616/6ExNqwZJ_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1242966746429935616/6ExNqwZJ_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/19617105/1597384285", "profile_link_color": "000000", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "595959", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 158414847, "id_str": "158414847", "name": "The Daily Show", "screen_name": "TheDailyShow", "location": "", "url": None, "description": "Trevor Noah and The World's Fakest News Team. Weeknights 11/10c on @ComedyCentral. 
Visit https://t.co/3BZcz6tBEx to take action against the issues you care about most", "protected": False, "followers_count": 9127373, "friends_count": 759, "listed_count": 35965, "created_at": "Tue Jun 22 16:41:05 +0000 2010", "favourites_count": 2331, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 24359, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1299421580989267970/VhmsZ1xE_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1299421580989267970/VhmsZ1xE_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/158414847/1602091990", "profile_link_color": "0084B4", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 612473, "id_str": "612473", "name": "BBC News (UK)", "screen_name": "BBCNews", "location": "London", "url": "http://www.bbc.co.uk/news", "description": "News, features and analysis. For world news, follow @BBCWorld. Breaking news, follow @BBCBreaking. Latest sport news @BBCSport. Our Instagram: BBCNews", "protected": False, "followers_count": 11481620, "friends_count": 100, "listed_count": 43559, "created_at": "Mon Jan 08 08:05:57 +0000 2007", "favourites_count": 44, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 432733, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": True, "profile_background_color": "FFFFFF", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1150718511129477120/2N_GW7HR_normal.png", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1150718511129477120/2N_GW7HR_normal.png", "profile_banner_url": "https://pbs.twimg.com/profile_banners/612473/1584532383", "profile_link_color": "1F527B", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "FFFFFF", "profile_text_color": "5A5A5A", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "regular"}, {"id": 23543947, "id_str": "23543947", "name": "All Blacks", "screen_name": "AllBlacks", "location": "New Zealand", "url": None, "description": "The official home of the All Blacks and New Zealand Rugby on Twitter. 
Join us on Facebook and Instagram.", "protected": False, "followers_count": 973971, "friends_count": 599, "listed_count": 3748, "created_at": "Tue Mar 10 02:24:11 +0000 2009", "favourites_count": 3057, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 44550, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1154513873501687810/qEj8toqz_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1154513873501687810/qEj8toqz_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/23543947/1575409643", "profile_link_color": "0D0C0D", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "FFFFFF", "profile_text_color": "000000", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 184910040, "id_str": "184910040", "name": "Adele", "screen_name": "Adele", "location": "London", "url": "http://adele.com", "description": "http://adele.com", "protected": False, "followers_count": 26926089, "friends_count": 0, "listed_count": 29306, "created_at": "Mon Aug 30 19:53:19 +0000 2010", "favourites_count": 0, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 311, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "131516", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/657199367556866048/EBEIl2ol_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/657199367556866048/EBEIl2ol_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/184910040/1445523732", "profile_link_color": "E84037", "profile_sidebar_border_color": "EEEEEE", "profile_sidebar_fill_color": "EFEFEF", "profile_text_color": "333333", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "regular"}, {"id": 6480682, "id_str": "6480682", "name": "Aziz Ansari", "screen_name": "azizansari", "location": "New York, NY", "url": "http://azizansari.com", "description": "Pasta lover. I don't tweet much. My new Netflix series Master of None is now streaming on Netflix. 
I wrote a book called Modern Romance.", "protected": False, "followers_count": 10416182, "friends_count": 0, "listed_count": 27048, "created_at": "Thu May 31 19:06:49 +0000 2007", "favourites_count": 409, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 7589, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "053285", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/421377161/azizlittletwitter_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/421377161/azizlittletwitter_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/6480682/1398398057", "profile_link_color": "0000FF", "profile_sidebar_border_color": "87BC44", "profile_sidebar_fill_color": "E0FF92", "profile_text_color": "000000", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 16089776, "id_str": "16089776", "name": "Sasha Grey", "screen_name": "SashaGrey", "location": "Sashagrey.com", "url": "http://bit.ly/sghween", "description": "ALLY. FUNKY. Hot sauce enthusiast. Single Malt Drinkin, Dean Martin Wannabe http://Twitch.tv/sashagrey #rsf #secretsauce Host of Grey Area on @watchvenn", "protected": False, "followers_count": 1437940, "friends_count": 708, "listed_count": 5173, "created_at": "Mon Sep 01 23:46:34 +0000 2008", "favourites_count": 5829, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 14099, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "6D7878", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/940341877974249472/tvxiPY2Y_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/940341877974249472/tvxiPY2Y_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/16089776/1528920248", "profile_link_color": "0084B4", "profile_sidebar_border_color": "CDD2CB", "profile_sidebar_fill_color": "C8CDC6", "profile_text_color": "191515", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 1381595371, "id_str": "1381595371", "name": "Uber New Zealand", "screen_name": "Uber_NZ", "location": "New Zealand", "url": "http://uber.com/jobs", "description": "Moving people (and now food with Uber Eats) in New Zealand and beyond with the tap of a button.", "protected": False, "followers_count": 4611, "friends_count": 1152, "listed_count": 25, "created_at": "Fri Apr 26 10:24:58 +0000 2013", "favourites_count": 724, "utc_offset": None, "time_zone": None, 
"geo_enabled": False, "verified": True, "statuses_count": 3404, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "E7E7E7", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1040021334883852288/a-2fcJmM_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1040021334883852288/a-2fcJmM_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/1381595371/1536795397", "profile_link_color": "1FBAD6", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 1110572521, "id_str": "1110572521", "name": "All Blacks Sevens", "screen_name": "AllBlacks7s", "location": "", "url": "http://www.allblacks.com", "description": "The official home of the All Blacks Sevens. Instagram: AllBlacks7s || FB: AllBlacks7s", "protected": False, "followers_count": 51152, "friends_count": 313, "listed_count": 294, "created_at": "Tue Jan 22 03:54:46 +0000 2013", "favourites_count": 676, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 7960, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "01090D", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1006399965189402624/-o0s0T25_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1006399965189402624/-o0s0T25_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/1110572521/1548444056", "profile_link_color": "000000", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 251496825, "id_str": "251496825", "name": "Ryan Gosling", "screen_name": "RyanGosling", "location": "LA", "url": None, "description": "", "protected": False, "followers_count": 2122212, "friends_count": 41, "listed_count": 5122, "created_at": "Sun Feb 13 07:35:03 +0000 2011", "favourites_count": 41, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 285, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "DFE5C1", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": False, 
"profile_image_url": "http://pbs.twimg.com/profile_images/883368541763665920/OaNY1eRC_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/883368541763665920/OaNY1eRC_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/251496825/1499446346", "profile_link_color": "1B95E0", "profile_sidebar_border_color": "81A9A1", "profile_sidebar_fill_color": "354555", "profile_text_color": "C0CBA3", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 4128627800, "id_str": "4128627800", "name": "Wilderpeople", "screen_name": "wilderpeople", "location": "New Zealand", "url": "http://www.madmanfilms.com.au/hunt-for-the-wilderpeople", "description": "@TaikaWaititi's classic starring @JulianDennison & @TwoPaddocks. Out now on DVD, Blu-ray and Digital #Wilderpeople", "protected": False, "followers_count": 6393, "friends_count": 2212, "listed_count": 41, "created_at": "Wed Nov 04 22:49:20 +0000 2015", "favourites_count": 6279, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 2732, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/775937971191107584/C5gJVzx6_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/775937971191107584/C5gJVzx6_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/4128627800/1473832977", "profile_link_color": "4A913C", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 25387735, "id_str": "25387735", "name": "Franklin Leonard", "screen_name": "franklinleonard", "location": "Los Angeles, CA; London, UK", "url": "http://www.franklinleonard.com", "description": ".@theblcklst founder; Film & TV producer; Politics and football (soccer) person. @vanityfair contributing editor. 
IG: @franklinjleonard Speaking: @freshspeakers", "protected": False, "followers_count": 147440, "friends_count": 8675, "listed_count": 1394, "created_at": "Thu Mar 19 21:19:06 +0000 2009", "favourites_count": 39718, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 90058, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1098628316943138816/5FKx3xCI_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1098628316943138816/5FKx3xCI_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/25387735/1460234412", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 16017475, "id_str": "16017475", "name": "Nate Silver", "screen_name": "NateSilver538", "location": "New York", "url": "http://fivethirtyeight.com/", "description": "Editor-in-Chief, @FiveThirtyEight. Author, The Signal and the Noise (http://amzn.to/QdyFYV). Sports/politics/food geek.", "protected": False, "followers_count": 3673610, "friends_count": 1349, "listed_count": 35877, "created_at": "Wed Aug 27 20:56:45 +0000 2008", "favourites_count": 1274, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 33374, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/668814368008708096/5HABV7bJ_normal.png", "profile_image_url_https": "https://pbs.twimg.com/profile_images/668814368008708096/5HABV7bJ_normal.png", "profile_link_color": "000092", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 12266442, "id_str": "12266442", "name": "Janelle Mon\u00e1e, Cindi Mayweather\ud83d\udc7d\ud83d\ude86\ud83e\udd16\ud83d\ude80\ud83e\ude90", "screen_name": "JanelleMonae", "location": "", "url": "http://soundtracks.lnk.to/antebellum", "description": "pro nows they/she/them/her/freeassmuthafucka Don\u2019t let anyone or anything stop your evolution even if it\u2019s you . 
\ud83d\udc7d\ud83d\ude86", "protected": False, "followers_count": 1266807, "friends_count": 626, "listed_count": 6908, "created_at": "Tue Jan 15 11:59:23 +0000 2008", "favourites_count": 7511, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 19358, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "131516", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/1287474630270189569/cTMQXoiw_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1287474630270189569/cTMQXoiw_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/12266442/1595792847", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "EFEFEF", "profile_text_color": "333333", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 19637934, "id_str": "19637934", "name": "RainnWilson", "screen_name": "rainnwilson", "location": "Los Angeles-ish", "url": "http://lidehaiti.org", "description": "Winner of the genetic lottery", "protected": False, "followers_count": 4473207, "friends_count": 625, "listed_count": 32018, "created_at": "Wed Jan 28 05:28:45 +0000 2009", "favourites_count": 2039, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 20713, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": True, "profile_background_color": "FFFFFF", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/949431044977049602/S7fDuxFB_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/949431044977049602/S7fDuxFB_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/19637934/1560285035", "profile_link_color": "9FA1A3", "profile_sidebar_border_color": "22262B", "profile_sidebar_fill_color": "17161A", "profile_text_color": "585F65", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 729468343, "id_str": "729468343", "name": "Rick and Morty", "screen_name": "RickandMorty", "location": "", "url": "http://www.rickandmorty.com", "description": "OFFICIAL the Rick and Morty Twitter! 
for online social stuff, broooooh\nWatch #RickandMorty Season 4 on the Adult Swim app.", "protected": False, "followers_count": 2035144, "friends_count": 225, "listed_count": 1932, "created_at": "Tue Jul 31 23:17:58 +0000 2012", "favourites_count": 13176, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 4840, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/897250392022540288/W1T-QjML_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/897250392022540288/W1T-QjML_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/729468343/1595876433", "profile_link_color": "0084B4", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 1308908961538535424, "id_str": "1308908961538535424", "name": "Borat", "screen_name": "BoratSagdiyev", "location": "", "url": "https://www.amazon.com/dp/B08K4723DQ", "description": "Name: Borat Sagdiyev \u2022 Age 536 moons \u2022 Length: 19.6 cm \u2022 Profession: #4 journalist Repubic of Kazakhstan \u2022 Health: Strong, crush syphilis 15 time! #Trump2020", "protected": False, "followers_count": 385705, "friends_count": 33, "listed_count": 462, "created_at": "Wed Sep 23 23:20:39 +0000 2020", "favourites_count": 69, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 45, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "F5F8FA", "profile_background_image_url": None, "profile_background_image_url_https": None, "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1313999242449416193/mY-iNLOf_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1313999242449416193/mY-iNLOf_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/1308908961538535424/1604634922", "profile_link_color": "1DA1F2", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": True, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 380524610, "id_str": "380524610", "name": "Patrick Radden Keefe", "screen_name": "praddenkeefe", "location": "1 World Trade Ctr 38th Fl. NYC", "url": "https://www.patrickraddenkeefe.com/", "description": "Staff writer @NewYorker. Author of NYT bestseller SAY NOTHING. Podcast: WIND OF CHANGE. 
Writing a book on the Sackler family & the opioid crisis (out 2021).", "protected": False, "followers_count": 39898, "friends_count": 1077, "listed_count": 509, "created_at": "Mon Sep 26 19:58:33 +0000 2011", "favourites_count": 9927, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 3110, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/446024091653251072/KiQdpLGj_normal.jpeg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/446024091653251072/KiQdpLGj_normal.jpeg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/380524610/1588035840", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 95610041, "id_str": "95610041", "name": "Taika Waititi", "screen_name": "TaikaWaititi", "location": "HATER & WRECKER", "url": "https://www.youtube.com/watch?v=wUzZXuslkSg&feature=youtu.be", "description": "Bespoke tweets hand-crafted from locally sourced vintage dickhead.", "protected": False, "followers_count": 1211693, "friends_count": 743, "listed_count": 2826, "created_at": "Wed Dec 09 09:19:22 +0000 2009", "favourites_count": 8973, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 4760, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/789629339969073152/FD7HrH4J_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/789629339969073152/FD7HrH4J_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/95610041/1453832988", "profile_link_color": "0084B4", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 509140285, "id_str": "509140285", "name": "Audio-Technica USA", "screen_name": "USAudioTechnica", "location": "Stow, OH", "url": "https://www.audio-technica.com/en-us/", "description": "The official Audio-Technica USA Twitter! 
We are a worldwide organization devoted to developing award-winning microphones, headphones & other audio equipment.", "protected": False, "followers_count": 67984, "friends_count": 813, "listed_count": 378, "created_at": "Wed Feb 29 19:31:39 +0000 2012", "favourites_count": 10825, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 12143, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "FFFFFF", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/997127421672345600/CiU5u_gN_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/997127421672345600/CiU5u_gN_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/509140285/1588963949", "profile_link_color": "0084B4", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 899671109393682432, "id_str": "899671109393682432", "name": "Austin FC", "screen_name": "AustinFC", "location": "Austin, TX", "url": "http://theuniformforaustin.com", "description": "In a city of legends, something new is rising. This is Austin FC. #VERDE | #LISTOS", "protected": False, "followers_count": 31207, "friends_count": 342, "listed_count": 241, "created_at": "Mon Aug 21 16:34:42 +0000 2017", "favourites_count": 4905, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 4295, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "F5F8FA", "profile_background_image_url": None, "profile_background_image_url_https": None, "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1272392310651789312/sgzM3THO_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1272392310651789312/sgzM3THO_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/899671109393682432/1592196943", "profile_link_color": "1DA1F2", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": True, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 19362341, "id_str": "19362341", "name": "Maria Shriver", "screen_name": "mariashriver", "location": "Los Angeles, CA", "url": "https://bit.ly/2XC8Gmh", "description": "Proud mom & NBC anchor. Founder of Shriver Media & @womensalz. Inspiring Hearts & Minds w/ #TheSundayPaper. 
Watch #SundayPaperLive on IG and YouTube.", "protected": False, "followers_count": 2193245, "friends_count": 200435, "listed_count": 11520, "created_at": "Thu Jan 22 21:23:52 +0000 2009", "favourites_count": 4998, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 17776, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C6E2EE", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme2/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme2/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/522145107785428992/zP6X2qUd_normal.jpeg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/522145107785428992/zP6X2qUd_normal.jpeg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/19362341/1489252850", "profile_link_color": "1F98C7", "profile_sidebar_border_color": "C6E2EE", "profile_sidebar_fill_color": "DAECF4", "profile_text_color": "663B12", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 522701657, "id_str": "522701657", "name": "Zoom", "screen_name": "zoom_us", "location": "San Jose, CA", "url": "https://www.zoom.us", "description": "Bringing the world together, one meeting at a time.", "protected": False, "followers_count": 1087953, "friends_count": 2022, "listed_count": 1494, "created_at": "Tue Mar 13 00:09:22 +0000 2012", "favourites_count": 15366, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 25031, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "9AE4E8", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme16/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme16/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1285693559992008704/oD_oPSBP_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1285693559992008704/oD_oPSBP_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/522701657/1602897700", "profile_link_color": "0084B4", "profile_sidebar_border_color": "BDDCAD", "profile_sidebar_fill_color": "DDFFCC", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 3302, "id_str": "3302", "name": "Andrew Wilkinson", "screen_name": "awilkinson", "location": "Victoria, Canada", "url": "http://www.tinycapital.com", "description": "Co-founder of Tiny w/ @_Sparling_. We own @Dribbble, @MetaLab, and many others. 
Buying, starting, and investing in wonderful internet businesses since 2007.", "protected": False, "followers_count": 87673, "friends_count": 2969, "listed_count": 2037, "created_at": "Fri Jul 28 02:16:08 +0000 2006", "favourites_count": 25255, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 15973, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme4/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme4/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1311779549693317121/0GpBW9T-_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1311779549693317121/0GpBW9T-_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/3302/1590611045", "profile_link_color": "1B95E0", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 40932856, "id_str": "40932856", "name": "Skoll Foundation", "screen_name": "SkollFoundation", "location": "Palo Alto", "url": "http://www.skoll.org", "description": "Driving large-scale change by investing in, connecting, & celebrating social entrepreneurs & innovators dedicated to solving the world\u2019s most pressing problems", "protected": False, "followers_count": 455626, "friends_count": 1501, "listed_count": 5130, "created_at": "Mon May 18 18:26:56 +0000 2009", "favourites_count": 14353, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 26618, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "F15323", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/695324286848888832/IaHF7XBG_normal.png", "profile_image_url_https": "https://pbs.twimg.com/profile_images/695324286848888832/IaHF7XBG_normal.png", "profile_banner_url": "https://pbs.twimg.com/profile_banners/40932856/1560465542", "profile_link_color": "F15323", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "A0C5C7", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 18713254, "id_str": "18713254", "name": "Pegg News", "screen_name": "simonpegg", "location": "Earth", "url": "http://www.simonpegg.net", "description": "This account is moderated on Simon's behalf", "protected": False, "followers_count": 5852323, "friends_count": 1, "listed_count": 30676, "created_at": "Wed Jan 07 06:19:34 +0000 2009", "favourites_count": 465, "utc_offset": None, "time_zone": None, 
"geo_enabled": True, "verified": True, "statuses_count": 16185, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": True, "profile_background_color": "131516", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/864793956814651396/3BKfTLH0_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/864793956814651396/3BKfTLH0_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/18713254/1483470740", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "EEEEEE", "profile_sidebar_fill_color": "EFEFEF", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 24447643, "id_str": "24447643", "name": "Eddie Izzard", "screen_name": "eddieizzard", "location": "Earth", "url": "http://www.eddieizzard.com", "description": "I'm a British European, think like an American & born in an Arabic country. I've run a few marathons & have performed my show now in 45 countries in 4 languages", "protected": False, "followers_count": 4442655, "friends_count": 608, "listed_count": 24678, "created_at": "Sat Mar 14 23:13:44 +0000 2009", "favourites_count": 39, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 8585, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "022330", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme15/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme15/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1229832538434072576/abRTvZzH_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1229832538434072576/abRTvZzH_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/24447643/1582048950", "profile_link_color": "0084B4", "profile_sidebar_border_color": "A8C7F7", "profile_sidebar_fill_color": "C0DFEC", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 17562763, "id_str": "17562763", "name": "Conor White-Sullivan", "screen_name": "Conaw", "location": "Oakland, CA", "url": "http://roamresearch.com", "description": "Co-founder of @RoamResearch. 
Believer in tools for thought.", "protected": False, "followers_count": 24209, "friends_count": 836, "listed_count": 888, "created_at": "Sat Nov 22 20:50:49 +0000 2008", "favourites_count": 36982, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 12880, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme2/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme2/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1244392329533775872/GU-on2fT_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1244392329533775872/GU-on2fT_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/17562763/1523429391", "profile_link_color": "ABB8C2", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "regular"}, {"id": 953791371910955008, "id_str": "953791371910955008", "name": "Deep Sentinel", "screen_name": "deep_sentinel", "location": "Pleasanton, CA", "url": "http://www.deepsentinel.com", "description": "Deep Sentinel is a pioneer in AI-based protection. The company\u2019s intelligent crime prevention will transform both residential & business security forever.", "protected": False, "followers_count": 493, "friends_count": 210, "listed_count": 6, "created_at": "Thu Jan 18 00:49:18 +0000 2018", "favourites_count": 47, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": False, "statuses_count": 398, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/953798075885039616/3_hdfPEr_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/953798075885039616/3_hdfPEr_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/953791371910955008/1524267278", "profile_link_color": "220B3A", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 427089628, "id_str": "427089628", "name": "Lex Fridman", "screen_name": "lexfridman", "location": "Boston, MA", "url": "https://lexfridman.com/podcast", "description": "AI researcher. 
Podcast host.", "protected": False, "followers_count": 364047, "friends_count": 2, "listed_count": 2054, "created_at": "Sat Dec 03 03:06:19 +0000 2011", "favourites_count": 4141, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": False, "statuses_count": 1074, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/427089628/1586371087", "profile_link_color": "444444", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 40571569, "id_str": "40571569", "name": "Marcel Khalife", "screen_name": "Marcel_Khalife", "location": "Lebanon", "url": "http://www.marcelkhalife.com", "description": "Marcel Khalife, Lebanese composer, oud master and singer.", "protected": False, "followers_count": 35493, "friends_count": 60, "listed_count": 91, "created_at": "Sun May 17 00:13:21 +0000 2009", "favourites_count": 47, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 7582, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "1A1B1F", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/3214697417/77a3cc356eddbff7c31dde96f2a2b219_normal.jpeg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/3214697417/77a3cc356eddbff7c31dde96f2a2b219_normal.jpeg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/40571569/1408433995", "profile_link_color": "2FC2EF", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "252429", "profile_text_color": "666666", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 724427032826802176, "id_str": "724427032826802176", "name": "Dee W Hock", "screen_name": "deewhock", "location": "email--deehock@comcast.net", "url": "http://www.deewhock.com", "description": "Founder-CEO Emeritus, VISA Inc. 
- - -Author, \"Birth of The Chaordic Age\",\"One From Many,\" and \"Autobiography of a Restless Mind.\" - - - All content is mine.", "protected": False, "followers_count": 3012, "friends_count": 0, "listed_count": 72, "created_at": "Mon Apr 25 02:37:18 +0000 2016", "favourites_count": 0, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": False, "statuses_count": 818, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "F5F8FA", "profile_background_image_url": None, "profile_background_image_url_https": None, "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/728686531926351872/Fh3CDRc-_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/728686531926351872/Fh3CDRc-_normal.jpg", "profile_link_color": "1DA1F2", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": True, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 222520102, "id_str": "222520102", "name": "ChiliSleep", "screen_name": "ChiliSleep", "location": "Mooresville, NC", "url": "http://www.ChiliTechnology.com", "description": "Imagine never being too \ud83d\udd25 or too \u2744\ufe0f when you sleep\ud83e\udd14we did, so you could sleep deeper and longer\ud83d\ude34\ud83d\ude4c\nInvest in your sleep and become a #ChiliSleeper\u23ec", "protected": False, "followers_count": 3055, "friends_count": 1721, "listed_count": 40, "created_at": "Fri Dec 03 17:17:33 +0000 2010", "favourites_count": 845, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 2899, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "E7E7E7", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1313073110648848389/62IgnNuW_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1313073110648848389/62IgnNuW_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/222520102/1557775520", "profile_link_color": "263747", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 554600140, "id_str": "554600140", "name": "Anna Russell", "screen_name": "anna_russell", "location": "London, England", "url": "http://annarussellwrites.com", "description": "Contributing writer @newyorker. Based in London. 
Formerly @WSJ.", "protected": False, "followers_count": 3011, "friends_count": 2006, "listed_count": 99, "created_at": "Sun Apr 15 19:53:52 +0000 2012", "favourites_count": 376, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 634, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1091330538328199169/P0FKooTU_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1091330538328199169/P0FKooTU_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/554600140/1382675618", "profile_link_color": "0084B4", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 245423010, "id_str": "245423010", "name": "Anne Steele", "screen_name": "AnneMarieSteele", "location": "Los Angeles, CA", "url": "http://wsj.com", "description": "Music industry reporter for @WSJ", "protected": False, "followers_count": 1847, "friends_count": 526, "listed_count": 76, "created_at": "Mon Jan 31 17:25:40 +0000 2011", "favourites_count": 776, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 1236, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/551196463561060352/Qeks8azR_normal.jpeg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/551196463561060352/Qeks8azR_normal.jpeg", "profile_link_color": "1DA1F2", "profile_sidebar_border_color": "C0DEED", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": True, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 14740219, "id_str": "14740219", "name": "Amazon Music", "screen_name": "amazonmusic", "location": "Seattle", "url": "https://amzn.to/2P64VhL", "description": "Unlimited access to 70 million songs | Listen in HD + Ultra HD | Podcasts and live streams", "protected": False, "followers_count": 1925223, "friends_count": 3380, "listed_count": 10264, "created_at": "Mon May 12 04:02:08 +0000 2008", "favourites_count": 32935, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 32829, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", 
"profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1238578586396680192/eLXgVJEn_normal.png", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1238578586396680192/eLXgVJEn_normal.png", "profile_banner_url": "https://pbs.twimg.com/profile_banners/14740219/1604509424", "profile_link_color": "669933", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "EBEBEB", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 262310943, "id_str": "262310943", "name": "WWE WrestleMania", "screen_name": "WrestleMania", "location": "", "url": "https://www.wwe.com/shows/wrestlemania", "description": "The official Twitter for @WWE @WrestleMania, April 5, 2020 on @WWENetwork. Follow us for breaking news & updates.", "protected": False, "followers_count": 1566127, "friends_count": 44, "listed_count": 2574, "created_at": "Mon Mar 07 20:16:05 +0000 2011", "favourites_count": 11, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 5704, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "1A1B1F", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme9/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1176240864529719296/kvPnZT8w_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1176240864529719296/kvPnZT8w_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/262310943/1586144006", "profile_link_color": "FF691F", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "252429", "profile_text_color": "666666", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 150490233, "id_str": "150490233", "name": "Stance", "screen_name": "stance", "location": "", "url": "http://www.stance.com", "description": "Uncover The Uncommon", "protected": False, "followers_count": 64968, "friends_count": 20, "listed_count": 306, "created_at": "Tue Jun 01 02:13:32 +0000 2010", "favourites_count": 13391, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 11323, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/722783723825967105/ehRM34Au_normal.jpg", "profile_image_url_https": 
"https://pbs.twimg.com/profile_images/722783723825967105/ehRM34Au_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/150490233/1547665776", "profile_link_color": "238DAD", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "EFEFEF", "profile_text_color": "333333", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 37934006, "id_str": "37934006", "name": "Vuori", "screen_name": "vuoriclothing", "location": "Encinitas, CA", "url": "http://www.vuoriclothing.com", "description": "Vuori makes performance apparel inspired by a coastal California lifestyle, an integration of yoga, surf, art, music, and a strong visionary spirit.", "protected": False, "followers_count": 1917, "friends_count": 261, "listed_count": 29, "created_at": "Tue May 05 14:29:18 +0000 2009", "favourites_count": 990, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": False, "statuses_count": 1966, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "FFFFFF", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme6/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1232454439123841026/4MpqNlN4_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1232454439123841026/4MpqNlN4_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/37934006/1582675011", "profile_link_color": "3B94D9", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "80ABD6", "profile_text_color": "595763", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 772493970, "id_str": "772493970", "name": "Eric S. Yuan", "screen_name": "ericsyuan", "location": "San Jose, CA", "url": "http://www.linkedin.com/pub/eric-s-yuan/0/3b/821/", "description": "Founder & CEO @Zoom_us | Your happiness is my happiness. 
San Jose, CA http://www.zoom.us", "protected": False, "followers_count": 84695, "friends_count": 9603, "listed_count": 552, "created_at": "Tue Aug 21 23:46:38 +0000 2012", "favourites_count": 13556, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 3098, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "C0DEED", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/918616411898908672/QVASJ_NY_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/918616411898908672/QVASJ_NY_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/772493970/1541571294", "profile_link_color": "0084B4", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 6974622, "id_str": "6974622", "name": "roxane gay", "screen_name": "rgay", "location": "The Gaygency, Wakanda ", "url": "http://www.roxanegay.com", "description": "I want a tiny baby elephant. If you clap, I clap back. I write: Ayiti, Untamed State, Bad Feminist, Difficult Women, World of Wakanda, Hunger, Not That Bad.", "protected": False, "followers_count": 797777, "friends_count": 2049, "listed_count": 4330, "created_at": "Wed Jun 20 18:44:41 +0000 2007", "favourites_count": 35418, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 9533, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme15/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme15/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1066623056896901120/rBzWchLh_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1066623056896901120/rBzWchLh_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/6974622/1603825448", "profile_link_color": "D25F9C", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 33394729, "id_str": "33394729", "name": "Trevor Mallard", "screen_name": "SpeakerTrevor", "location": "", "url": "https://www.facebook.com/RtHonTrevorMallard/", "description": "Speaker of @NZParliament. 
Some tweets as Speaker, some just Trevor.\nAuthorised by Trevor Mallard, Parliament Buildings, Wellington.", "protected": False, "followers_count": 15306, "friends_count": 5597, "listed_count": 191, "created_at": "Mon Apr 20 02:50:04 +0000 2009", "favourites_count": 11489, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 31798, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "DD2E44", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1292657374168137729/1E0ijagQ_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1292657374168137729/1E0ijagQ_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/33394729/1442533943", "profile_link_color": "DD2E44", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "DDEEF6", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 25754056, "id_str": "25754056", "name": "Chris Hipkins", "screen_name": "chrishipkins", "location": "New Zealand", "url": "http://www.chrishipkins.org.nz", "description": "MP for Rimutaka. Leader of the House. Minister of Education. Minister of State Services. Authorised by Timothy Grigg, 160 Willis Street, Wellington", "protected": False, "followers_count": 14470, "friends_count": 1534, "listed_count": 183, "created_at": "Sun Mar 22 00:35:27 +0000 2009", "favourites_count": 644, "utc_offset": None, "time_zone": None, "geo_enabled": False, "verified": True, "statuses_count": 5778, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "EBEBEB", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme4/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme4/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/867583151664771072/M9VVHuGw_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/867583151664771072/M9VVHuGw_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/25754056/1495682921", "profile_link_color": "B80009", "profile_sidebar_border_color": "FF0000", "profile_sidebar_fill_color": "FFC2C2", "profile_text_color": "3C3940", "profile_use_background_image": True, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 774325168122437632, "id_str": "774325168122437632", "name": "Literati", "screen_name": "literati", "location": "Austin, TX", "url": "http://Literati.com", "description": "Subscription book clubs. Thoughtful book curation and conversations for every reader. Now for kids and adults. 
\u2728", "protected": False, "followers_count": 2281, "friends_count": 498, "listed_count": 24, "created_at": "Fri Sep 09 19:14:41 +0000 2016", "favourites_count": 803, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 965, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "000000", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/1063462787727294464/BDCajDfT_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/1063462787727294464/BDCajDfT_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/774325168122437632/1603988397", "profile_link_color": "16CAC0", "profile_sidebar_border_color": "000000", "profile_sidebar_fill_color": "000000", "profile_text_color": "000000", "profile_use_background_image": False, "has_extended_profile": False, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 346944092, "id_str": "346944092", "name": "Winston Peters", "screen_name": "winstonpeters", "location": "New Zealand", "url": "http://www.facebook.com/winstonpeters", "description": "Deputy Prime Minister, Leader of New Zealand First. #foreignaffairs #racing #stateownedenterprises Authorised by Winston Peters, Parliament Buildings", "protected": False, "followers_count": 47772, "friends_count": 290, "listed_count": 291, "created_at": "Tue Aug 02 02:15:35 +0000 2011", "favourites_count": 183, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 2888, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "131516", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": True, "profile_image_url": "http://pbs.twimg.com/profile_images/976238423324246017/npjEfDnA_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/976238423324246017/npjEfDnA_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/346944092/1501814191", "profile_link_color": "000000", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "EFEFEF", "profile_text_color": "333333", "profile_use_background_image": False, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}, {"id": 22959763, "id_str": "22959763", "name": "Jacinda Ardern", "screen_name": "jacindaardern", "location": "Auckland, New Zealand", "url": "http://www.labour.org.nz", "description": "Prime Minister of NZ. Leader @nzlabour. Won't tweet what I ate for breakfast-make no promises beyond that. 
Authorised by Timothy Grigg 160 Willis St, Wellington", "protected": False, "followers_count": 694520, "friends_count": 4238, "listed_count": 1799, "created_at": "Thu Mar 05 18:57:11 +0000 2009", "favourites_count": 669, "utc_offset": None, "time_zone": None, "geo_enabled": True, "verified": True, "statuses_count": 6923, "lang": None, "contributors_enabled": False, "is_translator": False, "is_translation_enabled": False, "profile_background_color": "131516", "profile_background_image_url": "http://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme14/bg.gif", "profile_background_tile": False, "profile_image_url": "http://pbs.twimg.com/profile_images/820351342464016384/_otHuDCr_normal.jpg", "profile_image_url_https": "https://pbs.twimg.com/profile_images/820351342464016384/_otHuDCr_normal.jpg", "profile_banner_url": "https://pbs.twimg.com/profile_banners/22959763/1501620205", "profile_link_color": "FF0000", "profile_sidebar_border_color": "FFFFFF", "profile_sidebar_fill_color": "F0F0F0", "profile_text_color": "333333", "profile_use_background_image": True, "has_extended_profile": True, "default_profile": False, "default_profile_image": False, "following": False, "live_following": False, "follow_request_sent": False, "notifications": False, "muting": False, "blocking": False, "blocked_by": False, "translator_type": "none"}], "next_cursor": 1677297401781858089, "next_cursor_str": "1677297401781858089", "previous_cursor": 0, "previous_cursor_str": "0", "total_count": None}
return dic | 27,606.666667 | 82,787 | 0.76304 | 10,794 | 82,820 | 5.569946 | 0.132944 | 0.032068 | 0.026895 | 0.044011 | 0.708094 | 0.696767 | 0.694272 | 0.694272 | 0.686271 | 0.677922 | 0 | 0.089469 | 0.068401 | 82,820 | 3 | 82,788 | 27,606.666667 | 0.689767 | 0 | 0 | 0 | 0 | 12.666667 | 0.734017 | 0.158921 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
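The mocked payload above ends with Twitter's cursor fields (next_cursor, previous_cursor, and their _str twins), the same shape the real friends/followers list endpoints use for paging. A minimal sketch of walking such cursored pages, assuming a hypothetical fetch_page callable that takes a cursor and returns a dict shaped like this mock (user objects under a "users" key):

def walk_cursored_pages(fetch_page):
    # fetch_page is an assumed helper: cursor -> response dict like the mock above.
    cursor = -1  # Twitter's convention for "start at the first page"
    while True:
        page = fetch_page(cursor)
        for user in page.get("users", []):
            yield user
        cursor = page.get("next_cursor", 0)
        if not cursor:  # next_cursor == 0 signals the last page
            break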
586f8addb33eb05389296dd3460e7a80da2e4c8b | 58,133 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/workdocs/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/workdocs/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/workdocs/paginator.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from typing import Dict
from datetime import datetime
from botocore.paginate import Paginator
class DescribeActivities(Paginator):
def paginate(self, AuthenticationToken: str = None, StartTime: datetime = None, EndTime: datetime = None, OrganizationId: str = None, ActivityTypes: str = None, ResourceId: str = None, UserId: str = None, IncludeIndirectActivities: bool = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_activities`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeActivities>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
OrganizationId='string',
ActivityTypes='string',
ResourceId='string',
UserId='string',
IncludeIndirectActivities=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'UserActivities': [
{
'Type': 'DOCUMENT_CHECKED_IN'|'DOCUMENT_CHECKED_OUT'|'DOCUMENT_RENAMED'|'DOCUMENT_VERSION_UPLOADED'|'DOCUMENT_VERSION_DELETED'|'DOCUMENT_VERSION_VIEWED'|'DOCUMENT_VERSION_DOWNLOADED'|'DOCUMENT_RECYCLED'|'DOCUMENT_RESTORED'|'DOCUMENT_REVERTED'|'DOCUMENT_SHARED'|'DOCUMENT_UNSHARED'|'DOCUMENT_SHARE_PERMISSION_CHANGED'|'DOCUMENT_SHAREABLE_LINK_CREATED'|'DOCUMENT_SHAREABLE_LINK_REMOVED'|'DOCUMENT_SHAREABLE_LINK_PERMISSION_CHANGED'|'DOCUMENT_MOVED'|'DOCUMENT_COMMENT_ADDED'|'DOCUMENT_COMMENT_DELETED'|'DOCUMENT_ANNOTATION_ADDED'|'DOCUMENT_ANNOTATION_DELETED'|'FOLDER_CREATED'|'FOLDER_DELETED'|'FOLDER_RENAMED'|'FOLDER_RECYCLED'|'FOLDER_RESTORED'|'FOLDER_SHARED'|'FOLDER_UNSHARED'|'FOLDER_SHARE_PERMISSION_CHANGED'|'FOLDER_SHAREABLE_LINK_CREATED'|'FOLDER_SHAREABLE_LINK_REMOVED'|'FOLDER_SHAREABLE_LINK_PERMISSION_CHANGED'|'FOLDER_MOVED',
'TimeStamp': datetime(2015, 1, 1),
'IsIndirectActivity': True|False,
'OrganizationId': 'string',
'Initiator': {
'Id': 'string',
'Username': 'string',
'GivenName': 'string',
'Surname': 'string',
'EmailAddress': 'string'
},
'Participants': {
'Users': [
{
'Id': 'string',
'Username': 'string',
'GivenName': 'string',
'Surname': 'string',
'EmailAddress': 'string'
},
],
'Groups': [
{
'Id': 'string',
'Name': 'string'
},
]
},
'ResourceMetadata': {
'Type': 'FOLDER'|'DOCUMENT',
'Name': 'string',
'OriginalName': 'string',
'Id': 'string',
'VersionId': 'string',
'Owner': {
'Id': 'string',
'Username': 'string',
'GivenName': 'string',
'Surname': 'string',
'EmailAddress': 'string'
},
'ParentId': 'string'
},
'OriginalParent': {
'Type': 'FOLDER'|'DOCUMENT',
'Name': 'string',
'OriginalName': 'string',
'Id': 'string',
'VersionId': 'string',
'Owner': {
'Id': 'string',
'Username': 'string',
'GivenName': 'string',
'Surname': 'string',
'EmailAddress': 'string'
},
'ParentId': 'string'
},
'CommentMetadata': {
'CommentId': 'string',
'Contributor': {
'Id': 'string',
'Username': 'string',
'EmailAddress': 'string',
'GivenName': 'string',
'Surname': 'string',
'OrganizationId': 'string',
'RootFolderId': 'string',
'RecycleBinFolderId': 'string',
'Status': 'ACTIVE'|'INACTIVE'|'PENDING',
'Type': 'USER'|'ADMIN'|'POWERUSER'|'MINIMALUSER'|'WORKSPACESUSER',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'TimeZoneId': 'string',
'Locale': 'en'|'fr'|'ko'|'de'|'es'|'ja'|'ru'|'zh_CN'|'zh_TW'|'pt_BR'|'default',
'Storage': {
'StorageUtilizedInBytes': 123,
'StorageRule': {
'StorageAllocatedInBytes': 123,
'StorageType': 'UNLIMITED'|'QUOTA'
}
}
},
'CreatedTimestamp': datetime(2015, 1, 1),
'CommentStatus': 'DRAFT'|'PUBLISHED'|'DELETED',
'RecipientId': 'string'
}
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **UserActivities** *(list) --*
The list of activities for the specified user and time period.
- *(dict) --*
Describes the activity information.
- **Type** *(string) --*
The activity type.
- **TimeStamp** *(datetime) --*
The timestamp when the action was performed.
- **IsIndirectActivity** *(boolean) --*
Indicates whether an activity is indirect or direct. An indirect activity results from a direct activity performed on a parent resource. For example, sharing a parent folder (the direct activity) shares all of the subfolders and documents within the parent folder (the indirect activity).
- **OrganizationId** *(string) --*
The ID of the organization.
- **Initiator** *(dict) --*
The user who performed the action.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The name of the user.
- **GivenName** *(string) --*
The given name of the user before a rename operation.
- **Surname** *(string) --*
The surname of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **Participants** *(dict) --*
The list of users or groups impacted by this action. This is an optional field and is filled for the following sharing activities: DOCUMENT_SHARED, DOCUMENT_UNSHARED, FOLDER_SHARED, FOLDER_UNSHARED.
- **Users** *(list) --*
The list of users.
- *(dict) --*
Describes the metadata of the user.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The name of the user.
- **GivenName** *(string) --*
The given name of the user before a rename operation.
- **Surname** *(string) --*
The surname of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **Groups** *(list) --*
The list of user groups.
- *(dict) --*
Describes the metadata of a user group.
- **Id** *(string) --*
The ID of the user group.
- **Name** *(string) --*
The name of the group.
- **ResourceMetadata** *(dict) --*
The metadata of the resource involved in the user action.
- **Type** *(string) --*
The type of resource.
- **Name** *(string) --*
The name of the resource.
- **OriginalName** *(string) --*
The original name of the resource before a rename operation.
- **Id** *(string) --*
The ID of the resource.
- **VersionId** *(string) --*
The version ID of the resource. This is an optional field and is filled for actions on a document version.
- **Owner** *(dict) --*
The owner of the resource.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The name of the user.
- **GivenName** *(string) --*
The given name of the user before a rename operation.
- **Surname** *(string) --*
The surname of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **ParentId** *(string) --*
The parent ID of the resource before a rename operation.
- **OriginalParent** *(dict) --*
The original parent of the resource. This is an optional field and is filled for move activities.
- **Type** *(string) --*
The type of resource.
- **Name** *(string) --*
The name of the resource.
- **OriginalName** *(string) --*
The original name of the resource before a rename operation.
- **Id** *(string) --*
The ID of the resource.
- **VersionId** *(string) --*
The version ID of the resource. This is an optional field and is filled for actions on a document version.
- **Owner** *(dict) --*
The owner of the resource.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The name of the user.
- **GivenName** *(string) --*
The given name of the user before a rename operation.
- **Surname** *(string) --*
The surname of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **ParentId** *(string) --*
The parent ID of the resource before a rename operation.
- **CommentMetadata** *(dict) --*
Metadata of the commenting activity. This is an optional field and is filled for commenting activities.
- **CommentId** *(string) --*
The ID of the comment.
- **Contributor** *(dict) --*
The user who made the comment.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The login name of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **GivenName** *(string) --*
The given name of the user.
- **Surname** *(string) --*
The surname of the user.
- **OrganizationId** *(string) --*
The ID of the organization.
- **RootFolderId** *(string) --*
The ID of the root folder.
- **RecycleBinFolderId** *(string) --*
The ID of the recycle bin folder.
- **Status** *(string) --*
The status of the user.
- **Type** *(string) --*
The type of user.
- **CreatedTimestamp** *(datetime) --*
The time when the user was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the user was modified.
- **TimeZoneId** *(string) --*
The time zone ID of the user.
- **Locale** *(string) --*
The locale of the user.
- **Storage** *(dict) --*
The storage for the user.
- **StorageUtilizedInBytes** *(integer) --*
The amount of storage used, in bytes.
- **StorageRule** *(dict) --*
The storage for a user.
- **StorageAllocatedInBytes** *(integer) --*
The amount of storage allocated, in bytes.
- **StorageType** *(string) --*
The type of storage.
- **CreatedTimestamp** *(datetime) --*
The timestamp that the comment was created.
- **CommentStatus** *(string) --*
The status of the comment.
- **RecipientId** *(string) --*
The ID of the user being replied to.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type StartTime: datetime
:param StartTime:
The timestamp that determines the starting time of the activities. The response includes the activities performed after the specified timestamp.
:type EndTime: datetime
:param EndTime:
The timestamp that determines the end time of the activities. The response includes the activities performed before the specified timestamp.
:type OrganizationId: string
:param OrganizationId:
The ID of the organization. This is a mandatory parameter when using administrative API (SigV4) requests.
:type ActivityTypes: string
:param ActivityTypes:
Specifies which activity types to include in the response. If this field is left empty, all activity types are returned.
:type ResourceId: string
:param ResourceId:
The document or folder ID for which to describe activity types.
:type UserId: string
:param UserId:
The ID of the user who performed the action. The response includes activities pertaining to this user. This is an optional parameter and is only applicable for administrative API (SigV4) requests.
:type IncludeIndirectActivities: boolean
:param IncludeIndirectActivities:
Includes indirect activities. An indirect activity results from a direct activity performed on a parent resource. For example, sharing a parent folder (the direct activity) shares all of the subfolders and documents within the parent folder (the indirect activity).
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
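For example, to resume a previous listing from the point where it stopped (the token value here is an illustrative placeholder for a real ``NextToken``)::
    response_iterator = paginator.paginate(
        PaginationConfig={
            'PageSize': 123,
            'StartingToken': 'NextToken-from-a-previous-response'
        }
    )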
:rtype: dict
:returns:
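**Example**
A minimal, hedged usage sketch; the organization ID below is a placeholder, not a real value:
::
    import boto3
    from datetime import datetime, timedelta
    paginator = boto3.client('workdocs').get_paginator('describe_activities')
    pages = paginator.paginate(
        OrganizationId='d-1234567890',
        StartTime=datetime.utcnow() - timedelta(days=7)
    )
    for page in pages:
        for activity in page.get('UserActivities', []):
            print(activity.get('Type'), activity.get('TimeStamp'))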
"""
pass
class DescribeComments(Paginator):
def paginate(self, DocumentId: str, VersionId: str, AuthenticationToken: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_comments`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeComments>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
DocumentId='string',
VersionId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Comments': [
{
'CommentId': 'string',
'ParentId': 'string',
'ThreadId': 'string',
'Text': 'string',
'Contributor': {
'Id': 'string',
'Username': 'string',
'EmailAddress': 'string',
'GivenName': 'string',
'Surname': 'string',
'OrganizationId': 'string',
'RootFolderId': 'string',
'RecycleBinFolderId': 'string',
'Status': 'ACTIVE'|'INACTIVE'|'PENDING',
'Type': 'USER'|'ADMIN'|'POWERUSER'|'MINIMALUSER'|'WORKSPACESUSER',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'TimeZoneId': 'string',
'Locale': 'en'|'fr'|'ko'|'de'|'es'|'ja'|'ru'|'zh_CN'|'zh_TW'|'pt_BR'|'default',
'Storage': {
'StorageUtilizedInBytes': 123,
'StorageRule': {
'StorageAllocatedInBytes': 123,
'StorageType': 'UNLIMITED'|'QUOTA'
}
}
},
'CreatedTimestamp': datetime(2015, 1, 1),
'Status': 'DRAFT'|'PUBLISHED'|'DELETED',
'Visibility': 'PUBLIC'|'PRIVATE',
'RecipientId': 'string'
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Comments** *(list) --*
The list of comments for the specified document version.
- *(dict) --*
Describes a comment.
- **CommentId** *(string) --*
The ID of the comment.
- **ParentId** *(string) --*
The ID of the parent comment.
- **ThreadId** *(string) --*
The ID of the root comment in the thread.
- **Text** *(string) --*
The text of the comment.
- **Contributor** *(dict) --*
The details of the user who made the comment.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The login name of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **GivenName** *(string) --*
The given name of the user.
- **Surname** *(string) --*
The surname of the user.
- **OrganizationId** *(string) --*
The ID of the organization.
- **RootFolderId** *(string) --*
The ID of the root folder.
- **RecycleBinFolderId** *(string) --*
The ID of the recycle bin folder.
- **Status** *(string) --*
The status of the user.
- **Type** *(string) --*
The type of user.
- **CreatedTimestamp** *(datetime) --*
The time when the user was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the user was modified.
- **TimeZoneId** *(string) --*
The time zone ID of the user.
- **Locale** *(string) --*
The locale of the user.
- **Storage** *(dict) --*
The storage for the user.
- **StorageUtilizedInBytes** *(integer) --*
The amount of storage used, in bytes.
- **StorageRule** *(dict) --*
The storage for a user.
- **StorageAllocatedInBytes** *(integer) --*
The amount of storage allocated, in bytes.
- **StorageType** *(string) --*
The type of storage.
- **CreatedTimestamp** *(datetime) --*
The time that the comment was created.
- **Status** *(string) --*
The status of the comment.
- **Visibility** *(string) --*
The visibility of the comment. Options are either PRIVATE, where the comment is visible only to the comment author and document owner and co-owners, or PUBLIC, where the comment is visible to document owners, co-owners, and contributors.
- **RecipientId** *(string) --*
If the comment is a reply to another user's comment, this field contains the user ID of the user being replied to.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type DocumentId: string
:param DocumentId: **[REQUIRED]**
The ID of the document.
:type VersionId: string
:param VersionId: **[REQUIRED]**
The ID of the document version.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
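**Example**
A minimal, hedged usage sketch; the document and version IDs are placeholders:
::
    import boto3
    paginator = boto3.client('workdocs').get_paginator('describe_comments')
    for page in paginator.paginate(DocumentId='doc-id', VersionId='ver-id'):
        for comment in page.get('Comments', []):
            print(comment['CommentId'], comment['Status'])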
"""
pass
class DescribeDocumentVersions(Paginator):
def paginate(self, DocumentId: str, AuthenticationToken: str = None, Include: str = None, Fields: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_document_versions`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeDocumentVersions>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
DocumentId='string',
Include='string',
Fields='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'DocumentVersions': [
{
'Id': 'string',
'Name': 'string',
'ContentType': 'string',
'Size': 123,
'Signature': 'string',
'Status': 'INITIALIZED'|'ACTIVE',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'ContentCreatedTimestamp': datetime(2015, 1, 1),
'ContentModifiedTimestamp': datetime(2015, 1, 1),
'CreatorId': 'string',
'Thumbnail': {
'string': 'string'
},
'Source': {
'string': 'string'
}
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **DocumentVersions** *(list) --*
The document versions.
- *(dict) --*
Describes a version of a document.
- **Id** *(string) --*
The ID of the version.
- **Name** *(string) --*
The name of the version.
- **ContentType** *(string) --*
The content type of the document.
- **Size** *(integer) --*
The size of the document, in bytes.
- **Signature** *(string) --*
The signature of the document.
- **Status** *(string) --*
The status of the document.
- **CreatedTimestamp** *(datetime) --*
The timestamp when the document was first uploaded.
- **ModifiedTimestamp** *(datetime) --*
The timestamp when the document was last uploaded.
- **ContentCreatedTimestamp** *(datetime) --*
The timestamp when the content of the document was originally created.
- **ContentModifiedTimestamp** *(datetime) --*
The timestamp when the content of the document was modified.
- **CreatorId** *(string) --*
The ID of the creator.
- **Thumbnail** *(dict) --*
The thumbnail of the document.
- *(string) --*
- *(string) --*
- **Source** *(dict) --*
The source of the document.
- *(string) --*
- *(string) --*
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type DocumentId: string
:param DocumentId: **[REQUIRED]**
The ID of the document.
:type Include: string
:param Include:
A comma-separated list of values. Specify "INITIALIZED" to include incomplete versions.
:type Fields: string
:param Fields:
Specify "SOURCE" to include initialized versions and a URL for the source document.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
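**Example**
A hedged sketch (placeholder document ID) that also requests source download URLs:
::
    import boto3
    paginator = boto3.client('workdocs').get_paginator('describe_document_versions')
    for page in paginator.paginate(DocumentId='doc-id', Fields='SOURCE'):
        for version in page.get('DocumentVersions', []):
            print(version['Id'], version.get('Source', {}))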
"""
pass
class DescribeFolderContents(Paginator):
def paginate(self, FolderId: str, AuthenticationToken: str = None, Sort: str = None, Order: str = None, Type: str = None, Include: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_folder_contents`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeFolderContents>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
FolderId='string',
Sort='DATE'|'NAME',
Order='ASCENDING'|'DESCENDING',
Type='ALL'|'DOCUMENT'|'FOLDER',
Include='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Folders': [
{
'Id': 'string',
'Name': 'string',
'CreatorId': 'string',
'ParentFolderId': 'string',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'ResourceState': 'ACTIVE'|'RESTORING'|'RECYCLING'|'RECYCLED',
'Signature': 'string',
'Labels': [
'string',
],
'Size': 123,
'LatestVersionSize': 123
},
],
'Documents': [
{
'Id': 'string',
'CreatorId': 'string',
'ParentFolderId': 'string',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'LatestVersionMetadata': {
'Id': 'string',
'Name': 'string',
'ContentType': 'string',
'Size': 123,
'Signature': 'string',
'Status': 'INITIALIZED'|'ACTIVE',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'ContentCreatedTimestamp': datetime(2015, 1, 1),
'ContentModifiedTimestamp': datetime(2015, 1, 1),
'CreatorId': 'string',
'Thumbnail': {
'string': 'string'
},
'Source': {
'string': 'string'
}
},
'ResourceState': 'ACTIVE'|'RESTORING'|'RECYCLING'|'RECYCLED',
'Labels': [
'string',
]
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Folders** *(list) --*
The subfolders in the specified folder.
- *(dict) --*
Describes a folder.
- **Id** *(string) --*
The ID of the folder.
- **Name** *(string) --*
The name of the folder.
- **CreatorId** *(string) --*
The ID of the creator.
- **ParentFolderId** *(string) --*
The ID of the parent folder.
- **CreatedTimestamp** *(datetime) --*
The time when the folder was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the folder was updated.
- **ResourceState** *(string) --*
The resource state of the folder.
- **Signature** *(string) --*
The unique identifier created from the subfolders and documents of the folder.
- **Labels** *(list) --*
List of labels on the folder.
- *(string) --*
- **Size** *(integer) --*
The size of the folder metadata.
- **LatestVersionSize** *(integer) --*
The size of the latest version of the folder metadata.
- **Documents** *(list) --*
The documents in the specified folder.
- *(dict) --*
Describes the document.
- **Id** *(string) --*
The ID of the document.
- **CreatorId** *(string) --*
The ID of the creator.
- **ParentFolderId** *(string) --*
The ID of the parent folder.
- **CreatedTimestamp** *(datetime) --*
The time when the document was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the document was updated.
- **LatestVersionMetadata** *(dict) --*
The latest version of the document.
- **Id** *(string) --*
The ID of the version.
- **Name** *(string) --*
The name of the version.
- **ContentType** *(string) --*
The content type of the document.
- **Size** *(integer) --*
The size of the document, in bytes.
- **Signature** *(string) --*
The signature of the document.
- **Status** *(string) --*
The status of the document.
- **CreatedTimestamp** *(datetime) --*
The timestamp when the document was first uploaded.
- **ModifiedTimestamp** *(datetime) --*
The timestamp when the document was last uploaded.
- **ContentCreatedTimestamp** *(datetime) --*
The timestamp when the content of the document was originally created.
- **ContentModifiedTimestamp** *(datetime) --*
The timestamp when the content of the document was modified.
- **CreatorId** *(string) --*
The ID of the creator.
- **Thumbnail** *(dict) --*
The thumbnail of the document.
- *(string) --*
- *(string) --*
- **Source** *(dict) --*
The source of the document.
- *(string) --*
- *(string) --*
- **ResourceState** *(string) --*
The resource state.
- **Labels** *(list) --*
List of labels on the document.
- *(string) --*
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type FolderId: string
:param FolderId: **[REQUIRED]**
The ID of the folder.
:type Sort: string
:param Sort:
The sorting criteria.
:type Order: string
:param Order:
The order for the contents of the folder.
:type Type: string
:param Type:
The type of items.
:type Include: string
:param Include:
The contents to include. Specify "INITIALIZED" to include initialized documents.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
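**Example**
A hedged sketch (placeholder folder ID) that lists a folder's documents, newest first:
::
    import boto3
    paginator = boto3.client('workdocs').get_paginator('describe_folder_contents')
    for page in paginator.paginate(FolderId='folder-id', Sort='DATE',
                                   Order='DESCENDING', Type='DOCUMENT'):
        for doc in page.get('Documents', []):
            print(doc['Id'], doc['ModifiedTimestamp'])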
"""
pass
class DescribeGroups(Paginator):
def paginate(self, SearchQuery: str, AuthenticationToken: str = None, OrganizationId: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_groups`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeGroups>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
SearchQuery='string',
OrganizationId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Groups': [
{
'Id': 'string',
'Name': 'string'
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Groups** *(list) --*
The list of groups.
- *(dict) --*
Describes the metadata of a user group.
- **Id** *(string) --*
The ID of the user group.
- **Name** *(string) --*
The name of the group.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type SearchQuery: string
:param SearchQuery: **[REQUIRED]**
A query to describe groups by group name.
:type OrganizationId: string
:param OrganizationId:
The ID of the organization.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class DescribeNotificationSubscriptions(Paginator):
def paginate(self, OrganizationId: str, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_notification_subscriptions`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeNotificationSubscriptions>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
OrganizationId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Subscriptions': [
{
'SubscriptionId': 'string',
'EndPoint': 'string',
'Protocol': 'HTTPS'
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Subscriptions** *(list) --*
The subscriptions.
- *(dict) --*
Describes a subscription.
- **SubscriptionId** *(string) --*
The ID of the subscription.
- **EndPoint** *(string) --*
The endpoint of the subscription.
- **Protocol** *(string) --*
The protocol of the subscription.
- **NextToken** *(string) --*
A token to resume pagination.
:type OrganizationId: string
:param OrganizationId: **[REQUIRED]**
The ID of the organization.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class DescribeResourcePermissions(Paginator):
def paginate(self, ResourceId: str, AuthenticationToken: str = None, PrincipalId: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_resource_permissions`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeResourcePermissions>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
ResourceId='string',
PrincipalId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Principals': [
{
'Id': 'string',
'Type': 'USER'|'GROUP'|'INVITE'|'ANONYMOUS'|'ORGANIZATION',
'Roles': [
{
'Role': 'VIEWER'|'CONTRIBUTOR'|'OWNER'|'COOWNER',
'Type': 'DIRECT'|'INHERITED'
},
]
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Principals** *(list) --*
The principals.
- *(dict) --*
Describes a principal with permissions on the resource.
- **Id** *(string) --*
The ID of the principal.
- **Type** *(string) --*
The type of the principal.
- **Roles** *(list) --*
The permission information for the resource.
- *(dict) --*
Describes the permissions.
- **Role** *(string) --*
The role of the user.
- **Type** *(string) --*
The type of permissions.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type ResourceId: string
:param ResourceId: **[REQUIRED]**
The ID of the resource.
:type PrincipalId: string
:param PrincipalId:
The ID of the principal to filter permissions by.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class DescribeRootFolders(Paginator):
def paginate(self, AuthenticationToken: str, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_root_folders`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeRootFolders>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Folders': [
{
'Id': 'string',
'Name': 'string',
'CreatorId': 'string',
'ParentFolderId': 'string',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'ResourceState': 'ACTIVE'|'RESTORING'|'RECYCLING'|'RECYCLED',
'Signature': 'string',
'Labels': [
'string',
],
'Size': 123,
'LatestVersionSize': 123
},
],
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Folders** *(list) --*
The user's special folders.
- *(dict) --*
Describes a folder.
- **Id** *(string) --*
The ID of the folder.
- **Name** *(string) --*
The name of the folder.
- **CreatorId** *(string) --*
The ID of the creator.
- **ParentFolderId** *(string) --*
The ID of the parent folder.
- **CreatedTimestamp** *(datetime) --*
The time when the folder was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the folder was updated.
- **ResourceState** *(string) --*
The resource state of the folder.
- **Signature** *(string) --*
The unique identifier created from the subfolders and documents of the folder.
- **Labels** *(list) --*
List of labels on the folder.
- *(string) --*
- **Size** *(integer) --*
The size of the folder metadata.
- **LatestVersionSize** *(integer) --*
The size of the latest version of the folder metadata.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken: **[REQUIRED]**
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
"""
pass
class DescribeUsers(Paginator):
def paginate(self, AuthenticationToken: str = None, OrganizationId: str = None, UserIds: str = None, Query: str = None, Include: str = None, Order: str = None, Sort: str = None, Fields: str = None, PaginationConfig: Dict = None) -> Dict:
"""
Creates an iterator that will paginate through responses from :py:meth:`WorkDocs.Client.describe_users`.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/workdocs-2016-05-01/DescribeUsers>`_
**Request Syntax**
::
response_iterator = paginator.paginate(
AuthenticationToken='string',
OrganizationId='string',
UserIds='string',
Query='string',
Include='ALL'|'ACTIVE_PENDING',
Order='ASCENDING'|'DESCENDING',
Sort='USER_NAME'|'FULL_NAME'|'STORAGE_LIMIT'|'USER_STATUS'|'STORAGE_USED',
Fields='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
**Response Syntax**
::
{
'Users': [
{
'Id': 'string',
'Username': 'string',
'EmailAddress': 'string',
'GivenName': 'string',
'Surname': 'string',
'OrganizationId': 'string',
'RootFolderId': 'string',
'RecycleBinFolderId': 'string',
'Status': 'ACTIVE'|'INACTIVE'|'PENDING',
'Type': 'USER'|'ADMIN'|'POWERUSER'|'MINIMALUSER'|'WORKSPACESUSER',
'CreatedTimestamp': datetime(2015, 1, 1),
'ModifiedTimestamp': datetime(2015, 1, 1),
'TimeZoneId': 'string',
'Locale': 'en'|'fr'|'ko'|'de'|'es'|'ja'|'ru'|'zh_CN'|'zh_TW'|'pt_BR'|'default',
'Storage': {
'StorageUtilizedInBytes': 123,
'StorageRule': {
'StorageAllocatedInBytes': 123,
'StorageType': 'UNLIMITED'|'QUOTA'
}
}
},
],
'TotalNumberOfUsers': 123,
'NextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **Users** *(list) --*
The users.
- *(dict) --*
Describes a user.
- **Id** *(string) --*
The ID of the user.
- **Username** *(string) --*
The login name of the user.
- **EmailAddress** *(string) --*
The email address of the user.
- **GivenName** *(string) --*
The given name of the user.
- **Surname** *(string) --*
The surname of the user.
- **OrganizationId** *(string) --*
The ID of the organization.
- **RootFolderId** *(string) --*
The ID of the root folder.
- **RecycleBinFolderId** *(string) --*
The ID of the recycle bin folder.
- **Status** *(string) --*
The status of the user.
- **Type** *(string) --*
The type of user.
- **CreatedTimestamp** *(datetime) --*
The time when the user was created.
- **ModifiedTimestamp** *(datetime) --*
The time when the user was modified.
- **TimeZoneId** *(string) --*
The time zone ID of the user.
- **Locale** *(string) --*
The locale of the user.
- **Storage** *(dict) --*
The storage for the user.
- **StorageUtilizedInBytes** *(integer) --*
The amount of storage used, in bytes.
- **StorageRule** *(dict) --*
The storage for a user.
- **StorageAllocatedInBytes** *(integer) --*
The amount of storage allocated, in bytes.
- **StorageType** *(string) --*
The type of storage.
- **TotalNumberOfUsers** *(integer) --*
The total number of users included in the results.
- **NextToken** *(string) --*
A token to resume pagination.
:type AuthenticationToken: string
:param AuthenticationToken:
Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, that is, when accessing the API using AWS credentials.
:type OrganizationId: string
:param OrganizationId:
The ID of the organization.
:type UserIds: string
:param UserIds:
The IDs of the users.
:type Query: string
:param Query:
A query to filter users by user name.
:type Include: string
:param Include:
The state of the users. Specify "ALL" to include inactive users.
:type Order: string
:param Order:
The order for the results.
:type Sort: string
:param Sort:
The sorting criteria.
:type Fields: string
:param Fields:
A comma-separated list of values. Specify "STORAGE_METADATA" to include the user storage quota and utilization information.
:type PaginationConfig: dict
:param PaginationConfig:
A dictionary that provides parameters to control pagination.
- **MaxItems** *(integer) --*
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a ``NextToken`` will be provided in the output that you can use to resume pagination.
- **PageSize** *(integer) --*
The size of each page.
- **StartingToken** *(string) --*
A token to specify where to start paginating. This is the ``NextToken`` from a previous response.
:rtype: dict
:returns:
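**Example**
A hedged sketch (placeholder organization ID) of bounded pagination; a truncated run can be resumed by passing the returned ``NextToken`` as ``StartingToken``:
::
    import boto3
    paginator = boto3.client('workdocs').get_paginator('describe_users')
    pages = paginator.paginate(
        OrganizationId='d-1234567890',
        Sort='USER_NAME',
        PaginationConfig={'MaxItems': 50, 'PageSize': 25}
    )
    for page in pages:
        for user in page.get('Users', []):
            print(user['Username'])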
"""
pass
| 47.964521 | 858 | 0.464917 | 4,693 | 58,133 | 5.734711 | 0.083102 | 0.028239 | 0.015606 | 0.019322 | 0.798016 | 0.764426 | 0.736521 | 0.719429 | 0.69446 | 0.683127 | 0 | 0.009558 | 0.429481 | 58,133 | 1,211 | 859 | 48.004129 | 0.801906 | 0.803434 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0.3 | 0.1 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
58885bc625757092d8e0c9a856b3b69ef64f5405 | 11,434 | py | Python | cryomem/cmtools/lib/plothyst.py | bebaek/cryomem | 088fba2568d10451adda51a068c15c8c2a73d9ce | [
"MIT"
] | 1 | 2018-09-16T12:29:04.000Z | 2018-09-16T12:29:04.000Z | cryomem/cmtools/lib/plothyst.py | bebaek/cryomem | 088fba2568d10451adda51a068c15c8c2a73d9ce | [
"MIT"
] | null | null | null | cryomem/cmtools/lib/plothyst.py | bebaek/cryomem | 088fba2568d10451adda51a068c15c8c2a73d9ce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Feb 16 17:46:27 2013
@author: linda
"""
import matplotlib.pyplot as plt
import numpy as np
import copy
# plot hysteretic 1-d data
def plothyst_old(x, y, color='black', label='data'):
dx = x[1:] - x[:-1]
dxcorr = dx[1:]*dx[:-1]
iturn = (dxcorr<0).nonzero()[0] + 1
iturn = np.hstack((np.array([0]), iturn, np.array([len(x)-1])))
#self.axes = self.figure.add_subplot(111)
#self.axes.hold(True)
for m in range(len(iturn)-1):
idx = list(range(iturn[m],iturn[m+1])) + [iturn[m+1]]
if m == 0:
plt.plot(x[idx], y[idx], color=color, linewidth=m+1, label=label)
else:
plt.plot(x[idx], y[idx], color=color, linewidth=m+1)
# general purpose Feb 2017
def plothyst(*args, **kwargs):
"""Plot hysteretic y(x)
keywords:
sglcolor
colors, markers, mfcolors -- [<down sweep>, <up sweep>]
any keyword for pyplot.plot()
"""
# list args
if not hasattr(args[0], '__iter__'):
plothyst_old2(*args, **kwargs) # backward compatible
return 1
else:
x, y = args[:2]
ax = plt.gca()
# keyword args
plotparam = kwargs
sglcolor = plotparam.get('sglcolor', False)
if sglcolor:
color = plotparam.get('c', plotparam.get('color', 'b'))
colors = plotparam.get('colors', ['b', 'r'])
markers = plotparam.get('markers', ['s-', 'o-'])
mfcolors = plotparam.get('mfcolors', ['w', 'w'])
if 'c' in plotparam: del plotparam['c']
if 'color' in plotparam: del plotparam['color']
if 'sglcolor' in plotparam: del plotparam['sglcolor']
if 'autocolor' in plotparam: del plotparam['autocolor']
if 'colors' in plotparam: del plotparam['colors']
if 'markers' in plotparam: del plotparam['markers']
if 'mfcolors' in plotparam: del plotparam['mfcolors']
if not 'ms' in plotparam: plotparam['ms'] = 6
if not 'mew' in plotparam: plotparam['mew'] = 1.6
if not 'alpha' in plotparam: plotparam['alpha'] = 1
# split different sweep directions
#~ dx = x[1:] - x[:-1]
#~ dxcorr = dx[1:]*dx[:-1]
#~ iturn = (dxcorr<0).nonzero()[0] + 1
#~ iturn = np.hstack((np.array([0]), iturn, np.array([len(x)-1])))
#~ print(iturn)
iturn = [0]
prevsign = float(np.sign(x[1]-x[0]))
for i in range(2,len(x)):
thissign = float(np.sign(x[i]-x[i-1]))
if thissign == -prevsign and thissign != 0:
iturn.append(i-1)
prevsign = thissign
if not i in iturn:
iturn.append(i)
#print(iturn)
#iturn = np.array(iturn)
# plot
#self.axes = self.figure.add_subplot(111)
#self.axes.hold(True)
ax.set_prop_cycle(None) # reset the color cycle (set_color_cycle was removed from matplotlib); windows bug?
for m in range(len(iturn)-1):
idx = list(range(iturn[m],iturn[m+1])) + list([iturn[m+1]])
if m == 0: # 1st segment
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
if sglcolor:
plotparam['color'] = color # windows prefers 'color' to 'c'?
plotparam['mec'] = color
else:
plotparam['color'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
if 'label' in plotparam: del plotparam['label']
# mark the first data point
plotparam0 = copy.deepcopy(plotparam)
plotparam0['mew'] = 2.4
plotparam0['ms'] = plotparam['ms']*2.2
ax.plot(x[idx[0]], y[idx[0]], 'x', **plotparam0)
else: # the rest after the 1st segment
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
if sglcolor:
plotparam['color'] = color
plotparam['mec'] = color
else:
plotparam['color'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
# general purpose (obsolete)
def plothyst_old2(ax, x, y, **plotparam):
"""Plot hysteretic y(x)
ax -- axes
keywords:
sglcolor
colors, markers, mfcolors -- [<down sweep>, <up sweep>]
any keyword for pyplot.plot()
"""
# plot parameters
sglcolor = plotparam.get('sglcolor', False)
if sglcolor:
color = plotparam.get('c', plotparam.get('color', 'b'))
colors = plotparam.get('colors', ['b', 'r'])
markers = plotparam.get('markers', ['s-', 'o-'])
mfcolors = plotparam.get('mfcolors', ['w', 'w'])
if 'c' in plotparam: del plotparam['c']
if 'color' in plotparam: del plotparam['color']
if 'sglcolor' in plotparam: del plotparam['sglcolor']
if 'autocolor' in plotparam: del plotparam['autocolor']
if 'colors' in plotparam: del plotparam['colors']
if 'markers' in plotparam: del plotparam['markers']
if 'mfcolors' in plotparam: del plotparam['mfcolors']
if not 'ms' in plotparam: plotparam['ms'] = 6
if not 'mew' in plotparam: plotparam['mew'] = 1.6
if not 'alpha' in plotparam: plotparam['alpha'] = 1
# split different sweep directions
#~ dx = x[1:] - x[:-1]
#~ dxcorr = dx[1:]*dx[:-1]
#~ iturn = (dxcorr<0).nonzero()[0] + 1
#~ iturn = np.hstack((np.array([0]), iturn, np.array([len(x)-1])))
#~ print(iturn)
iturn = [0]
prevsign = float(np.sign(x[1]-x[0]))
for i in range(2,len(x)):
thissign = float(np.sign(x[i]-x[i-1]))
if thissign == -prevsign and thissign != 0:
iturn.append(i-1)
prevsign = thissign
if not i in iturn:
iturn.append(i)
#print(iturn)
#iturn = np.array(iturn)
# plot
#self.axes = self.figure.add_subplot(111)
#self.axes.hold(True)
ax.set_prop_cycle(None) # reset the color cycle (set_color_cycle was removed from matplotlib); windows bug?
for m in range(len(iturn)-1):
idx = list(range(iturn[m],iturn[m+1])) + list([iturn[m+1]])
if m == 0: # 1st segment
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
if sglcolor:
plotparam['color'] = color # windows prefers 'color' to 'c'?
plotparam['mec'] = color
else:
plotparam['color'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
if 'label' in plotparam: del plotparam['label']
# mark the first data point
plotparam0 = copy.deepcopy(plotparam)
plotparam0['mew'] = 2.4
plotparam0['ms'] = plotparam['ms']*2.2
ax.plot(x[idx[0]], y[idx[0]], 'x', **plotparam0)
else: # the rest after the 1st segment
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
if sglcolor:
plotparam['color'] = color
plotparam['mec'] = color
else:
plotparam['color'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
# deprecated by plothyst 2/24/15
def plothystcolor(ax, x, y, **plotparam):
# plot parameters
colors = plotparam.get('colors', ['b', 'r'])
markers = plotparam.get('markers', ['s-', 'o-'])
mfcolors = plotparam.get('mfcolors', ['w', 'w'])
if 'colors' in plotparam: del plotparam['colors']
if 'markers' in plotparam: del plotparam['markers']
if 'mfcolors' in plotparam: del plotparam['mfcolors']
if not 'ms' in plotparam: plotparam['ms'] = 6
if not 'mew' in plotparam: plotparam['mew'] = 1.6
if not 'alpha' in plotparam: plotparam['alpha'] = 1
# split different sweep directions
#~ dx = x[1:] - x[:-1]
#~ dxcorr = dx[1:]*dx[:-1]
#~ iturn = (dxcorr<0).nonzero()[0] + 1
#~ iturn = np.hstack((np.array([0]), iturn, np.array([len(x)-1])))
#~ print(iturn)
iturn = [0]
prevsign = np.sign(x[1]-x[0])
for i in range(2,len(x)):
thissign = np.sign(x[i]-x[i-1])
if thissign == -prevsign:
iturn.append(i)
prevsign = thissign
#iturn = np.array(iturn)
# plot
#self.axes = self.figure.add_subplot(111)
#self.axes.hold(True)
for m in range(len(iturn)-1):
idx = list(range(iturn[m],iturn[m+1])) + list([iturn[m+1]])
if m == 0:
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
plotparam['c'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
# mark the first data point
plotparam0 = copy.deepcopy(plotparam)
plotparam0['mew'] = 2.4
plotparam0['ms'] = plotparam['ms']*2.2
ax.plot(x[idx[0]], y[idx[0]], 'x', **plotparam0)
else:
isw = 1 if (x[idx[0]] < x[idx[-1]]) else 0 # sweep up or down?
mk = markers[isw]
plotparam['c'] = colors[isw]
plotparam['mec'] = colors[isw]
plotparam['mfc'] = mfcolors[isw]
ax.plot(x[idx], y[idx], mk, **plotparam)
def plothystcolor_old(ax, x,y,colors=['b','r'],markers=['s-','o-'],label='data',\
mfcolor=['w','w'], msize=6):
dx = x[1:] - x[:-1]
dxcorr = dx[1:]*dx[:-1]
iturn = (dxcorr<0).nonzero()[0] + 1
iturn = np.hstack((np.array([0]), iturn, np.array([len(x)-1])))
#self.axes = self.figure.add_subplot(111)
#self.axes.hold(True)
for m in range(len(iturn)-1):
idx = list(range(iturn[m],iturn[m+1])) + list([iturn[m+1]])
if m == 0:
if (x[idx[0]] < x[idx[-1]]): # choose color based on x direction
col = colors[1]; mk = markers[1]; mfcc = mfcolor[1]
else:
col = colors[0]; mk = markers[0]; mfcc = mfcolor[0]
ax.plot(x[idx], y[idx], mk, alpha=1,mfc=mfcc,c=col,\
mec=col,mew=1,ms=msize, label=label)
ax.plot(x[idx[0]], y[idx[0]], 'x', alpha=1,mfc=mfcc,c=col,\
mec=col,mew=2.4,ms=msize*2.2)
else:
if (x[idx[0]] < x[idx[-1]]): # choose color based on x direction
col = colors[1]; mk = markers[1]; mfcc = mfcolor[1]
else:
col = colors[0]; mk = markers[0]; mfcc = mfcolor[0]
ax.plot(x[idx], y[idx], mk, alpha=1,mfc=mfcc,c=col,\
mec=col,mew=1,ms=msize)
def plothystcolor2(x, y, colors=['blue','red'], label='data', markersize=6):
dx = x[1:] - x[:-1]
iinc = (dx>0).nonzero()[0]
idec = (dx<0).nonzero()[0]
plt.plot(x[iinc], y[iinc], 'o', alpha=1,mfc='white',mec=colors[0],mew=1,ms=markersize, label=label)
plt.plot(x[idec], y[idec], 'o', alpha=1,mfc='white',mec=colors[1],mew=1,ms=markersize)
def plothystcolor3(x, y, marker='o', colors=['blue','red'], mfc='white', mew=1,\
**params):
dx = x[1:] - x[:-1]
iinc = (dx>0).nonzero()[0]
idec = (dx<0).nonzero()[0]
plt.plot(x[iinc], y[iinc], marker, mec=colors[1], mfc=mfc,mew=mew,**params)
plt.plot(x[idec], y[idec], marker, mec=colors[0], mfc=mfc,mew=mew,**params)
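# Hedged, minimal usage sketch (not part of the original module): the sweep
# data below is invented purely to exercise plothyst() on a hysteresis-like
# loop, with the up and down sweeps drawn in different colors.
if __name__ == '__main__':
    xs = np.concatenate([np.linspace(-1, 1, 21), np.linspace(1, -1, 21)])
    # shift the response by sweep direction to open up a loop
    ys = np.tanh(3*(xs - 0.2*np.sign(np.gradient(xs))))
    plothyst(xs, ys, label='demo loop')
    plt.legend()
    plt.show()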
| 38.628378 | 103 | 0.534284 | 1,586 | 11,434 | 3.84111 | 0.100252 | 0.019698 | 0.043664 | 0.071733 | 0.87262 | 0.861129 | 0.853414 | 0.845535 | 0.839462 | 0.832403 | 0 | 0.030647 | 0.283715 | 11,434 | 295 | 104 | 38.759322 | 0.713187 | 0.170107 | 0 | 0.84058 | 0 | 0 | 0.063668 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033816 | false | 0 | 0.014493 | 0 | 0.05314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
54349ac36d63b583d2cecddc186d7de441efc818 | 8,814 | py | Python | tests/test_jitterbuffer.py | thedilletante/aiortc | c0504b6962484ac26ba8ad065191794ac6f607a4 | [
"BSD-3-Clause"
] | 1,021 | 2018-02-28T07:56:06.000Z | 2022-03-15T04:45:57.000Z | tests/test_jitterbuffer.py | thedilletante/aiortc | c0504b6962484ac26ba8ad065191794ac6f607a4 | [
"BSD-3-Clause"
] | 137 | 2018-02-28T08:00:16.000Z | 2019-01-29T12:59:50.000Z | tests/test_jitterbuffer.py | thedilletante/aiortc | c0504b6962484ac26ba8ad065191794ac6f607a4 | [
"BSD-3-Clause"
] | 149 | 2018-03-08T08:23:51.000Z | 2022-03-22T16:45:29.000Z | from unittest import TestCase
from aiortc.jitterbuffer import JitterBuffer
from aiortc.rtp import RtpPacket
class JitterBufferTest(TestCase):
def assertPackets(self, jbuffer, expected):
found = [x.sequence_number if x else None for x in jbuffer._packets]
self.assertEqual(found, expected)
def test_create(self):
jbuffer = JitterBuffer(capacity=2)
self.assertEqual(jbuffer._packets, [None, None])
self.assertEqual(jbuffer._origin, None)
jbuffer = JitterBuffer(capacity=4)
self.assertEqual(jbuffer._packets, [None, None, None, None])
self.assertEqual(jbuffer._origin, None)
def test_add_ordered(self):
jbuffer = JitterBuffer(capacity=4)
frame = jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [0, None, None, None])
self.assertEqual(jbuffer._origin, 0)
frame = jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [0, 1, None, None])
self.assertEqual(jbuffer._origin, 0)
frame = jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [0, 1, 2, None])
self.assertEqual(jbuffer._origin, 0)
frame = jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [0, 1, 2, 3])
self.assertEqual(jbuffer._origin, 0)
def test_add_unordered(self):
jbuffer = JitterBuffer(capacity=4)
frame = jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, 1, None, None])
self.assertEqual(jbuffer._origin, 1)
frame = jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, 1, None, 3])
self.assertEqual(jbuffer._origin, 1)
frame = jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, 1, 2, 3])
self.assertEqual(jbuffer._origin, 1)
def test_add_seq_too_low_drop(self):
jbuffer = JitterBuffer(capacity=4)
frame = jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, None, 2, None])
self.assertEqual(jbuffer._origin, 2)
frame = jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, None, 2, None])
self.assertEqual(jbuffer._origin, 2)
def test_add_seq_too_low_reset(self):
jbuffer = JitterBuffer(capacity=4)
frame = jbuffer.add(RtpPacket(sequence_number=2000, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [2000, None, None, None])
self.assertEqual(jbuffer._origin, 2000)
frame = jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertIsNone(frame)
self.assertPackets(jbuffer, [None, 1, None, None])
self.assertEqual(jbuffer._origin, 1)
def test_add_seq_too_high_discard_one(self):
jbuffer = JitterBuffer(capacity=4)
jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=4, timestamp=1234))
self.assertEqual(jbuffer._origin, 1)
self.assertPackets(jbuffer, [4, 1, 2, 3])
def test_add_seq_too_high_discard_four(self):
jbuffer = JitterBuffer(capacity=4)
jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=7, timestamp=1234))
self.assertEqual(jbuffer._origin, 4)
self.assertPackets(jbuffer, [None, None, None, 7])
def test_add_seq_too_high_discard_more(self):
jbuffer = JitterBuffer(capacity=4)
jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
jbuffer.add(RtpPacket(sequence_number=8, timestamp=1234))
self.assertEqual(jbuffer._origin, 8)
self.assertPackets(jbuffer, [8, None, None, None])
def test_add_seq_too_high_reset(self):
jbuffer = JitterBuffer(capacity=4)
jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
self.assertPackets(jbuffer, [0, None, None, None])
jbuffer.add(RtpPacket(sequence_number=3000, timestamp=1234))
self.assertEqual(jbuffer._origin, 3000)
self.assertPackets(jbuffer, [3000, None, None, None])
def test_remove(self):
jbuffer = JitterBuffer(capacity=4)
jbuffer.add(RtpPacket(sequence_number=0, timestamp=1234))
jbuffer.add(RtpPacket(sequence_number=1, timestamp=1234))
jbuffer.add(RtpPacket(sequence_number=2, timestamp=1234))
jbuffer.add(RtpPacket(sequence_number=3, timestamp=1234))
self.assertEqual(jbuffer._origin, 0)
self.assertPackets(jbuffer, [0, 1, 2, 3])
# remove 1 packet
jbuffer.remove(1)
self.assertEqual(jbuffer._origin, 1)
self.assertPackets(jbuffer, [None, 1, 2, 3])
# remove 2 packets
jbuffer.remove(2)
self.assertEqual(jbuffer._origin, 3)
self.assertPackets(jbuffer, [None, None, None, 3])
def test_remove_audio_frame(self):
"""
Audio jitter buffer.
"""
jbuffer = JitterBuffer(capacity=16, prefetch=4)
packet = RtpPacket(sequence_number=0, timestamp=1234)
packet._data = b"0000"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=1, timestamp=1235)
packet._data = b"0001"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=2, timestamp=1236)
packet._data = b"0002"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=3, timestamp=1237)
packet._data = b"0003"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=4, timestamp=1238)
packet._data = b"0003"
frame = jbuffer.add(packet)
self.assertIsNotNone(frame)
self.assertEqual(frame.data, b"0000")
self.assertEqual(frame.timestamp, 1234)
packet = RtpPacket(sequence_number=5, timestamp=1239)
packet._data = b"0004"
frame = jbuffer.add(packet)
self.assertIsNotNone(frame)
self.assertEqual(frame.data, b"0001")
self.assertEqual(frame.timestamp, 1235)
def test_remove_video_frame(self):
"""
Video jitter buffer.
"""
jbuffer = JitterBuffer(capacity=128)
packet = RtpPacket(sequence_number=0, timestamp=1234)
packet._data = b"0000"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=1, timestamp=1234)
packet._data = b"0001"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=2, timestamp=1234)
packet._data = b"0002"
frame = jbuffer.add(packet)
self.assertIsNone(frame)
packet = RtpPacket(sequence_number=3, timestamp=1235)
packet._data = b"0003"
frame = jbuffer.add(packet)
self.assertIsNotNone(frame)
self.assertEqual(frame.data, b"000000010002")
self.assertEqual(frame.timestamp, 1234)
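# Hedged, illustrative sketch (not one of the original tests): with a small
# prefetch, frames should start draining once enough packets have arrived.
# The sequence numbers and payloads below are made up.
def _demo_jitterbuffer():
    jb = JitterBuffer(capacity=16, prefetch=2)
    for seq in range(4):
        packet = RtpPacket(sequence_number=seq, timestamp=1000 + seq)
        packet._data = bytes([seq])
        frame = jb.add(packet)
        if frame is not None:
            print(frame.timestamp, frame.data)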
| 35.829268 | 76 | 0.665305 | 1,017 | 8,814 | 5.634218 | 0.078663 | 0.105061 | 0.168586 | 0.161257 | 0.871728 | 0.831937 | 0.774171 | 0.731239 | 0.686213 | 0.673124 | 0 | 0.056601 | 0.22226 | 8,814 | 245 | 77 | 35.97551 | 0.779285 | 0.008509 | 0 | 0.666667 | 0 | 0 | 0.006904 | 0 | 0 | 0 | 0 | 0 | 0.468927 | 1 | 0.073446 | false | 0 | 0.016949 | 0 | 0.096045 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3fc3400ee10eb1c4f6eee84292d7ff46a7a35017 | 1,812 | py | Python | dusty/systems/docker/files.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 421 | 2015-06-02T16:29:59.000Z | 2021-06-03T18:44:42.000Z | dusty/systems/docker/files.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 404 | 2015-06-02T20:23:42.000Z | 2019-08-21T16:59:41.000Z | dusty/systems/docker/files.py | gamechanger/dusty | dd9778e3a4f0c623209e53e98aa9dc1fe76fc309 | [
"MIT"
] | 16 | 2015-06-16T17:21:02.000Z | 2020-03-27T02:27:09.000Z | from . import exec_in_container, get_container_for_app_or_service
from ...path import parent_dir
def _create_dir_in_container(container, path):
return exec_in_container(container, 'mkdir -p', path)
def _remove_path_in_container(container, path):
return exec_in_container(container, 'rm -rf', path)
def _move_in_container(container, source_path, dest_path):
return exec_in_container(container, 'mv', source_path, dest_path)
def _recursive_copy_in_container(container, source_path, dest_path):
return exec_in_container(container, 'cp -r', source_path, dest_path)
def copy_path_inside_container(app_or_service_name, source_path, dest_path):
container = get_container_for_app_or_service(app_or_service_name, raise_if_not_found=True)
_create_dir_in_container(container, parent_dir(dest_path))
_recursive_copy_in_container(container, source_path, dest_path)
def move_dir_inside_container(app_or_service_name, source_path, dest_path):
container = get_container_for_app_or_service(app_or_service_name, raise_if_not_found=True)
_create_dir_in_container(container, parent_dir(dest_path))
_remove_path_in_container(container, dest_path)
_move_in_container(container, '{}/'.format(source_path), dest_path)
def move_file_inside_container(app_or_service_name, source_path, dest_path):
container = get_container_for_app_or_service(app_or_service_name, raise_if_not_found=True)
_create_dir_in_container(container, parent_dir(dest_path))
_move_in_container(container, source_path, dest_path)
def container_path_exists(app_or_service_name, path):
container = get_container_for_app_or_service(app_or_service_name, raise_if_not_found=True)
return exec_in_container(container, 'sh -c \'[ -e {} ] && echo "yes" || echo "no"\''.format(path)).rstrip() == "yes"
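# Hedged usage sketch (illustrative only; 'myapp' and the paths are assumptions,
# not values from this repo). Left as comments because these helpers touch a
# live container:
#
#   copy_path_inside_container('myapp', '/tmp/seed', '/var/lib/app/seed')
#   if container_path_exists('myapp', '/var/lib/app/seed'):
#       move_dir_inside_container('myapp', '/var/lib/app/seed', '/var/lib/app/data')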
| 47.684211 | 120 | 0.809603 | 274 | 1,812 | 4.79562 | 0.167883 | 0.142314 | 0.243531 | 0.136986 | 0.875951 | 0.783866 | 0.737443 | 0.710046 | 0.670472 | 0.539574 | 0 | 0 | 0.100993 | 1,812 | 37 | 121 | 48.972973 | 0.80663 | 0 | 0 | 0.269231 | 0 | 0 | 0.021523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.076923 | 0.153846 | 0.576923 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
3fef884c2edada2d08ac415078914c42b0a6924e | 124 | py | Python | Code-Collection/Clebsch-Gordan-Coeffs/CG-Series/gordan.py | basavyr/physics-code-collection | 6ce50ec184ff2de081d0ca29e679e54dbb21f592 | [
"MIT"
] | 1 | 2021-04-20T04:49:59.000Z | 2021-04-20T04:49:59.000Z | Code-Collection/Clebsch-Gordan-Coeffs/CG-Series/gordan.py | basavyr/physics-code-collection | 6ce50ec184ff2de081d0ca29e679e54dbb21f592 | [
"MIT"
] | 43 | 2021-01-19T05:02:48.000Z | 2022-03-12T01:07:32.000Z | Code-Collection/Clebsch-Gordan-Coeffs/CG-Series/gordan.py | basavyr/physics-code-collection | 6ce50ec184ff2de081d0ca29e679e54dbb21f592 | [
"MIT"
] | null | null | null | #!/Users/robertpoenaru/.pyenv/shims/python
from sympy.physics.quantum.cg import CG
from sympy import S
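# Hedged usage sketch (not part of the original script): the quantum numbers
# below are illustrative assumptions chosen only to show the CG API.
# CG(j1, m1, j2, m2, j3, m3) builds a symbolic Clebsch-Gordan coefficient,
# and .doit() evaluates it to an exact sympy expression.
cg = CG(S(3)/2, S(3)/2, S(1)/2, -S(1)/2, S(1), S(1))
print(cg.doit())  # sqrt(3)/2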
| 20.666667 | 42 | 0.790323 | 19 | 124 | 5.157895 | 0.631579 | 0.27551 | 0.306122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 124 | 5 | 43 | 24.8 | 0.890909 | 0.330645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b76022543a0a9ad53ffff97dda4b9eec442522ff | 6,856 | py | Python | test/test_indentation.py | zultron/catkin_lint | 7076a3626f5673e58c519346fa52cc78e759d100 | [
"BSD-3-Clause"
] | null | null | null | test/test_indentation.py | zultron/catkin_lint | 7076a3626f5673e58c519346fa52cc78e759d100 | [
"BSD-3-Clause"
] | null | null | null | test/test_indentation.py | zultron/catkin_lint | 7076a3626f5673e58c519346fa52cc78e759d100 | [
"BSD-3-Clause"
] | null | null | null | import unittest
from .helper import create_env, create_manifest, mock_lint
import sys
sys.stderr = sys.stdout
class IndentationTest(unittest.TestCase):
def test_regular(self):
"""Test indentation check for regular command sequences"""
env = create_env()
pkg = create_manifest("mock")
result = mock_lint(env, pkg,
"""
cmd1()
cmd2()
cmd3()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
cmd1()
cmd2()
cmd3()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
cmd1() cmd2()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
def test_macro(self):
"""Test indentation check for sequences with macro calls"""
env = create_env()
pkg = create_manifest("mock")
result = mock_lint(env, pkg,
"""
macro(test)
cmd2()
endmacro()
cmd1()
test()
cmd3()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
macro(test)
if()
cmd()
endif()
endmacro()
cmd1()
test()
cmd3()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
macro(test2)
cmd()
endmacro()
macro(test)
if()
cmd()
test2()
cmd()
endif()
endmacro()
cmd1()
test()
cmd3()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
macro(test4)
cmd()
if()
cmd()
endif()
endmacro()
macro(test3)
test4()
endmacro()
macro(test2)
test3()
if()
if()
if()
cmd()
test3()
endif()
endif()
endif()
endmacro()
macro(test)
test2()
if()
cmd()
test2()
else()
foreach(a b c d e)
test2()
endforeach()
endif()
endmacro()
cmd1()
test()
cmd3()
""", checks=None, indentation=True)
self.assertEqual([], result)
def test_if(self):
"""Test indentation check for if()/else()/endif() blocks"""
env = create_env()
pkg = create_manifest("mock")
result = mock_lint(env, pkg,
"""
cmd()
if()
cmd()
cmd()
else()
cmd()
cmd()
endif()
cmd()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
if()
else()
endif()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
if()
if()
endif()
else()
if()
endif()
endif()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
if()
cmd()
cmd()
endif()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
if()
cmd()
cmd()
endif()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
if()
cmd()
else()
cmd()
endif()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
if()
cmd()
else()
cmd()
endif()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
if()
cmd()
else()
cmd()
endif()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
def test_foreach(self):
"""Test indentation checks for foreach()/endforeach) blocks"""
env = create_env()
pkg = create_manifest("mock")
result = mock_lint(env, pkg,
"""
cmd()
foreach(a 1)
cmd()
cmd()
endforeach()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
foreach(a 1)
cmd()
cmd()
endforeach()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
result = mock_lint(env, pkg,
"""
foreach(a 1)
endforeach()
""", checks=None, indentation=True)
self.assertEqual([], result)
result = mock_lint(env, pkg,
"""
foreach(a 1)
cmd()
endforeach()
""", checks=None, indentation=True)
self.assertEqual(["INDENTATION"], result)
| 28.448133 | 70 | 0.361581 | 478 | 6,856 | 5.115063 | 0.112971 | 0.056442 | 0.108793 | 0.132106 | 0.789775 | 0.756646 | 0.756646 | 0.749284 | 0.746012 | 0.721881 | 0 | 0.009901 | 0.528588 | 6,856 | 240 | 71 | 28.566667 | 0.746597 | 0.031651 | 0 | 0.864865 | 0 | 0 | 0.04293 | 0 | 0 | 0 | 0 | 0 | 0.243243 | 1 | 0.054054 | false | 0 | 0.054054 | 0 | 0.121622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4da105bd6691be03b703c629b909dcf607f6ecc0 | 49 | py | Python | samples/src/main/resources/datasets/python/91.py | sritchie/kotlingrad | 8165ed1cd77220a5347c58cded4c6f2bcf22ee30 | [
"Apache-2.0"
] | 11 | 2020-12-19T01:19:44.000Z | 2021-12-25T20:43:33.000Z | src/main/resources/datasets/python/91.py | breandan/katholic | 081c39f3acc73ff41f5865563debe78a36e1038f | [
"Apache-2.0"
] | null | null | null | src/main/resources/datasets/python/91.py | breandan/katholic | 081c39f3acc73ff41f5865563debe78a36e1038f | [
"Apache-2.0"
] | 2 | 2021-01-25T07:59:20.000Z | 2021-08-07T07:13:49.000Z | def test3():
1, 2 + 3, 4
(1, 2) + (3, 4)
| 12.25 | 19 | 0.326531 | 10 | 49 | 1.6 | 0.6 | 0.25 | 0.375 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.310345 | 0.408163 | 49 | 3 | 20 | 16.333333 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0 | 1 | 1 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
4dc44542fe426a82adb0e27fbba8009f1b7af947 | 14,166 | py | Python | pymtl3/passes/backends/verilog/import_/test/VNameMangle_test.py | tancheng/pymtl3 | 9e3a582c805a1aa3d9c12a208e907bc73f2514d5 | [
"BSD-3-Clause"
] | 1 | 2022-01-03T06:22:11.000Z | 2022-01-03T06:22:11.000Z | pymtl3/passes/backends/verilog/import_/test/VNameMangle_test.py | tancheng/pymtl3 | 9e3a582c805a1aa3d9c12a208e907bc73f2514d5 | [
"BSD-3-Clause"
] | null | null | null | pymtl3/passes/backends/verilog/import_/test/VNameMangle_test.py | tancheng/pymtl3 | 9e3a582c805a1aa3d9c12a208e907bc73f2514d5 | [
"BSD-3-Clause"
] | null | null | null | #=========================================================================
# VNameMangle_test.py
#=========================================================================
# Author : Peitian Pan
# Date : May 30, 2019
"""Test the SystemVerilog name mangling."""
from pymtl3.datatypes import Bits1, Bits32, bitstruct
from pymtl3.dsl import Component, InPort, Interface, OutPort
from pymtl3.passes.backends.verilog.util.utility import gen_mapped_ports
from pymtl3.passes.rtlir import RTLIRDataType as rdt
from pymtl3.passes.rtlir import RTLIRType as rt
from pymtl3.passes.rtlir.util.test_utility import do_test

def local_do_test( m ):
  m.elaborate()
  result = gen_mapped_ports( m, {} )
  assert result == m._ref_ports

def test_port_single( do_test ):
  class A( Component ):
    def construct( s ):
      s.in_ = InPort( Bits32 )
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Port('input', rdt.Vector(32)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = a._ref_ports
  do_test( a )

def test_port_array( do_test ):
  class A( Component ):
    def construct( s ):
      s.in_ = [ InPort( Bits32 ) for _ in range( 3 ) ]
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Array([3], rt.Port('input', rdt.Vector(32))) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_[0]'], 'in___0', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[1]'], 'in___1', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[2]'], 'in___2', rt.Port('input', rdt.Vector(32)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )
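
# A minimal sketch of the flattening rule the _ref_ports_yosys entries follow
# (assumed from the expected data above, not the PyMTL3 implementation):
# '.' and '[' both become '__' and ']' is dropped, so the hierarchical path
# in_[0].foo[1][0] maps to the flat Verilog-friendly name in___0__foo__1__0.
import re

def mangle_yosys( path ):
  return re.sub( r'\[(\d+)\]', r'__\1', path ).replace( '.', '__' )

assert mangle_yosys( 'in_[0]' ) == 'in___0'
assert mangle_yosys( 'ifc[0].valrdy_ifc.msg' ) == 'ifc__0__valrdy_ifc__msg'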

def test_port_2d_array( do_test ):
  class A( Component ):
    def construct( s ):
      s.in_ = [ [ InPort( Bits32 ) for _ in range(2) ] for _ in range(3) ]
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Array( [3, 2], rt.Port('input', rdt.Vector(32)) ) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_[0][0]'], 'in___0__0', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[0][1]'], 'in___0__1', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[1][0]'], 'in___1__0', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[1][1]'], 'in___1__1', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[2][0]'], 'in___2__0', rt.Port('input', rdt.Vector(32)) ),
    ( ['in_[2][1]'], 'in___2__1', rt.Port('input', rdt.Vector(32)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )

def test_struct_port_single( do_test ):
  @bitstruct
  class struct:
    bar: Bits32
    foo: Bits32
  class A( Component ):
    def construct( s ):
      s.in_ = InPort( struct )
  a = A()
  st = rdt.Struct('struct', {'bar':rdt.Vector(32), 'foo':rdt.Vector(32)})
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Port('input', st ) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_.bar'], 'in___bar', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_.foo'], 'in___foo', rt.Port('input', rdt.Vector(32) ) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )

def test_struct_port_array( do_test ):
  @bitstruct
  class struct:
    bar: Bits32
    foo: Bits32
  class A( Component ):
    def construct( s ):
      s.in_ = [ InPort( struct ) for _ in range(2) ]
  a = A()
  st = rdt.Struct('struct', {'bar':rdt.Vector(32), 'foo':rdt.Vector(32)})
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Array([2], rt.Port('input', st)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_[0].bar'], 'in___0__bar', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[0].foo'], 'in___0__foo', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[1].bar'], 'in___1__bar', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[1].foo'], 'in___1__foo', rt.Port('input', rdt.Vector(32) ) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )

def test_packed_array_port_array( do_test ):
  @bitstruct
  class struct:
    bar: Bits32
    foo: [ [ Bits32 ] * 2 ] * 3
  class A( Component ):
    def construct( s ):
      s.in_ = [ InPort( struct ) for _ in range(2) ]
  a = A()
  foo = rdt.PackedArray([3,2], rdt.Vector(32))
  st = rdt.Struct('struct', {'bar':rdt.Vector(32), 'foo':foo})
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Array([2], rt.Port('input', st ))),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_[0].bar'], 'in___0__bar', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[0][0]'], 'in___0__foo__0__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[0][1]'], 'in___0__foo__0__1', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[1][0]'], 'in___0__foo__1__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[1][1]'], 'in___0__foo__1__1', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[2][0]'], 'in___0__foo__2__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[0].foo[2][1]'], 'in___0__foo__2__1', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].bar'], 'in___1__bar', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[0][0]'], 'in___1__foo__0__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[0][1]'], 'in___1__foo__0__1', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[1][0]'], 'in___1__foo__1__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[1][1]'], 'in___1__foo__1__1', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[2][0]'], 'in___1__foo__2__0', rt.Port('input', rdt.Vector(32) )),
    ( ['in_[1].foo[2][1]'], 'in___1__foo__2__1', rt.Port('input', rdt.Vector(32) )),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )

def test_nested_struct( do_test ):
  @bitstruct
  class inner_struct:
    foo: Bits32
  @bitstruct
  class struct:
    bar: Bits32
    inner: inner_struct
  class A( Component ):
    def construct( s ):
      s.in_ = [ InPort( struct ) for _ in range(2) ]
  a = A()
  inner = rdt.Struct('inner_struct', {'foo':rdt.Vector(32)})
  st = rdt.Struct('struct', {'bar':rdt.Vector(32), 'inner':inner})
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_'], 'in_', rt.Array([2], rt.Port('input', st )) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['in_[0].bar'], 'in___0__bar', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[0].inner.foo'], 'in___0__inner__foo', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[1].bar'], 'in___1__bar', rt.Port('input', rdt.Vector(32) ) ),
    ( ['in_[1].inner.foo'], 'in___1__inner__foo', rt.Port('input', rdt.Vector(32) ) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )

def test_interface( do_test ):
  class Ifc( Interface ):
    def construct( s ):
      s.msg = InPort( Bits32 )
      s.val = InPort( Bits1 )
      s.rdy = OutPort( Bits1 )
  class A( Component ):
    def construct( s ):
      s.ifc = Ifc()
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc.msg'], 'ifc__msg', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc.rdy'], 'ifc__rdy', rt.Port('output', rdt.Vector(1)) ),
    ( ['ifc.val'], 'ifc__val', rt.Port('input', rdt.Vector(1)) )
  ]
  a._ref_ports_yosys = a._ref_ports
  do_test( a )

def test_interface_array( do_test ):
  class Ifc( Interface ):
    def construct( s ):
      s.msg = InPort( Bits32 )
      s.val = InPort( Bits1 )
      s.rdy = OutPort( Bits1 )
  class A( Component ):
    def construct( s ):
      s.ifc = [ Ifc() for _ in range(2) ]
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].msg', 'ifc[1].msg'], 'ifc__msg', rt.Array([2], rt.Port('input', rdt.Vector(32))) ),
    ( ['ifc[0].rdy', 'ifc[1].rdy'], 'ifc__rdy', rt.Array([2], rt.Port('output', rdt.Vector(1))) ),
    ( ['ifc[0].val', 'ifc[1].val'], 'ifc__val', rt.Array([2], rt.Port('input', rdt.Vector(1))) ),
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].msg'], 'ifc__0__msg', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[0].rdy'], 'ifc__0__rdy', rt.Port('output', rdt.Vector(1)) ),
    ( ['ifc[0].val'], 'ifc__0__val', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[1].msg'], 'ifc__1__msg', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[1].rdy'], 'ifc__1__rdy', rt.Port('output', rdt.Vector(1)) ),
    ( ['ifc[1].val'], 'ifc__1__val', rt.Port('input', rdt.Vector(1)) ),
  ]
  do_test( a )

def test_nested_interface( do_test ):
  class InnerIfc( Interface ):
    def construct( s ):
      s.msg = InPort( Bits32 )
      s.val = InPort( Bits1 )
      s.rdy = OutPort( Bits1 )
  class Ifc( Interface ):
    def construct( s ):
      s.valrdy_ifc = InnerIfc()
      s.ctrl_bar = InPort( Bits32 )
      s.ctrl_foo = OutPort( Bits32 )
  class A( Component ):
    def construct( s ):
      s.ifc = [ Ifc() for _ in range(2) ]
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].ctrl_bar', 'ifc[1].ctrl_bar'], 'ifc__ctrl_bar', rt.Array([2], rt.Port('input', rdt.Vector(32)))),
    ( ['ifc[0].ctrl_foo', 'ifc[1].ctrl_foo'], 'ifc__ctrl_foo', rt.Array([2], rt.Port('output', rdt.Vector(32)))),
    ( ['ifc[0].valrdy_ifc.msg', 'ifc[1].valrdy_ifc.msg'], 'ifc__valrdy_ifc__msg', rt.Array([2], rt.Port('input', rdt.Vector(32)))),
    ( ['ifc[0].valrdy_ifc.rdy', 'ifc[1].valrdy_ifc.rdy'], 'ifc__valrdy_ifc__rdy', rt.Array([2], rt.Port('output', rdt.Vector(1)))),
    ( ['ifc[0].valrdy_ifc.val', 'ifc[1].valrdy_ifc.val'], 'ifc__valrdy_ifc__val', rt.Array([2], rt.Port('input', rdt.Vector(1)))),
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].ctrl_bar'], 'ifc__0__ctrl_bar', rt.Port('input', rdt.Vector(32))),
    ( ['ifc[0].ctrl_foo'], 'ifc__0__ctrl_foo', rt.Port('output', rdt.Vector(32))),
    ( ['ifc[0].valrdy_ifc.msg'], 'ifc__0__valrdy_ifc__msg', rt.Port('input', rdt.Vector(32))),
    ( ['ifc[0].valrdy_ifc.rdy'], 'ifc__0__valrdy_ifc__rdy', rt.Port('output', rdt.Vector(1))),
    ( ['ifc[0].valrdy_ifc.val'], 'ifc__0__valrdy_ifc__val', rt.Port('input', rdt.Vector(1))),
    ( ['ifc[1].ctrl_bar'], 'ifc__1__ctrl_bar', rt.Port('input', rdt.Vector(32))),
    ( ['ifc[1].ctrl_foo'], 'ifc__1__ctrl_foo', rt.Port('output', rdt.Vector(32))),
    ( ['ifc[1].valrdy_ifc.msg'], 'ifc__1__valrdy_ifc__msg', rt.Port('input', rdt.Vector(32))),
    ( ['ifc[1].valrdy_ifc.rdy'], 'ifc__1__valrdy_ifc__rdy', rt.Port('output', rdt.Vector(1))),
    ( ['ifc[1].valrdy_ifc.val'], 'ifc__1__valrdy_ifc__val', rt.Port('input', rdt.Vector(1))),
  ]
  do_test( a )

def test_nested_interface_port_array( do_test ):
  class InnerIfc( Interface ):
    def construct( s ):
      s.msg = [ InPort( Bits32 ) for _ in range(2) ]
      s.val = InPort( Bits1 )
      s.rdy = OutPort( Bits1 )
  class Ifc( Interface ):
    def construct( s ):
      s.valrdy_ifc = InnerIfc()
      s.ctrl_bar = InPort( Bits32 )
      s.ctrl_foo = OutPort( Bits32 )
  class A( Component ):
    def construct( s ):
      s.ifc = [ Ifc() for _ in range(2) ]
  a = A()
  a._ref_ports = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].ctrl_bar', 'ifc[1].ctrl_bar'], 'ifc__ctrl_bar', rt.Array([2], rt.Port('input', rdt.Vector(32)))),
    ( ['ifc[0].ctrl_foo', 'ifc[1].ctrl_foo'], 'ifc__ctrl_foo', rt.Array([2], rt.Port('output', rdt.Vector(32)))),
    ( ['ifc[0].valrdy_ifc.msg', 'ifc[1].valrdy_ifc.msg'], 'ifc__valrdy_ifc__msg', rt.Array([2, 2], rt.Port('input', rdt.Vector(32)))),
    ( ['ifc[0].valrdy_ifc.rdy', 'ifc[1].valrdy_ifc.rdy'], 'ifc__valrdy_ifc__rdy', rt.Array([2], rt.Port('output', rdt.Vector(1)))),
    ( ['ifc[0].valrdy_ifc.val', 'ifc[1].valrdy_ifc.val'], 'ifc__valrdy_ifc__val', rt.Array([2], rt.Port('input', rdt.Vector(1)))),
  ]
  a._ref_ports_yosys = [
    ( ['clk'], 'clk', rt.Port('input', rdt.Vector(1)) ),
    ( ['reset'], 'reset', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[0].ctrl_bar'], 'ifc__0__ctrl_bar', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[0].ctrl_foo'], 'ifc__0__ctrl_foo', rt.Port('output', rdt.Vector(32)) ),
    ( ['ifc[0].valrdy_ifc.msg[0]'], 'ifc__0__valrdy_ifc__msg__0', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[0].valrdy_ifc.msg[1]'], 'ifc__0__valrdy_ifc__msg__1', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[0].valrdy_ifc.rdy'], 'ifc__0__valrdy_ifc__rdy', rt.Port('output', rdt.Vector(1)) ),
    ( ['ifc[0].valrdy_ifc.val'], 'ifc__0__valrdy_ifc__val', rt.Port('input', rdt.Vector(1)) ),
    ( ['ifc[1].ctrl_bar'], 'ifc__1__ctrl_bar', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[1].ctrl_foo'], 'ifc__1__ctrl_foo', rt.Port('output', rdt.Vector(32)) ),
    ( ['ifc[1].valrdy_ifc.msg[0]'], 'ifc__1__valrdy_ifc__msg__0', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[1].valrdy_ifc.msg[1]'], 'ifc__1__valrdy_ifc__msg__1', rt.Port('input', rdt.Vector(32)) ),
    ( ['ifc[1].valrdy_ifc.rdy'], 'ifc__1__valrdy_ifc__rdy', rt.Port('output', rdt.Vector(1)) ),
    ( ['ifc[1].valrdy_ifc.val'], 'ifc__1__valrdy_ifc__val', rt.Port('input', rdt.Vector(1)) )
  ]
  do_test( a )
| 44.54717 | 134 | 0.56685 | 2,188 | 14,166 | 3.365631 | 0.038848 | 0.156437 | 0.161325 | 0.197719 | 0.911325 | 0.892178 | 0.884302 | 0.884302 | 0.876969 | 0.840304 | 0 | 0.042927 | 0.171185 | 14,166 | 317 | 135 | 44.687697 | 0.584277 | 0.017436 | 0 | 0.639731 | 0 | 0 | 0.255662 | 0.064131 | 0 | 0 | 0 | 0 | 0.003367 | 1 | 0.097643 | false | 0.013468 | 0.020202 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4ddabcc94718b54f12c1e46e1a55a88171c69a1d | 14,161 | py | Python | tests/test_kalahboard.py | torlenor/kalah | 12a5520445c60855ed42c5bd30e512c168d531ca | [
"MIT"
] | 1 | 2020-11-30T21:20:33.000Z | 2020-11-30T21:20:33.000Z | tests/test_kalahboard.py | torlenor/kalah | 12a5520445c60855ed42c5bd30e512c168d531ca | [
"MIT"
] | 6 | 2020-11-13T11:07:53.000Z | 2020-11-13T14:33:32.000Z | tests/test_kalahboard.py | torlenor/kalah | 12a5520445c60855ed42c5bd30e512c168d531ca | [
"MIT"
] | 1 | 2020-12-10T17:53:06.000Z | 2020-12-10T17:53:06.000Z | from kalah.kalahboard import KalahBoard
import unittest
# Unique board constellations to test:
#
# Normal move, no points
# Normal move, one seed in house
# Normal move, around the board, skip opponent's house
# Hit own house, repeat move
# Hit own house after one full round around the board, repeat move
# Hit own empty bin, capture opponent's and own seeds
# Hit own empty bin, but opponent's bin empty, nothing should happen
# Hit enemy empty bin, nothing should happen
# End of game, opponent gets all his remaining seeds
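
# Illustrative sketch only -- an assumed reconstruction of the sowing rule the
# scenarios above describe, not the actual KalahBoard code: seeds are dropped
# one per bin going around the board, the opponent's house is skipped, and
# landing in the mover's own house (the returned index) grants a repeat move.
def sow(board, start, player, bins=6):
    """Sow pit `start` for `player` in place; return the index of the last seed."""
    opp_house = (1 - player) * (bins + 1) + bins   # the opponent's house is skipped
    seeds, board[start] = board[start], 0
    i = start
    while seeds:
        i = (i + 1) % len(board)
        if i == opp_house:
            continue
        board[i] += 1
        seeds -= 1
    return i

# e.g. sow([4]*6 + [0] + [4]*6 + [0], start=2, player=0) returns 6 (own house).
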
class Test_TestKalahBoard(unittest.TestCase):
    def test_default_board(self):
        board = KalahBoard(6,4)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 0])

        board = KalahBoard(9,6)
        self.assertEqual(board.get_board(), [6, 6, 6, 6, 6, 6, 6, 6, 6, 0, 6, 6, 6, 6, 6, 6, 6, 6, 6, 0])

    def test_get_house(self):
        board = KalahBoard(2,2)
        self.assertEqual(board._get_house(0), 2)
        self.assertEqual(board._get_house(1), 5)

        board = KalahBoard(4,2)
        self.assertEqual(board._get_house(0), 4)
        self.assertEqual(board._get_house(1), 9)

        board = KalahBoard(4,4)
        self.assertEqual(board._get_house(0), 4)
        self.assertEqual(board._get_house(1), 9)

        board = KalahBoard(6,4)
        self.assertEqual(board._get_house(0), 6)
        self.assertEqual(board._get_house(1), 13)

        board = KalahBoard(6,6)
        self.assertEqual(board._get_house(0), 6)
        self.assertEqual(board._get_house(1), 13)

    def test_get_house_id(self):
        board = KalahBoard(2,2)
        self.assertEqual(board.get_house_id(0), 2)
        self.assertEqual(board.get_house_id(1), 5)

        board = KalahBoard(4,2)
        self.assertEqual(board.get_house_id(0), 4)
        self.assertEqual(board.get_house_id(1), 9)

        board = KalahBoard(4,4)
        self.assertEqual(board.get_house_id(0), 4)
        self.assertEqual(board.get_house_id(1), 9)

        board = KalahBoard(6,4)
        self.assertEqual(board.get_house_id(0), 6)
        self.assertEqual(board.get_house_id(1), 13)

        board = KalahBoard(6,6)
        self.assertEqual(board.get_house_id(0), 6)
        self.assertEqual(board.get_house_id(1), 13)

    def test_first_moves_6_4(self):
        board = KalahBoard(6,4)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [0, 0])
        self.assertEqual(board.allowed_moves(), [0, 1, 2, 3, 4, 5])

        self.assertEqual(board.move(6), False)
        self.assertEqual(board.move(7), False)
        self.assertEqual(board.move(13), False)
        self.assertEqual(board.move(123), False)

        self.assertEqual(board.move(0), True)
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [0, 0])
        self.assertEqual(board.allowed_moves(), [7, 8, 9, 10, 11, 12])

        self.assertEqual(board.move(7), True)
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [0, 0])
        self.assertEqual(board.allowed_moves(), [1, 2, 3, 4, 5])

    def test_move_into_house_6_4(self):
        board = KalahBoard(6,4)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [0, 0])
        self.assertEqual(board.allowed_moves(), [0, 1, 2, 3, 4, 5])

        self.assertEqual(board.move(2), True)
        self.assertEqual(board.get_board(), [4, 4, 0, 5, 5, 5, 1, 4, 4, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [0, 1, 3, 4, 5])

        self.assertEqual(board.move(2), False)
        self.assertEqual(board.move(1), True)
        self.assertEqual(board.get_board(), [4, 0, 1, 6, 6, 6, 1, 4, 4, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [7, 8, 9, 10, 11, 12])

    def test_moves_6_4(self):
        board = KalahBoard(6,4)
        board.set_board([0, 0, 0, 0, 0, 1, 24, 0, 0, 0, 2, 0, 0, 21])
        board.set_current_player(1)
        self.assertEqual(board.get_board(), [0, 0, 0, 0, 0, 1, 24, 0, 0, 0, 2, 0, 0, 21])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [24, 21])
        self.assertEqual(board.allowed_moves(), [10])

        self.assertEqual(board.move(10), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 0, 0, 1, 24, 0, 0, 0, 0, 1, 1, 21])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [24, 21])
        self.assertEqual(board.allowed_moves(), [5])

        initial_board = [4, 4, 4, 4, 4, 0, 1, 5, 5, 5, 4, 4, 4, 0]
        board = KalahBoard(6,4)
        board.set_board(initial_board)
        board.set_current_player(1)
        self.assertEqual(board.get_board(), initial_board)
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [7, 8, 9, 10, 11, 12])

        self.assertEqual(board.move(8), True)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 0, 1, 5, 0, 6, 5, 5, 5, 1])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 1])
        self.assertEqual(board.allowed_moves(), [7, 9, 10, 11, 12])

    def test_move_over_house_into_opponent_6_4(self):
        board = KalahBoard(6,4)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [0, 0])
        self.assertEqual(board.allowed_moves(), [0, 1, 2, 3, 4, 5])

        self.assertEqual(board.move(5), True)
        self.assertEqual(board.get_board(), [4, 4, 4, 4, 4, 0, 1, 5, 5, 5, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [7, 8, 9, 10, 11, 12])

    def test_end_game_collect_all_remaining_seeds_6_4(self):
        board = KalahBoard(6,4)
        board.set_board([0, 0, 1, 1, 0, 1, 30, 0, 0, 0, 0, 1, 0, 14])
        self.assertEqual(board.get_board(), [0, 0, 1, 1, 0, 1, 30, 0, 0, 0, 0, 1, 0, 14])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [30, 14])
        self.assertEqual(board.allowed_moves(), [2, 3, 5])

        self.assertEqual(board.move(2), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 2, 0, 1, 30, 0, 0, 0, 0, 1, 0, 14])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [30, 14])
        self.assertEqual(board.allowed_moves(), [11])

        self.assertEqual(board.move(11), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 2, 0, 1, 30, 0, 0, 0, 0, 0, 1, 14])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [30, 14])
        self.assertEqual(board.allowed_moves(), [3, 5])

        self.assertEqual(board.move(3), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 0, 1, 2, 30, 0, 0, 0, 0, 0, 1, 14])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [30, 14])
        self.assertEqual(board.allowed_moves(), [12])

        self.assertEqual(board.move(12), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 0, 0, 0, 33, 0, 0, 0, 0, 0, 0, 15])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), True)
        self.assertEqual(board.score(), [33, 15])
        self.assertEqual(board.allowed_moves(), [])

    def test_end_game_collect_all_remaining_seeds_second_test_6_4(self):
        board = KalahBoard(6,4)
        board.set_board([0, 0, 0, 1, 1, 0, 24, 0, 0, 0, 0, 0, 1, 21])
        self.assertEqual(board.get_board(), [0, 0, 0, 1, 1, 0, 24, 0, 0, 0, 0, 0, 1, 21])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [24, 21])
        self.assertEqual(board.allowed_moves(), [3, 4])

        self.assertEqual(board.move(4), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 1, 0, 1, 24, 0, 0, 0, 0, 0, 1, 21])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [24, 21])
        self.assertEqual(board.allowed_moves(), [12])

        self.assertEqual(board.move(12), True)
        self.assertEqual(board.get_board(), [0, 0, 0, 0, 0, 0, 26, 0, 0, 0, 0, 0, 0, 22])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), True)
        self.assertEqual(board.score(), [26, 22])
        self.assertEqual(board.allowed_moves(), [])

    def test_end_game_collect_all_remaining_seeds_third_test_2_2(self):
        board = KalahBoard(2,2)
        board.set_board([0, 3, 1, 2, 2, 0])
        self.assertEqual(board.get_board(), [0, 3, 1, 2, 2, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [1])

        self.assertEqual(board.move(1), True)
        self.assertEqual(board.get_board(), [0, 0, 2, 0, 0, 6])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), True)
        self.assertEqual(board.score(), [2, 6])
        self.assertEqual(board.allowed_moves(), [])

    def test_empty_pit_capture_4_4(self):
        # Test for player 1
        board = KalahBoard(4,4)
        board.set_current_player(0)
        board.set_board([1, 0, 4, 4, 7, 4, 4, 4, 4, 0])
        self.assertEqual(board.get_board(), [1, 0, 4, 4, 7, 4, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [7, 0])
        self.assertEqual(board.allowed_moves(), [0, 2, 3])

        self.assertEqual(board.move(0), True)
        self.assertEqual(board.get_board(), [0, 0, 4, 4, 12, 4, 0, 4, 4, 0])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [12, 0])
        self.assertEqual(board.allowed_moves(), [5, 7, 8])

        # Test for player 2
        board = KalahBoard(4,4)
        board.set_current_player(1)
        board.set_board([4, 0, 5, 5, 1, 5, 4, 4, 4, 0])
        self.assertEqual(board.get_board(), [4, 0, 5, 5, 1, 5, 4, 4, 4, 0])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 0])
        self.assertEqual(board.allowed_moves(), [5, 6, 7, 8])

        self.assertEqual(board.move(7), True)
        self.assertEqual(board.get_board(), [5, 1, 5, 5, 1, 5, 4, 0, 5, 1])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [1, 1])
        self.assertEqual(board.allowed_moves(), [0, 1, 2, 3])
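
    # Illustrative sketch only -- an assumed reconstruction of the capture rule
    # the cases above encode, not the actual KalahBoard code: when the last seed
    # lands in a previously empty pit on the mover's own side, that seed plus the
    # contents of the mirrored pit on the other side go into the mover's house.
    # (The expected boards above pair pit i with pit i + bins + 1, modulo the
    # board length.)
    @staticmethod
    def _capture_sketch(board, last, player, bins=4):
        first = player * (bins + 1)                       # first pit on the mover's side
        house = first + bins                              # the mover's own house
        opposite = (last + bins + 1) % (2 * (bins + 1))   # mirrored pit across the board
        if first <= last < house and board[last] == 1 and board[opposite] > 0:
            board[house] += board[last] + board[opposite]
            board[last] = board[opposite] = 0
        return board

    # e.g. _capture_sketch([0, 1, 4, 4, 7, 4, 4, 4, 4, 0], last=1, player=0)
    # yields [0, 0, 4, 4, 12, 4, 0, 4, 4, 0], matching the first case above.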

    def test_empty_pit_opposite_no_empty_capture_4_4(self):
        # We do not have the "empty capture" rule
        board = KalahBoard(4,4)
        board.set_board([1, 0, 4, 4, 7, 4, 0, 4, 4, 4])
        self.assertEqual(board.get_board(), [1, 0, 4, 4, 7, 4, 0, 4, 4, 4])
        self.assertEqual(board.current_player(), 0)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [7, 4])
        self.assertEqual(board.allowed_moves(), [0, 2, 3])

        self.assertEqual(board.move(0), True)
        self.assertEqual(board.get_board(), [0, 1, 4, 4, 7, 4, 0, 4, 4, 4])
        self.assertEqual(board.current_player(), 1)
        self.assertEqual(board.game_over(), False)
        self.assertEqual(board.score(), [7, 4])
        self.assertEqual(board.allowed_moves(), [5, 7, 8])

    def test_first_last_bin_functions(self):
        board = KalahBoard(4,4)
        self.assertEqual(board._get_first_bin(0), 0)
        self.assertEqual(board._get_last_bin(0), 3)
        self.assertEqual(board._get_first_bin(1), 5)
        self.assertEqual(board._get_last_bin(1), 8)

        board = KalahBoard(4,6)
        self.assertEqual(board._get_first_bin(0), 0)
        self.assertEqual(board._get_last_bin(0), 3)
        self.assertEqual(board._get_first_bin(1), 5)
        self.assertEqual(board._get_last_bin(1), 8)

        board = KalahBoard(2,4)
        self.assertEqual(board._get_first_bin(0), 0)
        self.assertEqual(board._get_last_bin(0), 1)
        self.assertEqual(board._get_first_bin(1), 3)
        self.assertEqual(board._get_last_bin(1), 4)

        board = KalahBoard(6,4)
        self.assertEqual(board._get_first_bin(0), 0)
        self.assertEqual(board._get_last_bin(0), 5)
        self.assertEqual(board._get_first_bin(1), 7)
        self.assertEqual(board._get_last_bin(1), 12)

if __name__ == '__main__':
    unittest.main() | 39.336111 | 105 | 0.60942 | 2,097 | 14,161 | 3.972818 | 0.050548 | 0.3565 | 0.475333 | 0.176689 | 0.886808 | 0.866643 | 0.854759 | 0.82199 | 0.794983 | 0.777578 | 0 | 0.078868 | 0.22908 | 14,161 | 360 | 106 | 39.336111 | 0.684254 | 0.036721 | 0 | 0.636719 | 0 | 0 | 0.000587 | 0 | 0 | 0 | 0 | 0 | 0.773438 | 1 | 0.050781 | false | 0 | 0.007813 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10
12c026e935e4107076c7ddcc8128a0f4252c60b4 | 104,088 | py | Python | afdb_outils_csv.py | Semaine52/AuFilDuBoamp_Outils_CSV | 36ba4e87f5f299ed0270000b1516019eb8baf4d4 | [
"MIT"
] | null | null | null | afdb_outils_csv.py | Semaine52/AuFilDuBoamp_Outils_CSV | 36ba4e87f5f299ed0270000b1516019eb8baf4d4 | [
"MIT"
] | null | null | null | afdb_outils_csv.py | Semaine52/AuFilDuBoamp_Outils_CSV | 36ba4e87f5f299ed0270000b1516019eb8baf4d4 | [
"MIT"
] | null | null | null | BOAMP_2021_GITHUB_CSV = ['https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_01_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_02_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_03_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_04_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_05_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2021/main/df_boamp_2021_06_G_04_indexation.csv',]
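
# A minimal usage sketch (assumed, not part of the original module): load one
# month's worth of the tables listed above into pandas DataFrames, keyed by
# table name. Only pandas' standard read_csv is used.
import pandas as pd

def load_boamp_month(urls, month='2021_01'):
    """Read every CSV URL containing `month`; return {table_name: DataFrame}."""
    tables = {}
    for url in urls:
        if month in url:
            name = url.rsplit('/', 1)[-1][:-4]   # file name without the '.csv' suffix
            tables[name] = pd.read_csv(url)
    return tables

# e.g. dfs = load_boamp_month(BOAMP_2021_GITHUB_CSV); dfs['df_boamp_2021_01_G_02_marche']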
BOAMP_2020_GITHUB_CSV = ['https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_01_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_02_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_03_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_04_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_05_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_06_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_07_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_08_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_09_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_10_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_11_G_04_indexation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_01_identite.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_02_typeorganisme.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_03_typepouvoiradjudicateur.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_04_activiteprincipale.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_05_objet.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_06_procedure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_07_publicationanterieure.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_08_attribution.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_09_rectif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_10_annulation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_11_conditiondelai.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_12_conditionrelativemarche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_13_conditionparticipation.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_14_conditionparticipationsystmequalif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_15_conditionadministrative.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_16_renseignementscomplementaires.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_17_modif.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_18_annexed1.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_19_annexed2.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_20_annexed3.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_D_21_annexed4.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_G_01_reference.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_G_02_marche.csv',
'https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_G_04_indexation.csv']
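# --- Illustration (added sketch; not part of the original constants) ---
# Every entry in the list above is a raw.githubusercontent.com URL, so the
# monthly CSV extracts can be read over HTTPS without cloning the repo.
# The helper below is a minimal sketch assuming comma-separated UTF-8
# files, which is what pandas expects by default; adjust the read_csv
# arguments if a given extract differs.
def load_boamp_csv(url):
    """Load one BOAMP 2020 monthly CSV extract into a DataFrame."""
    import pandas as pd  # local import: the URL constants stay usable without pandas
    return pd.read_csv(url)

# Example (hypothetical call): pass any URL from the list above, e.g.
# df = load_boamp_csv('https://raw.githubusercontent.com/Semaine52/AuFilduBoamp_Data_2020/main/df_boamp_2020_12_G_04_indexation.csv')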
BOAMP_ARBRES_HTML = ['https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_01_identite.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_02_typeorganisme.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_03_typepouvoiradjudicateur.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_04_activiteprincipale.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_05_objet.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_06_procedure.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_07_publicationanterieure.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_08_attribution.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_09_rectif.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_10_annulation.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_11_conditiondelai.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_12_conditionrelativemarche.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_13_conditionparticipation.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_14_conditionparticipationsystmequalif.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_15_conditionadministrative.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_16_renseignementscomplementaires.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_17_modif.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_18_annexed1.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_19_annexed2.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_20_annexed3.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_d_21_annexed4.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_g_01_reference.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_g_02_marche.html',
'https://www.aufilduboamp.com/DOCS/BOAMP_ARBRES_HTML/boamp_plan_g_04_indexation.html']
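# --- Illustration (added sketch; not part of the original constants) ---
# BOAMP_ARBRES_HTML lists the published field-tree ("plan") pages, one per
# notice section. A plain standard-library GET is enough to pull a page
# down for inspection; UTF-8 is assumed for the decode step.
def fetch_boamp_plan(url):
    """Return the HTML text of one BOAMP field-tree page."""
    from urllib.request import urlopen
    with urlopen(url) as resp:
        return resp.read().decode('utf-8')

# Example: html = fetch_boamp_plan(BOAMP_ARBRES_HTML[0])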
BOAMP_2021_GITHUB_PKL4 = ['https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_01_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_02_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_03_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_04_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_05_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2021_PKL4/blob/main/df_boamp_2021_06_G_04_indexation.pkl?raw=true']
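# --- Illustration (added sketch; not part of the original constants) ---
# The ?raw=true suffix makes GitHub serve the pickle bytes directly, and
# pandas.read_pickle accepts URLs, so each entry loads in one call. The
# PKL4 repo name suggests pickle protocol 4 (an assumption from the repo
# name), readable on Python 3.4+; only unpickle data from sources you trust.
def load_boamp_pkl(url):
    """Load one pickled BOAMP DataFrame from its ?raw=true URL."""
    import pandas as pd  # local import keeps the constants importable without pandas
    return pd.read_pickle(url)

# Example: df = load_boamp_pkl(BOAMP_2021_GITHUB_PKL4[0])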
BOAMP_2020_GITHUB_PKL4 = ['https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_01_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_02_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_03_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_04_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_05_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_06_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_07_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_08_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_09_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_10_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_11_G_04_indexation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_01_identite.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_02_typeorganisme.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_03_typepouvoiradjudicateur.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_04_activiteprincipale.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_05_objet.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_06_procedure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_07_publicationanterieure.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_08_attribution.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_09_rectif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_10_annulation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_11_conditiondelai.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_12_conditionrelativemarche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_13_conditionparticipation.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_14_conditionparticipationsystmequalif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_15_conditionadministrative.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_16_renseignementscomplementaires.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_17_modif.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_18_annexed1.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_19_annexed2.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_20_annexed3.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_D_21_annexed4.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_G_01_reference.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_G_02_marche.pkl?raw=true',
'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_G_04_indexation.pkl?raw=true']
| 115.78198 | 139 | 0.872166 | 16,339 | 104,088 | 5.097007 | 0.004284 | 0.124496 | 0.248991 | 0.29049 | 0.994957 | 0.994765 | 0.994645 | 0.994525 | 0.993708 | 0.993564 | 0 | 0.123497 | 0.008752 | 104,088 | 898 | 140 | 115.910913 | 0.68366 | 0 | 0 | 0 | 0 | 0.486486 | 0.964577 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
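The row above ends with a Python list of GitHub `?raw=true` links to monthly BOAMP pickle files. A minimal consumption sketch, assuming the list is bound to a name such as `urls` (the name and the pandas approach are illustrative, not part of the source file):

import pandas as pd

# Hypothetical name for the URL list assembled above (one sample entry shown).
urls = [
    'https://github.com/Semaine52/AuFilDuBoamp_Data_2020_PKL4/blob/main/df_boamp_2020_12_G_04_indexation.pkl?raw=true',
]

# pandas.read_pickle accepts HTTP(S) URLs; each file deserializes to one DataFrame.
# Note: unpickling can execute arbitrary code, so only load pickles from trusted sources.
frames = [pd.read_pickle(u) for u in urls]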
428902f0a10a79aceae656d7012896242876e92e | 257 | py | Python | venv/Lib/site-packages/text_engine/base/Rule.py | GabrielAmare/Darts | 182748d821b8c1838071f3b28724d0d9b095dcf9 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/text_engine/base/Rule.py | GabrielAmare/Darts | 182748d821b8c1838071f3b28724d0d9b095dcf9 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/text_engine/base/Rule.py | GabrielAmare/Darts | 182748d821b8c1838071f3b28724d0d9b095dcf9 | [
"MIT"
] | null | null | null | class Rule:
    def parse(self, tokens: list, position: int, parser, backward: bool = False):
        # Subclasses implement the actual matching of `tokens` starting at `position`;
        # `backward` presumably flags right-to-left matching.
        raise NotImplementedError
    def __and__(self, other):
        # Subclass hook so rules compose with `&` (sequence: both must match).
        raise NotImplementedError
    def __or__(self, other):
        # Subclass hook so rules compose with `|` (alternative: either may match).
        raise NotImplementedError
| 25.7 | 81 | 0.677043 | 27 | 257 | 6.148148 | 0.666667 | 0.433735 | 0.325301 | 0.39759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245136 | 257 | 9 | 82 | 28.555556 | 0.85567 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
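The `Rule` base class above only declares its hooks. A minimal sketch of how the `&`/`|` combinators are presumably meant to compose rules, building on the `Rule` class from the file above; `Literal`, `And`, and `Or` are invented names for illustration, not text_engine's actual subclasses:

class Literal(Rule):
    def __init__(self, value):
        self.value = value
    def parse(self, tokens, position, parser, backward=False):
        # Consume one token equal to `value`; None signals a failed match.
        if position < len(tokens) and tokens[position] == self.value:
            return position + 1
        return None
    def __and__(self, other):
        return And(self, other)
    def __or__(self, other):
        return Or(self, other)

class And(Rule):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def parse(self, tokens, position, parser, backward=False):
        # Match left, then right from where left stopped.
        mid = self.left.parse(tokens, position, parser, backward)
        return None if mid is None else self.right.parse(tokens, mid, parser, backward)

class Or(Rule):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def parse(self, tokens, position, parser, backward=False):
        # Try left; fall back to right on failure.
        hit = self.left.parse(tokens, position, parser, backward)
        return hit if hit is not None else self.right.parse(tokens, position, parser, backward)

# Usage: (Literal('a') & Literal('b')).parse(['a', 'b'], 0, parser=None) -> 2
#        (Literal('x') | Literal('a')).parse(['a'], 0, parser=None) -> 1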
35fb3d7b87af4c72ced677550fff2c0419327452 | 106 | py | Python | roc/np.py | willhyper/dnn | 244f04fdb91eeb3f27cca1a5132c9a486bbf788a | [
"MIT"
] | null | null | null | roc/np.py | willhyper/dnn | 244f04fdb91eeb3f27cca1a5132c9a486bbf788a | [
"MIT"
] | null | null | null | roc/np.py | willhyper/dnn | 244f04fdb91eeb3f27cca1a5132c9a486bbf788a | [
"MIT"
] | null | null | null | from sklearn import metrics
def roc_curve(y_true, y_pred):
    # Thin delegation to sklearn; returns (fpr, tpr, thresholds).
    return metrics.roc_curve(y_true, y_pred)
| 17.666667 | 44 | 0.773585 | 19 | 106 | 4 | 0.578947 | 0.210526 | 0.236842 | 0.342105 | 0.473684 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 106 | 5 | 45 | 21.2 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 9 |
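A short usage sketch for the wrapper above; the import path mirrors the file location (`roc/np.py`) and the arrays are made-up data:

import numpy as np
from roc.np import roc_curve  # assumes the package above is on the path

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# sklearn returns false positive rates, true positive rates, and decision thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_score)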
673e4c8993fab2b76f4e2c3beecd1f0157e71189 | 13,873 | py | Python | caffe2/python/operator_test/conv_transpose_test.py | KevinKecc/caffe2 | a2b6c6e2f0686358a84277df65e9489fb7d9ddb2 | [
"Apache-2.0"
] | 585 | 2015-08-10T02:48:52.000Z | 2021-12-01T08:46:59.000Z | caffe2/python/operator_test/conv_transpose_test.py | mingzhe09088/caffe2 | 8f41717c46d214aaf62b53e5b3b9b308b5b8db91 | [
"Apache-2.0"
] | 23 | 2015-08-30T11:54:51.000Z | 2017-03-06T03:01:07.000Z | caffe2/python/operator_test/conv_transpose_test.py | mingzhe09088/caffe2 | 8f41717c46d214aaf62b53e5b3b9b308b5b8db91 | [
"Apache-2.0"
] | 183 | 2015-08-10T02:49:04.000Z | 2021-12-01T08:47:13.000Z | # Copyright (c) 2016-present, Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from hypothesis import assume, given, settings
import hypothesis.strategies as st
from caffe2.python import core
import caffe2.python.hypothesis_test_util as hu
class TestConvolutionTranspose(hu.HypothesisTestCase):
@given(stride=st.integers(1, 3),
pad=st.integers(0, 3),
kernel=st.integers(1, 5),
adj=st.integers(0, 2),
size=st.integers(7, 10),
input_channels=st.integers(1, 8),
output_channels=st.integers(1, 8),
batch_size=st.integers(1, 3),
engine=st.sampled_from(["", "CUDNN", "BLOCK"]),
shared_buffer=st.booleans(),
use_bias=st.booleans(),
**hu.gcs)
def test_convolution_transpose_layout_legacy_args(
self, stride, pad, kernel, adj,
size, input_channels,
output_channels, batch_size,
engine, shared_buffer, use_bias, gc, dc):
assume(adj < stride)
X = np.random.rand(
batch_size, size, size, input_channels).astype(np.float32) - 0.5
w = np.random.rand(
input_channels, kernel, kernel, output_channels)\
.astype(np.float32) - 0.5
b = np.random.rand(output_channels).astype(np.float32) - 0.5
outputs = {}
for order in ["NCHW", "NHWC"]:
op = core.CreateOperator(
"ConvTranspose",
["X", "w", "b"] if use_bias else ["X", "w"],
["Y"],
stride=stride,
kernel=kernel,
pad=pad,
adj=adj,
order=order,
engine=engine,
shared_buffer=int(shared_buffer),
device_option=gc,
)
if order == "NCHW":
X_f = X.transpose((0, 3, 1, 2))
w_f = w.transpose((0, 3, 1, 2))
else:
X_f = X
w_f = w
self.assertDeviceChecks(
dc,
op,
[X_f, w_f, b] if use_bias else [X_f, w_f],
[0])
self.ws.create_blob("X").feed(X_f, device_option=gc)
self.ws.create_blob("w").feed(w_f, device_option=gc)
self.ws.create_blob("b").feed(b, device_option=gc)
self.ws.run(op)
outputs[order] = self.ws.blobs["Y"].fetch()
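        # Expected spatial size of a 2D transposed convolution (dilation 1), asserted below:
        # out = (in - 1) * stride - 2 * pad + kernel + adj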
output_size = (size - 1) * stride + kernel + adj - 2 * pad
self.assertEqual(
outputs["NCHW"].shape,
(batch_size, output_channels, output_size, output_size))
np.testing.assert_allclose(
outputs["NCHW"],
outputs["NHWC"].transpose((0, 3, 1, 2)),
atol=1e-4,
rtol=1e-4)
@given(stride=st.integers(1, 3),
pad=st.integers(0, 3),
kernel=st.integers(1, 5),
adj=st.integers(0, 2),
size=st.integers(7, 10),
input_channels=st.integers(1, 8),
output_channels=st.integers(1, 8),
batch_size=st.integers(1, 3),
engine=st.sampled_from(["", "CUDNN", "BLOCK"]),
shared_buffer=st.booleans(),
use_bias=st.booleans(),
**hu.gcs)
def test_convolution_transpose_layout(
self, stride, pad, kernel, adj,
size, input_channels,
output_channels, batch_size,
engine, shared_buffer, use_bias, gc, dc):
assume(adj < stride)
X = np.random.rand(
batch_size, size, size, input_channels).astype(np.float32) - 0.5
w = np.random.rand(
input_channels, kernel, kernel, output_channels)\
.astype(np.float32) - 0.5
b = np.random.rand(output_channels).astype(np.float32) - 0.5
outputs = {}
for order in ["NCHW", "NHWC"]:
op = core.CreateOperator(
"ConvTranspose",
["X", "w", "b"] if use_bias else ["X", "w"],
["Y"],
strides=[stride] * 2,
kernels=[kernel] * 2,
pads=[pad] * 4,
adjs=[adj] * 2,
order=order,
engine=engine,
shared_buffer=int(shared_buffer),
device_option=gc,
)
if order == "NCHW":
X_f = X.transpose((0, 3, 1, 2))
w_f = w.transpose((0, 3, 1, 2))
else:
X_f = X
w_f = w
self.assertDeviceChecks(
dc,
op,
[X_f, w_f, b] if use_bias else [X_f, w_f],
[0])
self.ws.create_blob("X").feed(X_f, device_option=gc)
self.ws.create_blob("w").feed(w_f, device_option=gc)
self.ws.create_blob("b").feed(b, device_option=gc)
self.ws.run(op)
outputs[order] = self.ws.blobs["Y"].fetch()
output_size = (size - 1) * stride + kernel + adj - 2 * pad
self.assertEqual(
outputs["NCHW"].shape,
(batch_size, output_channels, output_size, output_size))
np.testing.assert_allclose(
outputs["NCHW"],
outputs["NHWC"].transpose((0, 3, 1, 2)),
atol=1e-4,
rtol=1e-4)
# CUDNN does not support separate stride and pad so we skip it.
@given(stride_h=st.integers(1, 3),
stride_w=st.integers(1, 3),
pad_t=st.integers(0, 3),
pad_l=st.integers(0, 3),
pad_b=st.integers(0, 3),
pad_r=st.integers(0, 3),
kernel=st.integers(1, 5),
adj_h=st.integers(0, 2),
adj_w=st.integers(0, 2),
size=st.integers(7, 10),
input_channels=st.integers(1, 8),
output_channels=st.integers(1, 8),
batch_size=st.integers(1, 3),
engine=st.sampled_from(["", "BLOCK"]),
use_bias=st.booleans(),
**hu.gcs)
def test_convolution_transpose_separate_stride_pad_adj_layout(
self, stride_h, stride_w, pad_t, pad_l, pad_b, pad_r, kernel,
adj_h, adj_w, size, input_channels, output_channels, batch_size,
engine, use_bias, gc, dc):
assume(adj_h < stride_h)
assume(adj_w < stride_w)
X = np.random.rand(
batch_size, size, size, input_channels).astype(np.float32) - 0.5
w = np.random.rand(
input_channels, kernel, kernel, output_channels)\
.astype(np.float32) - 0.5
b = np.random.rand(output_channels).astype(np.float32) - 0.5
outputs = {}
for order in ["NCHW", "NHWC"]:
op = core.CreateOperator(
"ConvTranspose",
["X", "w", "b"] if use_bias else ["X", "w"],
["Y"],
stride_h=stride_h,
stride_w=stride_w,
kernel=kernel,
pad_t=pad_t,
pad_l=pad_l,
pad_b=pad_b,
pad_r=pad_r,
adj_h=adj_h,
adj_w=adj_w,
order=order,
engine=engine,
device_option=gc,
)
if order == "NCHW":
X_f = X.transpose((0, 3, 1, 2))
w_f = w.transpose((0, 3, 1, 2))
else:
X_f = X
w_f = w
self.assertDeviceChecks(
dc,
op,
[X_f, w_f, b] if use_bias else [X_f, w_f],
[0])
self.ws.create_blob("X").feed(X_f, device_option=gc)
self.ws.create_blob("w").feed(w_f, device_option=gc)
self.ws.create_blob("b").feed(b, device_option=gc)
self.ws.run(op)
outputs[order] = self.ws.blobs["Y"].fetch()
output_h = (size - 1) * stride_h + kernel + adj_h - pad_t - pad_b
output_w = (size - 1) * stride_w + kernel + adj_w - pad_l - pad_r
self.assertEqual(
outputs["NCHW"].shape,
(batch_size, output_channels, output_h, output_w))
np.testing.assert_allclose(
outputs["NCHW"],
outputs["NHWC"].transpose((0, 3, 1, 2)),
atol=1e-4,
rtol=1e-4)
@given(stride=st.integers(1, 3),
pad=st.integers(0, 3),
kernel=st.integers(1, 5),
adj=st.integers(0, 2),
size=st.integers(7, 10),
input_channels=st.integers(1, 8),
output_channels=st.integers(1, 8),
batch_size=st.integers(1, 3),
order=st.sampled_from(["NCHW", "NHWC"]),
engine=st.sampled_from(["", "CUDNN", "BLOCK"]),
use_bias=st.booleans(),
compute_dX=st.booleans(),
**hu.gcs)
@settings(max_examples=2, timeout=100)
def test_convolution_transpose_gradients(self, stride, pad, kernel, adj,
size, input_channels,
output_channels, batch_size,
order, engine, use_bias,
compute_dX, gc, dc):
assume(adj < stride)
X = np.random.rand(
batch_size, size, size, input_channels).astype(np.float32) - 0.5
w = np.random.rand(
input_channels, kernel, kernel, output_channels)\
.astype(np.float32) - 0.5
b = np.random.rand(output_channels).astype(np.float32) - 0.5
op = core.CreateOperator(
"ConvTranspose",
["X", "w", "b"] if use_bias else ["X", "w"],
["Y"],
stride=stride,
kernel=kernel,
pad=pad,
adj=adj,
order=order,
engine=engine,
no_gradient_to_input=not compute_dX,
)
if order == "NCHW":
X = X.transpose((0, 3, 1, 2))
w = w.transpose((0, 3, 1, 2))
inputs = [X, w, b] if use_bias else [X, w]
self.assertDeviceChecks(dc, op, inputs, [0])
if use_bias and compute_dX:
# w, b, X
outputs_to_check = [1, 2, 0]
elif use_bias:
# w, b
outputs_to_check = [1, 2]
elif compute_dX:
# w, X
outputs_to_check = [1, 0]
else:
# w
outputs_to_check = [1]
for i in outputs_to_check:
self.assertGradientChecks(gc, op, inputs, i, [0])
# CUDNN does not support separate stride and pad so we skip it.
@given(stride_h=st.integers(1, 3),
stride_w=st.integers(1, 3),
pad_t=st.integers(0, 3),
pad_l=st.integers(0, 3),
pad_b=st.integers(0, 3),
pad_r=st.integers(0, 3),
kernel=st.integers(1, 5),
adj_h=st.integers(0, 2),
adj_w=st.integers(0, 2),
size=st.integers(7, 10),
input_channels=st.integers(1, 8),
output_channels=st.integers(1, 8),
batch_size=st.integers(1, 3),
order=st.sampled_from(["NCHW", "NHWC"]),
engine=st.sampled_from(["", "BLOCK"]),
use_bias=st.booleans(),
compute_dX=st.booleans(),
**hu.gcs)
@settings(max_examples=2, timeout=100)
def test_convolution_transpose_separate_stride_pad_adj_gradient(
self, stride_h, stride_w, pad_t, pad_l, pad_b, pad_r, kernel,
adj_h, adj_w, size, input_channels, output_channels, batch_size,
order, engine, use_bias, compute_dX, gc, dc):
assume(adj_h < stride_h)
assume(adj_w < stride_w)
X = np.random.rand(
batch_size, size, size, input_channels).astype(np.float32) - 0.5
w = np.random.rand(
input_channels, kernel, kernel, output_channels)\
.astype(np.float32) - 0.5
b = np.random.rand(output_channels).astype(np.float32) - 0.5
op = core.CreateOperator(
"ConvTranspose",
["X", "w", "b"] if use_bias else ["X", "w"],
["Y"],
stride_h=stride_h,
stride_w=stride_w,
kernel=kernel,
pad_t=pad_t,
pad_l=pad_l,
pad_b=pad_b,
pad_r=pad_r,
adj_h=adj_h,
adj_w=adj_w,
order=order,
engine=engine,
no_gradient_to_input=not compute_dX,
)
if order == "NCHW":
X = X.transpose((0, 3, 1, 2))
w = w.transpose((0, 3, 1, 2))
inputs = [X, w, b] if use_bias else [X, w]
self.assertDeviceChecks(dc, op, inputs, [0])
if use_bias and compute_dX:
# w, b, X
outputs_to_check = [1, 2, 0]
elif use_bias:
# w, b
outputs_to_check = [1, 2]
elif compute_dX:
# w, X
outputs_to_check = [1, 0]
else:
# w
outputs_to_check = [1]
for i in outputs_to_check:
self.assertGradientChecks(gc, op, inputs, i, [0])
if __name__ == "__main__":
import unittest
unittest.main()
| 37.69837 | 78 | 0.509911 | 1,755 | 13,873 | 3.842165 | 0.109972 | 0.074151 | 0.044046 | 0.051164 | 0.863711 | 0.863711 | 0.861931 | 0.861931 | 0.855109 | 0.854367 | 0 | 0.031514 | 0.359547 | 13,873 | 367 | 79 | 37.80109 | 0.727406 | 0.052476 | 0 | 0.899696 | 0 | 0 | 0.019244 | 0 | 0 | 0 | 0 | 0 | 0.039514 | 1 | 0.015198 | false | 0 | 0.027356 | 0 | 0.045593 | 0.00304 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
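Each test above asserts the same transposed-convolution shape arithmetic. A dependency-free sketch of that formula (the helper name is an assumption, not part of the caffe2 source):

def conv_transpose_output_size(size, stride, kernel, pad, adj):
    # The per-dimension expression used throughout the tests above.
    return (size - 1) * stride + kernel + adj - 2 * pad

# Worked example: size=7, stride=2, kernel=3, pad=1, adj=1 gives (7-1)*2 + 3 + 1 - 2 = 14.
assert conv_transpose_output_size(7, 2, 3, 1, 1) == 14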
675900d4a3e835f74d0cc32eba13fd009fd0edef | 17,612 | py | Python | NGDAUpdater/WMSFiller.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/WMSFiller.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/WMSFiller.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | import os
import fnmatch
import shutil
import re
import datetime
import time
#import StringIO
import pickle
import sys
'''
This module writes the WMS (Web Map Service) linkage, application profile, name, and a theme-specific description into an open metadata file.
'''
def WMSFiller(Pass, File):
Theme = Pass
NewFile = File
AppProfile1 = ' <gmd:applicationProfile>\n'
AppProfile2 = ' <gco:CharacterString>http://opengis.net/spec/wms</gco:CharacterString>\n'
AppProfile3 = ' </gmd:applicationProfile>\n'
FinalAppProfile = AppProfile1 + AppProfile2 + AppProfile3
Name1=' <gmd:name>\n'
Name2=' <gco:CharacterString>TIGERweb/tigerWMS_Current (MapServer)</gco:CharacterString>\n'
Name3=' </gmd:name>\n'
FinalAppName = Name1 + Name2 + Name3
Current1=' <gmd:linkage>\n'
Current2=' <gmd:URL>https://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/tigerWMS_Current/MapServer</gmd:URL>\n'
Current3=' </gmd:linkage>\n'
FinalCurrentWMS = Current1 + Current2 + Current3
if re.search('AIANNH', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for Current American Indian/Alaska Native/Native Hawaiian Areas. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification. </gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('AITS', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for Current American Indian Tribal Subdivision. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('BG', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(
            '            <gco:CharacterString>This web mapping service contains the layer for Block Groups. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('CBSA', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the Current Metropolitan Statistical Area/Micropolitan Statistical Area (CBSA) Layers. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Congressional District', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for 116th Congressional Districts. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('CNECTA', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for the Combined New England City and Town Areas. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current County and Equivalent', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the Current County and Equivalent. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('CSA', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for the Current Combined Statistical Area (CSA). This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search ('estates', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for the estates in the Virgin Islands. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current Metropolitan Division', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the Current Metropolitan Division. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('NECTA Division National', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for the New England City and Town Area Divisions. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('NECTA', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for the Current New England City and Town Areas. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current State and Equivalent', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the States and Equivalents. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current Tribal Block Group', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
NewFile.write(' <gco:CharacterString>This web mapping service contains the layer for Current Tribal Block Groups. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current Tribal Census Tract', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for Current Tribal Census Tracts. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Census Urban Area', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the 2010 Census Urban Area Clusters. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('ZCTA5', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the Zip Code Tabulation Areas. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current County Subdivision',Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the County Subdivisions. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current Place',Theme,flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the places. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('PUMA',Theme,flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the 2010 Public Use Microdata Areas. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('(SLD) Lower Chamber',Theme,flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for state legislative districts - lower chamber. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Upper Chamber', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for state legislative districts - upper chamber. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('2010 Census Block', Theme,flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for 2010 Census Blocks. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('2020 Census Block', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for 2020 Census Blocks. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('Current Census Tract', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for Current Census Tracts. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
elif re.search('All Roads', Theme, flags=0):
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for primary and secondary roads. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n')
else:
NewFile.write(FinalCurrentWMS)
NewFile.write(FinalAppProfile)
NewFile.write(FinalAppName)
NewFile.write(' <gmd:description>\n')
        NewFile.write('            <gco:CharacterString>This web mapping service contains the layer for the ' + Theme + '. This URL is to be used in mapping software like ArcMap. To use this in a web browser, see the OGC Web Mapping Specification.</gco:CharacterString>\n')
NewFile.write(' </gmd:description>\n') | 80.054545 | 351 | 0.625767 | 2,013 | 17,612 | 5.47392 | 0.077993 | 0.176423 | 0.073509 | 0.127416 | 0.883928 | 0.882022 | 0.882022 | 0.882022 | 0.882022 | 0.876849 | 0 | 0.006037 | 0.285146 | 17,612 | 220 | 352 | 80.054545 | 0.869182 | 0.000852 | 0 | 0.633803 | 0 | 0.131455 | 0.611855 | 0.095059 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004695 | false | 0.00939 | 0.037559 | 0 | 0.042254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
675df59550792e96d7adfb88600333627d2cd802 | 44,283 | py | Python | pybind/slxos/v17s_1_02/vrf/address_family/ipv6/unicast/ipv6/route/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17s_1_02/vrf/address_family/ipv6/unicast/ipv6/route/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17s_1_02/vrf/address_family/ipv6/unicast/ipv6/route/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import static_route_nh
import static_route_oif
import link_local_static_route_nh
import static_route_nh_vrf
import link_local_static_route_nh_vrf
import ipv6_static_route_oif_vrf
import static
class route(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-vrf - based on the path /vrf/address-family/ipv6/unicast/ipv6/route. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__static_route_nh','__static_route_oif','__link_local_static_route_nh','__static_route_nh_vrf','__link_local_static_route_nh_vrf','__ipv6_static_route_oif_vrf','__static',)
_yang_name = 'route'
_rest_name = 'route'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__static_route_oif = YANGDynClass(base=YANGListType("static_route_dest static_route_oif_type static_route_oif_name",static_route_oif.static_route_oif, yang_name="static-route-oif", rest_name="static-route-oif", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}), is_container='list', yang_name="static-route-oif", rest_name="static-route-oif", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
self.__link_local_static_route_nh = YANGDynClass(base=YANGListType("link_local_static_route_dest link_local_nexthop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh.link_local_static_route_nh, yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='link-local-static-route-dest link-local-nexthop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}), is_container='list', yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
self.__ipv6_static_route_oif_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_oif_type static_route_oif_name",ipv6_static_route_oif_vrf.ipv6_static_route_oif_vrf, yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}), is_container='list', yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
self.__static_route_nh_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_next_hop",static_route_nh_vrf.static_route_nh_vrf, yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}), is_container='list', yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
self.__static = YANGDynClass(base=static.static, is_container='container', presence=False, yang_name="static", rest_name="static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'BFD static route'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='container', is_config=True)
self.__static_route_nh = YANGDynClass(base=YANGListType("static_route_dest static_route_next_hop",static_route_nh.static_route_nh, yang_name="static-route-nh", rest_name="static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}), is_container='list', yang_name="static-route-nh", rest_name="static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
self.__link_local_static_route_nh_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf link_local_next_hop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh_vrf.link_local_static_route_nh_vrf, yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf link-local-next-hop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}), is_container='list', yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
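  # _path()/_rest_path() below walk the registered parent containers
  # recursively to build the absolute YANG (and REST) path of this node,
  # bottoming out at this container's static path when no parent is set.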
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'vrf', u'address-family', u'ipv6', u'unicast', u'ipv6', u'route']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'vrf', u'address-family', u'ipv6', u'unicast', u'ipv6', u'route']
def _get_static_route_nh(self):
"""
Getter method for static_route_nh, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_nh (list)
"""
return self.__static_route_nh
def _set_static_route_nh(self, v, load=False):
"""
Setter method for static_route_nh, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_nh (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_static_route_nh is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_static_route_nh() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("static_route_dest static_route_next_hop",static_route_nh.static_route_nh, yang_name="static-route-nh", rest_name="static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}), is_container='list', yang_name="static-route-nh", rest_name="static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """static_route_nh must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("static_route_dest static_route_next_hop",static_route_nh.static_route_nh, yang_name="static-route-nh", rest_name="static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}), is_container='list', yang_name="static-route-nh", rest_name="static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__static_route_nh = t
if hasattr(self, '_set'):
self._set()
def _unset_static_route_nh(self):
self.__static_route_nh = YANGDynClass(base=YANGListType("static_route_dest static_route_next_hop",static_route_nh.static_route_nh, yang_name="static-route-nh", rest_name="static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}), is_container='list', yang_name="static-route-nh", rest_name="static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_static_route_oif(self):
"""
Getter method for static_route_oif, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_oif (list)
"""
return self.__static_route_oif
def _set_static_route_oif(self, v, load=False):
"""
Setter method for static_route_oif, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_oif (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_static_route_oif is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_static_route_oif() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("static_route_dest static_route_oif_type static_route_oif_name",static_route_oif.static_route_oif, yang_name="static-route-oif", rest_name="static-route-oif", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}), is_container='list', yang_name="static-route-oif", rest_name="static-route-oif", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """static_route_oif must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("static_route_dest static_route_oif_type static_route_oif_name",static_route_oif.static_route_oif, yang_name="static-route-oif", rest_name="static-route-oif", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}), is_container='list', yang_name="static-route-oif", rest_name="static-route-oif", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__static_route_oif = t
if hasattr(self, '_set'):
self._set()
def _unset_static_route_oif(self):
self.__static_route_oif = YANGDynClass(base=YANGListType("static_route_dest static_route_oif_type static_route_oif_name",static_route_oif.static_route_oif, yang_name="static-route-oif", rest_name="static-route-oif", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-dest static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}), is_container='list', yang_name="static-route-oif", rest_name="static-route-oif", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with egress interface', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterface'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_link_local_static_route_nh(self):
"""
Getter method for link_local_static_route_nh, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/link_local_static_route_nh (list)
"""
return self.__link_local_static_route_nh
def _set_link_local_static_route_nh(self, v, load=False):
"""
Setter method for link_local_static_route_nh, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/link_local_static_route_nh (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_link_local_static_route_nh is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_link_local_static_route_nh() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("link_local_static_route_dest link_local_nexthop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh.link_local_static_route_nh, yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='link-local-static-route-dest link-local-nexthop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}), is_container='list', yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """link_local_static_route_nh must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("link_local_static_route_dest link_local_nexthop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh.link_local_static_route_nh, yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='link-local-static-route-dest link-local-nexthop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}), is_container='list', yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__link_local_static_route_nh = t
if hasattr(self, '_set'):
self._set()
def _unset_link_local_static_route_nh(self):
self.__link_local_static_route_nh = YANGDynClass(base=YANGListType("link_local_static_route_dest link_local_nexthop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh.link_local_static_route_nh, yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='link-local-static-route-dest link-local-nexthop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}), is_container='list', yang_name="link-local-static-route-nh", rest_name="link-local-static-route-nh", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IP address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6LinkLocalStaticRouteNh'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_static_route_nh_vrf(self):
"""
Getter method for static_route_nh_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_nh_vrf (list)
"""
return self.__static_route_nh_vrf
def _set_static_route_nh_vrf(self, v, load=False):
"""
Setter method for static_route_nh_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static_route_nh_vrf (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_static_route_nh_vrf is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_static_route_nh_vrf() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_next_hop",static_route_nh_vrf.static_route_nh_vrf, yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}), is_container='list', yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """static_route_nh_vrf must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_next_hop",static_route_nh_vrf.static_route_nh_vrf, yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}), is_container='list', yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__static_route_nh_vrf = t
if hasattr(self, '_set'):
self._set()
def _unset_static_route_nh_vrf(self):
self.__static_route_nh_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_next_hop",static_route_nh_vrf.static_route_nh_vrf, yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-next-hop', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}), is_container='list', yang_name="static-route-nh-vrf", rest_name="static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_link_local_static_route_nh_vrf(self):
"""
Getter method for link_local_static_route_nh_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/link_local_static_route_nh_vrf (list)
"""
return self.__link_local_static_route_nh_vrf
def _set_link_local_static_route_nh_vrf(self, v, load=False):
"""
Setter method for link_local_static_route_nh_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/link_local_static_route_nh_vrf (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_link_local_static_route_nh_vrf is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_link_local_static_route_nh_vrf() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("static_route_next_vrf_dest next_hop_vrf link_local_next_hop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh_vrf.link_local_static_route_nh_vrf, yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf link-local-next-hop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}), is_container='list', yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """link_local_static_route_nh_vrf must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf link_local_next_hop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh_vrf.link_local_static_route_nh_vrf, yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf link-local-next-hop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}), is_container='list', yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__link_local_static_route_nh_vrf = t
if hasattr(self, '_set'):
self._set()
def _unset_link_local_static_route_nh_vrf(self):
self.__link_local_static_route_nh_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf link_local_next_hop link_local_route_oif_type link_local_route_oif_name",link_local_static_route_nh_vrf.link_local_static_route_nh_vrf, yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf link-local-next-hop link-local-route-oif-type link-local-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}), is_container='list', yang_name="link-local-static-route-nh-vrf", rest_name="link-local-static-route-nh-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'ipv6-link-local-static-route-next-hop-vrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_ipv6_static_route_oif_vrf(self):
"""
Getter method for ipv6_static_route_oif_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/ipv6_static_route_oif_vrf (list)
"""
return self.__ipv6_static_route_oif_vrf
def _set_ipv6_static_route_oif_vrf(self, v, load=False):
"""
Setter method for ipv6_static_route_oif_vrf, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/ipv6_static_route_oif_vrf (list)
If this variable is read-only (config: false) in the
    source YANG file, then _set_ipv6_static_route_oif_vrf is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ipv6_static_route_oif_vrf() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_oif_type static_route_oif_name",ipv6_static_route_oif_vrf.ipv6_static_route_oif_vrf, yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}), is_container='list', yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ipv6_static_route_oif_vrf must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_oif_type static_route_oif_name",ipv6_static_route_oif_vrf.ipv6_static_route_oif_vrf, yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}), is_container='list', yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)""",
})
self.__ipv6_static_route_oif_vrf = t
if hasattr(self, '_set'):
self._set()
def _unset_ipv6_static_route_oif_vrf(self):
self.__ipv6_static_route_oif_vrf = YANGDynClass(base=YANGListType("static_route_next_vrf_dest next_hop_vrf static_route_oif_type static_route_oif_name",ipv6_static_route_oif_vrf.ipv6_static_route_oif_vrf, yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='static-route-next-vrf-dest next-hop-vrf static-route-oif-type static-route-oif-name', extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}), is_container='list', yang_name="ipv6-static-route-oif-vrf", rest_name="ipv6-static-route-oif-vrf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Route with nexthop IPv6 address', u'cli-no-key-completion': None, u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-full-no': None, u'cli-drop-node-name': None, u'callpoint': u'Ipv6StaticRouteInterfaceNexthopVrf'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='list', is_config=True)
def _get_static(self):
"""
Getter method for static, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static (container)
"""
return self.__static
def _set_static(self, v, load=False):
"""
Setter method for static, mapped from YANG variable /vrf/address_family/ipv6/unicast/ipv6/route/static (container)
If this variable is read-only (config: false) in the
    source YANG file, then _set_static is considered a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_static() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=static.static, is_container='container', presence=False, yang_name="static", rest_name="static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'BFD static route'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """static must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=static.static, is_container='container', presence=False, yang_name="static", rest_name="static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'BFD static route'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='container', is_config=True)""",
})
self.__static = t
if hasattr(self, '_set'):
self._set()
def _unset_static(self):
self.__static = YANGDynClass(base=static.static, is_container='container', presence=False, yang_name="static", rest_name="static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'BFD static route'}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-rtm', defining_module='brocade-ipv6-rtm', yang_type='container', is_config=True)
static_route_nh = __builtin__.property(_get_static_route_nh, _set_static_route_nh)
static_route_oif = __builtin__.property(_get_static_route_oif, _set_static_route_oif)
link_local_static_route_nh = __builtin__.property(_get_link_local_static_route_nh, _set_link_local_static_route_nh)
static_route_nh_vrf = __builtin__.property(_get_static_route_nh_vrf, _set_static_route_nh_vrf)
link_local_static_route_nh_vrf = __builtin__.property(_get_link_local_static_route_nh_vrf, _set_link_local_static_route_nh_vrf)
ipv6_static_route_oif_vrf = __builtin__.property(_get_ipv6_static_route_oif_vrf, _set_ipv6_static_route_oif_vrf)
static = __builtin__.property(_get_static, _set_static)
_pyangbind_elements = {'static_route_nh': static_route_nh, 'static_route_oif': static_route_oif, 'link_local_static_route_nh': link_local_static_route_nh, 'static_route_nh_vrf': static_route_nh_vrf, 'link_local_static_route_nh_vrf': link_local_static_route_nh_vrf, 'ipv6_static_route_oif_vrf': ipv6_static_route_oif_vrf, 'static': static, }
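
# --- Editor's addition: a hedged illustration, not part of the generated
# module. The class above follows pyangbind's getter/setter/unset pattern
# around name-mangled storage; a minimal, self-contained analogue:
class _GetSetUnsetPatternDemo(object):
    def __init__(self):
        self.__items = []  # name-mangled to _GetSetUnsetPatternDemo__items

    def _get_items(self):
        return self.__items

    def _set_items(self, v, load=False):
        # Mirrors the generated setters: type-check, then store.
        if not isinstance(v, list):
            raise ValueError("items must be of a type compatible with list")
        self.__items = v

    def _unset_items(self):
        # Mirrors the generated _unset_*: restore the default (empty) value.
        self.__items = []

    items = property(_get_items, _set_items)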
| 130.244118 | 1,434 | 0.759682 | 7,024 | 44,283 | 4.551395 | 0.027477 | 0.132816 | 0.048047 | 0.066314 | 0.948763 | 0.936845 | 0.918014 | 0.912509 | 0.904095 | 0.900466 | 0 | 0.005297 | 0.091886 | 44,283 | 339 | 1,435 | 130.628319 | 0.789675 | 0.088996 | 0 | 0.455814 | 0 | 0.088372 | 0.530063 | 0.274282 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111628 | false | 0 | 0.069767 | 0 | 0.297674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6768f9f43b2d0fb90e83a6fded8507a3092dded9 | 3,650 | py | Python | api/tests/test_company_attachment.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 1 | 2022-03-03T09:55:57.000Z | 2022-03-03T09:55:57.000Z | api/tests/test_company_attachment.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 7 | 2022-02-09T10:44:53.000Z | 2022-03-28T03:29:43.000Z | api/tests/test_company_attachment.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | null | null | null | import pytest
from db.models import AttachmentKey, ProfileState
# pylint: disable=R0913
@pytest.mark.django_db
def test_incomplete_attachments(login, user_student, upload, file_image_jpg, attachments_for_user,
logout, user_employee, query_attachments_for_slug):
user_employee.company.state = ProfileState.INCOMPLETE
user_employee.company.save()
login(user_employee)
data, errors = upload(user_employee, AttachmentKey.COMPANY_AVATAR, file_image_jpg)
assert data is not None
assert errors is None
assert data.get('upload') is not None
assert data.get('upload').get('success')
attachments = attachments_for_user(user_employee, AttachmentKey.COMPANY_AVATAR)
assert len(attachments) == 1
logout()
login(user_student)
data, errors = query_attachments_for_slug(user_student, user_employee.company.slug)
assert errors is None
assert data is not None
company_avatar_edges = data.get('companyAvatar').get('edges')
assert company_avatar_edges is not None
assert len(company_avatar_edges) == 0
company_avatar_fallback_edges = data.get('companyAvatarFallback').get('edges')
assert company_avatar_fallback_edges is not None
assert len(company_avatar_fallback_edges) == 1
@pytest.mark.django_db
def test_anonymous_attachments(login, user_student, upload, file_image_jpg, attachments_for_user,
logout, user_employee, query_attachments_for_slug):
user_employee.company.state = ProfileState.ANONYMOUS
user_employee.company.save()
login(user_employee)
data, errors = upload(user_employee, AttachmentKey.COMPANY_AVATAR, file_image_jpg)
assert data is not None
assert errors is None
assert data.get('upload') is not None
assert data.get('upload').get('success')
attachments = attachments_for_user(user_employee, AttachmentKey.COMPANY_AVATAR)
assert len(attachments) == 1
logout()
login(user_student)
data, errors = query_attachments_for_slug(user_student, user_employee.company.slug)
assert errors is None
assert data is not None
company_avatar_edges = data.get('companyAvatar').get('edges')
assert company_avatar_edges is not None
assert len(company_avatar_edges) == 1
company_avatar_fallback_edges = data.get('companyAvatarFallback').get('edges')
assert company_avatar_fallback_edges is not None
assert len(company_avatar_fallback_edges) == 1
@pytest.mark.django_db
def test_public_attachments(login, user_student, upload, file_image_jpg, attachments_for_user,
logout, user_employee, query_attachments_for_slug):
user_employee.company.state = ProfileState.PUBLIC
user_employee.company.save()
login(user_employee)
data, errors = upload(user_employee, AttachmentKey.COMPANY_AVATAR, file_image_jpg)
assert data is not None
assert errors is None
assert data.get('upload') is not None
assert data.get('upload').get('success')
attachments = attachments_for_user(user_employee, AttachmentKey.COMPANY_AVATAR)
assert len(attachments) == 1
logout()
login(user_student)
data, errors = query_attachments_for_slug(user_student, user_employee.company.slug)
assert errors is None
assert data is not None
company_avatar_edges = data.get('companyAvatar').get('edges')
assert company_avatar_edges is not None
assert len(company_avatar_edges) == 1
company_avatar_fallback_edges = data.get('companyAvatarFallback').get('edges')
assert company_avatar_fallback_edges is not None
assert len(company_avatar_fallback_edges) == 1
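
# Editor's addition: a hedged refactor sketch. The three tests above differ
# only in the company ProfileState and the expected number of companyAvatar
# edges, so they could be collapsed with pytest.mark.parametrize. The fixtures
# are the same ones the tests above already use.
@pytest.mark.django_db
@pytest.mark.parametrize('state, expected_avatar_edges', [
    (ProfileState.INCOMPLETE, 0),
    (ProfileState.ANONYMOUS, 1),
    (ProfileState.PUBLIC, 1),
])
def test_attachments_by_company_state(login, user_student, upload, file_image_jpg,
                                      attachments_for_user, logout, user_employee,
                                      query_attachments_for_slug, state,
                                      expected_avatar_edges):
    user_employee.company.state = state
    user_employee.company.save()
    login(user_employee)
    data, errors = upload(user_employee, AttachmentKey.COMPANY_AVATAR, file_image_jpg)
    assert errors is None
    assert data.get('upload').get('success')
    assert len(attachments_for_user(user_employee, AttachmentKey.COMPANY_AVATAR)) == 1
    logout()
    login(user_student)
    data, errors = query_attachments_for_slug(user_student, user_employee.company.slug)
    assert errors is None
    assert len(data.get('companyAvatar').get('edges')) == expected_avatar_edges
    assert len(data.get('companyAvatarFallback').get('edges')) == 1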
| 37.244898 | 98 | 0.747945 | 473 | 3,650 | 5.4926 | 0.103594 | 0.120092 | 0.051963 | 0.069284 | 0.952271 | 0.952271 | 0.942648 | 0.942648 | 0.942648 | 0.942648 | 0 | 0.004305 | 0.172603 | 3,650 | 97 | 99 | 37.628866 | 0.85596 | 0.005753 | 0 | 0.878378 | 0 | 0 | 0.052109 | 0.01737 | 0 | 0 | 0 | 0 | 0.445946 | 1 | 0.040541 | false | 0 | 0.027027 | 0 | 0.067568 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
679a8f89a048c717363c34d5270c5660dc48f06d | 2,239 | py | Python | lantz/core/testsuite/test_processors.py | mtsolmn/lantz-core | 21e7112cecac78d51a98a5a6814566ec986f40ad | [
"BSD-3-Clause"
] | 3 | 2019-05-04T00:10:47.000Z | 2021-06-11T15:51:14.000Z | lantz/core/testsuite/test_processors.py | mtsolmn/lantz-core | 21e7112cecac78d51a98a5a6814566ec986f40ad | [
"BSD-3-Clause"
] | 4 | 2019-01-08T18:30:51.000Z | 2020-09-22T03:19:05.000Z | lantz/core/testsuite/test_processors.py | mtsolmn/lantz-core | 21e7112cecac78d51a98a5a6814566ec986f40ad | [
"BSD-3-Clause"
] | 5 | 2019-09-23T16:26:32.000Z | 2021-07-21T19:24:38.000Z | # -*- coding: utf-8 -*-
import unittest
import doctest
from lantz.core import Q_
import lantz.core.processors as processors
mv = Q_(1, 'mV')
Hz = Q_(1, 'Hz')
V = Q_(1, 'V')
class TestProcessors(unittest.TestCase):
def test_docs(self):
doctest.testmod(processors)
def test_invalid_arguments(self):
self.assertRaises(ValueError, processors.convert_to, V, on_incompatible='na')
self.assertRaises(ValueError, processors.convert_to, V, on_dimensionless='na')
self.assertRaises(ValueError, processors.convert_to, list())
def test_return_float(self):
self.assertEqual(processors.convert_to(V, return_float=True)(1*mv), 0.001)
self.assertRaises(ValueError, processors.convert_to(V, return_float=True, on_incompatible='raise'), Hz)
self.assertWarns(processors.DimensionalityWarning, processors.convert_to(V, return_float=True, on_incompatible='warn'), Hz)
self.assertEqual(processors.convert_to(V, return_float=True, on_incompatible='ignore')(Hz), 1)
self.assertRaises(ValueError, processors.convert_to(V, return_float=True, on_dimensionless='raise'), 1000)
self.assertWarns(processors.DimensionalityWarning, processors.convert_to(V, return_float=True, on_dimensionless='warn'), 1000)
self.assertEqual(processors.convert_to(V, return_float=True, on_dimensionless='ignore')(1000), 1000)
def test_return_quantity(self):
self.assertEqual(processors.convert_to(V)(1*mv), 0.001 * V)
self.assertRaises(ValueError, processors.convert_to(V, on_incompatible='raise'), Hz)
self.assertWarns(processors.DimensionalityWarning, processors.convert_to(V, on_incompatible='warn'), Hz)
self.assertEqual(processors.convert_to(V, on_incompatible='ignore')(Hz), 1 * V)
self.assertRaises(ValueError, processors.convert_to(V, on_dimensionless='raise'), 1000)
self.assertWarns(processors.DimensionalityWarning, processors.convert_to(V, on_dimensionless='warn'), 1000)
self.assertEqual(processors.convert_to(V, on_dimensionless='ignore')(1000), 1000 * V)
self.assertRaises(ValueError, processors.convert_to(V, on_dimensionless='raise'), 1000)
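
# Editor's addition: a hedged usage sketch grounded in the assertions above.
# convert_to(unit, ...) builds a callable that coerces values into `unit`;
# with return_float=True the converted magnitude is returned without the unit.
def _convert_to_example():
    to_volts = processors.convert_to(V, return_float=True)
    assert to_volts(1 * mv) == 0.001
    return to_volts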
if __name__ == '__main__':
unittest.main()
| 44.78 | 134 | 0.73381 | 279 | 2,239 | 5.677419 | 0.182796 | 0.193182 | 0.215909 | 0.214646 | 0.816919 | 0.787879 | 0.772096 | 0.71149 | 0.71149 | 0.558081 | 0 | 0.026929 | 0.137561 | 2,239 | 49 | 135 | 45.693878 | 0.793371 | 0.009379 | 0 | 0.060606 | 0 | 0 | 0.037004 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 1 | 0.121212 | false | 0 | 0.121212 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
db1d96b8635857af5104bc2a95528d8692bd5945 | 22,730 | py | Python | visualization_utils/plotting_Iulian.py | facebookresearch/Project_FARSI | 12b40e4f16ba7418a0f3b997ad124cdb51f4e7f4 | [
"MIT"
] | 14 | 2021-06-01T16:45:19.000Z | 2022-03-08T20:07:00.000Z | visualization_utils/plotting_Iulian.py | facebookresearch/Project_FARSI | 12b40e4f16ba7418a0f3b997ad124cdb51f4e7f4 | [
"MIT"
] | null | null | null | visualization_utils/plotting_Iulian.py | facebookresearch/Project_FARSI | 12b40e4f16ba7418a0f3b997ad124cdb51f4e7f4 | [
"MIT"
] | 3 | 2021-08-05T16:37:47.000Z | 2022-01-06T00:25:49.000Z | import pandas as pd
import seaborn as sns
import sys
import matplotlib.pyplot as plt
import numpy as np
sys.path.append("..")
#from plot_validations import *
from sklearn.linear_model import LinearRegression
from settings import config_plotting
import os
def abline(slope, intercept, color):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '--', color = color)
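
# Editor's illustration: e.g. abline(2.0, 1.0, "green") draws y = 2*x + 1
# across the current axes.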
def get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name, y_coord_name = "Simulation Time"):
avg_df_lst = []
for x_coord in set(reformatted_df[x_coord_name]):
#print("hola")
#print(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df["FARSI or PA"] == "FARSI")])
simtimes_farsi = list(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df["FARSI or PA"] == "FARSI")][y_coord_name])
simtimes_pa = list(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df["FARSI or PA"] == "PA")][y_coord_name])
print("simtimes_farsi")
print(simtimes_farsi)
print(np.average(simtimes_farsi))
print("simtimes_pa")
print(simtimes_pa)
print(np.average(simtimes_pa))
avg_df_lst.append([np.average(simtimes_farsi), "FARSI", x_coord])
avg_df_lst.append([np.average(simtimes_pa), "PA", x_coord])
return pd.DataFrame(avg_df_lst, columns = ["Simulation Time", "FARSI or PA", x_coord_name])
# Generalized version with configurable y-column and hue column; this
# definition shadows the simpler one above.
def get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name, y_coord_name = "Simulation Time", hue_col = "FARSI or PA"):
hues = set(list(reformatted_df[hue_col]))
avg_df_lst = []
for x_coord in set(reformatted_df[x_coord_name]):
#print("hola")
#print(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df["FARSI or PA"] == "FARSI")])
for hue in hues:
selectedy_hue = list(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df[hue_col] == hue)][y_coord_name])
avg_df_lst.append([np.average(selectedy_hue), hue, x_coord])
#simtimes_pa = list(reformatted_df.loc[(reformatted_df[x_coord_name] == x_coord) & (reformatted_df["FARSI or PA"] == "PA")][y_coord_name])
return pd.DataFrame(avg_df_lst, columns = [y_coord_name, hue_col, x_coord_name])
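
# Editor's addition: a hedged, self-contained demonstration of what
# get_df_as_avg_for_each_x_coord computes; the toy values are illustrative
# assumptions, not data from this project.
def _demo_get_df_as_avg_for_each_x_coord():
    toy = pd.DataFrame(
        [[1.0, "FARSI", 10], [3.0, "FARSI", 10], [2.0, "PA", 10]],
        columns=["Simulation Time", "FARSI or PA", "Block counts"])
    # Yields one averaged row per (x coordinate, hue): at x=10, FARSI
    # averages to 2.0 and PA stays 2.0.
    return get_df_as_avg_for_each_x_coord(toy, "Block counts")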
def plot_sim_time_vs_system_char_minimal(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
blk_cnt = list(data["blk_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
tmp_reformatted_df_data = [blk_cnt * 2, pa_sim_time + farsi_sim_time,
["PA"] * len(blk_cnt) + ["FARSI"] * len(blk_cnt)]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in
range(len(blk_cnt) * 2)]
# print(reformatted_df_data[0:3])
# exit()
# for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
reformatted_df = pd.DataFrame(reformatted_df_data,
columns=["Block counts", "Simulation Time",
"FARSI or PA"])
print(reformatted_df.head())
df_blk_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Block counts")
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name = "Block counts", y_coord_name = "Simulation Time", hue_col = "FARSI or PA")
#df_pe_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PE counts")
#df_mem_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Mem counts")
#df_bus_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Bus counts")
#print("Bola")
#print(df_blk_avg)
splot = sns.scatterplot(data=df_avg, x="Block counts", y="Simulation Time", hue="FARSI or PA")
splot.set(yscale="log")
color_per_hue = {"FARSI" : "green", "PA" : "orange"}
hues = set(list(df_avg["FARSI or PA"]))
for hue in hues:
#x required to be in matrix format in sklearn
print(np.isnan(df_avg["Simulation Time"]))
xs_hue = [[x] for x in list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Block counts"])]
ys_hue = np.array(list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Simulation Time"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
#plt.set_ylim(top = 10)
plt.savefig(os.path.join(output_dir,'block_counts_vs_simtime.png'))
plt.close("all")
def plot_sim_time_vs_system_char_minimal_for_paper(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
blk_cnt = list(data["blk_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
tmp_reformatted_df_data = [blk_cnt * 2, pa_sim_time + farsi_sim_time,
["PA"] * len(blk_cnt) + ["FARSI"] * len(blk_cnt)]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in
range(len(blk_cnt) * 2)]
# print(reformatted_df_data[0:3])
# exit()
# for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
reformatted_df = pd.DataFrame(reformatted_df_data,
columns=["Block Counts", "Simulation Time",
"FARSI or PA"])
print(reformatted_df.head())
df_blk_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Block Counts")
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name = "Block Counts", y_coord_name = "Simulation Time", hue_col = "FARSI or PA")
#df_pe_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PE counts")
#df_mem_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Mem counts")
#df_bus_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Bus counts")
#print("Bola")
#print(df_blk_avg)
axis_font = {'size': '20'}
fontSize = 20
sns.set(font_scale=2, rc={'figure.figsize': (6, 4)})
sns.set_style("white")
color_per_hue = {'PA': 'hotpink', 'FARSI': 'green'}
splot = sns.scatterplot(data=df_avg, x="Block Counts", y="Simulation Time", hue="FARSI or PA", sizes=(6, 6), palette=color_per_hue)
splot.set(yscale="log")
splot.legend(title="", fontsize=fontSize, loc="center right")
hues = set(list(df_avg["FARSI or PA"]))
for hue in hues:
#x required to be in matrix format in sklearn
print(np.isnan(df_avg["Simulation Time"]))
xs_hue = [[x] for x in list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Block Counts"])]
ys_hue = np.array(list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Simulation Time"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
#plt.set_ylim(top = 10)
plt.xticks(np.arange(0, 30, 10.0))
plt.yticks(np.power(10.0, [-1, 0, 1, 2, 3]))
plt.xlabel("Block Counts")
plt.ylabel("Simulation Time (s)")
plt.tight_layout()
plt.savefig(os.path.join(output_dir,'block_counts_vs_simtime.png'), bbox_inches='tight')
# plt.show()
plt.close("all")
"""
def plot_sim_time_vs_system_char(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
blk_cnt = list(data["blk_cnt"])
pe_cnt = list(data["pe_cnt"])
mem_cnt = list(data["mem_cnt"])
bus_cnt = list(data["bus_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
tmp_reformatted_df_data = [blk_cnt * 2, pe_cnt * 2, mem_cnt * 2, bus_cnt * 2, pa_sim_time + farsi_sim_time,
["PA"] * len(blk_cnt) + ["FARSI"] * len(blk_cnt)]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in
range(len(blk_cnt) * 2)]
# print(reformatted_df_data[0:3])
# exit()
# for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
reformatted_df = pd.DataFrame(reformatted_df_data,
columns=["Block counts", "PE counts", "Mem counts", "Bus counts", "Simulation Time",
"FARSI or PA"])
print(reformatted_df.head())
df_blk_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Block counts")
df_pe_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PE counts")
df_mem_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Mem counts")
df_bus_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Bus counts")
print("Bola")
print(df_blk_avg)
splot = sns.scatterplot(data=df_blk_avg, x="Block counts", y="Simulation Time", hue="FARSI or PA")
splot.set(yscale="log")
splot_1 = sns.scatterplot(data=df_pe_avg, x="PE counts", y="Simulation Time", hue="FARSI or PA")
splot_1.set(yscale="log")
splot_2 = sns.scatterplot(data=df_mem_avg, x="Mem counts", y="Simulation Time", hue="FARSI or PA")
splot_1.set(yscale="log")
splot_3 = sns.scatterplot(data=df_bus_avg, x="Bus counts", y="Simulation Time", hue="FARSI or PA")
splot_1.set(yscale="log")
plt.savefig(os.path.join(output_dir,'block_counts_vs_simtime.png'))
"""
def plot_error_vs_system_char(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
error = list(data["error"])
blk_cnt = list(data["blk_cnt"])
pe_cnt = list(data["pe_cnt"])
mem_cnt = list(data["mem_cnt"])
bus_cnt = list(data["bus_cnt"])
#channel_cnt = list(data["channel_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
num_counts_cols = 4
tmp_reformatted_df_data = [blk_cnt+pe_cnt+mem_cnt+bus_cnt, ["Block Counts"]*len(blk_cnt)+["PE Counts"]*len(blk_cnt) + ["Mem Counts"]*len(blk_cnt) + ["Bus Counts"]*len(bus_cnt) , error*num_counts_cols]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in range(len(blk_cnt)*num_counts_cols) ]
#print(reformatted_df_data[0:3])
#exit()
#for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
reformatted_df = pd.DataFrame(reformatted_df_data, columns = ["Counts", "ArchParam", "Error"])
print(reformatted_df.tail())
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name = "Counts", y_coord_name = "Error", hue_col = "ArchParam")
color_per_hue = {"Bus Counts" : "green", "Mem Counts" : "orange", "PE Counts" : "blue", "Block Counts" : "red", "Channel Counts" : "pink"}
#df_avg = df_avg.loc[df_avg["ArchParam"] != "Bus Counts"]
splot = sns.scatterplot(data=df_avg, y = "Error", x = "Counts", hue = "ArchParam", palette = color_per_hue)
#splot.set(yscale = "log")
#sklearn.linear_model.LinearRegression()
hues = set(list(df_avg["ArchParam"]))
for hue in hues:
#x required to be in matrix format in sklearn
print(np.isnan(df_avg["Error"]))
xs_hue = [[x] for x in list(df_avg.loc[(df_avg["ArchParam"] == hue) & (df_avg["Error"].notnull())]["Counts"])]
ys_hue = np.array(list(df_avg.loc[(df_avg["ArchParam"] == hue) & (df_avg["Error"].notnull())]["Error"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
#plt.set_ylim(top = 10)
output_file = os.path.join(output_dir, "error_vs_system_char.png")
plt.savefig(output_file)
plt.close("all")
def plot_error_vs_system_char_for_paper(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
error = list(data["error"])
blk_cnt = list(data["blk_cnt"])
pe_cnt = list(data["pe_cnt"])
mem_cnt = list(data["mem_cnt"])
bus_cnt = list(data["bus_cnt"])
#channel_cnt = list(data["channel_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
num_counts_cols = 4
tmp_reformatted_df_data = [blk_cnt+pe_cnt+mem_cnt+bus_cnt, ["Block Counts"]*len(blk_cnt)+["PE Counts"]*len(blk_cnt) + ["Memory Counts"]*len(blk_cnt) + ["NoC Counts"]*len(bus_cnt) , error*num_counts_cols]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in range(len(blk_cnt)*num_counts_cols) ]
#print(reformatted_df_data[0:3])
#exit()
#for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
reformatted_df = pd.DataFrame(reformatted_df_data, columns = ["Counts", "ArchParam", "Error"])
print(reformatted_df.tail())
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name = "Counts", y_coord_name = "Error", hue_col = "ArchParam")
color_per_hue = {"NoC Counts" : "green", "Memory Counts" : "orange", "PE Counts" : "blue", "Block Counts" : "red", "Channel Counts" : "pink"}
#df_avg = df_avg.loc[df_avg["ArchParam"] != "Bus Counts"]
axis_font = {'size': '20'}
fontSize = 20
sns.set(font_scale=2, rc={'figure.figsize': (6, 4.2)})
sns.set_style("white")
splot = sns.scatterplot(data=df_avg, y = "Error", x = "Counts", hue = "ArchParam", palette = color_per_hue, hue_order= ["NoC Counts", "Memory Counts", "PE Counts", "Block Counts"], sizes=(8, 8))
#splot.set(yscale = "log")
#sklearn.linear_model.LinearRegression()
hues = set(list(df_avg["ArchParam"]))
splot.legend(title="", fontsize=fontSize, loc="upper right")
for hue in hues:
#x required to be in matrix format in sklearn
print(np.isnan(df_avg["Error"]))
xs_hue = [[x] for x in list(df_avg.loc[(df_avg["ArchParam"] == hue) & (df_avg["Error"].notnull())]["Counts"])]
ys_hue = np.array(list(df_avg.loc[(df_avg["ArchParam"] == hue) & (df_avg["Error"].notnull())]["Error"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
#plt.set_ylim(top = 10)
plt.xticks(np.arange(-5, 30, 10.0))
plt.yticks(np.arange(-5, 50, 10.0))
plt.xlabel("Block Counts")
plt.ylabel("Error (%)")
plt.tight_layout()
output_file = os.path.join(output_dir, "error_vs_system_char.png")
plt.savefig(output_file, bbox_inches='tight')
# plt.show()
plt.close("all")
def plot_latency_vs_sim_time(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
blk_cnt = list(data["blk_cnt"])
pe_cnt = list(data["pe_cnt"])
mem_cnt = list(data["mem_cnt"])
bus_cnt = list(data["bus_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
pa_predicted_lat = list(data["PA_predicted_latency"])
tmp_reformatted_df_data = [pa_predicted_lat * 2, pa_sim_time + farsi_sim_time,
["PA"] * len(blk_cnt) + ["FARSI"] * len(blk_cnt)]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in
range(len(blk_cnt) * 2)]
# print(reformatted_df_data[0:3])
# exit()
# for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
    reformatted_df = pd.DataFrame(reformatted_df_data,
                                  columns=["PA _predicted_latencys", "Simulation Time",
                                           "FARSI or PA"])
print(reformatted_df.head())
df_blk_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PA _predicted_latencys")
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name="PA _predicted_latencys", y_coord_name="Simulation Time",
hue_col="FARSI or PA")
# df_pe_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PE counts")
# df_mem_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Mem counts")
# df_bus_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Bus counts")
# print("Bola")
# print(df_blk_avg)
splot = sns.scatterplot(data=df_avg, x="PA _predicted_latencys", y="Simulation Time", hue="FARSI or PA")
splot.set(yscale="log")
color_per_hue = {"FARSI": "green", "PA": "orange"}
hues = set(list(df_avg["FARSI or PA"]))
for hue in hues:
# x required to be in matrix format in sklearn
print(np.isnan(df_avg["Simulation Time"]))
xs_hue = [[x] for x in list(
df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["PA _predicted_latencys"])]
ys_hue = np.array(
list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Simulation Time"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
# plt.set_ylim(top = 10)
#plt.savefig(os.path.join(output_dir, 'block_counts_vs_simtime.png'))
plt.savefig(os.path.join(output_dir,'latency_vs_sim_time.png'))
plt.close("all")
"""
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name = "counts", y_coord_name = "Simulation Time", hue_col = "FARSI or PA")
print(reformatted_df.head())
splot = sns.scatterplot(data=reformatted_df, x="PA Predicted Latency", y="Simulation Time", hue="FARSI or PA")
splot.set(yscale="log")
output_file = os.path.join(output_dir, "sim_time_vs_latency.png")
plt.savefig(output_file)
plt.close("all")
"""
def plot_latency_vs_sim_time_for_paper(output_dir, csv_file_addr):
data = pd.read_csv(csv_file_addr)
blk_cnt = list(data["blk_cnt"])
pe_cnt = list(data["pe_cnt"])
mem_cnt = list(data["mem_cnt"])
bus_cnt = list(data["bus_cnt"])
pa_sim_time = list(data["PA simulation time"])
farsi_sim_time = list(data["FARSI simulation time"])
pa_predicted_lat = list(data["PA_predicted_latency"])
tmp_reformatted_df_data = [pa_predicted_lat * 2, pa_sim_time + farsi_sim_time,
["PA"] * len(blk_cnt) + ["FARSI"] * len(blk_cnt)]
reformatted_df_data = [[tmp_reformatted_df_data[j][i] for j in range(len(tmp_reformatted_df_data))] for i in
range(len(blk_cnt) * 2)]
# print(reformatted_df_data[0:3])
# exit()
# for col in reformatted_df_data:
# print("Len of col is {}".format(len(col)))
    reformatted_df = pd.DataFrame(reformatted_df_data,
                                  columns=["PA _predicted_latencys", "Simulation Time",
                                           "FARSI or PA"])
print(reformatted_df.head())
df_blk_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PA _predicted_latencys")
df_avg = get_df_as_avg_for_each_x_coord(reformatted_df, x_coord_name="PA _predicted_latencys", y_coord_name="Simulation Time",
hue_col="FARSI or PA")
# df_pe_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "PE counts")
# df_mem_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Mem counts")
# df_bus_avg = get_df_as_avg_for_each_x_coord(reformatted_df, "Bus counts")
# print("Bola")
# print(df_blk_avg)
axis_font = {'size': '20'}
fontSize = 20
sns.set(font_scale=2, rc={'figure.figsize': (6, 4)})
sns.set_style("white")
color_per_hue = {'PA': 'hotpink', 'FARSI': 'green'}
splot = sns.scatterplot(data=df_avg, x="PA _predicted_latencys", y="Simulation Time", hue="FARSI or PA", palette=color_per_hue)
splot.set(yscale="log")
splot.legend(title="", fontsize=fontSize, loc="center right")
hues = set(list(df_avg["FARSI or PA"]))
for hue in hues:
# x required to be in matrix format in sklearn
print(np.isnan(df_avg["Simulation Time"]))
xs_hue = [[x] for x in list(
df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["PA _predicted_latencys"])]
ys_hue = np.array(
list(df_avg.loc[(df_avg["FARSI or PA"] == hue) & (df_avg["Simulation Time"].notnull())]["Simulation Time"]))
print("xs_hue")
print(xs_hue)
print("ys_hue")
print(ys_hue)
reg = LinearRegression().fit(xs_hue, ys_hue)
m = reg.coef_[0]
n = reg.intercept_
abline(m, n, color_per_hue[hue])
# plt.set_ylim(top = 10)
plt.xticks(np.arange(0, 60, 10.0))
plt.yticks(np.power(10.0, [-1, 0, 1, 2, 3]))
plt.xlabel("Execution latency")
plt.ylabel("Simulation Time (s)")
plt.tight_layout()
#plt.savefig(os.path.join(output_dir, 'block_counts_vs_simtime.png'))
plt.savefig(os.path.join(output_dir,'latency_vs_sim_time.png'), bbox_inches='tight')
# plt.show()
plt.close("all")
if __name__ == "__main__": # Ying: for aggregate_data
run_folder_name = config_plotting.run_folder_name
csv_file_addr = os.path.join(run_folder_name, "input_data","aggregate_data.csv")
output_dir = os.path.join(run_folder_name, "validation")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if config_plotting.draw_for_paper: # Ying: "cross_workloads", from aggregate_data
plot_error_vs_system_char_for_paper(output_dir, csv_file_addr)
plot_sim_time_vs_system_char_minimal_for_paper(output_dir, csv_file_addr)
plot_latency_vs_sim_time_for_paper(output_dir, csv_file_addr)
else:
plot_error_vs_system_char(output_dir, csv_file_addr)
plot_sim_time_vs_system_char_minimal(output_dir, csv_file_addr)
plot_latency_vs_sim_time(output_dir, csv_file_addr)
| 44.135922 | 207 | 0.653718 | 3,462 | 22,730 | 3.958983 | 0.059792 | 0.11287 | 0.063257 | 0.048519 | 0.902451 | 0.900409 | 0.88837 | 0.875456 | 0.875237 | 0.872829 | 0 | 0.006563 | 0.202244 | 22,730 | 514 | 208 | 44.22179 | 0.749297 | 0.137879 | 0 | 0.719595 | 0 | 0 | 0.163613 | 0.00872 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030405 | false | 0 | 0.027027 | 0 | 0.064189 | 0.141892 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2204bcfc62991eebcee379c1b634e914168a992c | 164 | py | Python | Day47/Remove_all_except_numbers_and_letters.py | tushartrip1010/100_days_code_py | ee74b429e98cdd8bdf8661cf987da67c9fee5a3e | [
"Apache-2.0"
] | null | null | null | Day47/Remove_all_except_numbers_and_letters.py | tushartrip1010/100_days_code_py | ee74b429e98cdd8bdf8661cf987da67c9fee5a3e | [
"Apache-2.0"
] | null | null | null | Day47/Remove_all_except_numbers_and_letters.py | tushartrip1010/100_days_code_py | ee74b429e98cdd8bdf8661cf987da67c9fee5a3e | [
"Apache-2.0"
] | null | null | null | import re
def Remove_all(Test_string):
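    # \W matches any character that is not [a-zA-Z0-9_]; the underscore is
    # added explicitly so only letters and digits survive.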
return re.sub(r"[\W_]+", "", Test_string)
Test_string = "123abcjw:, .@! eiw"
print(Remove_all(Test_string))
| 16.4 | 46 | 0.640244 | 23 | 164 | 4.26087 | 0.608696 | 0.408163 | 0.265306 | 0.387755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022556 | 0.189024 | 164 | 9 | 47 | 18.222222 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.154839 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
22181f857b0ebcea082bfd7c8ba13fc7920def39 | 55,030 | py | Python | src/training_modules/unused_modules/train_waymo_rnn.py | petergroth/trajectory_forecasting | 35bcf1e60d818cc1aaff746c3818ff56c574e854 | [
"MIT"
] | 1 | 2022-01-26T11:54:46.000Z | 2022-01-26T11:54:46.000Z | src/training_modules/unused_modules/train_waymo_rnn.py | petergroth/trajectory_forecasting | 35bcf1e60d818cc1aaff746c3818ff56c574e854 | [
"MIT"
] | null | null | null | src/training_modules/unused_modules/train_waymo_rnn.py | petergroth/trajectory_forecasting | 35bcf1e60d818cc1aaff746c3818ff56c574e854 | [
"MIT"
] | 1 | 2022-03-18T03:13:01.000Z | 2022-03-18T03:13:01.000Z | import argparse
import math
import os
import random
from typing import Union
import hydra
import pytorch_lightning as pl
import torch
import torch_geometric.nn
import torchmetrics
from omegaconf import DictConfig, OmegaConf
from pytorch_lightning.callbacks import RichProgressBar
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.utilities.seed import seed_everything
from torch_geometric.data import Batch
from src.data.dataset_waymo import (OneStepWaymoDataModule,
SequentialWaymoDataModule)
from src.models.model import *
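# The star import exposes the concrete model classes so that
# eval(model_type) inside WaymoModule can resolve them by name.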
class WaymoModule(pl.LightningModule):
def __init__(
self,
model_type: Union[None, str],
model_dict: Union[None, dict],
lr: float = 1e-4,
weight_decay: float = 0.0,
noise: Union[None, float] = None,
teacher_forcing_ratio: float = 0.0,
min_dist: int = 0,
n_neighbours: int = 30,
edge_weight: bool = False,
self_loop: bool = False,
out_features: int = 6,
node_features: int = 9,
edge_features: int = 1,
normalise: bool = True,
training_horizon: int = 90,
edge_dropout: float = 0,
prediction_horizon: int = 91,
):
super().__init__()
# Training metrics
self.train_ade_loss = torchmetrics.MeanSquaredError()
self.train_fde_loss = torchmetrics.MeanSquaredError()
self.train_vel_loss = torchmetrics.MeanSquaredError()
# Validation metrics
self.val_ade_loss = torchmetrics.MeanSquaredError()
self.val_fde_loss = torchmetrics.MeanSquaredError()
self.val_vel_loss = torchmetrics.MeanSquaredError()
self.val_fde_ttp_loss = torchmetrics.MeanSquaredError()
self.val_ade_ttp_loss = torchmetrics.MeanSquaredError()
# Testing metrics
self.test_ade_loss = torchmetrics.MeanSquaredError()
self.test_fde_loss = torchmetrics.MeanSquaredError()
self.test_vel_loss = torchmetrics.MeanSquaredError()
self.test_fde_ttp_loss = torchmetrics.MeanSquaredError()
self.test_ade_ttp_loss = torchmetrics.MeanSquaredError()
# Instantiate model
self.model_type = model_type
self.model = eval(model_type)(**model_dict)
# Learning parameters
self.normalise = normalise
self.global_scale = 8.025897979736328
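        # Global feature scaler; presumably precomputed from the training set
        # (the commented-out value of 1 disables scaling).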
# self.global_scale = 1
self.noise = noise
self.lr = lr
self.weight_decay = weight_decay
self.teacher_forcing_ratio = teacher_forcing_ratio
self.training_horizon = training_horizon
self.norm_index = [0, 1, 2, 3, 4, 5, 6]
self.pos_index = [0, 1]
self.edge_dropout = edge_dropout
self.prediction_horizon = prediction_horizon
# Model parameters
self.rnn_type = model_dict["rnn_type"]
self.out_features = out_features
self.edge_features = edge_features
self.node_features = node_features
# Graph parameters
self.min_dist = min_dist
self.n_neighbours = n_neighbours
self.edge_weight = edge_weight
self.self_loop = self_loop
self.save_hyperparameters()
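    # training_step, validation_step, test_step and predict_step share one
    # structure: the first 11 observed frames prime the recurrent state, after
    # which the model rolls out autoregressively, rebuilding the interaction
    # graph (radius_graph) at every step.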
def training_step(self, batch: Batch, batch_idx: int):
######################
# Initialisation #
######################
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
# Discard future values not used for training
batch.x = batch.x[:, : (self.training_horizon + 1)]
# Update mask
mask = batch.x[:, :, -1].bool()
# Discard masks and extract static features
batch.x = batch.x[:, :, :-1]
# static_features = torch.cat(
# [batch.x[:, 10, self.out_features :], batch.type], dim=1
# )
static_features = batch.x[:, 10, self.out_features :]
static_features = static_features.type_as(batch.x)
edge_attr = None
# Extract dimensions and allocate predictions
n_nodes = batch.num_nodes
y_predictions = torch.zeros((n_nodes, self.training_horizon, self.out_features))
y_predictions = y_predictions.type_as(batch.x)
# Tensor of position and velocity targets
        y_target = batch.x[:, 1 : (self.training_horizon + 1), : self.out_features]
y_target = y_target.type_as(batch.x)
assert y_target.shape == y_predictions.shape
# Initial hidden state
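        # A GRU carries hidden states only; an LSTM additionally needs the
        # cell states c_node / c_edge, hence the two branches below.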
if self.rnn_type == "GRU":
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node, c_edge = None, None
else:
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
c_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
c_node = c_node.type_as(batch.x)
c_edge = c_edge.type_as(batch.x)
######################
# History #
######################
for t in range(11):
# Extract current input
mask_t = mask[:, t]
# x_t = torch.cat([batch.x[mask_t, t, :], batch.type[mask_t]], dim=1)
x_t = batch.x[mask_t, t, :]
x_t = x_t.type_as(batch.x)
# Add noise if specified
if self.noise is not None:
x_t[:, : self.out_features] += self.noise * torch.randn_like(
x_t[:, : self.out_features]
)
######################
# Graph construction #
######################
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch[mask_t],
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
if self.edge_dropout > 0:
edge_index, edge_attr = dropout_adj(
edge_index=edge_index, edge_attr=edge_attr, p=self.edge_dropout
)
#######################
# Training 1/2 #
#######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][mask_t][
:, self.pos_index
]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain predicted delta dynamics
if self.rnn_type == "GRU":
hidden_in = (h_node[:, mask_t], h_edge[:, mask_t])
delta_x, h_t = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
# Update hidden states
h_node[:, mask_t] = h_t[0]
h_edge[:, mask_t] = h_t[1]
else: # LSTM
hidden_in = (
(h_node[:, mask_t], c_node[:, mask_t]),
(h_edge[:, mask_t], c_edge[:, mask_t]),
)
delta_x, (h_node_out, h_edge_out) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
h_node[:, mask_t] = h_node_out[0]
c_node[:, mask_t] = h_node_out[1]
h_edge[:, mask_t] = h_edge_out[0]
c_edge[:, mask_t] = h_edge_out[1]
vel = delta_x[:, [0, 1]]
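            # Forward-Euler position update; the 0.1 s step matches the
            # dataset's 10 Hz sampling rate.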
pos = batch.x[mask_t, t][:, self.pos_index] + 0.1 * vel
x_t = torch.cat([pos, vel, static_features[mask_t]], dim=-1)
x_t = x_t.type_as(batch.x)
# Save deltas for loss computation
y_predictions[mask_t, t, :] = x_t[:, : self.out_features]
        # Teacher forcing: with probability `teacher_forcing_ratio`, feed the
        # ground-truth states during the rollout instead of the model's own predictions.
        use_groundtruth = random.random() < self.teacher_forcing_ratio
######################
# Future #
######################
for t in range(11, self.training_horizon):
# Use groundtruth 'teacher_forcing_ratio' % of the time
if use_groundtruth:
# x_t = torch.cat([batch.x[:, t, :], batch.type], dim=1)
x_t = batch.x[:, t, :].clone()
x_prev = x_t.clone()
######################
# Graph construction #
######################
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch,
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
if self.edge_dropout > 0:
edge_index, edge_attr = dropout_adj(
edge_index=edge_index, edge_attr=edge_attr, p=self.edge_dropout
)
#######################
# Training 2/2 #
#######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][:, self.pos_index]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
delta_x, (h_node, h_edge) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=(h_node, h_edge),
)
else:
delta_x, ((h_node, c_node), (h_edge, c_edge)) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=((h_node, c_node), (h_edge, c_edge)),
)
vel = delta_x[:, [0, 1]]
pos = x_prev[:, [0, 1]] + 0.1 * vel
x_t = torch.cat([pos, vel, static_features], dim=-1)
x_t = x_t.type_as(batch.x)
# Save deltas for loss computation
y_predictions[:, t, :] = x_t[:, : self.out_features]
# Determine valid input and target pairs. Compute loss mask as their intersection
loss_mask_target = mask[:, 1 : (self.training_horizon + 1)]
loss_mask_input = mask[:, 0 : self.training_horizon]
loss_mask = torch.logical_and(loss_mask_input, loss_mask_target)
# Determine valid end-points
fde_mask_target = mask[:, -1]
fde_mask_input = mask[:, -2]
fde_mask = torch.logical_and(fde_mask_input, fde_mask_target)
assert (y_target[:, :, [0, 1]][loss_mask] == 0).sum() == 0
assert (y_predictions[:, :, [0, 1]][loss_mask] == 0).sum() == 0
# Compute and log loss
fde_loss = self.train_fde_loss(
y_predictions[fde_mask, -1][:, [0, 1]], y_target[fde_mask, -1][:, [0, 1]]
)
ade_loss = self.train_ade_loss(
y_predictions[:, :, [0, 1]][loss_mask], y_target[:, :, [0, 1]][loss_mask]
)
vel_loss = self.train_vel_loss(
y_predictions[:, :, [2, 3]][loss_mask], y_target[:, :, [2, 3]][loss_mask]
)
self.log(
"train_fde_loss",
fde_loss,
on_step=True,
on_epoch=True,
batch_size=fde_mask.sum().item(),
)
self.log(
"train_ade_loss",
ade_loss,
on_step=True,
on_epoch=True,
batch_size=loss_mask.sum().item(),
)
self.log(
"train_vel_loss",
vel_loss,
on_step=True,
on_epoch=True,
batch_size=loss_mask.sum().item(),
)
loss = ade_loss
self.log(
"train_total_loss",
loss,
on_step=True,
on_epoch=True,
batch_size=loss_mask.sum().item(),
)
return loss
def validation_step(self, batch: Batch, batch_idx: int):
######################
# Initialisation #
######################
# Validate on sequential dataset. First 11 observations are used to prime the model.
# Loss is computed on remaining 80 samples using rollout.
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
# Update input using prediction horizon
batch.x = batch.x[:, : self.prediction_horizon]
# Update mask
mask = batch.x[:, :, -1].bool()
# Allocate target/prediction tensors
n_nodes = batch.num_nodes
y_hat = torch.zeros((self.prediction_horizon - 11, n_nodes, self.out_features))
y_hat = y_hat.type_as(batch.x)
y_target = torch.zeros(
(self.prediction_horizon - 11, n_nodes, self.out_features)
)
y_target = y_target.type_as(batch.x)
batch.x = batch.x[:, :, :-1]
# static_features = torch.cat(
# [batch.x[:, 10, self.out_features :], batch.type], dim=1
# )
static_features = batch.x[:, 10, self.out_features :]
static_features = static_features.type_as(batch.x)
edge_attr = None
# Initial hidden state
if self.rnn_type == "GRU":
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node, c_edge = None, None
else:
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
c_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
c_node = c_node.type_as(batch.x)
c_edge = c_edge.type_as(batch.x)
######################
# History #
######################
for t in range(11):
######################
# Graph construction #
######################
mask_t = mask[:, t]
# x_t = torch.cat([batch.x[mask_t, t, :], batch.type[mask_t]], dim=1)
x_t = batch.x[mask_t, t, :]
x_t = x_t.type_as(batch.x)
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch[mask_t],
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Validation 1/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][mask_t][
:, self.pos_index
]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
hidden_in = (h_node[:, mask_t], h_edge[:, mask_t])
delta_x, h_t = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
# Update hidden states
h_node[:, mask_t] = h_t[0]
h_edge[:, mask_t] = h_t[1]
else: # LSTM
hidden_in = (
(h_node[:, mask_t], c_node[:, mask_t]),
(h_edge[:, mask_t], c_edge[:, mask_t]),
)
delta_x, (h_node_out, h_edge_out) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
h_node[:, mask_t] = h_node_out[0]
c_node[:, mask_t] = h_node_out[1]
h_edge[:, mask_t] = h_edge_out[0]
c_edge[:, mask_t] = h_edge_out[1]
if t == 10:
vel = delta_x[:, [0, 1]]
pos = batch.x[mask_t, t][:, self.pos_index] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features[mask_t]], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save first prediction and target
y_hat[0, mask_t, :] = predicted_graph[:, : self.out_features]
y_target[0, mask_t, :] = batch.x[mask_t, 11, : self.out_features]
######################
# Future #
######################
for t in range(11, self.prediction_horizon - 1):
######################
# Graph construction #
######################
# Latest prediction as input
x_t = predicted_graph.clone()
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch,
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Validation 2/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][:, self.pos_index]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
delta_x, (h_node, h_edge) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=(h_node, h_edge),
)
else:
delta_x, ((h_node, c_node), (h_edge, c_edge)) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=((h_node, c_node), (h_edge, c_edge)),
)
vel = delta_x[:, [0, 1]]
pos = predicted_graph[:, [0, 1]] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save prediction alongside true value (next time step state)
y_hat[t - 10, :, :] = predicted_graph[:, : self.out_features]
y_target[t - 10, :, :] = batch.x[:, t + 1, : self.out_features]
fde_mask = mask[:, -1]
val_mask = mask[:, 11:].permute(1, 0)
# Compute and log loss
fde_loss = self.val_fde_loss(
y_hat[-1, fde_mask][:, [0, 1]], y_target[-1, fde_mask][:, [0, 1]]
)
ade_loss = self.val_ade_loss(
y_hat[:, :, [0, 1]][val_mask], y_target[:, :, [0, 1]][val_mask]
)
vel_loss = self.val_vel_loss(
y_hat[:, :, [2, 3]][val_mask], y_target[:, :, [2, 3]][val_mask]
)
# Compute losses on "tracks_to_predict"
fde_ttp_mask = torch.logical_and(fde_mask, batch.tracks_to_predict)
fde_ttp_loss = self.val_fde_ttp_loss(
y_hat[-1, fde_ttp_mask][:, [0, 1]], y_target[-1, fde_ttp_mask][:, [0, 1]]
)
ade_ttp_mask = torch.logical_and(
val_mask,
batch.tracks_to_predict.expand(
(self.prediction_horizon - 11, mask.size(0))
),
)
ade_ttp_loss = self.val_ade_loss(
y_hat[:, :, [0, 1]][ade_ttp_mask], y_target[:, :, [0, 1]][ade_ttp_mask]
)
######################
# Logging #
######################
self.log("val_ade_loss", ade_loss, batch_size=val_mask.sum().item())
self.log("val_fde_loss", fde_loss, batch_size=fde_mask.sum().item())
self.log("val_vel_loss", vel_loss, batch_size=val_mask.sum().item())
loss = ade_loss
self.log("val_total_loss", loss, batch_size=val_mask.sum().item())
self.log("val_fde_ttp_loss", fde_ttp_loss, batch_size=fde_ttp_mask.sum().item())
self.log("val_ade_ttp_loss", ade_ttp_loss, batch_size=ade_ttp_mask.sum().item())
return loss
def test_step(self, batch: Batch, batch_idx: int):
######################
# Initialisation #
######################
# Test on sequential dataset. First 11 observations are used to prime the model.
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
# Update input using prediction horizon
batch.x = batch.x[:, : self.prediction_horizon]
# Update mask
mask = batch.x[:, :, -1].bool()
# Allocate target/prediction tensors
n_nodes = batch.num_nodes
y_hat = torch.zeros((self.prediction_horizon - 11, n_nodes, self.out_features))
y_hat = y_hat.type_as(batch.x)
y_target = torch.zeros(
(self.prediction_horizon - 11, n_nodes, self.out_features)
)
y_target = y_target.type_as(batch.x)
batch.x = batch.x[:, :, :-1]
# static_features = torch.cat(
# [batch.x[:, 10, self.out_features :], batch.type], dim=1
# )
static_features = batch.x[:, 10, self.out_features :]
static_features = static_features.type_as(batch.x)
edge_attr = None
# Initial hidden state
if self.rnn_type == "GRU":
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node, c_edge = None, None
else:
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
c_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
c_node = c_node.type_as(batch.x)
c_edge = c_edge.type_as(batch.x)
######################
# History #
######################
for t in range(11):
######################
# Graph construction #
######################
mask_t = mask[:, t]
# x_t = torch.cat([batch.x[mask_t, t, :], batch.type[mask_t]], dim=1)
x_t = batch.x[mask_t, t, :]
x_t = x_t.type_as(batch.x)
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch[mask_t],
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Testing 1/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][mask_t][
:, self.pos_index
]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
hidden_in = (h_node[:, mask_t], h_edge[:, mask_t])
delta_x, h_t = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
# Update hidden states
h_node[:, mask_t] = h_t[0]
h_edge[:, mask_t] = h_t[1]
else: # LSTM
hidden_in = (
(h_node[:, mask_t], c_node[:, mask_t]),
(h_edge[:, mask_t], c_edge[:, mask_t]),
)
delta_x, (h_node_out, h_edge_out) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
h_node[:, mask_t] = h_node_out[0]
c_node[:, mask_t] = h_node_out[1]
h_edge[:, mask_t] = h_edge_out[0]
c_edge[:, mask_t] = h_edge_out[1]
if t == 10:
vel = delta_x[:, [0, 1]]
pos = batch.x[mask_t, t][:, self.pos_index] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features[mask_t]], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save first prediction and target
y_hat[0, mask_t, :] = predicted_graph[:, : self.out_features]
y_target[0, mask_t, :] = batch.x[mask_t, 11, : self.out_features]
######################
# Future #
######################
for t in range(11, self.prediction_horizon - 1):
######################
# Graph construction #
######################
# Latest prediction as input
x_t = predicted_graph.clone()
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch,
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Testing 2/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][:, self.pos_index]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
delta_x, (h_node, h_edge) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=(h_node, h_edge),
)
else:
delta_x, ((h_node, c_node), (h_edge, c_edge)) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=((h_node, c_node), (h_edge, c_edge)),
)
vel = delta_x[:, [0, 1]]
pos = predicted_graph[:, [0, 1]] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save prediction alongside true value (next time step state)
y_hat[t - 10, :, :] = predicted_graph[:, : self.out_features]
y_target[t - 10, :, :] = batch.x[:, t + 1, : self.out_features]
fde_mask = mask[:, -1]
val_mask = mask[:, 11:].permute(1, 0)
# Compute and log loss
fde_loss = self.test_fde_loss(
y_hat[-1, fde_mask][:, [0, 1]], y_target[-1, fde_mask][:, [0, 1]]
)
ade_loss = self.test_ade_loss(
y_hat[:, :, [0, 1]][val_mask], y_target[:, :, [0, 1]][val_mask]
)
vel_loss = self.test_vel_loss(
y_hat[:, :, [2, 3]][val_mask], y_target[:, :, [2, 3]][val_mask]
)
# Compute losses on "tracks_to_predict"
fde_ttp_mask = torch.logical_and(fde_mask, batch.tracks_to_predict)
fde_ttp_loss = self.test_fde_ttp_loss(
y_hat[-1, fde_ttp_mask][:, [0, 1]], y_target[-1, fde_ttp_mask][:, [0, 1]]
)
ade_ttp_mask = torch.logical_and(
val_mask,
batch.tracks_to_predict.expand(
(self.prediction_horizon - 11, mask.size(0))
),
)
ade_ttp_loss = self.test_ade_loss(
y_hat[:, :, [0, 1]][ade_ttp_mask], y_target[:, :, [0, 1]][ade_ttp_mask]
)
######################
# Logging #
######################
self.log("test_ade_loss", ade_loss, batch_size=val_mask.sum().item())
self.log("test_fde_loss", fde_loss, batch_size=fde_mask.sum().item())
self.log("test_vel_loss", vel_loss, batch_size=val_mask.sum().item())
loss = ade_loss
self.log("test_total_loss", loss, batch_size=val_mask.sum().item())
self.log(
"test_fde_ttp_loss", fde_ttp_loss, batch_size=fde_ttp_mask.sum().item()
)
self.log(
"test_ade_ttp_loss", ade_ttp_loss, batch_size=ade_ttp_mask.sum().item()
)
return loss
def predict_step(self, batch, batch_idx=None, prediction_horizon: int = 91):
######################
# Initialisation #
######################
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
        # Reduction: limit node features to the configured subset.
        # Note: self.node_indices is never set in __init__, so it must be
        # assigned on the module before predict_step is called.
        batch.x = batch.x[:, :, self.node_indices]
batch.x = batch.x[:, :prediction_horizon]
# Update mask
mask = batch.x[:, :, -1].bool()
# Allocate target/prediction tensors
n_nodes = batch.num_nodes
y_hat = torch.zeros((prediction_horizon - 1, n_nodes, self.node_features))
y_target = torch.zeros((prediction_horizon - 1, n_nodes, self.node_features))
# Ensure device placement
y_hat = y_hat.type_as(batch.x)
y_target = y_target.type_as(batch.x)
batch.x = batch.x[:, :, :-1]
# static_features = torch.cat(
# [batch.x[:, 10, self.out_features :], batch.type], dim=1
# )
static_features = batch.x[:, 10, self.out_features :]
edge_attr = None
# Initial hidden state
if self.rnn_type == "GRU":
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node, c_edge = None, None
else:
h_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
h_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
h_node = h_node.type_as(batch.x)
h_edge = h_edge.type_as(batch.x)
c_node = torch.zeros((self.model.num_layers, n_nodes, self.model.rnn_size))
c_edge = torch.zeros(
(self.model.num_layers, n_nodes, self.model.rnn_edge_size)
)
c_node = c_node.type_as(batch.x)
c_edge = c_edge.type_as(batch.x)
######################
# History #
######################
for t in range(11):
######################
# Graph construction #
######################
mask_t = mask[:, t]
# x_t = torch.cat([batch.x[mask_t, t, :], batch.type[mask_t]], dim=1)
x_t = batch.x[mask_t, t, :]
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch[mask_t],
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Predictions 1/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][mask_t][
:, self.pos_index
]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
hidden_in = (h_node[:, mask_t], h_edge[:, mask_t])
delta_x, h_t = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
# Update hidden states
h_node[:, mask_t] = h_t[0]
h_edge[:, mask_t] = h_t[1]
else: # LSTM
hidden_in = (
(h_node[:, mask_t], c_node[:, mask_t]),
(h_edge[:, mask_t], c_edge[:, mask_t]),
)
delta_x, (h_node_out, h_edge_out) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch[mask_t],
hidden=hidden_in,
)
h_node[:, mask_t] = h_node_out[0]
c_node[:, mask_t] = h_node_out[1]
h_edge[:, mask_t] = h_edge_out[0]
c_edge[:, mask_t] = h_edge_out[1]
vel = delta_x[:, self.pos_index]
pos = batch.x[mask_t, t][:, self.pos_index] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features[mask_t]], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save predictions and targets
y_hat[t, mask_t, :] = predicted_graph
# y_target[t, mask_t, :] = torch.cat(
# [batch.x[mask_t, t + 1, :], batch.type[mask_t]], dim=1
# )
y_target[t, mask_t, :] = batch.x[mask_t, t + 1, :]
######################
# Future #
######################
for t in range(11, (prediction_horizon - 1)):
######################
# Graph construction #
######################
x_t = predicted_graph.clone()
# Construct edges
edge_index = torch_geometric.nn.radius_graph(
x=x_t[:, :2],
r=self.min_dist,
batch=batch.batch,
loop=self.self_loop,
max_num_neighbors=self.n_neighbours,
flow="source_to_target",
)
# Remove duplicates and sort
edge_index = torch_geometric.utils.coalesce(
edge_index, num_nodes=x_t.shape[0]
)
# Create edge_attr if specified
if self.edge_weight:
# Encode distance between nodes as edge_attr
row, col = edge_index
edge_attr = (x_t[row, :2] - x_t[col, :2]).norm(dim=-1).unsqueeze(1)
edge_attr = edge_attr.type_as(batch.x)
######################
# Predictions 2/2 #
######################
# Normalise input graph
if self.normalise:
# Center node positions
x_t[:, self.pos_index] -= batch.loc[batch.batch][:, self.pos_index]
# Scale all features (except yaws) with global scaler
x_t[:, self.norm_index] /= self.global_scale
if edge_attr is not None:
# Scale edge attributes
edge_attr /= self.global_scale
# Obtain normalised predicted delta dynamics
if self.rnn_type == "GRU":
delta_x, (h_node, h_edge) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=(h_node, h_edge),
)
else:
delta_x, ((h_node, c_node), (h_edge, c_edge)) = self.model(
x=x_t,
edge_index=edge_index,
edge_attr=edge_attr,
batch=batch.batch,
hidden=((h_node, c_node), (h_edge, c_edge)),
)
vel = delta_x[:, self.pos_index]
pos = predicted_graph[:, self.pos_index] + 0.1 * vel
predicted_graph = torch.cat([pos, vel, static_features], dim=-1)
predicted_graph = predicted_graph.type_as(batch.x)
# Save prediction alongside true value (next time step state)
y_hat[t, :, :] = predicted_graph
# y_target[t, :, :] = torch.cat([batch.x[:, t + 1, :], batch.type], dim=1)
y_target[t, :, :] = batch.x[:, t + 1, :]
return y_hat, y_target, mask
def configure_optimizers(self):
return torch.optim.Adam(
self.parameters(), lr=self.lr, weight_decay=self.weight_decay
)
class ConstantPhysicalBaselineModule(pl.LightningModule):
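    # Constant-velocity baseline: every agent keeps its last observed velocity
    # for the whole rollout, so no training is required (training_step is a no-op).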
def __init__(self, out_features: int = 6, prediction_horizon: int = 91, **kwargs):
super().__init__()
self.val_ade_loss = torchmetrics.MeanSquaredError()
self.val_fde_loss = torchmetrics.MeanSquaredError()
self.val_yaw_loss = torchmetrics.MeanSquaredError()
self.val_vel_loss = torchmetrics.MeanSquaredError()
self.val_fde_ttp_loss = torchmetrics.MeanSquaredError()
self.val_ade_ttp_loss = torchmetrics.MeanSquaredError()
self.prediction_horizon = prediction_horizon
self.out_features = out_features
self.save_hyperparameters()
def training_step(self, batch: Batch, batch_idx: int):
pass
def validation_step(self, batch: Batch, batch_idx: int):
######################
# Initialisation #
######################
# Validate on sequential dataset. First 11 observations are used to prime the model.
# Loss is computed on remaining 80 samples using rollout.
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
# Update input using prediction horizon
batch.x = batch.x[:, : self.prediction_horizon]
# Limit to x, y, x_vel, y_vel
batch.x = batch.x[:, :, [0, 1, 3, 4, 10]]
# Update mask
mask = batch.x[:, :, -1].bool()
# Allocate target/prediction tensors
n_nodes = batch.num_nodes
y_hat = torch.zeros((self.prediction_horizon - 11, n_nodes, self.out_features))
y_target = torch.zeros(
(self.prediction_horizon - 11, n_nodes, self.out_features)
)
# Remove valid flag from features
batch.x = batch.x[:, :, :-1]
# Find valid agents at time t=11
initial_mask = mask[:, 10]
# Extract final dynamic states to use for predictions
last_pos = batch.x[initial_mask, 10][:, [0, 1]]
last_vel = batch.x[initial_mask, 10][:, [2, 3]]
# Constant change in positions
delta_pos = last_vel * 0.1
# First updated position
predicted_pos = last_pos + delta_pos
predicted_graph = torch.cat([predicted_pos, last_vel], dim=1)
# Save first prediction and target
y_hat[0, :, :] = predicted_graph[:, : self.out_features]
y_target[0, :, :] = batch.x[:, 11, : self.out_features]
for t in range(11, self.prediction_horizon - 1):
predicted_pos += delta_pos
predicted_graph = torch.cat([predicted_pos, last_vel], dim=1)
y_hat[t - 10, :, :] = predicted_graph[:, : self.out_features]
y_target[t - 10, :, :] = batch.x[:, t + 1, : self.out_features]
# Extract loss mask
fde_mask = mask[:, -1]
val_mask = mask[:, 11:].permute(1, 0)
# Compute and log loss
fde_loss = self.val_fde_loss(
y_hat[-1, fde_mask][:, [0, 1]], y_target[-1, fde_mask][:, [0, 1]]
)
ade_loss = self.val_ade_loss(
y_hat[:, :, [0, 1]][val_mask], y_target[:, :, [0, 1]][val_mask]
)
vel_loss = self.val_vel_loss(
y_hat[:, :, [2, 3]][val_mask], y_target[:, :, [2, 3]][val_mask]
)
# Compute losses on "tracks_to_predict"
fde_ttp_mask = torch.logical_and(fde_mask, batch.tracks_to_predict)
fde_ttp_loss = self.val_fde_ttp_loss(
y_hat[-1, fde_ttp_mask][:, [0, 1]], y_target[-1, fde_ttp_mask][:, [0, 1]]
)
ade_ttp_mask = torch.logical_and(
val_mask,
batch.tracks_to_predict.expand(
(self.prediction_horizon - 11, mask.size(0))
),
)
ade_ttp_loss = self.val_ade_loss(
y_hat[:, :, [0, 1]][ade_ttp_mask], y_target[:, :, [0, 1]][ade_ttp_mask]
)
######################
# Logging #
######################
self.log("val_ade_loss", ade_loss)
self.log("val_fde_loss", fde_loss)
self.log("val_vel_loss", vel_loss)
loss = ade_loss
self.log("val_total_loss", loss)
self.log("val_fde_ttp_loss", fde_ttp_loss)
self.log("val_ade_ttp_loss", ade_ttp_loss)
return loss
def predict_step(self, batch, batch_idx=None):
######################
# Initialisation #
######################
# Determine valid initialisations at t=11
mask = batch.x[:, :, -1]
valid_mask = mask[:, 10] > 0
# Discard non-valid nodes as no initial trajectories will be known
batch.x = batch.x[valid_mask]
batch.batch = batch.batch[valid_mask]
batch.tracks_to_predict = batch.tracks_to_predict[valid_mask]
batch.type = batch.type[valid_mask]
# CARS
type_mask = batch.type[:, 1] == 1
batch.x = batch.x[type_mask]
batch.batch = batch.batch[type_mask]
batch.tracks_to_predict = batch.tracks_to_predict[type_mask]
batch.type = batch.type[type_mask]
# Update input using prediction horizon
batch.x = batch.x[:, : self.prediction_horizon]
# Limit to x, y, x_vel, y_vel
batch.x = batch.x[:, :, [0, 1, 3, 4, 10]]
# Update mask
mask = batch.x[:, :, -1].bool()
# Allocate target/prediction tensors
n_nodes = batch.num_nodes
y_hat = torch.zeros((self.prediction_horizon - 1, n_nodes, 4))
# Remove valid flag from features
batch.x = batch.x[:, :, :-1]
# Fill in targets
y_target = batch.x[:, 1:]
y_target = y_target.permute(1, 0, 2)
for t in range(11):
mask_t = mask[:, t]
last_pos = batch.x[mask_t, t][:, [0, 1]]
last_vel = batch.x[mask_t, t][:, [2, 3]]
delta_pos = last_vel * 0.1
predicted_pos = last_pos + delta_pos
predicted_graph = torch.cat([predicted_pos, last_vel], dim=-1)
y_hat[t, mask_t, :] = predicted_graph
for t in range(11, 90):
last_pos = predicted_pos
predicted_pos = last_pos + delta_pos
predicted_graph = torch.cat([predicted_pos, last_vel], dim=-1)
y_hat[t, :, :] = predicted_graph
return y_hat, y_target, mask
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=1e-4)
@hydra.main(config_path="../../../configs/waymo/", config_name="config")
def main(config):
# Print configuration for online monitoring
print(OmegaConf.to_yaml(config))
# Save complete yaml file for logging and reproducibility
log_dir = f"logs/{config.logger.project}/{config.logger.version}"
os.makedirs(log_dir, exist_ok=True)
yaml_path = f"{log_dir}/{config.logger.version}.yaml"
OmegaConf.save(config, f=yaml_path)
# Seed for reproducibility
seed_everything(config["misc"]["seed"], workers=True)
# Load data, model, and regressor
datamodule = eval(config["misc"]["dm_type"])(**config["datamodule"])
# Define model
if config["misc"]["model_type"] != "ConstantModel":
model_dict = config["model"]
model_type = config["misc"]["model_type"]
else:
model_dict, model_type = None, None
# Define LightningModule
    regressor = eval(config["misc"]["regressor_type"])(
        model_type=model_type,
        model_dict=dict(model_dict) if model_dict is not None else None,
        **config["regressor"],
    )
# Setup logging (using saved yaml file)
wandb_logger = WandbLogger(
entity="petergroth",
config=OmegaConf.to_container(config, resolve=True),
**config["logger"],
)
wandb_logger.watch(regressor, log_freq=config["misc"]["log_freq"], log_graph=False)
# Add default dir for logs
# Setup callbacks
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filename=config["logger"]["version"], monitor="val_total_loss", save_last=True
)
# Create trainer, fit, and validate
trainer = pl.Trainer(
logger=wandb_logger, **config["trainer"], callbacks=[checkpoint_callback]
)
if config["misc"]["train"]:
trainer.fit(model=regressor, datamodule=datamodule)
trainer.validate(regressor, datamodule=datamodule)
if __name__ == "__main__":
main()
| 37.308475 | 111 | 0.519735 | 6,643 | 55,030 | 4.029956 | 0.051633 | 0.035411 | 0.022188 | 0.024205 | 0.842964 | 0.819245 | 0.796272 | 0.788876 | 0.780359 | 0.773748 | 0 | 0.014675 | 0.347429 | 55,030 | 1,474 | 112 | 37.333786 | 0.730807 | 0.13131 | 0 | 0.705882 | 0 | 0 | 0.01697 | 0.002452 | 0 | 0 | 0 | 0 | 0.003151 | 1 | 0.012605 | false | 0.00105 | 0.017857 | 0.002101 | 0.040966 | 0.00105 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
224adb8fdfdda68cd367a7533e9e00f127a06467 | 3,015 | py | Python | src/converter.py | vdragan1993/serbian-document-network | b9efa3ca47dd5d1d93112bd38a9c54fb9cec79b9 | [
"Apache-2.0"
] | 1 | 2017-11-16T19:26:54.000Z | 2017-11-16T19:26:54.000Z | src/converter.py | vdragan1993/serbian-document-network | b9efa3ca47dd5d1d93112bd38a9c54fb9cec79b9 | [
"Apache-2.0"
] | null | null | null | src/converter.py | vdragan1993/serbian-document-network | b9efa3ca47dd5d1d93112bd38a9c54fb9cec79b9 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
__author__ = "Dragan Vidakovic"
def convert_to_latin(input_text):
"""
    Convert Serbian Cyrillic text to lowercase ASCII Latin.
    Both uppercase and lowercase Cyrillic letters map to lowercase Latin,
    with diacritics folded to their base letters (e.g. ж -> z, ш -> s, ч -> c).
    :param input_text: Cyrillic text
    :return: lowercase ASCII Latin text
"""
# caps
input_text = input_text.replace("А", "a")
input_text = input_text.replace("Б", "b")
input_text = input_text.replace("В", "v")
input_text = input_text.replace("Г", "g")
input_text = input_text.replace("Д", "d")
input_text = input_text.replace("Ђ", "dj")
input_text = input_text.replace("Е", "e")
input_text = input_text.replace("Ж", "z")
input_text = input_text.replace("З", "z")
input_text = input_text.replace("И", "i")
input_text = input_text.replace("Ј", "j")
input_text = input_text.replace("К", "k")
input_text = input_text.replace("Л", "l")
input_text = input_text.replace("Љ", "lj")
input_text = input_text.replace("М", "m")
input_text = input_text.replace("Н", "n")
input_text = input_text.replace("Њ", "nj")
input_text = input_text.replace("О", "o")
input_text = input_text.replace("П", "p")
input_text = input_text.replace("Р", "r")
input_text = input_text.replace("С", "s")
input_text = input_text.replace("Т", "t")
input_text = input_text.replace("Ћ", "c")
input_text = input_text.replace("У", "u")
input_text = input_text.replace("Ф", "f")
input_text = input_text.replace("Х", "h")
input_text = input_text.replace("Ц", "c")
input_text = input_text.replace("Ч", "c")
input_text = input_text.replace("Џ", "dz")
input_text = input_text.replace("Ш", "s")
# non caps
input_text = input_text.replace("а", "a")
input_text = input_text.replace("б", "b")
input_text = input_text.replace("в", "v")
input_text = input_text.replace("г", "g")
input_text = input_text.replace("д", "d")
input_text = input_text.replace("ђ", "dj")
input_text = input_text.replace("е", "e")
input_text = input_text.replace("ж", "z")
input_text = input_text.replace("з", "z")
input_text = input_text.replace("и", "i")
input_text = input_text.replace("ј", "j")
input_text = input_text.replace("к", "k")
input_text = input_text.replace("л", "l")
input_text = input_text.replace("љ", "lj")
input_text = input_text.replace("м", "m")
input_text = input_text.replace("н", "n")
input_text = input_text.replace("њ", "nj")
input_text = input_text.replace("о", "o")
input_text = input_text.replace("п", "p")
input_text = input_text.replace("р", "r")
input_text = input_text.replace("с", "s")
input_text = input_text.replace("т", "t")
input_text = input_text.replace("ћ", "c")
input_text = input_text.replace("у", "u")
input_text = input_text.replace("ф", "f")
input_text = input_text.replace("х", "h")
input_text = input_text.replace("ц", "c")
input_text = input_text.replace("ч", "c")
input_text = input_text.replace("џ", "dz")
input_text = input_text.replace("ш", "s")
return input_text
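# --- Alternative sketch (not part of the original module's API) --------------
# The chain of .replace() calls above makes ~60 full passes over the text. A
# translation table built once with str.maketrans does the same folding in a
# single pass, with identical behaviour: both cases map to lowercase ASCII
# Latin. The name convert_to_latin_fast is illustrative only.
_PAIRS = [
    ("а", "a"), ("б", "b"), ("в", "v"), ("г", "g"), ("д", "d"), ("ђ", "dj"),
    ("е", "e"), ("ж", "z"), ("з", "z"), ("и", "i"), ("ј", "j"), ("к", "k"),
    ("л", "l"), ("љ", "lj"), ("м", "m"), ("н", "n"), ("њ", "nj"), ("о", "o"),
    ("п", "p"), ("р", "r"), ("с", "s"), ("т", "t"), ("ћ", "c"), ("у", "u"),
    ("ф", "f"), ("х", "h"), ("ц", "c"), ("ч", "c"), ("џ", "dz"), ("ш", "s"),
]
_TABLE = str.maketrans({**{c: l for c, l in _PAIRS},
                        **{c.upper(): l for c, l in _PAIRS}})


def convert_to_latin_fast(input_text):
    """Single-pass equivalent of convert_to_latin."""
    return input_text.translate(_TABLE)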
| 40.2 | 46 | 0.633499 | 451 | 3,015 | 3.949002 | 0.166297 | 0.621561 | 0.471645 | 0.606401 | 0.918585 | 0.918585 | 0.918585 | 0.918585 | 0.918585 | 0.918585 | 0 | 0.000405 | 0.180431 | 3,015 | 74 | 47 | 40.743243 | 0.720356 | 0.037811 | 0 | 0 | 0 | 0 | 0.050087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
2275c02e8489ed31b60752587b53c62e012a471e | 35 | py | Python | python_package_framework/hello_world.py | John-smith-889/python-package-framework | b1d77b95234bb9aaf2f881fdd8fc9e2e45aad9a5 | [
"MIT"
] | null | null | null | python_package_framework/hello_world.py | John-smith-889/python-package-framework | b1d77b95234bb9aaf2f881fdd8fc9e2e45aad9a5 | [
"MIT"
] | null | null | null | python_package_framework/hello_world.py | John-smith-889/python-package-framework | b1d77b95234bb9aaf2f881fdd8fc9e2e45aad9a5 | [
"MIT"
] | null | null | null | def hello_world():
return "hello"
| 11.666667 | 18 | 0.714286 | 5 | 35 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 2 | 19 | 17.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
97dcd4d3ffc4aafc111dfaa9d4d0bbb780536857 | 5,422 | py | Python | tests/test_model.py | openclimatefix/perceiver-pytorch | 62c314b302aec95571796684732b2bcd0a81cc75 | [
"MIT"
] | 7 | 2021-07-30T22:06:26.000Z | 2022-02-24T09:39:02.000Z | tests/test_model.py | openclimatefix/perceiver-pytorch | 62c314b302aec95571796684732b2bcd0a81cc75 | [
"MIT"
] | 16 | 2021-07-27T09:58:03.000Z | 2021-12-16T12:26:53.000Z | tests/test_model.py | openclimatefix/perceiver-pytorch | 62c314b302aec95571796684732b2bcd0a81cc75 | [
"MIT"
] | null | null | null | import torch
from einops import rearrange
from perceiver_pytorch.multi_perceiver_pytorch import MultiPerceiver
from perceiver_pytorch.modalities import InputModality
from perceiver_pytorch.decoders import ImageDecoder
def test_multiperceiver_creation():
# Timeseries input
input_size = 64
max_frequency = 16.0
video_modality = InputModality(
name="timeseries",
input_channels=12,
input_axis=3, # number of axes, 3 for video
num_freq_bands=input_size, # number of freq bands, with original value (2 * K + 1)
max_freq=max_frequency, # maximum frequency, hyperparameter depending on how fine the data is, should be Nyquist frequency (i.e. 112 for 224 input image)
sin_only=False, # Whether if sine only for Fourier encoding, TODO test more
fourier_encode=True, # Whether to encode position with Fourier features
)
# Use image modality for latlon, elevation, other base data?
image_modality = InputModality(
name="base",
input_channels=4,
input_axis=2, # number of axes, 2 for images
num_freq_bands=input_size, # number of freq bands, with original value (2 * K + 1)
max_freq=max_frequency, # maximum frequency, hyperparameter depending on how fine the data is
sin_only=False,
fourier_encode=True,
)
# Sort audio for timestep one-hot encode? Or include under other modality?
timestep_modality = InputModality(
name="forecast_time",
input_channels=1, # number of channels for mono audio
input_axis=1, # number of axes, 2 for images
num_freq_bands=24, # number of freq bands, with original value (2 * K + 1)
max_freq=16.0, # maximum frequency, hyperparameter depending on how fine the data is
sin_only=False,
fourier_encode=True,
)
model = MultiPerceiver(
modalities=[video_modality, image_modality, timestep_modality],
queries_dim=input_size,
depth=6,
forecast_steps=12,
output_shape=input_size,
)
x = {
"timeseries": torch.randn((2, 6, input_size, input_size, 12)),
"base": torch.randn((2, input_size, input_size, 4)),
"forecast_time": torch.randn(2, 24, 1),
}
query = torch.randn((2, input_size * 12, input_size))
model.eval()
with torch.no_grad():
out = model(x, queries=query)
out = rearrange(
out, "b h (w c) -> b c h w", c=12
)
# MetNet creates predictions for the center 1/4th
assert out.size() == (
2,
12,
12 * input_size,
input_size,
)
assert not torch.isnan(out).any(), "Output included NaNs"
def test_multiperceiver_decoder():
# Timeseries input
input_size = 64
max_frequency = 16.0
video_modality = InputModality(
name="timeseries",
input_channels=12,
input_axis=3, # number of axes, 3 for video
num_freq_bands=input_size, # number of freq bands, with original value (2 * K + 1)
max_freq=max_frequency, # maximum frequency, hyperparameter depending on how fine the data is, should be Nyquist frequency (i.e. 112 for 224 input image)
sin_only=False, # Whether if sine only for Fourier encoding, TODO test more
fourier_encode=True, # Whether to encode position with Fourier features
)
# Use image modality for latlon, elevation, other base data?
image_modality = InputModality(
name="base",
input_channels=4,
input_axis=2, # number of axes, 2 for images
num_freq_bands=input_size, # number of freq bands, with original value (2 * K + 1)
max_freq=max_frequency, # maximum frequency, hyperparameter depending on how fine the data is
sin_only=False,
fourier_encode=True,
)
# Sort audio for timestep one-hot encode? Or include under other modality?
timestep_modality = InputModality(
name="forecast_time",
input_channels=1, # number of channels for mono audio
input_axis=1, # number of axes, 2 for images
num_freq_bands=24, # number of freq bands, with original value (2 * K + 1)
max_freq=16.0, # maximum frequency, hyperparameter depending on how fine the data is
sin_only=False,
fourier_encode=True,
)
model = MultiPerceiver(
modalities=[video_modality, image_modality, timestep_modality],
queries_dim=input_size,
depth=6,
forecast_steps=12,
output_shape=(24,input_size,input_size),
)
x = {
"timeseries": torch.randn((2, 6, input_size, input_size, 12)),
"base": torch.randn((2, input_size, input_size, 4)),
"forecast_time": torch.randn(2, 24, 1),
}
query = torch.randn((2, input_size * 12, input_size))
model.eval()
decoder = ImageDecoder(postprocess_type='conv1x1', input_channels=768, output_channels=12, spatial_upsample=1, temporal_upsample=1)
decoder.eval()
with torch.no_grad():
out = model(x, queries=query)
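# Unflatten the decoder output's combined (time, width, height) axis into
# separate time and spatial axes before handing it to the image decoder.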
out = rearrange(
out, "b c (t w h) -> b t c h w", t=24, h=input_size, w=input_size
)
out = decoder(out)
# MetNet creates predictions for the center 1/4th
assert out.size() == (
2,
24,
12,
input_size,
input_size,
)
assert not torch.isnan(out).any(), "Output included NaNs"
| 39.289855 | 162 | 0.650129 | 725 | 5,422 | 4.703448 | 0.184828 | 0.07654 | 0.025806 | 0.03695 | 0.873314 | 0.873314 | 0.873314 | 0.873314 | 0.873314 | 0.873314 | 0 | 0.03125 | 0.262265 | 5,422 | 137 | 163 | 39.576642 | 0.82125 | 0.313353 | 0 | 0.752066 | 0 | 0 | 0.053973 | 0 | 0 | 0 | 0 | 0.007299 | 0.033058 | 1 | 0.016529 | false | 0 | 0.041322 | 0 | 0.057851 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
97e7c6e60a7019e1ac9ae4ca986af3189e9691d4 | 9,521 | py | Python | tests/test_bst_traversals.py | rgilbert1/bst | ba226a76e8385b546e94234d7998c90c537a4cf2 | [
"MIT"
] | 1 | 2020-01-21T02:33:32.000Z | 2020-01-21T02:33:32.000Z | tests/test_bst_traversals.py | rgilbert1/bst | ba226a76e8385b546e94234d7998c90c537a4cf2 | [
"MIT"
] | null | null | null | tests/test_bst_traversals.py | rgilbert1/bst | ba226a76e8385b546e94234d7998c90c537a4cf2 | [
"MIT"
] | 1 | 2020-08-18T20:28:17.000Z | 2020-08-18T20:28:17.000Z | import unittest
from bst import Node
from .test_bst_base import TestBSTBase
class TestBSTLevelOrderTraversal(TestBSTBase):
def test_null(self):
result = self.subject.traverse(mode='level_order')
self.assertIsNone(result)
def test_root(self):
self.subject.root = Node(100)
result = self.subject.traverse(mode='level_order')
self.assertEqual([100], result)
def test_left(self):
self.subject.root = Node(80)
self.subject.root.left = Node(30)
result = self.subject.traverse(mode='level_order')
self.assertEqual([80, 30], result)
def test_right(self):
self.subject.root = Node(80)
self.subject.root.right = Node(100)
result = self.subject.traverse(mode='level_order')
self.assertEqual([80, 100], result)
def test_left_subtree(self):
self.subject.root = Node(5)
self.subject.root.left = Node(4)
self.subject.root.left.left = Node(3)
result = self.subject.traverse(mode='level_order')
self.assertEqual([5, 4, 3], result)
def test_right_subtree(self):
self.subject.root = Node(3)
self.subject.root.right = Node(4)
self.subject.root.right.right = Node(5)
result = self.subject.traverse(mode='level_order')
self.assertEqual([3, 4, 5], result)
def test_uneven_tree(self):
self.subject.root = Node(10)
self.subject.root.left = Node(8)
self.subject.root.right = Node(12)
self.subject.root.left.right = Node(9)
self.subject.root.right.left = Node(11)
result = self.subject.traverse(mode='level_order')
self.assertEqual([10, 8, 12, 9, 11], result)
def test_full_tree(self):
self.setup_full_tree()
result = self.subject.traverse(mode='level_order')
self.assertEqual([25, 15, 50, 10, 22, 35, 70, 4, 12, 18, 24, 31, 44, 66, 90], result)
def test_medium_tree(self):
for i in range(-400, 400):
self.subject.insert(i)
result = self.subject.traverse(mode='level_order')
self.assertEqual(len(result), 800)
@unittest.skip('This test case causes maximum recursion depth error.')
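# Note: inserting 200 million sorted keys one by one degenerates the BST into a
# linked list, so the recursive insert/traversal would exceed CPython's default
# recursion limit (roughly 1000 frames) long before the traversal completes.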
def test_large_tree(self):
for i in range(-100_000_000, 100_000_000):
self.subject.insert(i)
result = self.subject.traverse(mode='level_order')
self.assertEqual(len(result), 200_000_000)
class TestBSTInorderTraversal(TestBSTBase):
def test_null(self):
result = self.subject.traverse(mode='inorder')
self.assertIsNone(result)
def test_root(self):
self.subject.root = Node(100)
result = self.subject.traverse(mode='inorder')
self.assertEqual([100], result)
def test_left(self):
self.subject.root = Node(80)
self.subject.root.left = Node(30)
result = self.subject.traverse(mode='inorder')
self.assertEqual([30, 80], result)
def test_right(self):
self.subject.root = Node(80)
self.subject.root.right = Node(100)
result = self.subject.traverse(mode='inorder')
self.assertEqual([80, 100], result)
def test_left_subtree(self):
self.subject.root = Node(5)
self.subject.root.left = Node(4)
self.subject.root.left.left = Node(3)
result = self.subject.traverse(mode='inorder')
self.assertEqual([3, 4, 5], result)
def test_right_subtree(self):
self.subject.root = Node(3)
self.subject.root.right = Node(4)
self.subject.root.right.right = Node(5)
result = self.subject.traverse(mode='inorder')
self.assertEqual([3, 4, 5], result)
def test_uneven_tree(self):
self.subject.root = Node(10)
self.subject.root.left = Node(8)
self.subject.root.right = Node(12)
self.subject.root.left.right = Node(9)
self.subject.root.right.left = Node(11)
result = self.subject.traverse(mode='inorder')
self.assertEqual([8, 9, 10, 11, 12], result)
def test_full_tree(self):
self.setup_full_tree()
result = self.subject.traverse(mode='inorder')
self.assertEqual([4, 10, 12, 15, 18, 22, 24, 25, 31, 35, 44, 50, 66, 70, 90], result)
def test_medium_tree(self):
for i in range(-400, 400):
self.subject.insert(i)
result = self.subject.traverse(mode='inorder')
self.assertEqual(len(result), 800)
@unittest.skip('This test case causes maximum recursion depth error.')
def test_large_tree(self):
for i in range(-100_000_000, 100_000_000):
self.subject.insert(i)
result = self.subject.traverse(mode='inorder')
self.assertEqual(len(result), 200_000_000)
class TestBSTPreorderTraversal(TestBSTBase):
def test_null(self):
result = self.subject.traverse(mode='preorder')
self.assertIsNone(result)
def test_root(self):
self.subject.root = Node(100)
result = self.subject.traverse(mode='preorder')
self.assertEqual([100], result)
def test_left(self):
self.subject.root = Node(80)
self.subject.root.left = Node(30)
result = self.subject.traverse(mode='preorder')
self.assertEqual([80, 30], result)
def test_right(self):
self.subject.root = Node(80)
self.subject.root.right = Node(100)
result = self.subject.traverse(mode='preorder')
self.assertEqual([80, 100], result)
def test_left_subtree(self):
self.subject.root = Node(5)
self.subject.root.left = Node(4)
self.subject.root.left.left = Node(3)
result = self.subject.traverse(mode='preorder')
self.assertEqual([5, 4, 3], result)
def test_right_subtree(self):
self.subject.root = Node(3)
self.subject.root.right = Node(4)
self.subject.root.right.right = Node(5)
result = self.subject.traverse(mode='preorder')
self.assertEqual([3, 4, 5], result)
def test_uneven_tree(self):
self.subject.root = Node(10)
self.subject.root.left = Node(8)
self.subject.root.right = Node(12)
self.subject.root.left.right = Node(9)
self.subject.root.right.left = Node(11)
result = self.subject.traverse(mode='preorder')
self.assertEqual([10, 8, 9, 12, 11], result)
def test_full_tree(self):
self.setup_full_tree()
result = self.subject.traverse(mode='preorder')
self.assertEqual([25, 15, 10, 4, 12, 22, 18, 24, 50, 35, 31, 44, 70, 66, 90], result)
def test_medium_tree(self):
for i in range(-400, 400):
self.subject.insert(i)
result = self.subject.traverse(mode='preorder')
self.assertEqual(len(result), 800)
@unittest.skip('This test case causes maximum recursion depth error.')
def test_large_tree(self):
for i in range(-100_000_000, 100_000_000):
self.subject.insert(i)
result = self.subject.traverse(mode='preorder')
self.assertEqual(len(result), 200_000_000)
class TestBSTPostorderTraversal(TestBSTBase):
def test_null(self):
result = self.subject.traverse(mode='postorder')
self.assertIsNone(result)
def test_root(self):
self.subject.root = Node(100)
result = self.subject.traverse(mode='postorder')
self.assertEqual([100], result)
def test_left(self):
self.subject.root = Node(80)
self.subject.root.left = Node(30)
result = self.subject.traverse(mode='postorder')
self.assertEqual([30, 80], result)
def test_right(self):
self.subject.root = Node(80)
self.subject.root.right = Node(100)
result = self.subject.traverse(mode='postorder')
self.assertEqual([100, 80], result)
def test_left_subtree(self):
self.subject.root = Node(5)
self.subject.root.left = Node(4)
self.subject.root.left.left = Node(3)
result = self.subject.traverse(mode='postorder')
self.assertEqual([3, 4, 5], result)
def test_right_subtree(self):
self.subject.root = Node(3)
self.subject.root.right = Node(4)
self.subject.root.right.right = Node(5)
result = self.subject.traverse(mode='postorder')
self.assertEqual([5, 4, 3], result)
def test_uneven_tree(self):
self.subject.root = Node(10)
self.subject.root.left = Node(8)
self.subject.root.right = Node(12)
self.subject.root.left.right = Node(9)
self.subject.root.right.left = Node(11)
result = self.subject.traverse(mode='postorder')
self.assertEqual([9, 8, 11, 12, 10], result)
def test_full_tree(self):
self.setup_full_tree()
result = self.subject.traverse(mode='postorder')
self.assertEqual([4, 12, 10, 18, 24, 22, 15, 31, 44, 35, 66, 90, 70, 50, 25], result)
def test_medium_tree(self):
for i in range(-400, 400):
self.subject.insert(i)
result = self.subject.traverse(mode='postorder')
self.assertEqual(len(result), 800)
@unittest.skip('This test case causes maximum recursion depth error.')
def test_large_tree(self):
for i in range(-100_000_000, 100_000_000):
self.subject.insert(i)
result = self.subject.traverse(mode='postorder')
self.assertEqual(len(result), 200_000_000)
if __name__ == '__main__':
unittest.main()
| 28.33631 | 93 | 0.623359 | 1,259 | 9,521 | 4.621922 | 0.065131 | 0.21172 | 0.164977 | 0.171851 | 0.94432 | 0.94432 | 0.94432 | 0.938821 | 0.894999 | 0.881595 | 0 | 0.064373 | 0.242937 | 9,521 | 335 | 94 | 28.420896 | 0.742925 | 0 | 0 | 0.917051 | 0 | 0 | 0.059448 | 0 | 0 | 0 | 0 | 0 | 0.184332 | 1 | 0.184332 | false | 0 | 0.013825 | 0 | 0.21659 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
3f08ce58530adf2961b4880817922420f04f5dd0 | 14,686 | py | Python | tests/functional/basic/db/test_19.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | tests/functional/basic/db/test_19.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | tests/functional/basic/db/test_19.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | #coding:utf-8
#
# id: functional.basic.db.19
# title: New DB - RDB$PROCEDURE_PARAMETERS
# description:
# Check for correct content of RDB$PROCEDURE_PARAMETERS in a new database.
# Checked on:
# 2.5.9.27126: OK, 0.485s.
# 3.0.5.33086: OK, 1.000s.
# 4.0.0.1378: OK, 5.078s.
#
# tracker_id:
# min_versions: []
# versions: 3.0, 4.0
# qmid: functional.basic.db.db_19
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = []
init_script_1 = """"""
db_1 = db_factory(sql_dialect=3, init=init_script_1)
test_script_1 = """
set list on;
set count on;
select *
from rdb$procedure_parameters
order by rdb$procedure_name,rdb$parameter_name,rdb$parameter_number;
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stdout_1 = """
Records affected: 0
"""
@pytest.mark.version('>=3.0,<4.0')
def test_1(act_1: Action):
act_1.expected_stdout = expected_stdout_1
act_1.execute()
assert act_1.clean_expected_stdout == act_1.clean_stdout
# version: 4.0
# resources: None
substitutions_2 = []
init_script_2 = """"""
db_2 = db_factory(sql_dialect=3, init=init_script_2)
test_script_2 = """
set list on;
set count on;
select *
from rdb$procedure_parameters
order by rdb$procedure_name,rdb$parameter_name,rdb$parameter_number;
"""
act_2 = isql_act('db_2', test_script_2, substitutions=substitutions_2)
expected_stdout_2 = """
RDB$PARAMETER_NAME RDB$DST_OFFSET
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 3
RDB$PARAMETER_TYPE 1
RDB$FIELD_SOURCE RDB$TIME_ZONE_OFFSET
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$EFFECTIVE_OFFSET
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 4
RDB$PARAMETER_TYPE 1
RDB$FIELD_SOURCE RDB$TIME_ZONE_OFFSET
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$END_TIMESTAMP
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 1
RDB$PARAMETER_TYPE 1
RDB$FIELD_SOURCE RDB$TIMESTAMP_TZ
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$FROM_TIMESTAMP
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 1
RDB$PARAMETER_TYPE 0
RDB$FIELD_SOURCE RDB$TIMESTAMP_TZ
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$START_TIMESTAMP
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 0
RDB$PARAMETER_TYPE 1
RDB$FIELD_SOURCE RDB$TIMESTAMP_TZ
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$TIME_ZONE_NAME
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 0
RDB$PARAMETER_TYPE 0
RDB$FIELD_SOURCE RDB$TIME_ZONE_NAME
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$TO_TIMESTAMP
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 2
RDB$PARAMETER_TYPE 0
RDB$FIELD_SOURCE RDB$TIMESTAMP_TZ
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
RDB$PARAMETER_NAME RDB$ZONE_OFFSET
RDB$PROCEDURE_NAME TRANSITIONS
RDB$PARAMETER_NUMBER 2
RDB$PARAMETER_TYPE 1
RDB$FIELD_SOURCE RDB$TIME_ZONE_OFFSET
RDB$DESCRIPTION <null>
RDB$SYSTEM_FLAG 1
RDB$DEFAULT_VALUE <null>
RDB$DEFAULT_SOURCE <null>
RDB$COLLATION_ID <null>
RDB$NULL_FLAG 1
RDB$PARAMETER_MECHANISM 0
RDB$FIELD_NAME <null>
RDB$RELATION_NAME <null>
RDB$PACKAGE_NAME RDB$TIME_ZONE_UTIL
Records affected: 8
"""
@pytest.mark.version('>=4.0')
def test_2(act_2: Action):
act_2.expected_stdout = expected_stdout_2
act_2.execute()
assert act_2.clean_expected_stdout == act_2.clean_stdout
| 70.94686 | 288 | 0.260724 | 801 | 14,686 | 4.500624 | 0.128589 | 0.093204 | 0.035506 | 0.052705 | 0.754785 | 0.750347 | 0.748128 | 0.748128 | 0.68932 | 0.68932 | 0 | 0.030769 | 0.707885 | 14,686 | 206 | 289 | 71.291262 | 0.809557 | 0.036974 | 0 | 0.76875 | 0 | 0 | 0.937053 | 0.024782 | 0 | 0 | 0 | 0 | 0.0125 | 1 | 0.0125 | false | 0 | 0.0125 | 0 | 0.025 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f0a9e8f9eba368b24b9341ef5fc2defe787fcd0 | 37,827 | py | Python | mrpy/discretization/HERK4_velocity_base.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | [
"BSD-3-Clause"
] | 2 | 2020-01-06T10:48:44.000Z | 2020-01-09T20:07:08.000Z | mrpy/discretization/HERK4_velocity_base.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | [
"BSD-3-Clause"
] | 1 | 2020-01-09T20:08:50.000Z | 2020-01-09T20:11:20.000Z | mrpy/discretization/HERK4_velocity_base.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import print_function, division
"""This temporal-modules contain the functions needed to comute the advancement in time
of the physical variables simulated. We need a specific temporal scheme to
advance a system of variables. Here, each scheme is implemented in a class. The
class is supposed to be instantiated as a "time-integrator" object in the main
module used to run the simulation. This instance then uses its procedure
attributes to advance the variables defined in the main module. All of the
spatial operations on the variables are devised via the spatial_discretization
operators, so that we have a data abstraction barrier between the procedures
designed here, and the specific data implementation of the discrete variables.
This is done to increase the modularity of this code: as long as we have a valid
spatial_discretization module, we can use this module to advance variables in
time.
Each scheme class inherits from the BaseScheme class. This class is initialized
for now with the velocity and the pressure, but may change if we need to add
more variables in our simulation. It then possesses the following instance
attributes:
- the three main linear spatial operators, divergence, gradient and
laplacian
- the non linear spatial operator for the advection
- a timestep dt
Creating these attributes at instantiation allows them to be computed once, at
the start, and reused for the whole simulation.
The BaseScheme class also has special methods that are generic, such as:
- a solve method that solves a linear system "Ax = b"
- a next-time method that advances the time of the simulation, based on the
current time and the timestep of the class
- a compute-initial-values method that computes the initial values of the
variables over the entire domain
- etc.
If we feel the need for a specific method while designing a new scheme class, we
ask whether other schemes would need this method. If the answer is yes then we
implement this method in the BaseScheme class, so that we only have to modify it
in a single place.
Each scheme class has special methods to implement its specific
time-advancement. The time-advancement is enforced by the method advance, which
each class must possess, but which is class-specific. This advance method should
act like a mutator: the variables are implemented as scalars in the main module,
and their local state, which is their array of values over every mesh of the
domain, is changed by the call to the advance method.
This module implements a 4-stage Half-Explicit Runge-Kutta (HERK4) scheme. It
inherits from the HERKScheme class defined in HERK_velocity_base.py.
"""
import sys, petsc4py
petsc4py.init(sys.argv)
import petsc4py.PETSc as petsc
import mpi4py.MPI as mpi
import numpy as np
import scipy.sparse as sp
from six.moves import range
import importlib
import math
from mrpy.mr_utils import mesh
from mrpy.mr_utils import op
import mrpy.discretization.spatial as sd
from mrpy.discretization.HERK_velocity_base import HERKScheme
import config as cfg
class HERK4Scheme(HERKScheme):
"""Base scheme for the implementation of 4-stage Half-Explicit Runge-Kutta
methods for the NS equations in 2D."""
def __init__(self, dimension=cfg.dimension, tree_velocity_x=None,
tree_velocity_y=None, tree_velocity_z=None, tree_pressure=None,
tree_vorticity=None, uniform=False,
st_flag_vx=False, st_flag_vy=False, st_flag_vz=False,
st_flag_vc=False, st_flag_s=False, low_mach=False):
HERKScheme.__init__(self, dimension=dimension,
tree_velocity_x=tree_velocity_x, tree_velocity_y=tree_velocity_y,
tree_velocity_z=tree_velocity_z, tree_pressure=tree_pressure,
tree_vorticity=tree_vorticity,
uniform=uniform, st_flag_vx=st_flag_vx, st_flag_vy=st_flag_vy,
st_flag_vz=st_flag_vz, st_flag_vc=st_flag_vc, st_flag_s=st_flag_s,
low_mach=low_mach)
#def __init__(self, dimension=cfg.dimension, tree_velocity_x=None,
# tree_velocity_y=None, tree_velocity_z=None, tree_pressure=None,
# tree_vorticity=None):
# if tree_vorticity is not None:
# HERKScheme.__init__(self, tree_velocity_x=tree_velocity_x,
# tree_velocity_y=tree_velocity_y,
# tree_pressure=tree_pressure, tree_vorticity=tree_vorticity)
# else:
# HERKScheme.__init__(self, tree_velocity_x=tree_velocity_x,
# tree_velocity_y=tree_velocity_y,
# tree_pressure=tree_pressure)
#def compute_A_coefs(self, B_coefs, C_coefs):
# """Computes the A_coefs of the ERK method given the B and C coefs.
# We use the formulas to obtain a 4th order scheme in 4 stages. They can
# be found in Hairer and Wanner, Solving Ordinary Differential Equations
# I.
# """
# b2 = B_coefs[0]
# b3 = B_coefs[1]
# b4 = B_coefs[2]
# c2 = C_coefs[0]
# c3 = C_coefs[1]
# c4 = C_coefs[2]
# self.A_coefs["a43"] = (b3*(1 - c3))/b4
# self.A_coefs["a32"] = (1./(b3*b4*c2*(c4 - c3)))*(b4*c4*c2*b2*(1. - c2) + \
# self.A_coefs["a43"]*c3*c4*b4*b4 - 1/8.*b4)
# self.A_coefs["a42"] = (1./(b3*b4*c2*(c4 - c3)))*(-b3*c3*c2*b2*(1. - c2) - \
# self.A_coefs["a43"]*c3*c4*b3*b4 + 1/8.*b3)
# self.A_coefs["a21"] = c2
# self.A_coefs["a31"] = c3 - self.A_coefs["a32"]
# self.A_coefs["a41"] = c4 - self.A_coefs["a43"] - self.A_coefs["a42"]
def advance(self, v_x=None, v_y=None, v_z=None, p=None, t_ini=0, nsp=None):
# needs an update to take into account a source term in the continuity
# equation
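# Each explicit stage below builds a provisional stage velocity, solves a
# pressure Poisson problem (pressure_divgrad) for the stage Lagrange
# multiplier, and projects the stage velocity back to a divergence-free field.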
st_rhs_12 = None
st_rhs_13 = None
st_rhs_14 = None
st_rhs_22 = None
st_rhs_23 = None
st_rhs_24 = None
if self.uniform: #v_x, v_y, etc are scalars, and we just advance them
if self.st_flag_vx:
mesh.listing_of_leaves(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c2"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_12 = sd.Scalar(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c3"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_13 = sd.Scalar(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c4"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_14 = sd.Scalar(self.st_tree_vx)
if self.st_flag_vy:
mesh.listing_of_leaves(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c2"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_22 = sd.Scalar(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c3"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_23 = sd.Scalar(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c4"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_24 = sd.Scalar(self.st_tree_vy)
g_11, g_21 = sd.Scalar(), sd.Scalar()
g_11.sc, g_21.sc = v_x.sc.copy(), v_y.sc.copy()
print("stage 1 done")
print("")
g_12 = sd.add_scalars(
v_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt,
sd.mul_num_scalar(self.A_coefs["a21"],
self.make_rhs_ode_x(g_11, g_21, st_rhs_12)))))
g_22 = sd.add_scalars(
v_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt,
sd.mul_num_scalar(self.A_coefs["a21"],
self.make_rhs_ode_y(g_11, g_21, st_rhs_22)))))
g_31 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_12, g_22,
self.A_coefs["a21"]))
g_12 = self.projection_velocity_x(g_12, g_31, self.A_coefs["a21"])
g_22 = self.projection_velocity_y(g_22, g_31, self.A_coefs["a21"])
print("stage 2 done")
print("")
g_13 = sd.add_scalars(
v_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a31"],
self.make_rhs_ade_x(g_11, g_21, g_31, st_rhs_13)),
sd.mul_num_scalar(self.A_coefs["a32"],
self.make_rhs_ode_x(g_12, g_22, st_rhs_13))))))
g_23 = sd.add_scalars(
v_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a31"],
self.make_rhs_ade_y(g_11, g_21, g_31, st_rhs_23)),
sd.mul_num_scalar(self.A_coefs["a32"],
self.make_rhs_ode_y(g_12, g_22, st_rhs_23))))))
g_32 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_13, g_23,
self.A_coefs["a32"]))
g_13 = self.projection_velocity_x(g_13, g_32, self.A_coefs["a32"])
g_23 = self.projection_velocity_y(g_23, g_32, self.A_coefs["a32"])
print("stage 3 done")
print("")
g_14 = sd.add_scalars(
v_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a41"],
self.make_rhs_ade_x(g_11, g_21, g_31, st_rhs_14)),
sd.mul_num_scalar(self.A_coefs["a42"],
self.make_rhs_ade_x(g_12, g_22, g_32, st_rhs_14)),
sd.mul_num_scalar(self.A_coefs["a43"],
self.make_rhs_ode_x(g_13, g_23, st_rhs_14))))))
g_24 = sd.add_scalars(
v_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a41"],
self.make_rhs_ade_y(g_11, g_21, g_31, st_rhs_24)),
sd.mul_num_scalar(self.A_coefs["a42"],
self.make_rhs_ade_y(g_12, g_22, g_32, st_rhs_24)),
sd.mul_num_scalar(self.A_coefs["a43"],
self.make_rhs_ode_y(g_13, g_23, st_rhs_24))))))
g_33 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_14, g_24,
self.A_coefs["a43"]))
g_14 = self.projection_velocity_x(g_14, g_33, self.A_coefs["a43"])
g_24 = self.projection_velocity_y(g_24, g_33, self.A_coefs["a43"])
print("stage 4 done")
print("")
g_1final = sd.add_scalars(
v_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.B_coefs["b1"],
self.make_rhs_ade_x(g_11, g_21, g_31)),
sd.mul_num_scalar(self.B_coefs["b2"],
self.make_rhs_ade_x(g_12, g_22, g_32, st_rhs_12)),
sd.mul_num_scalar(self.B_coefs["b3"],
self.make_rhs_ade_x(g_13, g_23, g_33, st_rhs_13)),
sd.mul_num_scalar(self.B_coefs["b4"],
self.make_rhs_ode_x(g_14, g_24, st_rhs_14))))))
g_2final = sd.add_scalars(
v_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.B_coefs["b1"],
self.make_rhs_ade_y(g_11, g_21, g_31)),
sd.mul_num_scalar(self.B_coefs["b2"],
self.make_rhs_ade_y(g_12, g_22, g_32, st_rhs_22)),
sd.mul_num_scalar(self.B_coefs["b3"],
self.make_rhs_ade_y(g_13, g_23, g_33, st_rhs_23)),
sd.mul_num_scalar(self.B_coefs["b4"],
self.make_rhs_ode_y(g_14, g_24, st_rhs_24))))))
g_34 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_1final, g_2final,
self.B_coefs["b4"]))
v_x.sc = self.projection_velocity_x(g_1final, g_34,
self.B_coefs["b4"]).sc.copy()
v_y.sc = self.projection_velocity_y(g_2final, g_34,
self.B_coefs["b4"]).sc.copy()
# The pressure must be the right Lagrange multiplier of the
# resulting velocity
p.sc = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_equation(v_x, v_y,
st_rhs_12, st_rhs_22), nsp).sc
else: #v_x, etc are trees
velocity_x = sd.Scalar(v_x)
velocity_y = sd.Scalar(v_y)
pressure = sd.Scalar(p)
if self.st_flag_vx: #we need to put the st_tree_vx to the same grading as v_x
op.set_to_same_grading(v_x, self.st_tree_vx)
op.run_pruning(self.st_tree_vx)
mesh.listing_of_leaves(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c2"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_12 = sd.Scalar(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c3"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_13 = sd.Scalar(self.st_tree_vx)
self.compute_source_term(self.st_tree_vx, self.st_func_vx,
t_ini + self.C_coefs["c4"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vx)
st_rhs_14 = sd.Scalar(self.st_tree_vx)
if self.st_flag_vy: #we need to put the st_tree_vy to the same grading as v_y
op.set_to_same_grading(v_y, self.st_tree_vy)
op.run_pruning(self.st_tree_vy)
mesh.listing_of_leaves(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c2"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_22 = sd.Scalar(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c3"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_23 = sd.Scalar(self.st_tree_vy)
self.compute_source_term(self.st_tree_vy, self.st_func_vy,
t_ini + self.C_coefs["c4"]*self.dt)
#mesh.listing_of_leaves(self.st_tree_vy)
st_rhs_24 = sd.Scalar(self.st_tree_vy)
g_11, g_21 = sd.Scalar(), sd.Scalar()
g_11.sc, g_21.sc = velocity_x.sc.copy(), velocity_y.sc.copy()
print("stage 1 done")
print("")
g_12 = sd.add_scalars(
velocity_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt,
sd.mul_num_scalar(self.A_coefs["a21"],
self.make_rhs_ode_x(g_11, g_21, st_rhs_12)))))
g_22 = sd.add_scalars(
velocity_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt,
sd.mul_num_scalar(self.A_coefs["a21"],
self.make_rhs_ode_y(g_11, g_21, st_rhs_22)))))
g_31 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_12, g_22,
self.A_coefs["a21"]))
g_12 = self.projection_velocity_x(g_12, g_31, self.A_coefs["a21"])
g_22 = self.projection_velocity_y(g_22, g_31, self.A_coefs["a21"])
print("stage 2 done")
print("")
g_13 = sd.add_scalars(
velocity_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a31"],
self.make_rhs_ade_x(g_11, g_21, g_31, st_rhs_13)),
sd.mul_num_scalar(self.A_coefs["a32"],
self.make_rhs_ode_x(g_12, g_22, st_rhs_13))))))
g_23 = sd.add_scalars(
velocity_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a31"],
self.make_rhs_ade_y(g_11, g_21, g_31, st_rhs_23)),
sd.mul_num_scalar(self.A_coefs["a32"],
self.make_rhs_ode_y(g_12, g_22, st_rhs_23))))))
g_32 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_13, g_23,
self.A_coefs["a32"]))
g_13 = self.projection_velocity_x(g_13, g_32, self.A_coefs["a32"])
g_23 = self.projection_velocity_y(g_23, g_32, self.A_coefs["a32"])
print("stage 3 done")
print("")
g_14 = sd.add_scalars(
velocity_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a41"],
self.make_rhs_ade_x(g_11, g_21, g_31, st_rhs_14)),
sd.mul_num_scalar(self.A_coefs["a42"],
self.make_rhs_ade_x(g_12, g_22, g_32, st_rhs_14)),
sd.mul_num_scalar(self.A_coefs["a43"],
self.make_rhs_ode_x(g_13, g_23, st_rhs_14))))))
g_24 = sd.add_scalars(
velocity_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.A_coefs["a41"],
self.make_rhs_ade_y(g_11, g_21, g_31, st_rhs_24)),
sd.mul_num_scalar(self.A_coefs["a42"],
self.make_rhs_ade_y(g_12, g_22, g_32, st_rhs_24)),
sd.mul_num_scalar(self.A_coefs["a43"],
self.make_rhs_ode_y(g_13, g_23, st_rhs_24))))))
g_33 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_14, g_24,
self.A_coefs["a43"]))
g_14 = self.projection_velocity_x(g_14, g_33, self.A_coefs["a43"])
g_24 = self.projection_velocity_y(g_24, g_33, self.A_coefs["a43"])
print("stage 4 done")
print("")
g_1final = sd.add_scalars(
velocity_x,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.B_coefs["b1"],
self.make_rhs_ade_x(g_11, g_21, g_31)),
sd.mul_num_scalar(self.B_coefs["b2"],
self.make_rhs_ade_x(g_12, g_22, g_32, st_rhs_12)),
sd.mul_num_scalar(self.B_coefs["b3"],
self.make_rhs_ade_x(g_13, g_23, g_33, st_rhs_13)),
sd.mul_num_scalar(self.B_coefs["b4"],
self.make_rhs_ode_x(g_14, g_24, st_rhs_14))))))
g_2final = sd.add_scalars(
velocity_y,
self.velocity_inverse_mass.apply(
sd.mul_num_scalar(self.dt, sd.add_scalars(
sd.mul_num_scalar(self.B_coefs["b1"],
self.make_rhs_ade_y(g_11, g_21, g_31)),
sd.mul_num_scalar(self.B_coefs["b2"],
self.make_rhs_ade_y(g_12, g_22, g_32, st_rhs_22)),
sd.mul_num_scalar(self.B_coefs["b3"],
self.make_rhs_ade_y(g_13, g_23, g_33, st_rhs_23)),
sd.mul_num_scalar(self.B_coefs["b4"],
self.make_rhs_ode_y(g_14, g_24, st_rhs_24))))))
g_34 = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_update(g_1final, g_2final,
self.B_coefs["b4"]))
velocity_x.sc = self.projection_velocity_x(g_1final, g_34,
self.B_coefs["b4"]).sc.copy()
velocity_y.sc = self.projection_velocity_y(g_2final, g_34,
self.B_coefs["b4"]).sc.copy()
# The pressure must be the right Lagrange multiplier of the
# resulting velocity
pressure.sc = self.solve(self.pressure_divgrad,
self.make_rhs_pressure_equation(velocity_x, velocity_y,
st_rhs_12, st_rhs_22), nsp).sc
self.scalar_to_tree(velocity_x, v_x)
self.scalar_to_tree(velocity_y, v_y)
self.scalar_to_tree(pressure, p)
#def advance(self, v_x=None, v_y=None, v_z=None, p=None, t_ini=0, nsp=None):
## needs an update to take into account a source term in the continuity
## equation
# st_rhs_12 = None
# st_rhs_13 = None
# st_rhs_14 = None
# st_rhs_22 = None
# st_rhs_23 = None
# st_rhs_24 = None
# if self.uniform: #v_x, v_y, etc are scalars, and we just advance them
# if self.st_flag_vx:
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c2"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_12 = sd.Scalar(self.st_tree_vx)
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c3"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_13 = sd.Scalar(self.st_tree_vx)
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c4"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_14 = sd.Scalar(self.st_tree_vx)
# if self.st_flag_vy:
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c2"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_22 = sd.Scalar(self.st_tree_vy)
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c3"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_23 = sd.Scalar(self.st_tree_vy)
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c4"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_24 = sd.Scalar(self.st_tree_vy)
# g_11, g_21 = sd.Scalar(), sd.Scalar()
# g_11.sc, g_21.sc = v_x.sc.copy(), v_y.sc.copy()
# print("stage 1 done")
# print("")
# g_12 = sd.add_scalars(
# v_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt,
# sd.mul_num_scalar(self.A_coefs["a21"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_12)))))
# g_22 = sd.add_scalars(
# v_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt,
# sd.mul_num_scalar(self.A_coefs["a21"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_22)))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_12, g_22))
# g_12 = self.projection_velocity_x(g_12, phi)
# g_22 = self.projection_velocity_y(g_22, phi)
# print("stage 2 done")
# print("")
# g_13 = sd.add_scalars(
# v_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a31"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_13)),
# sd.mul_num_scalar(self.A_coefs["a32"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_13))))))
# g_23 = sd.add_scalars(
# v_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a31"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_23)),
# sd.mul_num_scalar(self.A_coefs["a32"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_23))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_13, g_23))
# g_13 = self.projection_velocity_x(g_13, phi)
# g_23 = self.projection_velocity_y(g_23, phi)
# print("stage 3 done")
# print("")
# g_14 = sd.add_scalars(
# v_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a41"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_14)),
# sd.mul_num_scalar(self.A_coefs["a42"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_14)),
# sd.mul_num_scalar(self.A_coefs["a43"],
# self.make_rhs_ode_x(g_13, g_23, st_rhs_14))))))
# g_24 = sd.add_scalars(
# v_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a41"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_24)),
# sd.mul_num_scalar(self.A_coefs["a42"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_24)),
# sd.mul_num_scalar(self.A_coefs["a43"],
# self.make_rhs_ode_y(g_13, g_23, st_rhs_24))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_14, g_24))
# g_14 = self.projection_velocity_x(g_14, phi)
# g_24 = self.projection_velocity_y(g_24, phi)
# print("stage 4 done")
# print("")
# g_1final = sd.add_scalars(
# v_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.B_coefs["b1"],
# self.make_rhs_ode_x(g_11, g_21)),
# sd.mul_num_scalar(self.B_coefs["b2"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_12)),
# sd.mul_num_scalar(self.B_coefs["b3"],
# self.make_rhs_ode_x(g_13, g_23, st_rhs_13)),
# sd.mul_num_scalar(self.B_coefs["b4"],
# self.make_rhs_ode_x(g_14, g_24, st_rhs_14))))))
# g_2final = sd.add_scalars(
# v_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.B_coefs["b1"],
# self.make_rhs_ode_y(g_11, g_21)),
# sd.mul_num_scalar(self.B_coefs["b2"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_22)),
# sd.mul_num_scalar(self.B_coefs["b3"],
# self.make_rhs_ode_y(g_13, g_23, st_rhs_23)),
# sd.mul_num_scalar(self.B_coefs["b4"],
# self.make_rhs_ode_y(g_14, g_24, st_rhs_24))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_1final, g_2final))
# v_x.sc = self.projection_velocity_x(g_1final, phi).sc.copy()
# v_y.sc = self.projection_velocity_y(g_2final, phi).sc.copy()
# # The pressure must be the right Lagrange multiplier of the
# # resulting velocity
# p.sc = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_equation(v_x, v_y,
# st_rhs_12, st_rhs_22), nsp).sc
# else: #v_x, etc are trees
# velocity_x = sd.Scalar(v_x)
# velocity_y = sd.Scalar(v_y)
# pressure = sd.Scalar(p)
# if self.st_flag_vx: #we need to put the st_tree_vx to the same grading as v_x
# op.set_to_same_grading(v_x, self.st_tree_vx)
# op.run_pruning(self.st_tree_vx)
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c2"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_12 = sd.Scalar(self.st_tree_vx)
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c3"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_13 = sd.Scalar(self.st_tree_vx)
# self.compute_source_term(self.st_tree_vx, self.st_func_vx,
# t_ini + self.C_coefs["c4"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vx)
# st_rhs_14 = sd.Scalar(self.st_tree_vx)
# if self.st_flag_vy: #we need to put the st_tree_vy to the same grading as v_y
# op.set_to_same_grading(v_y, self.st_tree_vy)
# op.run_pruning(self.st_tree_vy)
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c2"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_22 = sd.Scalar(self.st_tree_vy)
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c3"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_23 = sd.Scalar(self.st_tree_vy)
# self.compute_source_term(self.st_tree_vy, self.st_func_vy,
# t_ini + self.C_coefs["c4"]*self.dt)
# mesh.listing_of_leaves(self.st_tree_vy)
# st_rhs_24 = sd.Scalar(self.st_tree_vy)
# g_11, g_21 = sd.Scalar(), sd.Scalar()
# g_11.sc, g_21.sc = velocity_x.sc.copy(), velocity_y.sc.copy()
# print("stage 1 done")
# print("")
# g_12 = sd.add_scalars(
# velocity_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt,
# sd.mul_num_scalar(self.A_coefs["a21"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_12)))))
# g_22 = sd.add_scalars(
# velocity_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt,
# sd.mul_num_scalar(self.A_coefs["a21"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_22)))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_12, g_22))
# g_12 = self.projection_velocity_x(g_12, phi)
# g_22 = self.projection_velocity_x(g_22, phi)
# print("stage 2 done")
# print("")
# g_13 = sd.add_scalars(
# velocity_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a31"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_13)),
# sd.mul_num_scalar(self.A_coefs["a32"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_13))))))
# g_23 = sd.add_scalars(
# velocity_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a31"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_23)),
# sd.mul_num_scalar(self.A_coefs["a32"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_23))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_13, g_23))
# g_13 = self.projection_velocity_x(g_13, phi)
# g_23 = self.projection_velocity_x(g_23, phi)
# print("stage 3 done")
# print("")
# g_14 = sd.add_scalars(
# velocity_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a41"],
# self.make_rhs_ode_x(g_11, g_21, st_rhs_14)),
# sd.mul_num_scalar(self.A_coefs["a42"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_14)),
# sd.mul_num_scalar(self.A_coefs["a43"],
# self.make_rhs_ode_x(g_13, g_23, st_rhs_14))))))
# g_24 = sd.add_scalars(
# velocity_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.A_coefs["a41"],
# self.make_rhs_ode_y(g_11, g_21, st_rhs_24)),
# sd.mul_num_scalar(self.A_coefs["a42"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_24)),
# sd.mul_num_scalar(self.A_coefs["a43"],
# self.make_rhs_ode_y(g_13, g_23, st_rhs_24))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_14, g_24))
# g_14 = self.projection_velocity_x(g_14, phi)
# g_24 = self.projection_velocity_x(g_24, phi)
# print("stage 4 done")
# print("")
# g_1final = sd.add_scalars(
# velocity_x,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.B_coefs["b1"],
# self.make_rhs_ode_x(g_11, g_21)),
# sd.mul_num_scalar(self.B_coefs["b2"],
# self.make_rhs_ode_x(g_12, g_22, st_rhs_12)),
# sd.mul_num_scalar(self.B_coefs["b3"],
# self.make_rhs_ode_x(g_13, g_23, st_rhs_13)),
# sd.mul_num_scalar(self.B_coefs["b4"],
# self.make_rhs_ode_x(g_14, g_24, st_rhs_14))))))
# g_2final = sd.add_scalars(
# velocity_y,
# self.velocity_inverse_mass.apply(
# sd.mul_num_scalar(self.dt, sd.add_scalars(
# sd.mul_num_scalar(self.B_coefs["b1"],
# self.make_rhs_ode_y(g_11, g_21)),
# sd.mul_num_scalar(self.B_coefs["b2"],
# self.make_rhs_ode_y(g_12, g_22, st_rhs_22)),
# sd.mul_num_scalar(self.B_coefs["b3"],
# self.make_rhs_ode_y(g_13, g_23, st_rhs_23)),
# sd.mul_num_scalar(self.B_coefs["b4"],
# self.make_rhs_ode_y(g_14, g_24, st_rhs_24))))))
# phi = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_update(g_1final, g_2final))
# velocity_x.sc = self.projection_velocity_x(g_1final, phi).sc.copy()
# velocity_y.sc = self.projection_velocity_y(g_2final, phi).sc.copy()
# # The pressure must be the right Lagrange multiplier of the
# # resulting velocity
# pressure.sc = self.solve(self.pressure_divgrad,
# self.make_rhs_pressure_equation(velocity_x, velocity_y,
# st_rhs_12, st_rhs_22), nsp).sc
# self.scalar_to_tree(velocity_x, v_x)
# self.scalar_to_tree(velocity_y, v_y)
# self.scalar_to_tree(pressure, p)
| 47.461731 | 90 | 0.550453 | 5,445 | 37,827 | 3.445179 | 0.061341 | 0.072499 | 0.047764 | 0.083587 | 0.836718 | 0.828349 | 0.827176 | 0.827176 | 0.822965 | 0.822005 | 0 | 0.048423 | 0.345415 | 37,827 | 796 | 91 | 47.521357 | 0.70918 | 0.432627 | 0 | 0.813115 | 0 | 0 | 0.015662 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006557 | false | 0 | 0.045902 | 0 | 0.055738 | 0.055738 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f6fcdeedab3006bd09cd8de955ac619c7a334ee | 12,527 | py | Python | python/prepare_database.py | AaronWChen/suggest_recipe | 3d86693c0680804b9af475a428e7db6152ab2628 | [
"MIT"
] | 1 | 2020-12-08T19:42:45.000Z | 2020-12-08T19:42:45.000Z | python/prepare_database.py | AaronWChen/suggest_recipe | 3d86693c0680804b9af475a428e7db6152ab2628 | [
"MIT"
] | 7 | 2020-03-26T22:10:27.000Z | 2022-03-12T00:22:11.000Z | python/prepare_database.py | AaronWChen/suggest_recipe | 3d86693c0680804b9af475a428e7db6152ab2628 | [
"MIT"
] | null | null | null | """ This file contains code needed to prepare the scraped Epicurious recipe
JSON to convert to a database that can be used for cosine similarity analysis.
"""
# Import necessary libraries
import json
import csv
import re
import pandas as pd
import numpy as np
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
import string
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import cosine_similarity, pairwise_distances
from sklearn.feature_extraction.text import TfidfVectorizer
import joblib
# Load stopwords and prepare lemmatizer
stopwords_loc = "../../write_data/food_stopwords.csv"
with open(stopwords_loc, "r") as myfile:
reader = csv.reader(myfile)
food_stopwords = [col for row in reader for col in row]
stopwords_list = stopwords.words("english") + list(string.punctuation) + food_stopwords
lemmatizer = WordNetLemmatizer()
# Define functions
def cuisine_namer(text):
"""This function converts redundant and/or rare categories into more common
ones/umbrella ones.
The hope is that, in the future, cuisine tags will no longer be undersampled
and this renaming mechanism will not be needed.
"""
if text == "Central American/Caribbean":
return "Caribbean"
elif text == "Jewish":
return "Kosher"
elif text == "Eastern European/Russian":
return "Eastern European"
elif text in ["Spanish/Portuguese", "Greek"]:
return "Mediterranean"
elif text == "Central/South American":
return "Latin American"
elif text == "Sushi":
return "Japanese"
elif text == "Southern Italian":
return "Italian"
elif text in ["Southern", "Tex-Mex"]:
return "American"
elif text in ["Southeast Asian", "Korean"]:
return "Asian"
else:
return text
filename = "../../raw_data/recipes-en-201706/epicurious-recipes_m2.json"
with open(filename, "r") as f:
datastore = json.load(f)
def load_data(filepath, test_size=0.1, random_state=10):
""" This function uses a filepath, test_size, and random_state
to load the Epicurious JSON into a dataframe and then split into
train/test sets."""
with open(filepath, "r") as f:
datastore = json.load(f)
datastore_df = pd.DataFrame(datastore)
X_train, X_test = train_test_split(
datastore_df, test_size=test_size, random_state=random_state
)
return X_train, X_test
def prep_data(X):
""" This function takes a dataframe X, drops columns that will not be used,
expands the hierarchical column into the dataframe, renames the columns
to be more human-readable, and drops one column created during dataframe
expansion"""
X.drop(
[
"pubDate",
"author",
"type",
"aggregateRating",
"reviewsCount",
"willMakeAgainPct",
"dateCrawled",
],
axis=1,
inplace=True,
)
concat = pd.concat([X.drop(["tag"], axis=1), X["tag"].apply(pd.Series)], axis=1)
concat.drop(
[
0,
"photosBadgeAltText",
"photosBadgeFileName",
"photosBadgeID",
"photosBadgeRelatedUri",
],
axis=1,
inplace=True,
)
cols = [
"title",
"url",
"photo_data",
"ingredients",
"category",
"name",
"remove"
]
concat.columns = cols
concat.drop("remove", axis=1, inplace=True)
cuisine_only = concat[concat["category"] == "cuisine"]
cuisine_only.dropna(axis=0, inplace=True)
cuisine_only["imputed_label"] = cuisine_only["name"].apply(cuisine_namer)
cuisine_only.drop('name', axis=1, inplace=True)
return cuisine_only
def fit_transform_tfidf_matrix(X_df, stopwords_list):
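"""Fit a TF-IDF vectorizer on the joined, lower-cased ingredient lists of X_df
and return the fitted vectorizer together with the resulting document-term
matrix (a DataFrame indexed like X_df)."""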
tfidf = TfidfVectorizer(
stop_words=stopwords_list,
min_df=2,
token_pattern=r"(?u)\b[a-zA-Z]{2,}\b",
preprocessor=lemmatizer.lemmatize,
)
temp = X_df["ingredients"].apply(" ".join).str.lower()
tfidf.fit(temp)
response = tfidf.transform(temp)
print(response.shape)
word_matrix = pd.DataFrame(
response.toarray(), columns=tfidf.get_feature_names(), index=X_df.index
)
return tfidf, word_matrix
def transform_tfidf(tfidf, recipe):
response = tfidf.transform(recipe["ingredients"])
transformed_recipe = pd.DataFrame(
response.toarray(), columns=tfidf.get_feature_names(), index=recipe.index
)
return transformed_recipe
def transform_from_test_tfidf(tfidf, df, idx):
recipe = [" ".join(df.iloc[idx]["ingredients"])]
response = tfidf.transform(recipe)
transformed_recipe = pd.DataFrame(
response.toarray(), columns=tfidf.get_feature_names()
)
return transformed_recipe
def filter_out_cuisine(ingred_word_matrix, X_df, cuisine_name, tfidf):
combo = pd.concat([ingred_word_matrix, X_df["imputed_label"]], axis=1)
filtered_ingred_word_matrix = combo[combo["imputed_label"] != cuisine_name].drop(
"imputed_label", axis=1
)
return filtered_ingred_word_matrix
def find_closest_recipes(filtered_ingred_word_matrix, recipe_tfidf, X_df):
search_vec = np.array(recipe_tfidf).reshape(1, -1)
res_cos_sim = cosine_similarity(filtered_ingred_word_matrix, search_vec)
top_five = np.argsort(res_cos_sim.flatten())[-5:][::-1]
proximity = res_cos_sim[top_five]
recipe_ids = [filtered_ingred_word_matrix.iloc[idx].name for idx in top_five]
suggest_df = X_df.loc[recipe_ids]
return suggest_df, proximity
# Create the dataframe
X_train, X_test = load_data(filename)
with open("joblib/test_subset.joblib", "wb") as fo:
joblib.dump(X_test, fo, compress=True)
prepped = prep_data(X_train)
with open("joblib/recipe_dataframe.joblib", "wb") as fo:
joblib.dump(prepped, fo, compress=True)
# Create the ingredients TF-IDF matrix
ingred_tfidf, ingred_word_matrix = fit_transform_tfidf_matrix(prepped, stopwords_list)
with open("joblib/recipe_tfidf.joblib", "wb") as fo:
joblib.dump(ingred_tfidf, fo, compress=True)
with open("joblib/recipe_word_matrix.joblib", "wb") as fo:
joblib.dump(ingred_word_matrix, fo, compress=True)
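# A minimal retrieval sketch using the artifacts built above (assumes a query
# recipe taken from the held-out X_test; "Italian" is just an example label):
#
#   recipe_vec = transform_from_test_tfidf(ingred_tfidf, X_test, idx=0)
#   filtered = filter_out_cuisine(ingred_word_matrix, prepped, "Italian", ingred_tfidf)
#   suggestions, proximity = find_closest_recipes(filtered, recipe_vec, prepped)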
| 30.331719 | 87 | 0.679572 | 1,602 | 12,527 | 5.138577 | 0.162297 | 0.029155 | 0.034985 | 0.029155 | 0.991011 | 0.984208 | 0.984208 | 0.984208 | 0.984208 | 0.984208 | 0 | 0.005365 | 0.211463 | 12,527 | 413 | 88 | 30.331719 | 0.828002 | 0.022272 | 0 | 0.872483 | 0 | 0 | 0.151235 | 0.042088 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.09396 | null | null | 0.006711 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
58b616747729295e4d6ba7f9c7e716b1dc3d9aa0 | 162 | py | Python | FC-2019.1/saida1.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | FC-2019.1/saida1.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | FC-2019.1/saida1.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | print('-' * 39)
print('|', ' ' * 35, '|')
print('|', ' ' * 35, '|')
print('|', ' ' * 35, '|')
print('|', ' ' * 35, '|')
print('|', ' ' * 35, '|')
print('-' * 39)
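# Equivalent loop form for the five repeated middle rows:
# for _ in range(5):
#     print('|', ' ' * 35, '|')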
| 20.25 | 25 | 0.302469 | 14 | 162 | 3.5 | 0.214286 | 0.714286 | 1.22449 | 1.142857 | 0.816327 | 0.816327 | 0.816327 | 0.816327 | 0.816327 | 0 | 0 | 0.111111 | 0.222222 | 162 | 7 | 26 | 23.142857 | 0.277778 | 0 | 0 | 1 | 0 | 0 | 0.104938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 14 |
450705cb9f5caff8e52cd8d37d71d8dd28422804 | 103,670 | py | Python | network.py | shadowwkl/MinMaxCAM | 24d5f3fdf46fcce591a030c698167a540eca3466 | [
"MIT"
] | 2 | 2021-11-10T23:31:31.000Z | 2022-02-25T06:10:11.000Z | network.py | shadowwkl/MinMaxCAM | 24d5f3fdf46fcce591a030c698167a540eca3466 | [
"MIT"
] | null | null | null | network.py | shadowwkl/MinMaxCAM | 24d5f3fdf46fcce591a030c698167a540eca3466 | [
"MIT"
] | null | null | null | import os
import torch
import torch.nn.functional as F
from torch import nn
from torchvision.models import alexnet, vgg16, vgg16_bn
from torchvision.ops import roi_pool
# from utils import BASE_DIR
import pdb
from torchvision.utils import save_image
import cv2
import numpy as np
from tqdm import tqdm
import itertools
from chainercv.utils.bbox.bbox_iou import bbox_iou
from network_general import resnet50, CLUB, resnet50_cvpr, mobilenet_v2, resnet50_i2c
from network_general import initialize_weights, mobilenet_v1
from sklearn.metrics import auc
from torch.autograd import Variable
class Minmaxcam_resnet(nn.Module):
def __init__(self, base_net="vgg", set_size = 5, numclass=200):
super().__init__()
        # `base_net` is validated for interface compatibility only; the
        # backbone constructed below is always a CAM-style ResNet-50
        assert base_net in {"alexnet", "vgg"}, "`base_net` should be in {alexnet, vgg}"
self.base_net = base_net
self.numclass = numclass
self.base = resnet50_cvpr(architecture_type='cam', pretrained=True)
self.pred = nn.Linear(2048, self.numclass)
self.set_size = set_size
self.aa = list(range(0, self.set_size))
self.bb = list(itertools.combinations(self.aa, 2))
self.cc = np.zeros([len(self.bb),2])
for i in range(len(self.bb)):
self.cc[i,0] = self.bb[i][0]
self.cc[i,1] = self.bb[i][1]
self.mse = nn.MSELoss()
def show_tsne(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.base(batch_imgs)
# out_p5 = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
out_p5_hm = self.base(batch_imgs*hm)
repre_cam = torch.mean(out_p5_hm, dim=(2,3))
repre = torch.mean(out_p5, dim=(2,3))
return repre_cam, repre
def loss_common_part_interclass(self, repre_1, repre_2):
# print('hi')
# repre_1 = torch.zeros([len(self.cc), 4096]).cuda()
# repre_2 = torch.zeros([len(self.cc), 4096]).cuda()
c_loss = torch.tensor([0.]).cuda()
for i in range(len(self.cc)):
# pdb.set_trace()
c_loss += self.mse(repre_1[int(self.cc[i,0])].unsqueeze(0), repre_2[int(self.cc[i,1])].unsqueeze(0))
return c_loss/len(self.cc)
def update_classification(self, batch_imgs, label, ss, bs):
for param in self.base.parameters():
param.requires_grad = True
for param in self.pred.parameters():
param.requires_grad = True
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pdb.set_trace()
#################################
# repre_set = torch.mean(repre_masked.reshape([bs,ss,1024]), dim=1)
# loss_set = F.cross_entropy(self.set_pred(repre_set) , label[0])
# pdb.set_trace()
# print(torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0])
if len(label[0]) != ss*bs:
loss_img = F.cross_entropy(pred, torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0], reduction='mean')
else:
# pdb.set_trace()
loss_img = F.cross_entropy(pred, label[0], reduction='mean')
return loss_img
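    # Worked example (hedged) of the label tiling above, with bs=2 sets of
    # ss=3 images each:
    #   label = tensor([[3, 7]])
    #   label.repeat(1, 3)             -> [[3, 7, 3, 7, 3, 7]]
    #   .view(3, 2)                    -> [[3, 7], [3, 7], [3, 7]]
    #   torch.transpose(_, 1, 0)       -> [[3, 3, 3], [7, 7, 7]]
    #   .reshape(1, 6)[0]              -> [3, 3, 3, 7, 7, 7]
    # i.e. each set label is repeated once per image, matching a batch in
    # which the images of a set are stacked consecutively.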
def get_hms(self, batch_imgs, label, ss, bs):
# pdb.set_trace()
# out_extra = out_extra.detach()
# self.base.eval()
out_extra = self.base(batch_imgs)
# self.base.train()
# pdb.set_trace()
# pred = self.pred(torch.mean(out_extra, dim=(2,3)))
predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
# pdb.set_trace()
# np.where((predict_cls != 200) and (predict_cls != 201) and(predict_cls != 202))
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
# out_extra_masked = out_extra*hm
# repre_cam = self.gap(out_extra_masked).squeeze(2).squeeze(2)
# repre = self.gap(out_extra).squeeze(2).squeeze(2)
# ################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
return hm
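    # Hedged refactoring sketch: the per-sample min/max scaling above recurs
    # throughout this file; a shared helper would look like the following.
    # It is intentionally not wired into the existing call sites.
    @staticmethod
    def _normalize_cam(hm):
        # hm: (N, 1, H, W) raw CAMs; returns the same shape scaled to [0, 1]
        flat = hm.flatten(2)                               # (N, 1, H*W)
        mn = flat.min(dim=2)[0].unsqueeze(2).unsqueeze(3)  # (N, 1, 1, 1)
        mx = flat.max(dim=2)[0].unsqueeze(2).unsqueeze(3)
        return (hm - mn) / (mx - mn)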
def top1_loc_top15(self, batch_imgs, gt_bbox, gt, ori_size, bprime):
out_p5 = self.base(batch_imgs)
# out_extra = torch.relu(self.extra_conv(out_p5))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls, bb = bprime.predict_acc(batch_imgs)
# pdb.set_trace()
predict_cls_ = gt[0]-1
for i in range(predict_cls_.shape[0]):
if i == 0:
W = self.pred.weight[int(predict_cls_[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls_[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# threshold_list = np.arange(0,1,0.01)
threshold_list = np.array([0.1])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
# pdb.set_trace()
# return torch.sum(predict_cls == gt[0]-1)
for ii in range(batch_imgs.shape[0]):
# pdb.set_trace()
if (predict_cls[ii] == gt[0][ii]-1):
# print('yes')
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,:][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,:][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,:][0]-1 + gt_bbox[ii,:][2]
c_gt_bbox[0,3] = gt_bbox[ii,:][1]-1 + gt_bbox[ii,:][3]
# iouu = np.zeros([100])
for k in range(len(counter_03)):
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
counter_05[k] += 1
counter_03[k] += 1
if gt[0][ii]-1 in bb[ii]:
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,:][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,:][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,:][0]-1 + gt_bbox[ii,:][2]
c_gt_bbox[0,3] = gt_bbox[ii,:][1]-1 + gt_bbox[ii,:][3]
# iouu = np.zeros([100])
for k in range(len(counter_03)):
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
counter_07[k] += 1
# counter_03[k] += 1
return counter_03, counter_05, counter_07
def update_pwnn(self, batch_imgs, label, ss, bs):
for param in self.base.parameters():
param.requires_grad = False
for param in self.pred.parameters():
param.requires_grad = True
out_extra = self.base(batch_imgs)
if len(label[0]) != ss*bs:
predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
else:
predict_cls = label[0]
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[predict_cls[i]].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
# out_extra_masked = out_extra*hm
# repre_cam = self.gap(out_extra_masked).squeeze(2).squeeze(2)
# repre = self.gap(out_extra).squeeze(2).squeeze(2)
# ################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
# self.base.eval()
out_extra_hm = self.base(batch_imgs*hm)
repre_cam = torch.mean(out_extra_hm, dim=(2,3))
repre = torch.mean(out_extra, dim=(2,3))
################################################
# pdb.set_trace()
for i in range(bs):
c_loss_common = self.loss_common_part(repre_cam[i*ss:ss*(i+1)])
c_loss_ori = self.loss_ori_img(repre_cam[ss*i:ss*(i+1)], repre[ss*i:ss*(i+1)])
if i == 0:
loss_common = c_loss_common
loss_ori = c_loss_ori
else:
loss_common += c_loss_common
loss_ori += c_loss_ori
loss_common /= bs
loss_ori /= bs
return loss_common, loss_ori
def top1_loc_imagenet(self, batch_imgs, gt_bbox, gt, ori_size):
# pdb.set_trace()
out_extra = self.base(batch_imgs)
predict_cls = gt[0]
# pdb.set_trace()
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.01)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]:
for ii in range(batch_imgs.shape[0]):
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
# pdb.set_trace()
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,1],ori_size[ii,0]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
for p in range(gt_bbox.shape[1]):
if gt_bbox[ii, p].shape[1] != 0:
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,p][0][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,p][0][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,p][0][0]-1 + gt_bbox[ii,p][0][2]
c_gt_bbox[0,3] = gt_bbox[ii,p][0][1]-1 + gt_bbox[ii,p][0][3]
for k in range(len(counter_03)):
if counter_05[k] * counter_03[k] * counter_07[k] == 1:
continue
else:
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# for kk in range
# pdb.set_trace()
for kk in range(len(contours)):
# for kk in range(1):
# cc = max(contours, key=cv2.contourArea)
cc = contours[kk]
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
if counter_05[k] == 0:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
if counter_03[k] == 0:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
if counter_07[k] == 0:
counter_07[k] += 1
else:
break
return counter_03, counter_05, counter_07
else:
return 0
def acc(self, batch_imgs, index, gt_bbox, gt, ori_size):
# pdb.set_trace()
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(out_p3)
predict_cls = torch.argmax(pred, dim=1)
# predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
        counter = 0
        if predict_cls[0] == gt[0][0]-1:
            # every image in the batch counts as a hit for the predicted class
            counter = batch_imgs.shape[0]
            return counter
        else:
            return 0
def top1_loc_auc(self, batch_imgs, gt, mask_path):
out_extra = self.base(batch_imgs)
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.001)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
        num_bins = len(threshold_list) + 2
        threshold_list_right_edge = np.append(threshold_list, [1.0, 2.0, 3.0])
        # np.float was removed in NumPy 1.24; use the builtin float instead
        gt_true_score_hist = np.zeros(num_bins, dtype=float)
        gt_false_score_hist = np.zeros(num_bins, dtype=float)
if predict_cls[0] == gt[0][0]-1:
auc_ = 0
for ii in range(batch_imgs.shape[0]):
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(224,224), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
precision = np.zeros([threshold_list.shape[0]])
recall = np.zeros([threshold_list.shape[0]])
c_mask_path = mask_path[ii]
mask_path_ = []
for kk in range(len(c_mask_path)):
cc_path = c_mask_path[kk]
# pdb.set_trace()
if cc_path.split('_')[-1] == 'ignore.png':
ignore_path_ = cc_path
else:
mask_path_.append(cc_path)
                # `get_mask` is assumed to come from the project's evaluation
                # utilities; it is not defined or imported in this file
                c_gt_mask = get_mask(mask_path_, ignore_path_)
c_hm = c_hm[0,0].detach().cpu().numpy()
gt_true_scores = c_hm[c_gt_mask == 1]
gt_false_scores = c_hm[c_gt_mask == 0]
                gt_true_hist, _ = np.histogram(gt_true_scores, bins=threshold_list_right_edge)
                gt_true_score_hist += gt_true_hist.astype(float)
                gt_false_hist, _ = np.histogram(gt_false_scores, bins=threshold_list_right_edge)
                gt_false_score_hist += gt_false_hist.astype(float)
# pdb.set_trace()
return gt_true_score_hist, gt_false_score_hist
else:
return 0
def top1_loc_auc_2(self, gt_true_score_hist, gt_false_score_hist):
# pdb.set_trace()
num_gt_true = gt_true_score_hist.sum()
tp = gt_true_score_hist[::-1].cumsum()
fn = num_gt_true - tp
num_gt_false = gt_false_score_hist.sum()
fp = gt_false_score_hist[::-1].cumsum()
tn = num_gt_false - fp
if ((tp + fn) <= 0).all():
raise RuntimeError("No positive ground truth in the eval set.")
if ((tp + fp) <= 0).all():
raise RuntimeError("No positive prediction in the eval set.")
non_zero_indices = (tp + fp) != 0
precision = tp / (tp + fp)
recall = tp / (tp + fn)
auc = (precision[1:] * np.diff(recall))[non_zero_indices[1:]].sum()
# auc *= 100
# print("Mask AUC on split {}: {}".format(self.split, auc))
return auc
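    # Hedged worked example for the integral above: with 10 equal bins and
    # foreground/background scores mixed uniformly (a chance-level detector),
    #   h = np.ones(10) * 10.0
    #   top1_loc_auc_2(h.copy(), h.copy())
    # tp and fp both cumulate to [10, 20, ..., 100], so precision is 0.5 at
    # every threshold while recall steps by 0.1; the sum is 0.5 * 0.9 = 0.45,
    # roughly the foreground fraction, as expected at chance level.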
def top1_loc(self, batch_imgs, gt_bbox, gt, ori_size):
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(out_p3)
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
threshold_list = np.arange(0,1,0.01)
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]-1:
for ii in range(batch_imgs.shape[0]):
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
# c_hm = F.sigmoid(20*(c_hm - 0.5))
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,:][0]-0
c_gt_bbox[0,1] = gt_bbox[ii,:][1]-0
c_gt_bbox[0,2] = gt_bbox[ii,:][0]-0 + gt_bbox[ii,:][2]
c_gt_bbox[0,3] = gt_bbox[ii,:][1]-0 + gt_bbox[ii,:][3]
# iouu = np.zeros([100])
for k in range(len(counter_03)):
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
counter_07[k] += 1
return counter_03, counter_05, counter_07
else:
return 0
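    # Hedged helper sketch collecting the threshold -> contour -> bounding-rect
    # steps repeated throughout this file; not wired into the existing methods.
    @staticmethod
    def _cam_to_bbox(heatmap, rel_threshold):
        # heatmap: (H, W) numpy array scaled to [0, 1]
        gray = (255 * heatmap).astype('uint8')
        thr = int(np.max(gray) * rel_threshold)
        _, binary = cv2.threshold(gray, thr, 255, cv2.THRESH_TOZERO)
        contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        xx, yy, ww, hh = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return np.array([[xx, yy, xx + ww, yy + hh]], dtype=float)  # x1, y1, x2, y2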
def loss_common_part(self, repre):
# print('hi')
# repre_1 = torch.zeros([len(self.cc), 4096]).cuda()
# repre_2 = torch.zeros([len(self.cc), 4096]).cuda()
c_loss = torch.tensor([0.]).cuda()
for i in range(len(self.cc)):
# pdb.set_trace()
c_loss += self.mse(repre[int(self.cc[i,0])].unsqueeze(0), repre[int(self.cc[i,1])].unsqueeze(0))
return c_loss/len(self.cc)
# pdb.set_trace()
def loss_ori_img(self, repre, repre_ori):
# print('hi')
loss = torch.mean(self.mse(repre, repre_ori))
return loss
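    # Worked example (hedged) for loss_common_part: with set_size = 3 the pair
    # indices in self.cc are (0,1), (0,2), (1,2), so for representations
    # r0, r1, r2 the loss is (MSE(r0,r1) + MSE(r0,r2) + MSE(r1,r2)) / 3;
    # identical masked representations within a set drive it to zero.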
def show_hm(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# pdb.set_trace()
for i in range(hm.shape[0]):
c_hm = hm[i].unsqueeze(0)
c_hm = (c_hm - torch.min(c_hm)) /(torch.max(c_hm) - torch.min(c_hm))
c_hm = F.interpolate(c_hm, size=(224,224), mode='bilinear')
c_hm = c_hm.cpu().numpy()
c_hm = c_hm[0][0]
# pdb.set_trace()
cv2.imwrite('test_{}.png'.format(i), 255*c_hm)
    def show_hm____(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
        # debugging variant of show_hm: also derives a thresholded bounding box
        # and tags the saved overlay with its IoU against the ground truth
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# pdb.set_trace()
# ccc = 0
for i in range(batch_imgs.shape[0]):
heatmap = np.zeros([ori_size[i,0],ori_size[i,1]])
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
c_gt_bbox = np.zeros([1,4])
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (ori_size[i,1],ori_size[i,0]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# # c_hm = c_hm >= 0.13
# c_hm = c_hm >= 0.18
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * 0.13)
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
# for iii in range(cc.shape[0]):
# heatmap[cc[iii][0][0], cc[iii][0][1]] = 1
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
# c_gt_bbox = np.array([gt_bbox[i,:]-1])
# pdb.set_trace()
# gt_bbox
c_gt_bbox[0,0] = gt_bbox[i,:][0]-1
c_gt_bbox[0,1] = gt_bbox[i,:][1]-1
c_gt_bbox[0,2] = gt_bbox[i,:][0]-1 + gt_bbox[i,:][2]
c_gt_bbox[0,3] = gt_bbox[i,:][1]-1 + gt_bbox[i,:][3]
iou = bbox_iou(c_gt_bbox,estimated_bbox)[0]
# pdb.set_trace()
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
if gt_label[0][i]-1 == predict_cls[i]:
cv2.imwrite('./{}_{}_{}_T_.png'.format(index, i, iou), superimposed_img_1)
else:
cv2.imwrite('./{}_{}_{}_F_.png'.format(index, i, iou), superimposed_img_1)
def show_hm_openimage(self, batch_imgs, index, gt_label):
# pdb.set_trace()
out_extra = self.base(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# pdb.set_trace()
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (224,224))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(224,224), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.2
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
cv2.imwrite('./{}_{}.png'.format(index, i), superimposed_img_1)
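# Hedged usage sketch for Minmaxcam_resnet (tensor shapes and optimizer
# settings are illustrative, not taken from the training script):
#
#   model = Minmaxcam_resnet(set_size=5, numclass=200).cuda()
#   opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
#   imgs = torch.randn(10, 3, 224, 224).cuda()     # bs=2 sets of ss=5 images
#   labels = torch.randint(0, 200, (1, 2)).cuda()  # one label per set
#   loss_cls = model.update_classification(imgs, labels, ss=5, bs=2)
#   loss_common, loss_ori = model.update_pwnn(imgs, labels, ss=5, bs=2)
#   (loss_cls + loss_common + loss_ori).backward()
#   opt.step()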
class Minmaxcam_mobilenet(nn.Module):
def __init__(self, base_net="vgg", set_size = 5, numclass=200):
super().__init__()
self.numclass = numclass
self.base = mobilenet_v2(pretrained=True)
self.features = self.base.features
################################################################
self.pred = nn.Linear(1280, self.numclass)
self.gap = nn.AvgPool2d(28, stride=28)
################################################################
self.set_size = set_size
self.aa = list(range(0, self.set_size))
self.bb = list(itertools.combinations(self.aa, 2))
self.cc = np.zeros([len(self.bb),2])
for i in range(len(self.bb)):
self.cc[i,0] = self.bb[i][0]
self.cc[i,1] = self.bb[i][1]
# self.cc = int(self.cc)
self.cos = nn.CosineSimilarity(dim=1, eps=1e-6)
self.mse = nn.MSELoss()
def show_tsne(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
# out_p5 = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
out_p5_hm = self.features(batch_imgs*hm)
repre_cam = torch.mean(out_p5_hm, dim=(2,3))
repre = torch.mean(out_p5, dim=(2,3))
return repre_cam, repre
def update_classification(self, batch_imgs, label, ss, bs):
for param in self.features.parameters():
param.requires_grad = True
# for param in self.extra_conv.parameters():
# param.requires_grad = True
for param in self.pred.parameters():
param.requires_grad = True
# out_p1 = self.features_p1(batch_imgs)
# out_p2 = self.features_p2(out_p1)
# out_p3 = self.features_p3(out_p2)
# out_p4 = self.features_p4(out_p3)
# out_p5 = self.features_p5(out_p4)
out_p5 = self.features(batch_imgs)
# out_p5 = torch.relu(self.extra_conv(out_p5))
pred = self.pred(self.gap(out_p5).squeeze(2).squeeze(2))
# pdb.set_trace()
#################################
# repre_set = torch.mean(repre_masked.reshape([bs,ss,1024]), dim=1)
# loss_set = F.cross_entropy(self.set_pred(repre_set) , label[0])
# pdb.set_trace()
# print(torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0])
loss_img = F.cross_entropy(pred, torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0], reduction='mean')
return loss_img
def show_hm_bbox_imagenet(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
out_extra = self.features(batch_imgs)
# out_extra = torch.relu(self.extra_conv(out_p5))
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
        for ii in range(batch_imgs.shape[0]):
            save_image(batch_imgs[ii],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
            ref_1 = cv2.imread('./temp_1.png')
            ref_1 = cv2.resize(ref_1, (ori_size[ii,0],ori_size[ii,1]))
            c_hm = F.interpolate(hm[ii].unsqueeze(0), size=(ori_size[ii,1],ori_size[ii,0]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.36
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
for p in range(gt_bbox.shape[1]):
if gt_bbox[ii, p].shape[1] != 0:
c_gt_bbox = np.zeros([1,4], dtype=int)
c_gt_bbox[0,0] = gt_bbox[ii,p][0][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,p][0][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,p][0][0]-1 + gt_bbox[ii,p][0][2]
c_gt_bbox[0,3] = gt_bbox[ii,p][0][1]-1 + gt_bbox[ii,p][0][3]
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
# threshold_value = int(np.max(cm_) * 0.39)
# threshold_value = int(np.max(cm_) * 0.33)
threshold_value = int(np.max(cm_) * 0.28)
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for kk in range(1):
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4],dtype=int)
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
            cv2.imwrite('./{}_{}_T.png'.format(index, ii), superimposed_img_1)
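    # Hedged sketch: the per-channel pixel assignments used above (and in
    # show_hm_bbox below) hand-paint a 3 px box edge; cv2.rectangle expresses
    # the same idea in one call. Kept standalone so the methods stay untouched.
    @staticmethod
    def _draw_box(img, box, color):
        # box: (x1, y1, x2, y2); color: BGR tuple, e.g. (0, 255, 0) for green
        x1, y1, x2, y2 = [int(v) for v in box]
        cv2.rectangle(img, (x1, y1), (x2, y2), color, thickness=3)
        return img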
def update_pwnn(self, batch_imgs, label, ss, bs):
for param in self.features.parameters():
param.requires_grad = False
for param in self.pred.parameters():
param.requires_grad = True
out_p5 = self.features(batch_imgs)
        pred = self.pred(self.gap(out_p5).squeeze(2).squeeze(2))
        # per-image class indices from the tiled set labels, as in the other
        # update_pwnn variants
        predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[predict_cls[i]].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
out_p5_hm = self.features(batch_imgs*hm)
repre_cam = self.gap(out_p5_hm).squeeze(2).squeeze(2)
repre = self.gap(out_p5).squeeze(2).squeeze(2)
################################################
# repre_cam = self.gap(out_p5*hm).squeeze(2).squeeze(2)
# repre = self.gap(out_p5).squeeze(2).squeeze(2)
################################################
# pdb.set_trace()
for i in range(bs):
c_loss_common = self.loss_common_part(repre_cam[i*ss:ss*(i+1)])
c_loss_ori = self.loss_ori_img(repre_cam[ss*i:ss*(i+1)], repre[ss*i:ss*(i+1)])
if i == 0:
loss_common = c_loss_common
loss_ori = c_loss_ori
else:
loss_common += c_loss_common
loss_ori += c_loss_ori
loss_common /= bs
loss_ori /= bs
# pdb.set_trace()
return loss_common, loss_ori
def get_hms(self, batch_imgs, label, ss, bs):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
# out_extra = torch.relu(self.extra_conv(out_p5))
pred = self.pred(self.gap(out_p5).squeeze(2).squeeze(2))
predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
# predict_cls = torch.argmax(pred, dim=0)
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
return hm
def show_hm_bbox(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_extra = self.features(batch_imgs)
# out_extra = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (ori_size[i,1],ori_size[i,0]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.36
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
c_gt_bbox = np.zeros([1,4], dtype=int)
# pdb.set_trace()
            c_gt_bbox[0,0] = int(gt_bbox[i,:][0]-1) #x1
            c_gt_bbox[0,1] = gt_bbox[i,:][1]-1 #y1
            c_gt_bbox[0,2] = gt_bbox[i,:][0]-1 + gt_bbox[i,:][2] #x2
            c_gt_bbox[0,3] = gt_bbox[i,:][1]-1 + gt_bbox[i,:][3] #y2
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * 0.17)
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4],dtype=int)
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
if gt_label[0][i]-1 == predict_cls[i]:
cv2.imwrite('./{}_{}_T.png'.format(index, i), superimposed_img_1)
else:
cv2.imwrite('./{}_{}_F.png'.format(index, i), superimposed_img_1)
# pdb.set_trace()
def top1_loc_imagenet(self, batch_imgs, gt_bbox, gt, ori_size):
out_extra = self.features(batch_imgs)
predict_cls = gt[0]
# pdb.set_trace()
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.01)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]:
for ii in range(batch_imgs.shape[0]):
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
# pdb.set_trace()
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,1],ori_size[ii,0]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
for p in range(gt_bbox.shape[1]):
if gt_bbox[ii, p].shape[1] != 0:
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,p][0][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,p][0][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,p][0][0]-1 + gt_bbox[ii,p][0][2]
c_gt_bbox[0,3] = gt_bbox[ii,p][0][1]-1 + gt_bbox[ii,p][0][3]
for k in range(len(counter_03)):
if counter_05[k] * counter_03[k] * counter_07[k] == 1:
continue
else:
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# for kk in range
# pdb.set_trace()
for kk in range(len(contours)):
# for kk in range(1):
# cc = max(contours, key=cv2.contourArea)
cc = contours[kk]
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
if counter_05[k] == 0:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
if counter_03[k] == 0:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
if counter_07[k] == 0:
counter_07[k] += 1
else:
break
return counter_03, counter_05, counter_07
else:
return 0
def top1_loc(self, batch_imgs, gt_bbox, gt, ori_size):
out_extra = self.features(batch_imgs)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.01)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]-1:
for ii in range(batch_imgs.shape[0]):
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,:][0]
c_gt_bbox[0,1] = gt_bbox[ii,:][1]
c_gt_bbox[0,2] = gt_bbox[ii,:][0] + gt_bbox[ii,:][2]
c_gt_bbox[0,3] = gt_bbox[ii,:][1] + gt_bbox[ii,:][3]
# iouu = np.zeros([100])
for k in range(len(counter_03)):
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
# c_hm_ = c_hm >= (torch.max(c_hm)*threshold_list[k])
# c_hm_ = c_hm_[0,0,:,:]
# c_hm_ = c_hm_.cpu().numpy()
# yy, xx = np.where(c_hm_==True)
# estimated_bbox = np.zeros([1,4])
# estimated_bbox[0,1] = np.min(yy)
# estimated_bbox[0,3] = np.max(yy)
# estimated_bbox[0,0] = np.min(xx)
# estimated_bbox[0,2] = np.max(xx)
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
counter_07[k] += 1
# c_gt_bbox = np.array([gt_bbox[i,:]-1])
# pdb.set_trace()
# if np.max(iouu) > 0.5:
# counter += 1
# if bbox_iou(c_gt_bbox,estimated_bbox)[0]>0.5:
# counter += 1
# counter = counter+1
return counter_03, counter_05, counter_07
else:
return 0
def top1_loc_auc(self, batch_imgs, gt, mask_path):
out_extra = self.features(batch_imgs)
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.001)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
        num_bins = len(threshold_list) + 2
        threshold_list_right_edge = np.append(threshold_list, [1.0, 2.0, 3.0])
        gt_true_score_hist = np.zeros(num_bins, dtype=float)
        gt_false_score_hist = np.zeros(num_bins, dtype=float)
if predict_cls[0] == gt[0][0]-1:
auc_ = 0
for ii in range(batch_imgs.shape[0]):
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(224,224), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
precision = np.zeros([threshold_list.shape[0]])
recall = np.zeros([threshold_list.shape[0]])
c_mask_path = mask_path[ii]
mask_path_ = []
for kk in range(len(c_mask_path)):
cc_path = c_mask_path[kk]
# pdb.set_trace()
if cc_path.split('_')[-1] == 'ignore.png':
ignore_path_ = cc_path
else:
mask_path_.append(cc_path)
                # `get_mask` is assumed to come from the project's evaluation
                # utilities; it is not defined or imported in this file
                c_gt_mask = get_mask(mask_path_, ignore_path_)
c_hm = c_hm[0,0].detach().cpu().numpy()
gt_true_scores = c_hm[c_gt_mask == 1]
gt_false_scores = c_hm[c_gt_mask == 0]
                gt_true_hist, _ = np.histogram(gt_true_scores, bins=threshold_list_right_edge)
                gt_true_score_hist += gt_true_hist.astype(float)
                gt_false_hist, _ = np.histogram(gt_false_scores, bins=threshold_list_right_edge)
                gt_false_score_hist += gt_false_hist.astype(float)
# pdb.set_trace()
return gt_true_score_hist, gt_false_score_hist
else:
return 0
def top1_loc_auc_2(self, gt_true_score_hist, gt_false_score_hist):
# pdb.set_trace()
num_gt_true = gt_true_score_hist.sum()
tp = gt_true_score_hist[::-1].cumsum()
fn = num_gt_true - tp
num_gt_false = gt_false_score_hist.sum()
fp = gt_false_score_hist[::-1].cumsum()
tn = num_gt_false - fp
if ((tp + fn) <= 0).all():
raise RuntimeError("No positive ground truth in the eval set.")
if ((tp + fp) <= 0).all():
raise RuntimeError("No positive prediction in the eval set.")
non_zero_indices = (tp + fp) != 0
precision = tp / (tp + fp)
recall = tp / (tp + fn)
        f1 = 2*precision*recall / (recall + precision)  # computed for inspection; not returned
        # use a .npy extension for the saved curves (np.save appends one anyway)
        np.save('./mobilenet_prec.npy', precision)
        np.save('./mobilenet_recall.npy', recall)
auc = (precision[1:] * np.diff(recall))[non_zero_indices[1:]].sum()
# auc *= 100
# print("Mask AUC on split {}: {}".format(self.split, auc))
return auc
def loss_common_part(self, repre):
# print('hi')
# repre_1 = torch.zeros([len(self.cc), 4096]).cuda()
# repre_2 = torch.zeros([len(self.cc), 4096]).cuda()
c_loss = torch.tensor([0.]).cuda()
for i in range(len(self.cc)):
# pdb.set_trace()
c_loss += self.mse(repre[int(self.cc[i,0])].unsqueeze(0), repre[int(self.cc[i,1])].unsqueeze(0))
return c_loss/len(self.cc)
# pdb.set_trace()
def loss_ori_img(self, repre, repre_ori):
# print('hi')
# pdb.set_trace()
loss = torch.mean(self.mse(repre, repre_ori))
return loss
def show_hm(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
# out_p5 = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_p5, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (ori_size[i,1],ori_size[i,0]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# pdb.set_trace
# c_hm = c_hm >= 0.23
# c_hm = c_hm >= 0.32
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
# c_hm = w_scale[i].unsqueeze(0)
# c_hm = F.interpolate(c_hm, size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# # pdb.set_trace()
# c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.2
c_hm = c_hm[0,0,:,:]
c_hm = c_hm.cpu().numpy()
            # note: with the threshold lines above commented out, c_hm is a
            # float map, so `== True` matches only pixels exactly equal to 1.0
            yy, xx = np.where(c_hm==True)
estimated_bbox = np.zeros([1,4])
c_gt_bbox = np.zeros([1,4])
estimated_bbox[0,1] = np.min(yy)
estimated_bbox[0,3] = np.max(yy)
estimated_bbox[0,0] = np.min(xx)
estimated_bbox[0,2] = np.max(xx)
# c_gt_bbox = np.array([gt_bbox[i,:]-1])
# pdb.set_trace()
# gt_bbox
c_gt_bbox[0,0] = gt_bbox[i,:][0]-1
c_gt_bbox[0,1] = gt_bbox[i,:][1]-1
c_gt_bbox[0,2] = gt_bbox[i,:][0]-1 + gt_bbox[i,:][2]
c_gt_bbox[0,3] = gt_bbox[i,:][1]-1 + gt_bbox[i,:][3]
iou = bbox_iou(c_gt_bbox,estimated_bbox)[0]
# pdb.set_trace()
if gt_label[0][i]-1 == predict_cls[i]:
cv2.imwrite('./{}_{}_{}_T.png'.format(index, i, iou), superimposed_img_1)
else:
cv2.imwrite('./{}_{}_{}_F.png'.format(index, i, iou), superimposed_img_1)
# counter += 1
# pdb.set_trace()
def show_hm_openimage(self, batch_imgs, index, gt_label):
# pdb.set_trace()
        # on this class the convolutional trunk is `self.features`;
        # `self.base` is the full network
        out_extra = self.features(batch_imgs)
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# pdb.set_trace()
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (224,224))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(224,224), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.2
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
cv2.imwrite('./{}_{}.png'.format(index, i), superimposed_img_1)
class Minmaxcam_VGG(nn.Module):
def __init__(self, base_net="vgg", set_size = 5, numclass=200):
super().__init__()
self.numclass = numclass
self.base = vgg16(pretrained=True)
self.features = self.base.features[:-1]
self.extra_conv = nn.Conv2d(512, 1024, 3, 1, 1)
self.pred = nn.Linear(1024, self.numclass)
self.gap = nn.AvgPool2d(14, stride=14)
self.set_size = set_size
self.aa = list(range(0, self.set_size))
self.bb = list(itertools.combinations(self.aa, 2))
self.cc = np.zeros([len(self.bb),2])
for i in range(len(self.bb)):
self.cc[i,0] = self.bb[i][0]
self.cc[i,1] = self.bb[i][1]
# self.cc = int(self.cc)
self.cos = nn.CosineSimilarity(dim=1, eps=1e-6)
self.mse = nn.MSELoss()
def update_classification(self, batch_imgs, label, ss, bs):
for param in self.features.parameters():
param.requires_grad = True
for param in self.extra_conv.parameters():
param.requires_grad = True
for param in self.pred.parameters():
param.requires_grad = True
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
pred = self.pred(torch.mean(out_extra, dim=(2,3)))
#################################
loss_img = F.cross_entropy(pred, torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0], reduction='mean')
# pdb.set_trace()
# loss_img = F.binary_cross_entropy(F.sigmoid(pred), 1.*label[0], reduction="mean")
return loss_img
def get_hms(self, batch_imgs, label, ss, bs):
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
# predict_cls = torch.argmax(pred, dim=0)
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
return hm
def update_pwnn(self, batch_imgs, label, ss, bs):
for param in self.features.parameters():
param.requires_grad = False
for param in self.extra_conv.parameters():
param.requires_grad = False
for param in self.pred.parameters():
param.requires_grad = True
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
predict_cls = torch.transpose(label.repeat(1,ss).view(ss,bs),1,0).reshape(1,ss*bs)[0]
# predict_cls = torch.argmax(pred, dim=0)
for i in range(ss*bs):
if i == 0:
W = self.pred.weight[predict_cls[i]].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
min_tmp = torch.min(hm, dim=2)[0]
min_tmp = torch.min(min_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
max_tmp = torch.max(hm, dim=2)[0]
max_tmp = torch.max(max_tmp, dim=2)[0].unsqueeze(2).unsqueeze(2)
hm = (hm - min_tmp)/(max_tmp - min_tmp)
################################################
# out_extra_masked = out_extra*hm
# repre_cam = self.gap(out_extra_masked).squeeze(2).squeeze(2)
# repre = self.gap(out_extra).squeeze(2).squeeze(2)
# ################################################
hm = F.interpolate(hm, size=(224,224), mode='bilinear')
out_p5_hm = self.features(batch_imgs*hm)
out_extra_masked = torch.relu(self.extra_conv(out_p5_hm))
repre_cam = self.gap(out_extra_masked).squeeze(2).squeeze(2)
repre = self.gap(out_extra).squeeze(2).squeeze(2)
################################################
# pdb.set_trace()
for i in range(bs):
c_loss_common = self.loss_common_part(repre_cam[i*ss:ss*(i+1)])
c_loss_ori = self.loss_ori_img(repre_cam[ss*i:ss*(i+1)], repre[ss*i:ss*(i+1)])
if i == 0:
loss_common = c_loss_common
loss_ori = c_loss_ori
else:
loss_common += c_loss_common
loss_ori += c_loss_ori
loss_common /= bs
loss_ori /= bs
# pdb.set_trace()
return loss_common, loss_ori
def show_hm_openimage(self, batch_imgs, index, gt_label):
# pdb.set_trace()
out_extra = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_extra))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# pdb.set_trace()
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (224,224))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(224,224), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.2
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
cv2.imwrite('./{}_{}.png'.format(index, i), superimposed_img_1)
def top1_loc(self, batch_imgs, gt_bbox, gt, ori_size):
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
# pdb.set_trace()
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.01)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]-1:
for ii in range(batch_imgs.shape[0]):
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,:][0]
c_gt_bbox[0,1] = gt_bbox[ii,:][1]
c_gt_bbox[0,2] = gt_bbox[ii,:][0] + gt_bbox[ii,:][2]
c_gt_bbox[0,3] = gt_bbox[ii,:][1] + gt_bbox[ii,:][3]
# iouu = np.zeros([100])
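                # Sweep CAM thresholds; at each one keep the largest contour of
                # the thresholded heatmap, take its bounding rect, and count a
                # hit whenever its IoU with the GT box clears 0.3 / 0.5 / 0.7.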
for k in range(len(counter_03)):
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
# c_hm_ = c_hm >= (torch.max(c_hm)*threshold_list[k])
# c_hm_ = c_hm_[0,0,:,:]
# c_hm_ = c_hm_.cpu().numpy()
# yy, xx = np.where(c_hm_==True)
# estimated_bbox = np.zeros([1,4])
# estimated_bbox[0,1] = np.min(yy)
# estimated_bbox[0,3] = np.max(yy)
# estimated_bbox[0,0] = np.min(xx)
# estimated_bbox[0,2] = np.max(xx)
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
counter_07[k] += 1
# c_gt_bbox = np.array([gt_bbox[i,:]-1])
# pdb.set_trace()
# if np.max(iouu) > 0.5:
# counter += 1
# if bbox_iou(c_gt_bbox,estimated_bbox)[0]>0.5:
# counter += 1
# counter = counter+1
return counter_03, counter_05, counter_07
else:
return 0
def top1_loc_imagenet(self, batch_imgs, gt_bbox, gt, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]
# pdb.set_trace()
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.01)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
if predict_cls[0] == gt[0][0]:
for ii in range(batch_imgs.shape[0]):
# save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
# ref_1 = cv2.imread('./temp_1.png')
# pdb.set_trace()
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(ori_size[ii,1],ori_size[ii,0]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
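                # ImageNet images may carry several GT boxes; the guards below
                # credit each threshold at most once per IoU level.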
for p in range(gt_bbox.shape[1]):
if gt_bbox[ii, p].shape[1] != 0:
c_gt_bbox = np.zeros([1,4])
c_gt_bbox[0,0] = gt_bbox[ii,p][0][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,p][0][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,p][0][0]-1 + gt_bbox[ii,p][0][2]
c_gt_bbox[0,3] = gt_bbox[ii,p][0][1]-1 + gt_bbox[ii,p][0][3]
for k in range(len(counter_03)):
if counter_05[k] * counter_03[k] * counter_07[k] == 1:
continue
else:
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * threshold_list[k])
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# for kk in range
# pdb.set_trace()
# for kk in range(len(contours)):
for kk in range(1):
cc = max(contours, key=cv2.contourArea)
# cc = contours[kk]
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4])
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.5:
if counter_05[k] == 0:
counter_05[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.3:
if counter_03[k] == 0:
counter_03[k] += 1
if bbox_iou(c_gt_bbox,estimated_bbox)[0] > 0.7:
if counter_07[k] == 0:
counter_07[k] += 1
else:
break
return counter_03, counter_05, counter_07
else:
return 0
def top1_loc_auc(self, batch_imgs, gt, mask_path):
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[predict_cls[i]].unsqueeze(0)),dim=0)
# pdb.set_trace()
# W = W/torch.sum(W)
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
threshold_list = np.arange(0,1,0.001)
# threshold_list = np.array([0.2])
counter_03 = np.zeros(len(threshold_list))
counter_05 = np.zeros(len(threshold_list))
counter_07 = np.zeros(len(threshold_list))
num_bins = len(threshold_list) + 2
threshold_list_right_edge = np.append(threshold_list,
[1.0, 2.0, 3.0])
        gt_true_score_hist = np.zeros(num_bins, dtype=np.float64)  # np.float alias was removed in NumPy 1.24
        gt_false_score_hist = np.zeros(num_bins, dtype=np.float64)
if predict_cls[0] == gt[0][0]-1:
auc_ = 0
for ii in range(batch_imgs.shape[0]):
c_hm = hm[ii].unsqueeze(0)
c_hm = F.interpolate(c_hm, size=(batch_imgs.shape[2],batch_imgs.shape[2]), mode='bilinear')
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
precision = np.zeros([threshold_list.shape[0]])
recall = np.zeros([threshold_list.shape[0]])
c_mask_path = mask_path[ii]
mask_path_ = []
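                # Split this image's mask files: a path ending in "_ignore.png"
                # is the ignore mask, everything else is a foreground mask.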
for kk in range(len(c_mask_path)):
cc_path = c_mask_path[kk]
# pdb.set_trace()
if cc_path.split('_')[-1] == 'ignore.png':
ignore_path_ = cc_path
else:
mask_path_.append(cc_path)
# pdb.set_trace()
c_gt_mask = get_mask(mask_path_, ignore_path_)
c_hm = c_hm[0,0].detach().cpu().numpy()
gt_true_scores = c_hm[c_gt_mask == 1]
gt_false_scores = c_hm[c_gt_mask == 0]
gt_true_hist, _ = np.histogram(gt_true_scores, bins=threshold_list_right_edge)
                gt_true_score_hist += gt_true_hist.astype(np.float64)
gt_false_hist, _ = np.histogram(gt_false_scores,
bins=threshold_list_right_edge)
                gt_false_score_hist += gt_false_hist.astype(np.float64)
# pdb.set_trace()
return gt_true_score_hist, gt_false_score_hist
else:
return 0
def top1_loc_auc_2(self, gt_true_score_hist, gt_false_score_hist):
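        # Scan thresholds from high to low: cumulative sums over the reversed
        # histograms give TP/FN and FP/TN per threshold, from which the
        # precision-recall curve and its area are accumulated below.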
num_gt_true = gt_true_score_hist.sum()
tp = gt_true_score_hist[::-1].cumsum()
fn = num_gt_true - tp
num_gt_false = gt_false_score_hist.sum()
fp = gt_false_score_hist[::-1].cumsum()
tn = num_gt_false - fp
if ((tp + fn) <= 0).all():
raise RuntimeError("No positive ground truth in the eval set.")
if ((tp + fp) <= 0).all():
raise RuntimeError("No positive prediction in the eval set.")
non_zero_indices = (tp + fp) != 0
precision = tp / (tp + fp)
recall = tp / (tp + fn)
        np.save('./vgg_ours_prec.npy', precision)  # .npy, not .py: np.save appends .npy to non-.npy names anyway
        np.save('./vgg_ours_recall.npy', recall)
        # pdb.set_trace()
auc = (precision[1:] * np.diff(recall))[non_zero_indices[1:]].sum()
# auc *= 100
# print("Mask AUC on split {}: {}".format(self.split, auc))
return auc
def loss_common_part(self, repre):
# print('hi')
# repre_1 = torch.zeros([len(self.cc), 4096]).cuda()
# repre_2 = torch.zeros([len(self.cc), 4096]).cuda()
c_loss = torch.tensor([0.]).cuda()
for i in range(len(self.cc)):
# pdb.set_trace()
c_loss += self.mse(repre[int(self.cc[i,0])].unsqueeze(0), repre[int(self.cc[i,1])].unsqueeze(0))
return c_loss/len(self.cc)
# pdb.set_trace()
def loss_ori_img(self, repre, repre_ori):
# print('hi')
# pdb.set_trace()
loss = torch.mean(self.mse(repre, repre_ori))
return loss
def show_hm(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# for i in range(hm.shape[0]):
# c_hm = hm[i].unsqueeze(0)
# c_hm = (c_hm - torch.min(c_hm)) /(torch.max(c_hm) - torch.min(c_hm))
# c_hm = F.interpolate(c_hm, size=(224,224), mode='bilinear')
# c_hm = c_hm.cpu().numpy()
# c_hm = c_hm[0][0]
# # pdb.set_trace()
# cv2.imwrite('test_{}.png'.format(i), 255*c_hm)
# pdb.set_trace()
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (ori_size[i,1],ori_size[i,0]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.36
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
# c_hm = w_scale[i].unsqueeze(0)
# c_hm = F.interpolate(c_hm, size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# # pdb.set_trace()
# c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
            c_hm = c_hm >= 0.36  # binarize after the overlay is drawn so np.where below gets a boolean mask
c_hm = c_hm[0,0,:,:]
c_hm = c_hm.cpu().numpy()
yy, xx = np.where(c_hm==True)
estimated_bbox = np.zeros([1,4])
c_gt_bbox = np.zeros([1,4])
estimated_bbox[0,1] = np.min(yy)
estimated_bbox[0,3] = np.max(yy)
estimated_bbox[0,0] = np.min(xx)
estimated_bbox[0,2] = np.max(xx)
# c_gt_bbox = np.array([gt_bbox[i,:]-1])
# pdb.set_trace()
# gt_bbox
c_gt_bbox[0,0] = gt_bbox[i,:][0]-1
c_gt_bbox[0,1] = gt_bbox[i,:][1]-1
c_gt_bbox[0,2] = gt_bbox[i,:][0]-1 + gt_bbox[i,:][2]
c_gt_bbox[0,3] = gt_bbox[i,:][1]-1 + gt_bbox[i,:][3]
iou = bbox_iou(c_gt_bbox,estimated_bbox)[0]
# pdb.set_trace()
if gt_label[0][i]-1 == predict_cls[i]:
cv2.imwrite('./{}_{}_{}_T.png'.format(index, i, iou), superimposed_img_1)
else:
cv2.imwrite('./{}_{}_{}_F.png'.format(index, i, iou), superimposed_img_1)
def show_hm_bbox(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
# c_hm = hm[ii].unsqueeze(0)
# c_hm = F.interpolate(c_hm, size=(ori_size[ii,0],ori_size[ii,1]), mode='bilinear')
# c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm) - torch.min(c_hm))
# # iouu = np.zeros([100])
# for k in range(len(counter_03)):
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
ref_1 = cv2.resize(ref_1, (ori_size[i,1],ori_size[i,0]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.36
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
c_gt_bbox = np.zeros([1,4], dtype=int)
# pdb.set_trace()
c_gt_bbox[0,0] = int(gt_bbox[0,:][0]-1) #x1
c_gt_bbox[0,1] = gt_bbox[0,:][1]-1 #y1
c_gt_bbox[0,2] = gt_bbox[0,:][0]-1 + gt_bbox[0,:][2] #x2
c_gt_bbox[0,3] = gt_bbox[0,:][1]-1 + gt_bbox[0,:][3] #y2
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * 0.36)
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4],dtype=int)
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
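            # Paint 3-px box outlines straight into the overlay: green
            # (B,G,R = 0,255,0) for the GT box, red (0,0,255) for the estimate.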
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
if gt_label[0][i]-1 == predict_cls[i]:
cv2.imwrite('./{}_{}_T.png'.format(index, i), superimposed_img_1)
else:
cv2.imwrite('./{}_{}_F.png'.format(index, i), superimposed_img_1)
# pdb.set_trace()
def show_hm_bbox_imagenet(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
for ii in range(batch_imgs.shape[0]):
            save_image(batch_imgs[ii], './temp_1.png', normalize=True, nrow=1, pad_value=0, padding=0)  # loop variable is ii, not i
            ref_1 = cv2.imread('./temp_1.png')
            ref_1 = cv2.resize(ref_1, (ori_size[ii,0], ori_size[ii,1]))
            c_hm = F.interpolate(hm[ii].unsqueeze(0), size=(ori_size[ii,1], ori_size[ii,0]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.36
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
for p in range(gt_bbox.shape[1]):
if gt_bbox[ii, p].shape[1] != 0:
c_gt_bbox = np.zeros([1,4], dtype=int)
c_gt_bbox[0,0] = gt_bbox[ii,p][0][0]-1
c_gt_bbox[0,1] = gt_bbox[ii,p][0][1]-1
c_gt_bbox[0,2] = gt_bbox[ii,p][0][0]-1 + gt_bbox[ii,p][0][2]
c_gt_bbox[0,3] = gt_bbox[ii,p][0][1]-1 + gt_bbox[ii,p][0][3]
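                    # Green 3-px outline for this GT box (B,G,R set to 0,255,0).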
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,0]:c_gt_bbox[0,0]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 0] = 0
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 1] = 255
superimposed_img_1[c_gt_bbox[0,1] : c_gt_bbox[0,3], c_gt_bbox[0,2]:c_gt_bbox[0,2]+3, 2] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,1]:c_gt_bbox[0,1]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 0] = 0
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 1] = 255
superimposed_img_1[c_gt_bbox[0,3]:c_gt_bbox[0,3]+3, c_gt_bbox[0,0]: c_gt_bbox[0,2], 2] = 0
cm_ = 255*c_hm.cpu().numpy()[0][0]
cm_ = cm_.astype('uint8')
threshold_value = int(np.max(cm_) * 0.19)
# threshold_value = int(np.max(cm_) * 0.28)
# threshold_value = int(np.max(cm_) * 0.27)
_, thresholded_gray_heatmap = cv2.threshold(cm_, threshold_value, 255, cv2.THRESH_TOZERO)
contours, _ = cv2.findContours(thresholded_gray_heatmap, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# for kk in range(len(contours)):
for kk in range(1):
# cc = contours[kk]
cc = max(contours, key=cv2.contourArea)
xx, yy, ww, hh = cv2.boundingRect(cc)
# xx, yy, ww, hh = cv2.boundingRect(cc)
estimated_bbox = np.zeros([1,4],dtype=int)
estimated_bbox[0,1] = yy
estimated_bbox[0,3] = yy+hh
estimated_bbox[0,0] = xx
estimated_bbox[0,2] = xx+ww
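                # Red 3-px outline for the estimated box (B,G,R set to 0,0,255).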
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,0]:estimated_bbox[0,0]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 0] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 1] = 0
superimposed_img_1[estimated_bbox[0,1] : estimated_bbox[0,3], estimated_bbox[0,2]:estimated_bbox[0,2]+3, 2] = 255
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,1]:estimated_bbox[0,1]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 0] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 1] = 0
superimposed_img_1[estimated_bbox[0,3]:estimated_bbox[0,3]+3, estimated_bbox[0,0]: estimated_bbox[0,2], 2] = 255
            cv2.imwrite('./{}_{}_T.png'.format(index, ii), superimposed_img_1)
# pdb.set_trace()
def show_hm_imagenet(self, batch_imgs, index, gt_label, gt_bbox, ori_size):
# pdb.set_trace()
out_p5 = self.features(batch_imgs)
out_extra = torch.relu(self.extra_conv(out_p5))
# pred = self.pred(self.gap(out_extra).squeeze(2).squeeze(2))
# predict_cls = torch.argmax(pred, dim=1)
predict_cls = gt_label[0]-1
for i in range(1):
if i == 0:
W = self.pred.weight[int(predict_cls[i])].unsqueeze(0)
else:
W = torch.cat((W, self.pred.weight[int(predict_cls[i])].unsqueeze(0)),dim=0)
# pdb.set_trace()
hm = torch.sum(W.unsqueeze(2).unsqueeze(2) * out_extra, dim=1).unsqueeze(1)
# w_scale = F.interpolate(hm, size=(224,224), mode='bilinear')
        # pdb.set_trace()  # left-over breakpoint disabled; it would halt every call
for i in range(batch_imgs.shape[0]):
save_image(batch_imgs[i],'./temp_1.png',normalize=True, nrow=1, pad_value=0, padding=0)
ref_1 = cv2.imread('./temp_1.png')
# pdb.set_trace()
ref_1 = cv2.resize(ref_1, (ori_size[i,0],ori_size[i,1]))
c_hm = F.interpolate(hm[i].unsqueeze(0), size=(ori_size[i,1],ori_size[i,0]), mode='bilinear')
# c_hm = w_scale[i].unsqueeze(0)
# pdb.set_trace()
c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
# c_hm = c_hm >= 0.2
heatmap = np.uint8(255 * c_hm[0][0].cpu().detach().numpy())
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
superimposed_img_1 = heatmap * 0.7 + ref_1 *0.5
# c_hm = w_scale[i].unsqueeze(0)
# c_hm = F.interpolate(c_hm, size=(ori_size[i,0],ori_size[i,1]), mode='bilinear')
# # pdb.set_trace()
# c_hm = (c_hm - torch.min(c_hm))/(torch.max(c_hm)-torch.min(c_hm))
            c_hm = c_hm >= 0.2  # binarize so np.where(c_hm==True) below yields foreground pixels
c_hm = c_hm[0,0,:,:]
c_hm = c_hm.cpu().numpy()
yy, xx = np.where(c_hm==True)
estimated_bbox = np.zeros([1,4])
c_gt_bbox = np.zeros([1,4])
estimated_bbox[0,1] = np.min(yy)
estimated_bbox[0,3] = np.max(yy)
estimated_bbox[0,0] = np.min(xx)
estimated_bbox[0,2] = np.max(xx)
cv2.imwrite('./{}_{}_T.png'.format(index, i), superimposed_img_1)
# c_gt_bbox[0,0] = gt_bbox[i,:][0]-1
# c_gt_bbox[0,1] = gt_bbox[i,:][1]-1
# c_gt_bbox[0,2] = gt_bbox[i,:][0]-1 + gt_bbox[i,:][2]
# c_gt_bbox[0,3] = gt_bbox[i,:][1]-1 + gt_bbox[i,:][3]
# iou = bbox_iou(c_gt_bbox,estimated_bbox)[0]
# # pdb.set_trace()
# if gt_label[0][i]-1 == predict_cls[i]:
# cv2.imwrite('./{}_{}_{}_T.png'.format(index, i, iou), superimposed_img_1)
# else:
# cv2.imwrite('./{}_{}_{}_F.png'.format(index, i, iou), superimposed_img_1)
# # counter += 1
# pdb.set_trace()
| 35.035485 | 137 | 0.52833 | 15,335 | 103,670 | 3.327943 | 0.020672 | 0.054669 | 0.041561 | 0.04013 | 0.96855 | 0.964102 | 0.959654 | 0.955461 | 0.950621 | 0.949054 | 0 | 0.059293 | 0.315434 | 103,670 | 2,958 | 138 | 35.047329 | 0.659809 | 0.128417 | 0 | 0.938805 | 0 | 0 | 0.013897 | 0.000236 | 0 | 0 | 0 | 0 | 0.00072 | 1 | 0.033837 | false | 0 | 0.012239 | 0 | 0.078474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4514104bf197ecc3c769deeb4e43402871079741 | 31,659 | py | Python | tests/kibana_discover_test.py | Tsukiand/elastalert2 | ee4f99942ba32278d77e7a7880964dc5fdc0123e | [
"Apache-2.0"
] | null | null | null | tests/kibana_discover_test.py | Tsukiand/elastalert2 | ee4f99942ba32278d77e7a7880964dc5fdc0123e | [
"Apache-2.0"
] | null | null | null | tests/kibana_discover_test.py | Tsukiand/elastalert2 | ee4f99942ba32278d77e7a7880964dc5fdc0123e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from datetime import timedelta
import pytest
from elastalert.kibana_discover import generate_kibana_discover_url
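# These tests check that generate_kibana_discover_url builds a Kibana Discover
# link whose percent-encoded rison state is correct: _g carries the time range
# and refresh settings, _a carries the columns, filters and index pattern.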
@pytest.mark.parametrize("kibana_version", [
'7.0',
'7.1',
'7.2',
'7.3',
'7.4',
'7.5',
'7.6',
'7.7',
'7.8',
'7.9',
'7.10',
'7.11',
'7.12',
'7.13',
'7.14',
'7.15',
'7.16',
    '8.0',
])
def test_generate_kibana_discover_url_with_kibana_7x(kibana_version):
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': kibana_version,
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_relative_kibana_discover_app_url():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'app/discover#/',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': '620ad0e6-43df-4557-bda2-384960fa9086',
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2021-10-08T00:30:00Z'
}
)
expectedUrl = (
'app/discover#/'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272021-10-08T00%3A20%3A00Z%27%2C'
+ 'to%3A%272021-10-08T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3A%27620ad0e6-43df-4557-bda2-384960fa9086%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_missing_kibana_discover_version():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_index_pattern_id': 'logs',
'timestamp_field': 'timestamp',
'name': 'test'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
assert url is None
def test_generate_kibana_discover_url_with_missing_kibana_discover_app_url():
url = generate_kibana_discover_url(
rule={
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs',
'timestamp_field': 'timestamp',
'name': 'test'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
assert url is None
def test_generate_kibana_discover_url_with_missing_kibana_discover_index_pattern_id():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'timestamp_field': 'timestamp',
'name': 'test'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
assert url is None
def test_generate_kibana_discover_url_with_invalid_kibana_version():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '4.5',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
assert url is None
def test_generate_kibana_discover_url_with_kibana_discover_app_url_env_substitution(environ):
environ.update({
'KIBANA_HOST': 'kibana',
'KIBANA_PORT': '5601',
})
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://$KIBANA_HOST:$KIBANA_PORT/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_from_timedelta():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'kibana_discover_from_timedelta': timedelta(hours=1),
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T04:00:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T03%3A00%3A00Z%27%2C'
+ 'to%3A%272019-09-01T04%3A10%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_from_timedelta_and_timeframe():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'kibana_discover_from_timedelta': timedelta(hours=1),
'timeframe': timedelta(minutes=20),
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T04:00:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T03%3A00%3A00Z%27%2C'
+ 'to%3A%272019-09-01T04%3A20%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_to_timedelta():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'kibana_discover_to_timedelta': timedelta(hours=1),
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T04:00:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T03%3A50%3A00Z%27%2C'
+ 'to%3A%272019-09-01T05%3A00%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_to_timedelta_and_timeframe():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'kibana_discover_to_timedelta': timedelta(hours=1),
'timeframe': timedelta(minutes=20),
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T04:00:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T03%3A40%3A00Z%27%2C'
+ 'to%3A%272019-09-01T05%3A00%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_timeframe():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'd6cabfb6-aaef-44ea-89c5-600e9a76991a',
'timeframe': timedelta(minutes=20),
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T04:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T04%3A10%3A00Z%27%2C'
+ 'to%3A%272019-09-01T04%3A50%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3Ad6cabfb6-aaef-44ea-89c5-600e9a76991a%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_custom_columns():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'kibana_discover_columns': ['level', 'message'],
'timestamp_field': 'timestamp'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28level%2Cmessage%29%2C'
+ 'filters%3A%21%28%29%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_single_filter():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'filter': [
{'term': {'level': 30}}
]
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'bool%3A%28must%3A%21%28%28term%3A%28level%3A30%29%29%29%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3Afilter%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Abool%2C'
+ 'negate%3A%21f%2C'
+ 'type%3Acustom%2C'
+ 'value%3A%27%7B%22must%22%3A%5B%7B%22term%22%3A%7B%22level%22%3A30%7D%7D%5D%7D%27'
+ '%29' # meta end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_multiple_filters():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': '90943e30-9a47-11e8-b64d-95841ca0b247',
'timestamp_field': 'timestamp',
'filter': [
{'term': {'app': 'test'}},
{'term': {'level': 30}}
]
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'bool%3A%28must%3A%21%28%28term%3A%28app%3Atest%29%29%2C%28term%3A%28level%3A30%29%29%29%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3Afilter%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%2790943e30-9a47-11e8-b64d-95841ca0b247%27%2C'
+ 'key%3Abool%2C'
+ 'negate%3A%21f%2C'
+ 'type%3Acustom%2C'
+ 'value%3A%27%7B%22must%22%3A%5B' # value start
+ '%7B%22term%22%3A%7B%22app%22%3A%22test%22%7D%7D%2C%7B%22term%22%3A%7B%22level%22%3A30%7D%7D'
+ '%5D%7D%27' # value end
+ '%29' # meta end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%2790943e30-9a47-11e8-b64d-95841ca0b247%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_int_query_key():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'query_key': 'geo.dest'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'geo.dest': 200
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Ageo.dest%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3A200%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3A%27200%27'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
        + 'geo.dest%3A%28' # geo.dest start
+ 'query%3A200%2C'
+ 'type%3Aphrase'
+ '%29' # geo.dest end
+ '%29' # match end
+ '%29' # query end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_str_query_key():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'query_key': 'geo.dest'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'geo': {
'dest': 'ok'
}
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Ageo.dest%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3Aok%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3Aok'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
+ 'geo.dest%3A%28' # geo.dest start
+ 'query%3Aok%2C'
+ 'type%3Aphrase'
+ '%29' # geo.dest end
+ '%29' # match end
+ '%29' # query end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_null_query_key_value():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'query_key': 'status'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'status': None
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'exists%3A%28field%3Astatus%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Astatus%2C'
+ 'negate%3A%21t%2C'
+ 'type%3Aexists%2C'
+ 'value%3Aexists'
+ '%29' # meta end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_missing_query_key_value():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'query_key': 'status'
},
match={
'timestamp': '2019-09-01T00:30:00Z'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'exists%3A%28field%3Astatus%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Astatus%2C'
+ 'negate%3A%21t%2C'
+ 'type%3Aexists%2C'
+ 'value%3Aexists'
+ '%29' # meta end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_compound_query_key():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'compound_query_key': ['geo.src', 'geo.dest'],
'query_key': 'geo.src,geo.dest'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'geo': {
'src': 'CA',
'dest': 'US'
}
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # geo.src filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Ageo.src%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3ACA%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3ACA'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
        + 'geo.src%3A%28' # geo.src start
+ 'query%3ACA%2C'
+ 'type%3Aphrase'
+ '%29' # geo.src end
+ '%29' # match end
+ '%29' # query end
+ '%29%2C' # geo.src filter end
+ '%28' # geo.dest filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Ageo.dest%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3AUS%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3AUS'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
+ 'geo.dest%3A%28' # geo.dest start
+ 'query%3AUS%2C'
+ 'type%3Aphrase'
+ '%29' # geo.dest end
+ '%29' # match end
+ '%29' # query end
+ '%29' # geo.dest filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_filter_and_query_key():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'filter': [
{'term': {'level': 30}}
],
'query_key': 'status'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'status': 'ok'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'bool%3A%28must%3A%21%28%28term%3A%28level%3A30%29%29%29%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3Afilter%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Abool%2C'
+ 'negate%3A%21f%2C'
+ 'type%3Acustom%2C'
+ 'value%3A%27%7B%22must%22%3A%5B%7B%22term%22%3A%7B%22level%22%3A30%7D%7D%5D%7D%27'
+ '%29' # meta end
+ '%29%2C' # filter end
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Astatus%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3Aok%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3Aok'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
+ 'status%3A%28' # status start
+ 'query%3Aok%2C'
+ 'type%3Aphrase'
+ '%29' # status end
+ '%29' # match end
+ '%29' # query end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
def test_generate_kibana_discover_url_with_querystring_filter_and_query_key():
url = generate_kibana_discover_url(
rule={
'kibana_discover_app_url': 'http://kibana:5601/#/discover',
'kibana_discover_version': '8.0',
'kibana_discover_index_pattern_id': 'logs-*',
'timestamp_field': 'timestamp',
'filter': [
{'query': {'query_string': {'query': 'hello world'}}}
],
'query_key': 'status'
},
match={
'timestamp': '2019-09-01T00:30:00Z',
'status': 'ok'
}
)
expectedUrl = (
'http://kibana:5601/#/discover'
+ '?_g=%28' # global start
+ 'filters%3A%21%28%29%2C'
+ 'refreshInterval%3A%28pause%3A%21t%2Cvalue%3A0%29%2C'
+ 'time%3A%28' # time start
+ 'from%3A%272019-09-01T00%3A20%3A00Z%27%2C'
+ 'to%3A%272019-09-01T00%3A40%3A00Z%27'
+ '%29' # time end
+ '%29' # global end
+ '&_a=%28' # app start
+ 'columns%3A%21%28_source%29%2C'
+ 'filters%3A%21%28' # filters start
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'bool%3A%28must%3A%21%28%28query_string%3A%28query%3A%27hello%20world%27%29%29%29%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3Afilter%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Abool%2C'
+ 'negate%3A%21f%2C'
+ 'type%3Acustom%2C'
+ 'value%3A%27%7B%22must%22%3A%5B%7B%22query_string%22%3A%7B%22query%22%3A%22hello%20world%22%7D%7D%5D%7D%27'
+ '%29' # meta end
+ '%29%2C' # filter end
+ '%28' # filter start
+ '%27%24state%27%3A%28store%3AappState%29%2C'
+ 'meta%3A%28' # meta start
+ 'alias%3A%21n%2C'
+ 'disabled%3A%21f%2C'
+ 'index%3A%27logs-%2A%27%2C'
+ 'key%3Astatus%2C'
+ 'negate%3A%21f%2C'
+ 'params%3A%28query%3Aok%2C' # params start
+ 'type%3Aphrase'
+ '%29%2C' # params end
+ 'type%3Aphrase%2C'
+ 'value%3Aok'
+ '%29%2C' # meta end
+ 'query%3A%28' # query start
+ 'match%3A%28' # match start
+ 'status%3A%28' # status start
+ 'query%3Aok%2C'
+ 'type%3Aphrase'
+ '%29' # status end
+ '%29' # match end
+ '%29' # query end
+ '%29' # filter end
+ '%29%2C' # filters end
+ 'index%3A%27logs-%2A%27%2C'
+ 'interval%3Aauto'
+ '%29' # app end
)
assert url == expectedUrl
| 33.150785 | 117 | 0.529676 | 3,809 | 31,659 | 4.257285 | 0.053295 | 0.102738 | 0.02109 | 0.069376 | 0.944376 | 0.934817 | 0.931611 | 0.930562 | 0.922607 | 0.916441 | 0 | 0.166629 | 0.306011 | 31,659 | 954 | 118 | 33.185535 | 0.571435 | 0.079598 | 0 | 0.794814 | 1 | 0.009019 | 0.473593 | 0.272784 | 0 | 0 | 0 | 0 | 0.024803 | 1 | 0.024803 | false | 0 | 0.003382 | 0 | 0.028185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
188f36eaf2c18efb668385de78c7a5d7461c4cac | 212 | py | Python | bloggingapp/views/__init__.py | mr-shubhamsinghal/blog | 1dc24e0d52ce7432f10faad5a2823190d3f924d8 | [
"MIT"
] | null | null | null | bloggingapp/views/__init__.py | mr-shubhamsinghal/blog | 1dc24e0d52ce7432f10faad5a2823190d3f924d8 | [
"MIT"
] | null | null | null | bloggingapp/views/__init__.py | mr-shubhamsinghal/blog | 1dc24e0d52ce7432f10faad5a2823190d3f924d8 | [
"MIT"
] | null | null | null | from bloggingapp.views.fn_based_views import *
from bloggingapp.views.class_based_view_using_apiviews import *
from bloggingapp.views.generic_api_views import *
from bloggingapp.views.viewsets_api_views import *
| 42.4 | 63 | 0.867925 | 30 | 212 | 5.8 | 0.433333 | 0.344828 | 0.45977 | 0.448276 | 0.356322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 212 | 4 | 64 | 53 | 0.887755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e1447cf1fa08850668c8a24914e74f6617c6df4a | 29,743 | py | Python | pymatflow/cp2k/base/motion_geo_opt.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 6 | 2020-03-06T16:13:08.000Z | 2022-03-09T07:53:34.000Z | pymatflow/cp2k/base/motion_geo_opt.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-10-02T02:23:08.000Z | 2021-11-08T13:29:37.000Z | pymatflow/cp2k/base/motion_geo_opt.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-07-10T16:28:14.000Z | 2021-07-10T16:28:14.000Z | #!/usr/bin/env python
# _*_ coding: utf-8 _*_
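# Maps the CP2K MOTION / GEO_OPT input section onto nested Python classes: each
# class mirrors one input subsection, writes itself out via to_input(), and
# routes hyphen-joined parameter keys to the right nesting depth in set_params().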
class cp2k_motion_geo_opt_bfgs_restart_each:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&EACH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t&END EACH\n")
def set_params(self, params):
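        # Keys are hyphen-joined section paths such as
        # "GEO_OPT-BFGS-RESTART-EACH-<PARAM>"; a key with 5 segments belongs
        # directly to this EACH subsection and its last segment is the name.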
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_bfgs_restart:
def __init__(self):
self.params = {
}
self.status = False
self.each = cp2k_motion_geo_opt_bfgs_restart_each()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t&RESTART\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.each.status == True:
self.each.to_input(fout)
fout.write("\t\t\t&END RESTART\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 4:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[3] == "EACH":
self.each.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_bfgs:
def __init__(self):
self.params = {
}
self.status = False
self.restart = cp2k_motion_geo_opt_bfgs_restart()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t&BFGS\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.restart.status == True:
self.restart.to_input(fout)
fout.write("\t\t&END BFGS\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 3:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[2] == "RESTART":
self.restart.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_cg_line_search_2pnt:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&2PNT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t&END 2PNT\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_cg_line_search_gold:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&GOLD\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t&END GOLD\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_cg_line_search:
def __init__(self):
self.params = {
}
self.status = False
self._2pnt = cp2k_motion_geo_opt_cg_line_search_2pnt()
self.gold = cp2k_motion_geo_opt_cg_line_search_gold()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t&LINE_SEARCH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self._2pnt.status == True:
self._2pnt.to_input(fout)
if self.gold.status == True:
self.gold.to_input(fout)
fout.write("\t\t\t&END LINE_SEARCH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 4:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[3] == "2PNT":
                self._2pnt.set_params({item: params[item]})  # fixed typo: was "slef"
elif item.split("-")[3] == "GOLD":
self.gold.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_cg:
def __init__(self):
self.params = {
}
self.status = False
        self.line_search = cp2k_motion_geo_opt_cg_line_search()  # renamed from "line_serach": to_input/set_params use self.line_search
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t&CG\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.line_search.status == True:
self.line_search.to_input(fout)
fout.write("\t\t&END CG\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 3:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[2] == "LINE_SEARCH":
self.line_search.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_lbfgs:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t&LBFGS\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t&END LBFGS\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 3:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_print_program_run_info_each:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&EACH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t&END EACH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_print_program_run_info:
def __init__(self):
self.params = {
}
self.status = False
self.each = cp2k_motion_geo_opt_print_program_run_info_each()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t&PROGRAM_RUN_INFO\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.each.status == True:
self.each.to_input(fout)
fout.write("\t\t\t&END PROGRAM_RUN_INFO\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 4:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[3] == "EACH":
self.each.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_print:
def __init__(self):
self.params = {
}
self.status = False
self.program_run_info = cp2k_motion_geo_opt_print_program_run_info()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t&PRINT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.program_run_info.status == True:
self.program_run_info.to_input(fout)
fout.write("\t\t&END PRINT\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 3:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[2] == "PROGRAM_RUN_INFO":
self.program_run_info.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_dimer_vector:
def __init__(self):
self.params = {
}
self.status = False
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&DIMER_VECTOR\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t&END DIMER_VECTOR\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs_restart_each:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t\t&EACH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t\t&END EACH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 8:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs_restart:
def __init__(self):
self.params = {
}
self.status = False
self.each = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs_restart_each()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t&RESTART\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.each.status == True:
self.each.to_input(fout)
fout.write("\t\t\t\t\t\t&END RESTART\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 7:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[6] == "EACH":
self.each.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs:
def __init__(self):
self.params = {
}
self.status = False
self.restart = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs_restart()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t&BFGS\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.restart.status == True:
self.restart.to_input(fout)
fout.write("\t\t\t\t\t&END BFGS\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 6:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[5] == "RESTART":
self.restart.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search_2pnt:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t\t&2PNT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t\t&END 2PNT\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 8:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search_gold:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t\t&GOLD\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t\t&END GOLD\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 8:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search:
def __init__(self):
self.params = {
}
self.status = False
self._2pnt = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search_2pnt()
self.gold = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search_gold()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t&LINE_SEARCH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self._2pnt.status == True:
self._2pnt.to_input(fout)
if self.gold.status == True:
self.gold.to_input(fout)
fout.write("\t\t\t\t\t\t&END LINE_SEARCH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 7:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[6] == "2PNT":
                self._2pnt.set_params({item: params[item]})
elif item.split("-")[6] == "GOLD":
self.gold.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg:
def __init__(self):
self.params = {
}
self.status = False
        self.line_search = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg_line_search()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t&CG\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.line_search.status == True:
self.line_search.to_input(fout)
fout.write("\t\t\t\t\t&END CG\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 6:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[5] == "LINE_SEARCH":
self.line_search.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_lbfgs:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t&LBFGS\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t&END LBFGS\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 6:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_program_run_info_each:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t\t&EACH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t\t&END EACH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 8:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_program_run_info:
def __init__(self):
self.params = {
}
self.status = False
self.each = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_program_run_info_each()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t&PROGRAM_RUN_INFO\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t&END PROGRAM_RUN_INFO\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 7:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_rotational_info_each:
def __init__(self):
self.params = {
}
self.status = False
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t\t&EACH\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
fout.write("\t\t\t\t\t\t\t&END EACH\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 8:
self.params[item.split("-")[-1]] = params[item]
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_rotational_info:
def __init__(self):
self.params = {
}
self.status = False
self.each = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_rotational_info_each()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t\t&ROTATIONAL_INFO\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.each.status == True:
self.each.to_input(fout)
fout.write("\t\t\t\t\t\t&ENDROTATIONAL_INFO\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 7:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[6] == "EACH":
self.each.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print:
def __init__(self):
self.params = {
}
self.status = False
self.program_run_info = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_program_run_info()
self.rotational_info = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print_rotational_info()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t\t&PRINT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.program_run_info.status == True:
self.program_run_info.to_input(fout)
if self.rotational_info.status == True:
self.rotational_info.to_input(fout)
fout.write("\t\t\t\t\t&END PRINT\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 6:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[5] == "PROGRAM_RUN_INFO":
self.program_run_info.set_params({item: params[item]})
elif item.split("-")[5] == "ROTATIONAL_INFO":
self.rotational_info.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer_rot_opt:
def __init__(self):
self.params = {
}
self.status = False
        self.bfgs = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_bfgs()
        self.cg = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_cg()
        self.lbfgs = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_lbfgs()
        self.printout = cp2k_motion_geo_opt_transition_state_dimer_rot_opt_print()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t\t&ROT_OPT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.bfgs.status == True:
self.bfgs.to_input(fout)
if self.cg.status == True:
self.cg.to_input(fout)
if self.lbfgs.status == True:
self.lbfgs.to_input(fout)
if self.printout.status == True:
self.printout.to_input(fout)
fout.write("\t\t\t\t&END ROT_OPT\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 5:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[4] == "BFGS":
self.bfgs.set_params({item: params[item]})
elif item.split("-")[4] == "CG":
self.cg.set_params({item: params[item]})
elif item.split("-")[4] == "LBFGS":
self.lbfgs.set_params({item: params[item]})
elif item.split("-")[4] == "PRINT":
self.printout.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state_dimer:
def __init__(self):
self.params = {
}
self.status = False
self.dimer_vector = cp2k_motion_geo_opt_transition_state_dimer_dimer_vector()
self.rot_opt = cp2k_motion_geo_opt_transition_state_dimer_rot_opt()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t\t&DIMER\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.dimer_vector.status == True:
self.dimer_vector.to_input(fout)
if self.rot_opt.status == True:
self.rot_opt.to_input(fout)
fout.write("\t\t\t&END DIMER\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 4:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[3] == "DIMER_VECTOR":
self.dimer_vector.set_params({item: params[item]})
elif item.split("-")[3] == "ROT_OPT":
self.rot_opt.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt_transition_state:
def __init__(self):
self.params = {
}
self.status = False
self.dimer = cp2k_motion_geo_opt_transition_state_dimer()
# basic setting
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t\t&TRANSITION_STATE\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t\t%s %s\n" % (item, str(self.params[item])))
if self.dimer.status == True:
self.dimer.to_input(fout)
fout.write("\t\t&END TRANSITION_STATE\n")
def set_params(self, params):
for item in params:
if len(item.split("-")) == 3:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[2] == "DIMER":
self.dimer.set_params({item: params[item]})
else:
pass
class cp2k_motion_geo_opt:
def __init__(self):
self.params = {
"MAX_DR": None,
"MAX_FORCE": None,
"MAX_ITER": None,
"RMS_DR": None,
"RMS_FORCE": None,
"OPTIMIZER": None, # BFGS(default), CG, LBFGS
"STEP_START_VAL": None,
"TYPE": None, # MINIMIZATION(default), TRANSITION_STATE
}
self.status = False
self.bfgs = cp2k_motion_geo_opt_bfgs()
self.cg = cp2k_motion_geo_opt_cg()
self.lbfgs = cp2k_motion_geo_opt_lbfgs()
self.printout = cp2k_motion_geo_opt_print()
self.transition_state = cp2k_motion_geo_opt_transition_state()
# basic setting
self.params["MAX_DR"] = 3.0e-3
self.params["MAX_FORCE"] = 4.5e-4
self.params["MAX_ITER"] = 200
self.params["OPTIMIZER"] = "BFGS"
self.params["RMS_DR"] = 1.5e-3
self.params["RMS_FORCE"] = 3.0e-4
self.params["TYPE"] = "MINIMIZATION"
def to_input(self, fout):
"""
fout: a file stream for writing
"""
fout.write("\t&GEO_OPT\n")
for item in self.params:
if self.params[item] is not None:
fout.write("\t\t%s %s\n" % (item, str(self.params[item])))
if self.bfgs.status == True:
self.bfgs.to_input(fout)
if self.cg.status == True:
self.cg.to_input(fout)
if self.lbfgs.status == True:
self.lbfgs.to_input(fout)
if self.printout.status == True:
self.printout.to_input(fout)
if self.transition_state.status == True:
self.transition_state.to_input(fout)
fout.write("\t&END GEO_OPT\n")
def set_params(self, params):
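        # keys are hyphen-separated section paths: a length-2 key (e.g.
        # "GEO_OPT-MAX_ITER") is set directly here, while anything longer is
        # dispatched to the subsection named by the second path segment
        # (BFGS, CG, LBFGS, PRINT, TRANSITION_STATE)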
for item in params:
if len(item.split("-")) == 2:
self.params[item.split("-")[-1]] = params[item]
elif item.split("-")[1] == "BFGS":
self.bfgs.set_params({item: params[item]})
elif item.split("-")[1] == "CG":
self.cg.set_params({item: params[item]})
elif item.split("-")[1] == "LBFGS":
self.lbfgs.set_params({item: params[item]})
elif item.split("-")[1] == "PRINT":
self.printout.set_params({item: params[item]})
elif item.split("-")[1] == "TRANSITION_STATE":
self.transition_state.set_params({item: params[item]})
else:
pass
| 33.084538 | 108 | 0.513062 | 3,887 | 29,743 | 3.72884 | 0.023669 | 0.044156 | 0.048641 | 0.044432 | 0.94853 | 0.94584 | 0.940527 | 0.934456 | 0.925072 | 0.908445 | 0 | 0.009096 | 0.34946 | 29,743 | 898 | 109 | 33.121381 | 0.739987 | 0.043271 | 0 | 0.718944 | 0 | 0.007764 | 0.07838 | 0.011835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135093 | false | 0.045031 | 0 | 0 | 0.180124 | 0.034161 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e166538dafa2b7c04ca05b29c1ec3bb32df204af | 119 | py | Python | asserts/asserts.py | informramiz/data-structures-and-algorithms | 7038c8becc4cbad82867c9c8bca42637ca27c8d7 | [
"Apache-2.0"
] | null | null | null | asserts/asserts.py | informramiz/data-structures-and-algorithms | 7038c8becc4cbad82867c9c8bca42637ca27c8d7 | [
"Apache-2.0"
] | null | null | null | asserts/asserts.py | informramiz/data-structures-and-algorithms | 7038c8becc4cbad82867c9c8bca42637ca27c8d7 | [
"Apache-2.0"
] | 1 | 2020-09-24T22:54:52.000Z | 2020-09-24T22:54:52.000Z | def assert_(expected, actual):
assert expected == actual, f"expected={expected}, actual={actual}"
print("Pass") | 39.666667 | 70 | 0.689076 | 14 | 119 | 5.785714 | 0.5 | 0.518519 | 0.493827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 119 | 3 | 71 | 39.666667 | 0.794118 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
e17c3f4ca3983d1f4956bfb7651e4188c38ad7cd | 983 | py | Python | tests/data/comments_non_breaking_space.py | BigNuoLi/black | 71e71e5f52e5f6bdeae63cc8c11b1bee44d11c30 | [
"MIT"
] | 16,110 | 2019-07-22T21:54:54.000Z | 2022-03-31T22:52:39.000Z | tests/data/comments_non_breaking_space.py | marnixah/black-but-usable | 83b83d3066d1d857983bfa1a666a409e7255d79d | [
"MIT"
] | 1,981 | 2019-07-22T21:26:16.000Z | 2022-03-31T23:14:35.000Z | tests/data/comments_non_breaking_space.py | marnixah/black-but-usable | 83b83d3066d1d857983bfa1a666a409e7255d79d | [
"MIT"
] | 1,762 | 2019-07-22T21:23:00.000Z | 2022-03-31T06:10:22.000Z | from .config import ( ConfigTypeAttributes, Int, Path, # String,
# DEFAULT_TYPE_ATTRIBUTES,
)
result = 1 # A simple comment
result = ( 1, ) # Another one
result = 1 # type: ignore
result = 1# This comment is talking about type: ignore
square = Square(4) # type: Optional[Square]
def function(a:int=42):
""" This docstring is already formatted
a
b
"""
# There's a NBSP + 3 spaces before
# And 4 spaces on the next line
pass
# output
from .config import (
ConfigTypeAttributes,
Int,
Path, # String,
# DEFAULT_TYPE_ATTRIBUTES,
)
result = 1 # A simple comment
result = (1,) # Another one
result = 1 # type: ignore
result = 1 # This comment is talking about type: ignore
square = Square(4) # type: Optional[Square]
def function(a: int = 42):
"""This docstring is already formatted
a
b
"""
# There's a NBSP + 3 spaces before
# And 4 spaces on the next line
pass
| 21.844444 | 74 | 0.618515 | 131 | 983 | 4.610687 | 0.343511 | 0.092715 | 0.05298 | 0.119205 | 0.990066 | 0.990066 | 0.990066 | 0.990066 | 0.990066 | 0.990066 | 0 | 0.025751 | 0.288912 | 983 | 44 | 75 | 22.340909 | 0.83834 | 0.517803 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.095238 | 0.095238 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
e19bae32b30ffa7bcc2fd966867f172bc20b342a | 10,034 | py | Python | app/core/models.py | fxavier/genesissys | 5187addc9fb69c8112551552b58aa745add46bdd | [
"MIT"
] | null | null | null | app/core/models.py | fxavier/genesissys | 5187addc9fb69c8112551552b58aa745add46bdd | [
"MIT"
] | null | null | null | app/core/models.py | fxavier/genesissys | 5187addc9fb69c8112551552b58aa745add46bdd | [
"MIT"
] | null | null | null | from django.db import models
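# Household survey schema: one FamiliaBeneficiaria record per interviewed
# family, with the per-questionnaire-section tables below linked back to it
# via OneToOneField / ForeignKey relations.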
class FamiliaBeneficiaria(models.Model):
uuid = models.CharField(max_length=255, primary_key=True)
codigo_familia = models.CharField(max_length=100)
data_inquerito = models.DateField()
nome_inquiridor = models.CharField(max_length=255)
numero_questionario = models.IntegerField()
local_entrevista = models.CharField(max_length=100)
gps_local_lat_long = models.CharField(max_length=255)
gps_local_accuracy = models.DecimalField(max_digits=10, decimal_places=2)
tipo_beneficiario = models.CharField(max_length=100)
tipo_familia = models.CharField(max_length=100)
nome_agg_familiar = models.CharField(max_length=255)
tipo_documento = models.CharField(max_length=100)
documento = models.CharField(max_length=100)
photo_doc_url = models.CharField(max_length=255, null=True, blank=True)
data_nascimento = models.DateField()
genero = models.CharField(max_length=100)
outro_genero = models.CharField(max_length=100)
contacto = models.CharField(max_length=100)
parte_bd = models.CharField(max_length=20)
criterios_elegib_agg_familiar = models.CharField(max_length=100)
provincia = models.CharField(max_length=100)
distrito = models.CharField(max_length=100)
posto_administrativo = models.CharField(max_length=100)
localidade = models.CharField(max_length=100)
comunidade = models.CharField(max_length=100)
ficha = models.CharField(max_length=100)
class AlocacaoTerra(models.Model):
familia_beneficiaria = models.OneToOneField(
FamiliaBeneficiaria, on_delete=models.CASCADE, primary_key=True)
familia_tem_machamba = models.CharField(
max_length=100, null=True, blank=True)
machamba_familia = models.CharField(max_length=100, null=True, blank=True)
tipo_posse = models.CharField(max_length=100, null=True, blank=True)
outro_tipo_posse = models.CharField(max_length=100, null=True, blank=True)
forma_aquisicao = models.CharField(max_length=100, null=True, blank=True)
outra_forma_aquisicao = models.CharField(
max_length=100, null=True, blank=True)
quando_conseguiu_machamba = models.CharField(
max_length=100, null=True, blank=True)
outra_data = models.CharField(max_length=100, null=True, blank=True)
tamanho_machamba = models.CharField(max_length=100, null=True, blank=True)
local_machamba = models.CharField(max_length=100, null=True, blank=True)
outro_local_machamba = models.CharField(
max_length=100, null=True, blank=True)
caracteristica_solos = models.CharField(
max_length=100, null=True, blank=True)
outra_caracteristica_solos = models.CharField(
max_length=100, null=True, blank=True)
cor_solo = models.CharField(max_length=100, null=True, blank=True)
historico_uso_solo = models.CharField(
max_length=100, null=True, blank=True)
outro_historico_uso_solo = models.CharField(
max_length=100, null=True, blank=True)
tempo_gasto_casa_machamba = models.CharField(
max_length=100, null=True, blank=True)
outro_tempo_gasto = models.CharField(max_length=100, null=True, blank=True)
def __str__(self):
return f"{self.familia_beneficiaria.nome_agg_familiar} {id}"
class Sementeira(models.Model):
familia_beneficiaria = models.OneToOneField(
FamiliaBeneficiaria, on_delete=models.CASCADE, primary_key=True)
recebeu_semente = models.CharField(max_length=100, null=True, blank=True)
quando_recebeu = models.CharField(max_length=100, null=True, blank=True)
outra_data_recebeu = models.CharField(
max_length=100, null=True, blank=True)
identificacao_lote = models.CharField(
max_length=100, null=True, blank=True)
tipo_kit = models.CharField(max_length=100, null=True, blank=True)
composicao_kit_a = models.CharField(max_length=100, null=True, blank=True)
comentario_kit_a = models.CharField(max_length=100, null=True, blank=True)
composicao_kit_b = models.CharField(max_length=100, null=True, blank=True)
comentario_kit_b = models.CharField(max_length=100, null=True, blank=True)
composicao_kit_c = models.CharField(max_length=100, null=True, blank=True)
comentario_kit_c = models.CharField(max_length=100, null=True, blank=True)
composicao_kit_d = models.CharField(max_length=100, null=True, blank=True)
comentario_kit_d = models.CharField(max_length=100, null=True, blank=True)
conservacao_semente = models.CharField(
max_length=100, null=True, blank=True)
foto_semente_url = models.CharField(max_length=255, null=True, blank=True)
de_quem_recebeu_semente = models.CharField(
max_length=100, null=True, blank=True)
outro_de_quem_recebeu_semente = models.CharField(
max_length=100, null=True, blank=True)
quem_escolheu_kit = models.CharField(max_length=100, null=True, blank=True)
outro_quem_escolheu_kit = models.CharField(
max_length=100, null=True, blank=True)
quando_realizou_sementeira = models.CharField(
max_length=100, null=True, blank=True)
familia_necess_nao_recebeu = models.CharField(
max_length=100, null=True, blank=True)
nome_familia = models.CharField(max_length=100, null=True, blank=True)
sementes_germinou = models.CharField(max_length=100, null=True, blank=True)
foto_sementes_germinou_url = models.CharField(
max_length=255, null=True, blank=True)
semente_nao_germinou = models.CharField(
max_length=100, null=True, blank=True)
usou_fertilizante = models.CharField(max_length=100, null=True, blank=True)
tipo_fertilizante = models.CharField(max_length=100, null=True, blank=True)
outro_tipo_fertilizante = models.CharField(
max_length=100, null=True, blank=True)
momento_usou_adubo = models.CharField(
max_length=100, null=True, blank=True)
outro_momento_usou_adubo = models.CharField(
max_length=100, null=True, blank=True)
adubo_usado = models.CharField(max_length=100, null=True, blank=True)
def __str__(self):
return self.familia_beneficiaria.nome_agg_familiar
class TipoSementeGerminou(models.Model):
uuid = models.CharField(max_length=255, primary_key=True)
nome_semente = models.CharField(max_length=100, null=True, blank=True)
familia_beneficiaria = models.ForeignKey(
FamiliaBeneficiaria, on_delete=models.CASCADE)
def __str__(self):
return self.nome_semente
class TipoAreaGerminacao(models.Model):
uuid = models.CharField(max_length=255, primary_key=True)
nome_semente = models.CharField(max_length=100, null=True, blank=True)
area = models.CharField(max_length=100, null=True, blank=True)
familia_beneficiaria = models.ForeignKey(
FamiliaBeneficiaria, on_delete=models.CASCADE)
def __str__(self):
return self.nome_semente
class Treinamento(models.Model):
recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
lugar_treinamento = models.CharField(max_length=100, null=True, blank=True)
outro_lugar_treinamento = models.CharField(
max_length=100, null=True, blank=True)
de_quem_recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
outro_de_quem_recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
quando_recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
outro_quando_recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
tipo_treinamento = models.CharField(max_length=100, null=True, blank=True)
recebeu_visita_assistencia = models.CharField(
max_length=100, null=True, blank=True)
de_quem_recebeu_visita_assistencia = models.CharField(
max_length=100, null=True, blank=True)
outro_de_quem_recebeu_visita_assistencia = models.CharField(
max_length=100, null=True, blank=True)
momento_recebeu_visita = models.CharField(
max_length=100, null=True, blank=True)
familia_nao_recebeu_treinamento = models.CharField(
max_length=100, null=True, blank=True)
nome_familia_nao_recebeu = models.CharField(
max_length=100, null=True, blank=True)
familia_beneficiaria = models.ForeignKey(
FamiliaBeneficiaria, on_delete=models.CASCADE)
def __str__(self):
return self.familia_beneficiaria.nome_agg_familiar
class Reclamacao(models.Model):
canais_apresentar_reclamacao = models.CharField(
max_length=100, null=True, blank=True)
apresentou_reclamacao = models.CharField(
max_length=100, null=True, blank=True)
canal_que_usou = models.CharField(max_length=100, null=True, blank=True)
outro_canal = models.CharField(max_length=100, null=True, blank=True)
tempo_gasto_resolver = models.CharField(
max_length=100, null=True, blank=True)
ficou_satisfeito = models.CharField(max_length=100, null=True, blank=True)
familia_beneficiaria = models.ForeignKey(
FamiliaBeneficiaria, on_delete=models.CASCADE)
class VBG(models.Model):
ouviu_falar_vbg = models.CharField(max_length=100, null=True, blank=True)
ja_foi_vitima_vbg = models.CharField(max_length=100, null=True, blank=True)
canais_denunciar_vbg = models.CharField(
max_length=100, null=True, blank=True)
outro_canal_denuncia = models.CharField(
max_length=100, null=True, blank=True)
teve_toda_assistencia = models.CharField(
max_length=100, null=True, blank=True)
e_comum_vbg_comunidade = models.CharField(
max_length=100, null=True, blank=True)
casos_vbg_ouviu_falar = models.CharField(
max_length=100, null=True, blank=True)
outro_caso_vbg_ouviu_falar = models.CharField(
max_length=100, null=True, blank=True)
foto_caso_vbg_url = models.CharField(max_length=255, null=True, blank=True)
familia_beneficiaria = models.ForeignKey(
FamiliaBeneficiaria, on_delete=models.CASCADE)
| 48.47343 | 79 | 0.745465 | 1,321 | 10,034 | 5.404239 | 0.115821 | 0.220619 | 0.264743 | 0.352991 | 0.881076 | 0.842415 | 0.791848 | 0.791848 | 0.791848 | 0.767194 | 0 | 0.037413 | 0.155571 | 10,034 | 206 | 80 | 48.708738 | 0.805146 | 0 | 0 | 0.378378 | 0 | 0 | 0.004983 | 0.004485 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.005405 | 0.027027 | 0.72973 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e1b393802645150064904d667a00c9f1ce1b922a | 73 | py | Python | index.py | adwaitpande11/investment-tracker | 82c8c5e1aa57c058a46a492f87423da953a7532a | [
"MIT"
] | null | null | null | index.py | adwaitpande11/investment-tracker | 82c8c5e1aa57c058a46a492f87423da953a7532a | [
"MIT"
] | null | null | null | index.py | adwaitpande11/investment-tracker | 82c8c5e1aa57c058a46a492f87423da953a7532a | [
"MIT"
] | null | null | null | from application import app # noqa
from application import routes # noqa
| 24.333333 | 37 | 0.808219 | 10 | 73 | 5.9 | 0.6 | 0.508475 | 0.711864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 73 | 2 | 38 | 36.5 | 0.967213 | 0.123288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e1c44053d87cfd7afb1e260b3e7bc0b595aad86c | 15,034 | py | Python | scp_epub/test_unit/download/test_cache.py | elfakyn/scp_epub | 5d0e95d8fa0e11d9ab388c5a4083212c1c857a2f | [
"MIT"
] | 5 | 2020-05-27T15:57:15.000Z | 2021-06-11T01:08:50.000Z | scp_epub/test_unit/download/test_cache.py | elfakyn/scp_epub | 5d0e95d8fa0e11d9ab388c5a4083212c1c857a2f | [
"MIT"
] | null | null | null | scp_epub/test_unit/download/test_cache.py | elfakyn/scp_epub | 5d0e95d8fa0e11d9ab388c5a4083212c1c857a2f | [
"MIT"
] | 2 | 2020-11-14T04:53:51.000Z | 2021-06-12T19:28:32.000Z | import unittest
import unittest.mock
import os
import download.cache
from constants import constants
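# Unit tests for download.cache: the use_cache decorator plus the local and S3
# cache read/write helpers, with filesystem, AWS and JSON access mocked out.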
class TestUseCache(unittest.TestCase):
@unittest.mock.patch('download.utils.normalize_string')
@unittest.mock.patch('download.cache.set_cached_contents')
@unittest.mock.patch('download.cache.get_cached_contents')
def test_use_cache_no_refresh_found_in_cache(self, mock_get_cached_contents, mock_set_cached_contents, mock_normalize_string):
# Arrange
expected_func = unittest.mock.MagicMock()
expected_relative_path = 'foo/bar'
expected_filetype = 'json'
expected_item = 'Tale Of Three Soldiers'
expected_refresh = False
expected_normalized_item = 'tale-of-three-soldiers'
expected_contents = 'contents'
expected_cached_contents = expected_contents
expected_args = [expected_item]
expected_kwargs = {
'refresh': expected_refresh
}
mock_get_cached_contents.return_value = expected_cached_contents
mock_normalize_string.return_value = expected_normalized_item
# Act
actual_contents = download.cache.use_cache(expected_relative_path, expected_filetype)(expected_func)(*expected_args, **expected_kwargs)
# Assert
mock_normalize_string.assert_called_once_with(expected_item)
mock_get_cached_contents.assert_called_once_with(expected_relative_path, expected_normalized_item, expected_filetype)
mock_set_cached_contents.assert_not_called()
expected_func.assert_not_called()
self.assertEqual(expected_contents, actual_contents)
@unittest.mock.patch('download.utils.normalize_string')
@unittest.mock.patch('download.cache.set_cached_contents')
@unittest.mock.patch('download.cache.get_cached_contents')
def test_use_cache_implicit_no_refresh_found_in_cache(self, mock_get_cached_contents, mock_set_cached_contents, mock_normalize_string):
# Arrange
expected_func = unittest.mock.MagicMock()
expected_relative_path = 'foo/bar'
expected_filetype = 'json'
expected_item = 'Tale Of Three Soldiers'
expected_normalized_item = 'tale-of-three-soldiers'
expected_contents = 'contents'
expected_cached_contents = expected_contents
expected_args = [expected_item]
expected_kwargs = dict()
mock_get_cached_contents.return_value = expected_cached_contents
mock_normalize_string.return_value = expected_normalized_item
# Act
actual_contents = download.cache.use_cache(expected_relative_path, expected_filetype)(expected_func)(*expected_args, **expected_kwargs)
# Assert
mock_normalize_string.assert_called_once_with(expected_item)
mock_get_cached_contents.assert_called_once_with(expected_relative_path, expected_normalized_item, expected_filetype)
mock_set_cached_contents.assert_not_called()
expected_func.assert_not_called()
self.assertEqual(expected_contents, actual_contents)
@unittest.mock.patch('download.utils.normalize_string')
@unittest.mock.patch('download.cache.set_cached_contents')
@unittest.mock.patch('download.cache.get_cached_contents')
def test_use_cache_no_refresh_not_found_in_cache(self, mock_get_cached_contents, mock_set_cached_contents, mock_normalize_string):
# Arrange
expected_func = unittest.mock.MagicMock()
expected_relative_path = 'foo/bar'
expected_filetype = 'json'
expected_item = 'Tale Of Three Soldiers'
expected_refresh = False
expected_normalized_item = 'tale-of-three-soldiers'
expected_contents = 'contents'
expected_cached_contents = None
expected_args = [expected_item]
expected_kwargs = {
'refresh': expected_refresh
}
mock_get_cached_contents.return_value = expected_cached_contents
mock_normalize_string.return_value = expected_normalized_item
expected_func.return_value = expected_contents
# Act
actual_contents = download.cache.use_cache(expected_relative_path, expected_filetype)(expected_func)(*expected_args, **expected_kwargs)
# Assert
mock_normalize_string.assert_called_once_with(expected_item)
mock_get_cached_contents.assert_called_once_with(expected_relative_path, expected_normalized_item, expected_filetype)
mock_set_cached_contents.assert_called_once_with(expected_contents, expected_relative_path, expected_normalized_item, expected_filetype)
expected_func.assert_called_once_with(*expected_args, **expected_kwargs)
self.assertEqual(expected_contents, actual_contents)
@unittest.mock.patch('download.utils.normalize_string')
@unittest.mock.patch('download.cache.set_cached_contents')
@unittest.mock.patch('download.cache.get_cached_contents')
def test_use_cache_refresh(self, mock_get_cached_contents, mock_set_cached_contents, mock_normalize_string):
# Arrange
expected_func = unittest.mock.MagicMock()
expected_relative_path = 'foo/bar'
expected_filetype = 'json'
expected_item = 'Tale Of Three Soldiers'
expected_refresh = True
expected_normalized_item = 'tale-of-three-soldiers'
expected_contents = 'contents'
expected_cached_contents = None
expected_args = [expected_item]
expected_kwargs = {
'refresh': expected_refresh
}
mock_get_cached_contents.return_value = expected_cached_contents
mock_normalize_string.return_value = expected_normalized_item
expected_func.return_value = expected_contents
# Act
actual_contents = download.cache.use_cache(expected_relative_path, expected_filetype)(expected_func)(*expected_args, **expected_kwargs)
# Assert
mock_normalize_string.assert_called_once_with(expected_item)
mock_get_cached_contents.assert_not_called()
mock_set_cached_contents.assert_called_once_with(expected_contents, expected_relative_path, expected_normalized_item, expected_filetype)
expected_func.assert_called_once_with(*expected_args, **expected_kwargs)
self.assertEqual(expected_contents, actual_contents)
class TestGetCachedContents(unittest.TestCase):
@unittest.mock.patch('json.loads')
@unittest.mock.patch('download.aws.retrieve_from_s3_cache')
@unittest.mock.patch('download.cache.retrieve_from_local_cache')
def test_get_cached_contents_locally(self, mock_retrieve_from_local_cache, mock_retrieve_from_s3_cache, mock_loads):
# Arrange
os.environ.pop(constants.USE_AWS_VARIABLE, None)
expected_filetype = 'html'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
# Act
actual_contents = download.cache.get_cached_contents(expected_relative_path, expected_item, expected_filetype)
# Assert
self.assertEqual(mock_retrieve_from_local_cache.return_value, actual_contents)
mock_loads.assert_not_called()
mock_retrieve_from_s3_cache.assert_not_called()
mock_retrieve_from_local_cache.assert_called_once_with(expected_relative_path, expected_item, expected_filetype)
@unittest.mock.patch('json.loads')
@unittest.mock.patch('download.aws.retrieve_from_s3_cache')
@unittest.mock.patch('download.cache.retrieve_from_local_cache')
def test_get_cached_contents_s3(self, mock_retrieve_from_local_cache, mock_retrieve_from_s3_cache, mock_loads):
# Arrange
os.environ[constants.USE_AWS_VARIABLE] = constants.USE_AWS_TRUE
expected_filetype = 'html'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
# Act
actual_contents = download.cache.get_cached_contents(expected_relative_path, expected_item, expected_filetype)
# Assert
self.assertEqual(mock_retrieve_from_s3_cache.return_value, actual_contents)
mock_loads.assert_not_called()
mock_retrieve_from_s3_cache.assert_called_once_with(expected_relative_path, expected_item, expected_filetype)
mock_retrieve_from_local_cache.assert_not_called()
@unittest.mock.patch('json.loads')
@unittest.mock.patch('download.aws.retrieve_from_s3_cache')
@unittest.mock.patch('download.cache.retrieve_from_local_cache')
def test_get_cached_contents_load_json(self, mock_retrieve_from_local_cache, mock_retrieve_from_s3_cache, mock_loads):
# Arrange
os.environ[constants.USE_AWS_VARIABLE] = constants.USE_AWS_TRUE
expected_filetype = 'json'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
expected_contents = mock_loads.return_value
# Act
actual_contents = download.cache.get_cached_contents(expected_relative_path, expected_item, expected_filetype)
# Assert
self.assertEqual(expected_contents, actual_contents)
mock_loads.assert_called_once_with(mock_retrieve_from_s3_cache.return_value)
mock_retrieve_from_s3_cache.assert_called_once_with(expected_relative_path, expected_item, expected_filetype)
mock_retrieve_from_local_cache.assert_not_called()
class TestRetrieveFromLocalCache(unittest.TestCase):
@unittest.mock.patch('builtins.open')
def test_retrieve_from_local_cache(self, mock_open):
# Arrange
expected_relative_path = 'foo/bar'
expected_item = 'scp-123'
expected_filetype = 'json'
expected_cache_file = os.path.join(constants.LOCAL_CACHE_BASE_PATH, expected_relative_path, expected_item + '.' + expected_filetype)
expected_encoding = constants.ENCODING
expected_open_type = 'r'
expected_contents = mock_open.return_value.__enter__.return_value.read.return_value
# Act
actual_contents = download.cache.retrieve_from_local_cache(expected_relative_path, expected_item, expected_filetype)
# Assert
self.assertEqual(expected_contents, actual_contents)
mock_open.assert_called_once_with(expected_cache_file, expected_open_type, encoding=expected_encoding)
@unittest.mock.patch('builtins.open')
def test_retrieve_from_local_cache_file_not_found(self, mock_open):
# Arrange
expected_relative_path = 'foo/bar'
expected_item = 'scp-123'
expected_filetype = 'json'
expected_cache_file = os.path.join(constants.LOCAL_CACHE_BASE_PATH, expected_relative_path, expected_item + '.' + expected_filetype)
expected_encoding = constants.ENCODING
expected_open_type = 'r'
mock_open.return_value.__enter__.side_effect = FileNotFoundError
expected_contents = None
# Act
actual_contents = download.cache.retrieve_from_local_cache(expected_relative_path, expected_item, expected_filetype)
# Assert
self.assertEqual(expected_contents, actual_contents)
mock_open.assert_called_once_with(expected_cache_file, expected_open_type, encoding=expected_encoding)
class TestStoreInLocalCache(unittest.TestCase):
@unittest.mock.patch('os.makedirs')
@unittest.mock.patch('builtins.open')
def test_store_in_local_cache(self, mock_open, mock_makedirs):
# Arrange
expected_relative_path = 'foo/bar'
expected_item = 'scp-123'
expected_filetype = 'json'
expected_cache_dir = os.path.join(constants.LOCAL_CACHE_BASE_PATH, expected_relative_path)
expected_cache_file = os.path.join(constants.LOCAL_CACHE_BASE_PATH, expected_relative_path, expected_item + '.' + expected_filetype)
expected_encoding = constants.ENCODING
expected_exist_ok = True
expected_open_type = 'w'
expected_contents = 'contents'
# Act
actual_contents = download.cache.store_in_local_cache(expected_contents, expected_relative_path, expected_item, expected_filetype)
# Assert
mock_makedirs.assert_called_once_with(expected_cache_dir, exist_ok=expected_exist_ok)
mock_open.assert_called_once_with(expected_cache_file, expected_open_type, encoding=expected_encoding)
mock_open.return_value.__enter__.return_value.write.assert_called_once_with(expected_contents)
class TestSetCachedContents(unittest.TestCase):
@unittest.mock.patch('json.dumps')
@unittest.mock.patch('download.aws.store_in_s3_cache')
@unittest.mock.patch('download.cache.store_in_local_cache')
def test_set_cached_contents_locally(self, mock_store_in_local_cache, mock_store_in_s3_cache, mock_loads):
# Arrange
os.environ.pop(constants.USE_AWS_VARIABLE, None)
expected_filetype = 'html'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
expected_contents = 'contents'
# Act
download.cache.set_cached_contents(expected_contents, expected_relative_path, expected_item, expected_filetype)
# Assert
mock_loads.assert_not_called()
mock_store_in_s3_cache.assert_not_called()
mock_store_in_local_cache.assert_called_once_with(expected_contents, expected_relative_path, expected_item, expected_filetype)
@unittest.mock.patch('json.dumps')
@unittest.mock.patch('download.aws.store_in_s3_cache')
@unittest.mock.patch('download.cache.store_in_local_cache')
def test_set_cached_contents_s3(self, mock_store_in_local_cache, mock_store_in_s3_cache, mock_loads):
# Arrange
os.environ[constants.USE_AWS_VARIABLE] = constants.USE_AWS_TRUE
expected_filetype = 'html'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
expected_contents = 'contents'
# Act
download.cache.set_cached_contents(expected_contents, expected_relative_path, expected_item, expected_filetype)
# Assert
mock_loads.assert_not_called()
mock_store_in_local_cache.assert_not_called()
mock_store_in_s3_cache.assert_called_once_with(expected_contents, expected_relative_path, expected_item, expected_filetype)
@unittest.mock.patch('json.dumps')
@unittest.mock.patch('download.aws.store_in_s3_cache')
@unittest.mock.patch('download.cache.store_in_local_cache')
def test_set_cached_contents_load_json(self, mock_store_in_local_cache, mock_store_in_s3_cache, mock_loads):
# Arrange
os.environ[constants.USE_AWS_VARIABLE] = constants.USE_AWS_TRUE
expected_filetype = 'json'
expected_relative_path = 'foo/bar/'
expected_item = 'scp-123'
expected_contents = {'contents': 'contents'}
# Act
download.cache.set_cached_contents(expected_contents, expected_relative_path, expected_item, expected_filetype)
# Assert
mock_loads.assert_called_once_with(expected_contents)
mock_store_in_s3_cache.assert_called_once_with(mock_loads.return_value, expected_relative_path, expected_item, expected_filetype)
mock_store_in_local_cache.assert_not_called()
| 46.544892 | 144 | 0.750432 | 1,812 | 15,034 | 5.748344 | 0.054084 | 0.064516 | 0.078725 | 0.075269 | 0.956125 | 0.942012 | 0.917819 | 0.901594 | 0.893529 | 0.882584 | 0 | 0.003943 | 0.17334 | 15,034 | 322 | 145 | 46.689441 | 0.834165 | 0.016363 | 0 | 0.773333 | 0 | 0 | 0.09581 | 0.061296 | 0 | 0 | 0 | 0 | 0.213333 | 1 | 0.057778 | false | 0 | 0.022222 | 0 | 0.102222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bed34d07e9d891e312a74d5659b2bdf7e128a1de | 217 | py | Python | src/sage/algebras/quantum_groups/all.py | bopopescu/sage | 2d495be78e0bdc7a0a635454290b27bb4f5f70f0 | [
"BSL-1.0"
] | 1,742 | 2015-01-04T07:06:13.000Z | 2022-03-30T11:32:52.000Z | src/sage/algebras/quantum_groups/all.py | Ivo-Maffei/sage | 467fbc70a08b552b3de33d9065204ee9cbfb02c7 | [
"BSL-1.0"
] | 66 | 2015-03-19T19:17:24.000Z | 2022-03-16T11:59:30.000Z | src/sage/algebras/quantum_groups/all.py | dimpase/sage | 468f23815ade42a2192b0a9cd378de8fdc594dcd | [
"BSL-1.0"
] | 495 | 2015-01-10T10:23:18.000Z | 2022-03-24T22:06:11.000Z | """
Quantum Groups
"""
from sage.misc.lazy_import import lazy_import
lazy_import('sage.algebras.quantum_groups.fock_space', 'FockSpace')
lazy_import('sage.algebras.quantum_groups.quantum_group_gap', 'QuantumGroup')
| 24.111111 | 77 | 0.806452 | 29 | 217 | 5.724138 | 0.482759 | 0.240964 | 0.192771 | 0.26506 | 0.421687 | 0.421687 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 217 | 8 | 78 | 27.125 | 0.817734 | 0.064516 | 0 | 0 | 0 | 0 | 0.546392 | 0.438144 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
3609d13ad9f3356ef06bf9591f5a7da2d7ebdf2e | 4,947 | py | Python | daresnets.py | francisbrochu/DAStylizedTraining | ab154a0cbf84a39ae1694fe0e30c9953af011d04 | [
"MIT"
] | 2 | 2019-05-07T15:58:31.000Z | 2019-10-14T06:49:47.000Z | daresnets.py | francisbrochu/DAStylizedTraining | ab154a0cbf84a39ae1694fe0e30c9953af011d04 | [
"MIT"
] | null | null | null | daresnets.py | francisbrochu/DAStylizedTraining | ab154a0cbf84a39ae1694fe0e30c9953af011d04 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from grl import LambdaLayer, ReverseLayerF
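# Each network below follows the domain-adversarial (DANN-style) recipe: a
# pretrained ResNet-34 backbone feeds a task classifier head, while a second
# domain head sits behind a gradient reversal layer (ReverseLayerF +
# LambdaLayer) so the shared features are trained toward domain invariance.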
# for dog breed identification
class DBIDAResNet(nn.Module):
def __init__(self, lambda_param=0.1):
super(DBIDAResNet, self).__init__()
self.model = torchvision.models.resnet34(pretrained=True)
input_fc_dim = self.model.fc.in_features
self.model.fc = nn.Linear(input_fc_dim, 120)
self.domainfc = nn.Linear(input_fc_dim, 2)
self.ll = LambdaLayer(lambda_param=lambda_param)
def forward(self, x):
output = self.model.conv1(x)
output = self.model.bn1(output)
output = self.model.relu(output)
output = self.model.maxpool(output)
output = self.model.layer1(output)
output = self.model.layer2(output)
output = self.model.layer3(output)
output = self.model.layer4(output)
output = self.model.avgpool(output)
output = output.view(output.size(0), -1)
classif_output = self.model.fc(output)
domain_output = ReverseLayerF.apply(output)
domain_output = self.domainfc(domain_output)
domain_output = self.ll(domain_output)
return classif_output, domain_output
# for Dogs vs Cats
class DCDAResNet(nn.Module):
def __init__(self, lambda_param=0.1):
super(DCDAResNet, self).__init__()
self.model = torchvision.models.resnet34(pretrained=True)
input_fc_dim = self.model.fc.in_features
self.model.fc = nn.Linear(input_fc_dim, 2)
self.domainfc = nn.Linear(input_fc_dim, 2)
self.ll = LambdaLayer(lambda_param=lambda_param)
def forward(self, x):
output = self.model.conv1(x)
output = self.model.bn1(output)
output = self.model.relu(output)
output = self.model.maxpool(output)
output = self.model.layer1(output)
output = self.model.layer2(output)
output = self.model.layer3(output)
output = self.model.layer4(output)
output = self.model.avgpool(output)
output = output.view(output.size(0), -1)
classif_output = self.model.fc(output)
domain_output = ReverseLayerF.apply(output)
domain_output = self.domainfc(domain_output)
domain_output = self.ll(domain_output)
return classif_output, domain_output
# for dice
class DiceDAResNet(nn.Module):
def __init__(self, lambda_param=0.1):
super(DiceDAResNet, self).__init__()
self.model = torchvision.models.resnet34(pretrained=True)
input_fc_dim = self.model.fc.in_features
self.model.fc = nn.Linear(input_fc_dim, 6)
self.domainfc = nn.Linear(input_fc_dim, 2)
self.ll = LambdaLayer(lambda_param=lambda_param)
def forward(self, x):
output = self.model.conv1(x)
output = self.model.bn1(output)
output = self.model.relu(output)
output = self.model.maxpool(output)
output = self.model.layer1(output)
output = self.model.layer2(output)
output = self.model.layer3(output)
output = self.model.layer4(output)
output = self.model.avgpool(output)
output = output.view(output.size(0), -1)
classif_output = self.model.fc(output)
domain_output = ReverseLayerF.apply(output)
domain_output = self.domainfc(domain_output)
domain_output = self.ll(domain_output)
return classif_output, domain_output
# for Food101
class Food101DAResNet(nn.Module):
def __init__(self, lambda_param=0.1):
super(Food101DAResNet, self).__init__()
self.model = torchvision.models.resnet34(pretrained=True)
input_fc_dim = self.model.fc.in_features
self.model.fc = nn.Linear(input_fc_dim, 101)
self.domainfc = nn.Linear(input_fc_dim, 2)
self.ll = LambdaLayer(lambda_param=lambda_param)
def forward(self, x):
output = self.model.conv1(x)
output = self.model.bn1(output)
output = self.model.relu(output)
output = self.model.maxpool(output)
output = self.model.layer1(output)
output = self.model.layer2(output)
output = self.model.layer3(output)
output = self.model.layer4(output)
output = self.model.avgpool(output)
output = output.view(output.size(0), -1)
classif_output = self.model.fc(output)
domain_output = ReverseLayerF.apply(output)
domain_output = self.domainfc(domain_output)
domain_output = self.ll(domain_output)
return classif_output, domain_output
def load_resnet_model(dataset_name, lambda_param=0.1):
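    # convenience factory: returns the DA network matching the dataset name;
    # any unrecognized name falls back to the Food101 variant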
if dataset_name == "DBI":
return DBIDAResNet(lambda_param)
elif dataset_name == "DogsCats":
return DCDAResNet(lambda_param)
elif dataset_name == "Dice":
return DiceDAResNet(lambda_param)
else:
return Food101DAResNet(lambda_param)
| 29.981818 | 65 | 0.661815 | 623 | 4,947 | 5.070626 | 0.125201 | 0.148148 | 0.189934 | 0.186135 | 0.859133 | 0.842672 | 0.842672 | 0.842355 | 0.842355 | 0.842355 | 0 | 0.019494 | 0.232666 | 4,947 | 164 | 66 | 30.164634 | 0.812698 | 0.013341 | 0 | 0.763636 | 0 | 0 | 0.003076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081818 | false | 0 | 0.045455 | 0 | 0.236364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
360a2043cd4c7a43f4ccf535118340f8fb28050e | 26,691 | py | Python | sdk/python/pulumi_alicloud/amqp/binding.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/amqp/binding.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/amqp/binding.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['BindingArgs', 'Binding']
@pulumi.input_type
class BindingArgs:
def __init__(__self__, *,
binding_key: pulumi.Input[str],
binding_type: pulumi.Input[str],
destination_name: pulumi.Input[str],
instance_id: pulumi.Input[str],
source_exchange: pulumi.Input[str],
virtual_host_name: pulumi.Input[str],
argument: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a Binding resource.
:param pulumi.Input[str] binding_key: The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
:param pulumi.Input[str] binding_type: The Target Binding Types. Valid values: `EXCHANGE`, `QUEUE`.
:param pulumi.Input[str] destination_name: The Target Queue Or Exchange of the Name.
:param pulumi.Input[str] instance_id: Instance Id.
:param pulumi.Input[str] source_exchange: The Source Exchange Name.
:param pulumi.Input[str] virtual_host_name: Virtualhost Name.
:param pulumi.Input[str] argument: X-match Attributes. Valid Values:
* "x-match:all": Default Value, All the Message Header of Key-Value Pairs Stored in the Must Match.
* "x-match:any": at Least One Pair of the Message Header of Key-Value Pairs Stored in the Must Match.
"""
pulumi.set(__self__, "binding_key", binding_key)
pulumi.set(__self__, "binding_type", binding_type)
pulumi.set(__self__, "destination_name", destination_name)
pulumi.set(__self__, "instance_id", instance_id)
pulumi.set(__self__, "source_exchange", source_exchange)
pulumi.set(__self__, "virtual_host_name", virtual_host_name)
if argument is not None:
pulumi.set(__self__, "argument", argument)
@property
@pulumi.getter(name="bindingKey")
def binding_key(self) -> pulumi.Input[str]:
"""
The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
"""
return pulumi.get(self, "binding_key")
@binding_key.setter
def binding_key(self, value: pulumi.Input[str]):
pulumi.set(self, "binding_key", value)
@property
@pulumi.getter(name="bindingType")
def binding_type(self) -> pulumi.Input[str]:
"""
The Target Binding Types. Valid values: `EXCHANGE`, `QUEUE`.
"""
return pulumi.get(self, "binding_type")
@binding_type.setter
def binding_type(self, value: pulumi.Input[str]):
pulumi.set(self, "binding_type", value)
@property
@pulumi.getter(name="destinationName")
def destination_name(self) -> pulumi.Input[str]:
"""
The Target Queue Or Exchange of the Name.
"""
return pulumi.get(self, "destination_name")
@destination_name.setter
def destination_name(self, value: pulumi.Input[str]):
pulumi.set(self, "destination_name", value)
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> pulumi.Input[str]:
"""
Instance Id.
"""
return pulumi.get(self, "instance_id")
@instance_id.setter
def instance_id(self, value: pulumi.Input[str]):
pulumi.set(self, "instance_id", value)
@property
@pulumi.getter(name="sourceExchange")
def source_exchange(self) -> pulumi.Input[str]:
"""
The Source Exchange Name.
"""
return pulumi.get(self, "source_exchange")
@source_exchange.setter
def source_exchange(self, value: pulumi.Input[str]):
pulumi.set(self, "source_exchange", value)
@property
@pulumi.getter(name="virtualHostName")
def virtual_host_name(self) -> pulumi.Input[str]:
"""
Virtualhost Name.
"""
return pulumi.get(self, "virtual_host_name")
@virtual_host_name.setter
def virtual_host_name(self, value: pulumi.Input[str]):
pulumi.set(self, "virtual_host_name", value)
@property
@pulumi.getter
def argument(self) -> Optional[pulumi.Input[str]]:
"""
X-match Attributes. Valid Values:
* "x-match:all": Default Value, All the Message Header of Key-Value Pairs Stored in the Must Match.
* "x-match:any": at Least One Pair of the Message Header of Key-Value Pairs Stored in the Must Match.
"""
return pulumi.get(self, "argument")
@argument.setter
def argument(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "argument", value)
@pulumi.input_type
class _BindingState:
def __init__(__self__, *,
argument: Optional[pulumi.Input[str]] = None,
binding_key: Optional[pulumi.Input[str]] = None,
binding_type: Optional[pulumi.Input[str]] = None,
destination_name: Optional[pulumi.Input[str]] = None,
instance_id: Optional[pulumi.Input[str]] = None,
source_exchange: Optional[pulumi.Input[str]] = None,
virtual_host_name: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Binding resources.
:param pulumi.Input[str] argument: X-match Attributes. Valid Values:
* "x-match:all": Default Value, All the Message Header of Key-Value Pairs Stored in the Must Match.
* "x-match:any": at Least One Pair of the Message Header of Key-Value Pairs Stored in the Must Match.
:param pulumi.Input[str] binding_key: The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
:param pulumi.Input[str] binding_type: The Target Binding Types. Valid values: `EXCHANGE`, `QUEUE`.
:param pulumi.Input[str] destination_name: The Target Queue Or Exchange of the Name.
:param pulumi.Input[str] instance_id: Instance Id.
:param pulumi.Input[str] source_exchange: The Source Exchange Name.
:param pulumi.Input[str] virtual_host_name: Virtualhost Name.
"""
if argument is not None:
pulumi.set(__self__, "argument", argument)
if binding_key is not None:
pulumi.set(__self__, "binding_key", binding_key)
if binding_type is not None:
pulumi.set(__self__, "binding_type", binding_type)
if destination_name is not None:
pulumi.set(__self__, "destination_name", destination_name)
if instance_id is not None:
pulumi.set(__self__, "instance_id", instance_id)
if source_exchange is not None:
pulumi.set(__self__, "source_exchange", source_exchange)
if virtual_host_name is not None:
pulumi.set(__self__, "virtual_host_name", virtual_host_name)
@property
@pulumi.getter
def argument(self) -> Optional[pulumi.Input[str]]:
"""
The x-match attribute. Valid values:
* "x-match:all": Default value. All of the key-value pairs stored in the message header must match.
* "x-match:any": At least one of the key-value pairs stored in the message header must match.
"""
return pulumi.get(self, "argument")
@argument.setter
def argument(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "argument", value)
@property
@pulumi.getter(name="bindingKey")
def binding_key(self) -> Optional[pulumi.Input[str]]:
"""
The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
"""
return pulumi.get(self, "binding_key")
@binding_key.setter
def binding_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "binding_key", value)
@property
@pulumi.getter(name="bindingType")
def binding_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of the binding target. Valid values: `EXCHANGE`, `QUEUE`.
"""
return pulumi.get(self, "binding_type")
@binding_type.setter
def binding_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "binding_type", value)
@property
@pulumi.getter(name="destinationName")
def destination_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the target queue or exchange.
"""
return pulumi.get(self, "destination_name")
@destination_name.setter
def destination_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_name", value)
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> Optional[pulumi.Input[str]]:
"""
Instance Id.
"""
return pulumi.get(self, "instance_id")
@instance_id.setter
def instance_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "instance_id", value)
@property
@pulumi.getter(name="sourceExchange")
def source_exchange(self) -> Optional[pulumi.Input[str]]:
"""
The Source Exchange Name.
"""
return pulumi.get(self, "source_exchange")
@source_exchange.setter
def source_exchange(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_exchange", value)
@property
@pulumi.getter(name="virtualHostName")
def virtual_host_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the virtual host.
"""
return pulumi.get(self, "virtual_host_name")
@virtual_host_name.setter
def virtual_host_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "virtual_host_name", value)
class Binding(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
argument: Optional[pulumi.Input[str]] = None,
binding_key: Optional[pulumi.Input[str]] = None,
binding_type: Optional[pulumi.Input[str]] = None,
destination_name: Optional[pulumi.Input[str]] = None,
instance_id: Optional[pulumi.Input[str]] = None,
source_exchange: Optional[pulumi.Input[str]] = None,
virtual_host_name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides a RabbitMQ (AMQP) Binding resource to bind an exchange to another exchange or a queue.
> **NOTE:** Available in v1.135.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
example_virtual_host = alicloud.amqp.VirtualHost("exampleVirtualHost",
instance_id="amqp-abc12345",
virtual_host_name="my-VirtualHost")
example_exchange = alicloud.amqp.Exchange("exampleExchange",
auto_delete_state=False,
exchange_name="my-Exchange",
exchange_type="HEADERS",
instance_id=example_virtual_host.instance_id,
internal=False,
virtual_host_name=example_virtual_host.virtual_host_name)
example_queue = alicloud.amqp.Queue("exampleQueue",
instance_id=example_virtual_host.instance_id,
queue_name="my-Queue",
virtual_host_name=example_virtual_host.virtual_host_name)
example_binding = alicloud.amqp.Binding("exampleBinding",
argument="x-match:all",
binding_key=example_queue.queue_name,
binding_type="QUEUE",
destination_name="binding-queue",
instance_id=example_exchange.instance_id,
source_exchange=example_exchange.exchange_name,
virtual_host_name=example_exchange.virtual_host_name)
```
## Import
RabbitMQ (AMQP) Binding can be imported using the id, e.g.
```sh
$ pulumi import alicloud:amqp/binding:Binding example <instance_id>:<virtual_host_name>:<source_exchange>:<destination_name>
```
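For example, using the hypothetical identifiers from the usage example above:
```sh
$ pulumi import alicloud:amqp/binding:Binding example amqp-abc12345:my-VirtualHost:my-Exchange:binding-queue
```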
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] argument: The x-match attribute. Valid values:
* "x-match:all": Default value. All of the key-value pairs stored in the message header must match.
* "x-match:any": At least one of the key-value pairs stored in the message header must match.
:param pulumi.Input[str] binding_key: The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
:param pulumi.Input[str] binding_type: The type of the binding target. Valid values: `EXCHANGE`, `QUEUE`.
:param pulumi.Input[str] destination_name: The name of the target queue or exchange.
:param pulumi.Input[str] instance_id: Instance Id.
:param pulumi.Input[str] source_exchange: The Source Exchange Name.
:param pulumi.Input[str] virtual_host_name: The name of the virtual host.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: BindingArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a RabbitMQ (AMQP) Binding resource to bind an exchange to another exchange or a queue.
> **NOTE:** Available in v1.135.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
example_virtual_host = alicloud.amqp.VirtualHost("exampleVirtualHost",
instance_id="amqp-abc12345",
virtual_host_name="my-VirtualHost")
example_exchange = alicloud.amqp.Exchange("exampleExchange",
auto_delete_state=False,
exchange_name="my-Exchange",
exchange_type="HEADERS",
instance_id=example_virtual_host.instance_id,
internal=False,
virtual_host_name=example_virtual_host.virtual_host_name)
example_queue = alicloud.amqp.Queue("exampleQueue",
instance_id=example_virtual_host.instance_id,
queue_name="my-Queue",
virtual_host_name=example_virtual_host.virtual_host_name)
example_binding = alicloud.amqp.Binding("exampleBinding",
argument="x-match:all",
binding_key=example_queue.queue_name,
binding_type="QUEUE",
destination_name="binding-queue",
instance_id=example_exchange.instance_id,
source_exchange=example_exchange.exchange_name,
virtual_host_name=example_exchange.virtual_host_name)
```
## Import
RabbitMQ (AMQP) Binding can be imported using the id, e.g.
```sh
$ pulumi import alicloud:amqp/binding:Binding example <instance_id>:<virtual_host_name>:<source_exchange>:<destination_name>
```
:param str resource_name: The name of the resource.
:param BindingArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(BindingArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
argument: Optional[pulumi.Input[str]] = None,
binding_key: Optional[pulumi.Input[str]] = None,
binding_type: Optional[pulumi.Input[str]] = None,
destination_name: Optional[pulumi.Input[str]] = None,
instance_id: Optional[pulumi.Input[str]] = None,
source_exchange: Optional[pulumi.Input[str]] = None,
virtual_host_name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = BindingArgs.__new__(BindingArgs)
__props__.__dict__["argument"] = argument
if binding_key is None and not opts.urn:
raise TypeError("Missing required property 'binding_key'")
__props__.__dict__["binding_key"] = binding_key
if binding_type is None and not opts.urn:
raise TypeError("Missing required property 'binding_type'")
__props__.__dict__["binding_type"] = binding_type
if destination_name is None and not opts.urn:
raise TypeError("Missing required property 'destination_name'")
__props__.__dict__["destination_name"] = destination_name
if instance_id is None and not opts.urn:
raise TypeError("Missing required property 'instance_id'")
__props__.__dict__["instance_id"] = instance_id
if source_exchange is None and not opts.urn:
raise TypeError("Missing required property 'source_exchange'")
__props__.__dict__["source_exchange"] = source_exchange
if virtual_host_name is None and not opts.urn:
raise TypeError("Missing required property 'virtual_host_name'")
__props__.__dict__["virtual_host_name"] = virtual_host_name
super(Binding, __self__).__init__(
'alicloud:amqp/binding:Binding',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
argument: Optional[pulumi.Input[str]] = None,
binding_key: Optional[pulumi.Input[str]] = None,
binding_type: Optional[pulumi.Input[str]] = None,
destination_name: Optional[pulumi.Input[str]] = None,
instance_id: Optional[pulumi.Input[str]] = None,
source_exchange: Optional[pulumi.Input[str]] = None,
virtual_host_name: Optional[pulumi.Input[str]] = None) -> 'Binding':
"""
Get an existing Binding resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] argument: The x-match attribute. Valid values:
* "x-match:all": Default value. All of the key-value pairs stored in the message header must match.
* "x-match:any": At least one of the key-value pairs stored in the message header must match.
:param pulumi.Input[str] binding_key: The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
:param pulumi.Input[str] binding_type: The type of the binding target. Valid values: `EXCHANGE`, `QUEUE`.
:param pulumi.Input[str] destination_name: The name of the target queue or exchange.
:param pulumi.Input[str] instance_id: Instance Id.
:param pulumi.Input[str] source_exchange: The Source Exchange Name.
:param pulumi.Input[str] virtual_host_name: The name of the virtual host.
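Example (a minimal sketch; the identifiers are hypothetical, and the composite ID follows the `<instance_id>:<virtual_host_name>:<source_exchange>:<destination_name>` format used for import):
```python
existing = alicloud.amqp.Binding.get("existing", id="amqp-abc12345:my-VirtualHost:my-Exchange:binding-queue")
```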
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _BindingState.__new__(_BindingState)
__props__.__dict__["argument"] = argument
__props__.__dict__["binding_key"] = binding_key
__props__.__dict__["binding_type"] = binding_type
__props__.__dict__["destination_name"] = destination_name
__props__.__dict__["instance_id"] = instance_id
__props__.__dict__["source_exchange"] = source_exchange
__props__.__dict__["virtual_host_name"] = virtual_host_name
return Binding(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def argument(self) -> pulumi.Output[str]:
"""
The x-match attribute. Valid values:
* "x-match:all": Default value. All of the key-value pairs stored in the message header must match.
* "x-match:any": At least one of the key-value pairs stored in the message header must match.
"""
return pulumi.get(self, "argument")
@property
@pulumi.getter(name="bindingKey")
def binding_key(self) -> pulumi.Output[str]:
"""
The Binding Key.
* For a non-topic source exchange: The binding key can contain only letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
The binding key must be 1 to 255 characters in length.
* For a topic source exchange: The binding key can contain letters, digits, hyphens (-), underscores (_), periods (.), and at signs (@).
If the binding key contains a number sign (#), the binding key must start with a number sign (#) followed by a period (.) or end with a number sign (#) that follows a period (.).
The binding key must be 1 to 255 characters in length.
"""
return pulumi.get(self, "binding_key")
@property
@pulumi.getter(name="bindingType")
def binding_type(self) -> pulumi.Output[str]:
"""
The type of the binding target. Valid values: `EXCHANGE`, `QUEUE`.
"""
return pulumi.get(self, "binding_type")
@property
@pulumi.getter(name="destinationName")
def destination_name(self) -> pulumi.Output[str]:
"""
The name of the target queue or exchange.
"""
return pulumi.get(self, "destination_name")
@property
@pulumi.getter(name="instanceId")
def instance_id(self) -> pulumi.Output[str]:
"""
Instance Id.
"""
return pulumi.get(self, "instance_id")
@property
@pulumi.getter(name="sourceExchange")
def source_exchange(self) -> pulumi.Output[str]:
"""
The Source Exchange Name.
"""
return pulumi.get(self, "source_exchange")
@property
@pulumi.getter(name="virtualHostName")
def virtual_host_name(self) -> pulumi.Output[str]:
"""
The name of the virtual host.
"""
return pulumi.get(self, "virtual_host_name")
| 46.908612 | 193 | 0.639954 | 3,239 | 26,691 | 5.069466 | 0.064526 | 0.063642 | 0.079294 | 0.060292 | 0.90201 | 0.884836 | 0.860171 | 0.837333 | 0.810962 | 0.793971 | 0 | 0.003891 | 0.258589 | 26,691 | 568 | 194 | 46.991197 | 0.825863 | 0.448878 | 0 | 0.635338 | 1 | 0 | 0.116185 | 0.002252 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.003759 | 0.018797 | 0 | 0.270677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
36321fd33563137f6af1603c89ad0ef4f0bd6035 | 160,688 | py | Python | tests/test_grouptheory.py | wsmorgan/phonon-enumeration | 5d7a8d8e3403cc387bdd58cf98a23e4751ea34dd | [
"MIT-0"
] | 5 | 2016-06-17T05:39:27.000Z | 2021-05-30T21:02:08.000Z | tests/test_grouptheory.py | wsmorgan/phonon-enumeration | 5d7a8d8e3403cc387bdd58cf98a23e4751ea34dd | [
"MIT-0"
] | 66 | 2016-04-02T05:02:08.000Z | 2018-07-05T19:43:09.000Z | tests/test_grouptheory.py | wsmorgan/phonon-enumeration | 5d7a8d8e3403cc387bdd58cf98a23e4751ea34dd | [
"MIT-0"
] | 5 | 2017-03-15T21:28:44.000Z | 2020-01-09T14:44:45.000Z | """Methods for testing the subroutines in the grouptheory module."""
import unittest as ut
from phenum.grouptheory import ArrowPerm, RotPermList, OpList
import pytest
import numpy as np
gpath = "tests/grouptheory/"
def _read_fixOp_1D(fname):
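"""Read a list of OpList objects from the numbered ``_-<i>-rot``/``_-<i>-shift`` files in the directory `fname`, transposing each stored array."""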
import os
i = 1
growing = True
out = []
while growing:
if os.path.isfile(fname+"/_-"+str(i)+"-rot") or os.path.isfile(fname+"/_-"+str(i)+"-shift"):
i += 1
else:
growing = False
for j in range(1,i):
if os.path.isfile(fname+"/_-"+str(j)+"-rot"):
rot = [np.transpose(t) for t in _read_float_3D(fname+"/_-"+str(j)+"-rot")]
else:
rot = None
if os.path.isfile(fname+"/_-"+str(j)+"-shift"):
shift = list(map(list,zip(*_read_float_2D(fname+"/_-"+str(j)+"-shift"))))
else:
shift = None
temp = OpList(rot=rot,shift=shift)
out.append(temp)
return out
def _read_RotPermList_1D(fname,arrowp = None):
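"""Read a list of RotPermList objects from the numbered ``_-<j>-nL``/``-v``/``-perm``/``-RotIndx`` files in `fname`, converting 1-based permutation indices to 0-based."""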
import os
i = 1
growing = True
out = []
while growing:
if os.path.isfile(fname+"/_-"+str(i)+"-nL") or os.path.isfile(fname+"/_-"+str(i)+"-v") or os.path.isfile(fname+"/_-"+str(i)+"-RotIndx") or os.path.isfile(fname+"/_-"+str(i)+"-perm"):
i += 1
else:
growing = False
for j in range(1,i):
if os.path.isfile(fname+"/_-"+str(j)+"-nL"):
nL = _read_int(fname+"/_-"+str(j)+"-nL")
else:
nL = None
if os.path.isfile(fname+"/_-"+str(j)+"-v"):
v = _read_float_3D(fname+"/_-"+str(j)+"-v")
else:
v = None
if os.path.isfile(fname+"/_-"+str(j)+"-perm"):
perm = _read_int_2D(fname+"/_-"+str(j)+"-perm")
perm = [[i-1 for i in t] for t in perm]
else:
perm = None
if arrowp is None:
a_perm = None
else:
# Pass any provided arrow permutation through; all calls in this module use the
# default, so this branch is a defensive fix for the previously undefined a_perm.
a_perm = arrowp
if os.path.isfile(fname+"/_-"+str(j)+"-RotIndx"):
RotIndx = _read_int_1D(fname+"/_-"+str(j)+"-RotIndx")
RotIndx = [i-1 for i in RotIndx]
else:
RotIndx = None
temp = RotPermList(nL = nL, v = v, perm = perm, arrows=a_perm, RotIndx= RotIndx)
out.append(temp)
return out
def _read_fixOp(fname):
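"""Read a single OpList (rotations and shifts) from the ``_-rot`` and ``_-shift`` files in `fname`."""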
import os
if os.path.isfile(fname+"/_-rot"):
rot = _read_float_3D(fname+"/_-rot")
else:
rot = None
if os.path.isfile(fname+"/_-shift"):
shift = list(map(list,zip(*_read_float_2D(fname+"/_-shift"))))
else:
shift = None
out = OpList(rot=rot,shift=shift)
return out
def _read_RotPermList(fname,arrowp = None):
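"""Read a single RotPermList from the ``_-nL``/``_-v``/``_-perm``/``_-RotIndx`` files in `fname`, converting 1-based indices to 0-based."""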
import os
if os.path.isfile(fname+"/_-nL"):
nL = _read_int(fname+"/_-nL")
else:
nL = None
if os.path.isfile(fname+"/_-v"):
v = _read_float_3D(fname+"/_-v")
else:
v = None
if os.path.isfile(fname+"/_-perm"):
perm = _read_int_2D(fname+"/_-perm")
perm = [[i-1 for i in j] for j in perm]
else:
perm = None
if arrowp is None:
a_perm = None
else:
# Pass any provided arrow permutation through; all calls in this module use the
# default, so this branch is a defensive fix for the previously undefined a_perm.
a_perm = arrowp
if os.path.isfile(fname+"/_-RotIndx"):
RotIndx = _read_int_1D(fname+"/_-RotIndx")
RotIndx = [i-1 for i in RotIndx]
else:
RotIndx = None
out = RotPermList(nL = nL, v = v, perm = perm, arrows=a_perm, RotIndx= RotIndx)
return out
def _read_float_3D(fname):
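"""Read a 3D float array stored as '#'-separated 2D blocks; the second header line gives the dimensions, and the blocks are reassembled with the axes reordered."""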
with open(fname,"r") as inf:
temp = inf.readline()
sizes = inf.readline()
sizes = [int(x) for x in sizes.strip().split() if x !="##"]
temp = inf.readline()
in_data = []
in_temp = []
for line in inf:
if "#" not in line:
in_temp.append([float(i) for i in line.strip().split()])
else:
in_data.append(in_temp)
in_temp = []
in_data.append(in_temp)
out = []
for i in range(sizes[2]):
out_t = []
for j in range(sizes[1]):
out_t.append([k[j][i] for k in in_data])
out.append(out_t)
return(out)
def _read_int_3D(fname):
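"""Integer counterpart of _read_float_3D; each reassembled slice is additionally transposed."""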
with open(fname,"r") as inf:
temp = inf.readline()
sizes = inf.readline()
sizes = [int(x) for x in sizes.strip().split() if x !="##"]
temp = inf.readline()
in_data = []
in_temp = []
for line in inf:
if "#" not in line:
in_temp.append([int(i) for i in line.strip().split()])
else:
in_data.append(in_temp)
in_temp = []
in_data.append(in_temp)
out = []
for i in range(sizes[2]):
out_t = []
for j in range(sizes[1]):
out_t.append([k[j][i] for k in in_data])
out.append(np.transpose(out_t))
return(out)
def _read_output(test):
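"""Read the expected output for a test case, evaluating each line of the file as a Python literal."""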
values = []
with open("tests/grouptheory/"+test) as f:
for line in f:
values.append(eval(line))
return values
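# The remaining _read_* helpers parse scalar, 1D, and 2D values from plain-text
# fixture files; lines containing "#" are treated as headers/comments and skipped.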
def _read_float_2D(fname):
array = []
with open(fname,"r") as f1:
for line in f1:
if "#" not in line:
array.append([float(i) for i in line.strip().split()])
return array
def _read_float_1D(fname):
array = []
with open(fname,"r") as f1:
for line in f1:
if "#" not in line:
array = [float(i) for i in line.strip().split()]
return array
def _read_int_2D(fname):
array = []
with open(fname,"r") as f1:
for line in f1:
if "#" not in line:
array.append([int(i) for i in line.strip().split()])
return array
def _read_int_1D(fname):
array = []
with open(fname,"r") as f1:
for line in f1:
if "#" not in line:
array = [int(i) for i in line.strip().split()]
return array
def _read_int(fname):
with open(fname,"r") as f1:
line = f1.readline()
if "#" in line:
line = f1.readline()
val = int(line.strip())
return val
def _read_float(fname):
with open(fname,"r") as f1:
line = f1.readline()
if "#" in line:
line = f1.readline()
val = float(line.strip())
return val
def _read_logical(fname):
with open(fname,"r") as f1:
line = f1.readline()
if "#" in line:
line = f1.readline()
if "t" in line.lower():
val = True
else:
val = False
return val
class TestGetFullHNF(ut.TestCase):
""" Tests of the get_full_HNF subroutine."""
def test_1(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([1,0,1,0,0,1])
out = [[1,0,0],[0,1,0],[0,0,1]]
self.assertEqual(get_full_HNF(HNF),out)
def test_2(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([2,1,2,1,0,4])
out = [[2,0,0],[1,2,0],[1,0,4]]
self.assertEqual(get_full_HNF(HNF),out)
def test_3(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([1,0,3,1,2,3])
out = [[1,0,0],[0,3,0],[1,2,3]]
self.assertEqual(get_full_HNF(HNF),out)
def test_4(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = [0,0,0,0,0,0]
out = [[0,0,0],[0,0,0],[0,0,0]]
self.assertEqual(get_full_HNF(HNF),out)
def test_5(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([3,0,3,0,0,3])
out = [[3,0,0],[0,3,0],[0,0,3]]
self.assertEqual(get_full_HNF(HNF),out)
def test_6(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([1,1,2,0,2,2])
out = [[1,0,0],[1,2,0],[0,2,2]]
self.assertEqual(get_full_HNF(HNF),out)
def test_7(self):
from phenum.grouptheory import get_full_HNF
from numpy import array
HNF = array([2,0,2,0,2,4])
out = [[2,0,0],[0,2,0],[0,2,4]]
self.assertEqual(get_full_HNF(HNF),out)
class TestSmithNormalForm(ut.TestCase):
""" Tests of the SmithNormalForm subroutine."""
def test_1(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_2(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [0, 1, 2]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 2]], [[1, 0, 0], [0, 1, 0], [0, -1, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_3(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 3]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_4(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 2, 0], [0, 0, 2]]
out = ([[1, 0, 0], [0, 2, 0], [0, 0, 2]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_5(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [1, 2, 5]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 5]], [[1, 0, 0], [0, 1, 0], [-1, -2, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_6(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [2, 3, 6]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 6]], [[1, 0, 0], [0, 1, 0], [-2, -3, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_7(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [0, 6, 7]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 7]], [[1, 0, 0], [0, 1, 0], [0, -6, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_8(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [1, 2, 0], [1, 0, 4]]
out = ([[1, 0, 0], [0, 2, 0], [0, 0, 4]], [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_9(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
out = ([[2, 0, 0], [0, 2, 0], [0, 0, 2]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_10(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 0, 0], [0, 1, 0], [1, 5, 10]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 10]], [[1, 0, 0], [0, 1, 0], [-1, -5, 1]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_11(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]
out = ()
with pytest.raises(ValueError):
self.assertEqual(SmithNormalForm(HNF),out)
def test_12(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[0, 0, 1], [1, 0, 0], [0, 1, 0]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_13(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, -1, -2], [1, 2, -3], [1, 2, 4]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 21]], [[1, 0, 0], [-1, 1, 0], [-7, 6, 1]], [[1, -2, 7], [0, 0, 1], [0, -1, 3]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_14(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[-1, -2, -3], [-1, -1, -2], [-1, -2, -4]]
out = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]], [[-1, -2, 1], [0, 1, 1], [0, 0, -1]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_15(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[1, 2.5, 0], [0, 1.5, 1.66], [1.5, 1.25, 1.3]]
with pytest.raises(ValueError):
SmithNormalForm(HNF)
def test_16(self):
from phenum.grouptheory import SmithNormalForm
HNF = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
out = ([[1, 0, 0], [0, 2, 0], [0, 0, 2]], [[1, 0, 1], [0, 1, 0], [-1, 0, 0]], [[0, 0, -1], [0, 1, 0], [1, 0, 2]])
self.assertEqual(SmithNormalForm(HNF),out)
def test_17(self):
"""Test of the bug reported in issue #56."""
from phenum.grouptheory import SmithNormalForm
HNF = [[1,2,4],[3,3,4],[3,4,2]]
S, L, R = SmithNormalForm(HNF)
self.assertTrue(np.allclose(list(np.dot(np.dot(L,HNF),R)),S))
def test_18(self):
"""Test of the bug reported in issue #61."""
from phenum.grouptheory import SmithNormalForm
HNF = [[41,0,0],[0,21,0],[0,0,41]]
out = ([[1, 0, 0], [0, 41, 0], [0, 0, 861]], [[1, 1, 0], [-42, -41, 1], [42, 41, 0]], [[-1, 0, 21], [2, 0, -41], [0, 1, 21]])
self.assertEqual(SmithNormalForm(HNF),out)
class TestAGroup(ut.TestCase):
""" Tests of the a_group subroutine."""
def test_1(self):
from phenum.grouptheory import a_group
trans = [[0,1],[1,0]]
rots = [[[0,1],[0,1,2,3,4,5]],[[1,0],[2,3,0,1,5,4]],[[1,0],[2,1,0,3,5,4]],[[0,1],[0,3,2,1,5,4]]]
out = _read_output("agroup.out.1")
self.assertEqual(a_group(trans,rots),out)
def test_2(self):
from phenum.grouptheory import a_group
trans = [[j-1 for j in i] for i in [[1, 2, 3, 4], [2, 1, 4, 3], [3, 4, 1, 2], [4, 3, 2, 1]]]
rots = [[[j-1 for j in i] for i in t] for t in [[[1, 2, 3, 4], [1, 2, 3, 4, 5, 6]], [[1, 4, 3, 2], [1, 3, 2, 4, 6, 5]], [[1, 2, 3, 4], [4, 2, 3, 1, 5, 6]], [[1, 4, 3, 2], [4, 3, 2, 1, 6, 5]], [[1, 2, 3, 4], [1, 5, 3, 4, 2, 6]], [[1, 4, 3, 2], [1, 3, 5, 4, 6, 2]], [[1, 2, 3, 4], [4, 5, 3, 1, 2, 6]], [[1, 4, 3, 2], [4, 3, 5, 1, 6, 2]], [[1, 2, 3, 4], [1, 2, 6, 4, 5, 3]], [[1, 4, 3, 2], [1, 6, 2, 4, 3, 5]], [[1, 2, 3, 4], [4, 2, 6, 1, 5, 3]], [[1, 4, 3, 2], [4, 6, 2, 1, 3, 5]], [[1, 2, 3, 4], [1, 5, 6, 4, 2, 3]], [[1, 4, 3, 2], [1, 6, 5, 4, 3, 2]], [[1, 2, 3, 4], [4, 5, 6, 1, 2, 3]], [[1, 4, 3, 2], [4, 6, 5, 1, 3, 2]]]]
out = _read_output("agroup.out.2")
self.assertEqual(a_group(trans,rots),out)
def test_3(self):
from phenum.grouptheory import a_group
trans = [[j-1 for j in i] for i in [[1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7], [3, 4, 5, 6, 7, 8, 1, 2], [4, 3, 6, 5, 8, 7, 2, 1], [5, 6, 7, 8, 1, 2, 3, 4], [6, 5, 8, 7, 2, 1, 4, 3], [7, 8, 1, 2, 3, 4, 5, 6], [8, 7, 2, 1, 4, 3, 6, 5]]]
rots = [[[j-1 for j in i] for i in t] for t in [[[1, 2, 3, 4, 5, 6, 7, 8], [1, 2, 3, 4, 5, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [4, 2, 3, 1, 5, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [1, 5, 3, 4, 2, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [4, 5, 3, 1, 2, 6]], [[1, 2, 7, 8, 5, 6, 3, 4], [1, 2, 6, 4, 5, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [4, 2, 6, 1, 5, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [1, 5, 6, 4, 2, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [4, 5, 6, 1, 2, 3]]]]
out = _read_output("agroup.out.3")
self.assertEqual(a_group(trans,rots),out)
def test_4(self):
from phenum.grouptheory import a_group
trans =[[j - 1 for j in i] for i in[[1,2,3,4,5,6,7,8], [2,1,4,3,6,5,8,7], [3,4,5,6,7,8,1,2], [4,3,6,5,8,7,2,1], [5,6,7,8,1,2,3,4], [6,5,8,7,2,1,4,3], [7,8,1,2,3,4,5,6], [8,7,2,1,4,3,6,5]]]
rots = [[[0,1,2,3,4,5,6,7],[0,1,2,3]],[[0,1,2,3,4,5,6,7],[2,1,0,3]],[[0,1,6,7,4,5,2,3],[0,3,2,1]],[[0,1,6,7,4,5,2,3],[2,3,0,1]]]
out = _read_output("agroup.out.4")
self.assertEqual(a_group(trans,rots),out)
def test_5(self):
from phenum.grouptheory import a_group
trans =[[j - 1 for j in i] for i in[[1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7], [3, 4, 5, 6, 7, 8, 1, 2], [4, 3, 6, 5, 8, 7, 2, 1], [5, 6, 7, 8, 1, 2, 3, 4], [6, 5, 8, 7, 2, 1, 4, 3], [7, 8, 1, 2, 3, 4, 5, 6], [8, 7, 2, 1, 4, 3, 6, 5]]]
rots = [[[0,1,2,3,4,5,6,7],[0,1,2,3]],[[0,1,2,3,4,5,6,7],[2,1,0,3]],[[0,1,6,7,4,5,2,3],[0,3,2,1]],[[0,1,6,7,4,5,2,3],[2,3,0,1]]]
out = _read_output("agroup.out.5")
self.assertEqual(a_group(trans,rots),out)
class TestAGroupGen(ut.TestCase):
""" Tests of the a_group subroutine."""
def test_1(self):
from phenum.grouptheory import a_group_gen
trans = [[0,1],[1,0]]
rots = [[[0,1],[0,1,2,3,4,5]],[[1,0],[2,3,0,1,5,4]],[[1,0],[2,1,0,3,5,4]],[[0,1],[0,3,2,1,5,4]]]
out = _read_output("agroupgen.out.1")
self.assertEqual(a_group_gen(trans,rots),out)
def test_2(self):
from phenum.grouptheory import a_group_gen
trans = [[j-1 for j in i] for i in [[1, 2, 3, 4], [2, 1, 4, 3], [3, 4, 1, 2], [4, 3, 2, 1]]]
rots = [[[j-1 for j in i] for i in t] for t in [[[1, 2, 3, 4], [1, 2, 3, 4, 5, 6]], [[1, 4, 3, 2], [1, 3, 2, 4, 6, 5]], [[1, 2, 3, 4], [4, 2, 3, 1, 5, 6]], [[1, 4, 3, 2], [4, 3, 2, 1, 6, 5]], [[1, 2, 3, 4], [1, 5, 3, 4, 2, 6]], [[1, 4, 3, 2], [1, 3, 5, 4, 6, 2]], [[1, 2, 3, 4], [4, 5, 3, 1, 2, 6]], [[1, 4, 3, 2], [4, 3, 5, 1, 6, 2]], [[1, 2, 3, 4], [1, 2, 6, 4, 5, 3]], [[1, 4, 3, 2], [1, 6, 2, 4, 3, 5]], [[1, 2, 3, 4], [4, 2, 6, 1, 5, 3]], [[1, 4, 3, 2], [4, 6, 2, 1, 3, 5]], [[1, 2, 3, 4], [1, 5, 6, 4, 2, 3]], [[1, 4, 3, 2], [1, 6, 5, 4, 3, 2]], [[1, 2, 3, 4], [4, 5, 6, 1, 2, 3]], [[1, 4, 3, 2], [4, 6, 5, 1, 3, 2]]]]
out = _read_output("agroupgen.out.2")
self.assertEqual(a_group_gen(trans,rots),out)
def test_3(self):
from phenum.grouptheory import a_group_gen
trans = [[j-1 for j in i] for i in [[1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7], [3, 4, 5, 6, 7, 8, 1, 2], [4, 3, 6, 5, 8, 7, 2, 1], [5, 6, 7, 8, 1, 2, 3, 4], [6, 5, 8, 7, 2, 1, 4, 3], [7, 8, 1, 2, 3, 4, 5, 6], [8, 7, 2, 1, 4, 3, 6, 5]]]
rots = [[[j-1 for j in i] for i in t] for t in [[[1, 2, 3, 4, 5, 6, 7, 8], [1, 2, 3, 4, 5, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [4, 2, 3, 1, 5, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [1, 5, 3, 4, 2, 6]], [[1, 2, 3, 4, 5, 6, 7, 8], [4, 5, 3, 1, 2, 6]], [[1, 2, 7, 8, 5, 6, 3, 4], [1, 2, 6, 4, 5, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [4, 2, 6, 1, 5, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [1, 5, 6, 4, 2, 3]], [[1, 2, 7, 8, 5, 6, 3, 4], [4, 5, 6, 1, 2, 3]]]]
out = _read_output("agroupgen.out.3")
self.assertEqual(a_group_gen(trans,rots),out)
def test_4(self):
from phenum.grouptheory import a_group_gen
trans =[[j - 1 for j in i] for i in[[1,2,3,4,5,6,7,8], [2,1,4,3,6,5,8,7], [3,4,5,6,7,8,1,2], [4,3,6,5,8,7,2,1], [5,6,7,8,1,2,3,4], [6,5,8,7,2,1,4,3], [7,8,1,2,3,4,5,6], [8,7,2,1,4,3,6,5]]]
rots = [[[0,1,2,3,4,5,6,7],[0,1,2,3]],[[0,1,2,3,4,5,6,7],[2,1,0,3]],[[0,1,6,7,4,5,2,3],[0,3,2,1]],[[0,1,6,7,4,5,2,3],[2,3,0,1]]]
out = _read_output("agroupgen.out.4")
self.assertEqual(a_group_gen(trans,rots),out)
def test_5(self):
from phenum.grouptheory import a_group_gen
trans =[[j - 1 for j in i] for i in[[1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7], [3, 4, 5, 6, 7, 8, 1, 2], [4, 3, 6, 5, 8, 7, 2, 1], [5, 6, 7, 8, 1, 2, 3, 4], [6, 5, 8, 7, 2, 1, 4, 3], [7, 8, 1, 2, 3, 4, 5, 6], [8, 7, 2, 1, 4, 3, 6, 5]]]
rots = [[[0,1,2,3,4,5,6,7],[0,1,2,3]],[[0,1,2,3,4,5,6,7],[2,1,0,3]],[[0,1,6,7,4,5,2,3],[0,3,2,1]],[[0,1,6,7,4,5,2,3],[2,3,0,1]]]
out = _read_output("agroupgen.out.5")
self.assertEqual(a_group_gen(trans,rots),out)
class TestMakeMemberList(ut.TestCase):
"""Tests of the _make_member_list subroutine."""
def test_1(self):
from phenum.grouptheory import _make_member_list
case = 1
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_2(self):
from phenum.grouptheory import _make_member_list
case = 2
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_3(self):
from phenum.grouptheory import _make_member_list
case = 3
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_4(self):
from phenum.grouptheory import _make_member_list
case = 4
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_5(self):
from phenum.grouptheory import _make_member_list
case = 5
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_6(self):
from phenum.grouptheory import _make_member_list
case = 6
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_7(self):
from phenum.grouptheory import _make_member_list
case = 7
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_8(self):
from phenum.grouptheory import _make_member_list
case = 8
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_9(self):
from phenum.grouptheory import _make_member_list
case = 9
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_10(self):
from phenum.grouptheory import _make_member_list
case = 10
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_11(self):
from phenum.grouptheory import _make_member_list
case = 11
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_12(self):
from phenum.grouptheory import _make_member_list
case = 12
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_13(self):
from phenum.grouptheory import _make_member_list
case = 13
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_14(self):
from phenum.grouptheory import _make_member_list
case = 14
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_15(self):
from phenum.grouptheory import _make_member_list
case = 15
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_16(self):
from phenum.grouptheory import _make_member_list
case = 16
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_17(self):
from phenum.grouptheory import _make_member_list
case = 17
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_18(self):
from phenum.grouptheory import _make_member_list
case = 18
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_19(self):
from phenum.grouptheory import _make_member_list
case = 19
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
def test_20(self):
from phenum.grouptheory import _make_member_list
case = 20
n = _read_float_1D(gpath+"make_member_list_n.in."+str(case))
out = list(map(list,zip(*_read_float_2D(gpath+"make_member_list_p.out."+str(case)))))
self.assertTrue(np.allclose(_make_member_list(n),out))
class TestFindPermutationOfGroup(ut.TestCase):
"""Tests of the _find_permutation_of_group subroutine."""
def test_1(self):
from phenum.grouptheory import _find_permutation_of_group
case = 1
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_2(self):
from phenum.grouptheory import _find_permutation_of_group
case = 2
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_3(self):
from phenum.grouptheory import _find_permutation_of_group
case = 3
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_4(self):
from phenum.grouptheory import _find_permutation_of_group
case = 4
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_5(self):
from phenum.grouptheory import _find_permutation_of_group
case = 5
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_6(self):
from phenum.grouptheory import _find_permutation_of_group
case = 6
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_7(self):
from phenum.grouptheory import _find_permutation_of_group
case = 7
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_8(self):
from phenum.grouptheory import _find_permutation_of_group
case = 8
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_9(self):
from phenum.grouptheory import _find_permutation_of_group
case = 9
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_10(self):
from phenum.grouptheory import _find_permutation_of_group
case = 10
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_11(self):
from phenum.grouptheory import _find_permutation_of_group
case = 11
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_12(self):
from phenum.grouptheory import _find_permutation_of_group
case = 12
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_13(self):
from phenum.grouptheory import _find_permutation_of_group
case = 13
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_14(self):
from phenum.grouptheory import _find_permutation_of_group
case = 14
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_15(self):
from phenum.grouptheory import _find_permutation_of_group
case = 15
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_16(self):
from phenum.grouptheory import _find_permutation_of_group
case = 16
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_17(self):
from phenum.grouptheory import _find_permutation_of_group
case = 17
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_18(self):
from phenum.grouptheory import _find_permutation_of_group
case = 18
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_19(self):
from phenum.grouptheory import _find_permutation_of_group
case = 19
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
def test_20(self):
from phenum.grouptheory import _find_permutation_of_group
case = 20
g = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_g.in."+str(case)))))
gp = list(map(list,zip(*_read_float_2D(gpath+"find_permutation_of_group_gp.in."+str(case)))))
out = [i-1 for i in _read_int_1D(gpath+"find_permutation_of_group_perm.out."+str(case))]
self.assertEqual(_find_permutation_of_group(g,gp),out)
class TestIsEquivLattice(ut.TestCase):
"""Tests of the _is_equiv_lattice subroutine."""
def test_1(self):
from phenum.grouptheory import _is_equiv_lattice
case = 1
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_2(self):
from phenum.grouptheory import _is_equiv_lattice
case = 2
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_3(self):
from phenum.grouptheory import _is_equiv_lattice
case = 3
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_4(self):
from phenum.grouptheory import _is_equiv_lattice
case = 4
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_5(self):
from phenum.grouptheory import _is_equiv_lattice
case = 5
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_6(self):
from phenum.grouptheory import _is_equiv_lattice
case = 6
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_7(self):
from phenum.grouptheory import _is_equiv_lattice
case = 7
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_8(self):
from phenum.grouptheory import _is_equiv_lattice
case = 8
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_9(self):
from phenum.grouptheory import _is_equiv_lattice
case = 9
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_10(self):
from phenum.grouptheory import _is_equiv_lattice
case = 10
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_11(self):
from phenum.grouptheory import _is_equiv_lattice
case = 11
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_12(self):
from phenum.grouptheory import _is_equiv_lattice
case = 12
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_13(self):
from phenum.grouptheory import _is_equiv_lattice
case = 13
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_14(self):
from phenum.grouptheory import _is_equiv_lattice
case = 14
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_15(self):
from phenum.grouptheory import _is_equiv_lattice
case = 15
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_16(self):
from phenum.grouptheory import _is_equiv_lattice
case = 16
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_17(self):
from phenum.grouptheory import _is_equiv_lattice
case = 17
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_18(self):
from phenum.grouptheory import _is_equiv_lattice
case = 18
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_19(self):
from phenum.grouptheory import _is_equiv_lattice
case = 19
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
def test_20(self):
from phenum.grouptheory import _is_equiv_lattice
case = 20
lat1 = _read_float_2D(gpath+"is_equiv_lattice_lat1.in."+str(case))
lat2 = _read_float_2D(gpath+"is_equiv_lattice_lat2.in."+str(case))
eps = _read_float(gpath+"is_equiv_lattice_eps.in."+str(case))
out = _read_logical(gpath+"is_equiv_lattice.out."+str(case))
self.assertEqual(_is_equiv_lattice(lat1,lat2,eps),out)
class TestGetSLVFixingOperations(ut.TestCase):
"""Tests of the _get_sLV_fixing_operations subroutine."""
def _compare_outputs(self,out1,out2):
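"""Compare two (fixOp, rotPerm, degeneracy) triples field by field, using approximate float equality and tolerating None-valued members."""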
fix1 = out1[0]
fix2 = out2[0]
rot1 = out1[1]
rot2 = out2[1]
deg1 = out1[2]
deg2 = out2[2]
self.assertEqual(deg1,deg2)
if len(fix1.rot) == len(fix2.rot):
for i in range(len(fix1.rot)):
for j in range(3):
for k in range(3):
self.assertAlmostEqual(fix1.rot[i][j][k],fix2.rot[i][j][k],places=12)
else:
self.assertEqual(len(fix1.rot),len(fix2.rot))
if len(fix1.shift) == len(fix2.shift):
for i in range(len(fix1.shift)):
for j in range(3):
self.assertAlmostEqual(fix1.shift[i][j],fix2.shift[i][j],places=12)
else:
self.assertEqual(len(fix1.shift),len(fix2.shift))
self.assertEqual(rot1.nL,rot2.nL)
self.assertEqual(rot1.RotIndx, rot2.RotIndx)
if (rot1.v is not None) and (rot2.v is not None):
if len(rot1.v) == len(rot2.v):
for i in range(len(rot1.v)):
for j in range(len(rot1.v[i])):
for k in range(len(rot1.v[i][j])):
self.assertAlmostEqual(rot1.v[i][j][k],rot2.v[i][j][k],places=12)
else:
self.assertEqual(len(rot1.v),len(rot2.v))
else:
self.assertEqual(rot1.v,rot2.v)
# if (rot1.perm.site_perm != None) and (rot2.perm.site_perm != None):
# if len(rot1.perm.site_perm) == len(rot2.perm.site_perm):
# for i in range(len(rot1.perm.site_perm)):
# for j in range(len(rot1.perm.site_perm[i])):
# self.assertEqual(rot1.perm.site_perm[i][j],rot2.perm.site_perm[i][j])
# else:
# self.assertEqual(len(rot1.perm.site_perm),len(rot2.perm.site_perm))
# else:
# self.assertEqual(rot1.perm.site_perm,rot2.perm.site_perm)
if (rot1.perm.arrow_perm is not None) and (rot2.perm.arrow_perm is not None):
if len(rot1.perm.arrow_perm) == len(rot2.perm.arrow_perm):
for i in range(len(rot1.perm.arrow_perm)):
for j in range(len(rot1.perm.arrow_perm[i])):
self.assertEqual(rot1.perm.arrow_perm[i][j],rot2.perm.arrow_perm[i][j])
else:
self.assertEqual(len(rot1.perm.arrow_perm),len(rot2.perm.arrow_perm))
else:
self.assertEqual(rot1.perm.arrow_perm,rot2.perm.arrow_perm)
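# Consolidation sketch (hypothetical; not wired into the generated tests
# below, which all repeat this same load/run/compare body). The helper name
# _run_case_sketch is illustrative only.
def _run_case_sketch(self, case):
from phenum.grouptheory import _get_sLV_fixing_operations
# Load the inputs exactly as the generated tests do.
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
# Run the subroutine and compare against the Fortran reference outputs.
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])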
def test_1(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 1
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_2(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 10
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_3(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 20
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_4(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 30
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_5(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 40
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_6(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 50
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_7(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 60
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_8(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 70
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_9(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 80
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_10(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 90
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_11(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 100
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_12(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 110
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_13(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 120
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_14(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 130
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_15(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 140
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
def test_16(self):
from phenum.grouptheory import _get_sLV_fixing_operations
case = 150
HNF = _read_int_2D(gpath+"get_sLV_fixing_operations_HNF.in."+str(case))
pLV = np.transpose(_read_float_2D(gpath+"get_sLV_fixing_operations_pLV.in."+str(case)))
nD = _read_int(gpath+"get_sLV_fixing_operations_nD.in."+str(case))
rot = _read_float_3D(gpath+"get_sLV_fixing_operations_rot.in."+str(case))
shift = list(map(list,zip(*_read_float_2D(gpath+"get_sLV_fixing_operations_shift.in."+str(case)))))
eps = _read_float(gpath+"get_sLV_fixing_operations_eps.in."+str(case))
dPerm = _read_RotPermList(gpath+"get_sLV_fixing_operations_dPerm.in."+str(case))
fixOp, rotPerm, degeneracy = _get_sLV_fixing_operations(HNF,pLV,rot,shift,dPerm,eps)
rotPerm_out = _read_RotPermList(gpath+"get_sLV_fixing_operations_rotPerm.out."+str(case))
degen_out = _read_int(gpath+"get_sLV_fixing_operations_degeneracy.out."+str(case))
fixOp_out = _read_fixOp(gpath+"get_sLV_fixing_operations_fixOp.out."+str(case))
self._compare_outputs([fixOp,rotPerm,degeneracy],[fixOp_out,rotPerm_out,degen_out])
class TestMapDvectorPermutation(ut.TestCase):
"""Tests of the _map_dvector_permutation subroutine."""
def _compare_outputs(self,out1,out2):
if len(out1) == len(out2):
for i in range(len(out1)):
self.assertEqual(out1[i],out2[i])
else:
self.assertEqual(len(out1),len(out2))
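# Hypothetical alternative (a sketch, not active): the 21 generated cases
# below could be expressed as a single loop with self.subTest, assuming the
# case numbering 1, 10, 20, ..., 200 used by the tests.
def _run_all_cases_sketch(self):
from phenum.grouptheory import _map_dvector_permutation
for case in [1] + list(range(10, 201, 10)):
with self.subTest(case=case):
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
# Convert the Fortran 1-based reference permutation to 0-based.
out = [i-1 for i in _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)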
def test_1(self):
from phenum.grouptheory import _map_dvector_permutation
case = 1
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_2(self):
from phenum.grouptheory import _map_dvector_permutation
case = 10
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_3(self):
from phenum.grouptheory import _map_dvector_permutation
case = 20
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_4(self):
from phenum.grouptheory import _map_dvector_permutation
case = 30
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_5(self):
from phenum.grouptheory import _map_dvector_permutation
case = 40
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_6(self):
from phenum.grouptheory import _map_dvector_permutation
case = 50
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_7(self):
from phenum.grouptheory import _map_dvector_permutation
case = 60
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_8(self):
from phenum.grouptheory import _map_dvector_permutation
case = 70
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_9(self):
from phenum.grouptheory import _map_dvector_permutation
case = 80
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_10(self):
from phenum.grouptheory import _map_dvector_permutation
case = 90
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_11(self):
from phenum.grouptheory import _map_dvector_permutation
case = 100
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_12(self):
from phenum.grouptheory import _map_dvector_permutation
case = 110
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_13(self):
from phenum.grouptheory import _map_dvector_permutation
case = 120
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_14(self):
from phenum.grouptheory import _map_dvector_permutation
case = 130
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_15(self):
from phenum.grouptheory import _map_dvector_permutation
case = 140
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_16(self):
from phenum.grouptheory import _map_dvector_permutation
case = 150
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_17(self):
from phenum.grouptheory import _map_dvector_permutation
case = 160
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_18(self):
from phenum.grouptheory import _map_dvector_permutation
case = 170
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_19(self):
from phenum.grouptheory import _map_dvector_permutation
case = 180
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_20(self):
from phenum.grouptheory import _map_dvector_permutation
case = 190
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
def test_21(self):
from phenum.grouptheory import _map_dvector_permutation
case = 200
rd = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_rd.in."+str(case)))))
d = list(map(list,zip(*_read_float_2D(gpath+"map_dvector_permutation_d.in."+str(case)))))
eps = _read_float(gpath+"map_dvector_permutation_eps.in."+str(case))
n = len(rd)
out = _read_int_1D(gpath+"map_dvector_permutation_RP.out."+str(case))
out = [i-1 for i in out]
self._compare_outputs(_map_dvector_permutation(rd,d,eps),out)
class TestFindMinmaxIndices(ut.TestCase):
"""Tests of the _find_minmax_indices subroutine."""
def test_1(self):
from phenum.grouptheory import _find_minmax_indices
case = 1
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_2(self):
from phenum.grouptheory import _find_minmax_indices
case = 5
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_3(self):
from phenum.grouptheory import _find_minmax_indices
case = 10
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_4(self):
from phenum.grouptheory import _find_minmax_indices
case = 15
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_5(self):
from phenum.grouptheory import _find_minmax_indices
case = 20
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_6(self):
from phenum.grouptheory import _find_minmax_indices
case = 25
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_7(self):
from phenum.grouptheory import _find_minmax_indices
case = 30
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_8(self):
from phenum.grouptheory import _find_minmax_indices
case = 35
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_9(self):
from phenum.grouptheory import _find_minmax_indices
case = 40
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_10(self):
from phenum.grouptheory import _find_minmax_indices
case = 45
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
def test_11(self):
from phenum.grouptheory import _find_minmax_indices
case = 50
invec = _read_int_1D(gpath+"get_minmax_indices_invec.in."+str(case))
min_idx, max_idx = _find_minmax_indices(invec)
min_out = _read_int(gpath+"get_minmax_indices_min.out."+str(case))-1
max_out = _read_int(gpath+"get_minmax_indices_max.out."+str(case))-1
self.assertEqual(min_idx,min_out)
self.assertEqual(max_idx,max_out)
class TestGetDvectorPermutations(ut.TestCase):
"""Tests of the _get_dvector_permutations subroutine."""
def _compare_outputs(self,rot1,rot2):
# self.assertEqual(rot1.nL,rot2.nL)
self.assertEqual(rot1.RotIndx, rot2.RotIndx)
if (rot1.v is not None) and (rot2.v is not None):
if len(rot1.v) == len(rot2.v):
for i in range(len(rot1.v)):
for j in range(len(rot1.v[i])):
for k in range(len(rot1.v[i][j])):
self.assertAlmostEqual(rot1.v[i][j][k],rot2.v[i][j][k],places=12)
else:
self.assertEqual(len(rot1.v),len(rot2.v))
else:
self.assertEqual(rot1.v,rot2.v)
if (rot1.perm.site_perm is not None) and (rot2.perm.site_perm is not None):
if len(rot1.perm.site_perm) == len(rot2.perm.site_perm):
for i in range(len(rot1.perm.site_perm)):
for j in range(len(rot1.perm.site_perm[i])):
self.assertEqual(rot1.perm.site_perm[i][j],rot2.perm.site_perm[i][j])
else:
self.assertEqual(len(rot1.perm.site_perm),len(rot2.perm.site_perm))
else:
self.assertEqual(rot1.perm.site_perm,rot2.perm.site_perm)
if (rot1.perm.arrow_perm is not None) and (rot2.perm.arrow_perm is not None):
if len(rot1.perm.arrow_perm) == len(rot2.perm.arrow_perm):
for i in range(len(rot1.perm.arrow_perm)):
for j in range(len(rot1.perm.arrow_perm[i])):
self.assertEqual(rot1.perm.arrow_perm[i][j],rot2.perm.arrow_perm[i][j])
else:
self.assertEqual(len(rot1.perm.arrow_perm),len(rot2.perm.arrow_perm))
else:
self.assertEqual(rot1.perm.arrow_perm,rot2.perm.arrow_perm)
def test_1(self):
from phenum.grouptheory import _get_dvector_permutations
case = 1
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_2(self):
from phenum.grouptheory import _get_dvector_permutations
case = 2
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_3(self):
from phenum.grouptheory import _get_dvector_permutations
case = 3
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_4(self):
from phenum.grouptheory import _get_dvector_permutations
case = 4
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_5(self):
from phenum.grouptheory import _get_dvector_permutations
case = 5
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_6(self):
from phenum.grouptheory import _get_dvector_permutations
case = 6
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_7(self):
from phenum.grouptheory import _get_dvector_permutations
case = 7
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_8(self):
from phenum.grouptheory import _get_dvector_permutations
case = 8
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_9(self):
from phenum.grouptheory import _get_dvector_permutations
case = 9
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
self._compare_outputs(dRPList,dRPList_out)
def test_10(self):
from phenum.grouptheory import _get_dvector_permutations
case = 10
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_11(self):
from phenum.grouptheory import _get_dvector_permutations
case = 11
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_12(self):
from phenum.grouptheory import _get_dvector_permutations
case = 12
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_13(self):
from phenum.grouptheory import _get_dvector_permutations
case = 13
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_14(self):
from phenum.grouptheory import _get_dvector_permutations
case = 14
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_15(self):
from phenum.grouptheory import _get_dvector_permutations
case = 15
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_16(self):
from phenum.grouptheory import _get_dvector_permutations
case = 16
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_17(self):
from phenum.grouptheory import _get_dvector_permutations
case = 17
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_18(self):
from phenum.grouptheory import _get_dvector_permutations
case = 18
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_19(self):
from phenum.grouptheory import _get_dvector_permutations
case = 19
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
def test_20(self):
from phenum.grouptheory import _get_dvector_permutations
case = 20
par_lat = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pLV.in."+str(case)))))
bas_vecs = list(map(list,zip(*_read_float_2D(gpath+"get_dvector_permutations_pd.in."+str(case)))))
LatDim = _read_int(gpath+"get_dvector_permutations_LatDim.in."+str(case))
eps = _read_float(gpath+"get_dvector_permutations_eps.in."+str(case))
dRPList_out = _read_RotPermList(gpath+"get_dvector_permutations_dRPList.out."+str(case))
dRPList = _get_dvector_permutations(par_lat,bas_vecs,LatDim,eps)
self._compare_outputs(dRPList,dRPList_out)
class TestGetRotationPermsLists(ut.TestCase):
"""Tests of the _get_rotation_perms_lists subroutine."""
def _compare_outputs(self,out1,out2):
if len(out1) == len(out2):
for t in range(len(out1)):
rot1 = out1[t]
rot2 = out2[t]
# 0 and None both count as an empty nL; two empties match trivially.
if (rot1.nL in (0, None)) and (rot2.nL in (0, None)):
pass
else:
self.assertEqual(rot1.nL,rot2.nL)
self.assertEqual(rot1.RotIndx, rot2.RotIndx)
if (rot1.v is not None) and (rot2.v is not None):
if len(rot1.v) == len(rot2.v):
for i in range(len(rot1.v)):
for j in range(len(rot1.v[i])):
for k in range(len(rot1.v[i][j])):
self.assertAlmostEqual(rot1.v[i][j][k],rot2.v[i][j][k],places=12)
else:
self.assertEqual(len(rot1.v),len(rot2.v))
else:
self.assertEqual(rot1.v,rot2.v)
if (rot1.perm.site_perm is not None) and (rot2.perm.site_perm is not None):
if len(rot1.perm.site_perm) == len(rot2.perm.site_perm):
rot1.perm.site_perm = sorted(rot1.perm.site_perm)
rot2.perm.site_perm = sorted(rot2.perm.site_perm)
for i in range(len(rot1.perm.site_perm)):
for j in range(len(rot1.perm.site_perm[i])):
self.assertEqual(rot1.perm.site_perm[i][j],rot2.perm.site_perm[i][j])
else:
self.assertEqual(len(rot1.perm.site_perm),len(rot2.perm.site_perm))
else:
self.assertEqual(rot1.perm.site_perm,rot2.perm.site_perm)
if (rot1.perm.arrow_perm is not None) and (rot2.perm.arrow_perm is not None):
if len(rot1.perm.arrow_perm) == len(rot2.perm.arrow_perm):
for i in range(len(rot1.perm.arrow_perm)):
for j in range(len(rot1.perm.arrow_perm[i])):
self.assertEqual(rot1.perm.arrow_perm[i][j],rot2.perm.arrow_perm[i][j])
else:
self.assertEqual(len(rot1.perm.arrow_perm),len(rot2.perm.arrow_perm))
else:
self.assertEqual(rot1.perm.arrow_perm,rot2.perm.arrow_perm)
else:
self.assertEqual(len(out1),len(out2))
def test_1(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 1
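# Case 1 uses hard-coded inputs (a diagonal cell with identity HNF, L and
# SNF matrices) rather than fixture files for A/HNF/L/SNF.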
A = [[10,0,0],[0,10,0],[0,0,10]]
HNF = [[[1,0,0],[0,1,0],[0,0,1]]]
L = [[[1,0,0],[0,1,0],[0,0,1]]]
SNF = [[[1,0,0],[0,1,0],[0,0,1]]]
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
for out in out1:
out.perm.arrow_perm = None
out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
self._compare_outputs(out1,out2)
def test_2(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 2
A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
for out in out1:
out.perm.arrow_perm = None
out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
self._compare_outputs(out1,out2)
def test_3(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 3
A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
for out in out1:
out.perm.arrow_perm = None
self._compare_outputs(out1,out2)
def test_4(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 4
A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
for out in out1:
out.perm.arrow_perm = None
self._compare_outputs(out1,out2)
def test_5(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 5
A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
for out in out1:
out.perm.arrow_perm = None
self._compare_outputs(out1,out2)
def test_6(self):
from phenum.grouptheory import _get_rotation_perms_lists
case = 6
A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
        out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
        out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
        for out in out1:
            out.perm.arrow_perm = None
        self._compare_outputs(out1,out2)
    def _check_case(self,case):
        """Run _get_rotation_perms_lists on the stored inputs for `case` and
        compare the result against the stored reference output. The test_N
        methods below differ only in the case number, so they all delegate
        here."""
        from phenum.grouptheory import _get_rotation_perms_lists
        # zip(*...) transposes the matrix read from the input file.
        A = list(map(list,zip(*_read_float_2D(gpath+"get_rotation_perms_lists_A.in."+str(case)))))
        HNF = _read_int_3D(gpath+"get_rotation_perms_lists_HNF.in."+str(case))
        L = _read_int_3D(gpath+"get_rotation_perms_lists_L.in."+str(case))
        SNF = _read_int_3D(gpath+"get_rotation_perms_lists_SNF.in."+str(case))
        Op = _read_fixOp_1D(gpath+"get_rotation_perms_lists_Op.in."+str(case))
        dperms = _read_RotPermList(gpath+"get_rotation_perms_lists_dperms.in."+str(case))
        RPlist = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.in."+str(case))
        eps = _read_float(gpath+"get_rotation_perms_lists_eps.in."+str(case))
        out1 = _get_rotation_perms_lists(A,HNF,L,SNF,Op,RPlist,dperms,eps)
        out2 = _read_RotPermList_1D(gpath+"get_rotation_perms_lists_RPlist.out."+str(case))
        # The arrow permutations are not part of this comparison, so clear
        # them before comparing.
        for out in out1:
            out.perm.arrow_perm = None
        self._compare_outputs(out1,out2)
    def test_7(self):
        self._check_case(7)
    def test_8(self):
        self._check_case(8)
    def test_9(self):
        self._check_case(9)
    def test_10(self):
        self._check_case(10)
    def test_11(self):
        self._check_case(11)
    def test_12(self):
        self._check_case(12)
    def test_13(self):
        self._check_case(13)
    def test_14(self):
        self._check_case(14)
    def test_15(self):
        self._check_case(15)
    def test_16(self):
        self._check_case(16)
    def test_17(self):
        self._check_case(17)
    def test_18(self):
        self._check_case(18)
    def test_19(self):
        self._check_case(19)
    def test_20(self):
        self._check_case(20)
    def test_21(self):
        self._check_case(21)
    def test_22(self):
        self._check_case(22)
    def test_23(self):
        self._check_case(23)
    def test_24(self):
        self._check_case(24)
    def test_25(self):
        self._check_case(25)
    def test_26(self):
        self._check_case(26)
    def test_27(self):
        self._check_case(27)
    def test_28(self):
        self._check_case(28)
    def test_29(self):
        self._check_case(29)
    def test_30(self):
        self._check_case(30)
    def test_31(self):
        self._check_case(31)
    def test_32(self):
        self._check_case(32)
    def test_33(self):
        self._check_case(33)
    def test_34(self):
        self._check_case(34)
    def test_35(self):
        self._check_case(35)
    def test_36(self):
        self._check_case(36)
    def test_37(self):
        self._check_case(37)
    def test_38(self):
        self._check_case(38)
    def test_39(self):
        self._check_case(39)
    # Cases 40-49 are not exercised.
    def test_50(self):
        self._check_case(50)
    def test_51(self):
        self._check_case(51)
    def test_52(self):
        self._check_case(52)
    def test_53(self):
        self._check_case(53)
    def test_54(self):
        self._check_case(54)
    def test_55(self):
        self._check_case(55)
    def test_56(self):
        self._check_case(56)
    def test_57(self):
        self._check_case(57)
    def test_58(self):
        self._check_case(58)
class TestRM3DOperations(ut.TestCase):
    """Tests of the _rm_3D_operations subroutine."""
    def test_1(self):
        from phenum.grouptheory import _rm_3D_operations
        # A call with these invalid arguments is expected to raise ValueError.
        with pytest.raises(ValueError):
            _rm_3D_operations([[1,1,0],[1,1,1],[0,1,1]],[0],[0],1E-7)
class TestGetSymGroup(ut.TestCase):
    """Tests of the get_sym_group subroutine."""
    def _check_group(self,par_lat,bas_vacs,HNF,out_name):
        """Build the symmetry group for the given parent lattice, basis, and
        HNF, then check that every (site_perm, arrow_perm) pair it produces
        appears in the stored reference output. The test_N methods below
        differ only in these inputs, so they all delegate here."""
        from phenum.grouptheory import get_sym_group
        LatDim = 3
        out = _read_output(out_name)
        symm = get_sym_group(par_lat,bas_vacs,HNF,LatDim)
        agroup = []
        for i in range(len(symm.perm.site_perm)):
            agroup.append([symm.perm.site_perm[i],[int(j) for j in symm.perm.arrow_perm[i]]])
        for perm in agroup:
            self.assertIn(perm,out)
    def test_1(self):
        par_lat = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[1, 0, 0], [0, 1, 0], [2, 3, 6]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.1")
    def test_2(self):
        par_lat = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 5, 6]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.2")
    def test_3(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 1, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.3")
    def test_4(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 0, 7]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.4")
    def test_5(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[1, 0, 0], [1, 2, 0], [1, 0, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.5")
    def test_6(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[1, 0, 0], [0, 2, 0], [0, 0, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.6")
    def test_7(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [0.25, 0.25, 0.75]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 1, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.7")
    def test_8(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [0.25, 0.25, 0.75]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.8")
    def test_9(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [0.25, 0.25, 0.75]]
        HNF = [[1, 0, 0], [0, 1, 0], [0, 2, 3]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.9")
    def test_10(self):
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[0.0, 0.0, 0.0]]
        HNF = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.10")
    def test_11(self):
        # The basis site [2.0, 2.0, 2.0] is a whole lattice translation of the
        # origin, so the expected group matches case 10 and the test reuses
        # that reference output.
        par_lat = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        bas_vacs = [[2.0, 2.0, 2.0]]
        HNF = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
        self._check_group(par_lat,bas_vacs,HNF,"arrow_group.out.10")
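# Hedged addition (not in the original file): a standard unittest entry point
# so the suite can also be run directly with the Python interpreter. It
# assumes the module-level `import unittest as ut` implied by the
# ut.TestCase usage above; under pytest this guard is simply ignored.
if __name__ == '__main__':
    ut.main()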
365c99c1a69dabfe43b8572da7750f4f710f080f | 571 | py | Python | 05/test2.py | lluxury/pcc_exercise | 947ca87aeb1d58a1c28eb26b851cc45fbbe3809e | ["MIT"] | null | null | null | 05/test2.py | lluxury/pcc_exercise | 947ca87aeb1d58a1c28eb26b851cc45fbbe3809e | ["MIT"] | null | null | null | 05/test2.py | lluxury/pcc_exercise | 947ca87aeb1d58a1c28eb26b851cc45fbbe3809e | ["MIT"] | null | null | null |

car = 'subaru'
bike = 'bwm'
bake = 'bwm'

print("Is car == bike? I predict False.")
print(car == bike)

print("\nIs car == bake? I predict False.")
print(car == bake)

car = 'subaru'
bike = 'bwm'
bake = 'Bwm'

print("Is bike == bake? I predict False.")
print(bike == bake)

print("\nIs bike.lower() == bake.lower()? I predict True.")
print(bike.lower() == bake.lower())

# Numerical comparisons.
print(3 == 5)
print(3 != 5)
print(3 > 5)
print(3 < 5)
print(3 >= 5)
print(3 <= 5)  # <= is a single operator, not two separate conditions

# Combining tests with and/or.
print(3 != 5 and 3 <= 5)
print(3 != 5 or 3 > 5)

# Membership tests.
print(3 in [3, 5])
print(3 not in [3, 5])
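# A minimal follow-on sketch (an addition, not part of the original exercise):
# the boolean expressions printed above are exactly what an if-statement
# evaluates, so the same tests can drive program flow directly.
topping = 'mushrooms'
if topping != 'anchovies' and topping in ['mushrooms', 'onions']:
    print("Adding " + topping + ".")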