hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f6c41833dca799588b1c5fc38985aefb7db6d806 | 1,179 | py | Python | tests/importer/test_cparser.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | 1 | 2020-11-29T07:25:34.000Z | 2020-11-29T07:25:34.000Z | tests/importer/test_cparser.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | 1 | 2019-04-19T15:06:31.000Z | 2019-04-26T13:24:36.000Z | tests/importer/test_cparser.py | marmeladema/calligra | 912becec93a2246ed322656131b7bd9fe51fff95 | [
"MIT"
] | null | null | null | import unittest
import calligra.stdlib
import calligra.importer.cparser as cparser
import pycparser
import re
class TestCParserImporter(unittest.TestCase):
def test_named_decl_with_named_type(self):
ctx = cparser.ASTContext(calligra.stdlib.namespace)
code = 'struct test {int member;} test;'
ast = pycparser.CParser().parse(code, 'stdin')
self.assertEqual(len(ast.ext), 1)
decl = calligra.importer.cparser.handle_Decl(ast.ext[0], ctx)
code_re = re.compile(r'^struct\s+test\s+test$')
self.assertTrue(code_re.match(decl.code()))
define_re = re.compile(r'^struct\s+test\s+test;\s*$')
self.assertTrue(define_re.match(decl.define()))
def test_named_decl_with_anonymous_type(self):
ctx = cparser.ASTContext(calligra.stdlib.namespace)
code = 'struct {int member;} test;'
ast = pycparser.CParser().parse(code, 'stdin')
self.assertEqual(len(ast.ext), 1)
decl = calligra.importer.cparser.handle_Decl(ast.ext[0], ctx)
code_re = re.compile(r'^struct\s*{\s*int\s+member;\s*}\s*test$')
self.assertTrue(code_re.match(decl.code()))
define_re = re.compile(r'^struct\s*{\s*int\s+member;\s*}\s*test;\s*$')
self.assertTrue(define_re.match(decl.define()))
| 35.727273 | 72 | 0.73028 | 180 | 1,179 | 4.672222 | 0.233333 | 0.035672 | 0.052319 | 0.057075 | 0.818074 | 0.770511 | 0.770511 | 0.770511 | 0.770511 | 0.760999 | 0 | 0.003766 | 0.099237 | 1,179 | 32 | 73 | 36.84375 | 0.788136 | 0 | 0 | 0.461538 | 0 | 0.038462 | 0.167091 | 0.110263 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
f6d3551ca4430165ac52ea391ed82c9427528acd | 33,143 | py | Python | cdhweb/pages/migrations/0001_initial.py | bwhicks/cdh-web | d6002dc1933a4d6e97f5459aafc9ab92cb1f8050 | [
"Apache-2.0"
] | 1 | 2017-11-21T16:02:33.000Z | 2017-11-21T16:02:33.000Z | cdhweb/pages/migrations/0001_initial.py | bwhicks/cdh-web | d6002dc1933a4d6e97f5459aafc9ab92cb1f8050 | [
"Apache-2.0"
] | 367 | 2017-08-14T16:05:41.000Z | 2021-11-03T15:29:18.000Z | cdhweb/pages/migrations/0001_initial.py | bwhicks/cdh-web | d6002dc1933a4d6e97f5459aafc9ab92cb1f8050 | [
"Apache-2.0"
] | 5 | 2017-09-08T21:08:49.000Z | 2020-10-02T04:39:37.000Z | # Generated by Django 2.2.19 on 2021-05-03 16:47
import django.db.models.deletion
import taggit.managers
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.core.models
import wagtail.documents.blocks
import wagtail.embeds.blocks
import wagtail.images.blocks
import wagtail.search.index
import wagtail.snippets.blocks
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
("wagtailimages", "0023_add_choose_permissions"),
("taggit", "0003_taggeditem_add_unique_index"),
("wagtailcore", "0060_fix_workflow_unique_constraint"),
]
operations = [
migrations.CreateModel(
name="ContentPage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.Page",
),
),
(
"description",
wagtail.core.fields.RichTextField(
blank=True,
help_text="Optional. Brief description for preview display. Will also be used for search description (without tags), if one is not entered.",
),
),
(
"body",
wagtail.core.fields.StreamField(
[
(
"paragraph",
wagtail.core.blocks.RichTextBlock(
features=[
"h2",
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
]
),
),
(
"image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.images.blocks.ImageChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
]
),
),
(
"svg_image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
(
"extended_description",
wagtail.core.blocks.RichTextBlock(
features=["p"],
help_text="This text will only be read to non-sighted users and should describe the major insights or takeaways from the graphic. Multiple paragraphs are allowed.",
required=False,
),
),
]
),
),
("embed", wagtail.embeds.blocks.EmbedBlock()),
(
"migrated",
wagtail.core.blocks.RichTextBlock(
features=[
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
"image",
"embed",
],
icon="warning",
),
),
],
blank=True,
),
),
(
"attachments",
wagtail.core.fields.StreamField(
[
(
"document",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"link",
wagtail.snippets.blocks.SnippetChooserBlock(
"cdhpages.ExternalAttachment"
),
),
],
blank=True,
),
),
],
options={
"abstract": False,
},
bases=("wagtailcore.page", models.Model),
),
migrations.CreateModel(
name="HomePage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.Page",
),
),
(
"body",
wagtail.core.fields.StreamField(
[
(
"paragraph",
wagtail.core.blocks.RichTextBlock(
features=[
"h2",
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
]
),
),
(
"image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.images.blocks.ImageChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
]
),
),
(
"svg_image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
(
"extended_description",
wagtail.core.blocks.RichTextBlock(
features=["p"],
help_text="This text will only be read to non-sighted users and should describe the major insights or takeaways from the graphic. Multiple paragraphs are allowed.",
required=False,
),
),
]
),
),
("embed", wagtail.embeds.blocks.EmbedBlock()),
(
"migrated",
wagtail.core.blocks.RichTextBlock(
features=[
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
"image",
"embed",
],
icon="warning",
),
),
],
blank=True,
),
),
(
"attachments",
wagtail.core.fields.StreamField(
[
(
"document",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"link",
wagtail.snippets.blocks.SnippetChooserBlock(
"cdhpages.ExternalAttachment"
),
),
],
blank=True,
),
),
],
options={
"verbose_name": "Homepage",
},
bases=("wagtailcore.page",),
),
migrations.CreateModel(
name="LinkPage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.Page",
),
),
(
"link_url",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="link to a custom URL",
),
),
(
"url_append",
models.CharField(
blank=True,
help_text="Use this to optionally append a #hash or querystring to the URL.",
max_length=255,
verbose_name="append to URL",
),
),
(
"extra_classes",
models.CharField(
blank=True,
help_text="Optionally specify css classes to be added to this page when it appears in menus.",
max_length=100,
verbose_name="menu item css classes",
),
),
(
"link_page",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to="wagtailcore.Page",
verbose_name="link to an internal page",
),
),
],
options={
"abstract": False,
},
bases=("wagtailcore.page",),
),
migrations.CreateModel(
name="RelatedLinkType",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("name", models.CharField(max_length=255)),
("sort_order", models.PositiveIntegerField(default=0)),
],
options={
"ordering": ["sort_order"],
},
),
migrations.CreateModel(
name="PageIntro",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("paragraph", wagtail.core.fields.RichTextField()),
(
"page",
models.OneToOneField(
on_delete=django.db.models.deletion.CASCADE,
to="cdhpages.LinkPage",
),
),
],
),
migrations.CreateModel(
name="LocalAttachment",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("title", models.CharField(max_length=255, verbose_name="title")),
("file", models.FileField(upload_to="documents", verbose_name="file")),
(
"created_at",
models.DateTimeField(auto_now_add=True, verbose_name="created at"),
),
("file_size", models.PositiveIntegerField(editable=False, null=True)),
(
"file_hash",
models.CharField(blank=True, editable=False, max_length=40),
),
(
"author",
models.CharField(
blank=True,
help_text="Citation or list of authors",
max_length=255,
),
),
(
"collection",
models.ForeignKey(
default=wagtail.core.models.get_root_collection_id,
on_delete=django.db.models.deletion.CASCADE,
related_name="+",
to="wagtailcore.Collection",
verbose_name="collection",
),
),
(
"tags",
taggit.managers.TaggableManager(
blank=True,
help_text=None,
through="taggit.TaggedItem",
to="taggit.Tag",
verbose_name="tags",
),
),
(
"uploaded_by_user",
models.ForeignKey(
blank=True,
editable=False,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to=settings.AUTH_USER_MODEL,
verbose_name="uploaded by user",
),
),
],
options={
"verbose_name": "document",
"verbose_name_plural": "documents",
"abstract": False,
},
bases=(wagtail.search.index.Indexed, models.Model),
),
migrations.CreateModel(
name="LandingPage",
fields=[
(
"page_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="wagtailcore.Page",
),
),
(
"body",
wagtail.core.fields.StreamField(
[
(
"paragraph",
wagtail.core.blocks.RichTextBlock(
features=[
"h2",
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
]
),
),
(
"image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.images.blocks.ImageChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
]
),
),
(
"svg_image",
wagtail.core.blocks.StructBlock(
[
(
"image",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"alternative_text",
wagtail.core.blocks.TextBlock(
help_text="Alternative text for visually impaired users to\nbriefly communicate the intended message of the image in this context.",
required=True,
),
),
(
"caption",
wagtail.core.blocks.RichTextBlock(
features=[
"bold",
"italic",
"link",
"superscript",
],
required=False,
),
),
(
"extended_description",
wagtail.core.blocks.RichTextBlock(
features=["p"],
help_text="This text will only be read to non-sighted users and should describe the major insights or takeaways from the graphic. Multiple paragraphs are allowed.",
required=False,
),
),
]
),
),
("embed", wagtail.embeds.blocks.EmbedBlock()),
(
"migrated",
wagtail.core.blocks.RichTextBlock(
features=[
"h3",
"h4",
"bold",
"italic",
"link",
"ol",
"ul",
"hr",
"blockquote",
"document",
"superscript",
"subscript",
"strikethrough",
"code",
"image",
"embed",
],
icon="warning",
),
),
],
blank=True,
),
),
(
"attachments",
wagtail.core.fields.StreamField(
[
(
"document",
wagtail.documents.blocks.DocumentChooserBlock(),
),
(
"link",
wagtail.snippets.blocks.SnippetChooserBlock(
"cdhpages.ExternalAttachment"
),
),
],
blank=True,
),
),
("tagline", models.CharField(max_length=255)),
(
"header_image",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to="wagtailimages.Image",
),
),
],
options={
"abstract": False,
},
bases=("wagtailcore.page",),
),
migrations.CreateModel(
name="ExternalAttachment",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("url", models.URLField()),
("title", models.CharField(max_length=255)),
(
"author",
models.CharField(
blank=True,
help_text="Citation or list of authors",
max_length=255,
),
),
("created_at", models.DateTimeField(auto_now_add=True)),
(
"collection",
models.ForeignKey(
default=wagtail.core.models.get_root_collection_id,
on_delete=django.db.models.deletion.CASCADE,
related_name="+",
to="wagtailcore.Collection",
verbose_name="collection",
),
),
(
"tags",
taggit.managers.TaggableManager(
blank=True,
help_text="A comma-separated list of tags.",
through="taggit.TaggedItem",
to="taggit.Tag",
verbose_name="Tags",
),
),
],
options={
"abstract": False,
},
bases=(wagtail.search.index.Indexed, models.Model),
),
]
| 44.249666 | 220 | 0.258245 | 1,401 | 33,143 | 6.009993 | 0.169165 | 0.052257 | 0.056532 | 0.053444 | 0.783492 | 0.765796 | 0.74133 | 0.736936 | 0.72696 | 0.687055 | 0 | 0.006824 | 0.677247 | 33,143 | 748 | 221 | 44.308824 | 0.780312 | 0.001388 | 0 | 0.767881 | 1 | 0.005398 | 0.113733 | 0.006617 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016194 | 0 | 0.021592 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
63d91af3cde7a955db70a49f6272f60332f29ad4 | 5,847 | py | Python | test/test_command_line.py | caja-matematica/chimera-embedding | 637a3c1823d608e24c04ee355ae43b2590388216 | [
"Apache-2.0"
] | 26 | 2016-05-13T22:17:13.000Z | 2022-01-16T16:48:44.000Z | test/test_command_line.py | caja-matematica/chimera-embedding | 637a3c1823d608e24c04ee355ae43b2590388216 | [
"Apache-2.0"
] | 1 | 2019-06-20T16:49:00.000Z | 2019-06-20T16:49:00.000Z | test/test_command_line.py | caja-matematica/chimera-embedding | 637a3c1823d608e24c04ee355ae43b2590388216 | [
"Apache-2.0"
] | 23 | 2016-10-14T18:08:53.000Z | 2022-01-16T16:48:57.000Z | """
This test is written for command line handling.
For a starting place just tries to check if all six of the major entry
points get reached in very simple situations.
"""
input_data = """
0 4
0 5
0 6
0 7
1 4
1 5
1 6
1 7
2 4
2 5
2 6
2 7
3 4
3 5
3 6
3 7
"""
if __name__ == '__main__':
import runpy
import mock
import sys
import os
sys.path.append(os.path.abspath('bin'))
sys.path.append(os.path.abspath('src'))
from chimera_embedding import processor
# Largest native clique with no other options given
proc = mock.MagicMock(processor)
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().largestNativeClique() in proc.mock_calls)
# Call nativeCliqueEmbed with chain length argument - 1
proc.reset_mock()
m.reset_mock()
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --chainlength=10'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().nativeCliqueEmbed(9) in proc.mock_calls)
# Call tightestNativeClique with clique size provided
proc.reset_mock()
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --cliquesize 4'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().tightestNativeClique(4) in proc.mock_calls)
# Call largestNativeBiClique
proc.reset_mock()
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --bipartite'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().largestNativeBiClique() in proc.mock_calls)
# Call tightestNativeBiClique
proc.reset_mock()
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --bipartite --cliquesize 4'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().tightestNativeBiClique(4, m=4, chain_imbalance=None, max_chain_length=None) in proc.mock_calls)
# Call tightestNativeBiClique again
proc.reset_mock()
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --bipartite --cliquesize 4 7 --chainlength 80'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().tightestNativeBiClique(4, m=7, chain_imbalance=None, max_chain_length=80) in proc.mock_calls)
# Call largestNativeBiClique
proc.reset_mock()
m = mock.mock_open(read_data=input_data)
try:
with mock.patch('chimera_embedding.processor', proc):
with mock.patch('__main__.open', m) as mock_open:
with mock.patch('argparse.open', m) as mock_open:
sys.argv = [sys.argv[0]] + '-i test_file.dat -o not_the_same_name.dat --bipartite --chainlength 6'.split(' ')
runpy.run_module('nativeclique', run_name='__main__')
except Exception as error:
# print error
pass
m.assert_any_call('test_file.dat', 'r', -1)
m.assert_any_call('not_the_same_name.dat', 'w', -1)
assert(mock.call().largestNativeBiClique(max_chain_length=6, chain_imbalance=None) in proc.mock_calls)
| 37.722581 | 147 | 0.634684 | 831 | 5,847 | 4.193742 | 0.138387 | 0.048207 | 0.078336 | 0.044189 | 0.835868 | 0.818364 | 0.76155 | 0.76155 | 0.76155 | 0.76155 | 0 | 0.016173 | 0.238584 | 5,847 | 154 | 148 | 37.967532 | 0.766622 | 0.089277 | 0 | 0.641026 | 0 | 0 | 0.242063 | 0.09127 | 0 | 0 | 0 | 0 | 0.179487 | 1 | 0 | false | 0.059829 | 0.042735 | 0 | 0.042735 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
12211da696cc3b233556cf9b90a4a42fc418cdf5 | 6,015 | py | Python | command_history.py | maxdignan/DataMiningProject | 7348408ed1b9b99a666a690f65ad2da0682e3e8e | [
"MIT"
] | null | null | null | command_history.py | maxdignan/DataMiningProject | 7348408ed1b9b99a666a690f65ad2da0682e3e8e | [
"MIT"
] | null | null | null | command_history.py | maxdignan/DataMiningProject | 7348408ed1b9b99a666a690f65ad2da0682e3e8e | [
"MIT"
] | null | null | null | from exchanges.bitfinex import Bitfinex
Bitfinex.get_current_price
Bitfinex.get_current_price()
Bitfinex().get_current_price()
File "<stdin>", line 1, in <module>
from exchanges.bitstamp import Bitstamp
Bitstamp().get_current_price()
Bitfinex().get_current_price()
import sqlite3
conn = sqlite3.connect('ex.db')
c = conn.cursor()
c.execute("CREATE TABLE btc (data integer, bitstamp real, bitfinex real, okcoin real, huobi real, coinapult real);")
c.execute("INSERT INTO btc VALUES (1000, 6333.33, 633.33, 43443.4, 43.4, 5);")
c.execute("SELECT * FROM btc;")
c
c.rowcount()
c.rowcount
c.fetchone()
print(c.execute("SELECT * FROM btc;"))
import time
time.time()
time.time() * 10000000
time.time() * 1000000
time.time() * 10000000
while True:
sleep(5)
c.execute("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", time.time() * 10000000, Bitstamp.get_current_price(), Bitfinex.get_current_price(), 10,10,10)
c.execute("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", time.time() * 10000000, Bitstamp.get_current_price(), Bitfinex.get_current_price(), 10,10,10)
c.execute("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", time.time() * 10000000, Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10)
c.execute("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", [time.time() * 10000000, Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10])
c.execute("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", (time.time() * 10000000, Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10))
c.executemany("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", [(time.time() * 10000000, Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10)])
c.executemany("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", [(time.time(), Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10)])
calendar.timegm()
import calendar
calendar.timegm(time.strptime('Jul 9, 2009 @ 20:02:58 UTC', '%b %d, %Y @ %H:%M:%S UTC'))
calendar.timegm(time.strptime(time, '%b %d, %Y @ %H:%M:%S UTC'))
calendar.timegm(time.strptime(time.now(), '%b %d, %Y @ %H:%M:%S UTC'))
calendar.timegm(time.strptime(time.time(), '%b %d, %Y @ %H:%M:%S UTC'))
calendar.timegm()
calendar.timegm(())
int(time.time())
c.executemany("INSERT INTO btc VALUES (?, ?, ?, ?, ?, ?);", [(int(time.time()), Bitstamp().get_current_price(), Bitfinex().get_current_price(), 10,10,10)])
c.fetchone()
c.fetchall()
while True:
time.sleep(5)
print({time: int(time.time()), bitfinex: Bitfinex().get_current_price(), bitstamp: Bitstamp().get_current_price()})
while True:
time.sleep(5)
d = dict()
d["time"] = int(time.time())
d["bitfinex"] = Bitfinex().get_current_price()
d["bitstamp"] = Bitstamp().get_current_price()
while True:
time.sleep(5)
d = dict()
d["time"] = int(time.time())
d["bitfinex"] = Bitfinex().get_current_price()
d["bitstamp"] = Bitstamp().get_current_price()
print(d)
from exchanges.okcoin import OKCoin
OKCoin
from exchanges.huobi import Huobi
from exchanges.coinapult import Coinapult
import csv
fields = ["epoch seconds", "bitstamp", "bitfinex", "okcoin", "huobi", "coinapult"]
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow(fields)
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time(), Bitfinex().get_current_price(), Bitstamp().get_current_price(), OKCoin().get_current_price(), Huobi().get_current_price(), Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time(), Bitfinex().get_current_price(), Bitstamp().get_current_price(), OKCoin().get_current_price(), Huobi().get_current_price(), Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time(), Bitfinex().get_current_price(), Bitstamp().get_current_price(), Null, Huobi().get_current_price(), Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time(), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', Huobi().get_current_price(), Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time(), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time()), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price())])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time()), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price()])
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
while True:
time.sleep(5)
writer.writerow([int(time.time()), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price()])
while True:
time.sleep(5)
with open("btc_prices.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time()), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price()])
while True:
time.sleep(5)
with open("btc_prices2.csv", "a") as f:
writer = csv.writer(f)
writer.writerow([int(time.time()), Bitfinex().get_current_price(), Bitstamp().get_current_price(), 'null', 'null', Coinapult().get_current_price()])
with open("btc_prices2.csv", 'r') as f:
reader = csv.reader(f)
with open("btc_prices2.csv", 'r') as f:
reader = csv.reader(f)
reader.read()  # AttributeError: csv.reader objects have no read(); iterate over them instead
with open("btc_prices2.csv", 'r') as f:
reader = csv.reader(f)
print(len(reader))  # TypeError: csv.reader has no len(); count rows by iterating
with open("btc_prices2.csv", 'r') as f:
reader = csv.reader(f)
for line in reader:
print(line)
count = 0
with open("btc_prices2.csv", 'r') as f:
reader = csv.reader(f)
for line in reader:
count += 1
count
1495 * 5  # 1495 rows at 5 s intervals -> 7475 s of data
1495 * 5 / 60  # ~124.6 minutes
%save
save
exit()
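The session above converges on a working logger. A consolidated sketch of the same loop follows; the exchange client classes from the session are replaced here by generic fetcher callables, which is an assumption, and a failed fetch falls back to the session's 'null' placeholder:

```python
import csv
import time

def log_prices(path, fetchers, n_samples, interval_s):
    """Append one timestamped CSV row per sample; 'null' marks a failed
    fetch, mirroring the placeholders used in the session above."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_samples):
            row = [int(time.time())]
            for fetch in fetchers:
                try:
                    row.append(fetch())
                except Exception:
                    row.append("null")  # exchange unavailable
            writer.writerow(row)
            time.sleep(interval_s)
```

Opening the file once and looping inside the `with` block, as the session's later cells do, avoids reopening the file every five seconds.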
# ---- FILE: head_Force/motion_ecoli_torque.py | repo: pcmagic/stokes_flow | license: MIT ----
# coding=utf-8
import sys
import petsc4py
petsc4py.init(sys.argv)
import numpy as np
from time import time
from scipy.io import savemat
# from src.stokes_flow import problem_dic, obj_dic
from petsc4py import PETSc
from src import stokes_flow as sf
from src.myio import *
from src.objComposite import *
# from src.myvtk import save_singleEcoli_vtk
import codeStore.ecoli_common as ec
# import import_my_lib
# Todo: rewrite input and print process.
def get_problem_kwargs(**main_kwargs):
OptDB = PETSc.Options()
fileHandle = OptDB.getString('f', 'motion_ecoli_torque')
OptDB.setValue('f', fileHandle)
problem_kwargs = ec.get_problem_kwargs()
problem_kwargs['fileHandle'] = fileHandle
ini_rot_theta = OptDB.getReal('ini_rot_theta', 0)
ini_rot_phi = OptDB.getReal('ini_rot_phi', 0)
problem_kwargs['ini_rot_theta'] = ini_rot_theta
problem_kwargs['ini_rot_phi'] = ini_rot_phi
ecoli_velocity = OptDB.getReal('ecoli_velocity', 1)
problem_kwargs['ecoli_velocity'] = ecoli_velocity
kwargs_list = (get_shearFlow_kwargs(), get_update_kwargs(), main_kwargs,)
for t_kwargs in kwargs_list:
for key in t_kwargs:
problem_kwargs[key] = t_kwargs[key]
# vtk_matname = OptDB.getString('vtk_matname', 'pipe_dbg')
# t_path = os.path.dirname(os.path.abspath(__file__))
# vtk_matname = os.path.normpath(os.path.join(t_path, vtk_matname))
# problem_kwargs['vtk_matname'] = vtk_matname
return problem_kwargs
def print_case_info(**problem_kwargs):
caseIntro = '-->Ecoli in an infinite shear flow: given swimming speed, torque-free case. '
ec.print_case_info(caseIntro, **problem_kwargs)
ecoli_velocity = problem_kwargs['ecoli_velocity']
PETSc.Sys.Print(' ecoli_velocity %f' % ecoli_velocity)
print_update_info(**problem_kwargs)
print_shearFlow_info(**problem_kwargs)
ini_rot_theta = problem_kwargs['ini_rot_theta']
ini_rot_phi = problem_kwargs['ini_rot_phi']
PETSc.Sys.Print(' ini_rot_theta: %f, ini_rot_phi: %f ' % (ini_rot_theta, ini_rot_phi))
return True
# @profile
def main_fun(**main_kwargs):
comm = PETSc.COMM_WORLD.tompi4py()
rank = comm.Get_rank()
# # dbg
# main_kwargs['ecoli_velocity'] = -1.75439131e-02
# # main_kwargs['ffweightx'] = 1
# # main_kwargs['ffweighty'] = 1
# # main_kwargs['ffweightz'] = 1
# # main_kwargs['ffweightT'] = 1
# main_kwargs['max_iter'] = 1
problem_kwargs = get_problem_kwargs(**main_kwargs)
print_case_info(**problem_kwargs)
fileHandle = problem_kwargs['fileHandle']
max_iter = problem_kwargs['max_iter']
eval_dt = problem_kwargs['eval_dt']
ecoli_velocity = problem_kwargs['ecoli_velocity']
iter_tor = 1e-1
if not problem_kwargs['restart']:
# create ecoli
ecoli_comp = create_ecoli_2part(**problem_kwargs)
# create check obj
check_kwargs = problem_kwargs.copy()
check_kwargs['nth'] = problem_kwargs['nth'] - 2 if problem_kwargs['nth'] >= 10 else problem_kwargs['nth'] + 1
check_kwargs['ds'] = problem_kwargs['ds'] * 1.2
check_kwargs['hfct'] = 1
check_kwargs['Tfct'] = 1
ecoli_comp_check = create_ecoli_2part(**check_kwargs)
head_rel_U = ecoli_comp.get_rel_U_list()[0]
tail_rel_U = ecoli_comp.get_rel_U_list()[1]
problem = sf.ShearFlowForceFreeIterateProblem(**problem_kwargs)
problem.add_obj(ecoli_comp)
problem.set_iterate_comp(ecoli_comp)
problem.print_info()
problem_ff = sf.ShearFlowForceFreeProblem(**problem_kwargs)
problem_ff.add_obj(ecoli_comp)
planeShearRate = problem.get_planeShearRate()
# calculate torque
t2 = time()
PETSc.Sys.Print(' ')
PETSc.Sys.Print('############################ Current loop %05d / %05d ############################' %
(0, max_iter))
PETSc.Sys.Print('calculate the motor spin of the ecoli that keeps |ref_U|==ecoli_velocity in free space')
# 1) ini guess
problem_ff.set_planeShearRate(np.zeros(3))
problem.set_planeShearRate(np.zeros(3))
problem_ff.create_matrix()
problem_ff.solve()
ref_U = ecoli_comp.get_ref_U()
fct = ecoli_velocity / np.linalg.norm(ref_U[:3])
PETSc.Sys.Print(' ini ref_U in free space', ref_U * fct)
# 2) optimize force and torque free
problem.create_matrix()
ref_U, _, _ = problem.do_iterate2(ini_refU1=ref_U, tolerate=iter_tor)
# 3) check accuracy of forces.
ecoli_comp_check.set_rel_U_list([head_rel_U, tail_rel_U])
ecoli_comp_check.set_ref_U(ref_U)
velocity_err_list = problem.vtk_check(fileHandle, ecoli_comp_check)
PETSc.Sys.Print('velocity error of head (total, x, y, z): ', next(velocity_err_list))
PETSc.Sys.Print('velocity error of tail (total, x, y, z): ', next(velocity_err_list))
# 4) set parameters
fct = ecoli_velocity / np.linalg.norm(ref_U[:3])
ecoli_comp.set_rel_U_list([head_rel_U * fct, tail_rel_U * fct])
ecoli_comp.set_ref_U(ref_U * fct)
ecoli_comp_check.set_rel_U_list([head_rel_U * fct, tail_rel_U * fct])
ecoli_comp_check.set_ref_U(ref_U * fct)
problem.set_planeShearRate(planeShearRate)
problem_ff.set_planeShearRate(planeShearRate)
# 5) save and print
if rank == 0:
idx = 0
ti = idx * eval_dt
savemat('%s_%05d.mat' % (fileHandle, idx), {
'ti': ti,
'planeShearRate': planeShearRate,
'ecoli_center': np.vstack(ecoli_comp.get_center()),
'ecoli_nodes': np.vstack([tobj.get_u_nodes() for tobj in ecoli_comp.get_obj_list()]),
'ecoli_f': np.hstack([np.zeros_like(tobj.get_force())
for tobj in ecoli_comp.get_obj_list()]).reshape(-1, 3),
'ecoli_u': np.hstack([np.zeros_like(tobj.get_re_velocity())
for tobj in ecoli_comp.get_obj_list()]).reshape(-1, 3),
'ecoli_norm': np.vstack(ecoli_comp.get_norm()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U()),
'tail_rel_U': np.vstack(ecoli_comp.get_rel_U_list()[1])}, oned_as='column', )
PETSc.Sys.Print(' ref_U in free space', ref_U * fct)
PETSc.Sys.Print(' |ref_U| in free space', np.linalg.norm(ref_U[:3]) * fct, np.linalg.norm(ref_U[3:]) * fct)
PETSc.Sys.Print(' tail_rel_U in free space', tail_rel_U * fct)
print_single_ecoli_force_result(ecoli_comp, prefix='', part='full', **problem_kwargs)
t3 = time()
PETSc.Sys.Print('#################### Current loop %05d / %05d uses: %08.3fs ####################' %
(0, max_iter, (t3 - t2)))
# evaluation loop
t0 = time()
for idx in range(1, max_iter + 1):
t2 = time()
PETSc.Sys.Print()
PETSc.Sys.Print('############################ Current loop %05d / %05d ############################' %
(idx, max_iter))
# 1) ini guess
problem_ff.create_matrix()
problem_ff.solve()
ref_U = ecoli_comp.get_ref_U()
PETSc.Sys.Print(' ini ref_U in shear flow', ref_U)
# 2) optimize force and torque free
problem.create_matrix()
ref_U, _, _ = problem.do_iterate2(ini_refU1=ref_U, tolerate=iter_tor)
ecoli_comp.set_ref_U(ref_U)
# 3) check accuracy of forces.
ecoli_comp_check.set_ref_U(ref_U)
velocity_err_list = problem.vtk_check(fileHandle, ecoli_comp_check)
PETSc.Sys.Print('velocity error of head (total, x, y, z): ', next(velocity_err_list))
PETSc.Sys.Print('velocity error of tail (total, x, y, z): ', next(velocity_err_list))
# 4) save and print
if rank == 0:
ti = idx * eval_dt
savemat('%s_%05d.mat' % (fileHandle, idx), {
'ti': ti,
'planeShearRate': planeShearRate,
'ecoli_center': np.vstack(ecoli_comp.get_center()),
'ecoli_nodes': np.vstack([tobj.get_u_nodes() for tobj in ecoli_comp.get_obj_list()]),
'ecoli_f': np.hstack([tobj.get_force() for tobj in ecoli_comp.get_obj_list()]).reshape(-1,
3),
'ecoli_u': np.hstack([tobj.get_re_velocity() for tobj in ecoli_comp.get_obj_list()]
).reshape(-1, 3),
'ecoli_norm': np.vstack(ecoli_comp.get_norm()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U()),
'tail_rel_U': np.vstack(ecoli_comp.get_rel_U_list()[1])}, oned_as='column', )
print_single_ecoli_force_result(ecoli_comp, prefix='', part='full', **problem_kwargs)
# 5) update
problem.update_location(eval_dt, print_handle='%d / %d' % (idx, max_iter))
t3 = time()
PETSc.Sys.Print('#################### Current loop %05d / %05d uses: %08.3fs ####################' %
(idx, max_iter, (t3 - t2)))
t1 = time()
PETSc.Sys.Print('%s: ran %d loops in %fs' % (fileHandle, max_iter, (t1 - t0)))
problem.destroy()
if rank == 0:
savemat('%s.mat' % fileHandle,
{'ecoli_center': np.vstack(ecoli_comp.get_center_hist()),
'ecoli_norm': np.vstack(ecoli_comp.get_norm_hist()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U_hist()),
't': (np.arange(max_iter) + 1) * eval_dt},
oned_as='column')
else:
pass
return True
def main_fun_noIter(**main_kwargs):
comm = PETSc.COMM_WORLD.tompi4py()
rank = comm.Get_rank()
problem_kwargs = get_problem_kwargs(**main_kwargs)
print_case_info(**problem_kwargs)
fileHandle = problem_kwargs['fileHandle']
max_iter = problem_kwargs['max_iter']
eval_dt = problem_kwargs['eval_dt']
ecoli_velocity = problem_kwargs['ecoli_velocity']
ini_rot_theta = problem_kwargs['ini_rot_theta']
ini_rot_phi = problem_kwargs['ini_rot_phi']
iter_tor = 1e-3
if not problem_kwargs['restart']:
# create ecoli
ecoli_comp = create_ecoli_2part(**problem_kwargs)
ecoli_comp.node_rotation(np.array((0, 1, 0)), theta=ini_rot_theta)
ecoli_comp.node_rotation(np.array((0, 0, 1)), theta=ini_rot_phi)
head_rel_U = ecoli_comp.get_rel_U_list()[0]
tail_rel_U = ecoli_comp.get_rel_U_list()[1]
problem_ff = sf.ShearFlowForceFreeProblem(**problem_kwargs)
problem_ff.add_obj(ecoli_comp)
problem_ff.print_info()
problem = sf.ShearFlowForceFreeIterateProblem(**problem_kwargs)
problem.add_obj(ecoli_comp)
problem.set_iterate_comp(ecoli_comp)
planeShearRate = problem_ff.get_planeShearRate()
# calculate torque
t2 = time()
idx = 0
PETSc.Sys.Print(' ')
PETSc.Sys.Print('############################ Current loop %05d / %05d ############################' %
(idx, max_iter))
PETSc.Sys.Print('calculate the motor spin of the ecoli that keeps |ref_U|==ecoli_velocity in free space')
# 1) ini guess
problem_ff.set_planeShearRate(np.zeros(3))
problem.set_planeShearRate(np.zeros(3))
problem_ff.create_matrix()
problem_ff.solve()
ref_U = ecoli_comp.get_ref_U()
fct = ecoli_velocity / np.linalg.norm(ref_U[:3])
PETSc.Sys.Print(' ini ref_U in free space', ref_U * fct)
# 2) optimize force and torque free
problem.create_matrix()
# ref_U = problem.do_iterate3(ini_refU1=ref_U, tolerate=iter_tor)
# 4) set parameters
fct = ecoli_velocity / np.linalg.norm(ref_U[:3])
ecoli_comp.set_rel_U_list([head_rel_U * fct, tail_rel_U * fct])
ecoli_comp.set_ref_U(ref_U * fct)
problem_ff.set_planeShearRate(planeShearRate)
problem.set_planeShearRate(planeShearRate)
# 5) save and print
if rank == 0:
ti = idx * eval_dt
savemat('%s_%05d.mat' % (fileHandle, idx), {
'ti': ti,
'planeShearRate': planeShearRate,
'ecoli_center': np.vstack(ecoli_comp.get_center()),
'ecoli_nodes': np.vstack([tobj.get_u_nodes() for tobj in ecoli_comp.get_obj_list()]),
'ecoli_f': np.hstack([np.zeros_like(tobj.get_force())
for tobj in ecoli_comp.get_obj_list()]).reshape(-1, 3),
'ecoli_u': np.hstack([np.zeros_like(tobj.get_re_velocity())
for tobj in ecoli_comp.get_obj_list()]).reshape(-1, 3),
'ecoli_norm': np.vstack(ecoli_comp.get_norm()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U()),
'tail_rel_U': np.vstack(ecoli_comp.get_rel_U_list()[1])}, oned_as='column', )
PETSc.Sys.Print(' true ref_U in free space', ref_U * fct)
PETSc.Sys.Print(' true |ref_U| in free space', np.linalg.norm(ref_U[:3]) * fct,
np.linalg.norm(ref_U[3:]) * fct)
PETSc.Sys.Print(' Relative velocities of head and tail now in use: %s and %s' %
(str(head_rel_U * fct), str(tail_rel_U * fct)))
print_single_ecoli_force_result(ecoli_comp, prefix='', part='full', **problem_kwargs)
t3 = time()
PETSc.Sys.Print('#################### Current loop %05d / %05d uses: %08.3fs ####################' %
(0, max_iter, (t3 - t2)))
# evaluation loop
t0 = time()
for idx in range(1, max_iter + 1):
t2 = time()
PETSc.Sys.Print()
PETSc.Sys.Print('############################ Current loop %05d / %05d ############################' %
(idx, max_iter))
# 1) ini guess
problem_ff.create_matrix()
problem_ff.solve()
# 4) save and print
if rank == 0:
ti = idx * eval_dt
savemat('%s_%05d.mat' % (fileHandle, idx), {
'ti': ti,
'planeShearRate': planeShearRate,
'ecoli_center': np.vstack(ecoli_comp.get_center()),
'ecoli_nodes': np.vstack([tobj.get_u_nodes() for tobj in ecoli_comp.get_obj_list()]),
'ecoli_f': np.hstack([tobj.get_force() for tobj in ecoli_comp.get_obj_list()]
).reshape(-1, 3),
'ecoli_u': np.hstack([tobj.get_re_velocity() for tobj in ecoli_comp.get_obj_list()]
).reshape(-1, 3),
'ecoli_norm': np.vstack(ecoli_comp.get_norm()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U()),
'tail_rel_U': np.vstack(ecoli_comp.get_rel_U_list()[1])}, oned_as='column', )
print_single_ecoli_force_result(ecoli_comp, prefix='', part='full', **problem_kwargs)
# 5) update
problem_ff.update_location(eval_dt, print_handle='%d / %d' % (idx, max_iter))
t3 = time()
PETSc.Sys.Print('#################### Current loop %05d / %05d uses: %08.3fs ####################' %
(idx, max_iter, (t3 - t2)))
t1 = time()
PETSc.Sys.Print('%s: ran %d loops in %fs' % (fileHandle, max_iter, (t1 - t0)))
if rank == 0:
savemat('%s.mat' % fileHandle,
{'ecoli_center': np.vstack(ecoli_comp.get_center_hist()),
'ecoli_norm': np.vstack(ecoli_comp.get_norm_hist()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U_hist()),
't': (np.arange(max_iter) + 1) * eval_dt},
oned_as='column')
else:
pass
return True
def passive_fun_noIter(**main_kwargs):
comm = PETSc.COMM_WORLD.tompi4py()
rank = comm.Get_rank()
problem_kwargs = get_problem_kwargs(**main_kwargs)
print_case_info(**problem_kwargs)
fileHandle = problem_kwargs['fileHandle']
max_iter = problem_kwargs['max_iter']
eval_dt = problem_kwargs['eval_dt']
ini_rot_theta = problem_kwargs['ini_rot_theta']
ini_rot_phi = problem_kwargs['ini_rot_phi']
if not problem_kwargs['restart']:
# create ecoli
ecoli_comp = create_ecoli_2part(**problem_kwargs)
ecoli_comp.node_rotation(np.array((0, 1, 0)), theta=ini_rot_theta)
ecoli_comp.node_rotation(np.array((0, 0, 1)), theta=ini_rot_phi)
ecoli_comp.set_rel_U_list([np.zeros(6), np.zeros(6)])
problem_ff = sf.ShearFlowForceFreeProblem(**problem_kwargs)
problem_ff.add_obj(ecoli_comp)
problem_ff.print_info()
planeShearRate = problem_ff.get_planeShearRate()
# evaluation loop
t0 = time()
for idx in range(1, max_iter + 1):
t2 = time()
PETSc.Sys.Print()
PETSc.Sys.Print('############################ Current loop %05d / %05d ############################' %
(idx, max_iter))
# 1) ini guess
problem_ff.create_matrix()
problem_ff.solve()
ref_U = ecoli_comp.get_ref_U()
# 4) save and print
if rank == 0:
ti = idx * eval_dt
savemat('%s_%05d.mat' % (fileHandle, idx), {
'ti': ti,
'planeShearRate': planeShearRate,
'ecoli_center': np.vstack(ecoli_comp.get_center()),
'ecoli_nodes': np.vstack([tobj.get_u_nodes() for tobj in ecoli_comp.get_obj_list()]),
'ecoli_f': np.hstack([tobj.get_force() for tobj in ecoli_comp.get_obj_list()]).reshape(-1,
3),
'ecoli_u': np.hstack([tobj.get_re_velocity() for tobj in ecoli_comp.get_obj_list()]
).reshape(-1, 3),
'ecoli_norm': np.vstack(ecoli_comp.get_norm()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U()),
'tail_rel_U': np.vstack(ecoli_comp.get_rel_U_list()[1])}, oned_as='column', )
PETSc.Sys.Print(' true ref_U in free space', ref_U)
# 5) update
problem_ff.update_location(eval_dt, print_handle='%d / %d' % (idx, max_iter))
t3 = time()
PETSc.Sys.Print('#################### Current loop %05d / %05d uses: %08.3fs ####################' %
(idx, max_iter, (t3 - t2)))
t1 = time()
PETSc.Sys.Print('%s: ran %d loops in %fs' % (fileHandle, max_iter, (t1 - t0)))
if rank == 0:
savemat('%s.mat' % fileHandle,
{'ecoli_center': np.vstack(ecoli_comp.get_center_hist()),
'ecoli_norm': np.vstack(ecoli_comp.get_norm_hist()),
'ecoli_U': np.vstack(ecoli_comp.get_ref_U_hist()),
't': (np.arange(max_iter) + 1) * eval_dt},
oned_as='column')
else:
pass
return True
if __name__ == '__main__':
OptDB = PETSc.Options()
if OptDB.getBool('main_fun_noIter', False):
OptDB.setValue('main_fun', False)
main_fun_noIter()
if OptDB.getBool('passive_fun_noIter', False):
OptDB.setValue('main_fun', False)
passive_fun_noIter()
if OptDB.getBool('main_fun', True):
main_fun()
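A recurring step in the script above is `fct = ecoli_velocity / np.linalg.norm(ref_U[:3])` followed by scaling the reference and relative velocities by `fct`. Because Stokes flow is linear, rescaling the motor spin rescales the whole rigid-body velocity by the same factor. A minimal stdlib-only sketch of that rescaling; the 6-component layout [vx, vy, vz, wx, wy, wz] is taken from the code above:

```python
import math

def rescale_to_speed(ref_U, target_speed):
    """Scale a 6-component [v; omega] vector so the translational speed
    |v| equals target_speed; assumes linearity, as the script above does."""
    speed = math.sqrt(sum(u * u for u in ref_U[:3]))
    fct = target_speed / speed
    return [u * fct for u in ref_U], fct
```

The same factor is then applied to `head_rel_U` and `tail_rel_U`, keeping the head/tail kinematics consistent with the rescaled reference velocity.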
# ---- FILE: Science/Python-on-stream/src/str_types_alt.py | repo: peroff/8-Bit-Tea-Party | license: MIT ----
def str_false():
return "False"
pass
# ---- FILE: 10_light/eg_10_01_light_manual.py | repo: byrobot-python/e_drone_examples | license: MIT ----
from time import sleep
from e_drone.drone import *
from e_drone.protocol import *
if __name__ == '__main__':
drone = Drone()
drone.open()
drone.send_light_manual(DeviceType.CONTROLLER, 0xFF, 0)
sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000011, 10); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000011, 100); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000011, 0); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000110, 10); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000110, 100); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000110, 0); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000101, 10); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000101, 100); sleep(1)
drone.send_light_manual(DeviceType.CONTROLLER, 0b00000101, 0); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00000110, 10); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00000110, 100); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00000110, 0); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001100, 10); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001100, 100); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001100, 0); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001010, 10); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001010, 100); sleep(1)
drone.send_light_manual(DeviceType.DRONE, 0b00001010, 0); sleep(1)
drone.close()
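The eighteen `send_light_manual` calls above follow one fixed pattern: for each device, three LED bitmasks, each driven to brightness 10, 100, then 0. A sketch that generates the same (device, mask, brightness) schedule; plain strings stand in for the `DeviceType` members here, which is an assumption:

```python
def light_schedule(controller_masks, drone_masks, levels=(10, 100, 0)):
    """Yield (device, mask, brightness) tuples in the same order as the
    example's explicit calls."""
    for device, masks in (("CONTROLLER", controller_masks),
                          ("DRONE", drone_masks)):
        for mask in masks:
            for level in levels:
                yield (device, mask, level)
```

Feeding each tuple to `drone.send_light_manual(...)` followed by `sleep(1)` would reproduce the sequence above without the repetition.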
# ---- FILE: api/v1/templates/__init__.py | repo: UCCNetsoc/cloud | license: BSD-3-Clause ----
from . import email
from . import sshd
# ---- FILE: examples/distance.py | repo: matunda007/geolocation-python | license: BSD-3-Clause ----
# -*- coding: utf-8 -*-
from geolocation.main import GoogleMaps
from geolocation.distance_matrix.client import DistanceMatrixApiClient
if __name__ == "__main__":
origins = ['rybnik', 'oslo']
destinations = ['zagrzeb']
google_maps = GoogleMaps(api_key='your_google_maps_key')
items = google_maps.distance(origins, destinations).all() # default mode parameter is const.MODE_DRIVING
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration) # it returns str
print('duration datetime: %s' % item.duration.datetime) # it returns datetime
# you can also get items from duration
print('duration days: %s' % item.duration.days)
print('duration hours: %s' % item.duration.hours)
print('duration minutes: %s' % item.duration.minutes)
print('duration seconds: %s' % item.duration.seconds)
items = google_maps.distance(origins, destinations, DistanceMatrixApiClient.MODE_BICYCLING).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
items = google_maps.distance(origins, destinations, DistanceMatrixApiClient.MODE_WALKING).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
items = google_maps.distance(origins, destinations, DistanceMatrixApiClient.MODE_TRANSIT).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
items = google_maps.distance(origins, destinations, avoid=DistanceMatrixApiClient.AVOID_HIGHWAYS).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
items = google_maps.distance(origins, destinations, avoid=DistanceMatrixApiClient.AVOID_FERRIES).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
items = google_maps.distance(origins, destinations, avoid=DistanceMatrixApiClient.AVOID_TOLLS).all()
for item in items:
print('origin: %s' % item.origin)
print('destination: %s' % item.destination)
print('km: %s' % item.distance.kilometers)
print('m: %s' % item.distance.meters)
print('miles: %s' % item.distance.miles)
print('duration: %s' % item.duration)
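Each loop above prints the same six fields per item; a small helper returning those lines for one item would remove the duplication. In the sketch below, `SimpleNamespace` stubs stand in for the API's distance-matrix item objects, which is an assumption:

```python
from types import SimpleNamespace

def describe(item):
    """Return the lines each loop in the example prints for one item."""
    return [
        'origin: %s' % item.origin,
        'destination: %s' % item.destination,
        'km: %s' % item.distance.kilometers,
        'm: %s' % item.distance.meters,
        'miles: %s' % item.distance.miles,
        'duration: %s' % item.duration,
    ]
```

Each travel-mode block could then be reduced to `for item in items: print('\n'.join(describe(item)))`.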
# ---- FILE: loldib/getratings/models/NA/na_varus/na_varus_mid.py | repo: koliupy/loldib | license: Apache-2.0 ----
from getratings.models.ratings import Ratings
class NA_Varus_Mid_Aatrox(Ratings):
pass
class NA_Varus_Mid_Ahri(Ratings):
pass
class NA_Varus_Mid_Akali(Ratings):
pass
class NA_Varus_Mid_Alistar(Ratings):
pass
class NA_Varus_Mid_Amumu(Ratings):
pass
class NA_Varus_Mid_Anivia(Ratings):
pass
class NA_Varus_Mid_Annie(Ratings):
pass
class NA_Varus_Mid_Ashe(Ratings):
pass
class NA_Varus_Mid_AurelionSol(Ratings):
pass
class NA_Varus_Mid_Azir(Ratings):
pass
class NA_Varus_Mid_Bard(Ratings):
pass
class NA_Varus_Mid_Blitzcrank(Ratings):
pass
class NA_Varus_Mid_Brand(Ratings):
pass
class NA_Varus_Mid_Braum(Ratings):
pass
class NA_Varus_Mid_Caitlyn(Ratings):
pass
class NA_Varus_Mid_Camille(Ratings):
pass
class NA_Varus_Mid_Cassiopeia(Ratings):
pass
class NA_Varus_Mid_Chogath(Ratings):
pass
class NA_Varus_Mid_Corki(Ratings):
pass
class NA_Varus_Mid_Darius(Ratings):
pass
class NA_Varus_Mid_Diana(Ratings):
pass
class NA_Varus_Mid_Draven(Ratings):
pass
class NA_Varus_Mid_DrMundo(Ratings):
pass
class NA_Varus_Mid_Ekko(Ratings):
pass
class NA_Varus_Mid_Elise(Ratings):
pass
class NA_Varus_Mid_Evelynn(Ratings):
pass
class NA_Varus_Mid_Ezreal(Ratings):
pass
class NA_Varus_Mid_Fiddlesticks(Ratings):
pass
class NA_Varus_Mid_Fiora(Ratings):
pass
class NA_Varus_Mid_Fizz(Ratings):
pass
class NA_Varus_Mid_Galio(Ratings):
pass
class NA_Varus_Mid_Gangplank(Ratings):
pass
class NA_Varus_Mid_Garen(Ratings):
pass
class NA_Varus_Mid_Gnar(Ratings):
pass
class NA_Varus_Mid_Gragas(Ratings):
pass
class NA_Varus_Mid_Graves(Ratings):
pass
class NA_Varus_Mid_Hecarim(Ratings):
pass
class NA_Varus_Mid_Heimerdinger(Ratings):
pass
class NA_Varus_Mid_Illaoi(Ratings):
pass
class NA_Varus_Mid_Irelia(Ratings):
pass
class NA_Varus_Mid_Ivern(Ratings):
pass
class NA_Varus_Mid_Janna(Ratings):
pass
class NA_Varus_Mid_JarvanIV(Ratings):
pass
class NA_Varus_Mid_Jax(Ratings):
pass
class NA_Varus_Mid_Jayce(Ratings):
pass
class NA_Varus_Mid_Jhin(Ratings):
pass
class NA_Varus_Mid_Jinx(Ratings):
pass
class NA_Varus_Mid_Kalista(Ratings):
pass
class NA_Varus_Mid_Karma(Ratings):
pass
class NA_Varus_Mid_Karthus(Ratings):
pass
class NA_Varus_Mid_Kassadin(Ratings):
pass
class NA_Varus_Mid_Katarina(Ratings):
pass
class NA_Varus_Mid_Kayle(Ratings):
pass
class NA_Varus_Mid_Kayn(Ratings):
pass
class NA_Varus_Mid_Kennen(Ratings):
pass
class NA_Varus_Mid_Khazix(Ratings):
pass
class NA_Varus_Mid_Kindred(Ratings):
pass
class NA_Varus_Mid_Kled(Ratings):
pass
class NA_Varus_Mid_KogMaw(Ratings):
pass
class NA_Varus_Mid_Leblanc(Ratings):
pass
class NA_Varus_Mid_LeeSin(Ratings):
pass
class NA_Varus_Mid_Leona(Ratings):
pass
class NA_Varus_Mid_Lissandra(Ratings):
pass
class NA_Varus_Mid_Lucian(Ratings):
pass
class NA_Varus_Mid_Lulu(Ratings):
pass
class NA_Varus_Mid_Lux(Ratings):
pass
class NA_Varus_Mid_Malphite(Ratings):
pass
class NA_Varus_Mid_Malzahar(Ratings):
pass
class NA_Varus_Mid_Maokai(Ratings):
pass
class NA_Varus_Mid_MasterYi(Ratings):
pass
class NA_Varus_Mid_MissFortune(Ratings):
pass
class NA_Varus_Mid_MonkeyKing(Ratings):
pass
class NA_Varus_Mid_Mordekaiser(Ratings):
pass
class NA_Varus_Mid_Morgana(Ratings):
pass
class NA_Varus_Mid_Nami(Ratings):
pass
class NA_Varus_Mid_Nasus(Ratings):
pass
class NA_Varus_Mid_Nautilus(Ratings):
pass
class NA_Varus_Mid_Nidalee(Ratings):
pass
class NA_Varus_Mid_Nocturne(Ratings):
pass
class NA_Varus_Mid_Nunu(Ratings):
pass
class NA_Varus_Mid_Olaf(Ratings):
pass
class NA_Varus_Mid_Orianna(Ratings):
pass
class NA_Varus_Mid_Ornn(Ratings):
pass
class NA_Varus_Mid_Pantheon(Ratings):
pass
class NA_Varus_Mid_Poppy(Ratings):
pass
class NA_Varus_Mid_Quinn(Ratings):
pass
class NA_Varus_Mid_Rakan(Ratings):
pass
class NA_Varus_Mid_Rammus(Ratings):
pass
class NA_Varus_Mid_RekSai(Ratings):
pass
class NA_Varus_Mid_Renekton(Ratings):
pass
class NA_Varus_Mid_Rengar(Ratings):
pass
class NA_Varus_Mid_Riven(Ratings):
pass
class NA_Varus_Mid_Rumble(Ratings):
pass
class NA_Varus_Mid_Ryze(Ratings):
pass
class NA_Varus_Mid_Sejuani(Ratings):
pass
class NA_Varus_Mid_Shaco(Ratings):
pass
class NA_Varus_Mid_Shen(Ratings):
pass
class NA_Varus_Mid_Shyvana(Ratings):
pass
class NA_Varus_Mid_Singed(Ratings):
pass
class NA_Varus_Mid_Sion(Ratings):
pass
class NA_Varus_Mid_Sivir(Ratings):
pass
class NA_Varus_Mid_Skarner(Ratings):
pass
class NA_Varus_Mid_Sona(Ratings):
pass
class NA_Varus_Mid_Soraka(Ratings):
pass
class NA_Varus_Mid_Swain(Ratings):
pass
class NA_Varus_Mid_Syndra(Ratings):
pass
class NA_Varus_Mid_TahmKench(Ratings):
pass
class NA_Varus_Mid_Taliyah(Ratings):
pass
class NA_Varus_Mid_Talon(Ratings):
pass
class NA_Varus_Mid_Taric(Ratings):
pass
class NA_Varus_Mid_Teemo(Ratings):
pass
class NA_Varus_Mid_Thresh(Ratings):
pass
class NA_Varus_Mid_Tristana(Ratings):
pass
class NA_Varus_Mid_Trundle(Ratings):
pass
class NA_Varus_Mid_Tryndamere(Ratings):
pass
class NA_Varus_Mid_TwistedFate(Ratings):
pass
class NA_Varus_Mid_Twitch(Ratings):
pass
class NA_Varus_Mid_Udyr(Ratings):
pass
class NA_Varus_Mid_Urgot(Ratings):
pass
class NA_Varus_Mid_Varus(Ratings):
pass
class NA_Varus_Mid_Vayne(Ratings):
pass
class NA_Varus_Mid_Veigar(Ratings):
pass
class NA_Varus_Mid_Velkoz(Ratings):
pass
class NA_Varus_Mid_Vi(Ratings):
pass
class NA_Varus_Mid_Viktor(Ratings):
pass
class NA_Varus_Mid_Vladimir(Ratings):
pass
class NA_Varus_Mid_Volibear(Ratings):
pass
class NA_Varus_Mid_Warwick(Ratings):
pass
class NA_Varus_Mid_Xayah(Ratings):
pass
class NA_Varus_Mid_Xerath(Ratings):
pass
class NA_Varus_Mid_XinZhao(Ratings):
pass
class NA_Varus_Mid_Yasuo(Ratings):
pass
class NA_Varus_Mid_Yorick(Ratings):
pass
class NA_Varus_Mid_Zac(Ratings):
pass
class NA_Varus_Mid_Zed(Ratings):
pass
class NA_Varus_Mid_Ziggs(Ratings):
pass
class NA_Varus_Mid_Zilean(Ratings):
pass
class NA_Varus_Mid_Zyra(Ratings):
pass
# -*- coding: utf-8 -*-
# Generated by Django 1.11.8 on 2019-05-07 15:45
from __future__ import unicode_literals
from decimal import Decimal
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('ledger', '0055_pgsql_constraints'),
]
operations = [
migrations.AlterField(
model_name='activestate',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='challenge',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='deposit',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='exclusivebalanceallotment',
name='active_state',
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='exclusivebalanceallotment',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='minimumavailablebalancemarker',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='rootcommitment',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='token',
name='address',
field=models.CharField(db_index=True, max_length=40, unique=True),
),
migrations.AlterField(
model_name='token',
name='trail',
field=models.IntegerField(db_index=True, unique=True, validators=[
django.core.validators.MinValueValidator(0)]),
),
migrations.AlterField(
model_name='transfer',
name='appended',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='transfer',
name='cancelled',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='transfer',
name='complete',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='transfer',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='transfer',
name='nonce',
field=models.DecimalField(blank=True, db_index=True, decimal_places=0, max_digits=80, null=True, validators=[
django.core.validators.MinValueValidator(Decimal('0'))]),
),
migrations.AlterField(
model_name='transfer',
name='passive',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='transfer',
name='position',
field=models.DecimalField(blank=True, db_index=True, decimal_places=0, max_digits=80, null=True, validators=[
django.core.validators.MinValueValidator(Decimal('0'))]),
),
migrations.AlterField(
model_name='transfer',
name='processed',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='transfer',
name='recipient_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='recipient_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='recipient_cancellation_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='recipient_cancellation_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='recipient_finalization_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='recipient_finalization_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='recipient_fulfillment_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='recipient_fulfillment_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='sender_active_state',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='sender_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='sender_cancellation_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='sender_cancellation_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='sender_finalization_active_state',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT,
related_name='sender_finalization_active_state', to='ledger.ActiveState'),
),
migrations.AlterField(
model_name='transfer',
name='voided',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AlterField(
model_name='wallet',
name='address',
field=models.CharField(db_index=True, max_length=40),
),
migrations.AlterField(
model_name='wallet',
name='registration_eon_number',
field=models.BigIntegerField(db_index=True, validators=[
django.core.validators.MinValueValidator(0)]),
),
migrations.AlterField(
model_name='wallet',
name='trail_identifier',
field=models.BigIntegerField(blank=True, db_index=True, null=True),
),
migrations.AlterField(
model_name='withdrawal',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
migrations.AlterField(
model_name='withdrawalrequest',
name='eon_number',
field=models.BigIntegerField(db_index=True),
),
]
# code-checked
# server-checked
import cv2
import numpy as np
import os
import random
import torch
from torch.utils import data
################################################################################
# KITTI:
################################################################################
class DatasetKITTIAugmentation(data.Dataset):
def __init__(self, kitti_depth_path, kitti_rgb_path, max_iters=None, crop_size=(352, 352)):
self.crop_h, self.crop_w = crop_size
self.kitti_depth_train_path = kitti_depth_path + "/train"
self.kitti_rgb_train_path = kitti_rgb_path + "/train"
train_dir_names = os.listdir(self.kitti_depth_train_path) # (contains "2011_09_26_drive_0001_sync" and so on)
self.examples = []
for dir_name in train_dir_names:
groundtruth_dir_path_02 = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/groundtruth/image_02"
file_ids_02 = os.listdir(groundtruth_dir_path_02) # (contains e.g. "0000000005.png" and so on)
for file_id in file_ids_02:
target_path = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/groundtruth/image_02/" + file_id
sparse_path = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/velodyne_raw/image_02/" + file_id
img_path = self.kitti_rgb_train_path + "/" + dir_name + "/image_02/data/" + file_id
example = {}
example["img_path"] = img_path
example["sparse_path"] = sparse_path
example["target_path"] = target_path
example["file_id"] = groundtruth_dir_path_02 + "/" + file_id
self.examples.append(example)
groundtruth_dir_path_03 = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/groundtruth/image_03"
file_ids_03 = os.listdir(groundtruth_dir_path_03) # (contains e.g. "0000000005.png" and so on)
for file_id in file_ids_03:
target_path = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/groundtruth/image_03/" + file_id
sparse_path = self.kitti_depth_train_path + "/" + dir_name + "/proj_depth/velodyne_raw/image_03/" + file_id
img_path = self.kitti_rgb_train_path + "/" + dir_name + "/image_03/data/" + file_id
example = {}
example["img_path"] = img_path
example["sparse_path"] = sparse_path
example["target_path"] = target_path
example["file_id"] = groundtruth_dir_path_03 + "/" + file_id
self.examples.append(example)
print ("DatasetKITTIAugmentation - num unique examples: %d" % len(self.examples))
if max_iters is not None:
self.examples = self.examples*int(np.ceil(float(max_iters)/len(self.examples)))
print ("DatasetKITTIAugmentation - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
sparse_path = example["sparse_path"]
target_path = example["target_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (375, 1242, 3), dtype: uint8) (or something close to (375, 1242))
sparse = cv2.imread(sparse_path, -1) # (shape: (375, 1242), dtype: uint16)
target = cv2.imread(target_path, -1) # (shape: (375, 1242), dtype: uint16)
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (sparse.shape)
# print (target.shape)
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("sparse", sparse)
# cv2.waitKey(0)
#
#
# cv2.imshow("target", target)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
# crop to the bottom center (352, 1216):
new_img_h = 352
new_img_w = 1216 # (this is the image size of all images in the selected val/test sets)
img_h = img.shape[0]
img_w = img.shape[1]
img = img[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216, 3))
sparse = sparse[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
target = target[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
# flip img, sparse and target along the vertical axis with 0.5 probability:
flip = np.random.randint(low=0, high=2)
if flip == 1:
img = cv2.flip(img, 1)
sparse = cv2.flip(sparse, 1)
target = cv2.flip(target, 1)
# select a random (crop_h, crop_w) crop:
img_h, img_w = sparse.shape
h_off = random.randint(0, img_h - self.crop_h)
w_off = random.randint(0, img_w - self.crop_w)
img = img[h_off:(h_off+self.crop_h), w_off:(w_off+self.crop_w)] # (shape: (crop_h, crop_w, 3))
sparse = sparse[h_off:(h_off+self.crop_h), w_off:(w_off+self.crop_w)] # (shape: (crop_h, crop_w))
target = target[h_off:(h_off+self.crop_h), w_off:(w_off+self.crop_w)] # (shape: (crop_h, crop_w))
# convert sparse and target to meters:
sparse = sparse/256.0
sparse = sparse.astype(np.float32)
target = target/256.0
target = target.astype(np.float32)
# convert img to grayscale:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (crop_h, crop_w))
img = img.astype(np.float32)
return (img.copy(), sparse.copy(), target.copy(), file_id)
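The bottom-center crop and random training crop done inline in `__getitem__` above can be sketched as standalone helpers. This is a minimal sketch; `bottom_center_crop` and `random_crop` are hypothetical names, not part of this module:

```python
import random

import numpy as np

def bottom_center_crop(arr, new_h=352, new_w=1216):
    # Keep the bottom new_h rows and the horizontally centered new_w columns,
    # mirroring the (352, 1216) crop applied above.
    h, w = arr.shape[0], arr.shape[1]
    return arr[h - new_h:h, int(w/2.0 - new_w/2.0):int(w/2.0 + new_w/2.0)]

def random_crop(arr, crop_h=352, crop_w=352):
    # Sample a random (crop_h, crop_w) window, as done for training patches.
    h, w = arr.shape[0], arr.shape[1]
    h_off = random.randint(0, h - crop_h)
    w_off = random.randint(0, w - crop_w)
    return arr[h_off:(h_off + crop_h), w_off:(w_off + crop_w)]

img = np.zeros((375, 1242, 3), dtype=np.uint8)  # a typical raw KITTI frame size
cropped = bottom_center_crop(img)
# cropped has shape (352, 1216, 3)
patch = random_crop(cropped)
# patch has shape (352, 352, 3)
```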
class DatasetKITTIVal(data.Dataset):
def __init__(self, kitti_depth_path):
self.kitti_depth_val_path = kitti_depth_path + "/depth_selection/val_selection_cropped"
img_dir = self.kitti_depth_val_path + "/image"
sparse_dir = self.kitti_depth_val_path + "/velodyne_raw"
target_dir = self.kitti_depth_val_path + "/groundtruth_depth"
img_ids = os.listdir(img_dir) # (contains "2011_09_26_drive_0002_sync_image_0000000005_image_02.png" and so on)
self.examples = []
for img_id in img_ids:
# (img_id == "2011_09_26_drive_0002_sync_image_0000000005_image_02.png" (e.g.))
img_path = img_dir + "/" + img_id
file_id_start, file_id_end = img_id.split("_sync_image_")
# (file_id_start == "2011_09_26_drive_0002")
# (file_id_end == "0000000005_image_02.png")
sparse_path = sparse_dir + "/" + file_id_start + "_sync_velodyne_raw_" + file_id_end
target_path = target_dir + "/" + file_id_start + "_sync_groundtruth_depth_" + file_id_end
example = {}
example["img_path"] = img_path
example["sparse_path"] = sparse_path
example["target_path"] = target_path
example["file_id"] = img_id
self.examples.append(example)
print ("DatasetKITTIVal - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
sparse_path = example["sparse_path"]
target_path = example["target_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (352, 1216, 3), dtype: uint8))
sparse = cv2.imread(sparse_path, -1) # (shape: (352, 1216), dtype: uint16)
target = cv2.imread(target_path, -1) # (shape: (352, 1216), dtype: uint16)
# convert sparse and target to meters:
sparse = sparse/256.0
sparse = sparse.astype(np.float32)
target = target/256.0
target = target.astype(np.float32)
# convert img to grayscale:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (352, 1216))
img = img.astype(np.float32)
return (img.copy(), sparse.copy(), target.copy(), file_id)
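The `/256.0` scaling above follows the KITTI depth convention: depth maps are stored as uint16 PNGs holding depth in meters times 256, with 0 marking pixels that carry no measurement. A minimal sketch of the conversion:

```python
import numpy as np

# uint16 raw values: 0 = no reading, 256 = 1 m, 5120 = 20 m.
raw = np.array([[0, 256, 5120]], dtype=np.uint16)
depth_m = (raw / 256.0).astype(np.float32)
# depth_m == [[0.0, 1.0, 20.0]]
valid = raw > 0  # mask of pixels that actually carry a measurement
```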
class DatasetKITTIValSeq(data.Dataset):
def __init__(self, kitti_depth_path, kitti_raw_path, seq="2011_09_26_drive_0002"):
kitti_depth_val_seq_path = kitti_depth_path + "/val/" + seq + "_sync"
sparse_dir = kitti_depth_val_seq_path + "/proj_depth/velodyne_raw/image_02"
target_dir = kitti_depth_val_seq_path + "/proj_depth/groundtruth/image_02"
seq_date = seq.split("_drive")[0] # (seq_date == "2011_09_26")
img_dir = kitti_raw_path + "/" + seq_date + "/" + seq + "_sync/image_02/data"
self.ids = os.listdir(sparse_dir) # (contains "0000000005.png" and so on)
self.examples = []
for id in self.ids:
# (id == "0000000005.png" (e.g.))
img_path = img_dir + "/" + id
sparse_path = sparse_dir + "/" + id
target_path = target_dir + "/" + id
example = {}
example["img_path"] = img_path
example["sparse_path"] = sparse_path
example["target_path"] = target_path
example["file_id"] = id
self.examples.append(example)
print ("DatasetKITTIValSeq - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
sparse_path = example["sparse_path"]
target_path = example["target_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (375, 1242, 3), dtype: uint8) (or something close to (375, 1242))
sparse = cv2.imread(sparse_path, -1) # (shape: (375, 1242), dtype: uint16)
target = cv2.imread(target_path, -1) # (shape: (375, 1242), dtype: uint16)
# crop to the bottom center (352, 1216):
new_img_h = 352
new_img_w = 1216 # (this is the image size of all images in the selected val/test sets)
img_h = img.shape[0]
img_w = img.shape[1]
        img = img[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216, 3))
        sparse = sparse[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
        target = target[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
# convert sparse and target to meters:
sparse = sparse/256.0
sparse = sparse.astype(np.float32)
target = target/256.0
target = target.astype(np.float32)
# convert img to grayscale:
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (352, 1216))
img_gray = img_gray.astype(np.float32)
return (img_gray.copy(), sparse.copy(), target.copy(), file_id, img)
################################################################################
# virtualKITTI:
################################################################################
class DatasetVirtualKITTIAugmentation(data.Dataset):
def __init__(self, virtualkitti_path, max_iters=None, crop_size=(352, 352)):
self.crop_h, self.crop_w = crop_size
depthgt_path = virtualkitti_path + "/vkitti_1.3.1_depthgt"
rgb_path = virtualkitti_path + "/vkitti_1.3.1_rgb"
train_dir_names = ["0001", "0006", "0018", "0020"]
variation_dir_names = ["15-deg-left", "15-deg-right", "30-deg-left", "30-deg-right", "clone", "fog", "morning", "overcast", "rain", "sunset"]
self.examples = []
for train_dir_name in train_dir_names:
ids = os.listdir(depthgt_path + "/" + train_dir_name + "/clone") # (contains "00000.png" and so on)
for id in ids:
for variation_dir_name in variation_dir_names:
file_id = train_dir_name + "/" + variation_dir_name + "/" + id
img_path = rgb_path + "/" + file_id
gt_path = depthgt_path + "/" + file_id
example = {}
example["img_path"] = img_path
example["gt_path"] = gt_path
example["file_id"] = file_id
self.examples.append(example)
print ("DatasetVirtualKITTIAugmentation - num unique examples: %d" % len(self.examples))
if max_iters is not None:
self.examples = self.examples*int(np.ceil(float(max_iters)/len(self.examples)))
print ("DatasetVirtualKITTIAugmentation - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
gt_path = example["gt_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (375, 1242, 3), dtype: uint8) (or something close to (375, 1242))
gt = cv2.imread(gt_path, -1) # (shape: (375, 1242), dtype: uint16)
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (gt.shape)
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("gt", gt)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
# crop to the bottom center (352, 1216):
new_img_h = 352
new_img_w = 1216 # (this is the image size of all images in the selected val/test sets of kitti-depth)
img_h = img.shape[0]
img_w = img.shape[1]
img = img[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216, 3))
gt = gt[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
# flip img and gt along the vertical axis with 0.5 probability:
flip = np.random.randint(low=0, high=2)
if flip == 1:
img = cv2.flip(img, 1)
gt = cv2.flip(gt, 1)
# select a random (crop_h, crop_w) crop:
img_h, img_w = gt.shape
h_off = random.randint(0, img_h - self.crop_h)
w_off = random.randint(0, img_w - self.crop_w)
img = img[h_off:(h_off+self.crop_h), w_off:(w_off+self.crop_w)] # (shape: (crop_h, crop_w, 3))
gt = gt[h_off:(h_off+self.crop_h), w_off:(w_off+self.crop_w)] # (shape: (crop_h, crop_w))
# convert gt to meters:
gt = gt/100.0
gt = gt.astype(np.float32)
# convert img to grayscale:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (crop_h, crop_w))
# create sparse and target from gt:
max_distance = 80.0
prob_keep = 0.05
target = gt.copy()
target[target > max_distance] = 0
sparse = target.copy()
mask = np.random.binomial(1, prob_keep, sparse.shape)
sparse = mask*sparse
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (sparse.shape)
# print (target.shape)
#
# target = (target/max_distance)*255
# target = target.astype(np.uint8)
#
# sparse = (sparse/max_distance)*255
# sparse = sparse.astype(np.uint8)
#
# sparse_color = cv2.applyColorMap(sparse, cv2.COLORMAP_JET)
# sparse_color[sparse == 0] = 0
#
# target_color = cv2.applyColorMap(target, cv2.COLORMAP_JET)
# target_color[target == 0] = 0
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("sparse", sparse)
# cv2.waitKey(0)
# cv2.imshow("sparse_color", sparse_color)
# cv2.waitKey(0)
#
# cv2.imshow("target", target)
# cv2.waitKey(0)
# cv2.imshow("target_color", target_color)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
img = img.astype(np.float32)
sparse = sparse.astype(np.float32)
target = target.astype(np.float32)
return (img.copy(), sparse.copy(), target.copy(), file_id)
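The sparse-input simulation above (cap depths at `max_distance`, then keep each ground-truth pixel independently with probability `prob_keep` via a Bernoulli mask) can be sketched as a standalone helper. `simulate_sparse` is a hypothetical name, and the seeded RNG is only there to make the sketch deterministic; the dataset classes above use the global `np.random` state instead:

```python
import numpy as np

def simulate_sparse(gt, prob_keep=0.05, max_distance=80.0, seed=0):
    # Zero out depths beyond max_distance to form the dense target, then keep
    # each remaining pixel with probability prob_keep to form the sparse input.
    rng = np.random.RandomState(seed)
    target = gt.copy()
    target[target > max_distance] = 0
    mask = rng.binomial(1, prob_keep, target.shape)
    return (mask*target).astype(np.float32), target

gt = np.full((352, 1216), 90.0, dtype=np.float32)  # everything beyond 80 m
sparse, target = simulate_sparse(gt)
# both come back all-zero: every pixel exceeded max_distance
```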
class DatasetVirtualKITTIVal(data.Dataset):
def __init__(self, virtualkitti_path):
depthgt_path = virtualkitti_path + "/vkitti_1.3.1_depthgt"
rgb_path = virtualkitti_path + "/vkitti_1.3.1_rgb"
val_dir_names = ["0002"]
variation_dir_names = ["15-deg-left", "15-deg-right", "30-deg-left", "30-deg-right", "clone", "fog", "morning", "overcast", "rain", "sunset"]
self.examples = []
for val_dir_name in val_dir_names:
ids = os.listdir(depthgt_path + "/" + val_dir_name + "/clone") # (contains "00000.png" and so on)
for id in ids:
for variation_dir_name in variation_dir_names:
file_id = val_dir_name + "/" + variation_dir_name + "/" + id
img_path = rgb_path + "/" + file_id
gt_path = depthgt_path + "/" + file_id
example = {}
example["img_path"] = img_path
example["gt_path"] = gt_path
example["file_id"] = file_id
self.examples.append(example)
print ("DatasetVirtualKITTIVal - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
gt_path = example["gt_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (375, 1242, 3), dtype: uint8) (or something close to (375, 1242))
gt = cv2.imread(gt_path, -1) # (shape: (375, 1242), dtype: uint16)
# crop to the bottom center (352, 1216):
new_img_h = 352
new_img_w = 1216 # (this is the image size of all images in the selected val/test sets of kitti-depth)
img_h = img.shape[0]
img_w = img.shape[1]
img = img[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216, 3))
gt = gt[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
# convert gt to meters:
gt = gt/100.0
gt = gt.astype(np.float32)
# convert img to grayscale:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (crop_h, crop_w))
# create sparse and target from gt:
max_distance = 80.0
prob_keep = 0.05
target = gt.copy()
target[target > max_distance] = 0
sparse = target.copy()
mask = np.random.binomial(1, prob_keep, sparse.shape)
sparse = mask*sparse
img = img.astype(np.float32)
sparse = sparse.astype(np.float32)
target = target.astype(np.float32)
return (img.copy(), sparse.copy(), target.copy(), file_id)
class DatasetVirtualKITTIValSeq(data.Dataset):
def __init__(self, virtualkitti_path, seq="0002", variation="clone"):
depthgt_path = virtualkitti_path + "/vkitti_1.3.1_depthgt"
rgb_path = virtualkitti_path + "/vkitti_1.3.1_rgb"
self.examples = []
self.ids = os.listdir(depthgt_path + "/" + seq + "/clone") # (contains "00000.png" and so on)
for id in self.ids:
file_id = seq + "/" + variation + "/" + id
img_path = rgb_path + "/" + file_id
gt_path = depthgt_path + "/" + file_id
example = {}
example["img_path"] = img_path
example["gt_path"] = gt_path
example["file_id"] = file_id
self.examples.append(example)
print ("DatasetVirtualKITTIValSeq - num examples: %d" % len(self.examples))
def __len__(self):
return len(self.examples)
def __getitem__(self, index):
example = self.examples[index]
img_path = example["img_path"]
gt_path = example["gt_path"]
file_id = example["file_id"]
img = cv2.imread(img_path, -1) # (shape: (375, 1242, 3), dtype: uint8) (or something close to (375, 1242))
gt = cv2.imread(gt_path, -1) # (shape: (375, 1242), dtype: uint16)
# crop to the bottom center (352, 1216):
new_img_h = 352
new_img_w = 1216 # (this is the image size of all images in the selected val/test sets of kitti-depth)
img_h = img.shape[0]
img_w = img.shape[1]
img = img[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216, 3))
gt = gt[(img_h - new_img_h):img_h, int(img_w/2.0 - new_img_w/2.0):int(img_w/2.0 + new_img_w/2.0)] # (shape: (352, 1216))
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (gt.shape)
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("gt", gt)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
# convert gt to meters:
gt = gt/100.0
gt = gt.astype(np.float32)
# convert img to grayscale:
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # (shape: (crop_h, crop_w))
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (gt.shape)
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("gt", gt)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
# create sparse and target from gt:
max_distance = 80.0
prob_keep = 0.05
target = gt.copy()
target[target > max_distance] = 0
sparse = target.copy()
mask = np.random.binomial(1, prob_keep, sparse.shape)
sparse = mask*sparse
# # # # # # # # # # # # # # # # # # # # # # # # # debug visualization START
# print (img.shape)
# print (sparse.shape)
# print (target.shape)
#
# target = (target/max_distance)*255
# target = target.astype(np.uint8)
#
# sparse = (sparse/max_distance)*255
# sparse = sparse.astype(np.uint8)
#
# sparse_color = cv2.applyColorMap(sparse, cv2.COLORMAP_JET)
# sparse_color[sparse == 0] = 0
#
# target_color = cv2.applyColorMap(target, cv2.COLORMAP_JET)
# target_color[target == 0] = 0
#
# cv2.imshow("img", img)
# cv2.waitKey(0)
#
# cv2.imshow("sparse", sparse)
# cv2.waitKey(0)
# cv2.imshow("sparse_color", sparse_color)
# cv2.waitKey(0)
#
# cv2.imshow("target", target)
# cv2.waitKey(0)
# cv2.imshow("target_color", target_color)
# cv2.waitKey(0)
# # # # # # # # # # # # # # # # # # # # # # # # # # debug visualization END
img_gray = img_gray.astype(np.float32)
sparse = sparse.astype(np.float32)
target = target.astype(np.float32)
return (img_gray.copy(), sparse.copy(), target.copy(), file_id, img)
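The clip-then-Bernoulli-mask step that `__getitem__` uses to simulate sparse LiDAR-style depth can be exercised in isolation. The sketch below is a hypothetical standalone helper (the name `simulate_sparse_depth` is not part of the file above); it reproduces the same logic with NumPy only, using `numpy.random.default_rng` instead of the legacy `np.random.binomial` call so the result is reproducible.

```python
import numpy as np

def simulate_sparse_depth(gt, prob_keep=0.05, max_distance=80.0, seed=0):
    """Mimic the target/sparse construction in __getitem__: zero out pixels
    beyond max_distance, then keep each remaining pixel with probability
    prob_keep (hypothetical helper, for illustration only)."""
    rng = np.random.default_rng(seed)
    target = gt.copy()
    target[target > max_distance] = 0
    mask = rng.binomial(1, prob_keep, target.shape)
    sparse = mask * target
    return sparse.astype(np.float32), target.astype(np.float32)

gt = np.full((352, 1216), 40.0, dtype=np.float32)  # fake depth map: 40 m everywhere
gt[0, 0] = 120.0                                   # one out-of-range pixel
sparse, target = simulate_sparse_depth(gt)
print(target[0, 0])                                # 0.0 -> clipped, beyond max_distance
print((sparse <= target).all())                    # True -> sparse is a masked copy of target
```

With roughly 430k pixels, the fraction of kept pixels in `sparse` lands very close to `prob_keep` = 0.05, which matches the ~5% valid-pixel density typical of projected LiDAR scans that this dataset class emulates.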
# pyqt_show_button_when_hover_widget/__init__.py (repo: yjg30737/pyqt-show-button-when-hover-widget, license: MIT)
from .showButtonWhenHoverWidget import *
# test/functional/test_receiver_stats.py (repo: thenetcircle/dino-service, license: MIT)
from test.base import BaseTest
from test.functional.base_functional import BaseServerRestApi
class TestReceiverStats(BaseServerRestApi):
    def test_receiver_stats_is_none(self):
        self.assert_groups_for_user(0)
        self.send_1v1_message(
            user_id=BaseTest.USER_ID,
            receiver_id=BaseTest.OTHER_USER_ID
        )

        stats = self.groups_for_user(
            BaseTest.USER_ID,
            count_unread=False,
            receiver_stats=False
        )[0]["stats"]

        self.assertEqual(None, stats["receiver_delete_before"])
        self.assertEqual(None, stats["receiver_hide"])
        self.assertEqual(None, stats["receiver_deleted"])
        self.assertEqual(-1, stats["unread"])
        self.assertEqual(-1, stats["receiver_unread"])

    def test_receiver_stats_is_not_none(self):
        self.assert_groups_for_user(0)
        self.send_1v1_message(
            user_id=BaseTest.USER_ID,
            receiver_id=BaseTest.OTHER_USER_ID
        )

        stats = self.groups_for_user(
            BaseTest.USER_ID,
            count_unread=False,
            receiver_stats=True
        )[0]["stats"]

        self.assertLess(self.long_ago, stats["receiver_delete_before"])
        self.assertEqual(False, stats["receiver_hide"])
        self.assertEqual(False, stats["receiver_deleted"])
        self.assertEqual(-1, stats["unread"])
        self.assertEqual(1, stats["receiver_unread"])

    def test_unread(self):
        self.assert_groups_for_user(0)
        self.send_1v1_message(
            user_id=BaseTest.USER_ID,
            receiver_id=BaseTest.OTHER_USER_ID
        )

        stats = self.groups_for_user(
            BaseTest.USER_ID,
            count_unread=True,
            receiver_stats=True
        )[0]["stats"]

        self.assertLess(self.long_ago, stats["receiver_delete_before"])
        self.assertEqual(False, stats["receiver_hide"])
        self.assertEqual(False, stats["receiver_deleted"])
        self.assertEqual(0, stats["unread"])
        self.assertEqual(1, stats["receiver_unread"])

    def test_unread_no_receiver_stats(self):
        self.assert_groups_for_user(0)
        self.send_1v1_message(
            user_id=BaseTest.USER_ID,
            receiver_id=BaseTest.OTHER_USER_ID
        )

        stats = self.groups_for_user(
            BaseTest.USER_ID,
            count_unread=True,
            receiver_stats=False
        )[0]["stats"]

        self.assertEqual(None, stats["receiver_delete_before"])
        self.assertEqual(None, stats["receiver_hide"])
        self.assertEqual(None, stats["receiver_deleted"])
        self.assertEqual(0, stats["unread"])
        self.assertEqual(-1, stats["receiver_unread"])
# test/test_logic.py (repo: mcdeoliveira/ctrl, license: Apache-2.0)
import pytest
import numpy as np
import pyctrl
import pyctrl.block as block
import pyctrl.block.logic as logic
def testCompare():
blk = logic.Compare()
blk.write(0,1)
(answer,) = blk.read()
assert answer == 1
blk.write(1,0)
(answer,) = blk.read()
assert answer == 0
blk.write(1,1)
(answer,) = blk.read()
assert answer == 1
blk = logic.Compare(threshold = 1)
blk.write(0,1)
(answer,) = blk.read()
assert answer == 1
blk.write(1,0)
(answer,) = blk.read()
assert answer == 0
blk.write(1,1)
(answer,) = blk.read()
assert answer == 0
blk = logic.Compare()
blk.set(threshold = 1)
blk.write(0,1)
(answer,) = blk.read()
assert answer == 1
blk.write(1,0)
(answer,) = blk.read()
assert answer == 0
blk.write(1,1)
(answer,) = blk.read()
assert answer == 0
with pytest.raises(block.BlockException):
logic.Compare(m = 1.2)
with pytest.raises(block.BlockException):
logic.Compare(threshold = 'as')
with pytest.raises(block.BlockException):
blk.set(m = 1.2)
with pytest.raises(block.BlockException):
blk.set(threshold = 'as')
def testCompareWithHysterisis():
    # should work like Compare
    blk = logic.CompareWithHysterisis(hysterisis = 0)

    blk.write(0,1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1,0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(1,1)
    (answer,) = blk.read()
    assert answer == 1

    blk = logic.CompareWithHysterisis(threshold = 1, hysterisis = 0)

    blk.write(0,1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1,0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(1,1)
    (answer,) = blk.read()
    assert answer == 0

    blk = logic.CompareWithHysterisis(hysterisis = 0)
    blk.set(threshold = 1)

    blk.write(0,1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1,0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(1,1)
    (answer,) = blk.read()
    assert answer == 0

    with pytest.raises(block.BlockException):
        logic.CompareWithHysterisis(m = 1.2)

    with pytest.raises(block.BlockException):
        logic.CompareWithHysterisis(threshold = 'as')

    with pytest.raises(block.BlockException):
        blk.set(m = 1.2)

    with pytest.raises(block.BlockException):
        blk.set(threshold = 'as')

    with pytest.raises(block.BlockException):
        blk.set(hysterisis = -1)

    # with hysterisis
    blk = logic.CompareWithHysterisis(hysterisis = 0.1)
    assert blk.state == (1,)

    blk.write(0,1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (0,)

    blk.write(0,0)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (0,)

    blk.write(0,-0.2)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)

    blk.write(0,0)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)

    blk.write(0,0.2)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (0,)

    blk.write(1,0)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)

    blk.write(1,1)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)

    blk = logic.CompareWithHysterisis(threshold = 1, hysterisis = 0)
    assert blk.state == (1,)

    blk.write(0,1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (0,)

    blk.write(1,0)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)

    blk.write(1,1)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (1,)
def testCompareAbs():
    blk = logic.CompareAbs(threshold = 1)

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 1

    blk = logic.CompareAbs(threshold = 1, invert = True)

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 0

    blk = logic.CompareAbs(threshold = 0, invert = False)
    blk.set(threshold = 1)
    blk.set(invert = True)

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 0

    with pytest.raises(block.BlockException):
        logic.CompareAbs(threshold = 'as')

    with pytest.raises(block.BlockException):
        blk.set(threshold = 'as')
def testCompareAbsWithHysterisis():
    # should work like CompareAbs
    blk = logic.CompareAbsWithHysterisis(threshold = 1, hysterisis = 0)
    assert blk.state == None

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0.9)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1.05)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1.1)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk = logic.CompareAbsWithHysterisis(threshold = 1, invert = True, hysterisis = 0)

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0.9)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1.05)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1.1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk = logic.CompareAbsWithHysterisis(threshold = 0, invert = False, hysterisis = 0)
    blk.set(threshold = 1)
    blk.set(invert = True)

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 0

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 0

    with pytest.raises(block.BlockException):
        logic.CompareAbs(threshold = 'as')

    with pytest.raises(block.BlockException):
        blk.set(threshold = 'as')

    with pytest.raises(block.BlockException):
        blk.set(hysterisis = -1)

    # with hysterisis
    blk = logic.CompareAbsWithHysterisis(threshold = 1)
    assert blk.state == None
    assert blk.hysterisis == 0.1

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(0.9)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1.05)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1.11)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk = logic.CompareAbsWithHysterisis(threshold = 1, invert = True)
    assert blk.hysterisis == 0.1

    blk.write(2)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(3)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0.9)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(0.5)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1.05)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (0,)

    blk.write(1.1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (1,)

    # with hysterisis
    blk = logic.CompareAbsWithHysterisis(threshold = 0.2,
                                         hysterisis = 0.1)
    assert blk.state == None
    assert blk.threshold == 0.2
    assert blk.hysterisis == 0.1

    blk.write(-0.3)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(-0.31)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(-0.41)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(-0.3)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(-0.1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(-0)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(-0.3)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(-0.31)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(0)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(0.3)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)

    blk.write(0.31)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(0.11)
    (answer,) = blk.read()
    assert answer == 0
    assert blk.state == (logic.State.LOW,)

    blk.write(0.1)
    (answer,) = blk.read()
    assert answer == 1
    assert blk.state == (logic.State.HIGH,)
def testTrigger():
    import math

    blk = logic.Trigger(function = lambda x: x >= 0)

    blk.write(-1,1)
    answer = blk.read()
    assert answer == (0,)

    blk.write(1,2)
    answer = blk.read()
    assert answer == (2,)

    blk.write(-1,3)
    answer = blk.read()
    assert answer == (3,)

    blk.reset()

    blk.write(-1,1)
    answer = blk.read()
    assert answer == (0,)

    blk.reset()

    blk.write(-1)
    answer = blk.read()
    assert answer == ()

    blk.write(-1,1,2,3)
    answer = blk.read()
    assert answer == (0,0,0)

    blk.write(1,1,2,3)
    answer = blk.read()
    assert answer == (1,2,3)

    blk.write(-1,1,2,3)
    answer = blk.read()
    assert answer == (1,2,3)

    blk.write(-1)
    answer = blk.read()
    assert answer == ()
def testEvent():
    class myEvent(logic.Event):
        def __init__(self, **kwargs):
            self.value = False
            super().__init__(**kwargs)

        def rise_event(self):
            self.value = True

        def fall_event(self):
            self.value = False

    blk = myEvent()
    assert blk.value == False
    assert blk.state == logic.State.LOW
    assert blk.high == 0.8
    assert blk.low == 0.2

    blk.write(1)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(1)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(0)
    assert blk.value == False
    assert blk.state == logic.State.LOW

    blk.write(0.8)
    assert blk.value == False
    assert blk.state == logic.State.LOW

    blk.write(0.9)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(0.8)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(0.5)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(0.2)
    assert blk.value == True
    assert blk.state == logic.State.HIGH

    blk.write(0.1)
    assert blk.value == False
    assert blk.state == logic.State.LOW
def testSetBlock():
    from pyctrl import Controller
    from pyctrl.block import Constant

    controller = Controller()
    controller.add_source('block',
                          Constant(),
                          ['s1'])
    assert controller.get_source('block', 'enabled')

    blk = logic.SetSource(parent = controller,
                          label = 'block',
                          on_rise_and_fall = {'enabled': False} )
    assert blk.state is logic.State.LOW

    blk.write(1)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    controller.set_source('block', enabled = True)
    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    blk.write(0.1)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    controller.set_source('block', enabled = True)
    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    blk.write(0.9)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    # OnRiseSet
    blk = logic.SetSource(parent = controller,
                          label = 'block',
                          on_rise = {'enabled': False} )
    assert blk.state is logic.State.LOW

    blk.write(1)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    controller.set_source('block', enabled = True)
    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    blk.write(0.1)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    controller.set_source('block', enabled = True)
    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    blk.write(0.9)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    # OnFallSet
    blk = logic.SetSource(parent = controller,
                          label = 'block',
                          on_fall = {'enabled': False} )
    controller.set_source('block', enabled = True)
    assert blk.state is logic.State.LOW

    blk.write(1)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    blk.write(0.1)
    assert not controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    controller.set_source('block', enabled = True)
    assert controller.get_source('block', 'enabled')

    blk.write(0.5)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.LOW

    blk.write(0.9)
    assert controller.get_source('block', 'enabled')
    assert blk.state is logic.State.HIGH

    # try pickling
    import pickle
    pickle.dumps(blk)
if __name__ == "__main__":
    testCompare()
    testCompareWithHysterisis()
    testCompareAbs()
    testCompareAbsWithHysterisis()
    testTrigger()
    testEvent()
    testSetBlock()
# tnqmetro/__init__.py (repo: kchabuda/TNQMetro, license: MIT)
"""TNQMetro: Tensor-network based package for efficient quantum metrology computations."""
# Table of Contents
#
# 1 Functions for finite size systems......................................29
# 1.1 High level functions...............................................37
# 1.2 Low level functions...............................................257
# 1.2.1 Problems with exact derivative.............................1207
# 1.2.2 Problems with discrete approximation of the derivative.....2411
# 2 Functions for infinite size systems..................................3808
# 2.1 High level functions.............................................3816
# 2.2 Low level functions..............................................4075
# 3 Auxiliary functions..................................................5048
import itertools
import math
import warnings
import numpy as np
from ncon import ncon
########################################
# #
# #
# 1 Functions for finite size systems. #
# #
# #
########################################
#############################
# #
# 1.1 High level functions. #
# #
#############################
def fin(N, so_before_list, h, so_after_list, BC='O', L_ini=None, psi0_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True, D_psi0_max=100, D_psi0_max_forced=False):
    """
    Optimization of the QFI over operator L (in MPO representation) and wave function psi0 (in MPS representation) and check of convergence in their bond dimensions. Function for finite size systems.

    User has to provide information about the dynamics by specifying the quantum channel. It is assumed that the quantum channel is translationally invariant and is built from layers of quantum operations.
    User has to provide one defining operation for each layer as a local superoperator. These local superoperators have to be input in order of their action on the system.
    Parameter encoding is a stand-alone quantum operation. It is assumed that the parameter encoding acts only once and is unitary, so the user has to provide only its generator h.
    Generator h has to be diagonal in the computational basis, or in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.

    Parameters:
      N: integer
        Number of sites in the chain of tensors (usually number of particles).
      so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites a particular local superoperator acts
        List of local superoperators (in order) which act before unitary parameter encoding.
      h: ndarray of a shape (d,d)
        Generator of unitary parameter encoding. Dimension d is the dimension of local Hilbert space (dimension of physical index).
        Generator h has to be diagonal in the computational basis, or in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
      so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites a particular local superoperator acts
        List of local superoperators (in order) which act after unitary parameter encoding.
      BC: 'O' or 'P', optional
        Boundary conditions, 'O' for OBC, 'P' for PBC.
      L_ini: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC, optional
        Initial MPO for L.
      psi0_ini: list of length N of ndarrays of a shape (Dl_psi0,Dr_psi0,d) for OBC (Dl_psi0, Dr_psi0 can vary between sites) or ndarray of a shape (D_psi0,D_psi0,d,N) for PBC, optional
        Initial MPS for psi0.
      imprecision: float, optional
        Expected relative imprecision of the end results.
      D_L_max: integer, optional
        Maximal value of D_L (D_L is the bond dimension for MPO representing L).
      D_L_max_forced: bool, optional
        True if D_L_max has to be reached, otherwise False.
      L_herm: bool, optional
        True if the Hermitian gauge has to be imposed on MPO representing L, otherwise False.
      D_psi0_max: integer, optional
        Maximal value of D_psi0 (D_psi0 is the bond dimension for MPS representing psi0).
      D_psi0_max_forced: bool, optional
        True if D_psi0_max has to be reached, otherwise False.

    Returns:
      result: float
        Optimal value of the figure of merit.
      result_m: ndarray
        Matrix describing the figure of merit as a function of bond dimensions of respectively L [rows] and psi0 [columns].
      L: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC
        Optimal L in MPO representation.
      psi0: list of length N of ndarrays of a shape (Dl_psi0,Dr_psi0,d) for OBC (Dl_psi0, Dr_psi0 can vary between sites) or ndarray of a shape (D_psi0,D_psi0,d,N) for PBC
        Optimal psi0 in MPS representation.
    """
    if np.linalg.norm(h - np.diag(np.diag(h))) > 10**-10:
        warnings.warn('Generator h has to be diagonal in the computational basis; it is assumed that local superoperators are expressed in the eigenbasis of h.')
    d = np.shape(h)[0]
    ch = fin_create_channel(N, d, BC, so_before_list + so_after_list)
    ch2 = fin_create_channel_derivative(N, d, BC, so_before_list, h, so_after_list)
    result, result_m, L, psi0 = fin_gen(N, d, BC, ch, ch2, None, L_ini, psi0_ini, imprecision, D_L_max, D_L_max_forced, L_herm, D_psi0_max, D_psi0_max_forced)
    return result, result_m, L, psi0
def fin_gen(N, d, BC, ch, ch2, epsilon=None, L_ini=None, psi0_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True, D_psi0_max=100, D_psi0_max_forced=False):
    """
    Optimization of the figure of merit (usually interpreted as the QFI) over operator L (in MPO representation) and wave function psi0 (in MPS representation) and check of convergence when increasing their bond dimensions. Function for finite size systems.

    User has to provide information about the dynamics by specifying a quantum channel ch and its derivative ch2 (or two channels separated by a small parameter epsilon) as superoperators in MPO representation.
    There are no constraints on the structure of the channel, but the complexity of calculations highly depends on the channel's bond dimension.

    Parameters:
      N: integer
        Number of sites in the chain of tensors (usually number of particles).
      d: integer
        Dimension of local Hilbert space (dimension of physical index).
      BC: 'O' or 'P'
        Boundary conditions, 'O' for OBC, 'P' for PBC.
      ch: list of length N of ndarrays of a shape (Dl_ch,Dr_ch,d**2,d**2) for OBC (Dl_ch, Dr_ch can vary between sites) or ndarray of a shape (D_ch,D_ch,d**2,d**2,N) for PBC
        Quantum channel as a superoperator in MPO representation.
      ch2: list of length N of ndarrays of a shape (Dl_ch2,Dr_ch2,d**2,d**2) for OBC (Dl_ch2, Dr_ch2 can vary between sites) or ndarray of a shape (D_ch2,D_ch2,d**2,d**2,N) for PBC
        Interpretation depends on whether epsilon is specified (2) or not (1, default approach):
        1) derivative of the quantum channel as a superoperator in the MPO representation,
        2) the quantum channel as a superoperator in the MPO representation for the value of the estimated parameter shifted by epsilon in relation to ch.
      epsilon: float, optional
        If specified then interpreted as the value of the separation between the estimated parameters encoded in ch and ch2.
      L_ini: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC, optional
        Initial MPO for L.
      psi0_ini: list of length N of ndarrays of a shape (Dl_psi0,Dr_psi0,d) for OBC (Dl_psi0, Dr_psi0 can vary between sites) or ndarray of a shape (D_psi0,D_psi0,d,N) for PBC, optional
        Initial MPS for psi0.
      imprecision: float, optional
        Expected relative imprecision of the end results.
      D_L_max: integer, optional
        Maximal value of D_L (D_L is the bond dimension for MPO representing L).
      D_L_max_forced: bool, optional
        True if D_L_max has to be reached, otherwise False.
      L_herm: bool, optional
        True if the Hermitian gauge has to be imposed on MPO representing L, otherwise False.
      D_psi0_max: integer, optional
        Maximal value of D_psi0 (D_psi0 is the bond dimension for MPS representing psi0).
      D_psi0_max_forced: bool, optional
        True if D_psi0_max has to be reached, otherwise False.

    Returns:
      result: float
        Optimal value of the figure of merit.
      result_m: ndarray
        Matrix describing the figure of merit as a function of bond dimensions of respectively L [rows] and psi0 [columns].
      L: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC
        Optimal L in MPO representation.
      psi0: list of length N of ndarrays of a shape (Dl_psi0,Dr_psi0,d) for OBC (Dl_psi0, Dr_psi0 can vary between sites) or ndarray of a shape (D_psi0,D_psi0,d,N) for PBC
        Optimal psi0 in MPS representation.
    """
    if epsilon is None:
        result, result_m, L, psi0 = fin_FoM_FoMD_optbd(N, d, BC, ch, ch2, L_ini, psi0_ini, imprecision, D_L_max, D_L_max_forced, L_herm, D_psi0_max, D_psi0_max_forced)
    else:
        result, result_m, L, psi0 = fin2_FoM_FoMD_optbd(N, d, BC, ch, ch2, epsilon, L_ini, psi0_ini, imprecision, D_L_max, D_L_max_forced, L_herm, D_psi0_max, D_psi0_max_forced)
    return result, result_m, L, psi0
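When `epsilon` is given, `ch2` is the channel evaluated at a shifted parameter value rather than an exact derivative, so `fin_gen` effectively works with the finite-difference approximation `(ch2 - ch) / epsilon`. The relation this relies on can be illustrated on a plain parameter-encoded matrix, independently of the MPO machinery (the `encode` function below is a purely illustrative toy, not part of this package):

```python
import numpy as np

def encode(phi):
    # toy stand-in for a parameter-encoded object: a rotation by phi
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

phi, epsilon = 0.3, 1e-6
# exact derivative d/dphi encode(phi)
exact = np.array([[-np.sin(phi), -np.cos(phi)],
                  [ np.cos(phi), -np.sin(phi)]])
# forward difference built from two evaluations separated by epsilon,
# analogous to passing ch = channel(phi) and ch2 = channel(phi + epsilon)
approx = (encode(phi + epsilon) - encode(phi)) / epsilon
print(np.allclose(exact, approx, atol=1e-5))  # True
```

The forward-difference error scales linearly with `epsilon`, which is why `epsilon` should be small relative to the parameter scale but large enough to avoid floating-point cancellation.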
def fin_state(N, so_before_list, h, so_after_list, rho0, BC='O', L_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True):
"""
Optimization of the QFI over operator L (in MPO representation) and check of convergence when increasing its bond dimension. Function for finite size systems and a fixed state of the system.
User has to provide information about the dynamics by specifying a quantum channel. It is assumed that the quantum channel is translationally invariant and is built from layers of quantum operations.
User has to provide one defining operation for each layer as a local superoperator. These local superoperators have to be provided in the order of their action on the system.
Parameter encoding is treated as a separate, distinguished quantum operation. It is assumed that parameter encoding acts only once and is unitary, so the user only has to provide its generator h.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act before unitary parameter encoding.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding. Dimension d is the dimension of local Hilbert space (dimension of physical index).
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act after unitary parameter encoding.
rho0: list of length N of ndarrays of a shape (Dl_rho0,Dr_rho0,d,d) for OBC (Dl_rho0, Dr_rho0 can vary between sites) or ndarray of a shape (D_rho0,D_rho0,d,d,N) for PBC
Density matrix describing initial state of the system in MPO representation.
BC: 'O' or 'P', optional
Boundary conditions, 'O' for OBC, 'P' for PBC.
L_ini: list of length N of ndarrays of shape (Dl_L,Dr_L,d,d) for OBC, (Dl_L, Dr_L can vary between sites) or ndarray of shape (D_L,D_L,d,d,N) for PBC, optional
Initial MPO for L.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for MPO representing L).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if Hermitian gauge has to be imposed on MPO representing L, otherwise False.
Returns:
result: float
Optimal value of the figure of merit.
result_v: ndarray
Vector describing the figure of merit as a function of the bond dimension of L.
L: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC
Optimal L in the MPO representation.
"""
if np.linalg.norm(h - np.diag(np.diag(h))) > 10**-10:
warnings.warn('Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.')
d = np.shape(h)[0]
ch = fin_create_channel(N, d, BC, so_before_list + so_after_list)
ch2 = fin_create_channel_derivative(N, d, BC, so_before_list, h, so_after_list)
rho = channel_acting_on_operator(ch, rho0)
rho2 = channel_acting_on_operator(ch2, rho0)
result, result_v, L = fin_state_gen(N, d, BC, rho, rho2, None, L_ini, imprecision, D_L_max, D_L_max_forced, L_herm)
return result, result_v, L
def fin_state_gen(N, d, BC, rho, rho2, epsilon=None, L_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True):
"""
Optimization of the figure of merit (usually interpreted as the QFI) over operator L (in MPO representation) and check of convergence when increasing its bond dimension. Function for finite size systems and a fixed state of the system.
User has to provide information about the dynamics by specifying a quantum channel ch and its derivative ch2 (or two channels separated by a small parameter epsilon) as superoperators in the MPO representation.
There are no constraints on the structure of the channel, but the complexity of the calculations depends strongly on the channel's bond dimension.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
d: integer
Dimension of local Hilbert space (dimension of physical index).
BC: 'O' or 'P'
Boundary conditions, 'O' for OBC, 'P' for PBC.
rho: list of length N of ndarrays of a shape (Dl_rho,Dr_rho,d,d) for OBC (Dl_rho, Dr_rho can vary between sites) or ndarray of a shape (D_rho,D_rho,d,d,N) for PBC
Density matrix at the output of the quantum channel in the MPO representation.
rho2: list of length N of ndarrays of a shape (Dl_rho2,Dr_rho2,d,d) for OBC (Dl_rho2, Dr_rho2 can vary between sites) or ndarray of a shape (D_rho2,D_rho2,d,d,N) for PBC
Interpretation depends on whether epsilon is specified (2) or not (1, the default approach):
1) derivative of density matrix at the output of quantum channel in MPO representation,
2) density matrix at the output of quantum channel in MPO representation for the value of estimated parameter shifted by epsilon in relation to rho.
epsilon: float, optional
If specified, it is interpreted as the value of the separation between the estimated parameter values encoded in rho and rho2.
L_ini: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC, optional
Initial MPO for L.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for MPO representing L).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if Hermitian gauge has to be imposed on MPO representing L, otherwise False.
Returns:
result: float
Optimal value of the figure of merit.
result_v: ndarray
Vector describing the figure of merit as a function of the bond dimension of L.
L: list of length N of ndarrays of a shape (Dl_L,Dr_L,d,d) for OBC (Dl_L, Dr_L can vary between sites) or ndarray of a shape (D_L,D_L,d,d,N) for PBC
Optimal L in MPO representation.
"""
if epsilon is None:
result, result_v, L = fin_FoM_optbd(N, d, BC, rho, rho2, L_ini, imprecision, D_L_max, D_L_max_forced, L_herm)
else:
result, result_v, L = fin2_FoM_optbd(N, d, BC, rho, rho2, epsilon, L_ini, imprecision, D_L_max, D_L_max_forced, L_herm)
return result, result_v, L
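In the dense (non-tensor-network) picture, the figure of merit maximized here over L is F(L) = 2 Tr(rho2 L) - Tr(rho L^2); its maximizer is the symmetric logarithmic derivative (SLD) and its maximal value is the QFI. A minimal dense-matrix sketch of that quantity (illustrative only, not part of this module's API; the qubit example and its known QFI value r**2 are standard):

```python
import numpy as np

def sld_qfi_dense(rho, drho, tol=1e-12):
    """Solve rho @ L + L @ rho = 2*drho in the eigenbasis of rho and
    return the figure of merit F = 2*Tr(drho L) - Tr(rho L^2) (the QFI)."""
    w, v = np.linalg.eigh(rho)
    drho_e = v.conj().T @ drho @ v
    L_e = np.zeros_like(drho_e)
    for i in range(len(w)):
        for j in range(len(w)):
            if w[i] + w[j] > tol:
                L_e[i, j] = 2 * drho_e[i, j] / (w[i] + w[j])
    L = v @ L_e @ v.conj().T
    fom = 2 * np.trace(drho @ L) - np.trace(rho @ L @ L)
    return np.real(fom), L

# Qubit with Bloch vector of length r along x, rotated about z:
# the known QFI is r**2.
r = 0.8
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = (np.eye(2) + r * sx) / 2
drho = -1j * (sz / 2 @ rho - rho @ sz / 2)   # d(rho)/d(theta) at theta = 0
fom, L = sld_qfi_dense(rho, drho)
```

For the exact SLD the two traces coincide, so F equals Tr(rho L^2), the standard QFI expression; fin_state_gen performs the same maximization variationally, with L constrained to a given MPO bond dimension.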
############################
# #
# 1.2 Low level functions. #
# #
############################
def fin_create_channel(N, d, BC, so_list, tol=10**-10):
"""
Creates an MPO for a superoperator describing a translationally invariant quantum channel built from a list of local superoperators. Function for finite size systems.
For OBC, the tensor-network length N has to be at least 2k-1, where k is the correlation length (the number of sites on which the largest local superoperator acts).
Local superoperators acting on more than 4 neighbouring sites are not currently supported.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
For OBC, the tensor-network length N has to be at least 2k-1, where k is the correlation length (the number of sites on which the largest local superoperator acts).
d: integer
Dimension of local Hilbert space (dimension of physical index).
BC: 'O' or 'P'
Boundary conditions, 'O' for OBC, 'P' for PBC.
so_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites a particular local superoperator acts
List of local superoperators in order of their action on the system.
Local superoperators acting on more than 4 neighbouring sites are not currently supported.
tol: float, optional
Factor which, multiplied by the largest singular value, sets the cutoff below which singular values are treated as zero.
Returns:
ch: list of length N of ndarrays of shape (Dl_ch,Dr_ch,d**2,d**2) for OBC (Dl_ch, Dr_ch can vary between sites) or ndarray of shape (D_ch,D_ch,d**2,d**2,N) for PBC
Quantum channel as a superoperator in the MPO representation.
"""
if so_list == []:
if BC == 'O':
ch = np.eye(d**2,dtype=complex)
ch = ch[np.newaxis,np.newaxis,:,:]
ch = [ch]*N
elif BC == 'P':
ch = np.eye(d**2,dtype=complex)
ch = ch[np.newaxis,np.newaxis,:,:,np.newaxis]
ch = np.tile(ch,(1,1,1,1,N))
return ch
if BC == 'O':
ch = [0]*N
kmax = max([int(math.log(np.shape(so_list[i])[0],d**2)) for i in range(len(so_list))])
if N < 2*kmax-1:
warnings.warn('For OBC the tensor-network length N has to be at least 2k-1, where k is the correlation length (the number of sites on which the largest local superoperator acts).')
for x in range(N):
if x >= kmax and N-x >= kmax:
ch[x] = ch[x-1]
continue
for i in range(len(so_list)):
so = so_list[i]
k = int(math.log(np.shape(so)[0],d**2))
if np.linalg.norm(so-np.diag(np.diag(so))) < 10**-10:
so = np.diag(so)
if k == 1:
bdchil = 1
bdchir = 1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
chi[:,:,nx,nx] = so[nx]
elif k == 2:
so = np.reshape(so,(d**2,d**2),order='F')
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
if x == 0:
bdchil = 1
bdchir = bdchi
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us[nx,:]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x > 0 and x < N-1:
bdchil = bdchi
bdchir = bdchi
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv[:,nx],us[nx,:]]
legs = [[-1],[-2]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-1:
bdchil = bdchi
bdchir = 1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv[:,nx]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif k == 3:
so = np.reshape(so,(d**2,d**4),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**2),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
if x == 0:
bdchil = 1
bdchir = bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us1[nx,:]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == 1:
bdchil = bdchi1
bdchir = bdchi2*bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us2[:,nx,:],us1[nx,:]]
legs = [[-1,-2],[-3]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x > 1 and x < N-2:
bdchil = bdchi2*bdchi1
bdchir = bdchi2*bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv2[:,nx],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-3],[-4]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-2:
bdchil = bdchi2*bdchi1
bdchir = bdchi2
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv2[:,nx],us2[:,nx,:]]
legs = [[-1],[-2,-3]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-1:
bdchil = bdchi2
bdchir = 1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv2[:,nx]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif k == 4:
so = np.reshape(so,(d**2,d**6),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**2,d**2),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
if x == 0:
bdchil = 1
bdchir = bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us1[nx,:]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == 1:
bdchil = bdchi1
bdchir = bdchi2*bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us2[:,nx,:],us1[nx,:]]
legs = [[-1,-2],[-3]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == 2:
bdchil = bdchi2*bdchi1
bdchir = bdchi3*bdchi2*bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [us3[:,nx,:],us2[:,nx,:],us1[nx,:]]
legs = [[-1,-3],[-2,-4],[-5]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x > 2 and x < N-3:
bdchil = bdchi3*bdchi2*bdchi1
bdchir = bdchi3*bdchi2*bdchi1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx],us3[:,nx,:],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-4],[-3,-5],[-6]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-3:
bdchil = bdchi3*bdchi2*bdchi1
bdchir = bdchi3*bdchi2
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx],us3[:,nx,:],us2[:,nx,:]]
legs = [[-1],[-2,-4],[-3,-5]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-2:
bdchil = bdchi3*bdchi2
bdchir = bdchi3
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx],us3[:,nx,:]]
legs = [[-1],[-2,-3]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
elif x == N-1:
bdchil = bdchi3
bdchir = 1
chi = np.zeros((bdchil,bdchir,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx]]
legs = [[-1]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchil,bdchir),order='F')
else:
warnings.warn('Local superoperators acting on more than 4 neighbouring sites are not currently supported.')
else:
if k == 1:
bdchil = 1
bdchir = 1
chi = so[np.newaxis,np.newaxis,:,:]
elif k == 2:
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
us = np.reshape(us,(d**2,d**2,bdchi),order='F')
sv = np.reshape(sv,(bdchi,d**2,d**2),order='F')
if x == 0:
tensors = [us]
legs = [[-2,-3,-1]]
chi = ncon(tensors,legs)
bdchil = 1
bdchir = bdchi
elif x > 0 and x < N-1:
tensors = [sv,us]
legs = [[-1,-3,1],[1,-4,-2]]
chi = ncon(tensors,legs)
bdchil = bdchi
bdchir = bdchi
elif x == N-1:
tensors = [sv]
legs = [[-1,-2,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi
bdchir = 1
chi = np.reshape(chi,(bdchil,bdchir,d**2,d**2),order='F')
elif k == 3:
so = np.reshape(so,(d**4,d**8),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2,d**2,d**2),order='F')
if x == 0:
tensors = [us1]
legs = [[-2,-3,-1]]
chi = ncon(tensors,legs)
bdchil = 1
bdchir = bdchi1
elif x == 1:
tensors = [us2,us1]
legs = [[-1,-5,1,-2],[1,-6,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi1
bdchir = bdchi2*bdchi1
elif x > 1 and x < N-2:
tensors = [sv2,us2,us1]
legs = [[-1,-5,1],[-2,1,2,-3],[2,-6,-4]]
chi = ncon(tensors,legs)
bdchil = bdchi2*bdchi1
bdchir = bdchi2*bdchi1
elif x == N-2:
tensors = [sv2,us2]
legs = [[-1,-4,1],[-2,1,-5,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi2*bdchi1
bdchir = bdchi2
elif x == N-1:
tensors = [sv2]
legs = [[-1,-2,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi2
bdchir = 1
chi = np.reshape(chi,(bdchil,bdchir,d**2,d**2),order='F')
elif k == 4:
so = np.reshape(so,(d**4,d**12),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**8),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**4,d**4),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
sv3 = np.reshape(sv3,(bdchi3,d**2,d**2),order='F')
if x == 0:
tensors = [us1]
legs = [[-2,-3,-1]]
chi = ncon(tensors,legs)
bdchil = 1
bdchir = bdchi1
elif x == 1:
tensors = [us2,us1]
legs = [[-1,-4,1,-2],[1,-5,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi1
bdchir = bdchi2*bdchi1
elif x == 2:
tensors = [us3,us2,us1]
legs = [[-1,-6,1,-3],[-2,1,2,-4],[2,-7,-5]]
chi = ncon(tensors,legs)
bdchil = bdchi2*bdchi1
bdchir = bdchi3*bdchi2*bdchi1
elif x > 2 and x < N-3:
tensors = [sv3,us3,us2,us1]
legs = [[-1,-7,1],[-2,1,2,-4],[-3,2,3,-5],[3,-8,-6]]
chi = ncon(tensors,legs)
bdchil = bdchi3*bdchi2*bdchi1
bdchir = bdchi3*bdchi2*bdchi1
elif x == N-3:
tensors = [sv3,us3,us2]
legs = [[-1,-6,1],[-2,1,2,-4],[-3,2,-7,-5]]
chi = ncon(tensors,legs)
bdchil = bdchi3*bdchi2*bdchi1
bdchir = bdchi3*bdchi2
elif x == N-2:
tensors = [sv3,us3]
legs = [[-1,-4,1],[-2,1,-5,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi3*bdchi2
bdchir = bdchi3
elif x == N-1:
tensors = [sv3]
legs = [[-1,-2,-3]]
chi = ncon(tensors,legs)
bdchil = bdchi3
bdchir = 1
chi = np.reshape(chi,(bdchil,bdchir,d**2,d**2),order='F')
else:
warnings.warn('Local superoperators acting on more than 4 neighbouring sites are not currently supported.')
if i == 0:
bdchl = bdchil
bdchr = bdchir
ch[x] = chi
else:
bdchl = bdchil*bdchl
bdchr = bdchir*bdchr
tensors = [chi,ch[x]]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
ch[x] = ncon(tensors,legs)
ch[x] = np.reshape(ch[x],(bdchl,bdchr,d**2,d**2),order='F')
elif BC == 'P':
for i in range(len(so_list)):
so = so_list[i]
k = int(math.log(np.shape(so)[0],d**2))
if np.linalg.norm(so-np.diag(np.diag(so))) < 10**-10:
so = np.diag(so)
if k == 1:
bdchi = 1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
chi[:,:,nx,nx] = so[nx]
elif k == 2:
so = np.reshape(so,(d**2,d**2),order='F')
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
chi[:,:,nx,nx] = np.outer(sv[:,nx],us[nx,:])
elif k == 3:
so = np.reshape(so,(d**2,d**4),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**2),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
bdchi = bdchi2*bdchi1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv2[:,nx],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-3],[-4]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchi,bdchi),order='F')
elif k == 4:
so = np.reshape(so,(d**2,d**6),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**2,d**2),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
bdchi = bdchi3*bdchi2*bdchi1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx],us3[:,nx,:],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-4],[-3,-5],[-6]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchi,bdchi),order='F')
else:
warnings.warn('Local superoperators acting on more than 4 neighbouring sites are not currently supported.')
else:
if k == 1:
bdchi = 1
chi = so[np.newaxis,np.newaxis,:,:]
elif k == 2:
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
us = np.reshape(us,(d**2,d**2,bdchi),order='F')
sv = np.reshape(sv,(bdchi,d**2,d**2),order='F')
tensors = [sv,us]
legs = [[-1,-3,1],[1,-4,-2]]
chi = ncon(tensors,legs)
elif k == 3:
so = np.reshape(so,(d**4,d**8),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2,d**2,d**2),order='F')
tensors = [sv2,us2,us1]
legs = [[-1,-5,1],[-2,1,2,-3],[2,-6,-4]]
chi = ncon(tensors,legs)
bdchi = bdchi2*bdchi1
chi = np.reshape(chi,(bdchi,bdchi,d**2,d**2),order='F')
elif k == 4:
so = np.reshape(so,(d**4,d**12),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**8),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**4,d**4),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
sv3 = np.reshape(sv3,(bdchi3,d**2,d**2),order='F')
tensors = [sv3,us3,us2,us1]
legs = [[-1,-7,1],[-2,1,2,-4],[-3,2,3,-5],[3,-8,-6]]
chi = ncon(tensors,legs)
bdchi = bdchi3*bdchi2*bdchi1
chi = np.reshape(chi,(bdchi,bdchi,d**2,d**2),order='F')
else:
warnings.warn('Local superoperators acting on more than 4 neighbouring sites are not currently supported.')
if i == 0:
bdch = bdchi
ch = chi
else:
bdch = bdchi*bdch
tensors = [chi,ch]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
ch = ncon(tensors,legs)
ch = np.reshape(ch,(bdch,bdch,d**2,d**2),order='F')
ch = ch[:,:,:,:,np.newaxis]
ch = np.tile(ch,(1,1,1,1,N))
return ch
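The entries of so_list act on density matrices vectorized in column-major ('F') order, the convention used by every reshape in this function. Under that convention vec(A X B) = (B^T kron A) vec(X), so, for example, the single-site (k = 1) superoperator of a unitary U is kron(conj(U), U). A small self-contained sketch of that correspondence (illustrative only; the random unitary and state are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
# Random unitary from the QR decomposition of a random complex matrix.
q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
# Random density matrix (positive semidefinite, unit trace).
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rho @ rho.conj().T
rho = rho / np.trace(rho)

# Column-major convention: vec(U rho U^dagger) = (conj(U) kron U) vec(rho).
so = np.kron(q.conj(), q)
out = np.reshape(so @ np.reshape(rho, d**2, order='F'), (d, d), order='F')
```

A superoperator built this way is a valid k = 1 entry of so_list.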
def fin_create_channel_derivative(N, d, BC, so_before_list, h, so_after_list):
"""
Creates an MPO for the derivative (over the estimated parameter) of the superoperator describing the quantum channel. Function for finite size systems.
Function for translationally invariant channels with unitary parameter encoding generated by h.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
d: integer
Dimension of local Hilbert space (dimension of physical index).
BC: 'O' or 'P'
Boundary conditions, 'O' for OBC, 'P' for PBC.
so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act before unitary parameter encoding.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act after unitary parameter encoding.
Returns:
chd: list of length N of ndarrays of a shape (Dl_chd,Dr_chd,d**2,d**2) for OBC (Dl_chd, Dr_chd can vary between sites) or ndarray of a shape (D_chd,D_chd,d**2,d**2,N) for PBC
Derivative of superoperator describing quantum channel in MPO representation.
"""
if np.linalg.norm(h-np.diag(np.diag(h))) > 10**-10:
warnings.warn('Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.')
if len(so_before_list) == 0:
if BC == 'O':
ch1 = np.eye(d**2,dtype=complex)
ch1 = ch1[np.newaxis,np.newaxis,:,:]
ch1 = [ch1]*N
elif BC == 'P':
ch1 = np.eye(d**2,dtype=complex)
ch1 = ch1[np.newaxis,np.newaxis,:,:,np.newaxis]
ch1 = np.tile(ch1,(1,1,1,1,N))
ch1d = fin_commutator(N,d,BC,ch1,h,1j)
ch2 = fin_create_channel(N,d,BC,so_after_list)
if BC == 'O':
chd = [0]*N
for x in range(N):
bdch1dl = np.shape(ch1d[x])[0]
bdch1dr = np.shape(ch1d[x])[1]
bdch2l = np.shape(ch2[x])[0]
bdch2r = np.shape(ch2[x])[1]
tensors = [ch2[x],ch1d[x]]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
chd[x] = np.reshape(ncon(tensors,legs),(bdch1dl*bdch2l,bdch1dr*bdch2r,d**2,d**2),order='F')
elif BC == 'P':
bdch1d = np.shape(ch1d)[0]
bdch2 = np.shape(ch2)[0]
chd = np.zeros((bdch1d*bdch2,bdch1d*bdch2,d**2,d**2,N),dtype=complex)
for x in range(N):
tensors = [ch2[:,:,:,:,x],ch1d[:,:,:,:,x]]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
chd[:,:,:,:,x] = np.reshape(ncon(tensors,legs),(bdch1d*bdch2,bdch1d*bdch2,d**2,d**2),order='F')
elif len(so_after_list) == 0:
ch1 = fin_create_channel(N,d,BC,so_before_list)
chd = fin_commutator(N,d,BC,ch1,h,1j)
else:
ch1 = fin_create_channel(N,d,BC,so_before_list)
ch1d = fin_commutator(N,d,BC,ch1,h,1j)
ch2 = fin_create_channel(N,d,BC,so_after_list)
if BC == 'O':
chd = [0]*N
for x in range(N):
bdch1dl = np.shape(ch1d[x])[0]
bdch1dr = np.shape(ch1d[x])[1]
bdch2l = np.shape(ch2[x])[0]
bdch2r = np.shape(ch2[x])[1]
tensors = [ch2[x],ch1d[x]]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
chd[x] = np.reshape(ncon(tensors,legs),(bdch1dl*bdch2l,bdch1dr*bdch2r,d**2,d**2),order='F')
elif BC == 'P':
bdch1d = np.shape(ch1d)[0]
bdch2 = np.shape(ch2)[0]
chd = np.zeros((bdch1d*bdch2,bdch1d*bdch2,d**2,d**2,N),dtype=complex)
for x in range(N):
tensors = [ch2[:,:,:,:,x],ch1d[:,:,:,:,x]]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
chd[:,:,:,:,x] = np.reshape(ncon(tensors,legs),(bdch1d*bdch2,bdch1d*bdch2,d**2,d**2),order='F')
return chd
def fin_commutator(N, d, BC, a, h, c):
"""
Calculates the MPO for the commutator b = [a, c*sum{h}] of MPO a with the sum of local generators h, with an arbitrary multiplicative scalar factor c.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that a is expressed in the eigenbasis of h.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
d: integer
Dimension of local Hilbert space (dimension of physical index).
BC: 'O' or 'P'
Boundary conditions, 'O' for OBC, 'P' for PBC.
a: list of length N of ndarrays of a shape (Dl_a,Dr_a,d,d) for OBC (Dl_a, Dr_a can vary between sites) or ndarray of a shape (D_a,D_a,d,d,N) for PBC
MPO.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that a is expressed in the eigenbasis of h.
c: complex
Scalar factor which multiplies sum of local generators.
Returns:
b: list of length N of ndarrays of a shape (Dl_b,Dr_b,d,d) for OBC (Dl_b, Dr_b can vary between sites) or ndarray of a shape (D_b,D_b,d,d,N) for PBC
Commutator [a, c*sum{h}] in MPO representation.
"""
if np.linalg.norm(h-np.diag(np.diag(h))) > 10**-10:
warnings.warn('Generator h has to be diagonal in the computational basis; in other words, it is assumed that a is expressed in the eigenbasis of h.')
if BC == 'O':
bh = [0]*N
b = [0]*N
for x in range(N):
da = np.shape(a[x])[2]
bda1 = np.shape(a[x])[0]
bda2 = np.shape(a[x])[1]
if x == 0:
bdbh1 = 1
bdbh2 = 2
bh[x] = np.zeros((bdbh1,bdbh2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
bh[x][:,:,nx,nxp] = np.array([[c*(h[nxp,nxp]-h[nx,nx]),1]])
elif x > 0 and x < N-1:
bdbh1 = 2
bdbh2 = 2
bh[x] = np.zeros((bdbh1,bdbh2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
bh[x][:,:,nx,nxp] = np.array([[1,0],[c*(h[nxp,nxp]-h[nx,nx]),1]])
elif x == N-1:
bdbh1 = 2
bdbh2 = 1
bh[x] = np.zeros((bdbh1,bdbh2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
bh[x][:,:,nx,nxp] = np.array([[1],[c*(h[nxp,nxp]-h[nx,nx])]])
if da == d:
# a is operator
b[x] = np.zeros((bdbh1*bda1,bdbh2*bda2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
b[x][:,:,nx,nxp] = np.kron(bh[x][:,:,nx,nxp],a[x][:,:,nx,nxp])
elif da == d**2:
# a is superoperator (vectorized channel)
bh[x] = np.reshape(bh[x],(bdbh1,bdbh2,d**2),order='F')
b[x] = np.zeros((bdbh1*bda1,bdbh2*bda2,d**2,d**2),dtype=complex)
for nx in range(d**2):
for nxp in range(d**2):
b[x][:,:,nx,nxp] = np.kron(bh[x][:,:,nx],a[x][:,:,nx,nxp])
elif BC == 'P':
da = np.shape(a)[2]
bda = np.shape(a)[0]
if N == 1:
bdbh = 1
else:
bdbh = 2
bh = np.zeros((bdbh,bdbh,d,d,N),dtype=complex)
for nx in range(d):
for nxp in range(d):
if N == 1:
bh[:,:,nx,nxp,0] = c*(h[nxp,nxp]-h[nx,nx])
else:
bh[:,:,nx,nxp,0] = np.array([[c*(h[nxp,nxp]-h[nx,nx]),1],[0,0]])
for x in range(1,N-1):
bh[:,:,nx,nxp,x] = np.array([[1,0],[c*(h[nxp,nxp]-h[nx,nx]),1]])
bh[:,:,nx,nxp,N-1] = np.array([[1,0],[c*(h[nxp,nxp]-h[nx,nx]),0]])
if da == d:
# a is operator
b = np.zeros((bdbh*bda,bdbh*bda,d,d,N),dtype=complex)
for nx in range(d):
for nxp in range(d):
for x in range(N):
b[:,:,nx,nxp,x] = np.kron(bh[:,:,nx,nxp,x],a[:,:,nx,nxp,x])
elif da == d**2:
# a is superoperator (vectorized channel)
bh = np.reshape(bh,(bdbh,bdbh,d**2,N),order='F')
b = np.zeros((bdbh*bda,bdbh*bda,d**2,d**2,N),dtype=complex)
for nx in range(d**2):
for nxp in range(d**2):
for x in range(N):
b[:,:,nx,nxp,x] = np.kron(bh[:,:,nx,x],a[:,:,nx,nxp,x])
return b
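The bh tensors above encode, site by site, the element-wise identity that makes the commutator with a diagonal generator cheap: for diagonal h, [a, c*h] has entries c*(h[n',n'] - h[n,n]) * a[n,n'], so no extra contraction over physical indices is needed. A quick dense check of this identity (illustrative only, not part of the module):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
hdiag = rng.standard_normal(d)
h = np.diag(hdiag)                      # diagonal generator, as required
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
c = 1j                                  # the factor used in fin_create_channel_derivative

direct = a @ (c * h) - (c * h) @ a      # [a, c*h]
elementwise = c * (hdiag[np.newaxis, :] - hdiag[:, np.newaxis]) * a
```

For a non-diagonal h the element-wise form no longer holds, which is why the function warns when h has off-diagonal entries.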
def fin_enlarge_bdl(cold,factor):
"""
Enlarge bond dimension of SLD MPO. Function for finite size systems.
Parameters:
cold: SLD MPO, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
factor: factor determining the average ratio between the old and the newly added values of the SLD MPO
Returns:
c: SLD MPO with bd += 1
"""
rng = np.random.default_rng()
if type(cold) is list:
n = len(cold)
if n == 1:
warnings.warn('Tensor networks with OBC and length one have to have bond dimension equal to one.')
c = cold
else:
c = [0]*n
x = 0
d = np.shape(cold[x])[2]
bdl1 = 1
bdl2 = np.shape(cold[x])[1]+1
c[x] = np.zeros((bdl1,bdl2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
meanrecold = np.sum(np.abs(np.real(cold[x][:,:,nx,nxp])))/(bdl2-1)
meanimcold = np.sum(np.abs(np.imag(cold[x][:,:,nx,nxp])))/(bdl2-1)
c[x][:,:,nx,nxp] = (meanrecold*rng.random((bdl1,bdl2))+1j*meanimcold*rng.random((bdl1,bdl2)))*factor
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[x][0:bdl1,0:bdl2-1,:,:] = cold[x] # bdl1 == 1 at the first site, so the full first bond index is kept
for x in range(1,n-1):
d = np.shape(cold[x])[2]
bdl1 = np.shape(cold[x])[0]+1
bdl2 = np.shape(cold[x])[1]+1
c[x] = np.zeros((bdl1,bdl2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
meanrecold = np.sum(np.abs(np.real(cold[x][:,:,nx,nxp])))/((bdl1-1)*(bdl2-1))
meanimcold = np.sum(np.abs(np.imag(cold[x][:,:,nx,nxp])))/((bdl1-1)*(bdl2-1))
c[x][:,:,nx,nxp] = (meanrecold*rng.random((bdl1,bdl2))+1j*meanimcold*rng.random((bdl1,bdl2)))*factor
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[x][0:bdl1-1,0:bdl2-1,:,:] = cold[x]
x = n-1
d = np.shape(cold[x])[2]
bdl1 = np.shape(cold[x])[0]+1
bdl2 = 1
c[x] = np.zeros((bdl1,bdl2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
meanrecold = np.sum(np.abs(np.real(cold[x][:,:,nx,nxp])))/(bdl1-1)
meanimcold = np.sum(np.abs(np.imag(cold[x][:,:,nx,nxp])))/(bdl1-1)
c[x][:,:,nx,nxp] = (meanrecold*rng.random((bdl1,bdl2))+1j*meanimcold*rng.random((bdl1,bdl2)))*factor
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[x][0:bdl1-1,0:bdl2,:,:] = cold[x] # bdl2 == 1 at the last site, so the full second bond index is kept
elif type(cold) is np.ndarray:
n = np.shape(cold)[4]
d = np.shape(cold)[2]
bdl = np.shape(cold)[0]+1
c = np.zeros((bdl,bdl,d,d,n),dtype=complex)
for nx in range(d):
for nxp in range(d):
for x in range(n):
meanrecold = np.sum(np.abs(np.real(cold[:,:,nx,nxp,x])))/(bdl-1)**2
meanimcold = np.sum(np.abs(np.imag(cold[:,:,nx,nxp,x])))/(bdl-1)**2
c[:,:,nx,nxp,x] = (meanrecold*rng.random((bdl,bdl))+1j*meanimcold*rng.random((bdl,bdl)))*factor
c = (c + np.conj(np.moveaxis(c,2,3)))/2
c[0:bdl-1,0:bdl-1,:,:,:] = cold
return c
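A minimal standalone sketch (assumed shapes, not the library routine) of the bond-enlargement idea above: embed the old `(bd,bd,d,d)` site tensor into a `(bd+1,bd+1,d,d)` tensor whose new entries are random numbers of comparable magnitude scaled by `factor`, then restore Hermiticity in the physical indices.

```python
import numpy as np

rng = np.random.default_rng(0)
bd, d, factor = 2, 2, 0.5
cold = rng.random((bd, bd, d, d)) + 1j*rng.random((bd, bd, d, d))
cold = (cold + np.conj(np.moveaxis(cold, 2, 3)))/2  # Hermitian in the physical indices

cnew = np.zeros((bd+1, bd+1, d, d), dtype=complex)
for nx in range(d):
    for nxp in range(d):
        # scale new entries to the mean magnitude of the old block
        scale = np.sum(np.abs(np.real(cold[:, :, nx, nxp])))/bd**2
        cnew[:, :, nx, nxp] = factor*scale*rng.random((bd+1, bd+1))
cnew = (cnew + np.conj(np.moveaxis(cnew, 2, 3)))/2
cnew[:bd, :bd, :, :] = cold  # old values survive exactly in the top-left bond block

print(np.allclose(cnew[:bd, :bd], cold))  # True
```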
def fin_enlarge_bdpsi(a0old,factor):
"""
Enlarge bond dimension of wave function MPS. Function for finite size systems.
Parameters:
a0old: wave function MPS, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
factor: factor determining on average the relation between the last and next-to-last values of the diagonals of the wave function MPS
Returns:
a0: wave function MPS with bd += 1
"""
rng = np.random.default_rng()
if type(a0old) is list:
n = len(a0old)
if n == 1:
warnings.warn('Tensor networks with OBC and length one have to have bond dimension equal to one.')
a0 = a0old
else:
a0 = [0]*n
x = 0
d = np.shape(a0old[x])[2]
bdpsi1 = 1
bdpsi2 = np.shape(a0old[x])[1]+1
a0[x] = np.zeros((bdpsi1,bdpsi2,d),dtype=complex)
for nx in range(d):
meanrea0old = np.sum(np.abs(np.real(a0old[x][:,:,nx])))/(bdpsi2-1)
meanima0old = np.sum(np.abs(np.imag(a0old[x][:,:,nx])))/(bdpsi2-1)
a0[x][:,:,nx] = (meanrea0old*rng.random((bdpsi1,bdpsi2))+1j*meanima0old*rng.random((bdpsi1,bdpsi2)))*factor
a0[x][0:bdpsi1,0:bdpsi2-1,:] = a0old[x] # bdpsi1 == 1 at the first site, so the full first bond index is kept
for x in range(1,n-1):
d = np.shape(a0old[x])[2]
bdpsi1 = np.shape(a0old[x])[0]+1
bdpsi2 = np.shape(a0old[x])[1]+1
a0[x] = np.zeros((bdpsi1,bdpsi2,d),dtype=complex)
for nx in range(d):
meanrea0old = np.sum(np.abs(np.real(a0old[x][:,:,nx])))/((bdpsi1-1)*(bdpsi2-1))
meanima0old = np.sum(np.abs(np.imag(a0old[x][:,:,nx])))/((bdpsi1-1)*(bdpsi2-1))
a0[x][:,:,nx] = (meanrea0old*rng.random((bdpsi1,bdpsi2))+1j*meanima0old*rng.random((bdpsi1,bdpsi2)))*factor
a0[x][0:bdpsi1-1,0:bdpsi2-1,:] = a0old[x]
x = n-1
d = np.shape(a0old[x])[2]
bdpsi1 = np.shape(a0old[x])[0]+1
bdpsi2 = 1
a0[x] = np.zeros((bdpsi1,bdpsi2,d),dtype=complex)
for nx in range(d):
meanrea0old = np.sum(np.abs(np.real(a0old[x][:,:,nx])))/(bdpsi1-1)
meanima0old = np.sum(np.abs(np.imag(a0old[x][:,:,nx])))/(bdpsi1-1)
a0[x][:,:,nx] = (meanrea0old*rng.random((bdpsi1,bdpsi2))+1j*meanima0old*rng.random((bdpsi1,bdpsi2)))*factor
a0[x][0:bdpsi1-1,0:bdpsi2,:] = a0old[x] # bdpsi2 == 1 at the last site, so the full second bond index is kept
tensors = [np.conj(a0[n-1]),a0[n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
r1 = ncon(tensors,legs)
a0[n-1] = a0[n-1]/np.sqrt(np.linalg.norm(np.reshape(r1,-1,order='F')))
tensors = [np.conj(a0[n-1]),a0[n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
r2 = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[x]),a0[x],r2]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
r1 = ncon(tensors,legs)
a0[x] = a0[x]/np.sqrt(np.linalg.norm(np.reshape(r1,-1,order='F')))
tensors = [np.conj(a0[x]),a0[x],r2]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
r2 = ncon(tensors,legs)
tensors = [np.conj(a0[0]),a0[0],r2]
legs = [[4,2,1],[5,3,1],[2,3,4,5]]
r1 = ncon(tensors,legs)
a0[0] = a0[0]/np.sqrt(np.abs(r1))
elif type(a0old) is np.ndarray:
n = np.shape(a0old)[3]
d = np.shape(a0old)[2]
bdpsi = np.shape(a0old)[0]+1
a0 = np.zeros((bdpsi,bdpsi,d,n),dtype=complex)
for nx in range(d):
for x in range(n):
meanrea0old = np.sum(np.abs(np.real(a0old[:,:,nx,x])))/(bdpsi-1)**2
meanima0old = np.sum(np.abs(np.imag(a0old[:,:,nx,x])))/(bdpsi-1)**2
a0[:,:,nx,x] = (meanrea0old*rng.random((bdpsi,bdpsi))+1j*meanima0old*rng.random((bdpsi,bdpsi)))*factor
a0[0:bdpsi-1,0:bdpsi-1,:,:] = a0old
if n == 1:
tensors = [np.conj(a0[:,:,:,0]),a0[:,:,:,0]]
legs = [[2,2,1],[3,3,1]]
r1 = ncon(tensors,legs)
a0[:,:,:,0] = a0[:,:,:,0]/np.sqrt(np.abs(r1))
else:
tensors = [np.conj(a0[:,:,:,n-1]),a0[:,:,:,n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
r1 = ncon(tensors,legs)
a0[:,:,:,n-1] = a0[:,:,:,n-1]/np.sqrt(np.linalg.norm(np.reshape(r1,-1,order='F')))
tensors = [np.conj(a0[:,:,:,n-1]),a0[:,:,:,n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
r2 = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[:,:,:,x]),a0[:,:,:,x],r2]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
r1 = ncon(tensors,legs)
a0[:,:,:,x] = a0[:,:,:,x]/np.sqrt(np.linalg.norm(np.reshape(r1,-1,order='F')))
tensors = [np.conj(a0[:,:,:,x]),a0[:,:,:,x],r2]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
r2 = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),a0[:,:,:,0],r2]
legs = [[4,2,1],[5,3,1],[2,3,4,5]]
r1 = ncon(tensors,legs)
a0[:,:,:,0] = a0[:,:,:,0]/np.sqrt(np.abs(r1))
return a0
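The second half of the function above normalizes the enlarged MPS by sweeping transfer matrices with `ncon`. A small illustrative sketch of the same idea with `np.einsum` (hypothetical shapes, not the library code): contract transfer matrices left to right to get ⟨ψ|ψ⟩, then spread the normalization over the sites.

```python
import numpy as np

rng = np.random.default_rng(1)
d, bd = 2, 3
# a toy OBC MPS: boundary bond dimensions are 1
a0 = [rng.random((1, bd, d)), rng.random((bd, bd, d)), rng.random((bd, 1, d))]

# transfer-matrix sweep: env[i,j] accumulates <psi|psi> partial contractions
env = np.ones((1, 1))
for site in a0:
    env = np.einsum('ab,aic,bjc->ij', env, np.conj(site), site)
norm2 = env[0, 0].real  # <psi|psi>

# divide each site so that the total norm becomes one
a0 = [site/norm2**(1/(2*len(a0))) for site in a0]
```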
#########################################
# 1.2.1 Problems with exact derivative. #
#########################################
def fin_FoM_FoMD_optbd(n,d,bc,ch,chp,cini=None,a0ini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True,bdpsimax=100,alwaysbdpsimax=False):
"""
Iterative optimization of FoM/FoMD over the SLD MPO and the initial wave function MPS, together with a check of convergence in the bond dimensions. Function for finite size systems.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
ch: MPO for quantum channel, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
chp: MPO for generalized derivative of quantum channel, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
cini: initial MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
a0ini: initial MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for SLD MPO, default value is 100
alwaysbdlmax: boolean value, True if the maximal value of bd for the SLD MPO has to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
bdpsimax: maximal value of bd for initial wave function MPS, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for the initial wave function MPS has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoM/FoMD
resultm: matrix describing FoM/FoMD as a function of bd of the SLD MPO [rows] and of the initial wave function MPS [columns]
c: optimal MPO for SLD
a0: optimal MPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
if bc == 'O':
a0 = a0[np.newaxis,np.newaxis,:]
a0 = [a0]*n
elif bc == 'P':
a0 = a0[np.newaxis,np.newaxis,:,np.newaxis]
a0 = np.tile(a0,(1,1,1,n))
else:
a0 = a0ini
if bc == 'O':
bdpsi = max([np.shape(a0[i])[0] for i in range(n)])
a0 = [a0[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
if cini is None:
bdl = 1
rng = np.random.default_rng()
if bc == 'O':
c = [0]*n
c[0] = (rng.random((1,bdl,d,d)) + 1j*rng.random((1,bdl,d,d)))/bdl
c[0] = (c[0] + np.conj(np.moveaxis(c[0],2,3)))/2
for x in range(1,n-1):
c[x] = (rng.random((bdl,bdl,d,d)) + 1j*rng.random((bdl,bdl,d,d)))/bdl
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[n-1] = (rng.random((bdl,1,d,d)) + 1j*rng.random((bdl,1,d,d)))/bdl
c[n-1] = (c[n-1] + np.conj(np.moveaxis(c[n-1],2,3)))/2
elif bc == 'P':
c = (rng.random((bdl,bdl,d,d,n)) + 1j*rng.random((bdl,bdl,d,d,n)))/bdl
c = (c + np.conj(np.moveaxis(c,2,3)))/2
else:
c = cini
if bc == 'O':
bdl = max([np.shape(c[i])[0] for i in range(n)])
c = [c[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdl = np.shape(c)[0]
c = c.astype(complex)
resultm = np.zeros((bdlmax,bdpsimax),dtype=float)
resultm[bdl-1,bdpsi-1],c,a0 = fin_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,imprecision,lherm)
if bc == 'O' and n == 1:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
return result,resultm,c,a0
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
while True:
if bdpsi == bdpsimax:
break
else:
a0old = a0
bdpsi += 1
i = 0
while True:
a0 = fin_enlarge_bdpsi(a0,factorv[i])
resultm[bdl-1,bdpsi-1],cnew,a0new = fin_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-1,bdpsi-2]:
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-1,bdpsi-2]:
bdpsi -= 1
a0 = a0old
a0copy = a0new
ccopy = cnew
break
else:
a0 = a0new
c = cnew
if problem:
break
if bdl == bdlmax:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi]
break
else:
bdl += 1
i = 0
while True:
c = fin_enlarge_bdl(c,factorv[i])
resultm[bdl-1,bdpsi-1],cnew,a0new = fin_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-2,bdpsi-1]:
a0 = a0new
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-2,bdpsi-1]:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
if resultm[bdl-1,bdpsi-1] < resultm[bdl-2,bdpsi]:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
bdl -= 1
bdpsi += 1
result = resultm[bdl-1,bdpsi-1]
else:
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi-1]
break
if not(problem):
break
return result,resultm,c,a0
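A sketch of the bond-dimension acceptance rule applied in the function above: a larger bond dimension is kept only if the optimized value improves by more than the relative threshold `imprecision`; otherwise the search stops at the previous bond dimension. The result values below are hypothetical stand-ins for the optimized FoM.

```python
imprecision = 10**-2
results = [1.0, 1.05, 1.055]        # hypothetical optimized FoM at bd = 1, 2, 3
accepted_bd = 1
for bd in range(2, len(results) + 1):
    if results[bd-1] >= (1 + imprecision)*results[bd-2]:
        accepted_bd = bd            # relative gain above threshold: keep enlarging
    else:
        break                       # gain below threshold: converged at previous bd

print(accepted_bd)  # 2
```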
def fin_FoM_optbd(n,d,bc,a,b,cini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True):
"""
Optimization of FoM over the SLD MPO, together with a check of convergence in the bond dimension. Function for finite size systems.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
a: MPO for density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
b: MPO for generalized derivative of density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
cini: initial MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for SLD MPO, default value is 100
alwaysbdlmax: boolean value, True if the maximal value of bd for the SLD MPO has to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
result: optimal value of FoM
resultv: vector describing FoM as a function of bd of the SLD MPO
c: optimal MPO for SLD
"""
while True:
if cini is None:
bdl = 1
rng = np.random.default_rng()
if bc == 'O':
c = [0]*n
c[0] = (rng.random((1,bdl,d,d)) + 1j*rng.random((1,bdl,d,d)))/bdl
c[0] = (c[0] + np.conj(np.moveaxis(c[0],2,3)))/2
for x in range(1,n-1):
c[x] = (rng.random((bdl,bdl,d,d)) + 1j*rng.random((bdl,bdl,d,d)))/bdl
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[n-1] = (rng.random((bdl,1,d,d)) + 1j*rng.random((bdl,1,d,d)))/bdl
c[n-1] = (c[n-1] + np.conj(np.moveaxis(c[n-1],2,3)))/2
elif bc == 'P':
c = (rng.random((bdl,bdl,d,d,n)) + 1j*rng.random((bdl,bdl,d,d,n)))/bdl
c = (c + np.conj(np.moveaxis(c,2,3)))/2
else:
c = cini
if bc == 'O':
bdl = max([np.shape(c[i])[0] for i in range(n)])
c = [c[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdl = np.shape(c)[0]
c = c.astype(complex)
resultv = np.zeros(bdlmax,dtype=float)
if bc == 'O':
resultv[bdl-1],c = fin_FoM_OBC_optm(a,b,c,imprecision,lherm)
if n == 1:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
return result,resultv,c
elif bc == 'P':
resultv[bdl-1],c = fin_FoM_PBC_optm(a,b,c,imprecision,lherm)
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
if bdl == bdlmax:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
else:
bdl += 1
i = 0
while True:
c = fin_enlarge_bdl(c,factorv[i])
if bc == 'O':
resultv[bdl-1],cnew = fin_FoM_OBC_optm(a,b,c,imprecision,lherm)
elif bc == 'P':
resultv[bdl-1],cnew = fin_FoM_PBC_optm(a,b,c,imprecision,lherm)
if resultv[bdl-1] >= resultv[bdl-2]:
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultv[bdl-1] < (1+imprecision)*resultv[bdl-2]:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
if not(problem):
break
return result,resultv,c
def fin_FoMD_optbd(n,d,bc,c2d,cpd,a0ini=None,imprecision=10**-2,bdpsimax=100,alwaysbdpsimax=False):
"""
Optimization of FoMD over the initial wave function MPS, together with a check of convergence in the bond dimension. Function for finite size systems.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
c2d: MPO for square of dual of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
cpd: MPO for dual of generalized derivative of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
a0ini: initial MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdpsimax: maximal value of bd for initial wave function MPS, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for the initial wave function MPS has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoMD
resultv: vector describing FoMD as a function of bd of the initial wave function MPS
a0: optimal MPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
if bc == 'O':
a0 = a0[np.newaxis,np.newaxis,:]
a0 = [a0]*n
elif bc == 'P':
a0 = a0[np.newaxis,np.newaxis,:,np.newaxis]
a0 = np.tile(a0,(1,1,1,n))
else:
a0 = a0ini
if bc == 'O':
bdpsi = max([np.shape(a0[i])[0] for i in range(n)])
a0 = [a0[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
resultv = np.zeros(bdpsimax,dtype=float)
if bc == 'O':
resultv[bdpsi-1],a0 = fin_FoMD_OBC_optm(c2d,cpd,a0,imprecision)
if n == 1:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
return result,resultv,a0
elif bc == 'P':
resultv[bdpsi-1],a0 = fin_FoMD_PBC_optm(c2d,cpd,a0,imprecision)
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
if bdpsi == bdpsimax:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
else:
bdpsi += 1
i = 0
while True:
a0 = fin_enlarge_bdpsi(a0,factorv[i])
if bc == 'O':
resultv[bdpsi-1],a0new = fin_FoMD_OBC_optm(c2d,cpd,a0,imprecision)
elif bc == 'P':
resultv[bdpsi-1],a0new = fin_FoMD_PBC_optm(c2d,cpd,a0,imprecision)
if resultv[bdpsi-1] >= resultv[bdpsi-2]:
a0 = a0new
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultv[bdpsi-1] < (1+imprecision)*resultv[bdpsi-2]:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
if not(problem):
break
return result,resultv,a0
def fin_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,imprecision=10**-2,lherm=True):
"""
Iterative optimization of FoM/FoMD over SLD MPO and initial wave function MPS. Function for finite size systems.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
c: MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
a0: MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
ch: MPO for quantum channel, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
chp: MPO for generalized derivative of quantum channel, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
fval: optimal value of FoM/FoMD
c: optimal MPO for SLD
a0: optimal MPS for initial wave function
"""
relunc_f = 0.1*imprecision
if bc == 'O':
chd = [0]*n
chpd = [0]*n
for x in range(n):
chd[x] = np.conj(np.moveaxis(ch[x],2,3))
chpd[x] = np.conj(np.moveaxis(chp[x],2,3))
elif bc == 'P':
chd = np.conj(np.moveaxis(ch,2,3))
chpd = np.conj(np.moveaxis(chp,2,3))
f = np.array([])
iter_f = 0
while True:
a0_dm = wave_function_to_density_matrix(a0)
a = channel_acting_on_operator(ch,a0_dm)
b = channel_acting_on_operator(chp,a0_dm)
if bc == 'O':
fom,c = fin_FoM_OBC_optm(a,b,c,imprecision,lherm)
elif bc == 'P':
fom,c = fin_FoM_PBC_optm(a,b,c,imprecision,lherm)
f = np.append(f,fom)
if iter_f >= 2 and np.std(f[-4:])/np.mean(f[-4:]) <= relunc_f:
break
if bc == 'O':
c2 = [0]*n
for x in range(n):
bdl1 = np.shape(c[x])[0]
bdl2 = np.shape(c[x])[1]
c2[x] = np.zeros((bdl1**2,bdl2**2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
for nxpp in range(d):
c2[x][:,:,nx,nxp] = c2[x][:,:,nx,nxp]+np.kron(c[x][:,:,nx,nxpp],c[x][:,:,nxpp,nxp])
elif bc == 'P':
bdl = np.shape(c)[0]
c2 = np.zeros((bdl**2,bdl**2,d,d,n),dtype=complex)
for nx in range(d):
for nxp in range(d):
for nxpp in range(d):
for x in range(n):
c2[:,:,nx,nxp,x] = c2[:,:,nx,nxp,x]+np.kron(c[:,:,nx,nxpp,x],c[:,:,nxpp,nxp,x])
c2d = channel_acting_on_operator(chd,c2)
cpd = channel_acting_on_operator(chpd,c)
if bc == 'O':
fomd,a0 = fin_FoMD_OBC_optm(c2d,cpd,a0,imprecision)
elif bc == 'P':
fomd,a0 = fin_FoMD_PBC_optm(c2d,cpd,a0,imprecision)
f = np.append(f,fomd)
iter_f += 1
fval = f[-1]
return fval,c,a0
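A sketch of the stopping rule used in the see-saw loop above: iterate until the relative spread of the last few recorded figure-of-merit values drops below `0.1*imprecision`. The geometric sequence below is a synthetic stand-in for the alternating FoM/FoMD sweeps.

```python
import numpy as np

imprecision = 10**-2
relunc_f = 0.1*imprecision
f = np.array([])
value, iter_f = 1.0, 0
while True:
    value += 0.5**(iter_f + 1)   # stand-in for one optimization sweep
    f = np.append(f, value)
    # stop once the last (up to four) values agree to within relunc_f
    if iter_f >= 2 and np.std(f[-4:])/np.mean(f[-4:]) <= relunc_f:
        break
    iter_f += 1
```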
def fin_FoM_OBC_optm(a,b,c,imprecision=10**-2,lherm=True):
"""
Optimization of FoM over MPO for SLD. Function for finite size systems with OBC.
Parameters:
a: MPO for density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
b: MPO for generalized derivative of density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
c: MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
fomval: optimal value of FoM
c: optimal MPO for SLD
"""
n = len(c)
tol_fom = 0.1*imprecision/n**2
if n == 1:
if np.shape(a[0])[0] == 1 and np.shape(b[0])[0] == 1 and np.shape(c[0])[0] == 1:
d = np.shape(c[0])[2]
tensors = [b[0][0,0,:,:]]
legs = [[-2,-1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[0][0,0,:,:],np.eye(d)]
legs = [[-2,-3],[-4,-1]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(d*d,d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[0][0,0,:,:] = np.reshape(cv,(d,d),order='F')
if lherm:
c[0] = (c[0]+np.conj(np.moveaxis(c[0],2,3)))/2
cv = np.reshape(c[0],-1,order='F')
fomval = np.real(2*cv @ l1 - cv @ l2 @ cv)
else:
warnings.warn('Tensor networks with OBC and length one have to have bond dimension equal to one.')
else:
relunc_fom = 0.1*imprecision
l1f = [0]*n
l2f = [0]*n
fom = np.array([])
iter_fom = 0
while True:
tensors = [c[n-1],b[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1f[n-2] = ncon(tensors,legs)
l1f[n-2] = l1f[n-2][:,:,0,0]
tensors = [c[n-1],a[n-1],c[n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2f[n-2] = ncon(tensors,legs)
l2f[n-2] = l2f[n-2][:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [c[x],b[x],l1f[x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1f[x-1] = ncon(tensors,legs)
tensors = [c[x],a[x],c[x],l2f[x]]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2f[x-1] = ncon(tensors,legs)
bdl1,bdl2,d,d = np.shape(c[0])
tensors = [b[0],l1f[0]]
legs = [[-5,1,-4,-3],[-2,1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[0],np.eye(d),l2f[0]]
legs = [[-9,1,-4,-7],[-8,-3],[-2,1,-6]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[0] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[0] = (c[0]+np.conj(np.moveaxis(c[0],2,3)))/2
cv = np.reshape(c[0],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
tensors = [c[0],b[0]]
legs = [[-3,-1,1,2],[-4,-2,2,1]]
l1c = ncon(tensors,legs)
l1c = l1c[:,:,0,0]
tensors = [c[0],a[0],c[0]]
legs = [[-4,-1,1,2],[-5,-2,2,3],[-6,-3,3,1]]
l2c = ncon(tensors,legs)
l2c = l2c[:,:,:,0,0,0]
for x in range(1,n-1):
bdl1,bdl2,d,d = np.shape(c[x])
tensors = [l1c,b[x],l1f[x]]
legs = [[-1,1],[1,2,-4,-3],[-2,2]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l2c,a[x],np.eye(d),l2f[x]]
legs = [[-1,1,-5],[1,2,-4,-7],[-8,-3],[-2,2,-6]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[x] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[x] = (c[x]+np.conj(np.moveaxis(c[x],2,3)))/2
cv = np.reshape(c[x],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
tensors = [l1c,c[x],b[x]]
legs = [[3,4],[3,-1,1,2],[4,-2,2,1]]
l1c = ncon(tensors,legs)
tensors = [l2c,c[x],a[x],c[x]]
legs = [[4,5,6],[4,-1,1,2],[5,-2,2,3],[6,-3,3,1]]
l2c = ncon(tensors,legs)
bdl1,bdl2,d,d = np.shape(c[n-1])
tensors = [l1c,b[n-1]]
legs = [[-1,1],[1,-5,-4,-3]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l2c,a[n-1],np.eye(d)]
legs = [[-1,1,-5],[1,-9,-4,-7],[-8,-3]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[n-1] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[n-1] = (c[n-1]+np.conj(np.moveaxis(c[n-1],2,3)))/2
cv = np.reshape(c[n-1],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
iter_fom += 1
if iter_fom >= 2 and all(fom[-2*n:] > 0) and np.std(fom[-2*n:])/np.mean(fom[-2*n:]) <= relunc_fom:
break
fomval = fom[-1]
return fomval,c
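A sketch of the local problem solved at each site above: with the environment tensors contracted into a vector `l1` and a matrix `l2`, the figure of merit is quadratic in the vectorized site tensor, F(c) = 2 c·l1 - c·l2·c, and is maximized via the pseudoinverse of the symmetrized `l2`. The instance below is a small random stand-in, not data from the sweep.

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.random((6, 6))
l2 = m @ m.T                     # positive semidefinite stand-in for the local matrix
l1 = rng.random(6)

dl2 = l2 + l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2)
dl2pinv = (dl2pinv + dl2pinv.T)/2
cv = dl2pinv @ dl1               # maximizer of the quadratic figure of merit

fom_opt = 2*cv @ l1 - cv @ l2 @ cv
c_rand = rng.random(6)
fom_rand = 2*c_rand @ l1 - c_rand @ l2 @ c_rand
print(fom_opt >= fom_rand)  # True: F is concave, so cv is the global maximum
```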
def fin_FoM_PBC_optm(a,b,c,imprecision=10**-2,lherm=True):
"""
Optimization of FoM over MPO for SLD. Function for finite size systems with PBC.
Parameters:
a: MPO for density matrix, expected ndarray of a shape (bd,bd,d,d,n)
b: MPO for generalized derivative of density matrix, expected ndarray of a shape (bd,bd,d,d,n)
c: MPO for SLD, expected ndarray of a shape (bd,bd,d,d,n)
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
fomval: optimal value of FoM
c: optimal MPO for SLD
"""
n = np.shape(a)[4]
d = np.shape(a)[2]
bdr = np.shape(a)[0]
bdrp = np.shape(b)[0]
bdl = np.shape(c)[0]
tol_fom = 0.1*imprecision/n**2
if n == 1:
tensors = [b[:,:,:,:,0],np.eye(bdl)]
legs = [[1,1,-4,-3],[-2,-1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[:,:,:,:,0],np.eye(d),np.eye(bdl),np.eye(bdl)]
legs = [[1,1,-4,-7],[-8,-3],[-2,-1],[-6,-5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,0] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,0] = (c[:,:,:,:,0]+np.conj(np.moveaxis(c[:,:,:,:,0],2,3)))/2
cv = np.reshape(c[:,:,:,:,0],-1,order='F')
fomval = np.real(2*cv @ l1 - cv @ l2 @ cv)
else:
relunc_fom = 0.1*imprecision
l1f = np.zeros((bdl,bdrp,bdl,bdrp,n-1),dtype=complex)
l2f = np.zeros((bdl,bdr,bdl,bdl,bdr,bdl,n-1),dtype=complex)
fom = np.array([])
iter_fom = 0
while True:
tensors = [c[:,:,:,:,n-1],b[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1f[:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1],c[:,:,:,:,n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2f[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [c[:,:,:,:,x],b[:,:,:,:,x],l1f[:,:,:,:,x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1f[:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x],l2f[:,:,:,:,:,:,x]]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6,-4,-5,-6]]
l2f[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [b[:,:,:,:,0],l1f[:,:,:,:,0]]
legs = [[2,1,-4,-3],[-2,1,-1,2]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[:,:,:,:,0],np.eye(d),l2f[:,:,:,:,:,:,0]]
legs = [[2,1,-4,-7],[-8,-3],[-2,1,-6,-1,2,-5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,0] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,0] = (c[:,:,:,:,0]+np.conj(np.moveaxis(c[:,:,:,:,0],2,3)))/2
cv = np.reshape(c[:,:,:,:,0],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
tensors = [c[:,:,:,:,0],b[:,:,:,:,0]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1c = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2c = ncon(tensors,legs)
for x in range(1,n-1):
tensors = [l1c,b[:,:,:,:,x],l1f[:,:,:,:,x]]
legs = [[3,4,-1,1],[1,2,-4,-3],[-2,2,3,4]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l2c,a[:,:,:,:,x],np.eye(d),l2f[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-5],[1,2,-4,-7],[-8,-3],[-2,2,-6,3,4,5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,x] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,x] = (c[:,:,:,:,x]+np.conj(np.moveaxis(c[:,:,:,:,x],2,3)))/2
cv = np.reshape(c[:,:,:,:,x],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
tensors = [l1c,c[:,:,:,:,x],b[:,:,:,:,x]]
legs = [[-1,-2,3,4],[3,-3,1,2],[4,-4,2,1]]
l1c = ncon(tensors,legs)
tensors = [l2c,c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x]]
legs = [[-1,-2,-3,4,5,6],[4,-4,1,2],[5,-5,2,3],[6,-6,3,1]]
l2c = ncon(tensors,legs)
tensors = [l1c,b[:,:,:,:,n-1]]
legs = [[-2,2,-1,1],[1,2,-4,-3]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l2c,a[:,:,:,:,n-1],np.eye(d)]
legs = [[-2,2,-6,-1,1,-5],[1,2,-4,-7],[-8,-3]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,n-1] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,n-1] = (c[:,:,:,:,n-1]+np.conj(np.moveaxis(c[:,:,:,:,n-1],2,3)))/2
cv = np.reshape(c[:,:,:,:,n-1],-1,order='F')
fom = np.append(fom,np.real(2*cv @ l1 - cv @ l2 @ cv))
iter_fom += 1
if iter_fom >= 2 and all(fom[-2*n:] > 0) and np.std(fom[-2*n:])/np.mean(fom[-2*n:]) <= relunc_fom:
break
fomval = fom[-1]
return fomval,c
def fin_FoMD_OBC_optm(c2d,cpd,a0,imprecision=10**-2):
"""
Optimization of FoMD over MPS for initial wave function. Function for finite size systems with OBC.
Parameters:
c2d: MPO for square of dual of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cpd: MPO for dual of generalized derivative of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
a0: MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) (bd can vary between sites)
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
fomdval: optimal value of FoMD
a0: optimal MPS for initial wave function
"""
n = len(a0)
if n == 1:
if np.shape(c2d[0])[0] == 1 and np.shape(cpd[0])[0] == 1 and np.shape(a0[0])[0] == 1:
d = np.shape(a0[0])[2]
tensors = [c2d[0][0,0,:,:]]
legs = [[-1,-2]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(d,d),order='F')
tensors = [cpd[0][0,0,:,:]]
legs = [[-1,-2]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(d,d),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[0][0,0,:] = np.reshape(a0v,(d),order='F')
fomdval = np.real(fomdval[position])
else:
warnings.warn('Tensor networks with OBC and length one have to have bond dimension equal to one.')
else:
relunc_fomd = 0.1*imprecision
l2df = [0]*n
lpdf = [0]*n
fomd = np.array([])
iter_fomd = 0
while True:
tensors = [np.conj(a0[n-1]),c2d[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2df[n-2] = ncon(tensors,legs)
l2df[n-2] = l2df[n-2][:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cpd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdf[n-2] = ncon(tensors,legs)
lpdf[n-2] = lpdf[n-2][:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [np.conj(a0[x]),c2d[x],a0[x],l2df[x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2df[x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cpd[x],a0[x],lpdf[x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpdf[x-1] = ncon(tensors,legs)
bdpsi1,bdpsi2,d = np.shape(a0[0])
tensors = [c2d[0],l2df[0]]
legs = [[-7,1,-3,-6],[-2,1,-5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [cpd[0],lpdf[0]]
legs = [[-7,1,-3,-6],[-2,1,-5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[0] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
a0[0] = np.moveaxis(a0[0],2,0)
a0[0] = np.reshape(a0[0],(d*bdpsi1,bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[0],full_matrices=False)
a0[0] = np.reshape(u,(d,bdpsi1,np.shape(s)[0]),order='F')
a0[0] = np.moveaxis(a0[0],0,2)
tensors = [np.diag(s) @ vh,a0[1]]
legs = [[-1,1],[1,-2,-3]]
a0[1] = ncon(tensors,legs)
tensors = [np.conj(a0[0]),c2d[0],a0[0]]
legs = [[-4,-1,1],[-5,-2,1,2],[-6,-3,2]]
l2dc = ncon(tensors,legs)
l2dc = l2dc[:,:,:,0,0,0]
tensors = [np.conj(a0[0]),cpd[0],a0[0]]
legs = [[-4,-1,1],[-5,-2,1,2],[-6,-3,2]]
lpdc = ncon(tensors,legs)
lpdc = lpdc[:,:,:,0,0,0]
for x in range(1,n-1):
bdpsi1,bdpsi2,d = np.shape(a0[x])
tensors = [l2dc,c2d[x],l2df[x]]
legs = [[-1,1,-4],[1,2,-3,-6],[-2,2,-5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [lpdc,cpd[x],lpdf[x]]
legs = [[-1,1,-4],[1,2,-3,-6],[-2,2,-5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[x] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
a0[x] = np.moveaxis(a0[x],2,0)
a0[x] = np.reshape(a0[x],(d*bdpsi1,bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[x],full_matrices=False)
a0[x] = np.reshape(u,(d,bdpsi1,np.shape(s)[0]),order='F')
a0[x] = np.moveaxis(a0[x],0,2)
tensors = [np.diag(s) @ vh,a0[x+1]]
legs = [[-1,1],[1,-2,-3]]
a0[x+1] = ncon(tensors,legs)
tensors = [l2dc,np.conj(a0[x]),c2d[x],a0[x]]
legs = [[3,4,5],[3,-1,1],[4,-2,1,2],[5,-3,2]]
l2dc = ncon(tensors,legs)
tensors = [lpdc,np.conj(a0[x]),cpd[x],a0[x]]
legs = [[3,4,5],[3,-1,1],[4,-2,1,2],[5,-3,2]]
lpdc = ncon(tensors,legs)
bdpsi1,bdpsi2,d = np.shape(a0[n-1])
tensors = [l2dc,c2d[n-1]]
legs = [[-1,1,-4],[1,-7,-3,-6]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [lpdc,cpd[n-1]]
legs = [[-1,1,-4],[1,-7,-3,-6]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[n-1] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
iter_fomd += 1
for x in range(n-1,0,-1):
bdpsi1,bdpsi2,d = np.shape(a0[x])
a0[x] = np.moveaxis(a0[x],2,1)
a0[x] = np.reshape(a0[x],(bdpsi1,d*bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[x],full_matrices=False)
a0[x] = np.reshape(vh,(np.shape(s)[0],d,bdpsi2),order='F')
a0[x] = np.moveaxis(a0[x],1,2)
tensors = [a0[x-1],u @ np.diag(s)]
legs = [[-1,1,-3],[1,-2]]
a0[x-1] = ncon(tensors,legs)
if iter_fomd >= 2 and all(fomd[-2*n:] > 0) and np.std(fomd[-2*n:])/np.mean(fomd[-2*n:]) <= relunc_fomd:
break
fomdval = fomd[-1]
return fomdval,a0
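# The sweep above repeatedly solves a local Hermitian eigenproblem: maximizing
# 2*<v|lpd|v> - <v|l2d|v> over normalized vectors v reduces to taking the top
# eigenvector of the hermitized matrix 2*lpd - l2d. A minimal, self-contained
# numpy sketch of that single-site step (random stand-in matrices; eigh is used
# here instead of eig since the input is Hermitian by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
l2d = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
lpd = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

eiginput = 2 * lpd - l2d
eiginput = (eiginput + np.conj(eiginput).T) / 2   # enforce Hermiticity

w, vecs = np.linalg.eigh(eiginput)                # eigenvalues in ascending order
v = vecs[:, -1]                                   # top eigenvector
v = v / np.sqrt(np.abs(np.conj(v) @ v))           # normalize, as in the sweep

# the maximized (hermitized) quadratic form equals the top eigenvalue
assert np.isclose(np.real(2 * np.conj(v) @ lpd @ v - np.conj(v) @ l2d @ v), w[-1])
```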
def fin_FoMD_PBC_optm(c2d,cpd,a0,imprecision=10**-2):
"""
Optimization of FoMD over MPS for initial wave function. Function for finite size systems with PBC.
Parameters:
c2d: MPO for square of dual of SLD, expected ndarray of a shape (bd,bd,d,d,n)
cpd: MPO for dual of generalized derivative of SLD, expected ndarray of a shape (bd,bd,d,d,n)
a0: MPS for initial wave function, expected ndarray of a shape (bd,bd,d,n)
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
fomdval: optimal value of FoMD
a0: optimal MPS for initial wave function
"""
n = np.shape(c2d)[4]
d = np.shape(c2d)[2]
bdl2d = np.shape(c2d)[0]
bdlpd = np.shape(cpd)[0]
bdpsi = np.shape(a0)[0]
tol_fomd = 0.1*imprecision/n**2
if n == 1:
tensors = [c2d[:,:,:,:,0],np.eye(bdpsi),np.eye(bdpsi)]
legs = [[1,1,-3,-6],[-2,-1],[-5,-4]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cpd[:,:,:,:,0],np.eye(bdpsi),np.eye(bdpsi)]
legs = [[1,1,-3,-6],[-2,-1],[-5,-4]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [np.eye(bdpsi),np.eye(bdpsi)]
legs = [[-2,-1],[-4,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,0] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomdval = np.real(fomdval[position])
else:
relunc_fomd = 0.1*imprecision
l2df = np.zeros((bdpsi,bdl2d,bdpsi,bdpsi,bdl2d,bdpsi,n-1),dtype=complex)
lpdf = np.zeros((bdpsi,bdlpd,bdpsi,bdpsi,bdlpd,bdpsi,n-1),dtype=complex)
psinormf = np.zeros((bdpsi,bdpsi,bdpsi,bdpsi,n-1),dtype=complex)
fomd = np.array([])
iter_fomd = 0
while True:
tensors = [np.conj(a0[:,:,:,n-1]),c2d[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2df[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cpd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdf[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),a0[:,:,:,n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
psinormf[:,:,:,:,n-2] = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x],l2df[:,:,:,:,:,:,x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
l2df[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x],lpdf[:,:,:,:,:,:,x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
lpdf[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),a0[:,:,:,x],psinormf[:,:,:,:,x]]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
psinormf[:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [c2d[:,:,:,:,0],l2df[:,:,:,:,:,:,0]]
legs = [[2,1,-3,-6],[-2,1,-5,-1,2,-4]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cpd[:,:,:,:,0],lpdf[:,:,:,:,:,:,0]]
legs = [[2,1,-3,-6],[-2,1,-5,-1,2,-4]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormf[:,:,:,:,0]]
legs = [[-2,-4,-1,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,0] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2dc = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdc = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),a0[:,:,:,0]]
legs = [[-1,-3,1],[-2,-4,1]]
psinormc = ncon(tensors,legs)
for x in range(1,n-1):
tensors = [l2dc,c2d[:,:,:,:,x],l2df[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-4],[1,2,-3,-6],[-2,2,-5,3,4,5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [lpdc,cpd[:,:,:,:,x],lpdf[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-4],[1,2,-3,-6],[-2,2,-5,3,4,5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormc,psinormf[:,:,:,:,x]]
legs = [[1,2,-1,-3],[-2,-4,1,2]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,x] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
tensors = [l2dc,np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x]]
legs = [[-1,-2,-3,3,4,5],[3,-4,1],[4,-5,1,2],[5,-6,2]]
l2dc = ncon(tensors,legs)
tensors = [lpdc,np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x]]
legs = [[-1,-2,-3,3,4,5],[3,-4,1],[4,-5,1,2],[5,-6,2]]
lpdc = ncon(tensors,legs)
tensors = [psinormc,np.conj(a0[:,:,:,x]),a0[:,:,:,x]]
legs = [[-1,-2,2,3],[2,-3,1],[3,-4,1]]
psinormc = ncon(tensors,legs)
tensors = [l2dc,c2d[:,:,:,:,n-1]]
legs = [[-2,2,-5,-1,1,-4],[1,2,-3,-6]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [lpdc,cpd[:,:,:,:,n-1]]
legs = [[-2,2,-5,-1,1,-4],[1,2,-3,-6]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormc]
legs = [[-2,-4,-1,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,n-1] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
iter_fomd += 1
if iter_fomd >= 2 and all(fomd[-2*n:] > 0) and np.std(fomd[-2*n:])/np.mean(fomd[-2*n:]) <= relunc_fomd:
break
fomdval = fomd[-1]
return fomdval,a0
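# All sweep loops above share the same stopping rule: iterate until the last
# 2*n recorded FoM(D) values are all positive and their relative spread falls
# below relunc = 0.1*imprecision. A standalone sketch of that criterion
# (converged() and the sample histories below are hypothetical):

```python
import numpy as np

def converged(history, n, relunc):
    # same test as in the sweeps: last 2*n values positive and nearly constant
    tail = np.asarray(history[-2 * n:])
    return (len(history) >= 2 * n and bool(np.all(tail > 0))
            and np.std(tail) / np.mean(tail) <= relunc)

assert not converged([1.0, 1.5, 1.9, 2.0], n=2, relunc=1e-3)   # still improving
assert converged([1.9, 2.0, 2.0001, 2.0002, 2.0001, 2.0002], n=2, relunc=1e-3)
```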
def fin_FoM_OBC_val(a,b,c):
"""
Calculate the value of FoM. Function for finite size systems with OBC.
Parameters:
a: MPO for density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
b: MPO for generalized derivative of density matrix, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
c: MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
Returns:
fomval: value of FoM
"""
n = len(c)
if n == 1:
if np.shape(a[0])[0] == 1 and np.shape(b[0])[0] == 1 and np.shape(c[0])[0] == 1:
tensors = [c[0][0,0,:,:],b[0][0,0,:,:]]
legs = [[1,2],[2,1]]
l1 = ncon(tensors,legs)
tensors = [c[0][0,0,:,:],a[0][0,0,:,:],c[0][0,0,:,:]]
legs = [[1,2],[2,3],[3,1]]
l2 = ncon(tensors,legs)
fomval = 2*l1-l2
else:
warnings.warn('Tensor networks of length one with OBC must have bond dimension equal to one.')
else:
tensors = [c[n-1],b[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1 = ncon(tensors,legs)
l1 = l1[:,:,0,0]
tensors = [c[n-1],a[n-1],c[n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2 = ncon(tensors,legs)
l2 = l2[:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [c[x],b[x],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1 = ncon(tensors,legs)
tensors = [c[x],a[x],c[x],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2 = ncon(tensors,legs)
tensors = [c[0],b[0],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1 = ncon(tensors,legs)
l1 = np.real(l1).item()
tensors = [c[0],a[0],c[0],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2 = ncon(tensors,legs)
l2 = np.real(l2).item()
fomval = 2*l1-l2
return fomval
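# For n = 1 and bond dimension 1 the quantity computed above reduces to
# F(L) = 2*Tr(L rho') - Tr(L rho L), which is maximized by the SLD,
# L_ij = 2*rho'_ij/(p_i + p_j) in the eigenbasis of rho, with maximum equal to
# the quantum Fisher information. Illustrative check (the matrices below are
# hypothetical examples, not package data):

```python
import numpy as np

p = np.array([0.7, 0.3])                      # eigenvalues of rho
rho = np.diag(p)
rhop = np.array([[0.1, 0.2], [0.2, -0.1]])    # Hermitian, traceless derivative

L = 2 * rhop / (p[:, None] + p[None, :])      # exact SLD in rho's eigenbasis

def fom(l):
    return 2 * np.trace(l @ rhop) - np.trace(l @ rho @ l)

qfi = np.trace(rhop @ L)                      # quantum Fisher information
assert np.isclose(fom(L), qfi)
assert fom(L + 0.1 * np.eye(2)) < fom(L)      # any other Hermitian L does worse
```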
def fin_FoM_PBC_val(a,b,c):
"""
Calculate the value of FoM. Function for finite size systems with PBC.
Parameters:
a: MPO for a density matrix, expected ndarray of a shape (bd,bd,d,d,n)
b: MPO for generalized derivative of a density matrix, expected ndarray of a shape (bd,bd,d,d,n)
c: MPO for the SLD, expected ndarray of a shape (bd,bd,d,d,n)
Returns:
fomval: value of FoM
"""
n = np.shape(a)[4]
if n == 1:
tensors = [c[:,:,:,:,0],b[:,:,:,:,0]]
legs = [[3,3,1,2],[4,4,2,1]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0]]
legs = [[4,4,1,2],[5,5,2,3],[6,6,3,1]]
l2 = ncon(tensors,legs)
fomval = 2*l1-l2
else:
tensors = [c[:,:,:,:,n-1],b[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1],c[:,:,:,:,n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2 = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [c[:,:,:,:,x],b[:,:,:,:,x],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6,-4,-5,-6]]
l2 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],b[:,:,:,:,0],l1]
legs = [[5,3,1,2],[6,4,2,1],[3,4,5,6]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0],l2]
legs = [[7,4,1,2],[8,5,2,3],[9,6,3,1],[4,5,6,7,8,9]]
l2 = ncon(tensors,legs)
fomval = 2*l1-l2
return fomval
def fin_FoMD_OBC_val(c2d,cpd,a0):
"""
Calculate the value of FoMD. Function for finite size systems with OBC.
Parameters:
c2d: MPO for square of dual of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cpd: MPO for dual of generalized derivative of SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
a0: MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) (bd can vary between sites)
Returns:
fomdval: value of FoMD
"""
n = len(a0)
if n == 1:
if np.shape(c2d[0])[0] == 1 and np.shape(cpd[0])[0] == 1 and np.shape(a0[0])[0] == 1:
tensors = [np.conj(a0[0][0,0,:]),c2d[0][0,0,:,:],a0[0][0,0,:]]
legs = [[1],[1,2],[2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[0][0,0,:]),cpd[0][0,0,:,:],a0[0][0,0,:]]
legs = [[1],[1,2],[2]]
lpd = ncon(tensors,legs)
fomdval = 2*lpd-l2d
else:
warnings.warn('Tensor networks of length one with OBC must have bond dimension equal to one.')
else:
tensors = [np.conj(a0[n-1]),c2d[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2d = ncon(tensors,legs)
l2d = l2d[:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cpd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpd = ncon(tensors,legs)
lpd = lpd[:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [np.conj(a0[x]),c2d[x],a0[x],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cpd[x],a0[x],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[0]),c2d[0],a0[0],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2d = ncon(tensors,legs)
l2d = np.real(l2d).item()
tensors = [np.conj(a0[0]),cpd[0],a0[0],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpd = ncon(tensors,legs)
lpd = np.real(lpd).item()
fomdval = 2*lpd-l2d
return fomdval
def fin_FoMD_PBC_val(c2d,cpd,a0):
"""
Calculate the value of FoMD. Function for finite size systems with PBC.
Parameters:
c2d: MPO for square of dual of the SLD, expected ndarray of a shape (bd,bd,d,d,n)
cpd: MPO for dual of generalized derivative of the SLD, expected ndarray of a shape (bd,bd,d,d,n)
a0: MPS for the initial wave function, expected ndarray of a shape (bd,bd,d,n)
Returns:
fomdval: value of FoMD
"""
n = np.shape(c2d)[4]
if n == 1:
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0]]
legs = [[3,3,1],[4,4,1,2],[5,5,2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[3,3,1],[4,4,1,2],[5,5,2]]
lpd = ncon(tensors,legs)
fomdval = 2*lpd-l2d
else:
tensors = [np.conj(a0[:,:,:,n-1]),c2d[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cpd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpd = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0],l2d]
legs = [[6,3,1],[7,4,1,2],[8,5,2],[3,4,5,6,7,8]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0],lpd]
legs = [[6,3,1],[7,4,1,2],[8,5,2],[3,4,5,6,7,8]]
lpd = ncon(tensors,legs)
fomdval = 2*lpd-l2d
return fomdval
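# With PBC the MPS cannot be brought to a canonical form site by site, so
# <psi|psi> enters through a nontrivial (possibly singular) Gram matrix and
# each local update in the PBC optimizers above becomes a generalized
# eigenproblem, regularized with np.linalg.pinv. A minimal sketch of that step
# with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
A = rng.standard_normal((m, m))
A = (A + A.T) / 2                        # Hermitian objective, like 2*lpd - l2d
X = rng.standard_normal((m, m))
N = X @ X.T                              # positive semidefinite Gram matrix
N = (N + N.T) / 2

tol = 1e-10
Npinv = np.linalg.pinv(N, tol, hermitian=True)
Npinv = (Npinv + Npinv.T) / 2

w, vecs = np.linalg.eig(Npinv @ A)       # pseudoinverse-regularized problem
pos = np.argmax(np.real(w))
v = vecs[:, pos]
v = v / np.sqrt(np.abs(np.conj(v) @ N @ v))   # normalize in the N metric

assert np.isclose(np.real(np.conj(v) @ N @ v), 1.0)
# Rayleigh quotient in the N metric equals the selected eigenvalue
assert np.isclose(np.real(np.conj(v) @ A @ v), np.real(w[pos]))
```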
#################################################################
# 1.2.2 Problems with discrete approximation of the derivative. #
#################################################################
def fin2_FoM_FoMD_optbd(n,d,bc,ch,chp,epsilon,cini=None,a0ini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True,bdpsimax=100,alwaysbdpsimax=False):
"""
Iterative optimization of FoM/FoMD over SLD MPO and initial wave function MPS and also a check of convergence with increasing bond dimensions. Function for finite size systems. Version with two channels separated by epsilon.
Parameters:
n: number of sites in TN
d: dimension of the local Hilbert space (dimension of the physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
ch: MPO for a quantum channel at the value of estimated parameter phi=phi_0, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
chp: MPO for a quantum channel at the value of estimated parameter phi=phi_0+epsilon, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
epsilon: value of a separation between estimated parameters encoded in ch and chp, float
cini: initial MPO for the SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
a0ini: initial MPS for the initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for SLD MPO, default value is 100
alwaysbdlmax: boolean value, True if the maximal value of bd for SLD MPO has to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
bdpsimax: maximal value of bd for the initial wave function MPS, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for initial wave function MPS has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoM/FoMD
resultm: matrix describing FoM/FoMD as a function of the bd of the SLD MPO [rows] and of the initial wave function MPS [columns]
c: optimal MPO for SLD
a0: optimal MPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
if bc == 'O':
a0 = a0[np.newaxis,np.newaxis,:]
a0 = [a0]*n
elif bc == 'P':
a0 = a0[np.newaxis,np.newaxis,:,np.newaxis]
a0 = np.tile(a0,(1,1,1,n))
else:
a0 = a0ini
if bc == 'O':
bdpsi = max([np.shape(a0[i])[0] for i in range(n)])
a0 = [a0[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
if cini is None:
bdl = 1
rng = np.random.default_rng()
if bc == 'O':
c = [0]*n
c[0] = (rng.random((1,bdl,d,d)) + 1j*rng.random((1,bdl,d,d)))/bdl
c[0] = (c[0] + np.conj(np.moveaxis(c[0],2,3)))/2
for x in range(1,n-1):
c[x] = (rng.random((bdl,bdl,d,d)) + 1j*rng.random((bdl,bdl,d,d)))/bdl
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[n-1] = (rng.random((bdl,1,d,d)) + 1j*rng.random((bdl,1,d,d)))/bdl
c[n-1] = (c[n-1] + np.conj(np.moveaxis(c[n-1],2,3)))/2
elif bc == 'P':
c = (rng.random((bdl,bdl,d,d,n)) + 1j*rng.random((bdl,bdl,d,d,n)))/bdl
c = (c + np.conj(np.moveaxis(c,2,3)))/2
else:
c = cini
if bc == 'O':
bdl = max([np.shape(c[i])[0] for i in range(n)])
c = [c[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdl = np.shape(c)[0]
c = c.astype(complex)
resultm = np.zeros((bdlmax,bdpsimax),dtype=float)
resultm[bdl-1,bdpsi-1],c,a0 = fin2_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,epsilon,imprecision,lherm)
if bc == 'O' and n == 1:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
return result,resultm,c,a0
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
while True:
if bdpsi == bdpsimax:
break
else:
a0old = a0
bdpsi += 1
i = 0
while True:
a0 = fin_enlarge_bdpsi(a0,factorv[i])
resultm[bdl-1,bdpsi-1],cnew,a0new = fin2_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,epsilon,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-1,bdpsi-2]:
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-1,bdpsi-2]:
bdpsi += -1
a0 = a0old
a0copy = a0new
ccopy = cnew
break
else:
a0 = a0new
c = cnew
if problem:
break
if bdl == bdlmax:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi]
break
else:
bdl += 1
i = 0
while True:
c = fin_enlarge_bdl(c,factorv[i])
resultm[bdl-1,bdpsi-1],cnew,a0new = fin2_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,epsilon,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-2,bdpsi-1]:
a0 = a0new
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-2,bdpsi-1]:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
if resultm[bdl-1,bdpsi-1] < resultm[bdl-2,bdpsi]:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
bdl += -1
bdpsi += 1
result = resultm[bdl-1,bdpsi-1]
else:
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi-1]
break
if not(problem):
break
return result,resultm,c,a0
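# The function above escalates the bond dimension until the optimized value
# stops improving by more than a relative factor of `imprecision` (or the cap
# is hit). A toy sketch of that strategy (escalate() and the saturating fom()
# below are hypothetical, for illustration only):

```python
import numpy as np

def escalate(fom, imprecision=1e-2, bdmax=100):
    # increase bd until the relative gain drops below `imprecision`
    results = [fom(1)]
    bd = 1
    while bd < bdmax:
        bd += 1
        results.append(fom(bd))
        if results[-1] < (1 + imprecision) * results[-2]:
            break
    return bd, np.array(results)

# toy model: the optimal value saturates at 2.0 as bd grows
bd, res = escalate(lambda bd: 2.0 - 1.0 / bd)
assert res[-1] < (1 + 1e-2) * res[-2]    # stopped because gains became marginal
assert bd < 100                          # well before the cap
```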
def fin2_FoM_optbd(n,d,bc,a,b,epsilon,cini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True):
"""
Optimization of FoM over SLD MPO and also check of convergence in bond dimension. Function for finite size systems. Version with two states separated by epsilon.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
a: MPO for the density matrix at the value of estimated parameter phi=phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
b: MPO for the density matrix at the value of estimated parameter phi=phi_0+epsilon, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
epsilon: value of a separation between estimated parameters encoded in a and b, float
cini: initial MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for SLD MPO, default value is 100
alwaysbdlmax: boolean value, True if the maximal value of bd for SLD MPO has to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
result: optimal value of FoM
resultv: vector describing FoM as a function of bd of the SLD MPO
c: optimal MPO for SLD
"""
while True:
if cini is None:
bdl = 1
rng = np.random.default_rng()
if bc == 'O':
c = [0]*n
c[0] = (rng.random((1,bdl,d,d)) + 1j*rng.random((1,bdl,d,d)))/bdl
c[0] = (c[0] + np.conj(np.moveaxis(c[0],2,3)))/2
for x in range(1,n-1):
c[x] = (rng.random((bdl,bdl,d,d)) + 1j*rng.random((bdl,bdl,d,d)))/bdl
c[x] = (c[x] + np.conj(np.moveaxis(c[x],2,3)))/2
c[n-1] = (rng.random((bdl,1,d,d)) + 1j*rng.random((bdl,1,d,d)))/bdl
c[n-1] = (c[n-1] + np.conj(np.moveaxis(c[n-1],2,3)))/2
elif bc == 'P':
c = (rng.random((bdl,bdl,d,d,n)) + 1j*rng.random((bdl,bdl,d,d,n)))/bdl
c = (c + np.conj(np.moveaxis(c,2,3)))/2
else:
c = cini
if bc == 'O':
bdl = max([np.shape(c[i])[0] for i in range(n)])
c = [c[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdl = np.shape(c)[0]
c = c.astype(complex)
resultv = np.zeros(bdlmax,dtype=float)
if bc == 'O':
resultv[bdl-1],c = fin2_FoM_OBC_optm(a,b,epsilon,c,imprecision,lherm)
if n == 1:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
return result,resultv,c
elif bc == 'P':
resultv[bdl-1],c = fin2_FoM_PBC_optm(a,b,epsilon,c,imprecision,lherm)
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
if bdl == bdlmax:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
else:
bdl += 1
i = 0
while True:
c = fin_enlarge_bdl(c,factorv[i])
if bc == 'O':
resultv[bdl-1],cnew = fin2_FoM_OBC_optm(a,b,epsilon,c,imprecision,lherm)
elif bc == 'P':
resultv[bdl-1],cnew = fin2_FoM_PBC_optm(a,b,epsilon,c,imprecision,lherm)
if resultv[bdl-1] >= resultv[bdl-2]:
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultv[bdl-1] < (1+imprecision)*resultv[bdl-2]:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
if not(problem):
break
return result,resultv,c
def fin2_FoMD_optbd(n,d,bc,c2d,cd,cpd,epsilon,a0ini=None,imprecision=10**-2,bdpsimax=100,alwaysbdpsimax=False):
"""
Optimization of FoMD over initial wave function MPS and also check of convergence when increasing the bond dimension. Function for finite size systems. Version with two dual SLDs separated by epsilon.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
c2d: MPO for square of dual of SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
cd: MPO for dual of SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
cpd: MPO for dual of SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
epsilon: value of a separation between estimated parameters encoded in cd and cpd, float
a0ini: initial MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
imprecision: expected imprecision of the end results, default value is 10**-2
bdpsimax: maximal value of bd for initial wave function MPS, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for initial wave function MPS has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoMD
resultv: vector describing FoMD as a function of bd of the initial wave function MPS
a0: optimal MPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
if bc == 'O':
a0 = a0[np.newaxis,np.newaxis,:]
a0 = [a0]*n
elif bc == 'P':
a0 = a0[np.newaxis,np.newaxis,:,np.newaxis]
a0 = np.tile(a0,(1,1,1,n))
else:
a0 = a0ini
if bc == 'O':
bdpsi = max([np.shape(a0[i])[0] for i in range(n)])
a0 = [a0[i].astype(complex) for i in range(n)]
elif bc == 'P':
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
resultv = np.zeros(bdpsimax,dtype=float)
if bc == 'O':
resultv[bdpsi-1],a0 = fin2_FoMD_OBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
if n == 1:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
return result,resultv,a0
elif bc == 'P':
resultv[bdpsi-1],a0 = fin2_FoMD_PBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
if bdpsi == bdpsimax:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
else:
bdpsi += 1
i = 0
while True:
a0 = fin_enlarge_bdpsi(a0,factorv[i])
if bc == 'O':
resultv[bdpsi-1],a0new = fin2_FoMD_OBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
elif bc == 'P':
resultv[bdpsi-1],a0new = fin2_FoMD_PBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
if resultv[bdpsi-1] >= resultv[bdpsi-2]:
a0 = a0new
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultv[bdpsi-1] < (1+imprecision)*resultv[bdpsi-2]:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
if not(problem):
break
return result,resultv,a0
def fin2_FoM_FoMD_optm(n,d,bc,c,a0,ch,chp,epsilon,imprecision=10**-2,lherm=True):
"""
Iterative optimization of FoM/FoMD over SLD MPO and initial wave function MPS. Function for finite size systems. Version with two channels separated by epsilon.
Parameters:
n: number of sites in TN
d: dimension of local Hilbert space (dimension of physical index)
bc: boundary conditions, 'O' for OBC, 'P' for PBC
c: MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,d,n) for PBC
a0: MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d,n) for PBC
ch: MPO for quantum channel at the value of estimated parameter phi=phi_0, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
chp: MPO for quantum channel at the value of estimated parameter phi=phi_0+epsilon, expected list of length n of ndarrays of a shape (bd,bd,d**2,d**2) for OBC (bd can vary between sites), or ndarray of a shape (bd,bd,d**2,d**2,n) for PBC
epsilon: value of a separation between estimated parameters encoded in ch and chp, float
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD MPO, otherwise False
Returns:
fval: optimal value of FoM/FoMD
c: optimal MPO for SLD
a0: optimal MPS for initial wave function
"""
relunc_f = 0.1*imprecision
if bc == 'O':
chd = [0]*n
chpd = [0]*n
for x in range(n):
chd[x] = np.conj(np.moveaxis(ch[x],2,3))
chpd[x] = np.conj(np.moveaxis(chp[x],2,3))
elif bc == 'P':
chd = np.conj(np.moveaxis(ch,2,3))
chpd = np.conj(np.moveaxis(chp,2,3))
f = np.array([])
iter_f = 0
while True:
a0_dm = wave_function_to_density_matrix(a0)
a = channel_acting_on_operator(ch,a0_dm)
b = channel_acting_on_operator(chp,a0_dm)
if bc == 'O':
fom,c = fin2_FoM_OBC_optm(a,b,epsilon,c,imprecision,lherm)
elif bc == 'P':
fom,c = fin2_FoM_PBC_optm(a,b,epsilon,c,imprecision,lherm)
f = np.append(f,fom)
if iter_f >= 2 and np.std(f[-4:])/np.mean(f[-4:]) <= relunc_f:
break
if bc == 'O':
c2 = [0]*n
for x in range(n):
bdl1 = np.shape(c[x])[0]
bdl2 = np.shape(c[x])[1]
c2[x] = np.zeros((bdl1**2,bdl2**2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
for nxpp in range(d):
c2[x][:,:,nx,nxp] = c2[x][:,:,nx,nxp]+np.kron(c[x][:,:,nx,nxpp],c[x][:,:,nxpp,nxp])
elif bc == 'P':
bdl = np.shape(c)[0]
c2 = np.zeros((bdl**2,bdl**2,d,d,n),dtype=complex)
for nx in range(d):
for nxp in range(d):
for nxpp in range(d):
for x in range(n):
c2[:,:,nx,nxp,x] = c2[:,:,nx,nxp,x]+np.kron(c[:,:,nx,nxpp,x],c[:,:,nxpp,nxp,x])
c2d = channel_acting_on_operator(chd,c2)
cd = channel_acting_on_operator(chd,c)
cpd = channel_acting_on_operator(chpd,c)
if bc == 'O':
fomd,a0 = fin2_FoMD_OBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
elif bc == 'P':
fomd,a0 = fin2_FoMD_PBC_optm(c2d,cd,cpd,epsilon,a0,imprecision)
f = np.append(f,fomd)
iter_f += 1
fval = f[-1]
return fval,c,a0
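In fin2_FoM_FoMD_optm the MPO for the squared SLD is built site by site with np.kron over the bond indices while the intermediate physical index nxpp is summed over. A small sanity check (not part of the library): for a single site with bond dimension 1 the construction must reduce to the ordinary matrix square C @ C.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
c = [C.reshape(1, 1, d, d)]  # one OBC site tensor of shape (bd, bd, d, d)

c2 = [np.zeros((1, 1, d, d), dtype=complex)]
for nx in range(d):
    for nxp in range(d):
        for nxpp in range(d):
            # kron of the (1, 1) bond blocks; the physical index is contracted
            c2[0][:, :, nx, nxp] += np.kron(c[0][:, :, nx, nxpp], c[0][:, :, nxpp, nxp])
```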
def fin2_FoM_OBC_optm(a,b,epsilon,c,imprecision=10**-2,lherm=True):
"""
Optimization of FoM over MPO for SLD. Function for finite size systems with OBC. Version with two states separated by epsilon.
Parameters:
a: MPO for the density matrix at the value of estimated parameter phi=phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
b: MPO for the density matrix at the value of estimated parameter phi=phi_0+epsilon, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
epsilon: separation between the two values of the estimated parameter encoded in a and b, float
c: MPO for the SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default) if the Hermitian gauge is imposed on the SLD MPO, False otherwise
Returns:
fomval: optimal value of FoM
c: optimal MPO for SLD
"""
n = len(c)
tol_fom = 0.1*imprecision/n**2
if n == 1:
if np.shape(a[0])[0] == 1 and np.shape(b[0])[0] == 1 and np.shape(c[0])[0] == 1:
d = np.shape(c[0])[2]
tensors = [b[0][0,0,:,:]]
legs = [[-2,-1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[0][0,0,:,:]]
legs = [[-2,-1]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [a[0][0,0,:,:],np.eye(d)]
legs = [[-2,-3],[-4,-1]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(d*d,d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[0][0,0,:,:] = np.reshape(cv,(d,d),order='F')
if lherm:
c[0] = (c[0]+np.conj(np.moveaxis(c[0],2,3)))/2
cv = np.reshape(c[0],-1,order='F')
fomval = np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv)
else:
warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
relunc_fom = 0.1*imprecision
l1f = [0]*n
l1_0f = [0]*n
l2f = [0]*n
fom = np.array([])
iter_fom = 0
while True:
tensors = [c[n-1],b[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1f[n-2] = ncon(tensors,legs)
l1f[n-2] = l1f[n-2][:,:,0,0]
tensors = [c[n-1],a[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1_0f[n-2] = ncon(tensors,legs)
l1_0f[n-2] = l1_0f[n-2][:,:,0,0]
tensors = [c[n-1],a[n-1],c[n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2f[n-2] = ncon(tensors,legs)
l2f[n-2] = l2f[n-2][:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [c[x],b[x],l1f[x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1f[x-1] = ncon(tensors,legs)
tensors = [c[x],a[x],l1_0f[x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1_0f[x-1] = ncon(tensors,legs)
tensors = [c[x],a[x],c[x],l2f[x]]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2f[x-1] = ncon(tensors,legs)
bdl1,bdl2,d,d = np.shape(c[0])
tensors = [b[0],l1f[0]]
legs = [[-5,1,-4,-3],[-2,1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[0],l1_0f[0]]
legs = [[-5,1,-4,-3],[-2,1]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [a[0],np.eye(d),l2f[0]]
legs = [[-9,1,-4,-7],[-8,-3],[-2,1,-6]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[0] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[0] = (c[0]+np.conj(np.moveaxis(c[0],2,3)))/2
cv = np.reshape(c[0],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
tensors = [c[0],b[0]]
legs = [[-3,-1,1,2],[-4,-2,2,1]]
l1c = ncon(tensors,legs)
l1c = l1c[:,:,0,0]
tensors = [c[0],a[0]]
legs = [[-3,-1,1,2],[-4,-2,2,1]]
l1_0c = ncon(tensors,legs)
l1_0c = l1_0c[:,:,0,0]
tensors = [c[0],a[0],c[0]]
legs = [[-4,-1,1,2],[-5,-2,2,3],[-6,-3,3,1]]
l2c = ncon(tensors,legs)
l2c = l2c[:,:,:,0,0,0]
for x in range(1,n-1):
bdl1,bdl2,d,d = np.shape(c[x])
tensors = [l1c,b[x],l1f[x]]
legs = [[-1,1],[1,2,-4,-3],[-2,2]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l1_0c,a[x],l1_0f[x]]
legs = [[-1,1],[1,2,-4,-3],[-2,2]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [l2c,a[x],np.eye(d),l2f[x]]
legs = [[-1,1,-5],[1,2,-4,-7],[-8,-3],[-2,2,-6]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[x] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[x] = (c[x]+np.conj(np.moveaxis(c[x],2,3)))/2
cv = np.reshape(c[x],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
tensors = [l1c,c[x],b[x]]
legs = [[3,4],[3,-1,1,2],[4,-2,2,1]]
l1c = ncon(tensors,legs)
tensors = [l1_0c,c[x],a[x]]
legs = [[3,4],[3,-1,1,2],[4,-2,2,1]]
l1_0c = ncon(tensors,legs)
tensors = [l2c,c[x],a[x],c[x]]
legs = [[4,5,6],[4,-1,1,2],[5,-2,2,3],[6,-3,3,1]]
l2c = ncon(tensors,legs)
bdl1,bdl2,d,d = np.shape(c[n-1])
tensors = [l1c,b[n-1]]
legs = [[-1,1],[1,-5,-4,-3]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l1_0c,a[n-1]]
legs = [[-1,1],[1,-5,-4,-3]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [l2c,a[n-1],np.eye(d)]
legs = [[-1,1,-5],[1,-9,-4,-7],[-8,-3]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl1*bdl2*d*d,bdl1*bdl2*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[n-1] = np.reshape(cv,(bdl1,bdl2,d,d),order='F')
if lherm:
c[n-1] = (c[n-1]+np.conj(np.moveaxis(c[n-1],2,3)))/2
cv = np.reshape(c[n-1],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
iter_fom += 1
if iter_fom >= 2 and all(fom[-2*n:] > 0) and np.std(fom[-2*n:])/np.mean(fom[-2*n:]) <= relunc_fom:
break
fomval = fom[-1]
return fomval,c
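Each local update in fin2_FoM_OBC_optm maximizes the quadratic functional f(c) = 2 c·(l1-l1_0)/epsilon - c·l2·c, whose stationarity condition is (l2 + l2.T) c = 2 (l1-l1_0)/epsilon; the pseudoinverse solve above implements exactly that. A minimal dense illustration with random stand-in data (all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
A = rng.standard_normal((m, m))
l2 = A @ A.T                      # stand-in for the local quadratic form
v = rng.standard_normal(m)        # stand-in for l1 - l1_0
epsilon = 1e-2
dl1 = 2 * v / epsilon
dl2 = l2 + l2.T
dl2pinv = np.linalg.pinv(dl2)
dl2pinv = (dl2pinv + dl2pinv.T) / 2
cv = dl2pinv @ dl1                # stationary point: dl2 @ cv == dl1
```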
def fin2_FoM_PBC_optm(a,b,epsilon,c,imprecision=10**-2,lherm=True):
"""
Optimization of FoM over MPO for SLD. Function for finite size systems with PBC. Version with two states separated by epsilon.
Parameters:
a: MPO for the density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d,n)
b: MPO for the density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d,n)
epsilon: separation between the two values of the estimated parameter encoded in a and b, float
c: MPO for the SLD, expected ndarray of a shape (bd,bd,d,d,n)
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default) if the Hermitian gauge is imposed on the SLD MPO, False otherwise
Returns:
fomval: optimal value of FoM
c: optimal MPO for SLD
"""
n = np.shape(a)[4]
d = np.shape(a)[2]
bdr = np.shape(a)[0]
bdrp = np.shape(b)[0]
bdl = np.shape(c)[0]
tol_fom = 0.1*imprecision/n**2
if n == 1:
tensors = [b[:,:,:,:,0],np.eye(bdl)]
legs = [[1,1,-4,-3],[-2,-1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[:,:,:,:,0],np.eye(bdl)]
legs = [[1,1,-4,-3],[-2,-1]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [a[:,:,:,:,0],np.eye(d),np.eye(bdl),np.eye(bdl)]
legs = [[1,1,-4,-7],[-8,-3],[-2,-1],[-6,-5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,0] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,0] = (c[:,:,:,:,0]+np.conj(np.moveaxis(c[:,:,:,:,0],2,3)))/2
cv = np.reshape(c[:,:,:,:,0],-1,order='F')
fomval = np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv)
else:
relunc_fom = 0.1*imprecision
l1f = np.zeros((bdl,bdrp,bdl,bdrp,n-1),dtype=complex)
l1_0f = np.zeros((bdl,bdrp,bdl,bdrp,n-1),dtype=complex)
l2f = np.zeros((bdl,bdr,bdl,bdl,bdr,bdl,n-1),dtype=complex)
fom = np.array([])
iter_fom = 0
while True:
tensors = [c[:,:,:,:,n-1],b[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1f[:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1_0f[:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1],c[:,:,:,:,n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2f[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [c[:,:,:,:,x],b[:,:,:,:,x],l1f[:,:,:,:,x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1f[:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],l1_0f[:,:,:,:,x]]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1_0f[:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x],l2f[:,:,:,:,:,:,x]]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6,-4,-5,-6]]
l2f[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [b[:,:,:,:,0],l1f[:,:,:,:,0]]
legs = [[2,1,-4,-3],[-2,1,-1,2]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [a[:,:,:,:,0],l1_0f[:,:,:,:,0]]
legs = [[2,1,-4,-3],[-2,1,-1,2]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [a[:,:,:,:,0],np.eye(d),l2f[:,:,:,:,:,:,0]]
legs = [[2,1,-4,-7],[-8,-3],[-2,1,-6,-1,2,-5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,0] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,0] = (c[:,:,:,:,0]+np.conj(np.moveaxis(c[:,:,:,:,0],2,3)))/2
cv = np.reshape(c[:,:,:,:,0],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
tensors = [c[:,:,:,:,0],b[:,:,:,:,0]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1c = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1_0c = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2c = ncon(tensors,legs)
for x in range(1,n-1):
tensors = [l1c,b[:,:,:,:,x],l1f[:,:,:,:,x]]
legs = [[3,4,-1,1],[1,2,-4,-3],[-2,2,3,4]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l1_0c,a[:,:,:,:,x],l1_0f[:,:,:,:,x]]
legs = [[3,4,-1,1],[1,2,-4,-3],[-2,2,3,4]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [l2c,a[:,:,:,:,x],np.eye(d),l2f[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-5],[1,2,-4,-7],[-8,-3],[-2,2,-6,3,4,5]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,x] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,x] = (c[:,:,:,:,x]+np.conj(np.moveaxis(c[:,:,:,:,x],2,3)))/2
cv = np.reshape(c[:,:,:,:,x],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
tensors = [l1c,c[:,:,:,:,x],b[:,:,:,:,x]]
legs = [[-1,-2,3,4],[3,-3,1,2],[4,-4,2,1]]
l1c = ncon(tensors,legs)
tensors = [l1_0c,c[:,:,:,:,x],a[:,:,:,:,x]]
legs = [[-1,-2,3,4],[3,-3,1,2],[4,-4,2,1]]
l1_0c = ncon(tensors,legs)
tensors = [l2c,c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x]]
legs = [[-1,-2,-3,4,5,6],[4,-4,1,2],[5,-5,2,3],[6,-6,3,1]]
l2c = ncon(tensors,legs)
tensors = [l1c,b[:,:,:,:,n-1]]
legs = [[-2,2,-1,1],[1,2,-4,-3]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l1_0c,a[:,:,:,:,n-1]]
legs = [[-2,2,-1,1],[1,2,-4,-3]]
l1_0 = ncon(tensors,legs)
l1_0 = np.reshape(l1_0,-1,order='F')
tensors = [l2c,a[:,:,:,:,n-1],np.eye(d)]
legs = [[-2,2,-6,-1,1,-5],[1,2,-4,-7],[-8,-3]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*(l1-l1_0)/epsilon
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c[:,:,:,:,n-1] = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c[:,:,:,:,n-1] = (c[:,:,:,:,n-1]+np.conj(np.moveaxis(c[:,:,:,:,n-1],2,3)))/2
cv = np.reshape(c[:,:,:,:,n-1],-1,order='F')
fom = np.append(fom,np.real(2*cv @ (l1-l1_0)/epsilon - cv @ l2 @ cv))
iter_fom += 1
if iter_fom >= 2 and all(fom[-2*n:] > 0) and np.std(fom[-2*n:])/np.mean(fom[-2*n:]) <= relunc_fom:
break
fomval = fom[-1]
return fomval,c
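fin2_FoM_PBC_optm symmetrizes the pseudoinverse as (dl2pinv + dl2pinv.T)/2. For a symmetric input this only removes numerical round-off, since the Moore-Penrose pseudoinverse of a symmetric matrix is itself symmetric, even when the matrix is rank-deficient:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))
S = A @ A.T                # symmetric and rank-deficient (rank <= 3)
P = np.linalg.pinv(S)      # pseudoinverse of a symmetric matrix is symmetric
```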
def fin2_FoMD_OBC_optm(c2d,cd,cpd,epsilon,a0,imprecision=10**-2):
"""
Optimization of FoMD over MPS for initial wave function. Function for finite size systems with OBC. Version with two dual SLDs separated by epsilon.
Parameters:
c2d: MPO for the square of the dual of the SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cd: MPO for the dual of the SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cpd: MPO for the dual of the SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
epsilon: separation between the two values of the estimated parameter encoded in cd and cpd, float
a0: MPS for the initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) (bd can vary between sites)
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
fomdval: optimal value of FoMD
a0: optimal MPS for the initial wave function
"""
n = len(a0)
if n == 1:
if np.shape(c2d[0])[0] == 1 and np.shape(cpd[0])[0] == 1 and np.shape(a0[0])[0] == 1:
d = np.shape(a0[0])[2]
tensors = [c2d[0][0,0,:,:]]
legs = [[-1,-2]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(d,d),order='F')
tensors = [cpd[0][0,0,:,:]]
legs = [[-1,-2]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(d,d),order='F')
tensors = [cd[0][0,0,:,:]]
legs = [[-1,-2]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(d,d),order='F')
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[0][0,0,:] = np.reshape(a0v,(d),order='F')
fomdval = np.real(fomdval[position])
else:
warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
relunc_fomd = 0.1*imprecision
l2df = [0]*n
lpdf = [0]*n
ldf = [0]*n
fomd = np.array([])
iter_fomd = 0
while True:
tensors = [np.conj(a0[n-1]),c2d[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2df[n-2] = ncon(tensors,legs)
l2df[n-2] = l2df[n-2][:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cpd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdf[n-2] = ncon(tensors,legs)
lpdf[n-2] = lpdf[n-2][:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
ldf[n-2] = ncon(tensors,legs)
ldf[n-2] = ldf[n-2][:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [np.conj(a0[x]),c2d[x],a0[x],l2df[x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2df[x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cpd[x],a0[x],lpdf[x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpdf[x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cd[x],a0[x],ldf[x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
ldf[x-1] = ncon(tensors,legs)
bdpsi1,bdpsi2,d = np.shape(a0[0])
tensors = [c2d[0],l2df[0]]
legs = [[-7,1,-3,-6],[-2,1,-5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [cpd[0],lpdf[0]]
legs = [[-7,1,-3,-6],[-2,1,-5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [cd[0],ldf[0]]
legs = [[-7,1,-3,-6],[-2,1,-5]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[0] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
a0[0] = np.moveaxis(a0[0],2,0)
a0[0] = np.reshape(a0[0],(d*bdpsi1,bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[0],full_matrices=False)
a0[0] = np.reshape(u,(d,bdpsi1,np.shape(s)[0]),order='F')
a0[0] = np.moveaxis(a0[0],0,2)
tensors = [np.diag(s) @ vh,a0[1]]
legs = [[-1,1],[1,-2,-3]]
a0[1] = ncon(tensors,legs)
tensors = [np.conj(a0[0]),c2d[0],a0[0]]
legs = [[-4,-1,1],[-5,-2,1,2],[-6,-3,2]]
l2dc = ncon(tensors,legs)
l2dc = l2dc[:,:,:,0,0,0]
tensors = [np.conj(a0[0]),cpd[0],a0[0]]
legs = [[-4,-1,1],[-5,-2,1,2],[-6,-3,2]]
lpdc = ncon(tensors,legs)
lpdc = lpdc[:,:,:,0,0,0]
tensors = [np.conj(a0[0]),cd[0],a0[0]]
legs = [[-4,-1,1],[-5,-2,1,2],[-6,-3,2]]
ldc = ncon(tensors,legs)
ldc = ldc[:,:,:,0,0,0]
for x in range(1,n-1):
bdpsi1,bdpsi2,d = np.shape(a0[x])
tensors = [l2dc,c2d[x],l2df[x]]
legs = [[-1,1,-4],[1,2,-3,-6],[-2,2,-5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [lpdc,cpd[x],lpdf[x]]
legs = [[-1,1,-4],[1,2,-3,-6],[-2,2,-5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [ldc,cd[x],ldf[x]]
legs = [[-1,1,-4],[1,2,-3,-6],[-2,2,-5]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[x] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
a0[x] = np.moveaxis(a0[x],2,0)
a0[x] = np.reshape(a0[x],(d*bdpsi1,bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[x],full_matrices=False)
a0[x] = np.reshape(u,(d,bdpsi1,np.shape(s)[0]),order='F')
a0[x] = np.moveaxis(a0[x],0,2)
tensors = [np.diag(s) @ vh,a0[x+1]]
legs = [[-1,1],[1,-2,-3]]
a0[x+1] = ncon(tensors,legs)
tensors = [l2dc,np.conj(a0[x]),c2d[x],a0[x]]
legs = [[3,4,5],[3,-1,1],[4,-2,1,2],[5,-3,2]]
l2dc = ncon(tensors,legs)
tensors = [lpdc,np.conj(a0[x]),cpd[x],a0[x]]
legs = [[3,4,5],[3,-1,1],[4,-2,1,2],[5,-3,2]]
lpdc = ncon(tensors,legs)
tensors = [ldc,np.conj(a0[x]),cd[x],a0[x]]
legs = [[3,4,5],[3,-1,1],[4,-2,1,2],[5,-3,2]]
ldc = ncon(tensors,legs)
bdpsi1,bdpsi2,d = np.shape(a0[n-1])
tensors = [l2dc,c2d[n-1]]
legs = [[-1,1,-4],[1,-7,-3,-6]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [lpdc,cpd[n-1]]
legs = [[-1,1,-4],[1,-7,-3,-6]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
tensors = [ldc,cd[n-1]]
legs = [[-1,1,-4],[1,-7,-3,-6]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi1*bdpsi2*d,bdpsi1*bdpsi2*d),order='F')
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ a0v))
a0[n-1] = np.reshape(a0v,(bdpsi1,bdpsi2,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
iter_fomd += 1
for x in range(n-1,0,-1):
bdpsi1,bdpsi2,d = np.shape(a0[x])
a0[x] = np.moveaxis(a0[x],2,1)
a0[x] = np.reshape(a0[x],(bdpsi1,d*bdpsi2),order='F')
u,s,vh = np.linalg.svd(a0[x],full_matrices=False)
a0[x] = np.reshape(vh,(np.shape(s)[0],d,bdpsi2),order='F')
a0[x] = np.moveaxis(a0[x],1,2)
tensors = [a0[x-1],u @ np.diag(s)]
legs = [[-1,1,-3],[1,-2]]
a0[x-1] = ncon(tensors,legs)
if iter_fomd >= 2 and all(fomd[-2*n:] > 0) and np.std(fomd[-2*n:])/np.mean(fomd[-2*n:]) <= relunc_fomd:
break
fomdval = fomd[-1]
return fomdval,a0
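The FoMD sweeps update each site by hermitizing eiginput, picking the eigenvector with the largest real eigenvalue, and normalizing it. A dense sketch of that selection step (H is a random stand-in for eiginput):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
H = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
H = (H + np.conj(H).T) / 2                  # hermitize, as done for eiginput
w, V = np.linalg.eig(H)
position = np.argmax(np.real(w))
v = V[:, position]
v = v / np.sqrt(np.abs(np.conj(v) @ v))     # normalize the chosen eigenvector
```

The chosen pair maximizes the Rayleigh quotient v† H v over unit vectors, which is what makes it the locally optimal initial-state update.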
def fin2_FoMD_PBC_optm(c2d,cd,cpd,epsilon,a0,imprecision=10**-2):
"""
Optimization of FoMD over MPS for initial wave function. Function for finite size systems with PBC. Version with two dual SLDs separated by epsilon.
Parameters:
c2d: MPO for square of dual of SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d,n)
cd: MPO for dual of SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d,n)
cpd: MPO for dual of SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d,n)
epsilon: separation between the two values of the estimated parameter encoded in cd and cpd, float
a0: MPS for initial wave function, expected ndarray of a shape (bd,bd,d,n)
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
fomdval: optimal value of FoMD
a0: optimal MPS for initial wave function
"""
n = np.shape(c2d)[4]
d = np.shape(c2d)[2]
bdl2d = np.shape(c2d)[0]
bdlpd = np.shape(cpd)[0]
bdld = np.shape(cd)[0]
bdpsi = np.shape(a0)[0]
tol_fomd = 0.1*imprecision/n**2
if n == 1:
tensors = [c2d[:,:,:,:,0],np.eye(bdpsi),np.eye(bdpsi)]
legs = [[1,1,-3,-6],[-2,-1],[-5,-4]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cpd[:,:,:,:,0],np.eye(bdpsi),np.eye(bdpsi)]
legs = [[1,1,-3,-6],[-2,-1],[-5,-4]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cd[:,:,:,:,0],np.eye(bdpsi),np.eye(bdpsi)]
legs = [[1,1,-3,-6],[-2,-1],[-5,-4]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [np.eye(bdpsi),np.eye(bdpsi)]
legs = [[-2,-1],[-4,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,0] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomdval = np.real(fomdval[position])
else:
relunc_fomd = 0.1*imprecision
l2df = np.zeros((bdpsi,bdl2d,bdpsi,bdpsi,bdl2d,bdpsi,n-1),dtype=complex)
lpdf = np.zeros((bdpsi,bdlpd,bdpsi,bdpsi,bdlpd,bdpsi,n-1),dtype=complex)
ldf = np.zeros((bdpsi,bdld,bdpsi,bdpsi,bdld,bdpsi,n-1),dtype=complex)
psinormf = np.zeros((bdpsi,bdpsi,bdpsi,bdpsi,n-1),dtype=complex)
fomd = np.array([])
iter_fomd = 0
while True:
tensors = [np.conj(a0[:,:,:,n-1]),c2d[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2df[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cpd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdf[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
ldf[:,:,:,:,:,:,n-2] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),a0[:,:,:,n-1]]
legs = [[-1,-3,1],[-2,-4,1]]
psinormf[:,:,:,:,n-2] = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x],l2df[:,:,:,:,:,:,x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
l2df[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x],lpdf[:,:,:,:,:,:,x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
lpdf[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cd[:,:,:,:,x],a0[:,:,:,x],ldf[:,:,:,:,:,:,x]]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
ldf[:,:,:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),a0[:,:,:,x],psinormf[:,:,:,:,x]]
legs = [[-1,2,1],[-2,3,1],[2,3,-3,-4]]
psinormf[:,:,:,:,x-1] = ncon(tensors,legs)
tensors = [c2d[:,:,:,:,0],l2df[:,:,:,:,:,:,0]]
legs = [[2,1,-3,-6],[-2,1,-5,-1,2,-4]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cpd[:,:,:,:,0],lpdf[:,:,:,:,:,:,0]]
legs = [[2,1,-3,-6],[-2,1,-5,-1,2,-4]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [cd[:,:,:,:,0],ldf[:,:,:,:,:,:,0]]
legs = [[2,1,-3,-6],[-2,1,-5,-1,2,-4]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormf[:,:,:,:,0]]
legs = [[-2,-4,-1,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,0] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2dc = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpdc = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
ldc = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),a0[:,:,:,0]]
legs = [[-1,-3,1],[-2,-4,1]]
psinormc = ncon(tensors,legs)
for x in range(1,n-1):
tensors = [l2dc,c2d[:,:,:,:,x],l2df[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-4],[1,2,-3,-6],[-2,2,-5,3,4,5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [lpdc,cpd[:,:,:,:,x],lpdf[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-4],[1,2,-3,-6],[-2,2,-5,3,4,5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [ldc,cd[:,:,:,:,x],ldf[:,:,:,:,:,:,x]]
legs = [[3,4,5,-1,1,-4],[1,2,-3,-6],[-2,2,-5,3,4,5]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormc,psinormf[:,:,:,:,x]]
legs = [[1,2,-1,-3],[-2,-4,1,2]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,x] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
tensors = [l2dc,np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x]]
legs = [[-1,-2,-3,3,4,5],[3,-4,1],[4,-5,1,2],[5,-6,2]]
l2dc = ncon(tensors,legs)
tensors = [lpdc,np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x]]
legs = [[-1,-2,-3,3,4,5],[3,-4,1],[4,-5,1,2],[5,-6,2]]
lpdc = ncon(tensors,legs)
tensors = [ldc,np.conj(a0[:,:,:,x]),cd[:,:,:,:,x],a0[:,:,:,x]]
legs = [[-1,-2,-3,3,4,5],[3,-4,1],[4,-5,1,2],[5,-6,2]]
ldc = ncon(tensors,legs)
tensors = [psinormc,np.conj(a0[:,:,:,x]),a0[:,:,:,x]]
legs = [[-1,-2,2,3],[2,-3,1],[3,-4,1]]
psinormc = ncon(tensors,legs)
tensors = [l2dc,c2d[:,:,:,:,n-1]]
legs = [[-2,2,-5,-1,1,-4],[1,2,-3,-6]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [lpdc,cpd[:,:,:,:,n-1]]
legs = [[-2,2,-5,-1,1,-4],[1,2,-3,-6]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [ldc,cd[:,:,:,:,n-1]]
legs = [[-2,2,-5,-1,1,-4],[1,2,-3,-6]]
ld = ncon(tensors,legs)
ld = np.reshape(ld,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [psinormc]
legs = [[-2,-4,-1,-3]]
psinorm = ncon(tensors,legs)
psinorm = np.reshape(psinorm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
psinormpinv = np.kron(np.eye(d),psinormpinv)
eiginput = 2*(lpd-ld)/epsilon-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0v = a0v/np.sqrt(np.abs(np.conj(a0v) @ np.kron(np.eye(d),psinorm) @ a0v))
a0[:,:,:,n-1] = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
fomd = np.append(fomd,np.real(fomdval[position]))
iter_fomd += 1
if iter_fomd >= 2 and all(fomd[-2*n:] > 0) and np.std(fomd[-2*n:])/np.mean(fomd[-2*n:]) <= relunc_fomd:
break
fomdval = fomd[-1]
return fomdval,a0
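With PBC the MPS is not kept in a canonical gauge, so the local update is a generalized eigenproblem: maximize v† H v subject to v† N v = 1, where N is the psinorm Gram matrix. Multiplying by pinv(N), as done above, turns it into an ordinary eigenproblem. A dense sketch with a hypothetical positive-definite N:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
B = rng.standard_normal((m, m))
N = B @ B.T + m * np.eye(m)       # stand-in for psinorm, positive definite
H = rng.standard_normal((m, m))
H = (H + H.T) / 2                 # symmetric stand-in for eiginput
w, V = np.linalg.eig(np.linalg.pinv(N) @ H)
position = np.argmax(np.real(w))
v = np.real(V[:, position])
v = v / np.sqrt(np.abs(v @ N @ v))          # N-normalization, as for a0v
```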
def fin2_FoM_OBC_val(a,b,epsilon,c):
"""
Calculate value of FoM. Function for finite size systems with OBC. Version with two states separated by epsilon.
Parameters:
a: MPO for density matrix at the value of estimated parameter phi=phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
b: MPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
epsilon: separation between the two values of the estimated parameter encoded in a and b, float
c: MPO for SLD, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
Returns:
fomval: value of FoM
"""
n = len(c)
if n == 1:
if np.shape(a[0])[0] == 1 and np.shape(b[0])[0] == 1 and np.shape(c[0])[0] == 1:
tensors = [c[0][0,0,:,:],b[0][0,0,:,:]]
legs = [[1,2],[2,1]]
l1 = ncon(tensors,legs)
tensors = [c[0][0,0,:,:],a[0][0,0,:,:]]
legs = [[1,2],[2,1]]
l1_0 = ncon(tensors,legs)
tensors = [c[0][0,0,:,:],a[0][0,0,:,:],c[0][0,0,:,:]]
legs = [[1,2],[2,3],[3,1]]
l2 = ncon(tensors,legs)
fomval = np.real(2*(l1-l1_0)/epsilon-l2)
else:
warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
tensors = [c[n-1],b[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1 = ncon(tensors,legs)
l1 = l1[:,:,0,0]
tensors = [c[n-1],a[n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1_0 = ncon(tensors,legs)
l1_0 = l1_0[:,:,0,0]
tensors = [c[n-1],a[n-1],c[n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2 = ncon(tensors,legs)
l2 = l2[:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [c[x],b[x],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1 = ncon(tensors,legs)
tensors = [c[x],a[x],l1_0]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1_0 = ncon(tensors,legs)
tensors = [c[x],a[x],c[x],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2 = ncon(tensors,legs)
tensors = [c[0],b[0],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1 = ncon(tensors,legs)
l1 = float(l1)
tensors = [c[0],a[0],l1_0]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4]]
l1_0 = ncon(tensors,legs)
l1_0 = float(l1_0)
tensors = [c[0],a[0],c[0],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6]]
l2 = ncon(tensors,legs)
l2 = float(l2)
fomval = 2*(l1-l1_0)/epsilon-l2
return fomval
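In both branches the returned quantity, fomval = 2*(l1 - l1_0)/epsilon - l2, is the figure of merit F[L] = 2*Tr[L*d_phi(rho)] - Tr[L*rho*L], with the derivative replaced by the forward finite difference (Tr[L*rho(phi_0+epsilon)] - Tr[L*rho(phi_0)])/epsilon. A minimal single-qubit sketch of the same formula with dense matrices instead of MPOs (illustrative only, not part of the package API):

```python
import numpy as np

# Single-qubit phase encoding: rho(phi) = U rho0 U^dagger, U = exp(-1j*phi*sz/2).
sz = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(plus, plus)

def rho(phi):
    u = np.diag(np.exp(-1j * phi * np.diag(sz) / 2))
    return u @ rho0 @ u.conj().T

phi0, epsilon = 0.3, 10**-4
# For a pure state the optimal SLD is L = 2*d_phi(rho); use an accurate central
# difference for the derivative defining L itself.
drho = (rho(phi0 + epsilon) - rho(phi0 - epsilon)) / (2 * epsilon)
L = 2 * drho

# FoM assembled with the same forward finite difference as fin2_FoM_*_val.
l1 = np.trace(L @ rho(phi0 + epsilon)).real
l1_0 = np.trace(L @ rho(phi0)).real
l2 = np.trace(L @ rho(phi0) @ L).real
fomval = 2 * (l1 - l1_0) / epsilon - l2

# At the optimum the FoM equals the QFI, which is 1 for this encoding
# (QFI = 4*Var(sz/2) = Var(sz) = 1 in the |+> state).
assert abs(fomval - 1.0) < 10**-3
```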
def fin2_FoM_PBC_val(a,b,epsilon,c):
"""
Calculate value of FoM. Function for finite size systems with PBC. Version with two states separated by epsilon.
Parameters:
a: MPO for density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d,n)
b: MPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d,n)
epsilon: value of a separation between estimated parameters encoded in a and b, float
c: MPO for SLD, expected ndarray of a shape (bd,bd,d,d,n)
Returns:
fomval: value of FoM
"""
n = np.shape(a)[4]
if n == 1:
tensors = [c[:,:,:,:,0],b[:,:,:,:,0]]
legs = [[3,3,1,2],[4,4,2,1]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0]]
legs = [[3,3,1,2],[4,4,2,1]]
l1_0 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0]]
legs = [[4,4,1,2],[5,5,2,3],[6,6,3,1]]
l2 = ncon(tensors,legs)
fomval = 2*(l1-l1_0)/epsilon-l2
else:
tensors = [c[:,:,:,:,n-1],b[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1]]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
l1_0 = ncon(tensors,legs)
tensors = [c[:,:,:,:,n-1],a[:,:,:,:,n-1],c[:,:,:,:,n-1]]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
l2 = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [c[:,:,:,:,x],b[:,:,:,:,x],l1]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],l1_0]
legs = [[-1,3,1,2],[-2,4,2,1],[3,4,-3,-4]]
l1_0 = ncon(tensors,legs)
tensors = [c[:,:,:,:,x],a[:,:,:,:,x],c[:,:,:,:,x],l2]
legs = [[-1,4,1,2],[-2,5,2,3],[-3,6,3,1],[4,5,6,-4,-5,-6]]
l2 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],b[:,:,:,:,0],l1]
legs = [[5,3,1,2],[6,4,2,1],[3,4,5,6]]
l1 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],l1_0]
legs = [[5,3,1,2],[6,4,2,1],[3,4,5,6]]
l1_0 = ncon(tensors,legs)
tensors = [c[:,:,:,:,0],a[:,:,:,:,0],c[:,:,:,:,0],l2]
legs = [[7,4,1,2],[8,5,2,3],[9,6,3,1],[4,5,6,7,8,9]]
l2 = ncon(tensors,legs)
fomval = 2*(l1-l1_0)/epsilon-l2
return fomval
def fin2_FoMD_OBC_val(c2d,cd,cpd,epsilon,a0):
"""
Calculate value of FoMD. Function for finite size systems with OBC. Version with two dual SLDs separated by epsilon.
Parameters:
c2d: MPO for square of dual of SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cd: MPO for dual of SLD at the value of estimated parameter phi=-phi_0, expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
cpd: MPO for dual of SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected list of length n of ndarrays of a shape (bd,bd,d,d) (bd can vary between sites)
epsilon: value of a separation between estimated parameters encoded in cd and cpd, float
a0: MPS for initial wave function, expected list of length n of ndarrays of a shape (bd,bd,d) (bd can vary between sites)
Returns:
fomdval: value of FoMD
"""
n = len(a0)
if n == 1:
if np.shape(c2d[0])[0] == 1 and np.shape(cpd[0])[0] == 1 and np.shape(a0[0])[0] == 1:
tensors = [np.conj(a0[0][0,0,:]),c2d[0][0,0,:,:],a0[0][0,0,:]]
legs = [[1],[1,2],[2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[0][0,0,:]),cpd[0][0,0,:,:],a0[0][0,0,:]]
legs = [[1],[1,2],[2]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[0][0,0,:]),cd[0][0,0,:,:],a0[0][0,0,:]]
legs = [[1],[1,2],[2]]
ld = ncon(tensors,legs)
fomdval = 2*(lpd-ld)/epsilon-l2d
else:
warnings.warn('Tensor networks with OBC and length one have to have bond dimension equal to one.')
else:
tensors = [np.conj(a0[n-1]),c2d[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2d = ncon(tensors,legs)
l2d = l2d[:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cpd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpd = ncon(tensors,legs)
lpd = lpd[:,:,:,0,0,0]
tensors = [np.conj(a0[n-1]),cd[n-1],a0[n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
ld = ncon(tensors,legs)
ld = ld[:,:,:,0,0,0]
for x in range(n-2,0,-1):
tensors = [np.conj(a0[x]),c2d[x],a0[x],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cpd[x],a0[x],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[x]),cd[x],a0[x],ld]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
ld = ncon(tensors,legs)
tensors = [np.conj(a0[0]),c2d[0],a0[0],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
l2d = ncon(tensors,legs)
l2d = float(l2d)
tensors = [np.conj(a0[0]),cpd[0],a0[0],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
lpd = ncon(tensors,legs)
lpd = float(lpd)
tensors = [np.conj(a0[0]),cd[0],a0[0],ld]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5]]
ld = ncon(tensors,legs)
ld = float(ld)
fomdval = 2*(lpd-ld)/epsilon-l2d
return fomdval
def fin2_FoMD_PBC_val(c2d,cd,cpd,epsilon,a0):
"""
Calculate value of FoMD. Function for finite size systems with PBC. Version with two dual SLDs separated by epsilon.
Parameters:
c2d: MPO for square of dual of SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d,n)
cd: MPO for dual of SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d,n)
cpd: MPO for dual of SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d,n)
epsilon: value of a separation between estimated parameters encoded in cd and cpd, float
a0: MPS for initial wave function, expected ndarray of a shape (bd,bd,d,n)
Returns:
fomdval: value of FoMD
"""
n = np.shape(c2d)[4]
if n == 1:
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0]]
legs = [[3,3,1],[4,4,1,2],[5,5,2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[3,3,1],[4,4,1,2],[5,5,2]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cd[:,:,:,:,0],a0[:,:,:,0]]
legs = [[3,3,1],[4,4,1,2],[5,5,2]]
ld = ncon(tensors,legs)
fomdval = 2*(lpd-ld)/epsilon-l2d
else:
tensors = [np.conj(a0[:,:,:,n-1]),c2d[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cpd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,n-1]),cd[:,:,:,:,n-1],a0[:,:,:,n-1]]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
ld = ncon(tensors,legs)
for x in range(n-2,0,-1):
tensors = [np.conj(a0[:,:,:,x]),c2d[:,:,:,:,x],a0[:,:,:,x],l2d]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cpd[:,:,:,:,x],a0[:,:,:,x],lpd]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,x]),cd[:,:,:,:,x],a0[:,:,:,x],ld]
legs = [[-1,3,1],[-2,4,1,2],[-3,5,2],[3,4,5,-4,-5,-6]]
ld = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),c2d[:,:,:,:,0],a0[:,:,:,0],l2d]
legs = [[6,3,1],[7,4,1,2],[8,5,2],[3,4,5,6,7,8]]
l2d = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cpd[:,:,:,:,0],a0[:,:,:,0],lpd]
legs = [[6,3,1],[7,4,1,2],[8,5,2],[3,4,5,6,7,8]]
lpd = ncon(tensors,legs)
tensors = [np.conj(a0[:,:,:,0]),cd[:,:,:,:,0],a0[:,:,:,0],ld]
legs = [[6,3,1],[7,4,1,2],[8,5,2],[3,4,5,6,7,8]]
ld = ncon(tensors,legs)
fomdval = 2*(lpd-ld)/epsilon-l2d
return fomdval
##########################################
# #
# #
# 2 Functions for infinite size systems. #
# #
# #
##########################################
#############################
# #
# 2.1 High level functions. #
# #
#############################
def inf(so_before_list, h, so_after_list, L_ini=None, psi0_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True, D_psi0_max=100, D_psi0_max_forced=False):
"""
Optimization of the lim_{N --> infinity} QFI/N over operator tilde{L} (in iMPO representation) and wave function psi0 (in iMPS representation) and check of convergence in their bond dimensions. Function for infinite size systems.
User has to provide information about the dynamics by specifying the quantum channel. It is assumed that the quantum channel is translationally invariant and built from layers of quantum operations.
User has to provide one defining operation for each layer as a local superoperator. These local superoperators have to be input in the order of their action on the system.
Parameter encoding is a distinct quantum operation. It is assumed that parameter encoding acts only once and is unitary, so the user has to provide only its generator h.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that the local superoperators are expressed in the eigenbasis of h.
Parameters:
so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k is the number of sites on which a particular local superoperator acts
List of local superoperators (in order) which act before the unitary parameter encoding.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding. Dimension d is the dimension of local Hilbert space (dimension of physical index).
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k is the number of sites on which a particular local superoperator acts
List of local superoperators (in order) which act after the unitary parameter encoding.
L_ini: ndarray of a shape (D_L,D_L,d,d), optional
Initial iMPO for tilde{L}.
psi0_ini: ndarray of a shape (D_psi0,D_psi0,d), optional
Initial iMPS for psi0.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for iMPO representing tilde{L}).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if the Hermitian gauge has to be imposed on the iMPO representing tilde{L}, otherwise False.
D_psi0_max: integer, optional
Maximal value of D_psi0 (D_psi0 is bond dimension for iMPS representing psi0).
D_psi0_max_forced: bool, optional
True if D_psi0_max has to be reached, otherwise False.
Returns:
result: float
Optimal value of figure of merit.
result_m: ndarray
Matrix describing the figure of merit as a function of the bond dimensions of tilde{L} [rows] and psi0 [columns].
L: ndarray of a shape (D_L,D_L,d,d)
Optimal tilde{L} in iMPO representation.
psi0: ndarray of a shape (D_psi0,D_psi0,d)
Optimal psi0 in iMPS representation.
"""
if np.linalg.norm(h - np.diag(np.diag(h))) > 10**-10:
warnings.warn('Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.')
d = np.shape(h)[0]
epsilon = 10**-4
aux = np.kron(h, np.eye(d)) - np.kron(np.eye(d), h)
z = np.diag(np.exp(-1j * np.diag(aux) * epsilon))
ch = inf_create_channel(d, so_before_list + so_after_list)
ch2 = inf_create_channel(d, so_before_list + [z] + so_after_list)
result, result_m, L, psi0 = inf_gen(d, ch, ch2, epsilon, inf_L_symfun, inf_psi0_symfun, L_ini, psi0_ini, imprecision, D_L_max, D_L_max_forced, L_herm, D_psi0_max, D_psi0_max_forced)
return result, result_m, L, psi0
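The encoding superoperator z constructed above is the diagonal matrix exp(-1j*epsilon*(h kron I - I kron h)), i.e. the vectorized form of rho -> U rho U^dagger with U = exp(-1j*epsilon*h); this works because h is diagonal. A standalone sanity check with plain row-major vectorization (the package routes these objects through ncon with its own index conventions, so this is an illustration, not the package's code path):

```python
import numpy as np

d = 2
h = np.diag([0.5, -0.5])     # diagonal generator, e.g. sigma_z / 2
epsilon = 10**-4

# Encoding superoperator, built exactly as inside inf() / inf_state().
aux = np.kron(h, np.eye(d)) - np.kron(np.eye(d), h)
z = np.diag(np.exp(-1j * np.diag(aux) * epsilon))

# Its action on a vectorized density matrix equals rho -> U rho U^dagger.
u = np.diag(np.exp(-1j * np.diag(h) * epsilon))
rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
lhs = (z @ rho.reshape(d * d)).reshape(d, d)   # row-major vectorization
rhs = u @ rho @ u.conj().T
assert np.allclose(lhs, rhs)
```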
def inf_gen(d, ch, ch2, epsilon, symfun_L, symfun_psi0, L_ini=None, psi0_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True, D_psi0_max=100, D_psi0_max_forced=False):
"""
Optimization of the figure of merit (usually interpreted as lim_{N --> infinity} QFI/N) over operator tilde{L} (in iMPO representation) and wave function psi0 (in iMPS representation) and check of convergence in their bond dimensions. Function for infinite size systems.
User has to provide information about the dynamics by specifying two channels separated by small parameter epsilon as superoperators in iMPO representation.
By definition this infinite approach assumes translational invariance of the problem. Other than that there are no constraints on the structure of the channel, but the complexity of the calculations depends strongly on the channel's bond dimension.
Parameters:
d: integer
Dimension of local Hilbert space (dimension of physical index).
ch: ndarray of a shape (D_ch,D_ch,d**2,d**2)
Quantum channel as superoperator in iMPO representation.
ch2: ndarray of a shape (D_ch2,D_ch2,d**2,d**2)
Quantum channel as superoperator in iMPO representation for the value of estimated parameter shifted by epsilon in relation to ch.
epsilon: float
Value of a separation between estimated parameters encoded in ch and ch2.
symfun_L: function
Function which symmetrizes the iMPO for tilde{L} after each step of optimization (the simplest one would be lambda x: x).
Choosing a good function is a key factor for successful optimization in the infinite approach.
TNQMetro package features the inf_L_symfun function, which performs well in dephasing-type problems.
symfun_psi0: function
Function which symmetrizes the iMPS for psi0 after each step of optimization (the simplest one would be lambda x: x).
Choosing a good function is a key factor for successful optimization in the infinite approach.
TNQMetro package features the inf_psi0_symfun function, which performs well in dephasing-type problems.
L_ini: ndarray of a shape (D_L,D_L,d,d), optional
Initial iMPO for tilde{L}.
psi0_ini: ndarray of a shape (D_psi0,D_psi0,d), optional
Initial iMPS for psi0.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for iMPO representing tilde{L}).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if the Hermitian gauge has to be imposed on the iMPO representing tilde{L}, otherwise False.
D_psi0_max: integer, optional
Maximal value of D_psi0 (D_psi0 is bond dimension for iMPS representing psi0).
D_psi0_max_forced: bool, optional
True if D_psi0_max has to be reached, otherwise False.
Returns:
result: float
Optimal value of figure of merit.
result_m: ndarray
Matrix describing the figure of merit as a function of the bond dimensions of tilde{L} [rows] and psi0 [columns].
L: ndarray of a shape (D_L,D_L,d,d)
Optimal tilde{L} in iMPO representation.
psi0: ndarray of a shape (D_psi0,D_psi0,d)
Optimal psi0 in iMPS representation.
"""
result, result_m, L, psi0 = inf_FoM_FoMD_optbd(d, ch, ch2, epsilon, symfun_L, symfun_psi0, L_ini, psi0_ini, imprecision, D_L_max, D_L_max_forced, L_herm, D_psi0_max, D_psi0_max_forced)
return result, result_m, L, psi0
def inf_state(so_before_list, h, so_after_list, rho0, L_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True):
"""
Optimization of the lim_{N --> infinity} QFI/N over operator tilde{L} (in iMPO representation) and check of convergence in its bond dimension. Function for infinite size systems and fixed state of the system.
User has to provide information about the dynamics by specifying the quantum channel. It is assumed that the quantum channel is translationally invariant and built from layers of quantum operations.
User has to provide one defining operation for each layer as a local superoperator. These local superoperators have to be input in the order of their action on the system.
Parameter encoding is a distinct quantum operation. It is assumed that parameter encoding acts only once and is unitary, so the user has to provide only its generator h.
Generator h has to be diagonal in the computational basis; in other words, it is assumed that the local superoperators are expressed in the eigenbasis of h.
Parameters:
so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k is the number of sites on which a particular local superoperator acts
List of local superoperators (in order) which act before the unitary parameter encoding.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding. Dimension d is the dimension of local Hilbert space (dimension of physical index).
Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.
so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k is the number of sites on which a particular local superoperator acts
List of local superoperators (in order) which act after the unitary parameter encoding.
rho0: ndarray of a shape (D_rho0,D_rho0,d,d)
Density matrix describing initial state of the system in iMPO representation.
L_ini: ndarray of a shape (D_L,D_L,d,d), optional
Initial iMPO for tilde{L}.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for iMPO representing tilde{L}).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if the Hermitian gauge has to be imposed on the iMPO representing tilde{L}, otherwise False.
Returns:
result: float
Optimal value of figure of merit.
result_v: ndarray
Vector describing the figure of merit as a function of the bond dimension of tilde{L}.
L: ndarray of a shape (D_L,D_L,d,d)
Optimal tilde{L} in iMPO representation.
"""
if np.linalg.norm(h - np.diag(np.diag(h))) > 10**-10:
warnings.warn('Generator h has to be diagonal in the computational basis; in other words, it is assumed that local superoperators are expressed in the eigenbasis of h.')
d = np.shape(h)[0]
epsilon = 10**-4
aux = np.kron(h, np.eye(d)) - np.kron(np.eye(d), h)
z = np.diag(np.exp(-1j * np.diag(aux) * epsilon))
ch = inf_create_channel(d, so_before_list + so_after_list)
ch2 = inf_create_channel(d, so_before_list + [z] + so_after_list)
rho = channel_acting_on_operator(ch, rho0)
rho2 = channel_acting_on_operator(ch2, rho0)
result, result_v, L = inf_state_gen(d, rho, rho2, epsilon, inf_L_symfun, L_ini, imprecision, D_L_max, D_L_max_forced, L_herm)
return result, result_v, L
def inf_state_gen(d, rho, rho2, epsilon, symfun_L, L_ini=None, imprecision=10**-2, D_L_max=100, D_L_max_forced=False, L_herm=True):
"""
Optimization of the figure of merit (usually interpreted as lim_{N --> infinity} QFI/N) over operator tilde{L} (in iMPO representation) and check of convergence in its bond dimension. Function for infinite size systems and fixed state of the system.
User has to provide information about the dynamics by specifying two channels separated by small parameter epsilon as superoperators in iMPO representation.
By definition this infinite approach assumes translational invariance of the problem. Other than that there are no constraints on the structure of the channel, but the complexity of the calculations depends strongly on the channel's bond dimension.
Parameters:
d: integer
Dimension of local Hilbert space (dimension of physical index).
rho: ndarray of a shape (D_rho,D_rho,d,d)
Density matrix at the output of quantum channel in iMPO representation.
rho2: ndarray of a shape (D_rho2,D_rho2,d,d)
Density matrix at the output of quantum channel in iMPO representation for the value of estimated parameter shifted by epsilon in relation to rho.
epsilon: float
Value of a separation between estimated parameters encoded in rho and rho2.
symfun_L: function
Function which symmetrizes the iMPO for tilde{L} after each step of optimization (the simplest one would be lambda x: x).
Choosing a good function is a key factor for successful optimization in the infinite approach.
TNQMetro package features the inf_L_symfun function, which performs well in dephasing-type problems.
L_ini: ndarray of a shape (D_L,D_L,d,d), optional
Initial iMPO for tilde{L}.
imprecision: float, optional
Expected relative imprecision of the end results.
D_L_max: integer, optional
Maximal value of D_L (D_L is bond dimension for iMPO representing tilde{L}).
D_L_max_forced: bool, optional
True if D_L_max has to be reached, otherwise False.
L_herm: bool, optional
True if the Hermitian gauge has to be imposed on the iMPO representing tilde{L}, otherwise False.
Returns:
result: float
Optimal value of figure of merit.
result_v: ndarray
Vector describing the figure of merit as a function of the bond dimension of tilde{L}.
L: ndarray of a shape (D_L,D_L,d,d)
Optimal tilde{L} in iMPO representation.
"""
result, result_v, L = inf_FoM_optbd(d, rho, rho2, epsilon, symfun_L, L_ini, imprecision, D_L_max, D_L_max_forced, L_herm)
return result, result_v, L
def inf_L_symfun(l):
"""
Symmetrizing function for the iMPO representing tilde{L}, which performs well in dephasing-type problems.
Parameters:
l: ndarray of a shape (D_L,D_L,d,d)
iMPO for tilde{L}.
Returns:
l: ndarray of a shape (D_L,D_L,d,d)
Symmetrized iMPO for tilde{L}.
"""
bdl = np.shape(l)[0]
d = np.shape(l)[2]
if bdl == 1:
l = np.reshape(l,(d,d),order='F')
lmd = np.mean(np.diag(l))
l = np.imag(l)
l = (l+np.rot90(l,2).T)/2
l = lmd*np.eye(d)+1j*l
l = np.reshape(l,(bdl,bdl,d,d),order='F')
else:
for nx in range(d):
l[:,:,nx,nx] = np.zeros((bdl,bdl),dtype=complex)
l[0,0,nx,nx] = 1
return l
def inf_psi0_symfun(p):
"""
Symmetrizing function for the iMPS representing psi0, which performs well in dephasing-type problems.
Parameters:
p: ndarray of a shape (D_psi0,D_psi0,d)
iMPS for psi0.
Returns:
p: ndarray of a shape (D_psi0,D_psi0,d)
Symmetrized iMPS for psi0.
"""
p = (p+np.conj(np.moveaxis(p,0,1)))/2
p = (p+np.moveaxis(np.flip(p,2),0,1))/2
p = (p+np.moveaxis(np.rot90(p,2),0,1))/2
return p
############################
# #
# 2.2 Low level functions. #
# #
############################
def inf_create_channel(d, so_list, tol=10**-10):
"""
Creates iMPO for superoperator describing translationally invariant quantum channel from list of local superoperators. Function for infinite size systems.
Local superoperators acting on more than 4 neighbouring sites are not currently supported.
Parameters:
d: integer
Dimension of local Hilbert space (dimension of physical index).
so_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k is the number of sites on which a particular local superoperator acts
List of local superoperators in the order of their action on the system.
Local superoperators acting on more than 4 neighbouring sites are not currently supported.
tol: float, optional
Factor which, after multiplication by the largest singular value, gives the cutoff on singular values.
Returns:
ch: ndarray of a shape (D_ch,D_ch,d**2,d**2)
Quantum channel as superoperator in iMPO representation.
"""
if so_list == []:
ch = np.eye(d**2,dtype=complex)
ch = ch[np.newaxis,np.newaxis,:,:]
return ch
for i in range(len(so_list)):
so = so_list[i]
k = int(math.log(np.shape(so)[0],d**2))
if np.linalg.norm(so-np.diag(np.diag(so))) < 10**-10:
so = np.diag(so)
if k == 1:
bdchi = 1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
chi[:,:,nx,nx] = so[nx]
elif k == 2:
so = np.reshape(so,(d**2,d**2),order='F')
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
chi[:,:,nx,nx] = np.outer(sv[:,nx],us[nx,:])
elif k == 3:
so = np.reshape(so,(d**2,d**4),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**2),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
bdchi = bdchi2*bdchi1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv2[:,nx],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-3],[-4]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchi,bdchi),order='F')
elif k == 4:
so = np.reshape(so,(d**2,d**6),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
sv1 = np.reshape(sv1,(bdchi1*d**2,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**2,d**2),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
bdchi = bdchi3*bdchi2*bdchi1
chi = np.zeros((bdchi,bdchi,d**2,d**2),dtype=complex)
for nx in range(d**2):
tensors = [sv3[:,nx],us3[:,nx,:],us2[:,nx,:],us1[nx,:]]
legs = [[-1],[-2,-4],[-3,-5],[-6]]
chi[:,:,nx,nx] = np.reshape(ncon(tensors,legs),(bdchi,bdchi),order='F')
else:
warnings.warn('Local noise superoperators acting on more than 4 neighbouring sites are not currently supported.')
else:
if k == 1:
bdchi = 1
chi = so[np.newaxis,np.newaxis,:,:]
elif k == 2:
u,s,vh = np.linalg.svd(so)
s = s[s > s[0]*tol]
bdchi = np.shape(s)[0]
u = u[:,:bdchi]
vh = vh[:bdchi,:]
us = u @ np.diag(np.sqrt(s))
sv = np.diag(np.sqrt(s)) @ vh
us = np.reshape(us,(d**2,d**2,bdchi),order='F')
sv = np.reshape(sv,(bdchi,d**2,d**2),order='F')
tensors = [sv,us]
legs = [[-1,-3,1],[1,-4,-2]]
chi = ncon(tensors,legs)
elif k == 3:
so = np.reshape(so,(d**4,d**8),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**4),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2,d**2,d**2),order='F')
tensors = [sv2,us2,us1]
legs = [[-1,-5,1],[-2,1,2,-3],[2,-6,-4]]
chi = ncon(tensors,legs)
bdchi = bdchi2*bdchi1
chi = np.reshape(chi,(bdchi,bdchi,d**2,d**2),order='F')
elif k == 4:
so = np.reshape(so,(d**4,d**12),order='F')
u1,s1,vh1 = np.linalg.svd(so,full_matrices=False)
s1 = s1[s1 > s1[0]*tol]
bdchi1 = np.shape(s1)[0]
u1 = u1[:,:bdchi1]
vh1 = vh1[:bdchi1,:]
us1 = u1 @ np.diag(np.sqrt(s1))
sv1 = np.diag(np.sqrt(s1)) @ vh1
us1 = np.reshape(us1,(d**2,d**2,bdchi1),order='F')
sv1 = np.reshape(sv1,(bdchi1*d**4,d**8),order='F')
u2,s2,vh2 = np.linalg.svd(sv1,full_matrices=False)
s2 = s2[s2 > s2[0]*tol]
bdchi2 = np.shape(s2)[0]
u2 = u2[:,:bdchi2]
vh2 = vh2[:bdchi2,:]
us2 = u2 @ np.diag(np.sqrt(s2))
us2 = np.reshape(us2,(bdchi1,d**2,d**2,bdchi2),order='F')
sv2 = np.diag(np.sqrt(s2)) @ vh2
sv2 = np.reshape(sv2,(bdchi2*d**4,d**4),order='F')
u3,s3,vh3 = np.linalg.svd(sv2,full_matrices=False)
s3 = s3[s3 > s3[0]*tol]
bdchi3 = np.shape(s3)[0]
u3 = u3[:,:bdchi3]
vh3 = vh3[:bdchi3,:]
us3 = u3 @ np.diag(np.sqrt(s3))
us3 = np.reshape(us3,(bdchi2,d**2,d**2,bdchi3),order='F')
sv3 = np.diag(np.sqrt(s3)) @ vh3
sv3 = np.reshape(sv3,(bdchi3,d**2,d**2),order='F')
tensors = [sv3,us3,us2,us1]
legs = [[-1,-7,1],[-2,1,2,-4],[-3,2,3,-5],[3,-8,-6]]
chi = ncon(tensors,legs)
bdchi = bdchi3*bdchi2*bdchi1
chi = np.reshape(chi,(bdchi,bdchi,d**2,d**2),order='F')
else:
warnings.warn('Local noise superoperators acting on more than 4 neighbouring sites are not currently supported.')
if i == 0:
bdch = bdchi
ch = chi
else:
bdch = bdchi*bdch
tensors = [chi,ch]
legs = [[-1,-3,-5,1],[-2,-4,1,-6]]
ch = ncon(tensors,legs)
ch = np.reshape(ch,(bdch,bdch,d**2,d**2),order='F')
return ch
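Each branch above follows the same recipe: reshape the k-site superoperator into a matrix, SVD it, drop singular values below tol times the largest one, and absorb sqrt(s) into both factors to obtain MPO cores sharing a bond index. A standalone sketch of that splitting for a generic two-site operator (index grouping chosen for clarity; the package's internal ordering may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, tol = 2, 10**-10
# Generic two-site operator O[(i1,i2),(j1,j2)] (dense, illustrative only).
O = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))

# Group indices site by site, (i1,j1)|(i2,j2), and split across the bond by SVD.
m = O.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)
u, s, vh = np.linalg.svd(m, full_matrices=False)
s = s[s > s[0] * tol]                    # relative cutoff, as in inf_create_channel
bd = s.size
us = u[:, :bd] @ np.diag(np.sqrt(s))     # left MPO core,  shape (d*d, bd)
sv = np.diag(np.sqrt(s)) @ vh[:bd, :]    # right MPO core, shape (bd, d*d)

# Contracting the cores over the shared bond index recovers the operator.
O2 = (us @ sv).reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)
assert np.allclose(O, O2)
```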
def inf_L_normalization(l):
"""
Normalize (shifted) SLD iMPO.
Parameters:
l: (shifted) SLD iMPO, expected ndarray of a shape (bd,bd,d,d)
Returns:
l: normalized (shifted) SLD iMPO
"""
d = np.shape(l)[2]
tensors = [l]
legs = [[-1,-2,1,1]]
tm = ncon(tensors,legs)
ltr = np.linalg.eigvals(tm)
ltr = ltr[np.argmax(np.abs(ltr))]
ltr = np.real(ltr)
l = d*l/ltr
return l
def inf_psi0_normalization(p):
"""
Normalize wave function iMPS.
Parameters:
p: wave function iMPS, expected ndarray of a shape (bd,bd,d)
Returns:
p: normalized wave function iMPS
"""
bdp = np.shape(p)[0]
tensors = [np.conj(p),p]
legs = [[-1,-3,1],[-2,-4,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdp*bdp,bdp*bdp),order='F')
tm = (tm+np.conj(tm).T)/2
tmval = np.linalg.eigvalsh(tm)
pnorm = np.sqrt(tmval[-1])
p = p/pnorm
return p
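inf_psi0_normalization rescales the iMPS so that the dominant eigenvalue of its transfer matrix becomes 1, which keeps the norm per site finite in the thermodynamic limit. A simplified standalone check (using the dominant eigenvalue directly instead of the hermitized eigvalsh route taken above; einsum stands in for ncon):

```python
import numpy as np

rng = np.random.default_rng(1)
bd, d = 3, 2
p = rng.standard_normal((bd, bd, d)) + 1j * rng.standard_normal((bd, bd, d))

def transfer_matrix(p):
    # E[(a,b),(c,d)] = sum_s conj(p[a,c,s]) * p[b,d,s]
    bd = p.shape[0]
    tm = np.einsum('acs,bds->abcd', np.conj(p), p)
    return tm.reshape(bd * bd, bd * bd, order='F')

lam = np.max(np.abs(np.linalg.eigvals(transfer_matrix(p))))
p = p / np.sqrt(lam)

# After normalization the dominant transfer-matrix eigenvalue is 1.
lam2 = np.max(np.abs(np.linalg.eigvals(transfer_matrix(p))))
assert abs(lam2 - 1.0) < 10**-8
```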
def inf_enlarge_bdl(cold,factor,symfun):
"""
Enlarge bond dimension of (shifted) SLD iMPO. Function for infinite size systems.
Parameters:
cold: (shifted) SLD iMPO, expected ndarray of a shape (bd,bd,d,d)
factor: factor determining the average relation between the old and the newly added values of the (shifted) SLD iMPO
symfun: symmetrize function
Returns:
c: (shifted) SLD iMPO with bd += 1
"""
d = np.shape(cold)[2]
bdl = np.shape(cold)[0]+1
rng = np.random.default_rng()
c = np.zeros((bdl,bdl,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
meanrecold = np.sum(np.abs(np.real(cold[:,:,nx,nxp])))/(bdl-1)**2
meanimcold = np.sum(np.abs(np.imag(cold[:,:,nx,nxp])))/(bdl-1)**2
c[:,:,nx,nxp] = (meanrecold*rng.random((bdl,bdl))+1j*meanimcold*rng.random((bdl,bdl)))*factor
c = (c + np.conj(np.moveaxis(c,2,3)))/2
c[0:bdl-1,0:bdl-1,:,:] = cold
c = symfun(c)
c = inf_L_normalization(c)
return c
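A simplified sketch of the embedding step performed by inf_enlarge_bdl: fill the enlarged tensor with small random entries, hermitize it in the physical indices, and write the old tensor into its top-left bond block (the real function also scales the noise by the mean magnitude of the old entries and applies symfun and normalization; the scale "factor" below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, bd = 2, 3
cold = rng.standard_normal((bd, bd, d, d)) + 1j * rng.standard_normal((bd, bd, d, d))
cold = (cold + np.conj(np.moveaxis(cold, 2, 3))) / 2   # Hermitian in physical indices

# Enlarge the bond dimension by one with small hermitized random filler.
factor = 10**-2
c = factor * (rng.standard_normal((bd + 1, bd + 1, d, d))
              + 1j * rng.standard_normal((bd + 1, bd + 1, d, d)))
c = (c + np.conj(np.moveaxis(c, 2, 3))) / 2
c[0:bd, 0:bd, :, :] = cold

assert np.allclose(c, np.conj(np.moveaxis(c, 2, 3)))   # Hermiticity preserved
assert np.allclose(c[0:bd, 0:bd], cold)                # old tensor embedded exactly
```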
def inf_enlarge_bdpsi(a0old,ratio,symfund):
"""
Enlarge bond dimension of wave function iMPS. Function for infinite size systems.
Parameters:
a0old: wave function iMPS, expected ndarray of a shape (bd,bd,d)
ratio: factor determining the average relation between the last and next-to-last values of the diagonals of the wave function iMPS
symfund: symmetrize function
Returns:
a0: wave function iMPS with bd += 1
"""
d = np.shape(a0old)[2]
bdpsi = np.shape(a0old)[0]+1
a0 = np.zeros((bdpsi,bdpsi,d),dtype=complex)
for i in range(d):
if i <= np.ceil(d/2)-1:
a0oldihalf = np.triu(np.rot90(a0old[:,:,i],-1))
a0[0:bdpsi-1,1:bdpsi,i] = a0oldihalf
a0[:,:,i] = a0[:,:,i]+a0[:,:,i].T
a0[:,:,i] = a0[:,:,i]+np.diag(np.concatenate(([0],np.diag(a0[:,:,i],2),[0])))
a0[:,:,i] = np.rot90(a0[:,:,i],1)
a0[0,-1,i] = ratio*(1+1j)*np.abs(a0[0,-2,i])
a0[-1,0,i] = np.conj(a0[0,-1,i])
if i == np.ceil(d/2)-1 and np.mod(d,2) == 1:
a0[:,:,i] = (a0[:,:,i]+a0[:,:,i].T)/2
else:
a0[:,:,i] = a0[:,:,d-1-i].T
a0 = symfund(a0)
a0 = inf_psi0_normalization(a0)
return a0
def inf_FoM_FoMD_optbd(d,ch,chp,epsilon,symfun,symfund,cini=None,a0ini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True,bdpsimax=100,alwaysbdpsimax=False):
"""
Iterative optimization of FoM/FoMD over (shifted) SLD iMPO and initial wave function iMPS, with a check of convergence in their bond dimensions. Function for infinite size systems.
Parameters:
d: dimension of local Hilbert space (dimension of physical index)
ch: iMPO for quantum channel at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d**2,d**2)
chp: iMPO for quantum channel at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d**2,d**2)
epsilon: value of a separation between estimated parameters encoded in ch and chp, float
symfun: symmetrize function for iMPO for (shifted) SLD
symfund: symmetrize function for iMPS for initial wave function
cini: initial iMPO for (shifted) SLD, expected TN of a shape (bd,bd,d,d)
a0ini: initial iMPS for initial wave function, expected TN of a shape (bd,bd,d)
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for (shifted) SLD iMPO, default value is 100
alwaysbdlmax: boolean value, True if maximal value of bd for (shifted) SLD iMPO have to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD iMPO, otherwise False
bdpsimax: maximal value of bd for iMPS for initial wave function, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for the iMPS for the initial wave function has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoM/FoMD
resultm: matrix describing FoM/FoMD as a function of bd of, respectively, the (shifted) SLD iMPO [rows] and the initial wave function iMPS [columns]
c: optimal iMPO for (shifted) SLD
a0: optimal iMPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
a0 = a0[np.newaxis,np.newaxis,:]
else:
a0 = a0ini
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
if cini is None:
bdl = 1
c = np.triu(np.ones((d,d))-np.eye(d))
c = 1j*epsilon*c
c = c+np.conj(c).T
c = np.eye(d)+c
c = np.reshape(c,(bdl,bdl,d,d),order='F')
else:
c = cini
bdl = np.shape(c)[0]
c = c.astype(complex)
resultm = np.zeros((bdlmax,bdpsimax),dtype=float)
resultm[bdl-1,bdpsi-1],c,a0 = inf_FoM_FoMD_optm(c,a0,ch,chp,epsilon,symfun,symfund,imprecision,lherm)
ratiov = np.array([10**-3,10**-2.5,10**-2])
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
while True:
if bdpsi == bdpsimax:
break
else:
a0old = a0
bdpsi += 1
i = 0
while True:
a0 = inf_enlarge_bdpsi(a0,ratiov[i],symfund)
resultm[bdl-1,bdpsi-1],cnew,a0new = inf_FoM_FoMD_optm(c,a0,ch,chp,epsilon,symfun,symfund,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-1,bdpsi-2]:
break
i += 1
if i == np.size(ratiov):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-1,bdpsi-2]:
bdpsi += -1
a0 = a0old
a0copy = a0new
ccopy = cnew
break
else:
a0 = a0new
c = cnew
if problem:
break
if bdl == bdlmax:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi]
break
else:
bdl += 1
i = 0
while True:
c = inf_enlarge_bdl(c,factorv[i],symfun)
resultm[bdl-1,bdpsi-1],cnew,a0new = inf_FoM_FoMD_optm(c,a0,ch,chp,epsilon,symfun,symfund,imprecision,lherm)
if resultm[bdl-1,bdpsi-1] >= resultm[bdl-2,bdpsi-1]:
a0 = a0new
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultm[bdl-1,bdpsi-1] < (1+imprecision)*resultm[bdl-2,bdpsi-1]:
if bdpsi == bdpsimax:
resultm = resultm[0:bdl,0:bdpsi]
result = resultm[bdl-1,bdpsi-1]
else:
if resultm[bdl-1,bdpsi-1] < resultm[bdl-2,bdpsi]:
a0 = a0copy
c = ccopy
resultm = resultm[0:bdl,0:bdpsi+1]
bdl += -1
bdpsi += 1
result = resultm[bdl-1,bdpsi-1]
else:
resultm = resultm[0:bdl,0:bdpsi+1]
result = resultm[bdl-1,bdpsi-1]
break
if not(problem):
break
return result,resultm,c,a0
def inf_FoM_optbd(d,a,b,epsilon,symfun,cini=None,imprecision=10**-2,bdlmax=100,alwaysbdlmax=False,lherm=True):
"""
Optimization of FoM over the (shifted) SLD iMPO, together with a check of convergence in the bond dimension. Function for infinite size systems.
Parameters:
d: dimension of local Hilbert space (dimension of physical index)
a: iMPO for density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d)
b: iMPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d)
epsilon: value of a separation between estimated parameters encoded in a and b, float
symfun: symmetrize function for iMPO for (shifted) SLD
cini: initial iMPO for (shifted) SLD, expected TN of a shape (bd,bd,d,d)
imprecision: expected imprecision of the end results, default value is 10**-2
bdlmax: maximal value of bd for (shifted) SLD iMPO, default value is 100
alwaysbdlmax: boolean value, True if the maximal value of bd for the (shifted) SLD iMPO has to be reached, otherwise False (default value)
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD iMPO, otherwise False
Returns:
result: optimal value of FoM
resultv: vector describing FoM as a function of bd of the (shifted) SLD iMPO
c: optimal iMPO for (shifted) SLD
"""
while True:
if cini is None:
bdl = 1
c = np.triu(np.ones((d,d))-np.eye(d))
c = 1j*epsilon*c
c = c+np.conj(c).T
c = np.eye(d)+c
c = np.reshape(c,(bdl,bdl,d,d),order='F')
else:
c = cini
bdl = np.shape(c)[0]
c = c.astype(complex)
resultv = np.zeros(bdlmax,dtype=float)
resultv[bdl-1],c = inf_FoM_optm_glob(a,b,c,epsilon,symfun,imprecision,lherm)
factorv = np.array([0.5,0.25,0.1,1,0.01])
problem = False
while True:
if bdl == bdlmax:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
else:
bdl += 1
i = 0
while True:
c = inf_enlarge_bdl(c,factorv[i],symfun)
resultv[bdl-1],cnew = inf_FoM_optm_glob(a,b,c,epsilon,symfun,imprecision,lherm)
if resultv[bdl-1] >= resultv[bdl-2]:
c = cnew
break
i += 1
if i == np.size(factorv):
problem = True
break
if problem:
break
if not(alwaysbdlmax) and resultv[bdl-1] < (1+imprecision)*resultv[bdl-2]:
resultv = resultv[0:bdl]
result = resultv[bdl-1]
break
if not(problem):
break
return result,resultv,c
def inf_FoMD_optbd(d,c2d,cpd,epsilon,symfund,a0ini=None,imprecision=10**-2,bdpsimax=100,alwaysbdpsimax=False):
"""
Optimization of FoMD over the initial wave function iMPS, together with a check of convergence in the bond dimension. Function for infinite size systems.
Parameters:
d: dimension of local Hilbert space (dimension of physical index)
c2d: iMPO for square of dual of (shifted) SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d)
cpd: iMPO for dual of (shifted) SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d)
epsilon: value of a separation between estimated parameters encoded in c2d and cpd, float
symfund: symmetrize function for iMPS for initial wave function
a0ini: initial iMPS for initial wave function, expected TN of a shape (bd,bd,d)
imprecision: expected imprecision of the end results, default value is 10**-2
bdpsimax: maximal value of bd for iMPS for initial wave function, default value is 100
alwaysbdpsimax: boolean value, True if the maximal value of bd for the iMPS for the initial wave function has to be reached, otherwise False (default value)
Returns:
result: optimal value of FoMD
resultv: vector describing FoMD as a function of bd of the initial wave function iMPS
a0: optimal iMPS for initial wave function
"""
while True:
if a0ini is None:
bdpsi = 1
a0 = np.zeros(d,dtype=complex)
for i in range(d):
a0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# a0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
a0 = a0[np.newaxis,np.newaxis,:]
else:
a0 = a0ini
bdpsi = np.shape(a0)[0]
a0 = a0.astype(complex)
resultv = np.zeros(bdpsimax,dtype=float)
resultv[bdpsi-1],a0 = inf_FoMD_optm_glob(c2d,cpd,a0,epsilon,symfund,imprecision)
ratiov = np.array([10**-3,10**-2.5,10**-2])
problem = False
while True:
if bdpsi == bdpsimax:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
else:
bdpsi += 1
i = 0
while True:
a0 = inf_enlarge_bdpsi(a0,ratiov[i],symfund)
resultv[bdpsi-1],a0new = inf_FoMD_optm_glob(c2d,cpd,a0,epsilon,symfund,imprecision)
if resultv[bdpsi-1] >= resultv[bdpsi-2]:
a0 = a0new
break
i += 1
if i == np.size(ratiov):
problem = True
break
if problem:
break
if not(alwaysbdpsimax) and resultv[bdpsi-1] < (1+imprecision)*resultv[bdpsi-2]:
resultv = resultv[0:bdpsi]
result = resultv[bdpsi-1]
break
if not(problem):
break
return result,resultv,a0
def inf_FoM_FoMD_optm(c,a0,ch,chp,epsilon,symfun,symfund,imprecision=10**-2,lherm=True):
"""
Iterative optimization of FoM/FoMD over (shifted) SLD iMPO and initial wave function iMPS. Function for infinite size systems.
Parameters:
c: iMPO for (shifted) SLD, expected ndarray of a shape (bd,bd,d,d)
a0: iMPS for initial wave function, expected ndarray of a shape (bd,bd,d)
ch: iMPO for quantum channel at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d**2,d**2)
chp: iMPO for quantum channel at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d**2,d**2)
epsilon: value of a separation between estimated parameters encoded in ch and chp, float
symfun: symmetrize function for iMPO for (shifted) SLD
symfund: symmetrize function for iMPS for initial wave function
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD iMPO, otherwise False
Returns:
fval: optimal value of FoM/FoMD
c: optimal iMPO for (shifted) SLD
a0: optimal iMPS for initial wave function
"""
d = np.shape(c)[2]
bdl = np.shape(c)[0]
relunc_f = 0.1*imprecision
chd = np.conj(np.moveaxis(ch,2,3))
chpd = np.conj(np.moveaxis(chp,2,3))
f = np.array([])
iter_f = 0
while True:
a0_dm = wave_function_to_density_matrix(a0)
a = channel_acting_on_operator(ch,a0_dm)
b = channel_acting_on_operator(chp,a0_dm)
fom,c = inf_FoM_optm_glob(a,b,c,epsilon,symfun,imprecision,lherm)
f = np.append(f,fom)
if iter_f >= 2 and np.std(f[-4:])/np.mean(f[-4:]) <= relunc_f:
break
c2 = np.zeros((bdl**2,bdl**2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
for nxpp in range(d):
c2[:,:,nx,nxp] = c2[:,:,nx,nxp]+np.kron(c[:,:,nx,nxpp],c[:,:,nxpp,nxp])
c2d = channel_acting_on_operator(chd,c2)
cpd = channel_acting_on_operator(chpd,c)
fomd,a0 = inf_FoMD_optm_glob(c2d,cpd,a0,epsilon,symfund,imprecision)
f = np.append(f,fomd)
iter_f += 1
fval = f[-1]
return fval,c,a0
def inf_FoM_optm_glob(a,b,c,epsilon,symfun,imprecision=10**-2,lherm=True):
"""
Optimization of FoM over iMPO for (shifted) SLD. Function for infinite size systems.
Parameters:
a: iMPO for density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d)
b: iMPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d)
c: iMPO for (shifted) SLD, expected ndarray of a shape (bd,bd,d,d)
epsilon: value of a separation between estimated parameters encoded in a and b, float
symfun: symmetrize function for iMPO for (shifted) SLD
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD iMPO, otherwise False
Returns:
fomval: optimal value of FoM
c: optimal iMPO for (shifted) SLD
"""
def inf_FoM_optm_glob_mix(m,c_old,opt_flag=True,c_old_locopt=None):
"""
Nested function for mixing the locally optimal iMPO for the (shifted) SLD with its initial form according to the parameter m.
Parameters:
m: mixing parameter
c_old: initial iMPO for (shifted) SLD, expected ndarray of a shape (bd,bd,d,d)
opt_flag: boolean value, True (default value) if calculating the locally optimal iMPO is necessary, otherwise False
c_old_locopt: locally optimal iMPO for (shifted) SLD used when opt_flag=False, default value is None
Returns:
fomvalf: value of FoM
c_newf: iMPO for (shifted) SLD after mixing
c_locoptf: locally optimal iMPO for (shifted) SLD before mixing
"""
if opt_flag:
c_locoptf = inf_FoM_optm_loc(a,b,c_old,epsilon,imprecision,lherm)
c_locoptf = symfun(c_locoptf)
c_locoptf = inf_L_normalization(c_locoptf)
else:
c_locoptf = c_old_locopt
c_newf = c_locoptf*np.sin(m*np.pi)-c_old*np.cos(m*np.pi)
c_newf = symfun(c_newf)
c_newf = inf_L_normalization(c_newf)
fomvalf = inf_FoM_val(a,b,c_newf,epsilon)
return fomvalf,c_newf,c_locoptf
step_ini = 10**-1
step_tiny = 10**-10
relunc_fom = 0.1*imprecision
fom = np.array([])
fom_1 = inf_FoM_val(a,b,c,epsilon)
fom_05,c_05 = inf_FoM_optm_glob_mix(1/2,c)[:2]
if fom_05 > fom_1:
c = c_05
fom = np.append(fom,fom_05)
else:
fom = np.append(fom,fom_1)
del c_05
fom_tinylean = inf_FoM_optm_glob_mix(1+step_tiny,c)[0]
if fom_tinylean > fom[0]:
step = step_ini
else:
step = -step_ini
opt_flag = True
c_locopt = None
iter_fom = 1
while True:
fomval,c_new,c_locopt = inf_FoM_optm_glob_mix(1+step,c,opt_flag,c_locopt)
if fomval > fom[-1]:
c = c_new
fom = np.append(fom,fomval)
opt_flag = True
iter_fom += 1
else:
step = step/2
opt_flag = False
if np.abs(step) < step_tiny or (iter_fom >= 4 and all(fom[-4:] > 0) and np.std(fom[-4:])/np.mean(fom[-4:]) <= relunc_fom):
break
fomval = fom[-1]
return fomval,c
def inf_FoMD_optm_glob(c2d,cpd,a0,epsilon,symfund,imprecision=10**-2):
"""
Optimization of FoMD over iMPS for initial wave function. Function for infinite size systems.
Parameters:
c2d: iMPO for square of dual of (shifted) SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d)
cpd: iMPO for dual of (shifted) SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d)
a0: iMPS for initial wave function, expected ndarray of a shape (bd,bd,d)
epsilon: value of a separation between estimated parameters encoded in c2d and cpd, float
symfund: symmetrize function for iMPS for initial wave function
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
fomdval: optimal value of FoMD
a0: optimal iMPS for initial wave function
"""
def inf_FoMD_optm_glob_mix(m,a0_old,opt_flag=True,a0_old_locopt=None):
"""
Nested function for mixing the locally optimal iMPS for the initial wave function with its initial form according to the parameter m.
Parameters:
m: mixing parameter
a0_old: initial iMPS for initial wave function, expected ndarray of a shape (bd,bd,d)
opt_flag: boolean value, True (default value) if calculating the locally optimal iMPS is necessary, otherwise False
a0_old_locopt: locally optimal iMPS for initial wave function used when opt_flag=False, default value is None
Returns:
fomdvalf: value of FoMD
a0_newf: iMPS for initial wave function after mixing
a0_locoptf: locally optimal iMPS for initial wave function before mixing
"""
if opt_flag:
a0_locoptf = inf_FoMD_optm_loc(c2d,cpd,a0_old,epsilon,imprecision)
a0_locoptf = symfund(a0_locoptf)
a0_locoptf = inf_psi0_normalization(a0_locoptf)
else:
a0_locoptf = a0_old_locopt
a0_newf = a0_locoptf*np.sin(m*np.pi)-a0_old*np.cos(m*np.pi)
a0_newf = symfund(a0_newf)
a0_newf = inf_psi0_normalization(a0_newf)
fomdvalf = inf_FoMD_val(c2d,cpd,a0_newf,epsilon)
return fomdvalf,a0_newf,a0_locoptf
step_ini = 10**-1
step_tiny = 10**-10
relunc_fomd = 0.1*imprecision
fomd = np.array([])
fomd_1 = inf_FoMD_val(c2d,cpd,a0,epsilon)
fomd_05,a0_05 = inf_FoMD_optm_glob_mix(1/2,a0)[:2]
if fomd_05 > fomd_1:
a0 = a0_05
fomd = np.append(fomd,fomd_05)
else:
fomd = np.append(fomd,fomd_1)
del a0_05
fomd_tinylean = inf_FoMD_optm_glob_mix(1+step_tiny,a0)[0]
if fomd_tinylean > fomd[0]:
step = step_ini
else:
step = -step_ini
opt_flag = True
a0_locopt = None
iter_fomd = 1
while True:
fomdval,a0_new,a0_locopt = inf_FoMD_optm_glob_mix(1+step,a0,opt_flag,a0_locopt)
if fomdval > fomd[-1]:
a0 = a0_new
fomd = np.append(fomd,fomdval)
opt_flag = True
iter_fomd += 1
else:
step = step/2
opt_flag = False
if np.abs(step) < step_tiny or (iter_fomd >= 4 and all(fomd[-4:] > 0) and np.std(fomd[-4:])/np.mean(fomd[-4:]) <= relunc_fomd):
break
fomdval = fomd[-1]
return fomdval,a0
def inf_FoM_optm_loc(a,b,c,epsilon,imprecision=10**-2,lherm=True):
"""
Calculate locally optimal iMPO for (shifted) SLD. Function for infinite size systems.
Parameters:
a: iMPO for density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d)
b: iMPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d)
c: iMPO for (shifted) SLD, expected ndarray of a shape (bd,bd,d,d)
epsilon: value of a separation between estimated parameters encoded in a and b, float
imprecision: expected imprecision of the end results, default value is 10**-2
lherm: boolean value, True (default value) when Hermitian gauge is imposed on SLD iMPO, otherwise False
Returns:
c: locally optimal iMPO for (shifted) SLD
"""
d = np.shape(a)[2]
bdr = np.shape(a)[0]
bdrp = np.shape(b)[0]
bdl = np.shape(c)[0]
tol_fom = imprecision*epsilon**2
tensors = [c,b]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdl*bdrp,bdl*bdrp),order='F')
tmval,tmvr = np.linalg.eig(tm)
tmvr = tmvr[:,np.argmax(np.abs(tmval))]
tmval,tmvl = np.linalg.eig(tm.T)
tmvl = tmvl[:,np.argmax(np.abs(tmval))]
tmvnorm = np.sqrt(tmvl @ tmvr)
l1r = np.reshape(tmvr/tmvnorm,(bdl,bdrp),order='F')
l1l = np.reshape(tmvl/tmvnorm,(bdl,bdrp),order='F')
tensors = [c,a,c]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdl*bdr*bdl,bdl*bdr*bdl),order='F')
tmval,tmvr = np.linalg.eig(tm)
tmvr = tmvr[:,np.argmax(np.abs(tmval))]
tmval,tmvl = np.linalg.eig(tm.T)
tmvl = tmvl[:,np.argmax(np.abs(tmval))]
tmvnorm = np.sqrt(tmvl @ tmvr)
l2r = np.reshape(tmvr/tmvnorm,(bdl,bdr,bdl),order='F')
l2l = np.reshape(tmvl/tmvnorm,(bdl,bdr,bdl),order='F')
tensors = [l1l,b,l1r]
legs = [[-1,2],[2,1,-4,-3],[-2,1]]
l1 = ncon(tensors,legs)
l1 = np.reshape(l1,-1,order='F')
tensors = [l2l,a,np.eye(d),l2r]
legs = [[-1,2,-5],[2,1,-4,-7],[-8,-3],[-2,1,-6]]
l2 = ncon(tensors,legs)
l2 = np.reshape(l2,(bdl*bdl*d*d,bdl*bdl*d*d),order='F')
dl2 = l2+l2.T
dl1 = 2*l1
dl2pinv = np.linalg.pinv(dl2,tol_fom)
dl2pinv = (dl2pinv+dl2pinv.T)/2
cv = dl2pinv @ dl1
c = np.reshape(cv,(bdl,bdl,d,d),order='F')
if lherm:
c = (c+np.conj(np.moveaxis(c,2,3)))/2
return c
def inf_FoMD_optm_loc(c2d,cpd,a0,epsilon,imprecision=10**-2):
"""
Calculate locally optimal iMPS for initial wave function. Function for infinite size systems.
Parameters:
c2d: iMPO for square of dual of (shifted) SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d)
cpd: iMPO for dual of (shifted) SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d)
a0: iMPS for initial wave function, expected ndarray of a shape (bd,bd,d)
epsilon: value of a separation between estimated parameters encoded in c2d and cpd, float
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
a0: locally optimal iMPS for initial wave function
"""
d = np.shape(c2d)[2]
bdl2d = np.shape(c2d)[0]
bdlpd = np.shape(cpd)[0]
bdpsi = np.shape(a0)[0]
tol_fomd = imprecision*epsilon**2
tensors = [np.conj(a0),cpd,a0]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdpsi*bdlpd*bdpsi,bdpsi*bdlpd*bdpsi),order='F')
tmval,tmvr = np.linalg.eig(tm)
tmvr = tmvr[:,np.argmax(np.abs(tmval))]
tmval,tmvl = np.linalg.eig(tm.T)
tmvl = tmvl[:,np.argmax(np.abs(tmval))]
tmvnorm = np.sqrt(tmvl @ tmvr)
lpdr = np.reshape(tmvr/tmvnorm,(bdpsi,bdlpd,bdpsi),order='F')
lpdl = np.reshape(tmvl/tmvnorm,(bdpsi,bdlpd,bdpsi),order='F')
tensors = [np.conj(a0),c2d,a0]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdpsi*bdl2d*bdpsi,bdpsi*bdl2d*bdpsi),order='F')
tmval,tmvr = np.linalg.eig(tm)
tmvr = tmvr[:,np.argmax(np.abs(tmval))]
tmval,tmvl = np.linalg.eig(tm.T)
tmvl = tmvl[:,np.argmax(np.abs(tmval))]
tmvnorm = np.sqrt(tmvl @ tmvr)
l2dr = np.reshape(tmvr/tmvnorm,(bdpsi,bdl2d,bdpsi),order='F')
l2dl = np.reshape(tmvl/tmvnorm,(bdpsi,bdl2d,bdpsi),order='F')
tensors = [np.conj(a0),a0]
legs = [[-1,-3,1],[-2,-4,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdpsi*bdpsi,bdpsi*bdpsi),order='F')
tm = (tm+np.conj(tm).T)/2
tmval,tmv = np.linalg.eigh(tm)
tmv = tmv[:,-1]
psinorm = np.reshape(tmv,(bdpsi,bdpsi),order='F')
psinorm = (psinorm+np.conj(psinorm).T)/2
tensors = [lpdl,cpd,lpdr]
legs = [[-1,2,-4],[2,1,-3,-6],[-2,1,-5]]
lpd = ncon(tensors,legs)
lpd = np.reshape(lpd,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
tensors = [l2dl,c2d,l2dr]
legs = [[-1,2,-4],[2,1,-3,-6],[-2,1,-5]]
l2d = ncon(tensors,legs)
l2d = np.reshape(l2d,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
psinormpinv = np.linalg.pinv(psinorm,tol_fomd,hermitian=True)
psinormpinv = (psinormpinv+np.conj(psinormpinv).T)/2
tensors = [np.conj(psinormpinv),np.eye(d),psinormpinv]
legs = [[-1,-4],[-3,-6],[-2,-5]]
psinormpinv = ncon(tensors,legs)
psinormpinv = np.reshape(psinormpinv,(bdpsi*bdpsi*d,bdpsi*bdpsi*d),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eiginput = psinormpinv @ eiginput
fomdval,a0v = np.linalg.eig(eiginput)
position = np.argmax(np.real(fomdval))
a0v = np.reshape(a0v[:,position],-1,order='F')
a0 = np.reshape(a0v,(bdpsi,bdpsi,d),order='F')
return a0
def inf_FoM_val(a,b,c,epsilon):
"""
Calculate value of FoM. Function for infinite size systems.
Parameters:
a: iMPO for density matrix at the value of estimated parameter phi=phi_0, expected ndarray of a shape (bd,bd,d,d)
b: iMPO for density matrix at the value of estimated parameter phi=phi_0+epsilon, expected ndarray of a shape (bd,bd,d,d)
c: iMPO for (shifted) SLD, expected ndarray of a shape (bd,bd,d,d)
epsilon: value of a separation between estimated parameters encoded in a and b, float
Returns:
fomval: value of FoM
"""
bdr = np.shape(a)[0]
bdrp = np.shape(b)[0]
bdl = np.shape(c)[0]
tensors = [c,b]
legs = [[-1,-3,1,2],[-2,-4,2,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdl*bdrp,bdl*bdrp),order='F')
l1 = np.linalg.eigvals(tm)
l1 = l1[np.argmax(np.abs(l1))]
tensors = [c,a,c]
legs = [[-1,-4,1,2],[-2,-5,2,3],[-3,-6,3,1]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdl*bdr*bdl,bdl*bdr*bdl),order='F')
l2 = np.linalg.eigvals(tm)
l2 = l2[np.argmax(np.abs(l2))]
fomval = np.real((2*l1-l2-1)/epsilon**2)
return fomval
def inf_FoMD_val(c2d,cpd,a0,epsilon):
"""
Calculate value of FoMD. Function for infinite size systems.
Parameters:
c2d: iMPO for square of dual of (shifted) SLD at the value of estimated parameter phi=-phi_0, expected ndarray of a shape (bd,bd,d,d)
cpd: iMPO for dual of (shifted) SLD at the value of estimated parameter phi=-(phi_0+epsilon), expected ndarray of a shape (bd,bd,d,d)
a0: iMPS for initial wave function, expected ndarray of a shape (bd,bd,d)
epsilon: value of a separation between estimated parameters encoded in c2d and cpd, float
Returns:
fomdval: value of FoMD
"""
bdl2d = np.shape(c2d)[0]
bdlpd = np.shape(cpd)[0]
bdpsi = np.shape(a0)[0]
tensors = [np.conj(a0),cpd,a0]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdpsi*bdlpd*bdpsi,bdpsi*bdlpd*bdpsi),order='F')
lpd = np.linalg.eigvals(tm)
lpd = lpd[np.argmax(np.abs(lpd))]
tensors = [np.conj(a0),c2d,a0]
legs = [[-1,-4,1],[-2,-5,1,2],[-3,-6,2]]
tm = ncon(tensors,legs)
tm = np.reshape(tm,(bdpsi*bdl2d*bdpsi,bdpsi*bdl2d*bdpsi),order='F')
l2d = np.linalg.eigvals(tm)
l2d = l2d[np.argmax(np.abs(l2d))]
fomdval = np.real((2*lpd-l2d-1)/epsilon**2)
return fomdval
##########################
# #
# #
# 3 Auxiliary functions. #
# #
# #
##########################
def channel_acting_on_operator(ch, o):
"""
Creates MPO/iMPO for operator o after the evolution through quantum channel ch.
Parameters:
ch: list of length N of ndarrays of a shape (Dl_ch,Dr_ch,d**2,d**2) for OBC (Dl_ch, Dr_ch can vary between sites) or ndarray of a shape (D_ch,D_ch,d**2,d**2,N) for PBC or ndarray of a shape (D_ch,D_ch,d**2,d**2) for infinite approach
Quantum channel as superoperator in MPO/iMPO representation.
o: list of length N of ndarrays of a shape (Dl_o,Dr_o,d,d) for OBC (Dl_o,Dr_o can vary between sites) or ndarray of a shape (D_o,D_o,d,d,N) for PBC or ndarray of a shape (D_o,D_o,d,d) for infinite approach
Operator in MPO/iMPO representation.
Returns:
ch_o: list of length N of ndarrays of a shape (Dl_ch*Dl_o,Dr_ch*Dr_o,d,d) for OBC (Dl_ch*Dl_o, Dr_ch*Dr_o can vary between sites) or ndarray of a shape (D_ch*D_o,D_ch*D_o,d,d,N) for PBC or ndarray of a shape (D_ch*D_o,D_ch*D_o,d,d) for infinite approach
Operator after the evolution through quantum channel in MPO/iMPO representation.
"""
if type(o) is list and type(ch) is list:
if len(o) != len(ch):
warnings.warn('Tensor networks representing channel and operator have to be of the same length.')
n = len(o)
ch_o = [0]*n
for x in range(n):
d = np.shape(o[x])[2]
bdo1 = np.shape(o[x])[0]
bdo2 = np.shape(o[x])[1]
bdch1 = np.shape(ch[x])[0]
bdch2 = np.shape(ch[x])[1]
o[x] = np.reshape(o[x],(bdo1,bdo2,d**2),order='F')
ch_o[x] = np.zeros((bdo1*bdch1,bdo2*bdch2,d**2),dtype=complex)
for nx in range(d**2):
for nxp in range(d**2):
ch_o[x][:,:,nx] = ch_o[x][:,:,nx]+np.kron(ch[x][:,:,nx,nxp],o[x][:,:,nxp])
ch_o[x] = np.reshape(ch_o[x],(bdo1*bdch1,bdo2*bdch2,d,d),order='F')
o[x] = np.reshape(o[x],(bdo1,bdo2,d,d),order='F')
elif type(o) is np.ndarray and type(ch) is np.ndarray:
if np.ndim(o) != np.ndim(ch) or (np.ndim(o) == 5 and np.shape(o)[4] != np.shape(ch)[4]):
warnings.warn('Tensor networks representing channel and operator have to be of the same length.')
d = np.shape(o)[2]
bdo = np.shape(o)[0]
bdch = np.shape(ch)[0]
if np.ndim(o) == 4:
o = np.reshape(o,(bdo,bdo,d**2),order='F')
ch_o = np.zeros((bdo*bdch,bdo*bdch,d**2),dtype=complex)
for nx in range(d**2):
for nxp in range(d**2):
ch_o[:,:,nx] = ch_o[:,:,nx]+np.kron(ch[:,:,nx,nxp],o[:,:,nxp])
ch_o = np.reshape(ch_o,(bdo*bdch,bdo*bdch,d,d),order='F')
o = np.reshape(o,(bdo,bdo,d,d),order='F')
elif np.ndim(o) == 5:
n = np.shape(o)[4]
o = np.reshape(o,(bdo,bdo,d**2,n),order='F')
ch_o = np.zeros((bdo*bdch,bdo*bdch,d**2,n),dtype=complex)
for nx in range(d**2):
for nxp in range(d**2):
for x in range(n):
ch_o[:,:,nx,x] = ch_o[:,:,nx,x]+np.kron(ch[:,:,nx,nxp,x],o[:,:,nxp,x])
ch_o = np.reshape(ch_o,(bdo*bdch,bdo*bdch,d,d,n),order='F')
o = np.reshape(o,(bdo,bdo,d,d,n),order='F')
else:
warnings.warn('Channel and operator have to be of the same type (list or numpy.ndarray).')
return ch_o
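# A standalone sanity check of the Kronecker-product composition above
# (an illustrative sketch assuming NumPy; the variables S, ch, o below are
# hypothetical, not part of the library): for bond dimension 1 the channel
# tensor is just a d^2 x d^2 superoperator matrix, and the (nx, nxp) loop
# reduces to applying it to the column-major vectorization of the operator.

```python
import numpy as np

# Bond-dimension-1 case of the kron composition: ch[0,0,:,:] is a plain
# d^2 x d^2 superoperator S, and the double loop computes S @ vec(o)
# (column-major), reshaped back to a (1, 1, d, d) tensor.
d = 2
rng = np.random.default_rng(7)
S = rng.normal(size=(d**2, d**2)) + 1j*rng.normal(size=(d**2, d**2))
ch = S[np.newaxis, np.newaxis, :, :]          # shape (1, 1, d**2, d**2)
o = rng.normal(size=(1, 1, d, d)) + 0j
ovec = np.reshape(o, (1, 1, d**2), order='F')
ch_o = np.zeros((1, 1, d**2), dtype=complex)
for nx in range(d**2):
    for nxp in range(d**2):
        ch_o[:, :, nx] += np.kron(ch[:, :, nx, nxp], ovec[:, :, nxp])
ch_o = np.reshape(ch_o, (1, 1, d, d), order='F')
# Direct matrix-vector application of the superoperator for comparison.
expected = np.reshape(S @ np.reshape(o[0, 0], -1, order='F'), (d, d), order='F')
assert np.allclose(ch_o[0, 0], expected)
```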
def wave_function_to_density_matrix(p):
"""
Creates density matrix in MPO/iMPO representation from wave function in MPS/iMPS representation.
Parameters:
p: list of length N of ndarrays of a shape (Dl_p,Dr_p,d) for OBC (Dl_p, Dr_p can vary between sites) or ndarray of a shape (D_p,D_p,d,N) for PBC or ndarray of a shape (D_p,D_p,d) for infinite approach
Wave function in MPS/iMPS representation.
Returns:
r: list of length N of ndarrays of a shape (Dl_r,Dr_r,d,d) for OBC (Dl_r, Dr_r can vary between sites) or ndarray of a shape (D_r,D_r,d,d,N) for PBC or ndarray of a shape (D_r,D_r,d,d) for infinite approach
Density matrix in MPO/iMPO representation.
"""
if type(p) is list:
n = len(p)
r = [0]*n
for x in range(n):
d = np.shape(p[x])[2]
bdp1 = np.shape(p[x])[0]
bdp2 = np.shape(p[x])[1]
r[x] = np.zeros((bdp1**2,bdp2**2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
r[x][:,:,nx,nxp] = np.kron(p[x][:,:,nx],np.conj(p[x][:,:,nxp]))
elif type(p) is np.ndarray:
d = np.shape(p)[2]
bdp = np.shape(p)[0]
if np.ndim(p) == 3:
r = np.zeros((bdp**2,bdp**2,d,d),dtype=complex)
for nx in range(d):
for nxp in range(d):
r[:,:,nx,nxp] = np.kron(p[:,:,nx],np.conj(p[:,:,nxp]))
elif np.ndim(p) == 4:
n = np.shape(p)[3]
r = np.zeros((bdp**2,bdp**2,d,d,n),dtype=complex)
for nx in range(d):
for nxp in range(d):
for x in range(n):
r[:,:,nx,nxp,x] = np.kron(p[:,:,nx,x],np.conj(p[:,:,nxp,x]))
return r
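# A minimal check of the construction above (a standalone sketch assuming
# NumPy, using illustrative local variables): for a bond-dimension-1 iMPS,
# i.e. a plain local state vector, the kron construction reduces to the
# outer product |psi><psi| on the physical indices.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j*rng.normal(size=d)
psi /= np.linalg.norm(psi)
p = psi[np.newaxis, np.newaxis, :]            # iMPS tensor of shape (1, 1, d)
r = np.zeros((1, 1, d, d), dtype=complex)
for nx in range(d):
    for nxp in range(d):
        # Same rule as in wave_function_to_density_matrix above.
        r[:, :, nx, nxp] = np.kron(p[:, :, nx], np.conj(p[:, :, nxp]))
rho = r[0, 0, :, :]
assert np.allclose(rho, np.outer(psi, np.conj(psi)))
assert np.isclose(np.trace(rho).real, 1.0)
```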
def Kraus_to_superoperator(kraus_list):
"""
Creates superoperator from the list of Kraus operators.
This function is designed to create local superoperators from a list of local Kraus operators, so dk = d**k, where d is the dimension of the local Hilbert space (dimension of the physical index) and k is the number of sites on which the local Kraus operators act.
In this framework Kraus operators have to be square.
Parameters:
kraus_list: list of ndarrays of a shape (dk,dk) where dk is dimension of a Kraus operator
List of Kraus operators.
Returns:
so: ndarray of a shape (dk**2,dk**2)
Superoperator.
"""
if np.shape(kraus_list[0])[0] != np.shape(kraus_list[0])[1]:
warnings.warn('In this framework Kraus operators have to be square.')
dk = np.shape(kraus_list[0])[0]
dynamicalmatrix = np.zeros((dk**2,dk**2),dtype=complex)
for kraus in kraus_list:
krausvec = np.reshape(kraus,-1,order='F')
dynamicalmatrix = dynamicalmatrix + np.outer(krausvec,np.conj(krausvec))
# Proper dynamical matrix would have also 1/dk factor.
so = np.reshape(np.moveaxis(np.reshape(dynamicalmatrix,(dk,dk,dk,dk),order='F'),1,2),(dk**2,dk**2),order='F')
return so
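# A standalone sketch of the dynamical-matrix reshuffle used above, checked
# against direct application of the Kraus map (assumes NumPy; the function
# name kraus_to_superoperator below is a local re-implementation for
# illustration, not the library routine).

```python
import numpy as np

def kraus_to_superoperator(kraus_list):
    # Sum of |vec(K)><vec(K)| (column-major vec), then an index reshuffle
    # so that so @ vec(rho) == vec(sum_K K rho K^dag).
    dk = kraus_list[0].shape[0]
    dyn = np.zeros((dk**2, dk**2), dtype=complex)
    for k in kraus_list:
        v = np.reshape(k, -1, order='F')
        dyn += np.outer(v, np.conj(v))
    return np.reshape(np.moveaxis(np.reshape(dyn, (dk,)*4, order='F'), 1, 2),
                      (dk**2, dk**2), order='F')

# Dephasing channel on a qubit: K0 = sqrt(1-p) I, K1 = sqrt(p) Z.
p = 0.3
K = [np.sqrt(1 - p)*np.eye(2), np.sqrt(p)*np.diag([1.0, -1.0])]
so = kraus_to_superoperator(K)
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
out = np.reshape(so @ np.reshape(rho, -1, order='F'), (2, 2), order='F')
direct = sum(k @ rho @ k.conj().T for k in K)
assert np.allclose(out, direct)
```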
def fullHilb(N, so_before_list, h, so_after_list, BC='O', imprecision=10**-2):
"""
Optimization of the QFI over operator L (in full Hilbert space) and wave function psi0 (in full Hilbert space).
Function designed to be complementary to fin() so it has the same inputs.
User has to provide information about the dynamics by specifying the quantum channel. It is assumed that the quantum channel is translationally invariant and is built from layers of quantum operations.
For each layer, the user has to provide its defining operation as a local superoperator. These local superoperators have to be input in the order of their action on the system.
Parameter encoding is a distinguished quantum operation. It is assumed that the parameter encoding acts only once and is unitary, so the user has to provide only its generator h.
The generator h has to be diagonal in the computational basis; in other words, it is assumed that the local superoperators are expressed in the eigenbasis of h.
Parameters:
N: integer
Number of sites in the chain of tensors (usually number of particles).
so_before_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act before unitary parameter encoding.
h: ndarray of a shape (d,d)
Generator of unitary parameter encoding. Dimension d is the dimension of local Hilbert space (dimension of physical index).
The generator h has to be diagonal in the computational basis; in other words, it is assumed that the local superoperators are expressed in the eigenbasis of h.
so_after_list: list of ndarrays of a shape (d**(2*k),d**(2*k)) where k describes on how many sites particular local superoperator acts
List of local superoperators (in order) which act after unitary parameter encoding.
BC: 'O' or 'P'
Boundary conditions, 'O' for OBC, 'P' for PBC.
imprecision: float, optional
Expected relative imprecision of the end results.
Returns:
result: float
Optimal value of figure of merit.
L: ndarray of a shape (d**N,d**N)
Optimal L in full Hilbert space.
psi0: ndarray of a shape (d**N,)
Optimal psi0 in full Hilbert space.
"""
if np.linalg.norm(h - np.diag(np.diag(h))) > 10**-10:
warnings.warn('The generator h has to be diagonal in the computational basis; in other words, we assume that the local superoperators are expressed in the eigenbasis of h.')
d = np.shape(h)[0]
ch = fin_create_channel(N, d, BC, so_before_list + so_after_list)
ch2 = fin_create_channel_derivative(N, d, BC, so_before_list, h, so_after_list)
ch_fH = MPO_to_fullHilb_superoperator(ch)
ch2_fH = MPO_to_fullHilb_superoperator(ch2)
result, L, psi0 = fullHilb_FoM_FoMD_opt(N, d, ch_fH, ch2_fH, imprecision)
return result, L, psi0
def fullHilb_FoM_FoMD_opt(n,d,ch,chp,imprecision=10**-2):
"""
Iterative optimization of FoM/FoMD over SLD and initial wave function using standard full Hilbert space description.
Parameters:
n: number of particles
d: dimension of local Hilbert space
ch: superoperator for quantum channel describing decoherence, expected ndarray of a shape (d**n,d**n)
chp: superoperator for generalized derivative of quantum channel describing decoherence, expected ndarray of a shape (d**n,d**n)
imprecision: expected imprecision of the end results, default value is 10**-2
Returns:
result: optimal value of FoM/FoMD
l: optimal SLD
psi0: optimal initial wave function
"""
relunc_f = 0.1*imprecision
chd = np.conj(ch).T
chpd = np.conj(chp).T
psi0 = np.zeros(d,dtype=complex)
for i in range(d):
psi0[i] = np.sqrt(math.comb(d-1,i))*2**(-(d-1)/2) # prod
# psi0[i] = np.sqrt(2/(d+1))*np.sin((1+i)*np.pi/(d+1)) # sine
psi0_0 = np.copy(psi0)
for x in range(n-1):
psi0 = np.kron(psi0,psi0_0)
rho0 = np.outer(psi0,np.conj(psi0))
rho0vec = np.reshape(rho0,-1,order='F')
f = np.array([])
iter_f = 0
while True:
rhovec = ch @ rho0vec
rho = np.reshape(rhovec,(d**n,d**n),order='F')
rhopvec = chp @ rho0vec
rhop = np.reshape(rhopvec,(d**n,d**n),order='F')
rho = (rho + np.conj(rho).T)/2
rhoeigval,rhoeigvec = np.linalg.eigh(rho)
lpart1 = np.zeros((d**n,d**n),dtype=complex)
for nt in range(d**n):
for ntp in range(d**n):
if np.abs(rhoeigval[nt]+rhoeigval[ntp]) > 10**-10:
lpart1[nt,ntp] = 1/(rhoeigval[nt]+rhoeigval[ntp])
else:
lpart1[nt,ntp] = 0
lpart2 = np.conj(rhoeigvec).T @ rhop @ rhoeigvec
l = rhoeigvec @ (2*lpart1*lpart2) @ np.conj(rhoeigvec).T
fom = np.real(np.trace(rhop @ l))
f = np.append(f,fom)
if iter_f >= 4 and np.std(f[-4:])/np.mean(f[-4:]) <= relunc_f:
break
lvec = np.reshape(l,-1,order='F')
l2vec = np.reshape(l @ l,-1,order='F')
l2dvec = chd @ l2vec
l2d = np.reshape(l2dvec,(d**n,d**n),order='F')
lpdvec = chpd @ lvec
lpd = np.reshape(lpdvec,(d**n,d**n),order='F')
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eigval,eigvec = np.linalg.eigh(eiginput)
fomd = eigval[-1]
f = np.append(f,fomd)
psi0 = eigvec[:,-1]
rho0 = np.outer(psi0,np.conj(psi0))
rho0vec = np.reshape(rho0,-1,order='F')
iter_f += 1
result = f[-1]
return result,l,psi0
def fullHilb_FoM_val(rho,rhop):
"""
Calculate value of FoM using standard full Hilbert space description.
Parameters:
rho: density matrix
rhop: generalized derivative of density matrix
Returns:
fomval: value of FoM
"""
dn = np.shape(rho)[0]
rhoeigval,rhoeigvec = np.linalg.eigh(rho)
lpart1 = np.zeros((dn,dn),dtype=complex)
for nt in range(dn):
for ntp in range(dn):
if np.abs(rhoeigval[nt]+rhoeigval[ntp]) > 10**-10:
lpart1[nt,ntp] = 1/(rhoeigval[nt]+rhoeigval[ntp])
else:
lpart1[nt,ntp] = 0
lpart2 = np.conj(rhoeigvec).T @ rhop @ rhoeigvec
l = rhoeigvec @ (2*lpart1*lpart2) @ np.conj(rhoeigvec).T
fomval = np.real(np.trace(rhop @ l))
return fomval
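# As a standalone sanity check of the SLD construction used above (a sketch that
# re-implements the two-index formula rather than calling fullHilb_FoM_val): for a
# pure qubit state rho = |psi><psi| with rhop = -i[H, rho], the FoM equals the
# quantum Fisher information 4*Var(H), which is 1 for H = sigma_z/2 and |psi> = |+>.
import numpy as np
_H = np.diag([0.5, -0.5])                                # generator H = sigma_z/2
_psi = np.array([1, 1], dtype=complex)/np.sqrt(2)        # |+> state
_rho = np.outer(_psi, np.conj(_psi))
_rhop = -1j*(_H @ _rho - _rho @ _H)
_evals, _evecs = np.linalg.eigh(_rho)
_inv = np.zeros((2, 2))
for _i in range(2):
    for _j in range(2):
        if abs(_evals[_i] + _evals[_j]) > 10**-10:
            _inv[_i, _j] = 1/(_evals[_i] + _evals[_j])
_sld = _evecs @ (2*_inv*(np.conj(_evecs).T @ _rhop @ _evecs)) @ np.conj(_evecs).T
_fom = np.real(np.trace(_rhop @ _sld))                   # equals 4*Var(H) = 1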
def fullHilb_FoMD_val(l2d,lpd):
"""
Calculate value of FoMD using standard full Hilbert space description.
Parameters:
l2d: square of dual of SLD
lpd: dual of generalized derivative of SLD
Returns:
fomdval: value of FoMD
"""
eiginput = 2*lpd-l2d
eiginput = (eiginput+np.conj(eiginput).T)/2
eigval,eigvec = np.linalg.eigh(eiginput)
fomdval = eigval[-1]
return fomdval
def MPS_to_fullHilb_wave_function(a):
"""
Creates wave function in full Hilbert space from its MPS description.
Parameters:
    a: list of length N of ndarrays of shape (Dl_a,Dr_a,d) for OBC (Dl_a, Dr_a can vary between sites) or ndarray of shape (D_a,D_a,d,N) for PBC
      MPS.
    Returns:
    b: ndarray of shape (d**N,)
Wave function in full Hilbert space.
"""
if type(a) is list:
bc = 'O'
n = len(a)
d = np.shape(a[0])[2]
elif type(a) is np.ndarray:
bc = 'P'
n = np.shape(a)[3]
d = np.shape(a)[2]
b = np.zeros(d**n,dtype=complex)
nt = 0
for ntc in itertools.product(np.arange(d,dtype=int),repeat=n):
if bc == 'O':
if n == 1:
if np.shape(a[0])[0] == 1:
b[nt] = a[0][0,0,ntc[0]]
else:
                    warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
aux = a[0][:,:,ntc[0]]
for x in range(1,n):
aux = aux @ a[x][:,:,ntc[x]]
b[nt] = aux
elif bc == 'P':
aux = a[:,:,ntc[0],0]
for x in range(1,n):
aux = aux @ a[:,:,ntc[x],x]
b[nt] = np.trace(aux)
nt += 1
return b
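# Standalone illustration of the OBC contraction above (a sketch, not calling the
# function): a bond-dimension-1 MPS is a product state, so the contracted wave
# function must equal the Kronecker product of the local vectors.
import numpy as np
import itertools
_vloc = np.array([0.6, 0.8], dtype=complex)
_site = _vloc.reshape(1, 1, 2)                  # shape (Dl,Dr,d) with Dl = Dr = 1
_mps = [_site, _site]                           # two sites, OBC
_bvec = np.zeros(4, dtype=complex)
for _nt, _ntc in enumerate(itertools.product(range(2), repeat=2)):
    _aux = _mps[0][:, :, _ntc[0]] @ _mps[1][:, :, _ntc[1]]
    _bvec[_nt] = _aux[0, 0]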
def MPO_to_fullHilb_operator(a):
"""
Creates operator in full Hilbert space from its MPO description.
Parameters:
    a: list of length N of ndarrays of shape (Dl_a,Dr_a,d,d) for OBC (Dl_a, Dr_a can vary between sites) or ndarray of shape (D_a,D_a,d,d,N) for PBC
      MPO.
    Returns:
    b: ndarray of shape (d**N,d**N)
Operator in full Hilbert space.
"""
if type(a) is list:
bc = 'O'
n = len(a)
d = np.shape(a[0])[2]
elif type(a) is np.ndarray:
bc = 'P'
n = np.shape(a)[4]
d = np.shape(a)[2]
b = np.zeros((d**n,d**n),dtype=complex)
nt = 0
for ntc in itertools.product(np.arange(d,dtype=int),repeat=n):
ntp = 0
for ntpc in itertools.product(np.arange(d,dtype=int),repeat=n):
if bc == 'O':
if n == 1:
if np.shape(a[0])[0] == 1:
b[nt,ntp] = a[0][0,0,ntc[0],ntpc[0]]
else:
                        warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
aux = a[0][:,:,ntc[0],ntpc[0]]
for x in range(1,n):
aux = aux @ a[x][:,:,ntc[x],ntpc[x]]
b[nt,ntp] = aux
elif bc == 'P':
aux = a[:,:,ntc[0],ntpc[0],0]
for x in range(1,n):
aux = aux @ a[:,:,ntc[x],ntpc[x],x]
b[nt,ntp] = np.trace(aux)
ntp += 1
nt += 1
return b
def MPO_to_fullHilb_superoperator(a):
"""
Creates a superoperator in the full Hilbert space from its MPO description.
Parameters:
    a: list of length N of ndarrays of shape (Dl_a,Dr_a,d**2,d**2) for OBC (Dl_a, Dr_a can vary between sites) or ndarray of shape (D_a,D_a,d**2,d**2,N) for PBC
      MPO.
    Returns:
    b: ndarray of shape (d**(2*N),d**(2*N))
Superoperator in full Hilbert space.
"""
if type(a) is list:
bc = 'O'
n = len(a)
d2 = np.shape(a[0])[2]
elif type(a) is np.ndarray:
bc = 'P'
n = np.shape(a)[4]
d2 = np.shape(a)[2]
d = int(round(np.sqrt(d2)))
indexlist = []
for x in itertools.product(np.arange(d,dtype=int),repeat=n):
for y in itertools.product(np.arange(d,dtype=int),repeat=n):
helplist = []
for z in range(n):
helplist.append(d*x[z]+y[z])
indexlist.append(helplist)
b = np.zeros((d2**n,d2**n),dtype=complex)
for x in range(d2**n):
for y in range(d2**n):
if bc == 'O':
if n == 1:
if np.shape(a[0])[0] == 1:
b[x,y] = a[0][0,0,indexlist[x][0],indexlist[y][0]]
else:
                        warnings.warn('Tensor networks with OBC and length one must have bond dimension equal to one.')
else:
aux = a[0][:,:,indexlist[x][0],indexlist[y][0]]
for z in range(1,n):
aux = aux @ a[z][:,:,indexlist[x][z],indexlist[y][z]]
b[x,y] = aux
elif bc == 'P':
aux = a[:,:,indexlist[x][0],indexlist[y][0],0]
for z in range(1,n):
aux = aux @ a[:,:,indexlist[x][z],indexlist[y][z],z]
b[x,y] = np.trace(aux)
    return b
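# The composite index d*x[z] + y[z] built above is the position of the pair
# (x[z], y[z]) under row-major (C-order) flattening of a d x d array, i.e. the
# index of that basis pair in the d**2-dimensional local superoperator space.
# A standalone check of that arithmetic for d = 3:
import numpy as np
_dloc = 3
_idx_ok = True
for _xr in range(_dloc):
    for _yc in range(_dloc):
        _E = np.zeros((_dloc, _dloc))
        _E[_xr, _yc] = 1.0
        _idx_ok = _idx_ok and int(np.flatnonzero(_E.reshape(-1))[0]) == _dloc*_xr + _yc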
# tests/vcheck_test.py (from joelfrederico/VCheck, MIT license)
import unittest.mock as mock
from .base import base
from .base import * # noqa
import warnings
import vcheck
import logging
class vcheck_test(base):
def assertNoWarnings(self, func, *args, **kwargs):
with warnings.catch_warnings(record=True) as wrn:
func(*args, **kwargs)
self.assertListEqual(wrn, [])
# ================================
# Test vcheck function
# ================================
def vcheck_toomanyargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Only specify either hexsha (.*) or version(.*)'):
            vcheck.vcheck(self.mod2check, hexsha=current_hexshas[on_version_ind], version=current_versions[on_version_ind])
def vcheck_notenoughargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Neither hexsha nor version specified'):
vcheck.vcheck(self.mod2check)
def vcheck_hexshamatches_test(self):
self.assertTrue(vcheck.vcheck(self.mod2check, hexsha=current_hexsha))
def vcheck_hexshafails_test(self):
self.assertFalse(vcheck.vcheck(self.mod2check, hexsha=unpresent_hexsha))
def vcheck_versionmatches_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
self.assertTrue(vcheck.vcheck(self.mod2check, version=current_versions[on_version_ind]))
def vcheck_versionfails_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
self.assertFalse(vcheck.vcheck(self.mod2check, version=unpresent_version))
def vcheck_versionerrors_test(self):
with self.assertRaisesRegex(vcheck.VersionError, 'Repo for module .* does not match a released version.'):
vcheck.vcheck(self.mod2check, version=unpresent_version)
# ================================
# Test check_warn function
# ================================
def check_warn_toomanyargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Only specify either hexsha (.*) or version(.*)'):
vcheck.check_warn(self.mod2check, hexsha=current_hexshas[on_version_ind] , version=current_versions[on_version_ind])
def check_warn_notenoughargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Neither hexsha nor version specified'):
vcheck.check_warn(self.mod2check)
def check_warn_hexshamatches_test(self):
self.assertNoWarnings(vcheck.check_warn, self.mod2check, hexsha=current_hexsha)
def check_warn_hexshafails_test(self):
with self.assertWarnsRegex(UserWarning, 'Module .* with hexsha .* does not match requested: .*'):
vcheck.check_warn(self.mod2check, hexsha=unpresent_hexsha)
def check_warn_versionmatches_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
self.assertNoWarnings(vcheck.check_warn, self.mod2check, version=current_versions[on_version_ind])
def check_warn_versionfails_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertWarnsRegex(UserWarning, 'Module .* with version .* does not match requested: .*'):
vcheck.check_warn(self.mod2check, version=unpresent_version)
def check_warn_versionerrors_test(self):
with self.assertWarnsRegex(UserWarning, 'Repo for module .* does not match a released version.'):
vcheck.check_warn(self.mod2check, version=unpresent_version)
def check_warn_verbosehexsha_test(self):
with mock.patch('builtins.print', autospec=True) as m:
vcheck.check_warn(self.mod2check, hexsha=current_hexsha, verbose=True)
self.assertEqual(m.call_count, 1)
self.assertRegex(m.call_args[0][0], 'VCheck: Module vcheck matches requested hexsha .*')
def check_warn_verboseversion_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with mock.patch('builtins.print', autospec=True) as m:
vcheck.check_warn(self.mod2check, version=current_versions[on_version_ind], verbose=True)
self.assertEqual(m.call_count, 1)
self.assertRegex(m.call_args[0][0], 'VCheck: Module vcheck matches requested version .*')
# ================================
# Test check_raise function
# ================================
def check_raise_toomanyargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Only specify either hexsha (.*) or version(.*)'):
            vcheck.check_raise(self.mod2check, hexsha=current_hexshas[on_version_ind], version=current_versions[on_version_ind])
def check_raise_notenoughargs_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(ValueError, 'Neither hexsha nor version specified'):
vcheck.check_raise(self.mod2check)
def check_raise_hexshamatches_test(self):
vcheck.check_raise(self.mod2check, hexsha=current_hexsha)
def check_raise_hexshafails_test(self):
with self.assertRaisesRegex(vcheck.VersionError, 'Module .* with hexsha .* does not match requested: .*'):
vcheck.check_raise(self.mod2check, hexsha=unpresent_hexsha)
def check_raise_versionmatches_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
vcheck.check_raise(self.mod2check, version=current_versions[on_version_ind])
def check_raise_versionfails_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with self.assertRaisesRegex(vcheck.VersionError, 'Module .* with version .* does not match requested: .*'):
vcheck.check_raise(self.mod2check, version=unpresent_version)
def check_raise_versionerrors_test(self):
with self.assertRaisesRegex(vcheck.VersionError, 'Repo for module .* does not match a released version.'):
vcheck.check_raise(self.mod2check, version=unpresent_version)
def check_raise_verbosehexsha_test(self):
with mock.patch('builtins.print', autospec=True) as m:
vcheck.check_raise(self.mod2check, hexsha=current_hexsha, verbose=True)
self.assertEqual(m.call_count, 1)
self.assertRegex(m.call_args[0][0], 'VCheck: Module vcheck matches requested hexsha .*')
def check_raise_verboseversion_test(self):
on_version_ind = -1
self.mockrepo_real(on_version_ind=on_version_ind)
with mock.patch('builtins.print', autospec=True) as m:
vcheck.check_raise(self.mod2check, version=current_versions[on_version_ind], verbose=True)
self.assertEqual(m.call_count, 1)
self.assertRegex(m.call_args[0][0], 'VCheck: Module vcheck matches requested version .*')
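# The assertNoWarnings helper above relies on warnings.catch_warnings(record=True),
# which collects emitted warnings into a list instead of printing them; a minimal
# standalone sketch of that mechanism:
def _demo_catch_warnings():
    import warnings
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")     # ensure no filter suppresses the warning
        warnings.warn("something happened")
    return [str(w.message) for w in caught]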
# tests/components/demo/test_cover.py (from alindeman/home-assistant, Apache-2.0 license)
"""The tests for the Demo cover platform."""
from datetime import timedelta
import pytest
from homeassistant.components.cover import (
ATTR_POSITION, ATTR_CURRENT_POSITION, ATTR_CURRENT_TILT_POSITION,
ATTR_TILT_POSITION, DOMAIN)
from homeassistant.const import (
ATTR_ENTITY_ID, ATTR_SUPPORTED_FEATURES,
STATE_OPEN, STATE_OPENING, STATE_CLOSED, STATE_CLOSING, SERVICE_TOGGLE,
SERVICE_CLOSE_COVER, SERVICE_CLOSE_COVER_TILT, SERVICE_TOGGLE_COVER_TILT,
SERVICE_OPEN_COVER, SERVICE_OPEN_COVER_TILT, SERVICE_SET_COVER_POSITION,
SERVICE_SET_COVER_TILT_POSITION, SERVICE_STOP_COVER,
SERVICE_STOP_COVER_TILT)
from homeassistant.setup import async_setup_component
import homeassistant.util.dt as dt_util
from tests.common import assert_setup_component, async_fire_time_changed
CONFIG = {'cover': {'platform': 'demo'}}
ENTITY_COVER = 'cover.living_room_window'
@pytest.fixture
async def setup_comp(hass):
"""Set up demo cover component."""
with assert_setup_component(1, DOMAIN):
await async_setup_component(hass, DOMAIN, CONFIG)
async def test_supported_features(hass, setup_comp):
"""Test cover supported features."""
state = hass.states.get('cover.garage_door')
assert state.attributes[ATTR_SUPPORTED_FEATURES] == 3
state = hass.states.get('cover.kitchen_window')
assert state.attributes[ATTR_SUPPORTED_FEATURES] == 11
state = hass.states.get('cover.hall_window')
assert state.attributes[ATTR_SUPPORTED_FEATURES] == 15
state = hass.states.get('cover.living_room_window')
assert state.attributes[ATTR_SUPPORTED_FEATURES] == 255
async def test_close_cover(hass, setup_comp):
"""Test closing the cover."""
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPEN
assert state.attributes[ATTR_CURRENT_POSITION] == 70
await hass.services.async_call(
DOMAIN, SERVICE_CLOSE_COVER,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_CLOSING
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_CLOSED
assert state.attributes[ATTR_CURRENT_POSITION] == 0
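# The loops of seven one-second ticks above match a demo cover that travels 10% per
# second (an assumption inferred from these tests, not a documented constant):
# closing from position 70 needs 7 ticks, and the toggle tests' full traverse from
# 100 needs their 10 ticks. A standalone sketch of that timing:
def _simulate_close(position, speed=10):
    """Count one-second ticks until a cover losing `speed` percent per tick reaches 0."""
    ticks = 0
    while position > 0:
        position = max(0, position - speed)
        ticks += 1
    return ticks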
async def test_open_cover(hass, setup_comp):
"""Test opening the cover."""
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPEN
assert state.attributes[ATTR_CURRENT_POSITION] == 70
await hass.services.async_call(
DOMAIN, SERVICE_OPEN_COVER,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPENING
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPEN
assert state.attributes[ATTR_CURRENT_POSITION] == 100
async def test_toggle_cover(hass, setup_comp):
"""Test toggling the cover."""
# Start open
await hass.services.async_call(
DOMAIN, SERVICE_OPEN_COVER,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPEN
assert state.attributes['current_position'] == 100
# Toggle closed
await hass.services.async_call(
DOMAIN, SERVICE_TOGGLE,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(10):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_CLOSED
assert state.attributes[ATTR_CURRENT_POSITION] == 0
# Toggle open
await hass.services.async_call(
DOMAIN, SERVICE_TOGGLE,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(10):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.state == STATE_OPEN
assert state.attributes[ATTR_CURRENT_POSITION] == 100
async def test_set_cover_position(hass, setup_comp):
"""Test moving the cover to a specific position."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_POSITION] == 70
await hass.services.async_call(
DOMAIN, SERVICE_SET_COVER_POSITION,
{ATTR_ENTITY_ID: ENTITY_COVER, ATTR_POSITION: 10}, blocking=True)
for _ in range(6):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_POSITION] == 10
async def test_stop_cover(hass, setup_comp):
"""Test stopping the cover."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_POSITION] == 70
await hass.services.async_call(
DOMAIN, SERVICE_OPEN_COVER,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
await hass.services.async_call(
DOMAIN, SERVICE_STOP_COVER,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_POSITION] == 80
async def test_close_cover_tilt(hass, setup_comp):
"""Test closing the cover tilt."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 50
await hass.services.async_call(
DOMAIN, SERVICE_CLOSE_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 0
async def test_open_cover_tilt(hass, setup_comp):
"""Test opening the cover tilt."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 50
await hass.services.async_call(
DOMAIN, SERVICE_OPEN_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 100
async def test_toggle_cover_tilt(hass, setup_comp):
"""Test toggling the cover tilt."""
# Start open
await hass.services.async_call(
DOMAIN, SERVICE_OPEN_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 100
# Toggle closed
await hass.services.async_call(
DOMAIN, SERVICE_TOGGLE_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(10):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 0
    # Toggle open
await hass.services.async_call(
DOMAIN, SERVICE_TOGGLE_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
for _ in range(10):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 100
async def test_set_cover_tilt_position(hass, setup_comp):
"""Test moving the cover til to a specific position."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 50
await hass.services.async_call(
DOMAIN, SERVICE_SET_COVER_TILT_POSITION,
{ATTR_ENTITY_ID: ENTITY_COVER, ATTR_TILT_POSITION: 90},
blocking=True)
for _ in range(7):
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 90
async def test_stop_cover_tilt(hass, setup_comp):
"""Test stopping the cover tilt."""
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 50
await hass.services.async_call(
DOMAIN, SERVICE_CLOSE_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
future = dt_util.utcnow() + timedelta(seconds=1)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
await hass.services.async_call(
DOMAIN, SERVICE_STOP_COVER_TILT,
{ATTR_ENTITY_ID: ENTITY_COVER}, blocking=True)
async_fire_time_changed(hass, future)
await hass.async_block_till_done()
state = hass.states.get(ENTITY_COVER)
assert state.attributes[ATTR_CURRENT_TILT_POSITION] == 40
# tests/test_modules.py (from neu-vig/ezflow, MIT license)
import torch
from ezflow.modules import MODULE_REGISTRY
def test_ConvGRU():
inp_x = torch.rand(2, 8, 32, 32)
inp_h = torch.rand(2, 8, 32, 32)
module = MODULE_REGISTRY.get("ConvGRU")(hidden_dim=8, input_dim=8)
_ = module(inp_h, inp_x)
def test_BasicBlock():
inp = torch.randn(2, 3, 256, 256)
module = MODULE_REGISTRY.get("BasicBlock")(
inp.shape[1], 32, norm="group", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BasicBlock")(
inp.shape[1], 32, norm="batch", activation="leakyrelu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BasicBlock")(
inp.shape[1], 32, norm="instance", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BasicBlock")(
inp.shape[1], 32, norm="none", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BasicBlock")(
inp.shape[1], 32, norm=None, activation="relu", stride=3
)
_ = module(inp)
del module
def test_BottleneckBlock():
inp = torch.randn(2, 3, 256, 256)
module = MODULE_REGISTRY.get("BottleneckBlock")(
inp.shape[1], 32, norm="group", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BottleneckBlock")(
inp.shape[1], 32, norm="batch", activation="leakyrelu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BottleneckBlock")(
inp.shape[1], 32, norm="instance", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BottleneckBlock")(
inp.shape[1], 32, norm="none", activation="relu", stride=3
)
_ = module(inp)
del module
module = MODULE_REGISTRY.get("BottleneckBlock")(
inp.shape[1], 32, norm=None, activation="relu", stride=3
)
_ = module(inp)
del module
def test_DAP():
inp = torch.randn(2, 1, 7, 7, 16, 16)
module = MODULE_REGISTRY.get("DisplacementAwareProjection")(temperature=False)
_ = module(inp)
module = MODULE_REGISTRY.get("DisplacementAwareProjection")(temperature=True)
_ = module(inp)
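# MODULE_REGISTRY.get(name)(...) above follows a plain name-to-class registry
# pattern; a minimal standalone sketch of such a registry (hypothetical, not
# ezflow's actual implementation):
class _DemoRegistry:
    def __init__(self):
        self._classes = {}

    def register(self, name):
        def decorator(cls):
            self._classes[name] = cls
            return cls
        return decorator

    def get(self, name):
        return self._classes[name]

_DEMO_REGISTRY = _DemoRegistry()

@_DEMO_REGISTRY.register("Double")
class _Double:
    def __call__(self, x):
        return 2*x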
# crowdstrike-falcon/1.0.0/src/app.py (from bhagyeshkumar/shuffle-apps, MIT license)
import requests
import asyncio
import json
import urllib3
from walkoff_app_sdk.app_base import AppBase
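# The methods below accept headers as newline-separated "Key=Value" or "Key: Value"
# strings and queries as "key1=value1&key2=value2" strings; a standalone sketch of
# that key/value convention (splitting on the first separator; illustrative only,
# not the app's own parser):
def _parse_kv_lines(raw):
    parsed = {}
    for line in raw.split("\n"):
        for sep in ("=", ":"):
            if sep in line:
                key, value = line.split(sep, 1)
                parsed[key.strip()] = value.strip()
                break
    return parsed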
class Crowdstrike_Falcon(AppBase):
__version__ = "1.0"
app_name = "Crowdstrike_Falcon"
def __init__(self, redis, logger, console_logger=None):
self.verify = False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
super().__init__(redis, logger, console_logger)
def setup_headers(self, headers):
request_headers={}
if len(headers) > 0:
for header in headers.split("\n"):
if '=' in header:
headersplit=header.split('=')
request_headers[headersplit[0].strip()] = headersplit[1].strip()
elif ':' in header:
headersplit=header.split(':')
request_headers[headersplit[0].strip()] = headersplit[1].strip()
return request_headers
def setup_params(self, queries):
params={}
if len(queries) > 0:
for query in queries.split("\&"):
if '=' in query:
headersplit=query.split('&')
params[headersplit[0].strip()] = headersplit[1].strip()
return params
async def generate_oauth2_access_token(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/oauth2/token"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
body={'client_id': client_id, 'client_secret': client_secret}
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def revoke_oauth2_access_token(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/oauth2/revoke"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
body={'client_id': client_id, 'client_secret': client_secret}
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def download_analysis_artifacts(self, url, client_id, client_secret, id, headers="", queries="", name=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/artifacts/v1?id={id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if name:
params["name"] = name
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_detect_aggregates(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/detects/aggregates/detects/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def view_information_about_detections(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/detects/entities/summaries/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def modify_detections(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/detects/entities/detects/v2"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_sandbox_reports(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/falconx/queries/reports/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_rules_by_id(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/ioarules/entities/rules/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_rules_from_a_rule_group_by_id(self, url, client_id, client_secret, rule_group_id, ids, headers="", queries="", comment=""):
params={}
request_headers={}
url=f"{url}/ioarules/entities/rules/v1?rule_group_id={rule_group_id}&ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_a_rule_within_a_rule_group(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rules/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_rules_within_a_rule_group(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rules/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_prevention_policy_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
    url = f"{url}/policy/combined/prevention-members/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if id:
        params["id"] = id
    if filter:
        params["filter"] = filter
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def set_precedence_of_device_control_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control-precedence/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_hidden_hosts(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
    url = f"{url}/devices/queries/devices-hidden/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_rule_types_by_id(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/ioarules/entities/rule-types/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_all_platform_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit=""):
    url = f"{url}/ioarules/queries/platforms/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_combined_for_indicators(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
    url = f"{url}/iocs/combined/indicator/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if filter:
        params["filter"] = filter
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def set_precedence_of_response_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response-precedence/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_set_of_sensor_visibility_exclusions(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sv-exclusions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_the_sensor_visibility_exclusions_by_id(self, url, client_id, client_secret, ids, headers="", queries="", comment=""):
    url = f"{url}/policy/entities/sv-exclusions/v1?ids={ids}"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if comment:
        params["comment"] = comment
    ret = requests.delete(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def create_the_sensor_visibility_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sv-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_the_sensor_visibility_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sv-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_prevention_policy_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/prevention/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_notifications_based_on_their_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/notifications/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_notifications_based_on_ids_notifications(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/notifications/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_notification_status_or_assignee(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/recon/entities/notifications/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_sensor_installer_ids_by_provided_query(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
    url = f"{url}/sensors/queries/installers/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_info_about_indicators(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q="", include_deleted=""):
    url = f"{url}/intel/combined/indicators/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    if include_deleted:
        params["include_deleted"] = include_deleted
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def download_earlier_rule_sets(self, url, client_id, client_secret, id, headers="", queries="", format=""):
    url = f"{url}/intel/entities/rules-files/v1?id={id}"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if format:
        params["format"] = format
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_report_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
    url = f"{url}/intel/queries/reports/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_rule_ids(self, url, client_id, client_secret, type, headers="", queries="", offset="", limit="", sort="", name="", description="", tags="", min_created_date="", max_created_date="", q=""):
    url = f"{url}/intel/queries/rules/v1?type={type}"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if name:
        params["name"] = name
    if description:
        params["description"] = description
    if tags:
        params["tags"] = tags
    if min_created_date:
        params["min_created_date"] = min_created_date
    if max_created_date:
        params["max_created_date"] = max_created_date
    if q:
        params["q"] = q
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/sensor-update/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_set_of_ioa_exclusions(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ioa-exclusions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_the_ioa_exclusions_by_id(self, url, client_id, client_secret, ids, headers="", queries="", comment=""):
    url = f"{url}/policy/entities/ioa-exclusions/v1?ids={ids}"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if comment:
        params["comment"] = comment
    ret = requests.delete(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def create_the_ioa_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ioa-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_the_ioa_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ioa-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_sensor_update_policy_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
    url = f"{url}/policy/queries/sensor-update-members/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if id:
        params["id"] = id
    if filter:
        params["filter"] = filter
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_sensor_visibility_exclusions(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/sv-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def find_ids_for_submitted_scans(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/scanner/queries/scans/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_sensor_installer_details_by_provided_query(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
    url = f"{url}/sensors/combined/installers/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_hosts(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
params={}
request_headers={}
url=f"{url}/devices/queries/devices-scroll/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_info_about_reports(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q="", fields=""):
    url = f"{url}/intel/combined/reports/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    if fields:
        params["fields"] = fields
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_a_zipped_sample(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/malquery/entities/samples-fetch/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def schedule_samples_for_download(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/malquery/entities/samples-multidownload/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_action_on_the_sensor_update_policies(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sensor-update-actions/v1?action_name={action_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def query_notifications(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
    url = f"{url}/recon/queries/notifications/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_prevention_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/prevention/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_status_of_an_executed_active_responder_command_on_a_single_host(self, url, client_id, client_secret, cloud_request_id, sequence_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/active-responder-command/v1?cloud_request_id={cloud_request_id}&sequence_id={sequence_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def execute_an_active_responder_command_on_a_single_host(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/active-responder-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def find_all_rule_ids(self, url, client_id, client_secret, headers="", queries="", sort="", filter="", q="", offset="", limit=""):
    url = f"{url}/ioarules/queries/rules/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def set_precedence_of_prevention_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/prevention-precedence/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_indicators_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q="", include_deleted=""):
    url = f"{url}/intel/queries/indicators/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    if include_deleted:
        params["include_deleted"] = include_deleted
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_sensor_update_policy_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
    url = f"{url}/policy/combined/sensor-update-members/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if id:
        params["id"] = id
    if filter:
        params["filter"] = filter
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    if sort:
        params["sort"] = sort
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def batch_refresh_a_rtr_session_on_multiple_hosts_rtr_sessions_will_expire_after_10_minutes_unless_refreshed(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
    url = f"{url}/real-time-response/combined/batch-refresh-session/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if timeout:
        params["timeout"] = timeout
    if timeout_duration:
        params["timeout_duration"] = timeout_duration
    body = " ".join(body.strip().split()).encode("utf-8")
    ret = requests.post(url, headers=request_headers, params=params, data=body)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_queued_session_metadata_by_session_id(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/queued-sessions/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_action_on_the_device_control_policies(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control-actions/v1?action_name={action_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_scans_aggregations(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/scanner/aggregates/scans/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_detailed_notifications_based_on_their_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/notifications-detailed-translated/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_specific_indicators_using_their_indicator_ids(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/intel/entities/indicators/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def find_all_rule_group_ids(self, url, client_id, client_secret, headers="", queries="", sort="", filter="", q="", offset="", limit=""):
    url = f"{url}/ioarules/queries/rule-groups/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if q:
        params["q"] = q
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_falcon_malquery(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/malquery/queries/exact-search/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_available_builds_for_use_with_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", platform=""):
    url = f"{url}/policy/combined/sensor-update-builds/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if platform:
        params["platform"] = platform
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def search_for_firewall_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/firewall/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_set_of_host_groups(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/devices/entities/host-groups/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_set_of_host_groups(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/devices/entities/host-groups/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_host_groups(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/devices/entities/host-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_host_groups(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/devices/entities/host-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_behaviors(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/incidents/queries/behaviors/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_incidents(self, url, client_id, client_secret, headers="", queries="", sort="", filter="", offset="", limit=""):
    url = f"{url}/incidents/queries/incidents/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if sort:
        params["sort"] = sort
    if filter:
        params["filter"] = filter
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_rule_groups_by_id(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/ioarules/entities/rule-groups/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_rule_groups_by_id(self, url, client_id, client_secret, ids, headers="", queries="", comment=""):
    url = f"{url}/ioarules/entities/rule-groups/v1?ids={ids}"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if comment:
        params["comment"] = comment
    ret = requests.delete(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def create_a_rule_group(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rule-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_a_rule_group(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rule-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_all_rule_type_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit=""):
    url = f"{url}/ioarules/queries/rule-types/v1"
    request_headers = self.setup_headers(headers)
    params = self.setup_params(queries)
    if offset:
        params["offset"] = offset
    if limit:
        params["limit"] = limit
    ret = requests.get(url, headers=request_headers, params=params)
    try:
        return ret.json()
    except json.decoder.JSONDecodeError:
        return ret.text
async def get_information_about_search_and_download_quotas(self, url, client_id, client_secret, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/malquery/aggregates/quotas/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def refresh_a_session_timeout_on_a_single_host(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/refresh-session/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def query_crowdscore(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/incidents/combined/crowdscores/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_actions_on_incidents(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/incidents/entities/incident-actions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_info_about_actors(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q="", fields=""):
params={}
request_headers={}
url=f"{url}/intel/combined/actors/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
if q:
params["q"] = q
if fields:
params["fields"] = fields
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_response_policy_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/response-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def batch_initialize_a_rtr_session_on_multiple_hosts__before_any_rtr_commands_can_be_used_an_active_session_is_needed_on_the_host(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-init-session/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_rtr_extracted_file_contents_for_specified_session_and_sha256(self, url, client_id, client_secret, session_id, sha256, headers="", queries="", filename=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/extracted-file-contents/v1?session_id={session_id}&sha256={sha256}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filename:
params["filename"] = filename
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_host_groups(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/devices/combined/host-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_all_pattern_severity_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit=""):
params={}
request_headers={}
url=f"{url}/ioarules/queries/pattern-severities/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_indicators_by_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/iocs/entities/indicators/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_indicators_by_ids(self, url, client_id, client_secret, headers="", queries="", filter="", ids="", comment=""):
params={}
request_headers={}
url=f"{url}/iocs/entities/indicators/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if ids:
params["ids"] = ids
if comment:
params["comment"] = comment
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_indicators(self, url, client_id, client_secret, headers="", queries="", retrodetects="", ignore_warnings="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/iocs/entities/indicators/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if retrodetects:
params["retrodetects"] = retrodetects
if ignore_warnings:
params["ignore_warnings"] = ignore_warnings
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_indicators(self, url, client_id, client_secret, headers="", queries="", retrodetects="", ignore_warnings="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/iocs/entities/indicators/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if retrodetects:
params["retrodetects"] = retrodetects
if ignore_warnings:
params["ignore_warnings"] = ignore_warnings
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_a_set_of_device_control_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_set_of_device_control_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_device_control_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_device_control_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/device-control/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_ioa_exclusions(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/ioa-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_aggregates_on_session_data(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/aggregates/sessions/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_session(self, url, client_id, client_secret, session_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/sessions/v1?session_id={session_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def initialize_a_new_session_with_the_rtr_cloud(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/sessions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_full_sandbox_report(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/reports/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_report(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/reports/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_ml_exclusions(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/ml-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_sensor_update_policy_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/sensor-update/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_queued_session_command(self, url, client_id, client_secret, session_id, cloud_request_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/queued-sessions/command/v1?session_id={session_id}&cloud_request_id={cloud_request_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def preview_rules_notification_count_and_distribution(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/recon/aggregates/rules-preview/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_report_pdf_attachment(self, url, client_id, client_secret, id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/intel/entities/report-files/v1?id={id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_a_set_of_prevention_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/prevention/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_set_of_prevention_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/prevention/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_prevention_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/prevention/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_prevention_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/prevention/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_putfiles_based_on_the_ids_given(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/put-files/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_putfile_based_on_the_ids_given(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/put-files/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def upload_a_new_putfile_to_use_for_the_rtr_put_command(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/put-files/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_list_of_session_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
params={}
request_headers={}
url=f"{url}/real-time-response/queries/sessions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_list_of_samples(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/samples/queries/samples/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def check_status_of_sandbox_analysis(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/submissions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def submit_upload_for_sandbox_analysis(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/submissions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_number_of_hosts_that_have_observed_a_given_custom_ioc(self, url, client_id, client_secret, type, value, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/indicators/aggregates/devices-count/v1?type={type}&value={value}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def set_precedence_of_firewall_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall-precedence/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_notification_aggregates(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/recon/aggregates/notifications/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_actions_based_on_their_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/actions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_an_action_from_a_monitoring_rule_based_on_the_action_id(self, url, client_id, client_secret, id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/actions/v1?id={id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_actions_for_a_monitoring_rule(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/recon/entities/actions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_an_action_for_a_monitoring_rule(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/recon/entities/actions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def query_actions(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
params={}
request_headers={}
url=f"{url}/recon/queries/actions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
if q:
params["q"] = q
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_host_group_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/devices/queries/host-groups/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_indexed_files_metadata_by_their_hash(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/malquery/entities/metadata/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_sensor_update_policies_with_additional_support_for_uninstall_protection(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/sensor-update/v2"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_action_on_the_firewall_policies(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall-actions/v1?action_name={action_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_process_details(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/processes/entities/processes/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_short_summary_version_of_a_sandbox_report(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/falconx/entities/report-summaries/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def schedule_a_yara_based_search_for_execution(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/malquery/queries/hunt/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_the_status_of_batch_get_command__will_return_successful_files_when_they_are_finished_processing(self, url, client_id, client_secret, batch_get_cmd_req_id, headers="", queries="", timeout="", timeout_duration=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-get-command/v1?batch_get_cmd_req_id={batch_get_cmd_req_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def batch_executes_get_command_across_hosts_to_retrieve_files_after_this_call_is_made_get_realtimeresponsecombinedbatchgetcommandv1_is_used_to_query_for_the_results(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-get-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def query_monitoring_rules(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
params={}
request_headers={}
url=f"{url}/recon/queries/rules/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
if q:
params["q"] = q
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_sensor_installer_details_by_provided_sha256_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/sensors/entities/installers/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def modify_host_tags(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/devices/entities/devices/tags/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_response_policy_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/response-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_status_of_an_executed_rtr_administrator_command_on_a_single_host(self, url, client_id, client_secret, cloud_request_id, sequence_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/admin-command/v1?cloud_request_id={cloud_request_id}&sequence_id={sequence_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def execute_a_rtr_administrator_command_on_a_single_host(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/admin-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def refresh_an_active_event_stream(self, url, client_id, client_secret, action_name, appId, partition, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/sensors/entities/datafeed-actions/v1/{partition}?action_name={action_name}&appId={appId}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def validates_field_values_and_checks_for_string_matches(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rules/validate/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def check_the_status_of_a_volume_scan(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/scanner/entities/scans/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def submit_a_volume_of_files_for_ml_scanning(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/scanner/entities/scans/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def download_the_latest_rule_set(self, url, client_id, client_secret, type, headers="", queries="", format=""):
params={}
request_headers={}
url=f"{url}/intel/entities/rules-latest-files/v1?type={type}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if format:
params["format"] = format
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_rules_by_id(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/ioarules/entities/rules/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def find_all_rule_groups(self, url, client_id, client_secret, headers="", queries="", sort="", filter="", q="", offset="", limit=""):
params={}
request_headers={}
url=f"{url}/ioarules/queries/rule-groups-full/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
if q:
params["q"] = q
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def check_the_status_and_results_of_an_asynchronous_request(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/malquery/entities/requests/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_set_of_ml_exclusions(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ml-exclusions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_the_ml_exclusions_by_id(self, url, client_id, client_secret, ids, headers="", queries="", comment=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ml-exclusions/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if comment:
params["comment"] = comment
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_the_ml_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ml-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_the_ml_exclusions(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/ml-exclusions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_device_control_policy_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/device-control/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_firewall_policy_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/firewall-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_notifications_based_on_their_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/notifications-translated/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_host_group_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/devices/combined/host-group-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_platforms_by_id(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/ioarules/entities/platforms/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_action_on_the_response_policies(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response-actions/v1?action_name={action_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_a_set_of_response_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_set_of_response_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_response_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_response_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/response/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def batch_executes_a_rtr_readonly_command(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_session_metadata_by_session_id(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/sessions/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def perform_action_on_host_group(self, url, client_id, client_secret, action_name, host_group_id, hostnames, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/devices/entities/host-group-actions/v1?action_name={action_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
body = {"action_parameters": [{"name": "filter", "value": "(hostname:['" + hostnames + "'])" } ], "ids": [ host_group_id ]}
ret = requests.post(url, headers=request_headers, params=params, json=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_device_control_policy_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/device-control-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_firewall_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/firewall/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_a_set_of_sensor_update_policies_with_additional_support_for_uninstall_protection(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sensor-update/v2?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sensor-update/v2"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sensor-update/v2"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_list_of_putfile_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/real-time-response/queries/put-files/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_list_of_custom_script_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/real-time-response/queries/scripts/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_detailed_notifications_based_on_their_ids_with_raw_intelligence_content_that_generated_the_match(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/notifications-detailed/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_all_event_streams(self, url, client_id, client_secret, appId, headers="", queries="", format=""):
params={}
request_headers={}
url=f"{url}/sensors/entities/datafeed/v2?appId={appId}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if format:
params["format"] = format
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def download_sensor_installer_by_sha256_id(self, url, client_id, client_secret, id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/sensors/entities/download-installer/v1?id={id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_hosts_that_have_observed_a_given_custom_ioc(self, url, client_id, client_secret, type, value, headers="", queries="", limit="", offset=""):
params={}
request_headers={}
url=f"{url}/indicators/queries/devices/v1?type={type}&value={value}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if limit:
params["limit"] = limit
if offset:
params["offset"] = offset
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_details_for_rule_sets_for_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/intel/entities/rules/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def download_a_file_indexed_by_malquery(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/malquery/entities/download-files/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_an_uninstall_token_for_a_specific_device(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/combined/reveal-uninstall-token/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_response_policy_ids(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/response/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_a_list_of_files_for_rtr_session(self, url, client_id, client_secret, session_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/file/v1?session_id={session_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_rtr_session_file(self, url, client_id, client_secret, ids, session_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/file/v1?ids={ids}&session_id={session_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_custom_scripts_based_on_the_ids_given(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/scripts/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_custom_script_based_on_the_id_given(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/scripts/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def upload_a_new_custom_script_to_use(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/scripts/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def upload_a_new_scripts_to_replace_an_existing_one(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/scripts/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_details_on_hosts(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/devices/entities/devices/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_actor_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
params={}
request_headers={}
url=f"{url}/intel/queries/actors/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
if filter:
params["filter"] = filter
if q:
params["q"] = q
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_ccid_to_use_with_sensor_installers(self, url, client_id, client_secret, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/sensors/queries/installers/ccid/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def find_submission_ids_for_uploaded_files(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/falconx/queries/submissions/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_details_on_behaviors(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={"Content-Type": "application/json","Accept": "application/json"}
url=f"{url}/incidents/entities/behaviors/GET/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_device_control_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/device-control/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_prevention_policy_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/prevention-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_status_of_an_executed_command_on_a_single_host(self, url, client_id, client_secret, cloud_request_id, sequence_id, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/command/v1?cloud_request_id={cloud_request_id}&sequence_id={sequence_id}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def execute_a_command_on_a_single_host(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/entities/command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_the_file_associated_with_the_given_id_sha256(self, url, client_id, client_secret, ids, headers="", queries="", password_protected=""):
params={}
request_headers={}
url=f"{url}/samples/entities/samples/v3?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if password_protected:
params["password_protected"] = password_protected
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_sample_from_the_collection(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/samples/entities/samples/v3?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def upload_a_file_for_further_cloud_analysis(self, url, client_id, client_secret, file_name, headers="", queries="", comment="", is_confidential="", body=""):
params={}
request_headers={}
url=f"{url}/samples/entities/samples/v3?file_name={file_name}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if comment:
params["comment"] = comment
if is_confidential:
params["is_confidential"] = is_confidential
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_response_policies(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/combined/response/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def retrieve_a_set_of_firewall_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_a_set_of_firewall_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_firewall_policies(self, url, client_id, client_secret, headers="", queries="", clone_id="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if clone_id:
params["clone_id"] = clone_id
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_firewall_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/firewall/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def set_precedence_of_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/policy/entities/sensor-update-precedence/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def search_for_device_control_policy_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
params={}
request_headers={}
url=f"{url}/policy/queries/device-control-members/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if id:
params["id"] = id
if filter:
params["filter"] = filter
if offset:
params["offset"] = offset
if limit:
params["limit"] = limit
if sort:
params["sort"] = sort
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def batch_executes_a_rtr_active_responder_command(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-active-responder-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def batch_executes_a_rtr_administrator_command(self, url, client_id, client_secret, headers="", queries="", timeout="", timeout_duration="", body=""):
params={}
request_headers={}
url=f"{url}/real-time-response/combined/batch-admin-command/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
if timeout:
params["timeout"] = timeout
if timeout_duration:
params["timeout_duration"] = timeout_duration
body = " ".join(body.strip().split()).encode("utf-8")
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def get_monitoring_rules_rules_by_provided_ids(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/rules/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.get(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def delete_monitoring_rules(self, url, client_id, client_secret, ids, headers="", queries=""):
params={}
request_headers={}
url=f"{url}/recon/entities/rules/v1?ids={ids}"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.delete(url, headers=request_headers, params=params)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def create_monitoring_rules(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/recon/entities/rules/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.post(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
async def update_monitoring_rules(self, url, client_id, client_secret, headers="", queries="", body=""):
params={}
request_headers={}
url=f"{url}/recon/entities/rules/v1"
request_headers=self.setup_headers(headers)
params=self.setup_params(queries)
ret = requests.patch(url, headers=request_headers, params=params, data=body)
try:
return ret.json()
except json.decoder.JSONDecodeError:
return ret.text
    async def search_for_detection_ids(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter="", q=""):
        params = {}
        request_headers = {}
        url = f"{url}/detects/queries/detects/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if offset:
            params["offset"] = offset
        if limit:
            params["limit"] = limit
        if sort:
            params["sort"] = sort
        if filter:
            params["filter"] = filter
        if q:
            params["q"] = q
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def retrieve_the_file_associated_with_the_given_id_sha256(self, url, client_id, client_secret, ids, headers="", queries="", password_protected=""):
        params = {}
        request_headers = {"X-CS-USERUUID": "undefined"}
        url = f"{url}/samples/entities/samples/v2?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if password_protected:
            params["password_protected"] = password_protected
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def upload_for_sandbox_analysis(self, url, client_id, client_secret, file_name, headers="", queries="", comment="", is_confidential="", body=""):
        params = {}
        request_headers = {"X-CS-USERUUID": "undefined"}
        url = f"{url}/samples/entities/samples/v2?file_name={file_name}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if comment:
            params["comment"] = comment
        if is_confidential:
            params["is_confidential"] = is_confidential
        body = " ".join(body.strip().split()).encode("utf-8")
        ret = requests.post(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def search_for_host_group_member_ids(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
        params = {}
        request_headers = {}
        url = f"{url}/devices/queries/host-group-members/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if id:
            params["id"] = id
        if filter:
            params["filter"] = filter
        if offset:
            params["offset"] = offset
        if limit:
            params["limit"] = limit
        if sort:
            params["sort"] = sort
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def get_details_on_incidents(self, url, client_id, client_secret, headers="", queries="", body=""):
        params = {}
        request_headers = {"Content-Type": "application/json", "Accept": "application/json"}
        url = f"{url}/incidents/entities/incidents/GET/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.post(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def search_for_processes_associated_with_a_custom_ioc(self, url, client_id, client_secret, type, value, device_id, headers="", queries="", limit="", offset=""):
        params = {}
        request_headers = {}
        url = f"{url}/indicators/queries/processes/v1?type={type}&value={value}&device_id={device_id}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if limit:
            params["limit"] = limit
        if offset:
            params["offset"] = offset
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def retrieve_specific_reports_using_their_report_ids(self, url, client_id, client_secret, ids, headers="", queries="", fields=""):
        params = {}
        request_headers = {}
        url = f"{url}/intel/entities/reports/v1?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if fields:
            params["fields"] = fields
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def search_for_indicators(self, url, client_id, client_secret, headers="", queries="", filter="", offset="", limit="", sort=""):
        params = {}
        request_headers = {}
        url = f"{url}/iocs/queries/indicators/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if filter:
            params["filter"] = filter
        if offset:
            params["offset"] = offset
        if limit:
            params["limit"] = limit
        if sort:
            params["sort"] = sort
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def search_for_firewall_policy_members(self, url, client_id, client_secret, headers="", queries="", id="", filter="", offset="", limit="", sort=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/combined/firewall-members/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if id:
            params["id"] = id
        if filter:
            params["filter"] = filter
        if offset:
            params["offset"] = offset
        if limit:
            params["limit"] = limit
        if sort:
            params["sort"] = sort
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def perform_action_on_the_prevention_policies(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/entities/prevention-actions/v1?action_name={action_name}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.post(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text

    async def retrieve_a_set_of_sensor_update_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/entities/sensor-update/v1?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text

    async def delete_a_set_of_sensor_update_policies(self, url, client_id, client_secret, ids, headers="", queries=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/entities/sensor-update/v1?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.delete(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text

    async def create_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/entities/sensor-update/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.post(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text

    async def update_sensor_update_policies(self, url, client_id, client_secret, headers="", queries="", body=""):
        params = {}
        request_headers = {}
        url = f"{url}/policy/entities/sensor-update/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.patch(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text

    async def take_action_on_hosts(self, url, client_id, client_secret, action_name, headers="", queries="", body=""):
        params = {}
        request_headers = {"Content-Type": "application/json", "Accept": "application/json"}
        url = f"{url}/devices/entities/devices-actions/v2?action_name={action_name}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.post(url, headers=request_headers, params=params, data=body)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def search_for_hosts(self, url, client_id, client_secret, headers="", queries="", offset="", limit="", sort="", filter=""):
        params = {}
        request_headers = {}
        url = f"{url}/devices/queries/devices/v1"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if offset:
            params["offset"] = offset
        if limit:
            params["limit"] = limit
        if sort:
            params["sort"] = sort
        if filter:
            params["filter"] = filter
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def retrieve_specific_actors_using_their_actor_ids(self, url, client_id, client_secret, ids, headers="", queries="", fields=""):
        params = {}
        request_headers = {}
        url = f"{url}/intel/entities/actors/v1?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        if fields:
            params["fields"] = fields
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text
    async def get_pattern_severities_by_id(self, url, client_id, client_secret, ids, headers="", queries=""):
        params = {}
        request_headers = {}
        url = f"{url}/ioarules/entities/pattern-severities/v1?ids={ids}"
        request_headers = self.setup_headers(headers)
        params = self.setup_params(queries)
        ret = requests.get(url, headers=request_headers, params=params)
        try:
            return ret.json()
        except json.decoder.JSONDecodeError:
            return ret.text


if __name__ == "__main__":
    asyncio.run(Crowdstrike_Falcon.run(), debug=True)
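Every handler above ends with the same fallback: return the parsed JSON body if the response parses, otherwise the raw text. A minimal stdlib sketch of that pattern factored into one helper — the name `json_or_text` is illustrative and not part of the original class:

```python
import json


def json_or_text(body: str):
    """Return the parsed JSON payload if possible, else the raw text.

    Mirrors the try/except fallback repeated in every handler above,
    applied to a plain string instead of a requests.Response.
    """
    try:
        return json.loads(body)
    except json.decoder.JSONDecodeError:
        return body


# A well-formed body comes back as a dict; anything else is left alone.
assert json_or_text('{"resources": []}') == {"resources": []}
assert json_or_text('plain error text') == 'plain error text'
```

Centralising the fallback like this would let each handler end with a single `return json_or_text(ret.text)` instead of repeating the four-line try/except block.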
211374b5ec68ece70323c92b28adce62bda8c7ee | 18,268 | py | Python | test/test_classifier_ip6.py | amithbraj/vpp | edf1da94dc099c6e2ab1d455ce8652fada3cdb04 | [
"Apache-2.0"
] | 751 | 2017-07-13T06:16:46.000Z | 2022-03-30T09:14:35.000Z | test/test_classifier_ip6.py | amithbraj/vpp | edf1da94dc099c6e2ab1d455ce8652fada3cdb04 | [
"Apache-2.0"
] | 63 | 2018-06-11T09:48:35.000Z | 2021-01-05T09:11:03.000Z | test/test_classifier_ip6.py | amithbraj/vpp | edf1da94dc099c6e2ab1d455ce8652fada3cdb04 | [
"Apache-2.0"
] | 479 | 2017-07-13T06:17:26.000Z | 2022-03-31T18:20:43.000Z |

#!/usr/bin/env python3
import unittest
import socket
import binascii

from framework import VppTestCase, VppTestRunner
from scapy.packet import Raw
from scapy.layers.l2 import Ether
from scapy.layers.inet6 import IPv6, UDP, TCP
from util import ppp

from template_classifier import TestClassifier
class TestClassifierIP6(TestClassifier):
    """ Classifier IP6 Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestClassifierIP6, cls).setUpClass()
        cls.af = socket.AF_INET6

    @classmethod
    def tearDownClass(cls):
        super(TestClassifierIP6, cls).tearDownClass()

    def test_iacl_src_ip(self):
        """ Source IP6 iACL test

        Test scenario for basic IP ACL with source IP
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with source IP address.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with source IP
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes)
        self.pg0.add_stream(pkts)

        key = 'ip6_src'
        self.create_classify_table(
            key,
            self.build_ip6_mask(src_ip='ffffffffffffffffffffffffffffffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(src_ip=self.pg0.remote_ip6))
        self.input_acl_set_interface(self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_dst_ip(self):
        """ Destination IP6 iACL test

        Test scenario for basic IP ACL with destination IP
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with destination IP address.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with destination IP
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes)
        self.pg0.add_stream(pkts)

        key = 'ip6_dst'
        self.create_classify_table(
            key,
            self.build_ip6_mask(dst_ip='ffffffffffffffffffffffffffffffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(dst_ip=self.pg1.remote_ip6))
        self.input_acl_set_interface(self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")
def test_iacl_src_dst_ip(self):
""" Source and destination IP6 iACL test
Test scenario for basic IP ACL with source and destination IP
- Create IPv4 stream for pg0 -> pg1 interface.
- Create iACL with source and destination IP addresses.
- Send and verify received packets on pg1 interface.
"""
# Basic iACL testing with source and destination IP
pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes)
self.pg0.add_stream(pkts)
key = 'ip6'
self.create_classify_table(
key,
self.build_ip6_mask(src_ip='ffffffffffffffffffffffffffffffff',
dst_ip='ffffffffffffffffffffffffffffffff'))
self.create_classify_session(
self.acl_tbl_idx.get(key),
self.build_ip6_match(src_ip=self.pg0.remote_ip6,
dst_ip=self.pg1.remote_ip6))
self.input_acl_set_interface(self.pg0, self.acl_tbl_idx.get(key))
self.acl_active_table = key
self.pg_enable_capture(self.pg_interfaces)
self.pg_start()
pkts = self.pg1.get_capture(len(pkts))
self.verify_capture(self.pg1, pkts)
self.pg0.assert_nothing_captured(remark="packets forwarded")
self.pg2.assert_nothing_captured(remark="packets forwarded")
# Tests split to different test case classes because of issue reported in
# ticket VPP-1336
class TestClassifierIP6UDP(TestClassifier):
    """ Classifier IP6 UDP proto Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestClassifierIP6UDP, cls).setUpClass()
        cls.af = socket.AF_INET6

    def test_iacl_proto_udp(self):
        """ IP6 UDP protocol iACL test

        Test scenario for basic protocol ACL with UDP protocol
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with UDP IP protocol.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with UDP protocol
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes)
        self.pg0.add_stream(pkts)

        key = 'nh_udp'
        self.create_classify_table(key, self.build_ip6_mask(nh='ff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_UDP))
        self.input_acl_set_interface(self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_udp_sport(self):
        """ IP6 UDP source port iACL test

        Test scenario for basic protocol ACL with UDP and sport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with UDP IP protocol and defined sport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with UDP and sport
        sport = 38
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  UDP(sport=sport, dport=5678))
        self.pg0.add_stream(pkts)

        key = 'nh_udp_sport'
        self.create_classify_table(
            key, self.build_ip6_mask(nh='ff', src_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_UDP, src_port=sport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_udp_dport(self):
        """ IP6 UDP destination port iACL test

        Test scenario for basic protocol ACL with UDP and dport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with UDP IP protocol and defined dport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with UDP and dport
        dport = 427
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  UDP(sport=1234, dport=dport))
        self.pg0.add_stream(pkts)

        key = 'nh_udp_dport'
        self.create_classify_table(
            key, self.build_ip6_mask(nh='ff', dst_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_UDP, dst_port=dport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_udp_sport_dport(self):
        """ IP6 UDP source and destination ports iACL test

        Test scenario for basic protocol ACL with UDP and sport and dport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with UDP IP protocol and defined sport and dport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with UDP and sport and dport
        sport = 13720
        dport = 9080
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  UDP(sport=sport, dport=dport))
        self.pg0.add_stream(pkts)

        key = 'nh_udp_ports'
        self.create_classify_table(
            key,
            self.build_ip6_mask(nh='ff', src_port='ffff', dst_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_UDP, src_port=sport,
                                 dst_port=dport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")
class TestClassifierIP6TCP(TestClassifier):
    """ Classifier IP6 TCP proto Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestClassifierIP6TCP, cls).setUpClass()
        cls.af = socket.AF_INET6

    def test_iacl_proto_tcp(self):
        """ IP6 TCP protocol iACL test

        Test scenario for basic protocol ACL with TCP protocol
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with TCP IP protocol.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with TCP protocol
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  TCP(sport=1234, dport=5678))
        self.pg0.add_stream(pkts)

        key = 'nh_tcp'
        self.create_classify_table(key, self.build_ip6_mask(nh='ff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_TCP))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts, TCP)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_tcp_sport(self):
        """ IP6 TCP source port iACL test

        Test scenario for basic protocol ACL with TCP and sport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with TCP IP protocol and defined sport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with TCP and sport
        sport = 38
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  TCP(sport=sport, dport=5678))
        self.pg0.add_stream(pkts)

        key = 'nh_tcp_sport'
        self.create_classify_table(
            key, self.build_ip6_mask(nh='ff', src_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_TCP, src_port=sport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts, TCP)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_tcp_dport(self):
        """ IP6 TCP destination port iACL test

        Test scenario for basic protocol ACL with TCP and dport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with TCP IP protocol and defined dport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with TCP and dport
        dport = 427
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  TCP(sport=1234, dport=dport))
        self.pg0.add_stream(pkts)

        key = 'nh_tcp_dport'
        self.create_classify_table(
            key, self.build_ip6_mask(nh='ff', dst_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_TCP, dst_port=dport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts, TCP)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")

    def test_iacl_proto_tcp_sport_dport(self):
        """ IP6 TCP source and destination ports iACL test

        Test scenario for basic protocol ACL with TCP and sport and dport
            - Create IPv6 stream for pg0 -> pg1 interface.
            - Create iACL with TCP IP protocol and defined sport and dport.
            - Send and verify received packets on pg1 interface.
        """
        # Basic iACL testing with TCP and sport and dport
        sport = 13720
        dport = 9080
        pkts = self.create_stream(self.pg0, self.pg1, self.pg_if_packet_sizes,
                                  TCP(sport=sport, dport=dport))
        self.pg0.add_stream(pkts)

        key = 'nh_tcp_ports'
        self.create_classify_table(
            key,
            self.build_ip6_mask(nh='ff', src_port='ffff', dst_port='ffff'))
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(nh=socket.IPPROTO_TCP, src_port=sport,
                                 dst_port=dport))
        self.input_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg1.get_capture(len(pkts))
        self.verify_capture(self.pg1, pkts, TCP)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")
class TestClassifierIP6Out(TestClassifier):
    """ Classifier output IP6 Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestClassifierIP6Out, cls).setUpClass()
        cls.af = socket.AF_INET6

    def test_acl_ip_out(self):
        """ Output IP6 ACL test

        Test scenario for basic IP ACL with source IP
            - Create IPv6 stream for pg1 -> pg0 interface.
            - Create ACL with source IP address.
            - Send and verify received packets on pg0 interface.
        """
        # Basic oACL testing with source IP
        pkts = self.create_stream(self.pg1, self.pg0, self.pg_if_packet_sizes)
        self.pg1.add_stream(pkts)

        key = 'ip6_out'
        self.create_classify_table(
            key,
            self.build_ip6_mask(src_ip='ffffffffffffffffffffffffffffffff'),
            data_offset=0)
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_ip6_match(src_ip=self.pg1.remote_ip6))
        self.output_acl_set_interface(
            self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg0.get_capture(len(pkts))
        self.verify_capture(self.pg0, pkts)
        self.pg1.assert_nothing_captured(remark="packets forwarded")
        self.pg2.assert_nothing_captured(remark="packets forwarded")
class TestClassifierIP6MAC(TestClassifier):
    """ Classifier IP6 MAC Test Case """

    @classmethod
    def setUpClass(cls):
        super(TestClassifierIP6MAC, cls).setUpClass()
        cls.af = socket.AF_INET6

    def test_acl_mac(self):
        """ IP6 MAC iACL test

        Test scenario for basic MAC ACL with source MAC
            - Create IPv6 stream for pg0 -> pg2 interface.
            - Create ACL with source MAC address.
            - Send and verify received packets on pg2 interface.
        """
        # Basic iACL testing with source MAC
        pkts = self.create_stream(self.pg0, self.pg2, self.pg_if_packet_sizes)
        self.pg0.add_stream(pkts)

        key = 'mac'
        self.create_classify_table(
            key, self.build_mac_mask(src_mac='ffffffffffff'), data_offset=-14)
        self.create_classify_session(
            self.acl_tbl_idx.get(key),
            self.build_mac_match(src_mac=self.pg0.remote_mac))
        self.input_acl_set_interface(self.pg0, self.acl_tbl_idx.get(key))
        self.acl_active_table = key

        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        pkts = self.pg2.get_capture(len(pkts))
        self.verify_capture(self.pg2, pkts)
        self.pg0.assert_nothing_captured(remark="packets forwarded")
        self.pg1.assert_nothing_captured(remark="packets forwarded")
if __name__ == '__main__':
    unittest.main(testRunner=VppTestRunner)
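The classify tables above are built from hex mask strings: 32 `f` characters match a full 128-bit IPv6 address, and `'ffff'` matches a whole 16-bit port. A small stdlib sketch of what those strings denote (variable names here are illustrative):

```python
import socket

# 32 hex 'f' characters decode to 16 bytes of 0xff, i.e. a full
# 128-bit match mask for an IPv6 source or destination address.
full_ip6_mask = 'f' * 32
assert bytes.fromhex(full_ip6_mask) == b'\xff' * 16

# The same all-ones mask, written as an IPv6 address and packed
# with inet_pton, yields identical bytes.
all_ones_addr = 'ffff:' * 7 + 'ffff'
assert socket.inet_pton(socket.AF_INET6,
                        all_ones_addr) == bytes.fromhex(full_ip6_mask)

# 'ffff' similarly covers every bit of a 16-bit TCP/UDP port field.
assert bytes.fromhex('ffff') == b'\xff\xff'
```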
dce747ec0fb9e02ced2e90c800dad9fdd65b3286 | 561 | py | Python | spatialfriend/__init__.py | aaron-schroeder/spatialfriend | 386e7de3a0352a7144a9fd9913882bb7c9ab2e0d | [
"MIT"
] | 1 | 2019-11-11T14:08:34.000Z | 2019-11-11T14:08:34.000Z | spatialfriend/__init__.py | aaron-schroeder/spatialfriend | 386e7de3a0352a7144a9fd9913882bb7c9ab2e0d | [
"MIT"
] | 2 | 2019-12-29T01:37:31.000Z | 2020-02-20T22:26:52.000Z | spatialfriend/__init__.py | aaron-schroeder/spatialfriend | 386e7de3a0352a7144a9fd9913882bb7c9ab2e0d | [
"MIT"
] | null | null | null |

from spatialfriend.spatialfriend import (Elevation,
                                         elevation_gain,
                                         elevation_smooth,
                                         elevation_smooth_time,
                                         grade_smooth,
                                         grade_smooth_time,
                                         grade_raw)

__all__ = ['Elevation', 'elevation_gain', 'elevation_smooth',
           'elevation_smooth_time', 'grade_smooth', 'grade_smooth_time',
           'grade_raw']
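This `__init__.py` uses `__all__` to declare the package's public surface: only the listed names are pulled in by a wildcard import. A self-contained sketch of that mechanism using a throwaway module — the module and attribute names below are illustrative, not from spatialfriend:

```python
import sys
import types

# Build a fake module with an __all__ that exposes only 'visible'.
mod = types.ModuleType('demo_mod')
exec("__all__ = ['visible']\nvisible = 1\nhidden = 2", mod.__dict__)
sys.modules['demo_mod'] = mod

# `from demo_mod import *` honours __all__: 'hidden' is left out.
ns = {}
exec('from demo_mod import *', ns)
assert ns['visible'] == 1
assert 'hidden' not in ns
```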
b497c292dfcc25f243fd90749b87f15721f82103 | 143,710 | py | Python | web/transiq/restapi/serializers/team.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null | web/transiq/restapi/serializers/team.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | 14 | 2020-06-05T23:06:45.000Z | 2022-03-12T00:00:18.000Z | web/transiq/restapi/serializers/team.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null |

import re
from datetime import datetime, timedelta

from django.utils import timezone
from django.contrib.auth.models import User
from rest_framework import serializers, ISO_8601
from rest_framework.validators import UniqueValidator

from api.models import S3Upload
from api.utils import to_int
from owner.models import Owner, FuelCard
from restapi.helper_api import generate_credit_note_customer_serial_number, generate_debit_note_customer_serial_number, \
    generate_credit_note_supplier_serial_number, generate_debit_note_supplier_serial_number, \
    generate_credit_note_customer_direct_advance_serial_number, \
    generate_debit_note_supplier_direct_advance_serial_number, DATE_FORMAT, DATETIME_FORMAT
from restapi.models import BookingStatuses, BookingStatusChain, BookingStatusesMapping
from restapi.serializers.sme import SmeSerializer
from restapi.serializers.utils import AahoOfficeSerializer, CitySerializer
from restapi.service.booking import get_booking_images, access_payment_paid_to_supplier, debit_amount_to_be_adjusted, \
    get_booking_bank_accounts
from restapi.service.validators import validate_gstin, validate_vehicle_number
from sme.models import Sme
from supplier.models import Driver
from supplier.models import Supplier
from supplier.models import Vehicle
from team.models import InvoiceSummary, ManualBooking, LrNumber, RejectedPOD, BookingConsignorConsignee, \
    BookingInsurance, InWardPayment, OutWardPayment, OutWardPaymentBill, Invoice, ToPayInvoice, \
    PendingInwardPaymentEntry, CreditDebitNoteReason, CreditNoteCustomer, DebitNoteCustomer, CreditNoteSupplier, \
    DebitNoteSupplier, CreditNoteCustomerDirectAdvance, DebitNoteSupplierDirectAdvance, BookingStatusColor, \
    DataTablesFilter
from utils.models import City, AahoOffice, VehicleCategory, Bank
class InvoiceSummarySerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    ref_number = serializers.CharField(max_length=20,
                                       validators=[UniqueValidator(queryset=InvoiceSummary.objects.all())])
    datetime = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    s3_upload = serializers.PrimaryKeyRelatedField(queryset=S3Upload.objects.all())
    booking = serializers.PrimaryKeyRelatedField(many=True, queryset=ManualBooking.objects.all(), required=False)

    booking_id = serializers.SerializerMethodField()
    lr_numbers = serializers.SerializerMethodField()
    s3_upload_data = serializers.SerializerMethodField()

    def get_booking_id(self, instance):
        return '\n'.join(instance.booking.values_list('booking_id', flat=True))

    def get_lr_numbers(self, instance):
        return '\n'.join(['\n'.join(booking.lr_numbers.values_list('lr_number', flat=True)) for booking in
                          instance.booking.all()])

    def get_s3_upload_url(self, instance):
        if isinstance(instance.s3_upload, S3Upload):
            return instance.s3_upload.public_url()
        return ''

    def validate_created_by(self, value):
        if isinstance(self.instance, InvoiceSummary) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def get_s3_upload_data(self, instance):
        if isinstance(instance.s3_upload, S3Upload):
            return {
                "url": instance.s3_upload.public_url(),
                "filename": instance.s3_upload.filename
            }
        return {}

    def create(self, validated_data):
        instance = InvoiceSummary.objects.create(**validated_data)
        return instance

    def update(self, instance, validated_data):
        InvoiceSummary.objects.filter(id=instance.id).update(**validated_data)
        return InvoiceSummary.objects.get(id=instance.id)
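`get_lr_numbers` above flattens every related booking's LR numbers into one newline-separated string via a nested join. The same shape with plain lists standing in for the querysets (the LR values below are made up for illustration):

```python
# Each inner list plays the role of one booking's
# lr_numbers.values_list('lr_number', flat=True) queryset.
bookings = [['LR-001', 'LR-002'], ['LR-003']]

# Join each booking's LR numbers, then join the bookings themselves,
# exactly as get_lr_numbers does with '\n' at both levels.
flat = '\n'.join('\n'.join(lrs) for lrs in bookings)
assert flat.split('\n') == ['LR-001', 'LR-002', 'LR-003']
```

The result is a single display string, which matches how the field is rendered in a table cell rather than as a JSON list.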
class ManualBookingMISSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    created_on = serializers.DateTimeField(read_only=True, format=DATETIME_FORMAT)
    shipment_date = serializers.DateField(read_only=True, format=DATE_FORMAT)
    delivery_datetime = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
    booking_id = serializers.CharField(read_only=True)
    lr_number = serializers.SerializerMethodField()
    customer_placed_order_data = serializers.SerializerMethodField(read_only=True)
    consignor_name = serializers.CharField(read_only=True)
    consignee_name = serializers.CharField(read_only=True)
    customer_to_be_billed_to_data = serializers.SerializerMethodField(read_only=True)
    from_city = serializers.SerializerMethodField()
    to_city = serializers.SerializerMethodField()
    vehicle = serializers.SerializerMethodField()
    lorry_number = serializers.CharField(read_only=True)
    party_rate = serializers.IntegerField(read_only=True)
    charged_weight = serializers.DecimalField(read_only=True, decimal_places=3, max_digits=12)
    freight_revenue = serializers.SerializerMethodField()
    additional_charges_for_company = serializers.DecimalField(read_only=True, decimal_places=3, max_digits=12)
    invoice_remarks_for_additional_charges = serializers.CharField(read_only=True)
    deductions_for_company = serializers.DecimalField(read_only=True, decimal_places=3, max_digits=12)
    invoice_remarks_for_deduction_discount = serializers.CharField(read_only=True)
    total_amount_to_company = serializers.IntegerField(read_only=True)
    refund_amount_due = serializers.SerializerMethodField()
    refund_amount_paid = serializers.SerializerMethodField()
    inward_payments_advance = serializers.SerializerMethodField()
    inward_payments_other = serializers.SerializerMethodField()
    tds_deducted_amount = serializers.IntegerField(read_only=True)
    credit_amount_customer = serializers.SerializerMethodField()
    debit_amount_customer = serializers.SerializerMethodField()
    balance_for_customer = serializers.IntegerField(read_only=True)
    invoice_status = serializers.CharField(read_only=True)
    invoice_number = serializers.CharField(read_only=True)
    billing_invoice_date = serializers.DateField(read_only=True)
    supplier_data = serializers.SerializerMethodField()
    supplier_charged_weight = serializers.CharField(read_only=True)
    supplier_rate = serializers.IntegerField(read_only=True)
    supplier_freight = serializers.IntegerField(read_only=True)
    loading_charge = serializers.IntegerField(read_only=True)
    unloading_charge = serializers.IntegerField(read_only=True)
    detention_charge = serializers.IntegerField(read_only=True)
    other_deduction = serializers.IntegerField(read_only=True)
    remarks_about_deduction = serializers.CharField(read_only=True)
    tds_deducted_supplier = serializers.SerializerMethodField()
    total_amount_to_owner = serializers.IntegerField(read_only=True)
    total_out_ward_amount = serializers.CharField(read_only=True)
credit_amount_supplier = serializers.SerializerMethodField()
debit_amount_supplier = serializers.SerializerMethodField()
debit_amount_supplier_direct_advance = serializers.SerializerMethodField()
balance_amt_payable = serializers.SerializerMethodField()
pod_status = serializers.CharField(read_only=True)
source_office = serializers.SerializerMethodField()
destination_office = serializers.SerializerMethodField()
def get_debit_amount_customer(self, instance):
return sum(
instance.debitnotecustomer_set.filter(status__in=['partial', 'adjusted']).exclude(deleted=True).values_list(
'adjusted_amount', flat=True))
def get_credit_amount_customer(self, instance):
return sum(instance.creditnotecustomer_set.filter(status__in=['partial', 'adjusted']).exclude(
deleted=True).values_list(
'adjusted_amount', flat=True))
def get_credit_amount_supplier(self, instance):
return sum(instance.creditnotesupplier_set.filter(status__in=['partial', 'adjusted']).exclude(
deleted=True).values_list(
'adjusted_amount', flat=True))
def get_debit_amount_supplier(self, instance):
return sum(
instance.debitnotesupplier_set.filter(status__in=['partial', 'adjusted']).exclude(deleted=True).values_list(
'adjusted_amount', flat=True))
def get_vehicle(self, instance):
if isinstance(instance.supplier_vehicle, Vehicle):
vehicle = {
'id': instance.supplier_vehicle.id, 'vehicle_number': instance.supplier_vehicle.number(),
}
if isinstance(instance.supplier_vehicle.vehicle_type, VehicleCategory):
vehicle["vehicle_type"] = instance.supplier_vehicle.vehicle_type.vehicle_type
else:
vehicle["vehicle_type"] = None
return vehicle
return {'id': -1, 'vehicle_number': None, "vehicle_type": None}
def get_customer_placed_order_data(self, instance):
if isinstance(instance.customer_to_be_billed_to, Sme):
return {'id': instance.customer_to_be_billed_to.id, 'name': instance.customer_to_be_billed_to.get_name(),
'code': instance.customer_to_be_billed_to.company_code,
'gstin': instance.customer_to_be_billed_to.gstin}
return {}
    def get_lr_number(self, instance):
        if isinstance(instance, ManualBooking) and instance.lr_numbers.exists():
            return '\n'.join(instance.lr_numbers.values_list('lr_number', flat=True))
        return ''
def get_customer_to_be_billed_to_data(self, instance):
if isinstance(instance.customer_to_be_billed_to, Sme):
return {'id': instance.customer_to_be_billed_to.id, 'name': instance.customer_to_be_billed_to.get_name(),
'code': instance.customer_to_be_billed_to.company_code}
return {}
def get_from_city(self, instance):
if isinstance(instance, ManualBooking) and isinstance(instance.from_city_fk, City):
return instance.from_city_fk.name
return None
def get_to_city(self, instance):
if isinstance(instance, ManualBooking) and isinstance(instance.to_city_fk, City):
return instance.to_city_fk.name
return None
def get_freight_revenue(self, instance):
return instance.customer_freight
def get_refund_amount_due(self, instance):
return instance.refundable_due_amount
def get_refund_amount_paid(self, instance):
return instance.refundable_paid_amount
def get_inward_payments_advance(self, instance):
return instance.adjusted_cnca_amount
def get_inward_payments_other(self, instance):
return instance.inward_amount
def get_tds_deducted_supplier(self, instance):
return 0
def get_supplier_data(self, instance):
if isinstance(instance.booking_supplier, Supplier):
return {'id': instance.booking_supplier.id, 'name': instance.booking_supplier.name,
'phone': instance.booking_supplier.phone, 'code': instance.booking_supplier.code}
return {}
def get_debit_amount_supplier_direct_advance(self, instance):
return instance.adjusted_cnca_amount
def get_balance_amt_payable(self, instance):
return instance.balance_for_supplier
def get_source_office(self, instance):
if isinstance(instance.source_office, AahoOffice):
return {'id': instance.source_office.id, 'branch_name': instance.source_office.branch_name}
return {}
def get_destination_office(self, instance):
if isinstance(instance.destination_office, AahoOffice):
return {'id': instance.destination_office.id, 'branch_name': instance.destination_office.branch_name}
return {}


class FMSManualBookingSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
booking_id = serializers.CharField(read_only=True)
shipment_date = serializers.DateField(format=DATE_FORMAT)
from_city = serializers.CharField(max_length=50)
to_city = serializers.CharField(max_length=50)
lorry_number = serializers.CharField(max_length=15, min_length=7)
pod_status = serializers.ChoiceField(
allow_null=True, choices=(
('pending', 'Pending'), ('unverified', 'Unverified'), ('rejected', 'Rejected'), ('completed', 'Delivered')),
required=False
)
outward_payment_status = serializers.ChoiceField(allow_null=True, choices=(
('no_payment_made', 'Nil'), ('partial', 'Partial'), ('complete', 'Full'), ('excess', 'Excess')), required=False)
supplier_charged_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=True)
supplier_rate = serializers.IntegerField(read_only=True)
loading_charge = serializers.IntegerField(read_only=True)
unloading_charge = serializers.IntegerField(read_only=True)
detention_charge = serializers.IntegerField(read_only=True)
additional_charges_for_owner = serializers.IntegerField(read_only=True)
commission = serializers.IntegerField(read_only=True)
lr_cost = serializers.IntegerField(read_only=True)
deduction_for_advance = serializers.IntegerField(read_only=True)
deduction_for_balance = serializers.IntegerField(read_only=True)
other_deduction = serializers.IntegerField(read_only=True)
remarks_about_deduction = serializers.CharField(read_only=True)
total_amount_to_owner = serializers.IntegerField(read_only=True)
lr_numbers = serializers.SerializerMethodField()
outward_payments = serializers.SerializerMethodField()
pod_data = serializers.SerializerMethodField()
amount = serializers.SerializerMethodField()
paid_amount = serializers.SerializerMethodField()
balance_amount = serializers.SerializerMethodField()
latest_payment_date = serializers.SerializerMethodField()
debit_note_supplier = serializers.SerializerMethodField()
credit_note_supplier = serializers.SerializerMethodField()
credit_note_for_direct_advance = serializers.SerializerMethodField()
vehicle_data = serializers.SerializerMethodField()
    def get_latest_payment_date(self, instance):
        # Take last() from the filtered queryset; the unfiltered last() may
        # have payment_date=None even when a dated payment exists.
        payments = instance.outward_booking.exclude(payment_date=None)
        if payments.exists():
            return payments.last().payment_date.strftime('%d-%b-%Y')
        return None
def get_credit_note_for_direct_advance(self, instance):
return CreditNoteCustomerDirectAdvanceSerializer(many=True,
instance=instance.creditnotecustomerdirectadvance_set.filter(
status__in=['partial', 'adjusted'])).data
def get_credit_note_supplier(self, instance):
return CreditNoteSupplierSerializer(many=True, instance=instance.creditnotesupplier_set.filter(
status__in=['partial', 'adjusted'])).data
def get_debit_note_supplier(self, instance):
return DebitNoteSupplierSerializer(many=True, instance=instance.debitnotesupplier_set.filter(
status__in=['partial', 'adjusted'])).data
def get_amount(self, instance):
return instance.fms_supplier_amount
def get_paid_amount(self, instance):
return instance.fms_supplier_paid_amount
def get_balance_amount(self, instance):
return instance.fms_balance_supplier
def get_pod_data(self, instance):
from restapi.serializers.file_upload import BasicPODFileSerializer
return BasicPODFileSerializer(instance.podfile_set.all(), many=True).data
def get_outward_payments(self, instance):
return OutWardPaymentSerializer(
OutWardPayment.objects.filter(
booking_id=instance).exclude(is_refund_amount=True).exclude(deleted=True), many=True).data
def get_lr_numbers(self, instance):
return [{"id": lr.id, "lr_number": lr.lr_number} for lr in instance.lr_numbers.all()]
def get_vehicle_data(self, instance):
if isinstance(instance.supplier_vehicle, Vehicle):
vehicle = {
'id': instance.supplier_vehicle.id, 'vehicle_number': instance.supplier_vehicle.number(),
}
if isinstance(instance.supplier_vehicle.vehicle_type, VehicleCategory):
vehicle["vehicle_type"] = instance.supplier_vehicle.vehicle_type.vehicle_type
else:
vehicle["vehicle_type"] = None
return vehicle
return {'id': -1, 'vehicle_number': None, "vehicle_type": None}
@classmethod
def many_init(cls, *args, **kwargs):
kwargs['child'] = cls()
excluded_fields = [
'outward_payments', 'credit_note_supplier', 'debit_note_supplier',
'credit_note_for_direct_advance'
]
for field in excluded_fields:
kwargs['child'].fields.pop(field)
return serializers.ListSerializer(*args, **kwargs)
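The `many_init` override above is a pattern worth noting: when a queryset is serialized with `many=True`, the per-row `SerializerMethodField`s that trigger extra queries (payments, credit/debit notes) are popped from the shared child serializer so list views stay fast. A standalone sketch of the same idea, without DRF and with illustrative names:

```python
class DetailSketch:
    """Minimal stand-in for a serializer: `fields` maps names to getters."""

    def __init__(self):
        self.fields = {
            'id': lambda obj: obj['id'],
            'booking_id': lambda obj: obj['booking_id'],
            # Pretend this one is expensive (extra queries per row).
            'outward_payments': lambda obj: obj.get('payments', []),
        }

    def to_representation(self, obj):
        return {name: getter(obj) for name, getter in self.fields.items()}

    @classmethod
    def many(cls, objs, excluded=('outward_payments',)):
        # Same trick as many_init: pop expensive fields from the shared child.
        child = cls()
        for field in excluded:
            child.fields.pop(field)
        return [child.to_representation(obj) for obj in objs]


rows = DetailSketch.many([{'id': 1, 'booking_id': 'BK001'}])
# Only the cheap fields survive in list context.
```

In DRF the same effect can also be achieved by accepting a `fields` kwarg in `__init__`; popping inside `many_init` keeps the detail (single-object) representation untouched.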


class CustomerManualBookingSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
booking_id = serializers.CharField(read_only=True)
shipment_date = serializers.DateField(format=DATE_FORMAT)
from_city = serializers.CharField(max_length=50)
to_city = serializers.CharField(max_length=50)
lorry_number = serializers.CharField(max_length=15, min_length=7)
    @classmethod
    def many_init(cls, *args, **kwargs):
        # No fields need to be excluded in list context for this serializer.
        kwargs['child'] = cls()
        return serializers.ListSerializer(*args, **kwargs)


class ConnectManualBookingSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
booking_id = serializers.CharField(max_length=35, required=True,
validators=[UniqueValidator(queryset=ManualBooking.objects.all())])
shipment_date = serializers.DateField(format=DATE_FORMAT, input_formats=[DATE_FORMAT, ISO_8601])
charged_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=True)
supplier_charged_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=True)
party_rate = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0, required=False)
supplier_rate = serializers.IntegerField(
allow_null=True, max_value=2147483647, min_value=0, required=True)
loading_charge = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
unloading_charge = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
detention_charge = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
additional_charges_for_company = serializers.IntegerField(allow_null=True,
label='Additional Charges/Deductions for Company (+/-)',
max_value=2147483647, min_value=0,
required=False)
remarks_about_additional_charges = serializers.CharField(allow_null=True, required=False)
additional_charges_for_owner = serializers.IntegerField(allow_null=True, max_value=2147483647,
min_value=0, required=False)
note_for_additional_owner_charges = serializers.CharField(allow_null=True, required=False)
commission = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0, required=False)
lr_cost = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0, required=False)
deduction_for_advance = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
deduction_for_balance = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
other_deduction = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
remarks_about_deduction = serializers.CharField(allow_null=True, required=False)
deductions_for_company = serializers.IntegerField(allow_null=True, max_value=2147483647, min_value=0,
required=False)
pod_status = serializers.ChoiceField(
allow_null=True, choices=(
('pending', 'Pending'), ('unverified', 'Unverified'), ('rejected', 'Rejected'), ('completed', 'Delivered')),
required=False
)
booking_status = serializers.ChoiceField(choices=(
('confirmed', 'Confirmed'), ('delivered', 'Delivered'), ('closed', 'Closed'), ('cancelled', 'Cancelled')),
required=False)
source_office_data = serializers.SerializerMethodField()
destination_office_data = serializers.SerializerMethodField()
customer_placed_order_data = serializers.SerializerMethodField()
customer_to_be_billed_to_data = serializers.SerializerMethodField()
supplier_data = serializers.SerializerMethodField()
owner_data = serializers.SerializerMethodField()
driver_data = serializers.SerializerMethodField()
from_city_fk_data = serializers.SerializerMethodField()
to_city_fk_data = serializers.SerializerMethodField()
vehicle_data = serializers.SerializerMethodField()
vehicle_category_data = serializers.SerializerMethodField()
lr_numbers = serializers.SerializerMethodField()
inward_payments = serializers.SerializerMethodField()
outward_payments = serializers.SerializerMethodField()
invoices = serializers.SerializerMethodField()
pod_data = serializers.SerializerMethodField()
supplier_freight = serializers.SerializerMethodField()
customer_freight = serializers.SerializerMethodField()
status_color_code = serializers.SerializerMethodField()
documents = serializers.SerializerMethodField()
outward_amount = serializers.SerializerMethodField()
inward_amount = serializers.SerializerMethodField()
supplier_amount = serializers.SerializerMethodField()
customer_amount = serializers.SerializerMethodField()
amount_received_from_customer = serializers.SerializerMethodField()
amount_paid_to_supplier = serializers.SerializerMethodField()
balance_for_customer = serializers.SerializerMethodField()
balance_for_supplier = serializers.SerializerMethodField()
tds_amount_customer = serializers.SerializerMethodField()
debit_amount_supplier = serializers.SerializerMethodField()
credit_amount_supplier = serializers.SerializerMethodField()
debit_amount_customer = serializers.SerializerMethodField()
credit_amount_customer = serializers.SerializerMethodField()
credit_note_customer = serializers.SerializerMethodField()
credit_note_supplier = serializers.SerializerMethodField()
debit_note_customer = serializers.SerializerMethodField()
debit_note_supplier = serializers.SerializerMethodField()
credit_note_for_direct_advance = serializers.SerializerMethodField()
@classmethod
def many_init(cls, *args, **kwargs):
kwargs['child'] = cls()
excluded_fields = [
'inward_payments', 'outward_payments', 'credit_amount_customer', 'debit_amount_customer',
'credit_amount_supplier', 'debit_amount_supplier', 'tds_amount_customer',
'balance_for_supplier', 'balance_for_customer', 'amount_paid_to_supplier',
'amount_received_from_customer', 'customer_amount', 'supplier_amount', 'inward_amount',
'outward_amount', 'documents', 'status_color_code', 'customer_freight', 'supplier_freight',
'credit_note_customer', 'credit_note_supplier', 'debit_note_customer',
'debit_note_supplier', 'credit_note_for_direct_advance'
]
for field in excluded_fields:
kwargs['child'].fields.pop(field)
return serializers.ListSerializer(*args, **kwargs)
def get_credit_note_for_direct_advance(self, instance):
return CreditNoteCustomerDirectAdvanceSerializer(many=True,
instance=instance.creditnotecustomerdirectadvance_set.all()).data
def get_credit_note_customer(self, instance):
return CreditNoteCustomerSerializer(many=True, instance=instance.creditnotecustomer_set.all()).data
def get_credit_note_supplier(self, instance):
return CreditNoteSupplierSerializer(many=True, instance=instance.creditnotesupplier_set.all()).data
def get_debit_note_customer(self, instance):
return DebitNoteCustomerSerializer(many=True, instance=instance.debitnotecustomer_set.all()).data
def get_debit_note_supplier(self, instance):
return DebitNoteSupplierSerializer(many=True, instance=instance.debitnotesupplier_set.all()).data
def get_excess_payment_paid_to_supplier(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
supplier_excess_amount, supplier_excess_amount_msg = access_payment_paid_to_supplier(
supplier=instance.accounting_supplier)
return {'supplier_excess_amount': supplier_excess_amount,
'supplier_excess_amount_msg': supplier_excess_amount_msg}
return {'supplier_excess_amount': 0,
'supplier_excess_amount_msg': None}
def get_debit_amount_to_be_adjusted(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
debit_amount = debit_amount_to_be_adjusted(supplier=instance.accounting_supplier)
return {'debit_amount_to_be_adjusted': debit_amount}
return {'debit_amount_to_be_adjusted': 0}
def get_refundable_paid_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.refundable_paid_amount
return None
def get_outward_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.outward_amount
return None
def get_inward_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.inward_amount
return None
def get_credit_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.credit_amount_customer
return None
def get_debit_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.debit_amount_customer
return None
def get_credit_amount_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.credit_amount_supplier
return None
def get_debit_amount_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.debit_amount_supplier
return None
def get_tds_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.tds_amount_customer
return None
def get_balance_for_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.balance_for_supplier
return None
def get_balance_for_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.balance_for_customer
return None
def get_supplier_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.supplier_amount
return None
def get_customer_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.customer_amount
return None
def get_amount_received_from_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.amount_received_from_customer
return None
def get_amount_paid_to_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.amount_paid_to_supplier
return None
def get_documents(self, instance):
if isinstance(instance, ManualBooking):
return get_booking_images(instance)
return []
def get_status_color_code(self, instance):
if isinstance(instance.booking_status_color, BookingStatusColor) and instance.booking_status_color.color_code:
return instance.booking_status_color.color_code
return '#000000'
def get_customer_freight(self, instance):
return instance.customer_freight
def get_supplier_freight(self, instance):
return instance.supplier_freight
def get_pod_data(self, instance):
from restapi.serializers.file_upload import BasicPODFileSerializer
return BasicPODFileSerializer(instance.podfile_set.all(), many=True).data
def get_inward_payments(self, instance):
return InWardPaymentSerializer(InWardPayment.objects.filter(booking_id=instance), many=True).data
def get_outward_payments(self, instance):
return OutWardPaymentSerializer(OutWardPayment.objects.filter(booking_id=instance), many=True).data
def get_invoices(self, instance):
return InvoiceSerializer(Invoice.objects.filter(bookings=instance).distinct(), many=True).data
def get_lr_numbers(self, instance):
return [{"id": lr.id, "lr_number": lr.lr_number} for lr in instance.lr_numbers.all()]
@staticmethod
def get_source_office_data(instance):
if isinstance(instance.source_office, AahoOffice):
return {'id': instance.source_office.id, 'branch_name': instance.source_office.branch_name}
return {'id': -1, 'branch_name': None}
@staticmethod
def get_destination_office_data(instance):
if isinstance(instance.destination_office, AahoOffice):
return {'id': instance.destination_office.id, 'branch_name': instance.destination_office.branch_name}
return {'id': -1, 'branch_name': None}
@staticmethod
def get_customer_placed_order_data(instance):
if isinstance(instance.company, Sme):
return {'id': instance.company.id, 'name': instance.company.get_name(),
'code': instance.company.company_code, 'gstin': instance.company.gstin}
return {'id': None, 'name': None, 'code': None, 'gstin': None}
@staticmethod
def get_customer_to_be_billed_to_data(obj):
if isinstance(obj.customer_to_be_billed_to, Sme):
return {'id': obj.customer_to_be_billed_to.id, 'name': obj.customer_to_be_billed_to.get_name(),
'code': obj.customer_to_be_billed_to.company_code, 'gstin': obj.customer_to_be_billed_to.gstin,
'address': obj.customer_to_be_billed_to.customer_address, 'pin': obj.customer_to_be_billed_to.pin,
'city': {
'name': obj.customer_to_be_billed_to.city.name if obj.customer_to_be_billed_to.city else None,
'id': obj.customer_to_be_billed_to.city.id if obj.customer_to_be_billed_to.city else -1},
'credit_period': obj.customer_to_be_billed_to.credit_period}
return {'id': -1, 'name': None, 'pin': None, 'code': None, 'gstin': None, 'address': None,
'city': {'name': None, 'id': -1}, 'credit_period': None}
@staticmethod
def get_supplier_data(instance):
if isinstance(instance.booking_supplier, Supplier):
return {'id': instance.booking_supplier.id, 'name': instance.booking_supplier.name,
'phone': instance.booking_supplier.phone, 'code': instance.booking_supplier.code}
return {'id': -1, 'name': None, 'phone': None, 'code': None}
@staticmethod
def get_owner_data(instance):
if isinstance(instance.owner_supplier, Supplier):
return {'id': instance.owner_supplier.id, 'name': instance.owner_supplier.name,
'phone': instance.owner_supplier.phone}
return {'id': -1, 'name': None, 'phone': None}
@staticmethod
def get_driver_data(instance):
if isinstance(instance.driver_supplier, Driver):
return {'id': instance.driver_supplier.id, 'name': instance.driver_supplier.name,
'phone': instance.driver_supplier.phone}
return {'id': -1, 'name': None, 'phone': None}
@staticmethod
def get_consignor_city_fk_data(instance):
if isinstance(instance.consignor_city_fk, City):
return {'id': instance.consignor_city_fk.id, 'name': instance.consignor_city_fk.name,
'code': instance.consignor_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_consignee_city_fk_data(instance):
if isinstance(instance.consignee_city_fk, City):
return {'id': instance.consignee_city_fk.id, 'name': instance.consignee_city_fk.name,
'code': instance.consignee_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_from_city_fk_data(instance):
if isinstance(instance.from_city_fk, City):
return {'id': instance.from_city_fk.id, 'name': instance.from_city_fk.name,
'code': instance.from_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_to_city_fk_data(instance):
if isinstance(instance.to_city_fk, City):
return {'id': instance.to_city_fk.id, 'name': instance.to_city_fk.name,
'code': instance.to_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_vehicle_data(instance):
if isinstance(instance.supplier_vehicle, Vehicle):
vehicle = {
'id': instance.supplier_vehicle.id, 'vehicle_number': instance.supplier_vehicle.number(),
}
if isinstance(instance.supplier_vehicle.vehicle_type, VehicleCategory):
vehicle["vehicle_type"] = instance.supplier_vehicle.vehicle_type.vehicle_type
else:
vehicle["vehicle_type"] = None
return vehicle
return {'id': -1, 'vehicle_number': None, "vehicle_type": None}
@staticmethod
def get_vehicle_category_data(instance):
if isinstance(instance.vehicle_category, VehicleCategory):
return {'id': instance.vehicle_category.id, 'type': instance.vehicle_category.vehicle_category}
return {}
def validate_created_by(self, value):
if isinstance(self.instance, ManualBooking) and value:
raise serializers.ValidationError("Created by is immutable")
return value
    def validate_lorry_number(self, value):
        # Raw string avoids invalid-escape warnings; the pattern expects a
        # lowercase plate such as mh12ab1234.
        vehicle_number_pattern = re.compile(r'^[a-z]{2}\d{1,2}[a-z]{0,3}\d{4}$')
        if not vehicle_number_pattern.match(value):
            # Field-level validators should raise a message, not a dict keyed
            # by another field name.
            raise serializers.ValidationError("Vehicle Number is not valid")
        return value
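The pattern in `validate_lorry_number` matches only lowercase plates (two letters, one or two digits, up to three letters, four digits), so callers are expected to normalise case before validating. A quick check of its behaviour:

```python
import re

# Same pattern as validate_lorry_number above.
vehicle_number_pattern = re.compile(r'^[a-z]{2}\d{1,2}[a-z]{0,3}\d{4}$')

assert vehicle_number_pattern.match('mh12ab1234')      # typical plate
assert vehicle_number_pattern.match('dl1c4567')        # single-digit RTO code
assert not vehicle_number_pattern.match('MH12AB1234')  # uppercase is rejected
assert not vehicle_number_pattern.match('mh12ab123')   # only three final digits
```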


class TinyManualBookingSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
booking_id = serializers.CharField(read_only=True)
lr_numbers = serializers.SerializerMethodField()
def get_lr_numbers(self, instance):
return ', '.join(instance.lr_numbers.values_list('lr_number', flat=True))
def create(self, validated_data):
pass
def update(self, instance, validated_data):
pass


class ManualBookingSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
booking_id = serializers.CharField(max_length=35, required=True,
validators=[UniqueValidator(queryset=ManualBooking.objects.all())])
company_code = serializers.CharField(label='Company Code', max_length=3, min_length=3)
consignor_name = serializers.CharField(allow_null=True, max_length=100, required=False)
consignor_address = serializers.CharField(allow_null=True, max_length=255, required=False)
consignor_city = serializers.CharField(allow_null=True, max_length=35, required=False)
consignor_pin = serializers.CharField(allow_null=True, max_length=6, required=False)
consignor_phone = serializers.CharField(allow_null=True, max_length=20, required=False)
consignor_cst_tin = serializers.CharField(allow_null=True, max_length=35, required=False)
consignor_gstin = serializers.CharField(allow_null=True, min_length=15, max_length=15, required=False)
consignee_name = serializers.CharField(allow_null=True, max_length=100, required=False)
consignee_address = serializers.CharField(allow_null=True, max_length=400, required=False)
consignee_city = serializers.CharField(allow_null=True, max_length=35, required=False)
consignee_pin = serializers.CharField(allow_null=True, max_length=6, required=False)
consignee_phone = serializers.CharField(allow_null=True, max_length=20, required=False)
consignee_cst_tin = serializers.CharField(allow_null=True, max_length=50, required=False)
consignee_gstin = serializers.CharField(allow_null=True, max_length=50, required=False)
billing_type = serializers.ChoiceField(choices=(
('T.B.B.', 'T.B.B.'), ('To Pay', 'To Pay'), ('Paid', 'Paid'), ('contract', 'Contract')))
gst_liability = serializers.ChoiceField(allow_null=True, choices=(
('consignor', 'Consignor'), ('consignee', 'Consignee'), ('carrier', 'Carrier'), ('exempted', 'Exempted')))
liability_of_service_tax = serializers.CharField(allow_null=True, max_length=40, required=False)
shipment_date = serializers.DateField(format=DATE_FORMAT, input_formats=[DATE_FORMAT, ISO_8601])
delivery_datetime = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, ISO_8601])
from_city = serializers.CharField(max_length=50)
to_city = serializers.CharField(max_length=50)
lorry_number = serializers.CharField(max_length=15, min_length=7)
type_of_vehicle = serializers.CharField(allow_null=True, max_length=70, required=True)
road_permit_number = serializers.CharField(allow_null=True, max_length=255, required=False)
party_invoice_number = serializers.CharField(allow_null=True, max_length=255, required=False)
party_invoice_date = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, ISO_8601])
party_invoice_amount = serializers.CharField(allow_null=True, max_length=100, required=False)
number_of_package = serializers.CharField(allow_null=True, max_length=30, required=False)
material = serializers.CharField(allow_null=True, max_length=500, required=False)
loaded_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=False)
delivered_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=False)
charged_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=True)
supplier_charged_weight = serializers.DecimalField(allow_null=True, decimal_places=3, max_digits=12, required=True)
party_rate = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0, required=False)
supplier_rate = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=True)
is_insured = serializers.BooleanField(required=False)
insurance_provider = serializers.CharField(allow_null=True, max_length=200, required=False)
insurance_policy_number = serializers.CharField(allow_null=True, max_length=200, required=False)
insured_amount = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
insurance_date = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, ISO_8601])
insurance_risk = serializers.CharField(allow_null=True, max_length=200, required=False)
driver_name = serializers.CharField(max_length=255, required=True)
driver_phone = serializers.CharField(max_length=255, required=True)
driver_dl_number = serializers.CharField(allow_null=True, max_length=255, required=True)
driver_dl_validity = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, ISO_8601])
truck_broker_owner_name = serializers.CharField(allow_null=True, allow_blank=True, label='Truck Owner/Broker name',
max_length=100, required=False)
truck_broker_owner_phone = serializers.CharField(allow_null=True, allow_blank=True,
label='Truck Owner/Broker Phone Number',
max_length=25)
truck_owner_name = serializers.CharField(allow_null=True, allow_blank=True, label='Truck Owner name',
max_length=100, required=False)
truck_owner_phone = serializers.CharField(allow_null=True, allow_blank=True, label='Truck Owner Phone Number',
max_length=25,
required=False)
loading_points = serializers.CharField(allow_null=True, max_length=255, required=False)
unloading_points = serializers.CharField(allow_null=True, max_length=255, required=False)
total_in_ward_amount = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
total_out_ward_amount = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
total_amount_to_company = serializers.IntegerField(allow_null=True, max_value=1000000, required=False)
advance_amount_from_company = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
refund_amount = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
total_amount_to_owner = serializers.IntegerField(allow_null=True, max_value=1000000, required=False)
loading_charge = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
unloading_charge = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
detention_charge = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
additional_charges_for_company = serializers.IntegerField(allow_null=True,
label='Additional Charges/Deductions for Company (+/-)',
max_value=1000000, min_value=0,
required=False)
remarks_about_additional_charges = serializers.CharField(allow_null=True, required=False)
additional_charges_for_owner = serializers.IntegerField(allow_null=True, max_value=1000000,
min_value=0, required=False)
note_for_additional_owner_charges = serializers.CharField(allow_null=True, required=False)
commission = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0, required=False)
lr_cost = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0, required=False)
deduction_for_advance = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
deduction_for_balance = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
other_deduction = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
remarks_about_deduction = serializers.CharField(allow_null=True, required=False)
deductions_for_company = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
to_be_billed_to = serializers.CharField(allow_null=True, max_length=200, required=False)
invoice_number = serializers.CharField(allow_null=True, label='Invoice Number', max_length=50,
required=False)
billing_address = serializers.CharField(allow_null=True, max_length=300, required=False)
billing_contact_number = serializers.CharField(allow_null=True, max_length=50, required=False)
billing_invoice_date = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, ISO_8601])
invoice_remarks_for_additional_charges = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
invoice_remarks_for_deduction_discount = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
tds_deducted_amount = serializers.IntegerField(allow_null=True, max_value=1000000, min_value=0,
required=False)
pod_date = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT,
input_formats=[DATE_FORMAT, '%Y-%m-%d', ISO_8601])
pod_status = serializers.ChoiceField(
allow_null=True, choices=(
('pending', 'Pending'), ('unverified', 'Unverified'), ('rejected', 'Rejected'), ('completed', 'Delivered'),
('not_required', 'Not Required')),
required=False
)
outward_payment_status = serializers.ChoiceField(allow_null=True, choices=(
('no_payment_made', 'Nil'), ('partial', 'Partial'), ('complete', 'Full'), ('excess', 'Excess')), required=False)
inward_payment_status = serializers.ChoiceField(allow_null=True, choices=(
('no_payment', 'Nil'), ('partial_received', 'Partial'), ('full_received', 'Full'), ('excess', 'Excess')),
required=False)
invoice_status = serializers.ChoiceField(allow_null=True, choices=(
('no_invoice', 'NoInvoice'), ('invoice_raised', 'InvoiceRaised'), ('invoice_sent', 'InvoiceSent'),
('invoice_confirmed', 'InvoiceConfirmed')),
required=False)
    comments = serializers.CharField(allow_null=True, required=False)
remarks_advance_from_company = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
tds_certificate_status = serializers.ChoiceField(allow_null=True, choices=(('y', 'Yes'), ('n', 'No')),
required=False)
booking_status = serializers.ChoiceField(choices=(
('confirmed', 'Confirmed'), ('delivered', 'Delivered'), ('closed', 'Closed'), ('cancelled', 'Cancelled')),
required=False)
is_advance = serializers.ChoiceField(allow_null=True, choices=(('no', 'No'), ('yes', 'Yes')), required=False)
is_print_payment_mode_instruction = serializers.BooleanField(required=False)
created_on = serializers.DateTimeField(read_only=True)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
source_office = serializers.PrimaryKeyRelatedField(write_only=True, queryset=AahoOffice.objects.all())
source_office_data = serializers.SerializerMethodField()
destination_office = serializers.PrimaryKeyRelatedField(
write_only=True, queryset=AahoOffice.objects.exclude(deleted=True))
destination_office_data = serializers.SerializerMethodField()
    company = serializers.PrimaryKeyRelatedField(
        write_only=True, label='Customer who has placed order', queryset=Sme.objects.all())
customer_placed_order_data = serializers.SerializerMethodField()
customer_to_be_billed_to = serializers.PrimaryKeyRelatedField(
write_only=True, allow_null=True, label='Customer who will make payment',
queryset=Sme.objects.all(), required=True
)
customer_to_be_billed_to_data = serializers.SerializerMethodField()
supplier = serializers.PrimaryKeyRelatedField(source='booking_supplier', write_only=True, allow_null=True,
required=False,
queryset=Supplier.objects.all())
accounting_supplier = serializers.PrimaryKeyRelatedField(write_only=True, allow_null=True, required=False,
queryset=Supplier.objects.all())
owner_supplier = serializers.PrimaryKeyRelatedField(write_only=True, allow_null=True, required=False,
queryset=Supplier.objects.all())
supplier_data = serializers.SerializerMethodField()
accounting_supplier_data = serializers.SerializerMethodField()
owner = serializers.PrimaryKeyRelatedField(
write_only=True, allow_null=True, queryset=Owner.objects.all(), required=False)
owner_data = serializers.SerializerMethodField()
driver_supplier = serializers.PrimaryKeyRelatedField(
write_only=True, required=False, allow_null=True, label='Driver Name', queryset=Driver.objects.all())
driver = serializers.PrimaryKeyRelatedField(
write_only=True, required=False, allow_null=True, label='Driver Name', queryset=Driver.objects.all())
driver_data = serializers.SerializerMethodField()
consignor_city_fk = serializers.PrimaryKeyRelatedField(
write_only=True, allow_null=True, queryset=City.objects.all(), required=False)
consignor_city_fk_data = serializers.SerializerMethodField()
consignee_city_fk = serializers.PrimaryKeyRelatedField(
write_only=True, allow_null=True, queryset=City.objects.all(), required=False)
consignee_city_fk_data = serializers.SerializerMethodField()
from_city_fk = serializers.PrimaryKeyRelatedField(write_only=True, queryset=City.objects.all())
from_city_fk_data = serializers.SerializerMethodField()
to_city_fk = serializers.PrimaryKeyRelatedField(write_only=True, queryset=City.objects.all())
to_city_fk_data = serializers.SerializerMethodField()
vehicle = serializers.PrimaryKeyRelatedField(write_only=True, source='supplier_vehicle',
queryset=Vehicle.objects.all())
vehicle_data = serializers.SerializerMethodField()
vehicle_category = serializers.PrimaryKeyRelatedField(
write_only=True, allow_null=True, queryset=VehicleCategory.objects.all(), required=False)
vehicle_category_data = serializers.SerializerMethodField()
invoice_summary = serializers.PrimaryKeyRelatedField(
allow_null=True, queryset=InvoiceSummary.objects.all(), required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
lr_numbers = serializers.SerializerMethodField()
inward_payments = serializers.SerializerMethodField()
outward_payments = serializers.SerializerMethodField()
invoices = serializers.SerializerMethodField()
pod_data = serializers.SerializerMethodField()
supplier_freight = serializers.SerializerMethodField()
customer_freight = serializers.SerializerMethodField()
status_color_code = serializers.SerializerMethodField()
documents = serializers.SerializerMethodField()
bank_accounts = serializers.SerializerMethodField()
outward_amount = serializers.SerializerMethodField()
inward_amount = serializers.SerializerMethodField()
supplier_amount = serializers.SerializerMethodField()
customer_amount = serializers.SerializerMethodField()
amount_received_from_customer = serializers.SerializerMethodField()
amount_paid_to_supplier = serializers.SerializerMethodField()
balance_for_customer = serializers.SerializerMethodField()
balance_for_supplier = serializers.SerializerMethodField()
tds_amount_customer = serializers.SerializerMethodField()
debit_amount_supplier = serializers.SerializerMethodField()
credit_amount_supplier = serializers.SerializerMethodField()
debit_amount_customer = serializers.SerializerMethodField()
credit_amount_customer = serializers.SerializerMethodField()
refundable_paid_amount = serializers.SerializerMethodField()
credit_note_customer = serializers.SerializerMethodField()
credit_note_supplier = serializers.SerializerMethodField()
debit_note_customer = serializers.SerializerMethodField()
debit_note_supplier = serializers.SerializerMethodField()
credit_note_for_direct_advance = serializers.SerializerMethodField()
excess_payment_paid_to_supplier = serializers.SerializerMethodField()
debit_amount_to_be_adjusted = serializers.SerializerMethodField()
valid_s3_lr_doc_url = serializers.SerializerMethodField()
decide_account_supplier = serializers.SerializerMethodField()
def validate_consignor_gstin(self, value):
if value and not validate_gstin(value):
raise serializers.ValidationError("Not a valid gstin")
return value
def validate_consignee_gstin(self, value):
if value and not validate_gstin(value):
raise serializers.ValidationError("Not a valid gstin")
return value
@classmethod
def many_init(cls, *args, **kwargs):
kwargs['child'] = cls()
excluded_fields = [
'credit_note_for_direct_advance', 'debit_note_supplier', 'debit_note_customer', 'credit_note_supplier',
'credit_note_customer', 'refundable_paid_amount', 'credit_amount_customer', 'debit_amount_customer',
'credit_amount_supplier', 'debit_amount_supplier', 'tds_amount_customer',
'amount_paid_to_supplier', 'amount_received_from_customer', 'customer_amount',
'supplier_amount', 'inward_amount', 'outward_amount', 'documents', 'bank_accounts', 'customer_freight',
'supplier_freight',
'pod_data', 'invoices', 'outward_payments', 'inward_payments', 'changed_by', 'created_by',
'invoice_summary', 'vehicle_category', 'vehicle',
'to_city_fk', 'from_city_fk', 'consignee_city_fk_data',
'consignee_city_fk', 'consignor_city_fk_data', 'consignor_city_fk', 'driver_data', 'driver', 'owner_data',
'owner', 'supplier', 'destination_office', 'is_advance',
'is_print_payment_mode_instruction', 'customer_to_be_billed_to', 'company', 'source_office', 'created_on',
'updated_on', 'deleted', 'deleted_on', 'to_be_billed_to', 'billing_address', 'billing_contact_number',
'invoice_remarks_for_additional_charges', 'invoice_remarks_for_deduction_discount', 'pod_date', 'comments',
'remarks_advance_from_company', 'tds_certificate_status', 'booking_status', 'loading_points',
'unloading_points', 'advance_amount_from_company', 'loading_charge', 'unloading_charge', 'detention_charge',
'additional_charges_for_company', 'remarks_about_additional_charges', 'additional_charges_for_owner',
'note_for_additional_owner_charges', 'commission', 'lr_cost', 'deduction_for_advance',
'deduction_for_balance', 'other_deduction', 'remarks_about_deduction', 'deductions_for_company',
'insurance_provider', 'insurance_policy_number', 'insured_amount', 'insurance_date', 'insurance_risk',
'driver_name', 'driver_phone', 'driver_dl_number', 'driver_dl_validity', 'truck_broker_owner_phone',
'truck_owner_name', 'truck_owner_phone', 'is_insured', 'billing_type', 'gst_liability',
'liability_of_service_tax', 'type_of_vehicle', 'road_permit_number', 'party_invoice_number',
'party_invoice_date', 'party_invoice_amount', 'number_of_package', 'material', 'loaded_weight',
'delivered_weight', 'company_code', 'consignor_name', 'consignor_address', 'consignor_city',
'consignor_pin', 'consignor_phone', 'consignor_cst_tin', 'consignor_gstin', 'consignee_name',
'consignee_address', 'consignee_city', 'consignee_pin', 'consignee_phone', 'valid_s3_lr_doc_url',
'consignee_cst_tin', 'consignee_gstin', 'excess_payment_paid_to_supplier', 'decide_account_supplier']
        for field in excluded_fields:
            # pop with a default so a renamed/removed field does not raise KeyError here
            kwargs['child'].fields.pop(field, None)
return serializers.ListSerializer(*args, **kwargs)
def get_bank_accounts(self, instance):
return get_booking_bank_accounts(instance)
    def get_decide_account_supplier(self, instance):
        def supplier_info(supplier):
            # Shared shape for the booking_supplier and owner_supplier entries
            is_supplier = isinstance(supplier, Supplier)
            return {
                'supplier_data': {'id': supplier.id, 'name': supplier.name, 'phone': supplier.phone,
                                  'code': supplier.code} if is_supplier else {
                    'id': -1, 'name': None, 'phone': None, 'code': None},
                'valid_pan': is_supplier and supplier.supplier_files.filter(document_category='PAN').exists(),
                'valid_dec': is_supplier and supplier.supplier_files.filter(document_category='DEC').exists(),
            }

        return {
            'status': 'success' if isinstance(instance, ManualBooking) else 'error',
            'booking_supplier': supplier_info(instance.booking_supplier),
            'owner_supplier': supplier_info(instance.owner_supplier),
        }
    def get_valid_s3_lr_doc_url(self, instance):
        if isinstance(instance, ManualBooking):
            # Evaluate the queryset once instead of once for exists() and again for last()
            upload = instance.manualbookings3upload_set.filter(is_valid=True).exclude(s3_upload=None).last()
            if upload:
                return upload.s3_upload.public_url()
        return None
def get_excess_payment_paid_to_supplier(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
supplier_excess_amount, supplier_excess_amount_msg = access_payment_paid_to_supplier(
supplier=instance.accounting_supplier)
return {'supplier_excess_amount': supplier_excess_amount,
'supplier_excess_amount_msg': supplier_excess_amount_msg}
return {'supplier_excess_amount': 0,
'supplier_excess_amount_msg': None}
def get_debit_amount_to_be_adjusted(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
debit_amount = debit_amount_to_be_adjusted(supplier=instance.accounting_supplier)
return {'debit_amount_to_be_adjusted': debit_amount}
return {'debit_amount_to_be_adjusted': 0}
def get_credit_note_for_direct_advance(self, instance):
return CreditNoteCustomerDirectAdvanceSerializer(many=True,
instance=instance.creditnotecustomerdirectadvance_set.all()).data
def get_credit_note_customer(self, instance):
return CreditNoteCustomerSerializer(many=True, instance=instance.creditnotecustomer_set.all()).data
def get_credit_note_supplier(self, instance):
return CreditNoteSupplierSerializer(many=True, instance=instance.creditnotesupplier_set.all()).data
def get_debit_note_customer(self, instance):
return DebitNoteCustomerSerializer(many=True, instance=instance.debitnotecustomer_set.all()).data
def get_debit_note_supplier(self, instance):
return DebitNoteSupplierSerializer(many=True, instance=instance.debitnotesupplier_set.all()).data
def get_refundable_paid_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.refundable_paid_amount
return None
def get_outward_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.outward_amount
return None
def get_inward_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.inward_amount
return None
def get_credit_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.credit_amount_customer
return None
def get_debit_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.debit_amount_customer
return None
def get_credit_amount_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.credit_amount_supplier
return None
def get_debit_amount_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.debit_amount_supplier
return None
def get_tds_amount_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.tds_amount_customer
return None
def get_balance_for_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.balance_for_supplier
return None
def get_balance_for_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.balance_for_customer
return None
def get_supplier_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.supplier_amount
return None
def get_customer_amount(self, instance):
if isinstance(instance, ManualBooking):
return instance.customer_amount
return None
def get_amount_received_from_customer(self, instance):
if isinstance(instance, ManualBooking):
return instance.amount_received_from_customer
return None
def get_amount_paid_to_supplier(self, instance):
if isinstance(instance, ManualBooking):
return instance.amount_paid_to_supplier
return None
def get_documents(self, instance):
if isinstance(instance, ManualBooking):
return get_booking_images(instance)
return []
def get_status_color_code(self, instance):
if isinstance(instance.booking_status_color, BookingStatusColor) and instance.booking_status_color.color_code:
return instance.booking_status_color.color_code
return '#000000'
def get_customer_freight(self, instance):
return instance.customer_freight
def get_supplier_freight(self, instance):
return instance.supplier_freight
def get_pod_data(self, instance):
from restapi.serializers.file_upload import BasicPODFileSerializer
return BasicPODFileSerializer(instance.podfile_set.all(), many=True).data
def get_inward_payments(self, instance):
return InWardPaymentSerializer(InWardPayment.objects.filter(booking_id=instance), many=True).data
def get_outward_payments(self, instance):
return OutWardPaymentSerializer(OutWardPayment.objects.filter(booking_id=instance), many=True).data
def get_invoices(self, instance):
return InvoiceSerializer(Invoice.objects.filter(bookings=instance), many=True).data
def get_lr_numbers(self, instance):
return '\n'.join(instance.lr_numbers.values_list('lr_number', flat=True))
@staticmethod
def get_source_office_data(instance):
if isinstance(instance.source_office, AahoOffice):
return {'id': instance.source_office.id, 'branch_name': instance.source_office.branch_name}
return {'id': -1, 'branch_name': None}
@staticmethod
def get_destination_office_data(instance):
if isinstance(instance.destination_office, AahoOffice):
return {'id': instance.destination_office.id, 'branch_name': instance.destination_office.branch_name}
return {'id': -1, 'branch_name': None}
@staticmethod
def get_customer_placed_order_data(instance):
if isinstance(instance.company, Sme):
return {'id': instance.company.id, 'name': instance.company.get_name(),
'code': instance.company.company_code, 'gstin': instance.company.gstin}
return {'id': None, 'name': None, 'code': None, 'gstin': None}
@staticmethod
def get_customer_to_be_billed_to_data(obj):
if isinstance(obj.customer_to_be_billed_to, Sme):
return {'id': obj.customer_to_be_billed_to.id, 'name': obj.customer_to_be_billed_to.get_name(),
'code': obj.customer_to_be_billed_to.company_code, 'gstin': obj.customer_to_be_billed_to.gstin,
'address': obj.customer_to_be_billed_to.customer_address, 'pin': obj.customer_to_be_billed_to.pin,
'city': {
'name': obj.customer_to_be_billed_to.city.name if obj.customer_to_be_billed_to.city else None,
'id': obj.customer_to_be_billed_to.city.id if obj.customer_to_be_billed_to.city else -1}}
return {'id': -1, 'name': None, 'pin': None, 'code': None, 'gstin': None, 'address': None,
'city': {'name': None, 'id': -1}}
@staticmethod
def get_supplier_data(instance):
if isinstance(instance.booking_supplier, Supplier):
return {'id': instance.booking_supplier.id, 'name': instance.booking_supplier.name,
'phone': instance.booking_supplier.phone, 'code': instance.booking_supplier.code,
'name_code': '{}, {}'.format(instance.booking_supplier.name, instance.booking_supplier.code)}
        return {'id': -1, 'name': None, 'phone': None, 'code': None, 'name_code': None}
    @staticmethod
    def get_accounting_supplier_data(instance):
if isinstance(instance.accounting_supplier, Supplier):
return {'id': instance.accounting_supplier.id, 'name': instance.accounting_supplier.name,
'phone': instance.accounting_supplier.phone, 'code': instance.accounting_supplier.code,
'name_code': '{}, {}'.format(instance.accounting_supplier.name, instance.accounting_supplier.code)}
return {'id': -1, 'name': None, 'phone': None, 'code': None, 'name_code': None}
@staticmethod
def get_owner_data(instance):
if isinstance(instance.owner_supplier, Supplier):
return {'id': instance.owner_supplier.id, 'name': instance.owner_supplier.name,
'phone': instance.owner_supplier.phone}
return {'id': -1, 'name': None, 'phone': None}
@staticmethod
def get_driver_data(instance):
if isinstance(instance.driver_supplier, Driver):
return {'id': instance.driver_supplier.id, 'name': instance.driver_supplier.name,
'phone': instance.driver_supplier.phone}
return {'id': -1, 'name': None, 'phone': None}
@staticmethod
def get_consignor_city_fk_data(instance):
if isinstance(instance.consignor_city_fk, City):
return {'id': instance.consignor_city_fk.id, 'name': instance.consignor_city_fk.name,
'code': instance.consignor_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_consignee_city_fk_data(instance):
if isinstance(instance.consignee_city_fk, City):
return {'id': instance.consignee_city_fk.id, 'name': instance.consignee_city_fk.name,
'code': instance.consignee_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_from_city_fk_data(instance):
if isinstance(instance.from_city_fk, City):
return {'id': instance.from_city_fk.id, 'name': instance.from_city_fk.name,
'code': instance.from_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_to_city_fk_data(instance):
if isinstance(instance.to_city_fk, City):
return {'id': instance.to_city_fk.id, 'name': instance.to_city_fk.name,
'code': instance.to_city_fk.code}
return {'id': -1, 'name': None, 'code': None}
@staticmethod
def get_vehicle_data(instance):
if isinstance(instance.supplier_vehicle, Vehicle):
vehicle = {
'id': instance.supplier_vehicle.id, 'vehicle_number': instance.supplier_vehicle.number(),
}
if isinstance(instance.supplier_vehicle.vehicle_type, VehicleCategory):
vehicle["vehicle_type"] = instance.supplier_vehicle.vehicle_type.vehicle_type
else:
vehicle["vehicle_type"] = None
return vehicle
return {'id': -1, 'vehicle_number': None, "vehicle_type": None}
@staticmethod
def get_vehicle_category_data(instance):
if isinstance(instance.vehicle_category, VehicleCategory):
return {'id': instance.vehicle_category.id, 'type': instance.vehicle_category.vehicle_category}
return {}
def validate_created_by(self, value):
if isinstance(self.instance, ManualBooking) and value:
raise serializers.ValidationError("Created by is immutable")
return value
# def validate_lorry_number(self, value):
# if not validate_vehicle_number(value):
# raise serializers.ValidationError({"vehicle_number": "Vehicle Number is not valid"})
# return value
def create(self, validated_data):
instance = ManualBooking.objects.create(**validated_data)
return instance
    def update(self, instance, validated_data):
        ManualBooking.objects.filter(id=instance.id).update(**validated_data)
        # QuerySet.update() bypasses save(); re-fetch and save so model save() hooks run
        booking = ManualBooking.objects.get(id=instance.id)
        booking.save()
        return booking
@staticmethod
def create_booking_status_mapping(data):
manual_booking = ManualBooking.objects.get(id=data['mb_id'])
try:
booking_status = BookingStatuses.objects.get(status=data['status'])
except BookingStatuses.DoesNotExist:
return {'id': None, 'booking_status_chain_id': None}
try:
booking_status_chain = BookingStatusChain.objects.get(booking_status=booking_status)
except BookingStatusChain.DoesNotExist:
return {'id': None, 'booking_status_chain_id': None}
due_date = (timezone.now() + timedelta(minutes=booking_status_chain.booking_status.time_limit)).date()
booking_statuses_mapping = BookingStatusesMapping.objects.create(booking_status_chain=booking_status_chain,
manual_booking=manual_booking,
booking_stage='in_progress',
created_by=data['user'],
due_date=due_date)
return {'id': booking_statuses_mapping.id, 'booking_status_chain_id': booking_status_chain.id}
class LrNumberSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
lr_number = serializers.CharField(max_length=30, validators=[UniqueValidator(queryset=LrNumber.objects.all())])
datetime = serializers.DateTimeField(format=DATE_FORMAT)
pod_status = serializers.ChoiceField(allow_null=True, choices=(
('pending', 'Pending'), ('unverified', 'Unverified'), ('rejected', 'Rejected'), ('completed', 'Delivered')),
required=False)
created_on = serializers.DateTimeField(read_only=True)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
booking = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=ManualBooking.objects.all(), required=False)
source_office = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=AahoOffice.objects.all(),
required=False)
destination_office = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=AahoOffice.objects.all(),
required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
booking_id = serializers.SerializerMethodField()
s3_upload_url = serializers.SerializerMethodField()
def to_representation(self, instance):
# self.fields["booking"] = ManualBookingSerializer(read_only=True)
self.fields["source_office"] = AahoOfficeSerializer(read_only=True)
self.fields["destination_office"] = AahoOfficeSerializer(read_only=True)
return super().to_representation(instance=instance)
def validate_created_by(self, value):
if isinstance(self.instance, LrNumber) and value:
raise serializers.ValidationError("Created by is immutable")
return value
def get_booking_id(self, instance):
if isinstance(instance.booking, ManualBooking):
return instance.booking.booking_id
return None
    def get_s3_upload_url(self, instance):
        if isinstance(instance, LrNumber):
            # Evaluate the queryset once instead of once for exists() and again for last()
            upload = instance.lrs3upload_set.filter(is_valid=True).exclude(s3_upload=None).exclude(
                deleted=True).last()
            if upload:
                return upload.s3_upload.public_url()
        return None
def create(self, validated_data):
instance = LrNumber.objects.create(**validated_data)
return instance
def update(self, instance, validated_data):
LrNumber.objects.filter(id=instance.id).update(**validated_data)
return LrNumber.objects.get(id=instance.id)
class RejectedPODSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
remarks = serializers.CharField(max_length=500)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
created_on = serializers.DateTimeField(read_only=True)
updated_on = serializers.DateTimeField(read_only=True)
# created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
# changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
booking = serializers.PrimaryKeyRelatedField(queryset=ManualBooking.objects.all())
lr = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=LrNumber.objects.all())
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
def to_representation(self, instance):
# self.fields["booking"] = ManualBookingSerializer(read_only=True)
self.fields["lr"] = LrNumberSerializer(read_only=True)
return super().to_representation(instance=instance)
def validate_created_by(self, value):
if isinstance(self.instance, RejectedPOD) and value:
raise serializers.ValidationError("Created by is immutable")
return value
def create(self, validated_data):
instance = RejectedPOD.objects.create(**validated_data)
return instance
def update(self, instance, validated_data):
RejectedPOD.objects.filter(id=instance.id).update(**validated_data)
return RejectedPOD.objects.get(id=instance.id)
class BookingConsignorConsigneeSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
category = serializers.CharField(allow_null=True, max_length=20, required=False)
name = serializers.CharField(allow_null=True, max_length=255, required=False)
address = serializers.CharField(allow_null=True, max_length=255, required=False)
pin = serializers.CharField(allow_null=True, max_length=255, required=False)
phone = serializers.CharField(allow_null=True, max_length=255, required=False)
cst_tin = serializers.CharField(allow_null=True, max_length=255, required=False)
gstin = serializers.CharField(allow_null=True, max_length=15, required=False)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
created_on = serializers.DateTimeField(read_only=True)
updated_on = serializers.DateTimeField(read_only=True)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
booking = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=ManualBooking.objects.all(), required=False)
lr = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=LrNumber.objects.all(), required=False)
city = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=City.objects.all(), required=False)
def to_representation(self, instance):
self.fields["booking"] = ManualBookingSerializer(read_only=True)
self.fields["lr"] = LrNumberSerializer(read_only=True)
self.fields["city"] = CitySerializer(read_only=True)
return super().to_representation(instance=instance)
def validate_created_by(self, value):
if isinstance(self.instance, BookingConsignorConsignee) and value:
raise serializers.ValidationError("Created by is immutable")
return value
def create(self, validated_data):
instance = BookingConsignorConsignee.objects.create(**validated_data)
return instance
def update(self, instance, validated_data):
BookingConsignorConsignee.objects.filter(id=instance.id).update(**validated_data)
return BookingConsignorConsignee.objects.get(id=instance.id)

class BookingInsuranceSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    is_insured = serializers.BooleanField(required=False)
    insurance_provider = serializers.CharField(allow_null=True, max_length=200, required=False)
    insurance_policy_number = serializers.CharField(allow_null=True, max_length=200, required=False)
    insured_amount = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
    insurance_date = serializers.DateField(allow_null=True, required=False)
    insurance_risk = serializers.CharField(allow_null=True, max_length=200, required=False)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")

    def validate_created_by(self, value):
        if isinstance(self.instance, BookingInsurance) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def create(self, validated_data):
        return BookingInsurance.objects.create(**validated_data)

    def update(self, instance, validated_data):
        BookingInsurance.objects.filter(id=instance.id).update(**validated_data)
        return BookingInsurance.objects.get(id=instance.id)

class InWardPaymentSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    received_from = serializers.CharField(max_length=300)
    tds = serializers.DecimalField(decimal_places=2, max_digits=30)
    actual_amount = serializers.DecimalField(decimal_places=2, max_digits=30)
    expected_amount = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
    payment_mode = serializers.ChoiceField(choices=(
        ('cash', 'Cash'), ('cheque', 'Cheque'), ('neft', 'NEFT'), ('imps', 'IMPS'), ('rtgs', 'RTGS'),
        ('happay', 'Happay'),
        ('cash_deposit', 'Cash Deposit'), ('hdfc_internal_account', 'HDFC Internal Account')))
    trn = serializers.CharField(allow_null=True, max_length=200, required=False)
    remarks = serializers.CharField(allow_null=True, allow_blank=True, required=False)
    payment_date = serializers.DateField(input_formats=[DATE_FORMAT, ISO_8601, "%d/%m/%Y"], format=DATE_FORMAT)
    invoice_number = serializers.CharField(allow_null=True, max_length=300)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    booking_id = serializers.PrimaryKeyRelatedField(many=True, queryset=ManualBooking.objects.all())
    bookings = serializers.SerializerMethodField()
    lr_numbers = serializers.SerializerMethodField()
    pending_inward_id = serializers.SerializerMethodField()
    booking_data = serializers.SerializerMethodField()

    def validate_created_by(self, value):
        if isinstance(self.instance, InWardPayment) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def get_booking_data(self, instance):
        return [{'id': booking.id, 'booking_id': booking.booking_id,
                 'lr_number': ', '.join(booking.lr_numbers.values_list('lr_number', flat=True))}
                for booking in instance.booking_id.all()]

    def get_payment_mode_display(self, instance):
        return instance.get_payment_mode_display()

    def get_bookings(self, instance):
        return '\n'.join(instance.booking_id.values_list('booking_id', flat=True))

    def get_lr_numbers(self, instance):
        return '\n'.join('\n'.join(booking.lr_numbers.values_list('lr_number', flat=True))
                         for booking in instance.booking_id.all())

    def get_pending_inward_id(self, instance):
        if instance.pendinginwardpaymententry_set.exists():
            return instance.pendinginwardpaymententry_set.last().id
        return '-'

    def create(self, validated_data):
        bookings = validated_data.pop('booking_id', [])
        instance = InWardPayment.objects.create(**validated_data)
        for booking in bookings:
            instance.booking_id.add(booking)
            booking.save()
        return instance

    def update(self, instance, validated_data):
        bookings = []
        if "booking_id" in validated_data:
            bookings = validated_data.pop('booking_id')
            instance.booking_id.clear()
        InWardPayment.objects.filter(id=instance.id).update(**validated_data)
        for booking in bookings:
            instance.booking_id.add(booking)
        return InWardPayment.objects.get(id=instance.id)

class OutWardPaymentSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    paid_to = serializers.CharField(max_length=300)
    lorry_number = serializers.CharField(allow_null=True, max_length=30, required=False)
    utr = serializers.CharField(allow_null=True, min_length=16, max_length=30, required=False)
    actual_amount = serializers.DecimalField(decimal_places=2, max_digits=30, required=True)
    tds = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=30, required=False)
    expected_amount = serializers.DecimalField(write_only=True, allow_null=True, decimal_places=2, max_digits=30,
                                               required=False)
    payment_mode = serializers.ChoiceField(write_only=True, choices=(
        ('cash', 'Cash'), ('cheque', 'Cheque'), ('neft', 'NEFT'), ('imps', 'IMPS'), ('rtgs', 'RTGS'),
        ('happay', 'Happay'), ('fuel_card', 'Fuel Card'), ('hdfc_internal_account', 'HDFC Internal Account'),
        ('adjustment', 'Adjustment')))
    remarks = serializers.CharField(allow_null=True, required=False, style={'base_template': 'textarea.html'})
    payment_date = serializers.DateField(format=DATE_FORMAT,
                                         input_formats=[DATE_FORMAT, ISO_8601, "%d/%m/%Y", '%Y-%m-%d'])
    invoice_number = serializers.CharField(write_only=True, allow_null=True, max_length=300, required=False)
    status = serializers.ChoiceField(
        allow_null=True, choices=(('paid', 'Paid'), ('unpaid', 'Not Paid'), ('reconciled', 'Reconciled')),
        required=False)
    is_sms_supplier = serializers.BooleanField(required=False)
    is_refund_amount = serializers.BooleanField(required=False)
    created_on = serializers.DateTimeField(read_only=True, format=DATETIME_FORMAT)
    updated_on = serializers.DateTimeField(read_only=True, format=DATETIME_FORMAT)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False,
                                           input_formats=[DATE_FORMAT, ISO_8601, "%d/%m/%Y", '%Y-%m-%d'])
    bank_account = serializers.PrimaryKeyRelatedField(
        write_only=True, allow_null=True, queryset=Bank.objects.all(), required=False)
    fuel_card = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=FuelCard.objects.all(), required=False)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    booking_id = serializers.PrimaryKeyRelatedField(write_only=True, many=True, queryset=ManualBooking.objects.all(),
                                                    required=False)
    aaho_office = serializers.PrimaryKeyRelatedField(write_only=True, allow_null=True,
                                                     queryset=AahoOffice.objects.all(), required=False)
    bookings = serializers.SerializerMethodField()
    lr_numbers = serializers.SerializerMethodField()
    bank_account_detail = serializers.SerializerMethodField()
    fuel_card_detail = serializers.SerializerMethodField()
    payment_mode_display = serializers.SerializerMethodField()
    details = serializers.SerializerMethodField()
    account_number = serializers.SerializerMethodField()
    bookings_data = serializers.SerializerMethodField()

    @classmethod
    def many_init(cls, *args, **kwargs):
        # Drop audit fields from list (many=True) representations.
        kwargs['child'] = cls()
        excluded_fields = ['deleted', 'deleted_on', 'updated_on', 'created_on']
        for field in excluded_fields:
            kwargs['child'].fields.pop(field)
        return serializers.ListSerializer(*args, **kwargs)

    def get_bookings_data(self, instance):
        if instance.booking_id.count() > 0:
            booking = instance.booking_id.last()
            return {'id': booking.id, 'booking_id': booking.booking_id}
        return {'id': -1, 'booking_id': None}

    def get_details(self, instance):
        if isinstance(instance.bank_account, Bank):
            return 'A/C No.: {}'.format(instance.bank_account.account_number)
        elif isinstance(instance.fuel_card, FuelCard):
            return 'Card Number: {}'.format(instance.fuel_card.card_number)
        return None

    def get_account_number(self, instance):
        return instance.bank_account.account_number if isinstance(instance.bank_account, Bank) else None

    def get_fuel_card_detail(self, instance):
        if isinstance(instance.fuel_card, FuelCard):
            return {'id': instance.fuel_card.id, 'card_number': instance.fuel_card.card_number}
        return {'id': -1, 'card_number': None}

    def get_bank_account_detail(self, instance):
        if isinstance(instance.bank_account, Bank):
            return {'id': instance.bank_account.id, 'account_holder_name': instance.bank_account.account_holder_name,
                    'account_number': instance.bank_account.account_number}
        return {'id': -1, 'account_holder_name': None, 'account_number': None}

    def get_payment_mode_display(self, instance):
        return instance.get_payment_mode_display()

    def get_bookings(self, instance):
        return '\n'.join(instance.booking_id.values_list('booking_id', flat=True))

    def get_lr_numbers(self, instance):
        return '\n'.join('\n'.join(booking.lr_numbers.values_list('lr_number', flat=True))
                         for booking in instance.booking_id.all())

    def validate_created_by(self, value):
        if isinstance(self.instance, OutWardPayment) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def create(self, validated_data):
        bookings = validated_data.pop('booking_id', [])
        instance = OutWardPayment.objects.create(**validated_data)
        for booking in bookings:
            instance.booking_id.add(booking)
            booking.save()
        return instance

    def update(self, instance, validated_data):
        bookings = []
        if "booking_id" in validated_data:
            bookings = validated_data.pop('booking_id')
            instance.booking_id.clear()
        OutWardPayment.objects.filter(id=instance.id).update(**validated_data)
        for booking in bookings:
            instance.booking_id.add(booking)
        return OutWardPayment.objects.get(id=instance.id)

class OutWardPaymentBillSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    bill_number = serializers.CharField(max_length=30,
                                        validators=[UniqueValidator(queryset=OutWardPaymentBill.objects.all())])
    bill_date = serializers.DateField(format=DATE_FORMAT, input_formats=[DATE_FORMAT, ISO_8601])
    amount = serializers.IntegerField(max_value=2147483647, min_value=0)
    vehicle_number = serializers.CharField(allow_null=True, max_length=50, required=False)
    lr_number = serializers.CharField(allow_null=True, max_length=200, required=False)
    from_city = serializers.CharField(allow_null=True, max_length=50, required=False)
    to_city = serializers.CharField(allow_null=True, max_length=50, required=False)
    loading_date = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
                                         input_formats=[DATE_FORMAT, ISO_8601])
    weight = serializers.CharField(allow_null=True, max_length=50, required=False)
    paid_to = serializers.CharField(allow_null=True, max_length=50, required=False)
    pan_number = serializers.CharField(allow_null=True, max_length=30, required=False)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    booking = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=ManualBooking.objects.all(), required=False)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    outward_pmt = serializers.PrimaryKeyRelatedField(allow_empty=False, many=True,
                                                     queryset=OutWardPayment.objects.all())
    booking_id = serializers.SerializerMethodField()
    s3_upload_url = serializers.SerializerMethodField()
    all_lr_numbers = serializers.SerializerMethodField()
    payment_date_mode_amount = serializers.SerializerMethodField()
    total_amount = serializers.SerializerMethodField()

    @classmethod
    def many_init(cls, *args, **kwargs):
        kwargs['child'] = cls()
        excluded_fields = ['id', 'deleted', 'deleted_on', 'updated_on', 'from_city', 'to_city']
        for field in excluded_fields:
            kwargs['child'].fields.pop(field)
        return serializers.ListSerializer(*args, **kwargs)

    def validate_created_by(self, value):
        if isinstance(self.instance, OutWardPaymentBill) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def get_payment_mode_display(self, instance):
        return instance.get_payment_mode_display()

    def get_booking_id(self, instance):
        if isinstance(instance, OutWardPaymentBill) and isinstance(instance.booking, ManualBooking):
            return instance.booking.booking_id
        return None

    def get_all_lr_numbers(self, instance):
        if isinstance(instance, OutWardPaymentBill) and isinstance(instance.booking, ManualBooking):
            return '\n'.join(instance.booking.lr_numbers.values_list("lr_number", flat=True))
        return None

    def get_payment_date_mode_amount(self, instance):
        if isinstance(instance, OutWardPaymentBill) and instance.outward_pmt:
            return [{'payment_date': payment.payment_date.strftime(DATE_FORMAT) if payment.payment_date else None,
                     'mode': payment.get_payment_mode_display(),
                     'amount': to_int(payment.actual_amount)}
                    for payment in instance.outward_pmt.exclude(deleted=True)]
        return []

    def get_s3_upload_url(self, instance):
        if isinstance(instance, OutWardPaymentBill):
            uploads = S3Upload.objects.filter(filename__istartswith='OPB-{}'.format(instance.bill_number),
                                              filename__iendswith='.pdf')
            if uploads.exists():
                return uploads.last().public_url()
        return None

    def get_total_amount(self, instance):
        if isinstance(instance, OutWardPaymentBill):
            return instance.total_amount
        return None

    def create(self, validated_data):
        outward_pmts = validated_data.pop('outward_pmt', [])
        instance = OutWardPaymentBill.objects.create(**validated_data)
        for outward_pmt in outward_pmts:
            instance.outward_pmt.add(outward_pmt)
        return instance

    def update(self, instance, validated_data):
        outward_pmts = []
        if "outward_pmt" in validated_data:
            outward_pmts = validated_data.pop('outward_pmt')
            instance.outward_pmt.clear()
        OutWardPaymentBill.objects.filter(id=instance.id).update(**validated_data)
        for outward_pmt in outward_pmts:
            instance.outward_pmt.add(outward_pmt)
        return OutWardPaymentBill.objects.get(id=instance.id)

class InvoiceSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    invoice_number = serializers.CharField(max_length=30, validators=[UniqueValidator(queryset=Invoice.objects.all())])
    date = serializers.DateField(format=DATE_FORMAT, input_formats=[DATE_FORMAT, ISO_8601])
    company_name = serializers.CharField(max_length=255)
    payment_received = serializers.BooleanField(required=False)
    address = serializers.CharField(allow_null=True, max_length=500)
    pin = serializers.CharField(allow_null=True, max_length=6, required=True)
    gstin = serializers.CharField(allow_null=True, min_length=15, max_length=15, required=True)
    total_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=True)
    advance_payment = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
    remarks = serializers.CharField(allow_null=True, max_length=500, required=False)
    service_tax_paid_by = serializers.CharField(allow_null=True, max_length=255, required=True)
    service_tax_aaho = serializers.DecimalField(decimal_places=2, max_digits=4)
    created_on = serializers.DateTimeField(read_only=True, format=DATETIME_FORMAT)
    updated_on = serializers.DateTimeField(read_only=True, format=DATETIME_FORMAT)
    deleted = serializers.BooleanField(required=False)
    summary_required = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATETIME_FORMAT)
    customer_fk = serializers.PrimaryKeyRelatedField(queryset=Sme.objects.all(), required=True)
    city = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=City.objects.all(), required=True)
    s3_upload = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=S3Upload.objects.all(), required=True)
    s3_upload_url = serializers.SerializerMethodField()
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    bookings = serializers.PrimaryKeyRelatedField(many=True, queryset=ManualBooking.objects.all(), required=False)
    booking_id = serializers.SerializerMethodField()
    lr_numbers = serializers.SerializerMethodField()
    is_escalate = serializers.SerializerMethodField()
    due_date = serializers.SerializerMethodField()
    amount_to_be_received = serializers.SerializerMethodField()

    def get_is_escalate(self, instance):
        return BookingStatusesMapping.objects.filter(
            manual_booking__in=instance.bookings.all(),
            booking_status_chain__booking_status__status='party_invoice_sent',
            booking_stage='in_progress').exists()

    @classmethod
    def many_init(cls, *args, **kwargs):
        kwargs['child'] = cls()
        excluded_fields = [
            'deleted', 'bookings', 'changed_by', 's3_upload', 'city', 'customer_fk', 'updated_on',
            'service_tax_paid_by', 'service_tax_aaho', 'summary_required', 'address', 'remarks', 'deleted_on'
        ]
        for field in excluded_fields:
            kwargs['child'].fields.pop(field)
        return serializers.ListSerializer(*args, **kwargs)

    def get_booking_id(self, instance):
        return '\n'.join(instance.bookings.values_list('booking_id', flat=True))

    def get_amount_to_be_received(self, instance):
        return instance.get_amount_to_be_received

    def get_due_date(self, instance):
        if instance.customer_fk:
            credit_period = instance.customer_fk.credit_period or 0
        else:
            credit_period = 0
        return (instance.date + timedelta(days=int(credit_period))).strftime("%d-%b-%Y")

    def get_lr_numbers(self, instance):
        return '\n'.join('\n'.join(booking.lr_numbers.values_list('lr_number', flat=True))
                         for booking in instance.bookings.all())

    def get_s3_upload_url(self, instance):
        if isinstance(instance.s3_upload, S3Upload):
            return instance.s3_upload.public_url()
        return ''

    def validate_created_by(self, value):
        if isinstance(self.instance, Invoice) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def create(self, validated_data):
        bookings = validated_data.pop('bookings', [])
        instance = Invoice.objects.create(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        return instance

    def update(self, instance, validated_data):
        bookings = []
        if "bookings" in validated_data:
            bookings = validated_data.pop('bookings')
            instance.bookings.clear()
        Invoice.objects.filter(id=instance.id).update(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        return Invoice.objects.get(id=instance.id)

class ToPayInvoiceSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    invoice_gen_office = serializers.CharField(allow_null=True, max_length=200, required=False)
    invoice_number = serializers.CharField(allow_null=True, max_length=30, required=False,
                                           validators=[UniqueValidator(queryset=ToPayInvoice.objects.all())])
    date = serializers.DateField(allow_null=True, required=False, format=DATE_FORMAT,
                                 input_formats=[DATE_FORMAT, ISO_8601])
    company_name = serializers.CharField(allow_null=True, max_length=100, required=False)
    payment_received = serializers.BooleanField(required=False)
    company_address = serializers.CharField(allow_null=True, max_length=300, required=False)
    pin = serializers.CharField(allow_null=True, max_length=6, required=False)
    gstin = serializers.CharField(allow_null=True, max_length=15, required=False)
    source = serializers.CharField(allow_null=True, max_length=35, required=False)
    destination = serializers.CharField(allow_null=True, max_length=35, required=False)
    vehicle_number = serializers.CharField(allow_null=True, max_length=20, required=False)
    lr_number = serializers.CharField(allow_null=True, max_length=100, required=False)
    quantity = serializers.CharField(allow_null=True, max_length=100, required=False)
    rate = serializers.CharField(allow_null=True, max_length=20, required=False)
    total_payable_freight = serializers.CharField(allow_null=True, max_length=30, required=False)
    amount_payable_to_transiq = serializers.CharField(allow_null=True, max_length=30, required=False)
    balance_payable_to_lorry_driver = serializers.CharField(allow_null=True, max_length=30, required=False)
    advance_payment = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
    remarks = serializers.CharField(allow_null=True, required=False, style={'base_template': 'textarea.html'})
    service_tax_paid_by = serializers.CharField(allow_null=True, max_length=255, required=False)
    service_tax_aaho = serializers.DecimalField(allow_null=True, decimal_places=2, max_digits=4, required=False)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    customer_fk = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Sme.objects.all(), required=False)
    city = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=City.objects.all(), required=False)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), required=False, slug_field="username")
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
    bookings = serializers.PrimaryKeyRelatedField(allow_empty=False, many=True, queryset=ManualBooking.objects.all())

    def to_representation(self, instance):
        self.fields["customer_fk"] = SmeSerializer(read_only=True)
        self.fields["city"] = CitySerializer(read_only=True)
        self.fields["bookings"] = ManualBookingSerializer(read_only=True, many=True)
        return super().to_representation(instance=instance)

    def validate_created_by(self, value):
        if isinstance(self.instance, ToPayInvoice) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def create(self, validated_data):
        bookings = validated_data.pop('bookings', [])
        instance = ToPayInvoice.objects.create(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        return instance

    def update(self, instance, validated_data):
        bookings = []
        if "bookings" in validated_data:
            bookings = validated_data.pop('bookings')
            instance.bookings.clear()
        ToPayInvoice.objects.filter(id=instance.id).update(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        return ToPayInvoice.objects.get(id=instance.id)

class PendingInwardPaymentEntrySerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    customer_name = serializers.CharField(allow_null=True, max_length=300, required=False)
    payment_mode = serializers.ChoiceField(choices=(
        ('cash', 'Cash'), ('cheque', 'Cheque'), ('neft', 'NEFT'), ('rtgs', 'RTGS'), ('cash_deposit', 'Cash Deposit'),
        ('hdfc_internal_account', 'HDFC')))
    amount = serializers.DecimalField(decimal_places=2, max_digits=12, required=True)
    tds = serializers.DecimalField(decimal_places=2, max_digits=12, required=False)
    payment_date = serializers.DateField(format=DATE_FORMAT, input_formats=[DATE_FORMAT, ISO_8601, "%d/%m/%Y"])
    adjusted_flag = serializers.BooleanField(required=False)
    credited_flag = serializers.BooleanField(required=False)
    # Pass the callable itself, not datetime.now(): a call here would be evaluated
    # once at import time and every entry would share that stale timestamp.
    uploaded_datetime = serializers.DateTimeField(allow_null=True, default=datetime.now)
    adjusted_datetime = serializers.DateTimeField(allow_null=True, required=False)
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    trn = serializers.CharField(allow_null=True, style={'base_template': 'textarea.html'}, required=False)
    additional_remark = serializers.CharField(allow_null=True, required=False,
                                              style={'base_template': 'textarea.html'})
    customer = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Sme.objects.all())
    uploaded_by = serializers.SlugRelatedField(allow_null=True, queryset=User.objects.all(), slug_field="username")
    adjusted_by = serializers.SlugRelatedField(allow_null=True, queryset=User.objects.all(), required=False,
                                               slug_field="username")
    inward_payment = serializers.PrimaryKeyRelatedField(many=True, allow_empty=False,
                                                        queryset=InWardPayment.objects.all(), required=False)
    bookings = serializers.PrimaryKeyRelatedField(many=True, allow_empty=False, queryset=ManualBooking.objects.all(),
                                                  required=False)

    # def to_representation(self, instance):
    #     self.fields["customer"] = SmeSerializer(read_only=True)
    #     self.fields["uploaded_by"] = UserSerializer(read_only=True)
    #     self.fields["adjusted_by"] = UserSerializer(read_only=True)
    #     self.fields["inward_payment"] = InWardPaymentSerializer(read_only=True, many=True)
    #     self.fields["bookings"] = ManualBookingSerializer(read_only=True, many=True)
    #     return super(PendingInwardPaymentEntrySerializer, self).to_representation(instance=instance)

    def validate(self, attrs):
        # Reject duplicate non-cash entries with the same TRN and payment date.
        # On update, fall back to the instance values for fields not being changed.
        if "payment_mode" in attrs and attrs["payment_mode"] != "cash":
            if isinstance(self.instance, PendingInwardPaymentEntry):
                payment_date = attrs.get("payment_date", self.instance.payment_date)
                trn = attrs.get("trn", self.instance.trn)
            else:
                payment_date = attrs["payment_date"]
                trn = attrs["trn"]
            if PendingInwardPaymentEntry.objects.filter(trn=trn, payment_date=payment_date).exists():
                raise serializers.ValidationError(
                    "Error: TRN = {}, Payment Date = {} combination with Payment Mode = {} already exists".format(
                        trn, payment_date, attrs["payment_mode"].upper()))
        return attrs

    def validate_uploaded_by(self, value):
        if isinstance(self.instance, PendingInwardPaymentEntry) and value:
            raise serializers.ValidationError("Uploaded by is immutable")
        return value

    def create(self, validated_data):
        if validated_data["customer"] is not None:
            validated_data["customer_name"] = validated_data["customer"].get_name()
        bookings = validated_data.pop('bookings', [])
        inward_payments = validated_data.pop('inward_payment', [])
        instance = PendingInwardPaymentEntry.objects.create(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        for inward_payment in inward_payments:
            instance.inward_payment.add(inward_payment)
        return instance

    def update(self, instance, validated_data):
        bookings = []
        inward_payments = []
        if "bookings" in validated_data:
            bookings = validated_data.pop('bookings')
            instance.bookings.clear()
        if "inward_payment" in validated_data:
            inward_payments = validated_data.pop('inward_payment')
            instance.inward_payment.clear()
        PendingInwardPaymentEntry.objects.filter(id=instance.id).update(**validated_data)
        for booking in bookings:
            instance.bookings.add(booking)
        for inward_payment in inward_payments:
            instance.inward_payment.add(inward_payment)
        return PendingInwardPaymentEntry.objects.get(id=instance.id)
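The `uploaded_datetime` field above relies on DRF resolving a field default by calling it when it is callable, so `default=datetime.now` yields a fresh timestamp per request, whereas `default=datetime.now()` would be frozen once when the module loads. A minimal, framework-free sketch of that distinction (`resolve_default` is an illustrative stand-in, not DRF's internal API):

```python
import time
from datetime import datetime


def resolve_default(default):
    # Mimics use-time resolution of a default: call it if it is callable.
    return default() if callable(default) else default


frozen = datetime.now()                # like default=datetime.now(): one fixed value
first = resolve_default(frozen)
time.sleep(0.01)
second = resolve_default(frozen)
live = resolve_default(datetime.now)   # like default=datetime.now: fresh each call

assert first == second                 # the frozen default never changes
assert live > first                    # the callable default is re-evaluated later
```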

class CreditDebitNoteReasonSerializer(serializers.Serializer):
    id = serializers.IntegerField(label='ID', read_only=True)
    name = serializers.CharField(max_length=30,
                                 validators=[UniqueValidator(queryset=CreditDebitNoteReason.objects.all())])
    created_on = serializers.DateTimeField(read_only=True)
    updated_on = serializers.DateTimeField(read_only=True)
    deleted = serializers.BooleanField(required=False)
    deleted_on = serializers.DateTimeField(allow_null=True, required=False)
    created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
    changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")

    def validate_created_by(self, value):
        if isinstance(self.instance, CreditDebitNoteReason) and value:
            raise serializers.ValidationError("Created by is immutable")
        return value

    def create(self, validated_data):
        return CreditDebitNoteReason.objects.create(**validated_data)

    def update(self, instance, validated_data):
        CreditDebitNoteReason.objects.filter(id=instance.id).update(**validated_data)
        return CreditDebitNoteReason.objects.get(id=instance.id)
class CreditNoteCustomerSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
credit_note_number = serializers.CharField(max_length=16,
validators=[UniqueValidator(queryset=CreditNoteCustomer.objects.all())],
required=False)
credit_amount = serializers.IntegerField(max_value=2147483647, min_value=0)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Invoice.objects.all(), required=False)
customer = serializers.PrimaryKeyRelatedField(queryset=Sme.objects.all())
reason = serializers.PrimaryKeyRelatedField(
label='Reason for Credit Note', queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(
label='Adjusted Bookings', many=True, queryset=ManualBooking.objects.all(), required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
customer_name = serializers.SerializerMethodField()
reason_text = serializers.SerializerMethodField()
def get_reason_text(self, instance):
if isinstance(instance.reason, CreditDebitNoteReason):
return instance.reason.name
return None
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
def get_customer_name(self, instance):
if isinstance(instance.customer, Sme):
return instance.customer.get_name()
return None
def validate_created_by(self, value):
if isinstance(self.instance, CreditNoteCustomer) and value:
raise serializers.ValidationError("Created by is immutable")
return value
    def create(self, validated_data):
        validated_data["credit_note_number"] = generate_credit_note_customer_serial_number(
            validated_data["customer"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = CreditNoteCustomer.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        CreditNoteCustomer.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return CreditNoteCustomer.objects.get(id=instance.id)
class DebitNoteCustomerSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
debit_note_number = serializers.CharField(max_length=16,
validators=[UniqueValidator(queryset=DebitNoteCustomer.objects.all())],
required=False)
debit_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Invoice.objects.all(), required=False)
customer = serializers.PrimaryKeyRelatedField(queryset=Sme.objects.all())
    reason = serializers.PrimaryKeyRelatedField(label='Reason for Debit Note',
                                                queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(label='Adjusted Bookings', many=True,
queryset=ManualBooking.objects.all(),
required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
customer_name = serializers.SerializerMethodField()
reason_text = serializers.SerializerMethodField()
def get_reason_text(self, instance):
if isinstance(instance.reason, CreditDebitNoteReason):
return instance.reason.name
return None
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
def get_customer_name(self, instance):
if isinstance(instance.customer, Sme):
return instance.customer.get_name()
return None
def validate_created_by(self, value):
if isinstance(self.instance, DebitNoteCustomer) and value:
raise serializers.ValidationError("Created by is immutable")
return value
    def create(self, validated_data):
        validated_data["debit_note_number"] = generate_debit_note_customer_serial_number(
            validated_data["customer"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = DebitNoteCustomer.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        DebitNoteCustomer.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return DebitNoteCustomer.objects.get(id=instance.id)
class CreditNoteSupplierSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
credit_note_number = serializers.CharField(max_length=16,
validators=[UniqueValidator(queryset=CreditNoteSupplier.objects.all())],
required=False)
credit_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Invoice.objects.all(), required=False)
accounting_supplier = serializers.PrimaryKeyRelatedField(queryset=Supplier.objects.all())
reason = serializers.PrimaryKeyRelatedField(label='Reason for Credit Note',
queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(allow_empty=False, label='Adjusted Bookings', many=True,
queryset=ManualBooking.objects.all(), required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
supplier_name = serializers.SerializerMethodField()
reason_text = serializers.SerializerMethodField()
def get_reason_text(self, instance):
if isinstance(instance.reason, CreditDebitNoteReason):
return instance.reason.name
return None
def get_supplier_name(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
return instance.accounting_supplier.name
return None
def validate_created_by(self, value):
if isinstance(self.instance, CreditNoteSupplier) and value:
raise serializers.ValidationError("Created by is immutable")
return value
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
    def create(self, validated_data):
        validated_data["credit_note_number"] = generate_credit_note_supplier_serial_number(
            validated_data["accounting_supplier"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = CreditNoteSupplier.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        CreditNoteSupplier.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return CreditNoteSupplier.objects.get(id=instance.id)
class DebitNoteSupplierSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
debit_note_number = serializers.CharField(max_length=16,
validators=[UniqueValidator(queryset=DebitNoteSupplier.objects.all())],
required=False)
debit_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Invoice.objects.all(), required=False)
accounting_supplier = serializers.PrimaryKeyRelatedField(queryset=Supplier.objects.all())
    reason = serializers.PrimaryKeyRelatedField(label='Reason for Debit Note',
                                                queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(allow_empty=False, label='Adjusted Bookings', many=True,
queryset=ManualBooking.objects.all(), required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
supplier_name = serializers.SerializerMethodField()
reason_text = serializers.SerializerMethodField()
def get_reason_text(self, instance):
if isinstance(instance.reason, CreditDebitNoteReason):
return instance.reason.name
return None
def get_supplier_name(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
return instance.accounting_supplier.name
return None
def validate_created_by(self, value):
if isinstance(self.instance, DebitNoteSupplier) and value:
raise serializers.ValidationError("Created by is immutable")
return value
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
    def create(self, validated_data):
        validated_data["debit_note_number"] = generate_debit_note_supplier_serial_number(
            validated_data["accounting_supplier"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = DebitNoteSupplier.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        DebitNoteSupplier.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return DebitNoteSupplier.objects.get(id=instance.id)
class CreditNoteCustomerDirectAdvanceSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
credit_note_number = serializers.CharField(max_length=17, validators=[UniqueValidator(
queryset=CreditNoteCustomerDirectAdvance.objects.all())], required=False)
credit_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_blank=True, allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_blank=True, allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=Invoice.objects.all(), required=False)
customer = serializers.PrimaryKeyRelatedField(queryset=Sme.objects.all())
accounting_supplier = serializers.PrimaryKeyRelatedField(queryset=Supplier.objects.all(), required=False)
reason = serializers.PrimaryKeyRelatedField(label='Reason for Credit Note',
queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(allow_empty=False, label='Adjusted Bookings', many=True,
queryset=ManualBooking.objects.all(), required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
customer_name = serializers.SerializerMethodField()
supplier_name = serializers.SerializerMethodField()
reason_text = serializers.SerializerMethodField()
def get_reason_text(self, instance):
if isinstance(instance.reason, CreditDebitNoteReason):
return instance.reason.name
return None
def get_supplier_name(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
return instance.accounting_supplier.name
return None
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
def get_customer_name(self, instance):
if isinstance(instance.customer, Sme):
return instance.customer.get_name()
return None
def validate_created_by(self, value):
if isinstance(self.instance, CreditNoteCustomerDirectAdvance) and value:
raise serializers.ValidationError("Created by is immutable")
return value
    def create(self, validated_data):
        validated_data["credit_note_number"] = generate_credit_note_customer_direct_advance_serial_number(
            validated_data["customer"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = CreditNoteCustomerDirectAdvance.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        CreditNoteCustomerDirectAdvance.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return CreditNoteCustomerDirectAdvance.objects.get(id=instance.id)
class DebitNoteSupplierDirectAdvanceSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
debit_note_number = serializers.CharField(max_length=17, validators=[UniqueValidator(
queryset=DebitNoteSupplierDirectAdvance.objects.all())], required=False)
debit_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
adjusted_amount = serializers.IntegerField(max_value=2147483647, min_value=0, required=False)
approved_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
adjusted_on = serializers.DateTimeField(allow_null=True, required=False, format=DATE_FORMAT)
remarks = serializers.CharField(allow_blank=True, allow_null=True, required=False,
style={'base_template': 'textarea.html'})
status = serializers.ChoiceField(choices=(
('pending', 'Pending for Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'),
('partial', 'Partially Adjusted'), ('adjusted', 'Fully Adjusted')), required=False)
rejected_on = serializers.DateTimeField(allow_null=True, required=False)
rejection_reason = serializers.CharField(allow_blank=True, allow_null=True, required=False,
style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True, format=DATE_FORMAT)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
invoice = serializers.PrimaryKeyRelatedField(queryset=Invoice.objects.all(), required=False)
accounting_supplier = serializers.PrimaryKeyRelatedField(queryset=Supplier.objects.all())
customer = serializers.PrimaryKeyRelatedField(queryset=Sme.objects.all(), required=False)
    reason = serializers.PrimaryKeyRelatedField(label='Reason for Debit Note',
                                                queryset=CreditDebitNoteReason.objects.all())
bookings = serializers.PrimaryKeyRelatedField(allow_empty=False, label='Adjusted Bookings', many=True,
queryset=ManualBooking.objects.all(), required=False)
approved_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
adjusted_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
created_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
changed_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username")
rejected_by = serializers.SlugRelatedField(queryset=User.objects.all(), slug_field="username", required=False)
supplier_name = serializers.SerializerMethodField()
def get_supplier_name(self, instance):
if isinstance(instance.accounting_supplier, Supplier):
return instance.accounting_supplier.name
return None
def to_representation(self, instance):
self.fields["reason"] = CreditDebitNoteReasonSerializer(read_only=True)
return super().to_representation(instance=instance)
def validate_created_by(self, value):
if isinstance(self.instance, DebitNoteSupplierDirectAdvance) and value:
raise serializers.ValidationError("Created by is immutable")
return value
    def create(self, validated_data):
        validated_data["debit_note_number"] = generate_debit_note_supplier_direct_advance_serial_number(
            validated_data["accounting_supplier"].id)
        # "bookings" is many-to-many, so it must be set after the instance exists.
        bookings = validated_data.pop("bookings", [])
        instance = DebitNoteSupplierDirectAdvance.objects.create(**validated_data)
        instance.bookings.set(bookings)
        return instance

    def update(self, instance, validated_data):
        bookings = validated_data.pop("bookings", None)
        # queryset.update() bypasses Model.save() and auto_now fields.
        DebitNoteSupplierDirectAdvance.objects.filter(id=instance.id).update(**validated_data)
        if bookings is not None:
            instance.bookings.set(bookings)
        return DebitNoteSupplierDirectAdvance.objects.get(id=instance.id)
class DataTablesFilterSerializer(serializers.Serializer):
id = serializers.IntegerField(label='ID', read_only=True)
table_name = serializers.ChoiceField(choices=(
('MBS', 'Manual Bookings'), ('INV', 'Invoices'), ('OWP', 'Outward Payments'), ('IWP', 'Inward Payments'),
('CUS', 'Customers'), ('SUP', 'Suppliers'), ('OWN', 'Owners'), ('VEH', 'Vehicles')),
validators=[UniqueValidator(queryset=DataTablesFilter.objects.all())])
criteria = serializers.JSONField(style={'base_template': 'textarea.html'})
created_on = serializers.DateTimeField(read_only=True)
updated_on = serializers.DateTimeField(read_only=True)
deleted = serializers.BooleanField(required=False)
deleted_on = serializers.DateTimeField(allow_null=True, required=False)
created_by = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=User.objects.all(), required=False)
changed_by = serializers.PrimaryKeyRelatedField(allow_null=True, queryset=User.objects.all(), required=False)
    def create(self, validated_data):
        # Write operations are intentionally left unimplemented for this serializer.
        pass

    def update(self, instance, validated_data):
        # Write operations are intentionally left unimplemented for this serializer.
        pass
# -*- coding: utf-8 -*-
import pytest
def test_edit_first_group(app):
app.group.edit_first_group(group_name="Друзья")
import unittest
from adjudicator.decisions import Outcomes
from adjudicator.order import Convoy, Move, Support
from adjudicator.piece import Army, Fleet
from adjudicator.processor import process
from adjudicator.state import State
from adjudicator.tests.data import NamedCoasts, Nations, Territories, register_all
class TestCircularMovement(unittest.TestCase):
def setUp(self):
self.state = State()
self.territories = Territories()
self.named_coasts = NamedCoasts(self.territories)
self.state = register_all(self.state, self.territories, self.named_coasts)
def test_three_army_circular_movement(self):
"""
Three units can change place, even in spring 1901.
Turkey:
F Ankara - Constantinople
A Constantinople - Smyrna
A Smyrna - Ankara
All three units will move.
"""
pieces = [
Fleet(0, Nations.TURKEY, self.territories.ANKARA),
Army(0, Nations.TURKEY, self.territories.CONSTANTINOPLE),
Army(0, Nations.TURKEY, self.territories.SMYRNA)
]
orders = [
Move(0, Nations.TURKEY, self.territories.ANKARA, self.territories.CONSTANTINOPLE),
Move(0, Nations.TURKEY, self.territories.CONSTANTINOPLE, self.territories.SMYRNA),
Move(0, Nations.TURKEY, self.territories.SMYRNA, self.territories.ANKARA),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[0].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[1].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[2].outcome, Outcomes.SUCCEEDS)
def test_three_army_circular_movement_with_support(self):
"""
Three units can change place, even when one gets support.
Turkey:
F Ankara - Constantinople
A Constantinople - Smyrna
A Smyrna - Ankara
A Bulgaria Supports F Ankara - Constantinople
Of course the three units will move, but knowing how programs are
written, this can confuse the adjudicator.
"""
pieces = [
Fleet(0, Nations.TURKEY, self.territories.ANKARA),
Army(0, Nations.TURKEY, self.territories.BULGARIA),
Army(0, Nations.TURKEY, self.territories.CONSTANTINOPLE),
Army(0, Nations.TURKEY, self.territories.SMYRNA)
]
orders = [
Move(0, Nations.TURKEY, self.territories.ANKARA, self.territories.CONSTANTINOPLE),
Move(0, Nations.TURKEY, self.territories.CONSTANTINOPLE, self.territories.SMYRNA),
Move(0, Nations.TURKEY, self.territories.SMYRNA, self.territories.ANKARA),
Support(0, Nations.TURKEY, self.territories.BULGARIA, self.territories.ANKARA, self.territories.CONSTANTINOPLE),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[0].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[1].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[2].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[3].outcome, Outcomes.SUCCEEDS)
def test_disrupted_three_army_circular_movement(self):
"""
When one of the units bounces, the whole circular movement will hold.
Turkey:
F Ankara - Constantinople
A Constantinople - Smyrna
A Smyrna - Ankara
A Bulgaria - Constantinople
Every unit will keep its place.
"""
pieces = [
Fleet(0, Nations.TURKEY, self.territories.ANKARA),
Army(0, Nations.TURKEY, self.territories.BULGARIA),
Army(0, Nations.TURKEY, self.territories.CONSTANTINOPLE),
Army(0, Nations.TURKEY, self.territories.SMYRNA)
]
orders = [
Move(0, Nations.TURKEY, self.territories.ANKARA, self.territories.CONSTANTINOPLE),
Move(0, Nations.TURKEY, self.territories.CONSTANTINOPLE, self.territories.SMYRNA),
Move(0, Nations.TURKEY, self.territories.SMYRNA, self.territories.ANKARA),
Move(0, Nations.TURKEY, self.territories.BULGARIA, self.territories.CONSTANTINOPLE),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[0].outcome, Outcomes.FAILS)
self.assertEqual(orders[1].outcome, Outcomes.FAILS)
self.assertEqual(orders[2].outcome, Outcomes.FAILS)
self.assertEqual(orders[3].outcome, Outcomes.FAILS)
def test_circular_movement_with_attacked_convoy(self):
"""
When the circular movement contains an attacked convoy, the circular
movement succeeds. The adjudication algorithm should handle attack of
convoys before calculating circular movement.
Austria:
A Trieste - Serbia
A Serbia - Bulgaria
Turkey:
A Bulgaria - Trieste
F Aegean Sea Convoys A Bulgaria - Trieste
F Ionian Sea Convoys A Bulgaria - Trieste
F Adriatic Sea Convoys A Bulgaria - Trieste
Italy:
F Naples - Ionian Sea
The fleet in the Ionian Sea is attacked but not dislodged. The circular
movement succeeds. The Austrian and Turkish armies will advance.
"""
pieces = [
Army(0, Nations.AUSTRIA, self.territories.TRIESTE),
Army(0, Nations.AUSTRIA, self.territories.SERBIA),
Army(0, Nations.TURKEY, self.territories.BULGARIA),
Fleet(0, Nations.TURKEY, self.territories.AEGEAN_SEA),
Fleet(0, Nations.TURKEY, self.territories.IONIAN_SEA),
Fleet(0, Nations.TURKEY, self.territories.ADRIATIC_SEA),
Fleet(0, Nations.ITALY, self.territories.NAPLES),
]
orders = [
Move(0, Nations.AUSTRIA, self.territories.TRIESTE, self.territories.SERBIA),
Move(0, Nations.AUSTRIA, self.territories.SERBIA, self.territories.BULGARIA),
Move(0, Nations.TURKEY, self.territories.BULGARIA, self.territories.TRIESTE, via_convoy=True),
Convoy(0, Nations.TURKEY, self.territories.AEGEAN_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Convoy(0, Nations.TURKEY, self.territories.IONIAN_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Convoy(0, Nations.TURKEY, self.territories.ADRIATIC_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Move(0, Nations.ITALY, self.territories.NAPLES, self.territories.IONIAN_SEA),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[0].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[1].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[2].outcome, Outcomes.SUCCEEDS)
self.assertEqual(pieces[3].dislodged_decision, Outcomes.SUSTAINS)
self.assertEqual(pieces[4].dislodged_decision, Outcomes.SUSTAINS)
self.assertEqual(pieces[5].dislodged_decision, Outcomes.SUSTAINS)
self.assertEqual(orders[6].outcome, Outcomes.FAILS)
def test_disrupted_circular_movement_due_to_dislodged_convoy(self):
"""
When the circular movement contains a convoy, the circular movement is
disrupted when the convoying fleet is dislodged. The adjudication
algorithm should disrupt convoys before calculating circular movement.
Austria:
A Trieste - Serbia
A Serbia - Bulgaria
Turkey:
A Bulgaria - Trieste
F Aegean Sea Convoys A Bulgaria - Trieste
F Ionian Sea Convoys A Bulgaria - Trieste
F Adriatic Sea Convoys A Bulgaria - Trieste
Italy:
F Naples - Ionian Sea
F Tunis Supports F Naples - Ionian Sea
Due to the dislodged convoying fleet, all Austrian and Turkish armies
will not move.
"""
pieces = [
Army(0, Nations.AUSTRIA, self.territories.TRIESTE),
Army(0, Nations.AUSTRIA, self.territories.SERBIA),
Army(0, Nations.TURKEY, self.territories.BULGARIA),
Fleet(0, Nations.TURKEY, self.territories.AEGEAN_SEA),
Fleet(0, Nations.TURKEY, self.territories.IONIAN_SEA),
Fleet(0, Nations.TURKEY, self.territories.ADRIATIC_SEA),
Fleet(0, Nations.ITALY, self.territories.NAPLES),
Fleet(0, Nations.ITALY, self.territories.TUNIS),
]
orders = [
Move(0, Nations.AUSTRIA, self.territories.TRIESTE, self.territories.SERBIA),
Move(0, Nations.AUSTRIA, self.territories.SERBIA, self.territories.BULGARIA),
Move(0, Nations.TURKEY, self.territories.BULGARIA, self.territories.TRIESTE, via_convoy=True),
Convoy(0, Nations.TURKEY, self.territories.AEGEAN_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Convoy(0, Nations.TURKEY, self.territories.IONIAN_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Convoy(0, Nations.TURKEY, self.territories.ADRIATIC_SEA, self.territories.BULGARIA, self.territories.TRIESTE),
Move(0, Nations.ITALY, self.territories.NAPLES, self.territories.IONIAN_SEA),
Support(0, Nations.ITALY, self.territories.TUNIS, self.territories.NAPLES, self.territories.IONIAN_SEA),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[0].outcome, Outcomes.FAILS)
self.assertEqual(orders[1].outcome, Outcomes.FAILS)
self.assertEqual(orders[2].outcome, Outcomes.FAILS)
self.assertEqual(pieces[3].dislodged_decision, Outcomes.SUSTAINS)
self.assertEqual(pieces[4].dislodged_decision, Outcomes.DISLODGED)
self.assertEqual(pieces[4].dislodged_by, pieces[6])
self.assertEqual(pieces[5].dislodged_decision, Outcomes.SUSTAINS)
self.assertEqual(orders[6].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[7].outcome, Outcomes.SUCCEEDS)
def test_two_armies_with_two_convoys(self):
"""
Two armies can swap places even when they are not adjacent.
England:
F North Sea Convoys A London - Belgium
A London - Belgium
France:
F English Channel Convoys A Belgium - London
A Belgium - London
Both convoys should succeed.
"""
pieces = [
Fleet(0, Nations.ENGLAND, self.territories.NORTH_SEA),
Army(0, Nations.ENGLAND, self.territories.LONDON),
Fleet(0, Nations.FRANCE, self.territories.ENGLISH_CHANNEL),
Army(0, Nations.FRANCE, self.territories.BELGIUM),
]
orders = [
Convoy(0, Nations.ENGLAND, self.territories.NORTH_SEA, self.territories.LONDON, self.territories.BELGIUM),
Move(0, Nations.ENGLAND, self.territories.LONDON, self.territories.BELGIUM, via_convoy=True),
Convoy(0, Nations.FRANCE, self.territories.ENGLISH_CHANNEL, self.territories.BELGIUM, self.territories.LONDON),
Move(0, Nations.FRANCE, self.territories.BELGIUM, self.territories.LONDON, via_convoy=True),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[1].outcome, Outcomes.SUCCEEDS)
self.assertEqual(orders[3].outcome, Outcomes.SUCCEEDS)
def test_disrupted_unit_swap(self):
"""
If in a swap one of the units bounces, then the swap fails.
England:
F North Sea Convoys A London - Belgium
A London - Belgium
France:
F English Channel Convoys A Belgium - London
A Belgium - London
A Burgundy - Belgium
None of the units will succeed in moving.
"""
pieces = [
Fleet(0, Nations.ENGLAND, self.territories.NORTH_SEA),
Army(0, Nations.ENGLAND, self.territories.LONDON),
Fleet(0, Nations.FRANCE, self.territories.ENGLISH_CHANNEL),
Army(0, Nations.FRANCE, self.territories.BELGIUM),
Army(0, Nations.FRANCE, self.territories.BURGUNDY),
]
orders = [
Convoy(0, Nations.ENGLAND, self.territories.NORTH_SEA, self.territories.LONDON, self.territories.BELGIUM),
Move(0, Nations.ENGLAND, self.territories.LONDON, self.territories.BELGIUM, via_convoy=True),
Convoy(0, Nations.FRANCE, self.territories.ENGLISH_CHANNEL, self.territories.BELGIUM, self.territories.LONDON),
Move(0, Nations.FRANCE, self.territories.BELGIUM, self.territories.LONDON, via_convoy=True),
Move(0, Nations.FRANCE, self.territories.BURGUNDY, self.territories.BELGIUM),
]
self.state.register(*pieces, *orders)
self.state.post_register_updates()
process(self.state)
self.assertEqual(orders[1].outcome, Outcomes.FAILS)
self.assertEqual(orders[3].outcome, Outcomes.FAILS)
self.assertEqual(orders[4].outcome, Outcomes.FAILS)
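The three tests above all exercise the same adjudication rule: a head-to-head pair of moves can only swap places when at least one of them travels by a successful convoy. A simplified standalone sketch of that rule, ignoring convoy disruption (which test_disrupted_unit_swap covers); this is not the engine under test, and the helper names are mine:

```python
def is_head_to_head(move_a, move_b):
    """True when two moves trade source and destination."""
    return (move_a["source"] == move_b["target"]
            and move_a["target"] == move_b["source"])

def swap_succeeds(move_a, move_b):
    """Simplified rule: a head-to-head pair can only swap places when at
    least one move travels via a (successful) convoy; two plain land
    moves bounce off each other."""
    if not is_head_to_head(move_a, move_b):
        raise ValueError("not a swap")
    return move_a["via_convoy"] or move_b["via_convoy"]

london = {"source": "LON", "target": "BEL", "via_convoy": True}
belgium = {"source": "BEL", "target": "LON", "via_convoy": True}
```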
# --- src/lib/sched.py (DTenore/skulpt, MIT) ---
import _sk_fail; _sk_fail._("sched")
# --- django-budget/transaction/tests/tests_views.py (eliostvs/django-budget, MIT) ---
from __future__ import unicode_literals
from django.contrib import messages
from django.contrib.messages.middleware import MessageMiddleware
from django.contrib.sessions.middleware import SessionMiddleware
from django.core.paginator import Page, Paginator
from django.core.urlresolvers import reverse
from djet.testcases import MiddlewareType
from model_mommy import mommy
from rebar.testing import flatten_to_dict
from base.utils import BaseTestCase
class TransactionListViewTest(BaseTestCase):
from transaction.views import TransactionListView
url = reverse('transaction:transaction_list')
view_class = TransactionListView
def test_view_with_no_transaction(self):
response = self.get()
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/list.html')
self.assertContains(response, reverse('transaction:transaction_add'))
self.assertIsInstance(response.context_data['paginator'], Paginator)
self.assertIsInstance(response.context_data['page_obj'], Page)
self.assertFalse(response.context_data['is_paginated'])
self.assertEqual(0, response.context_data['transactions'].count())
def test_view_with_no_active_transaction(self):
mommy.make('Transaction', is_deleted=True)
response = self.get()
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/list.html')
self.assertContains(response, reverse('transaction:transaction_add'))
self.assertIsInstance(response.context_data['paginator'], Paginator)
self.assertIsInstance(response.context_data['page_obj'], Page)
self.assertFalse(response.context_data['is_paginated'])
self.assertEqual(0, response.context_data['transactions'].count())
def test_view_with_a_transaction(self):
transaction = mommy.make('Transaction')
response = self.get()
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/list.html')
self.assertIsInstance(response.context_data['paginator'], Paginator)
self.assertIsInstance(response.context_data['page_obj'], Page)
self.assertFalse(response.context_data['is_paginated'])
self.assertEqual(1, response.context_data['transactions'].count())
self.assertIn(transaction, response.context_data['transactions'])
def test_view_pagination(self):
mommy.make('Transaction', _quantity=10)
transaction = mommy.make('Transaction')
url = '%s?page=2' % self.url
request = self.factory.get(path=url, user=self.mock_user)
response = self.view(request)
response.render()
self.assertIsInstance(response.context_data['paginator'], Paginator)
self.assertIsInstance(response.context_data['page_obj'], Page)
self.assertTrue(response.context_data['is_paginated'])
self.assertEqual(1, response.context_data['transactions'].count())
self.assertIn(transaction, response.context_data['transactions'])
def test_html_content_with_no_transaction(self):
response = self.get()
self.assertNotContains(response, 'INVALID VARIABLE:')
self.assertContains(response, 'Transaction List', count=2)
self.assertContains(response, 'New Transaction')
self.assertContains(response, reverse('transaction:transaction_add'))
self.assertContains(response, 'No transactions found.')
def test_html_content_with_a_transaction(self):
category = mommy.make('Category')
transaction = mommy.make('Transaction', notes='foo', category=category)
response = self.get()
self.assertNotContains(response, 'INVALID VARIABLE:')
self.assertContains(response, 'Transaction List', count=2)
self.assertContains(response, reverse('transaction:transaction_add'))
self.assertNotContains(response, 'No transactions found.')
self.assertContains(response, transaction.id)
self.assertContains(response, transaction.notes)
self.assertContains(response, transaction.get_transaction_type_display())
self.assertContains(response, transaction.date.strftime('%m/%d/%Y'))
self.assertContains(response, transaction.category.name)
self.assertContains(response, transaction.amount)
self.assertContains(response, reverse('transaction:transaction_edit', kwargs={'pk': transaction.id}))
self.assertContains(response, reverse('transaction:transaction_delete', kwargs={'pk': transaction.id}))
def test_view_redirect_if_anonymous(self):
request = self.factory.get(path=self.url, user=self.anonymous_user)
response = self.view(request)
self.assertEqual(302, response.status_code)
self.assertEqual('%s?next=%s' % (reverse('login'), self.url), response._headers['location'][1])
def get(self):
request = self.factory.get(path=self.url, user=self.mock_user)
response = self.view(request)
return response.render()
class TransactionAddViewTest(BaseTestCase):
from transaction.views import TransactionCreateView
url = reverse('transaction:transaction_add')
view_class = TransactionCreateView
middleware_classes = [
SessionMiddleware,
(MessageMiddleware, MiddlewareType.PROCESS_REQUEST),
]
def test_has_form_on_context(self):
from transaction.forms import TransactionForm
response = self.get()
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/add.html')
self.assertIsInstance(response.context_data['form'], TransactionForm)
def test_show_form_with_errors(self):
form_data = {}
_, response = self.post(form_data)
response.render()
form = response.context_data['form']
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/add.html')
self.assertEqual(4, len(form.errors))
self.assertTrue(form['transaction_type'].errors)
self.assertTrue(form['category'].errors)
self.assertTrue(form['amount'].errors)
self.assertTrue(form['date'].errors)
def test_redirects_after_save(self):
from transaction.forms import TransactionForm
category = mommy.make('Category')
transaction = mommy.prepare('Transaction', category=category)
form_data = flatten_to_dict(TransactionForm(instance=transaction))
_, response = self.post(form_data)
self.assertEqual(302, response.status_code)
self.assertEqual(('Location', reverse('transaction:transaction_list')), response._headers['location'])
def test_confirm_saved_object(self):
from transaction.models import Transaction
from transaction.forms import TransactionForm
category = mommy.make('Category')
old = mommy.prepare('Transaction', category=category)
form_data = flatten_to_dict(TransactionForm(instance=old))
self.post(form_data)
new = Transaction.objects.get(pk=1)
self.assertEqual(1, Transaction.objects.count())
self.assertEqual(old.transaction_type, new.transaction_type)
self.assertEqual(old.category, new.category)
self.assertEqual(old.amount, new.amount)
self.assertEqual(old.notes, new.notes)
self.assertEqual(old.date, new.date)
def test_show_alert_message_after_save(self):
from transaction.forms import TransactionForm
category = mommy.make('Category')
old = mommy.prepare('Transaction', category=category)
form_data = flatten_to_dict(TransactionForm(instance=old))
request, response = self.post(form_data)
self.assert_redirect(response, reverse('transaction:transaction_list'))
message = 'Transaction was created successfuly!'
self.assert_message_exists(request, messages.SUCCESS, message)
def test_html_with_a_unbound_form(self):
response = self.get()
self.assertNotContains(response, 'INVALID VARIABLE:')
self.assertContains(response, 'Add A Transaction', count=2)
self.assertContains(response, 'id="id_notes"')
self.assertContains(response, 'id="id_category"')
self.assertContains(response, 'id="id_amount"')
self.assertContains(response, 'id="id_date"')
self.assertContains(response, reverse('transaction:transaction_list'))
def test_view_redirect_if_anonymous(self):
request = self.factory.get(path=self.url, user=self.anonymous_user)
response = self.view(request)
self.assertEqual(302, response.status_code)
self.assertEqual('%s?next=%s' % (reverse('login'), self.url), response._headers['location'][1])
def get(self):
request = self.factory.get(path=self.url, user=self.mock_user)
response = self.view(request)
return response.render()
def post(self, form_data):
request = self.factory.post(path=self.url, data=form_data, user=self.mock_user)
return request, self.view(request)
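These tests build their POST payloads with rebar's `flatten_to_dict`, which turns a form built from a model instance into a plain `{field_name: value}` dict that the request factory can POST back to the view. A rough standalone approximation of that idea (not rebar's actual implementation; it only needs duck-typed `fields`/`initial` attributes):

```python
def flatten_form_to_dict(form):
    """Collect each field's value into a flat {name: value} dict -- the
    shape a test client can POST straight back to the view."""
    data = {}
    for name, field in form.fields.items():
        # Per-form initial data wins over the field's own default.
        value = form.initial.get(name, field.initial)
        if value is not None:
            data[name] = value
    return data
```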
class TransactionEditViewTest(BaseTestCase):
from transaction.views import TransactionUpdateView
view_class = TransactionUpdateView
middleware_classes = [
SessionMiddleware,
(MessageMiddleware, MiddlewareType.PROCESS_REQUEST),
]
def test_has_form_on_context(self):
from transaction.forms import TransactionForm
transaction = mommy.make('Transaction')
response = self.get(transaction)
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/edit.html')
self.assertIsInstance(response.context_data['form'], TransactionForm)
def test_empty_post_should_show_form_with_errors(self):
transaction = mommy.make('Transaction')
form_data = {}
_, response = self.post(transaction, form_data)
response.render()
form = response.context_data['form']
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/add.html')
self.assertEqual(4, len(form.errors))
self.assertTrue(form['transaction_type'].errors)
self.assertTrue(form['category'].errors)
self.assertTrue(form['amount'].errors)
self.assertTrue(form['date'].errors)
self.assertFalse(form['notes'].errors)
def test_redirects_after_save(self):
from transaction.forms import TransactionForm
transaction = mommy.make('Transaction')
form_data = flatten_to_dict(TransactionForm(instance=transaction))
_, response = self.post(transaction, form_data)
self.assertEqual(302, response.status_code)
self.assertEqual(reverse('transaction:transaction_list'), response._headers['location'][1])
def test_confirm_saved_object(self):
from transaction.models import Transaction
from transaction.forms import TransactionForm
old = mommy.make('Transaction', notes='Foo')
form_data = flatten_to_dict(TransactionForm(instance=old))
form_data['notes'] = 'Bar'
self.post(old, form_data)
new = Transaction.active.get(pk=1)
self.assertEqual(1, Transaction.active.count())
self.assertEqual(old.transaction_type, new.transaction_type)
self.assertEqual('Bar', new.notes)
self.assertEqual(old.category, new.category)
self.assertEqual(old.amount, new.amount)
self.assertEqual(old.date, new.date)
def test_show_alert_message_after_save(self):
from transaction.forms import TransactionForm
old = mommy.make('Transaction')
form_data = flatten_to_dict(TransactionForm(instance=old))
request, response = self.post(old, form_data)
self.assert_redirect(response, reverse('transaction:transaction_list'))
message = 'Transaction was updated successfuly!'
self.assert_message_exists(request, messages.SUCCESS, message)
def test_view_html_with_a_bound_form(self):
transaction = mommy.make('Transaction')
response = self.get(transaction)
self.assertNotContains(response, 'INVALID VARIABLE:')
self.assertContains(response, 'Edit Transaction', count=2)
self.assertContains(response, transaction.transaction_type)
self.assertContains(response, transaction.get_transaction_type_display())
self.assertContains(response, transaction.notes)
self.assertContains(response, transaction.category.name)
self.assertContains(response, transaction.date)
self.assertContains(response, reverse('transaction:transaction_list'))
self.assertContains(response, reverse('transaction:transaction_delete', kwargs={'pk': transaction.pk}))
def test_view_redirect_if_anonymous(self):
pk = 1
url = reverse('transaction:transaction_edit', kwargs={'pk': pk})
request = self.factory.get(path=url, user=self.anonymous_user)
response = self.view(request, pk=pk)
self.assertEqual(302, response.status_code)
self.assertEqual('%s?next=%s' % (reverse('login'), url), response._headers['location'][1])
def get(self, transaction):
url = reverse('transaction:transaction_edit', kwargs={'pk': transaction.id})
request = self.factory.get(path=url, user=self.mock_user)
response = self.view(request, pk=transaction.id)
return response.render()
def post(self, transaction, form_data):
url = reverse('transaction:transaction_edit', kwargs={'pk': transaction.id})
request = self.factory.post(path=url, data=form_data, user=self.mock_user)
return request, self.view(request, pk=transaction.id)
class TransactionDeleteViewTest(BaseTestCase):
from transaction.views import TransactionDeleteView
view_class = TransactionDeleteView
def test_view_status_code_and_template_on_get(self):
transaction = mommy.make('Transaction')
response = self.get(transaction)
self.assertEqual(200, response.status_code)
self.assertTemplateUsed(response, 'transaction/delete.html')
def test_view_redirects_after_delete(self):
transaction = mommy.make('Transaction')
response = self.post(transaction)
self.assertEqual(302, response.status_code)
self.assertEqual(('Location', reverse('transaction:transaction_list')), response._headers['location'])
def test_confirm_deleted_object(self):
from transaction.models import Transaction
old_transaction = mommy.make('Transaction')
self.post(old_transaction)
new_transaction = self.refresh(old_transaction)
self.assertEqual(1, Transaction.objects.count())
self.assertEqual(0, Transaction.active.count())
self.assertTrue(new_transaction.is_deleted)
def test_html_content_on_delete_view(self):
transaction = mommy.make('Transaction')
response = self.get(transaction)
self.assertNotContains(response, 'INVALID VARIABLE:')
self.assertContains(response, 'Delete Transaction', count=2)
self.assertContains(response, 'Are you sure you want to delete "%s"?' % transaction)
self.assertContains(response, reverse('transaction:transaction_list'))
def test_view_redirect_if_anonymous(self):
pk = 1
url = reverse('transaction:transaction_delete', kwargs={'pk': pk})
request = self.factory.get(path=url, user=self.anonymous_user)
response = self.view(request, pk=pk)
self.assertEqual(302, response.status_code)
self.assertEqual('%s?next=%s' % (reverse('login'), url), response._headers['location'][1])
def get(self, transaction):
url = reverse('transaction:transaction_delete', kwargs={'pk': transaction.id})
request = self.factory.get(path=url, user=self.mock_user)
response = self.view(request, pk=transaction.id)
return response.render()
def post(self, transaction):
url = reverse('transaction:transaction_delete', kwargs={'pk': transaction.id})
request = self.factory.post(path=url, user=self.mock_user)
response = self.view(request, pk=transaction.id)
return response
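`test_confirm_deleted_object` above relies on the app's soft-delete scheme: the delete view only flips `is_deleted`, so the default `objects` manager still counts the row while the `active` manager excludes it. The idea reduced to plain Python (an illustrative sketch, not the Django manager implementation):

```python
class SoftDeleteStore:
    """Soft delete: rows are flagged, never removed, so the 'objects'
    count stays constant while the 'active' count drops."""

    def __init__(self):
        self._rows = []

    def add(self, row):
        row["is_deleted"] = False
        self._rows.append(row)

    def delete(self, row):
        row["is_deleted"] = True  # flag the row, don't remove it

    def objects_count(self):
        return len(self._rows)

    def active_count(self):
        return sum(1 for r in self._rows if not r["is_deleted"])
```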
# --- DQMOffline/Muon/python/EfficencyPlotter_cfi.py (ckamtsikis/cmssw, Apache-2.0) ---
import FWCore.ParameterSet.Config as cms
from DQMServices.Core.DQMEDHarvester import DQMEDHarvester
effPlotterLoose = DQMEDHarvester("EfficiencyPlotter",
folder = cms.string("Muons/EfficiencyAnalyzer"),
phiMin = cms.double(-3.2),
etaMin = cms.double(-2.5),
ptMin = cms.double(10),
etaBin = cms.int32(8),
ptBin = cms.int32(10),
phiBin = cms.int32(8),
etaMax = cms.double(2.5),
phiMax = cms.double(3.2),
ptMax = cms.double(100),
vtxBin = cms.int32(10),
vtxMin = cms.double(0.5),
vtxMax = cms.double(40.5),
MuonID = cms.string("Loose")
)
# The five remaining harvesters are identical except for MuonID and, for the
# miniAOD variants, the folder; derive them with clone() instead of repeating
# every parameter.
effPlotterMedium = effPlotterLoose.clone(MuonID = "Medium")
effPlotterTight = effPlotterLoose.clone(MuonID = "Tight")
effPlotterLooseMiniAOD = effPlotterLoose.clone(folder = "Muons_miniAOD/EfficiencyAnalyzer")
effPlotterMediumMiniAOD = effPlotterLooseMiniAOD.clone(MuonID = "Medium")
effPlotterTightMiniAOD = effPlotterLooseMiniAOD.clone(MuonID = "Tight")
# --- torchfusion/learners/learners.py (fbremer/TorchFusion, MIT) ---
import torch
from torch.autograd import Variable
import torch.cuda as cuda
from torch.utils.data import DataLoader
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR
import os
from time import time
from math import ceil
from io import open
from ..utils import PlotInput, visualize, get_model_summary,get_batch_size,clip_grads,save_model,load_model
from tensorboardX import SummaryWriter
from torch.optim.lr_scheduler import ReduceLROnPlateau
import torch.onnx as onnx
import torch.backends.cudnn as cudnn
r"""Abstract base Model for training, evaluating and performing inference
All custom models should subclass this and implement train, evaluate and predict functions
Args:
use_cuda_if_available (boolean): If set to true, training would be done on a gpu if any is available
"""
class AbstractBaseLearner():
def __init__(self, use_cuda_if_available=True):
self.cuda = False
self.fp16_mode = False
if use_cuda_if_available and cuda.is_available():
self.cuda = True
cudnn.benchmark = True
self.epoch_start_funcs = []
self.batch_start_funcs = []
self.epoch_end_funcs = []
self.batch_end_funcs = []
self.train_completed_funcs = []
r"""Defines the training loop
subclasses must override this
"""
def train(self, *args):
raise NotImplementedError()
r"""Defines the evaluation loop
subclasses must override this
"""
def evaluate(self, *args):
raise NotImplementedError()
r"""Defines the validation loop
subclasses must override this
"""
def validate(self, *args):
raise NotImplementedError()
r"""Defines the prediction logic
subclasses must override this
"""
def predict(self, *args):
raise NotImplementedError()
r"""Adds a function to be called at the start of each epoch
It should have the following signature::
func(epoch) -> None
"""
def half(self):
self.fp16_mode = True
def add_on_epoch_start(self,func):
self.epoch_start_funcs.append(func)
r"""Adds a function to be called at the end of each epoch
It should have the following signature::
func(epoch,data) -> None
data is a dictionary containing metric values, losses and vital details at the end of the epoch
"""
def add_on_epoch_end(self, func):
self.epoch_end_funcs.append(func)
r"""Adds a function to be called at the start of each batch
It should have the following signature::
func(epoch,batch) -> None
"""
def add_on_batch_start(self, func):
self.batch_start_funcs.append(func)
r"""Adds a function to be called at the end of each batch
It should have the following signature::
func(epoch,batch,data) -> None
data is a dictionary containing metric values, losses and vital details at the end of the batch
"""
def add_on_batch_end(self, func):
self.batch_end_funcs.append(func)
r"""Adds a function to be called at the end of training
It should have the following signature::
func(data) -> None
data is a dictionary containing metric values, duration and vital details at the end of training
"""
def add_on_training_completed(self, func):
self.train_completed_funcs.append(func)
r""" This function should return a dictionary containing information about the training including metric values.
Child classes must override this.
"""
def get_train_history(self):
raise NotImplementedError()
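The four `add_on_*` registries above implement a plain observer pattern: each registration appends to a list, and the training loop later calls every registered function with the documented signature. Stripped of the training machinery, the mechanism is just this (a minimal standalone sketch, names mine):

```python
class Hooks:
    """Observer pattern as used by the add_on_* registries: registration
    appends; firing calls each subscriber in registration order."""

    def __init__(self):
        self.epoch_end_funcs = []

    def add_on_epoch_end(self, func):
        self.epoch_end_funcs.append(func)

    def fire_epoch_end(self, epoch, data):
        for func in self.epoch_end_funcs:
            func(epoch, data)

seen = []
hooks = Hooks()
hooks.add_on_epoch_end(lambda epoch, data: seen.append((epoch, data["train_loss"])))
hooks.fire_epoch_end(1, {"train_loss": 0.25})
```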
r"""This is the base learner for training, evaluating and performing inference with a single model
All custom learners should subclass this and implement __train__, __evaluate__,__validate__ and __predict__ functions
This class already takes care of data loading, iterations and metrics, subclasses only need to define custom logic for training,
evaluation and prediction
Args:
model (nn.Module): the module to be used for training, evaluation and inference.
use_cuda_if_available (boolean): If set to true, training would be done on a gpu if any is available
"""
class BaseLearner(AbstractBaseLearner):
def __init__(self, model, use_cuda_if_available=True):
self.model = model
super(BaseLearner, self).__init__(use_cuda_if_available)
self.__train_history__ = {}
self.train_running_loss = None
self.train_metrics = None
self.test_metrics = None
self.val_metrics = None
self.iterations = 0
self.model_dir = os.getcwd()
r"""Initialize model weights using pre-trained weights from the filepath
Args:
path (str): path to a compatible pre-trained model file
"""
def load_model(self, path):
load_model(self.model,path)
r"""Saves the model to the path specified
Args:
path (str): path to save model
save_architecture (boolean): if True, both weights and architecture will be saved, default is False
"""
def save_model(self, path,save_architecture=False):
save_model(self.model,path,save_architecture)
def train(self,*args):
self.__train_loop__(*args)
def __train_loop__(self, train_loader, train_metrics, test_loader=None, test_metrics=None, val_loader=None,val_metrics=None, num_epochs=10,lr_scheduler=None,
save_models="all", model_dir=os.getcwd(),save_model_interval=1,display_metrics=True, save_metrics=True, notebook_mode=False, batch_log=True, save_logs=None,
visdom_log=None,tensorboard_log=None, save_architecture=False):
"""
:param train_loader:
:param train_metrics:
:param test_loader:
:param test_metrics:
:param val_loader:
:param val_metrics:
:param num_epochs:
:param lr_scheduler:
:param save_models:
:param model_dir:
:param save_model_interval:
:param display_metrics:
:param save_metrics:
:param notebook_mode:
:param batch_log:
:param save_logs:
:param visdom_log:
:param tensorboard_log:
:param save_architecture:
:return:
"""
if save_models not in ["all", "best"]:
raise ValueError("save models must be 'all' or 'best' , {} is invalid".format(save_models))
if save_models == "best" and test_loader is None and val_loader is None:
raise ValueError("save models can only be best when test_loader or val_loader is provided ")
if test_loader is not None:
if test_metrics is None:
raise ValueError("You must provide a metric for your test data")
elif len(test_metrics) == 0:
raise ValueError("test metrics cannot be an empty list")
if val_loader is not None:
if val_metrics is None:
raise ValueError("You must provide a metric for your val data")
elif len(val_metrics) == 0:
raise ValueError("val metrics cannot be an empty list")
self.train_metrics = train_metrics
self.test_metrics = test_metrics
self.val_metrics = val_metrics
self.model_dir = model_dir
if not os.path.exists(model_dir):
os.mkdir(model_dir)
models_all = os.path.join(model_dir, "all_models")
models_best = os.path.join(model_dir, "best_models")
if not os.path.exists(models_all):
os.mkdir(models_all)
if not os.path.exists(models_best) and (test_loader is not None or val_loader is not None):
os.mkdir(models_best)
from tqdm import tqdm_notebook
from tqdm import tqdm
best_test_metric = 0.0
best_val_metric = 0.0
train_start_time = time()
for e in range(num_epochs):
print("Epoch {} of {}".format(e + 1, num_epochs))
for metric in self.train_metrics:
metric.reset()
self.model.train()
for func in self.epoch_start_funcs:
func(e + 1)
self.train_running_loss = torch.Tensor([0.0])
train_loss = 0.0
data_len = 0
if notebook_mode and batch_log:
progress_ = tqdm_notebook(enumerate(train_loader))
elif batch_log:
progress_ = tqdm(enumerate(train_loader))
else:
progress_ = enumerate(train_loader)
max_batch_size = 0
init_time = time()
for i, data in progress_:
for func in self.batch_start_funcs:
func(e + 1,i + 1)
batch_size = get_batch_size(data)
if max_batch_size < batch_size:
max_batch_size = batch_size
self.__train_func__(data)
self.iterations += 1
data_len += batch_size
train_loss = self.train_running_loss.item() / data_len
if batch_log:
progress_message = ""
for metric in self.train_metrics:
progress_message += "Train {} : {}".format(metric.name, metric.getValue())
progress_.set_description("{}/{} batches ".format(int(ceil(data_len / max_batch_size)),
int(ceil(len(
train_loader.dataset) / max_batch_size))))
progress_dict = {"Train Loss": train_loss}
for metric in self.train_metrics:
progress_dict["Train " + metric.name] = metric.getValue()
progress_.set_postfix(progress_dict)
batch_info = {"train_loss":train_loss}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
batch_info[metric_name] = metric.getValue()
for func in self.batch_end_funcs:
func(e + 1,i + 1,batch_info)
if self.cuda:
cuda.synchronize()
duration = time() - init_time
if "duration" in self.__train_history__:
self.__train_history__["duration"].append(duration)
else:
self.__train_history__["duration"] = [duration]
if "train_loss" in self.__train_history__:
self.__train_history__["train_loss"].append(train_loss)
else:
self.__train_history__["train_loss"] = [train_loss]
model_file = os.path.join(models_all, "model_{}.pth".format(e + 1))
if save_models == "all" and (e+1) % save_model_interval == 0:
self.save_model(model_file,save_architecture)
logfile = None
if save_logs is not None:
logfile = open(save_logs, "a")
print(os.linesep + "Epoch: {}, Duration: {} , Train Loss: {}".format(e + 1, duration, train_loss))
if logfile is not None:
logfile.write(
os.linesep + "Epoch: {}, Duration: {} , Train Loss: {}".format(e + 1, duration, train_loss))
if val_loader is None and lr_scheduler is not None:
if isinstance(lr_scheduler,ReduceLROnPlateau):
lr_scheduler.step(train_metrics[0].getValue())
else:
lr_scheduler.step()
if test_loader is not None:
message = "Test {} did not improve, current best is {}".format(test_metrics[0].name, best_test_metric)
current_best = best_test_metric
self.evaluate(test_loader, test_metrics)
result = self.test_metrics[0].getValue()
if result > current_best:
best_test_metric = result
message = "Test {} improved from {} to {}".format(test_metrics[0].name, current_best, result)
model_file = os.path.join(models_best, "model_{}.pth".format(e + 1))
self.save_model(model_file,save_architecture)
print(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
if logfile is not None:
logfile.write(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
else:
print(os.linesep + message)
if logfile is not None:
logfile.write(os.linesep + message)
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Test {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Test {} : {}".format(metric.name, metric.getValue()))
if val_loader is not None:
message = "Val {} did not improve, current best is {}".format(val_metrics[0].name, best_val_metric)
current_best = best_val_metric
self.validate(val_loader, val_metrics)
result = self.val_metrics[0].getValue()
if lr_scheduler is not None:
if isinstance(lr_scheduler, ReduceLROnPlateau):
lr_scheduler.step(result)
else:
lr_scheduler.step()
if result > current_best:
best_val_metric = result
message = "Val {} improved from {} to {}".format(val_metrics[0].name, current_best, result)
if test_loader is None:
model_file = os.path.join(models_best, "model_{}.pth".format(e + 1))
self.save_model(model_file,save_architecture)
print(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
if logfile is not None:
logfile.write(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
else:
print(os.linesep + "{}".format(message))
if logfile is not None:
logfile.write(os.linesep + "{}".format(message))
else:
print(os.linesep + message)
if logfile is not None:
logfile.write(os.linesep + message)
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Val {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Val {} : {}".format(metric.name, metric.getValue()))
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Train {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Train {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.close()
if "epoch" in self.__train_history__:
self.__train_history__["epoch"].append(e+1)
else:
self.__train_history__["epoch"] = [e+1]
epoch_arr = self.__train_history__["epoch"]
epoch_arr_tensor = torch.LongTensor(epoch_arr)
if visdom_log is not None:
visdom_log.plot_line(torch.FloatTensor(self.__train_history__["train_loss"]),epoch_arr_tensor,win="train_loss",title="Train Loss")
if test_metrics is not None:
for metric in test_metrics:
metric_name = "test_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]),epoch_arr_tensor,win="test_{}".format(metric.name),title="Test {}".format(metric.name))
if val_metrics is not None:
for metric in val_metrics:
metric_name = "val_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]),epoch_arr_tensor,win="val_{}".format(metric.name),title="Val {}".format(metric.name))
for metric in train_metrics:
metric_name = "train_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]), epoch_arr_tensor,
win="train_{}".format(metric.name), title="Train {}".format(metric.name))
if tensorboard_log is not None:
writer = SummaryWriter(os.path.join(model_dir,tensorboard_log))
writer.add_scalar("logs/train_loss", train_loss, global_step=e+1)
if test_metrics is not None:
for metric in test_metrics:
writer.add_scalar("logs/test_metrics/{}".format(metric.name), metric.getValue(),
global_step=e+1)
if val_metrics is not None:
for metric in val_metrics:
writer.add_scalar("logs/val_metrics/{}".format(metric.name), metric.getValue(),
global_step=e+1)
for metric in train_metrics:
writer.add_scalar("logs/train_metrics/{}".format(metric.name), metric.getValue(),
global_step=e + 1)
writer.close()
if display_metrics or save_metrics:
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "epoch_{}_loss.png".format(e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__["train_loss"], name="Train Loss", color="red")],
display=display_metrics,
save_path=save_path)
if test_loader is not None and (display_metrics or save_metrics):
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "test_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Test " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
if val_loader is not None and (display_metrics or save_metrics):
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "val_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Val " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "train_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Train " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
epoch_info = {"train_loss": train_loss,"duration":duration}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
if self.test_metrics is not None and test_loader is not None:
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
if self.val_metrics is not None and val_loader is not None:
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
for func in self.epoch_end_funcs:
func(e + 1,epoch_info)
train_end_time = time() - train_start_time
train_info = {"train_duration":train_end_time}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
train_info[metric_name] = metric.getValue()
if self.test_metrics is not None and test_loader is not None:
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
train_info[metric_name] = metric.getValue()
if val_loader is not None:
    for metric in self.val_metrics:
        metric_name = "val_{}".format(metric.name)
        train_info[metric_name] = metric.getValue()
for func in self.train_completed_funcs:
func(train_info)
def __train_func__(self, data):
    r"""Training logic; all models must override this.

    Args:
        data: a single batch of data from the train_loader
    """
    raise NotImplementedError()
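The `__train_func__` stub above is the template-method pattern: the base learner owns the epoch/batch loop and delegates the per-batch step to subclasses, which must override the stub. A minimal, framework-free sketch of the same idea (`Learner` and `SquareLearner` are hypothetical names, not part of this library):

```python
class Learner:
    def fit(self, batches):
        # the base class owns the loop over batches...
        return [self.train_step(b) for b in batches]

    def train_step(self, batch):
        # ...while subclasses must supply the per-batch logic
        raise NotImplementedError()

class SquareLearner(Learner):
    def train_step(self, batch):
        return batch * batch

print(SquareLearner().fit([1, 2, 3]))  # [1, 4, 9]
```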
def evaluate(self, test_loader, metrics):
    r"""Evaluates the test set on the provided metrics.

    Args:
        test_loader (DataLoader): an instance of DataLoader containing the test set
        metrics ([]): a list of metrics for evaluating the test set
    """
    if self.test_metrics is None:
        self.test_metrics = metrics
    for metric in self.test_metrics:
        metric.reset()
    self.model.eval()
    # evaluation does not need gradient tracking
    with torch.no_grad():
        for i, data in enumerate(test_loader):
            self.__eval_function__(data)
def __eval_function__(self, data):
    r"""Evaluation logic; all models must override this.

    Args:
        data: a single batch of data from the test_loader
    """
    raise NotImplementedError()
def validate(self, val_loader, metrics):
    r"""Validates the model on the validation set using the provided metrics.

    Args:
        val_loader (DataLoader): an instance of DataLoader containing the validation set
        metrics ([]): a list of metrics for evaluating the validation set
    """
    if self.val_metrics is None:
        self.val_metrics = metrics
    for metric in self.val_metrics:
        metric.reset()
    self.model.eval()
    # validation does not need gradient tracking
    with torch.no_grad():
        for i, data in enumerate(val_loader):
            self.__val_function__(data)
def __val_function__(self, data):
    r"""Validation logic; all models must override this.

    Args:
        data: a single batch of data from the val_loader
    """
    raise NotImplementedError()
def predict(self, inputs):
    r"""Runs inference on the given input.

    Args:
        inputs: a DataLoader or a tensor of input values
    """
    self.model.eval()
    # inference does not need gradient tracking
    with torch.no_grad():
        if isinstance(inputs, DataLoader):
            predictions = []
            for i, data in enumerate(inputs):
                batch_pred = self.__predict_func__(data)
                for pred in batch_pred:
                    predictions.append(pred.unsqueeze(0))
            return torch.cat(predictions)
        else:
            pred = self.__predict_func__(inputs)
            return pred.squeeze(0)
def __predict_func__(self, inputs):
    r"""Inference logic; all models must override this.

    Args:
        inputs: a batch of data
    """
    raise NotImplementedError()
def get_train_history(self):
    r"""Returns a dictionary containing the values of metrics, epochs and loss recorded during training."""
    return self.__train_history__
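The history dictionary returned above is filled in the training loop with repeated "append if the key exists, otherwise create a one-element list" blocks. As a side note, `dict.setdefault` expresses the same append-or-create step in one line; a small standalone sketch (plain Python, hypothetical `record` helper):

```python
history = {}

def record(history, key, value):
    # append-or-create: equivalent to the if/else blocks in the training loop
    history.setdefault(key, []).append(value)

for epoch, loss in enumerate([0.9, 0.5, 0.3], start=1):
    record(history, "epoch", epoch)
    record(history, "train_loss", loss)

print(history)  # {'epoch': [1, 2, 3], 'train_loss': [0.9, 0.5, 0.3]}
```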
class BaseTextLearner(BaseLearner):
def __init__(self, model, source_field, target_field, batch_first=False,use_cuda_if_available=True):
super(BaseTextLearner, self).__init__(model,use_cuda_if_available)
self.batch_first = batch_first
self.source_field = source_field
self.target_field = target_field
def __train_loop__(self, train_loader, train_metrics, test_loader=None, test_metrics=None, val_loader=None,val_metrics=None, num_epochs=10,lr_scheduler=None,
save_models="all", model_dir=os.getcwd(),save_model_interval=1,display_metrics=True, save_metrics=True, notebook_mode=False, batch_log=True, save_logs=None,
visdom_log=None,tensorboard_log=None, save_architecture=False):
r"""Runs the core training loop for text models. Parameters are the same as
:meth:`StandardLearner.train`, except that each batch is an attribute-style object
whose inputs and targets are read from ``self.source_field`` and ``self.target_field``.
"""
if save_models not in ["all", "best"]:
    raise ValueError("save_models must be 'all' or 'best'; '{}' is invalid".format(save_models))
if save_models == "best" and test_loader is None and val_loader is None:
    raise ValueError("save_models can only be 'best' when a test_loader or val_loader is provided")
if test_loader is not None:
if test_metrics is None:
raise ValueError("You must provide a metric for your test data")
elif len(test_metrics) == 0:
raise ValueError("test metrics cannot be an empty list")
if val_loader is not None:
if val_metrics is None:
raise ValueError("You must provide a metric for your val data")
elif len(val_metrics) == 0:
raise ValueError("val metrics cannot be an empty list")
self.train_metrics = train_metrics
self.test_metrics = test_metrics
self.val_metrics = val_metrics
self.model_dir = model_dir
if not os.path.exists(model_dir):
os.mkdir(model_dir)
models_all = os.path.join(model_dir, "all_models")
models_best = os.path.join(model_dir, "best_models")
if not os.path.exists(models_all):
os.mkdir(models_all)
if not os.path.exists(models_best) and (test_loader is not None or val_loader is not None):
os.mkdir(models_best)
from tqdm import tqdm_notebook
from tqdm import tqdm
best_test_metric = 0.0
best_val_metric = 0.0
train_start_time = time()
for e in range(num_epochs):
print("Epoch {} of {}".format(e + 1, num_epochs))
for metric in self.train_metrics:
metric.reset()
self.model.train()
for func in self.epoch_start_funcs:
func(e + 1)
self.train_running_loss = torch.Tensor([0.0])
train_loss = 0.0
data_len = 0
if notebook_mode and batch_log:
progress_ = tqdm_notebook(enumerate(train_loader))
elif batch_log:
progress_ = tqdm(enumerate(train_loader))
else:
progress_ = enumerate(train_loader)
max_batch_size = 0
init_time = time()
for i, data in progress_:
for func in self.batch_start_funcs:
func(e + 1,i + 1)
source = getattr(data, self.source_field)
batch_size = get_batch_size(source, self.batch_first)
if max_batch_size < batch_size:
max_batch_size = batch_size
self.__train_func__(data)
self.iterations += 1
data_len += batch_size
train_loss = self.train_running_loss.item() / data_len
if batch_log:
progress_.set_description("{}/{} batches ".format(int(ceil(data_len / max_batch_size)),
int(ceil(len(
train_loader.dataset) / max_batch_size))))
progress_dict = {"Train Loss": train_loss}
for metric in self.train_metrics:
progress_dict["Train " + metric.name] = metric.getValue()
progress_.set_postfix(progress_dict)
batch_info = {"train_loss":train_loss}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
batch_info[metric_name] = metric.getValue()
for func in self.batch_end_funcs:
func(e + 1,i + 1,batch_info)
if self.cuda:
cuda.synchronize()
duration = time() - init_time
if "duration" in self.__train_history__:
self.__train_history__["duration"].append(duration)
else:
self.__train_history__["duration"] = [duration]
if "train_loss" in self.__train_history__:
self.__train_history__["train_loss"].append(train_loss)
else:
self.__train_history__["train_loss"] = [train_loss]
model_file = os.path.join(models_all, "model_{}.pth".format(e + 1))
if save_models == "all" and (e+1) % save_model_interval == 0:
self.save_model(model_file,save_architecture)
logfile = None
if save_logs is not None:
logfile = open(save_logs, "a")
print(os.linesep + "Epoch: {}, Duration: {} , Train Loss: {}".format(e + 1, duration, train_loss))
if logfile is not None:
logfile.write(
os.linesep + "Epoch: {}, Duration: {} , Train Loss: {}".format(e + 1, duration, train_loss))
if val_loader is None and lr_scheduler is not None:
if isinstance(lr_scheduler,ReduceLROnPlateau):
lr_scheduler.step(train_metrics[0].getValue())
else:
lr_scheduler.step()
if test_loader is not None:
message = "Test {} did not improve, current best is {}".format(test_metrics[0].name, best_test_metric)
current_best = best_test_metric
self.evaluate(test_loader, test_metrics)
result = self.test_metrics[0].getValue()
if result > current_best:
best_test_metric = result
message = "Test {} improved from {} to {}".format(test_metrics[0].name, current_best, result)
model_file = os.path.join(models_best, "model_{}.pth".format(e + 1))
self.save_model(model_file,save_architecture)
print(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
if logfile is not None:
logfile.write(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
else:
print(os.linesep + message)
if logfile is not None:
logfile.write(os.linesep + message)
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Test {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Test {} : {}".format(metric.name, metric.getValue()))
if val_loader is not None:
message = "Val {} did not improve, current best is {}".format(val_metrics[0].name, best_val_metric)
current_best = best_val_metric
self.validate(val_loader, val_metrics)
result = self.val_metrics[0].getValue()
if lr_scheduler is not None:
if isinstance(lr_scheduler, ReduceLROnPlateau):
lr_scheduler.step(result)
else:
lr_scheduler.step()
if result > current_best:
best_val_metric = result
message = "Val {} improved from {} to {}".format(val_metrics[0].name, current_best, result)
if test_loader is None:
model_file = os.path.join(models_best, "model_{}.pth".format(e + 1))
self.save_model(model_file,save_architecture)
print(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
if logfile is not None:
logfile.write(os.linesep + "{} New Best Model saved in {}".format(message, model_file))
else:
print(os.linesep + "{}".format(message))
if logfile is not None:
logfile.write(os.linesep + "{}".format(message))
else:
print(os.linesep + message)
if logfile is not None:
logfile.write(os.linesep + message)
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Val {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Val {} : {}".format(metric.name, metric.getValue()))
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
if metric_name in self.__train_history__:
self.__train_history__[metric_name].append(metric.getValue())
else:
self.__train_history__[metric_name] = [metric.getValue()]
print("Train {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.write(os.linesep + "Train {} : {}".format(metric.name, metric.getValue()))
if logfile is not None:
logfile.close()
if "epoch" in self.__train_history__:
self.__train_history__["epoch"].append(e+1)
else:
self.__train_history__["epoch"] = [e+1]
epoch_arr = self.__train_history__["epoch"]
epoch_arr_tensor = torch.LongTensor(epoch_arr)
if visdom_log is not None:
visdom_log.plot_line(torch.FloatTensor(self.__train_history__["train_loss"]),epoch_arr_tensor,win="train_loss",title="Train Loss")
if test_metrics is not None:
for metric in test_metrics:
metric_name = "test_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]),epoch_arr_tensor,win="test_{}".format(metric.name),title="Test {}".format(metric.name))
if val_metrics is not None:
for metric in val_metrics:
metric_name = "val_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]),epoch_arr_tensor,win="val_{}".format(metric.name),title="Val {}".format(metric.name))
for metric in train_metrics:
metric_name = "train_{}".format(metric.name)
visdom_log.plot_line(torch.FloatTensor(self.__train_history__[metric_name]), epoch_arr_tensor,
win="train_{}".format(metric.name), title="Train {}".format(metric.name))
if tensorboard_log is not None:
writer = SummaryWriter(os.path.join(model_dir,tensorboard_log))
writer.add_scalar("logs/train_loss",train_loss,global_step=e+1)
if test_metrics is not None:
for metric in test_metrics:
writer.add_scalar("logs/test_metrics/{}".format(metric.name), metric.getValue(),
global_step=e+1)
if val_metrics is not None:
for metric in val_metrics:
writer.add_scalar("logs/val_metrics/{}".format(metric.name), metric.getValue(),
global_step=e+1)
for metric in train_metrics:
writer.add_scalar("logs/train_metrics/{}".format(metric.name), metric.getValue(),
global_step=e + 1)
writer.close()
if display_metrics or save_metrics:
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "epoch_{}_loss.png".format(e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__["train_loss"], name="Train Loss", color="red")],
display=display_metrics,
save_path=save_path)
if test_loader is not None and (display_metrics or save_metrics):
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "test_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Test " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
if val_loader is not None and (display_metrics or save_metrics):
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "val_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Val " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
save_path = None
if save_metrics:
save_path = os.path.join(model_dir, "train_{}_epoch_{}.png".format(metric.name, e + 1))
visualize(epoch_arr, [PlotInput(value=self.__train_history__[metric_name], name="Train " + metric.name, color="blue")],
display=display_metrics,
save_path=save_path)
epoch_info = {"train_loss": train_loss,"duration":duration}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
if self.test_metrics is not None and test_loader is not None:
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
if self.val_metrics is not None and val_loader is not None:
for metric in self.val_metrics:
metric_name = "val_{}".format(metric.name)
epoch_info[metric_name] = metric.getValue()
for func in self.epoch_end_funcs:
func(e + 1,epoch_info)
train_end_time = time() - train_start_time
train_info = {"train_duration":train_end_time}
for metric in self.train_metrics:
metric_name = "train_{}".format(metric.name)
train_info[metric_name] = metric.getValue()
if self.test_metrics is not None and test_loader is not None:
for metric in self.test_metrics:
metric_name = "test_{}".format(metric.name)
train_info[metric_name] = metric.getValue()
if val_loader is not None:
    for metric in self.val_metrics:
        metric_name = "val_{}".format(metric.name)
        train_info[metric_name] = metric.getValue()
for func in self.train_completed_funcs:
func(train_info)
class StandardLearner(BaseLearner):
def __init__(self, model, use_cuda_if_available=True):
super(StandardLearner,self).__init__(model, use_cuda_if_available)
def train(self, train_loader, loss_fn, optimizer, train_metrics, test_loader=None, test_metrics=None, val_loader=None, val_metrics=None, num_epochs=10, lr_scheduler=None,
          save_models="all", model_dir=os.getcwd(), save_model_interval=1, display_metrics=False, save_metrics=False, notebook_mode=False, batch_log=True, save_logs=None,
          visdom_log=None, tensorboard_log=None, save_architecture=False, clip_grads=None):
    r"""Trains the model.

    Args:
        train_loader (DataLoader): an instance of DataLoader containing the training set
        loss_fn (Loss): the loss function
        optimizer (Optimizer): an optimizer for updating parameters
        train_metrics ([]): a list of metrics for evaluating the training set
        test_loader (DataLoader): an instance of DataLoader containing the test set
        test_metrics ([]): a list of metrics for evaluating the test set
        val_loader (DataLoader): an instance of DataLoader containing the validation set
        val_metrics ([]): a list of metrics for evaluating the validation set
        num_epochs (int): the maximum number of training epochs
        lr_scheduler (_LRScheduler): learning rate scheduler, stepped at every epoch
        save_models (str): if 'all', the model is saved at the end of each epoch while the best models,
            based on the test set, are also saved in the best_models folder;
            if 'best', only the best models are saved, and test_loader and test_metrics must be provided
        model_dir (str): a path in which to save the models
        save_model_interval (int): saves the models after every n epochs
        display_metrics (bool): enables display of metrics and loss visualizations at the end of each epoch
        save_metrics (bool): enables saving of metrics and loss visualizations at the end of each epoch
        notebook_mode (bool): optimizes the progress bar for either jupyter notebooks or consoles
        batch_log (bool): enables printing of logs at every batch iteration
        save_logs (str): a filepath in which to permanently save logs at every epoch
        visdom_log (VisdomLogger): logs outputs and metrics to the visdom server
        tensorboard_log (str): logs outputs and metrics to this filepath for visualization in tensorboard
        save_architecture (bool): saves the architecture as well as the weights when saving models
        clip_grads: a tuple specifying the minimum and maximum gradient values
    """
self.optimizer = optimizer
self.loss_fn = loss_fn
self.clip_grads = clip_grads
super().__train_loop__(train_loader, train_metrics, test_loader, test_metrics, val_loader,val_metrics, num_epochs,lr_scheduler,
save_models, model_dir,save_model_interval,display_metrics, save_metrics, notebook_mode, batch_log, save_logs,
visdom_log,tensorboard_log, save_architecture)
def __train_func__(self, data):
    self.optimizer.zero_grad()
    train_x, train_y = data
    batch_size = get_batch_size(train_x)
    if isinstance(train_x, (list, tuple)):
        train_x = tuple(x.cuda() if self.cuda else x for x in train_x)
    else:
        train_x = train_x.cuda() if self.cuda else train_x
    if isinstance(train_y, (list, tuple)):
        train_y = tuple(y.cuda() if self.cuda else y for y in train_y)
    else:
        train_y = train_y.cuda() if self.cuda else train_y
    outputs = self.model(train_x)
    loss = self.loss_fn(outputs, train_y)
    if self.fp16_mode:
        self.optimizer.backward(loss)
    else:
        loss.backward()
    # clip gradients after the backward pass, once gradients actually exist
    if self.clip_grads is not None:
        clip_grads(self.model, self.clip_grads[0], self.clip_grads[1])
    self.optimizer.step()
    self.train_running_loss = self.train_running_loss + (loss.cpu().item() * batch_size)
    for metric in self.train_metrics:
        metric.update(outputs, train_y)
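The running-loss bookkeeping above multiplies each batch's mean loss by its batch size and later divides by the total sample count, so the epoch loss is a correct per-sample mean even when the last batch is smaller. A standalone sketch with made-up numbers showing why this differs from naively averaging the batch means:

```python
# per-batch mean losses and their batch sizes (the final batch is smaller)
batch_losses = [0.8, 0.6, 0.4]
batch_sizes = [32, 32, 16]

running_loss = 0.0
data_len = 0
for loss, size in zip(batch_losses, batch_sizes):
    running_loss += loss * size   # weight each batch mean by its size
    data_len += size
    epoch_loss = running_loss / data_len  # per-sample mean so far

# naive mean of batch means mis-weights the small final batch
naive_mean = sum(batch_losses) / len(batch_losses)
print(epoch_loss, naive_mean)  # 0.64 vs 0.6
```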
def __eval_function__(self, data):
test_x, test_y = data
if isinstance(test_x, (list, tuple)):
    test_x = tuple(x.cuda() if self.cuda else x for x in test_x)
else:
    test_x = test_x.cuda() if self.cuda else test_x
if isinstance(test_y, (list, tuple)):
    test_y = tuple(y.cuda() if self.cuda else y for y in test_y)
else:
    test_y = test_y.cuda() if self.cuda else test_y
outputs = self.model(test_x)
for metric in self.test_metrics:
metric.update(outputs, test_y)
def __val_function__(self, data):
val_x, val_y = data
if isinstance(val_x, (list, tuple)):
    val_x = tuple(x.cuda() if self.cuda else x for x in val_x)
else:
    val_x = val_x.cuda() if self.cuda else val_x
if isinstance(val_y, (list, tuple)):
    val_y = tuple(y.cuda() if self.cuda else y for y in val_y)
else:
    val_y = val_y.cuda() if self.cuda else val_y
outputs = self.model(val_x)
for metric in self.val_metrics:
metric.update(outputs, val_y)
def __predict_func__(self, inputs):
if isinstance(inputs, (list, tuple)):
    inputs = tuple(x.cuda() if self.cuda else x for x in inputs)
else:
    inputs = inputs.cuda() if self.cuda else inputs
return self.model(inputs)
def summary(self, input_sizes, input_types=torch.FloatTensor, item_length=26, tensorboard_log=None):
    r"""Returns a complete summary of the model.

    Args:
        input_sizes: a single tuple, or a list of tuples in the case of multiple inputs, specifying
            the size of the inputs to the model
        input_types: a single tensor type, or a list of tensor types in the case of multiple inputs,
            specifying the type of the inputs to the model
        item_length (int): the length of each item in the summary
        tensorboard_log (str): if set, the model is serialized into a format readable by tensorboard,
            useful for visualizing the model in tensorboard
    """
    if isinstance(input_sizes, list):
        inputs = tuple(torch.randn(input_size).type(input_type).unsqueeze(0) for input_size, input_type in zip(input_sizes, input_types))
        inputs = tuple(inp.cuda() if self.cuda else inp for inp in inputs)
    else:
        inputs = torch.randn(input_sizes).type(input_types).unsqueeze(0)
        inputs = inputs.cuda() if self.cuda else inputs
    return get_model_summary(self.model, inputs, item_length=item_length, tensorboard_log=tensorboard_log)
def to_onnx(self, input_sizes, path, input_types=torch.FloatTensor, **kwargs):
    r"""Saves the model in onnx format.

    Args:
        input_sizes: a single tuple, or a list of tuples in the case of multiple inputs, specifying
            the size of the inputs to the model
        path (str): the filepath to which the onnx model is saved
        input_types: a single tensor type, or a list of tensor types in the case of multiple inputs,
            specifying the type of the inputs to the model
    """
    if isinstance(input_sizes, list):
        inputs = tuple(torch.randn(input_size).type(input_type).unsqueeze(0) for input_size, input_type in zip(input_sizes, input_types))
        inputs = tuple(inp.cuda() if self.cuda else inp for inp in inputs)
    else:
        inputs = torch.randn(input_sizes).type(input_types).unsqueeze(0)
        inputs = inputs.cuda() if self.cuda else inputs
    return onnx._export(self.model, inputs, f=path, **kwargs)
class TextClassifier(BaseTextLearner):
def __init__(self, model, source_field, target_field, batch_first=False, use_cuda_if_available=True):
super(TextClassifier, self).__init__(model, source_field, target_field, batch_first, use_cuda_if_available)
def train(self, train_loader, loss_fn, optimizer, train_metrics, test_loader=None, test_metrics=None, val_loader=None, val_metrics=None, num_epochs=10, lr_scheduler=None,
          save_models="all", model_dir=os.getcwd(), save_model_interval=1, display_metrics=False, save_metrics=False, notebook_mode=False, batch_log=True, save_logs=None,
          visdom_log=None, tensorboard_log=None, save_architecture=False, clip_grads=None):
    r"""Trains the text classifier. Accepts the same arguments as :meth:`StandardLearner.train`;
    inputs are read from ``source_field`` and targets from ``target_field`` of each batch.
    """
self.optimizer = optimizer
self.loss_fn = loss_fn
self.clip_grads = clip_grads
super().__train_loop__(train_loader, train_metrics, test_loader, test_metrics, val_loader,val_metrics, num_epochs,lr_scheduler,
save_models, model_dir,save_model_interval,display_metrics, save_metrics, notebook_mode, batch_log, save_logs,
visdom_log,tensorboard_log, save_architecture)
def __train_func__(self, data):
    self.optimizer.zero_grad()
    train_x = getattr(data, self.source_field)
    train_y = getattr(data, self.target_field)
    batch_size = get_batch_size(train_x, self.batch_first)
    if isinstance(train_x, (list, tuple)):
        train_x = [x.cuda() if self.cuda else x for x in train_x]
    else:
        train_x = train_x.cuda() if self.cuda else train_x
    if isinstance(train_y, (list, tuple)):
        train_y = [y.cuda() if self.cuda else y for y in train_y]
    else:
        train_y = train_y.cuda() if self.cuda else train_y
    outputs = self.model(train_x)
    loss = self.loss_fn(outputs, train_y)
    if self.fp16_mode:
        self.optimizer.backward(loss)
    else:
        loss.backward()
    # clip gradients only after the backward pass has populated them
    if self.clip_grads is not None:
        clip_grads(self.model, self.clip_grads[0], self.clip_grads[1])
    self.optimizer.step()
    self.train_running_loss = self.train_running_loss + (loss.cpu().item() * batch_size)
    for metric in self.train_metrics:
        metric.update(outputs, train_y, self.batch_first)
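The clipping step bounds every gradient value to the `[minimum, maximum]` interval before the optimizer applies it. A minimal plain-Python sketch of that semantics (not the library's `clip_grads` utility, which operates on the model's parameter tensors):

```python
def clip_values(grads, lo, hi):
    # element-wise value clipping, as applied to each parameter gradient
    return [max(lo, min(hi, g)) for g in grads]

# gradients outside [-1.0, 1.0] are clamped to the bounds
clipped = clip_values([-2.5, 0.3, 7.0], -1.0, 1.0)
```

The same clamp is what a `clip_grads=(-1.0, 1.0)` argument to `train` would request, applied per gradient element.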
def __eval_function__(self, data):
    test_x = getattr(data, self.source_field)
    test_y = getattr(data, self.target_field)
    if isinstance(test_x, (list, tuple)):
        test_x = [x.cuda() if self.cuda else x for x in test_x]
    else:
        test_x = test_x.cuda() if self.cuda else test_x
    if isinstance(test_y, (list, tuple)):
        test_y = [y.cuda() if self.cuda else y for y in test_y]
    else:
        test_y = test_y.cuda() if self.cuda else test_y
    outputs = self.model(test_x)
    for metric in self.test_metrics:
        metric.update(outputs, test_y, self.batch_first)
def __val_function__(self, data):
    val_x = getattr(data, self.source_field)
    val_y = getattr(data, self.target_field)
    if isinstance(val_x, (list, tuple)):
        val_x = [x.cuda() if self.cuda else x for x in val_x]
    else:
        val_x = val_x.cuda() if self.cuda else val_x
    if isinstance(val_y, (list, tuple)):
        val_y = [y.cuda() if self.cuda else y for y in val_y]
    else:
        val_y = val_y.cuda() if self.cuda else val_y
    outputs = self.model(val_x)
    for metric in self.val_metrics:
        metric.update(outputs.cpu().data, val_y.cpu().data, self.batch_first)
def __predict_func__(self, inputs):
    if isinstance(inputs, (list, tuple)):
        inputs = [x.cuda() if self.cuda else x for x in inputs]
    else:
        inputs = inputs.cuda() if self.cuda else inputs
    return self.model(inputs)
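The same list-or-single-input device dispatch recurs in all four functions above. The shared pattern can be factored into one helper, sketched here with a generic `move` callable standing in for `.cuda()` (a hypothetical refactoring, not part of the library):

```python
def dispatch(x, use_cuda, move):
    # apply `move` to each element of a list/tuple input, or to a single input,
    # only when `use_cuda` is set
    if isinstance(x, (list, tuple)):
        return [move(item) if use_cuda else item for item in x]
    return move(x) if use_cuda else x
```

With `move=lambda t: t.cuda()`, each `__*_func__` body would reduce to a single `dispatch` call per field.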
"""returns a complete summary of the model
Args:
input_sizes: a single tuple or a list of tuples in the case of multiple inputs, specifying the
size of the inputs to the model
input_types: a single tensor type or a list of tensor types in the case of multiple inputs, specifying the
type of the inputs to the model
item_length (int): the length of each item in the summary
tensorboard_log(str): if enabled, the model will be serialized into a format readable by tensorboard,
useful for visualizing the model in tensorboard.
"""
def summary(self, input_sizes, input_types=torch.FloatTensor, item_length=26, tensorboard_log=None):
    if isinstance(input_sizes, list):
        inputs = [torch.randn(size).type(dtype).unsqueeze(0) for size, dtype in zip(input_sizes, input_types)]
        inputs = [inp.cuda() if self.cuda else inp for inp in inputs]
    else:
        inputs = torch.randn(input_sizes).type(input_types).unsqueeze(0)
        inputs = inputs.cuda() if self.cuda else inputs
    return get_model_summary(self.model, inputs, item_length=item_length, tensorboard_log=tensorboard_log)
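`summary` turns each size tuple into a random tensor and prepends a batch dimension with `unsqueeze(0)`. In pure shape terms, that prepend is simply:

```python
def with_batch_dim(shape):
    # mimic tensor.unsqueeze(0): prepend a batch axis of size 1
    return (1,) + tuple(shape)

# e.g. an image input of size (3, 32, 32) is fed to the model as (1, 3, 32, 32)
batched = with_batch_dim((3, 32, 32))
```

So `input_sizes` should be given without the batch dimension; the method adds it.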
"""saves the model in onnx format
Args:
input_sizes: a single tuple or a list of tuples in the case of multiple inputs, specifying the
size of the inputs to the model
input_types: a single tensor type or a list of tensor types in the case of multiple inputs, specifying the
type of the inputs to the model
"""
def to_onnx(self, input_sizes, path, input_types=torch.FloatTensor, **kwargs):
    if isinstance(input_sizes, list):
        inputs = [torch.randn(size).type(dtype).unsqueeze(0) for size, dtype in zip(input_sizes, input_types)]
        inputs = [inp.cuda() if self.cuda else inp for inp in inputs]
    else:
        inputs = torch.randn(input_sizes).type(input_types).unsqueeze(0)
        inputs = inputs.cuda() if self.cuda else inputs
    return onnx._export(self.model, inputs, f=path, **kwargs)
import unittest
from hypothepy.utils.http_client import HttpClient
from unittest.mock import patch
import json
class DefaultHeadersTest(unittest.TestCase):
def setUp(self):
self.header_name = 'TestHeader'
self.header_value = 'Test Token'
self.http = HttpClient(default_headers={
self.header_name: self.header_value,
})
####################################################################################################################
# GET Method
#
@patch('requests.get')
def test_get_method_uses_default_headers(self, mock_get):
self.http.get("www.someurl.com")
mock_get.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value})
@patch('requests.get')
def test_get_method_allows_overwriting_default_headers(self, mock_get):
self.http.get("www.someurl.com", headers={self.header_name: 'New Value'})
mock_get.assert_called_with("www.someurl.com", headers={self.header_name: 'New Value'})
@patch('requests.get')
def test_get_method_merges_extra_headers(self, mock_get):
self.http.get("www.someurl.com", headers={'Content-Type': 'application/json'})
mock_get.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value, 'Content-Type': 'application/json'})
####################################################################################################################
# POST Method
#
@patch('requests.post')
def test_post_method_uses_default_headers(self, mock_post):
self.http.post("www.someurl.com")
mock_post.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value})
@patch('requests.post')
def test_post_method_allows_overwriting_default_headers(self, mock_post):
self.http.post("www.someurl.com", headers={self.header_name: 'New Value'})
mock_post.assert_called_with("www.someurl.com", headers={self.header_name: 'New Value'})
@patch('requests.post')
def test_post_method_merges_extra_headers(self, mock_post):
self.http.post("www.someurl.com", headers={'Content-Type': 'application/json'})
mock_post.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value, 'Content-Type': 'application/json'})
####################################################################################################################
# DELETE Method
#
@patch('requests.delete')
def test_delete_method_uses_default_headers(self, mock_delete):
self.http.delete("www.someurl.com")
mock_delete.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value})
@patch('requests.delete')
def test_delete_method_allows_overwriting_default_headers(self, mock_delete):
self.http.delete("www.someurl.com", headers={self.header_name: 'New Value'})
mock_delete.assert_called_with("www.someurl.com", headers={self.header_name: 'New Value'})
@patch('requests.delete')
def test_delete_method_merges_extra_headers(self, mock_delete):
self.http.delete("www.someurl.com", headers={'Content-Type': 'application/json'})
mock_delete.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value, 'Content-Type': 'application/json'})
####################################################################################################################
# PUT Method
#
@patch('requests.put')
def test_put_method_uses_default_headers(self, mock_put):
self.http.put("www.someurl.com")
mock_put.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value})
@patch('requests.put')
def test_put_method_allows_overwriting_default_headers(self, mock_put):
self.http.put("www.someurl.com", headers={self.header_name: 'New Value'})
mock_put.assert_called_with("www.someurl.com", headers={self.header_name: 'New Value'})
@patch('requests.put')
def test_put_method_merges_extra_headers(self, mock_put):
self.http.put("www.someurl.com", headers={'Content-Type': 'application/json'})
mock_put.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value, 'Content-Type': 'application/json'})
####################################################################################################################
# PATCH Method
#
@patch('requests.patch')
def test_patch_method_uses_default_headers(self, mock_patch):
self.http.patch("www.someurl.com")
mock_patch.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value})
@patch('requests.patch')
def test_patch_method_allows_overwriting_default_headers(self, mock_patch):
self.http.patch("www.someurl.com", headers={self.header_name: 'New Value'})
mock_patch.assert_called_with("www.someurl.com", headers={self.header_name: 'New Value'})
@patch('requests.patch')
def test_patch_method_merges_extra_headers(self, mock_patch):
self.http.patch("www.someurl.com", headers={'Content-Type': 'application/json'})
mock_patch.assert_called_with("www.someurl.com", headers={self.header_name: self.header_value, 'Content-Type': 'application/json'})
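All five verbs exercise the same header policy: per-call headers are merged over the client defaults, with the per-call value winning on a key conflict. A sketch of that merge, assuming `HttpClient` implements it roughly this way:

```python
def merge_headers(defaults, extra=None):
    # per-call headers override the client defaults on key conflicts;
    # keys present only in one dict are kept as-is
    merged = dict(defaults)
    merged.update(extra or {})
    return merged
```

This is the behavior the "overwriting" and "merges extra headers" cases assert for each HTTP method.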
import json
from django.contrib.auth.models import User
from django.contrib.messages import get_messages
from django.test import TestCase
from dfirtrack_artifacts.models import Artifact, Artifactstatus, Artifacttype
from dfirtrack_config.models import UserConfigModel
from dfirtrack_main.models import (
Case,
Casepriority,
Casestatus,
Headline,
Note,
Notestatus,
Reportitem,
System,
Systemstatus,
Tag,
Tagcolor,
Task,
Taskname,
Taskpriority,
Taskstatus,
)
def set_user_config(
test_user,
filter_assignment_view_case,
filter_assignment_view_tag,
filter_assignment_view_user,
filter_assignment_view_keep=True,
):
"""set user config"""
# get config
user_config = UserConfigModel.objects.get(user_config_username=test_user)
# set values
user_config.filter_assignment_view_case = filter_assignment_view_case
user_config.filter_assignment_view_tag = filter_assignment_view_tag
user_config.filter_assignment_view_user = filter_assignment_view_user
user_config.filter_assignment_view_keep = filter_assignment_view_keep
# save config
user_config.save()
# return to test
return
def check_data_for_system_name(data, system_name):
"""check json data if system name was delivered according to filtering"""
# set default to false
system_found = False
# get list with all system entries from dict
for data_entry in data['data']:
# check dict for system
if system_name in data_entry['system_name']:
# change to true if system was found
system_found = True
# return result
return system_found
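The scan in the helper above reduces to a single membership test over the JSON rows; an equivalent one-liner sketch (same substring semantics as the loop):

```python
def system_in_data(data, system_name):
    # True if any row's 'system_name' field contains the given name
    return any(system_name in row['system_name'] for row in data['data'])
```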
class AssignmentFilterTestCase(TestCase):
"""assignment filter tests"""
@classmethod
def setUpTestData(cls):
# create user
test_user = User.objects.create_user(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# create config
UserConfigModel.objects.get_or_create(user_config_username=test_user)
# create objects
artifactstatus_1 = Artifactstatus.objects.create(
artifactstatus_name='artifactstatus_1'
)
artifacttype_1 = Artifacttype.objects.create(artifacttype_name='artifacttype_1')
casepriority_1 = Casepriority.objects.create(casepriority_name='casepriority_1')
casestatus_1 = Casestatus.objects.create(casestatus_name='casestatus_1')
headline_1 = Headline.objects.create(headline_name='headline_1')
notestatus_1 = Notestatus.objects.create(notestatus_name='notestatus_1')
systemstatus_1 = Systemstatus.objects.create(systemstatus_name='systemstatus_1')
tagcolor_1 = Tagcolor.objects.create(tagcolor_name='tagcolor_1')
taskname_1 = Taskname.objects.create(taskname_name='taskname_1')
taskpriority_1 = Taskpriority.objects.create(taskpriority_name='prio_1')
taskstatus_1 = Taskstatus.objects.create(taskstatus_name='taskstatus_1')
# create objects
tag_1 = Tag.objects.create(
tag_name='tag_1',
tagcolor=tagcolor_1,
)
Tag.objects.create(
tag_name='tag_2',
tagcolor=tagcolor_1,
)
Tag.objects.create(
tag_name='tag_3',
tagcolor=tagcolor_1,
)
Tag.objects.create(
tag_name='tag_4',
tagcolor=tagcolor_1,
tag_assigned_to_user_id=test_user,
)
# create objects
case_1 = Case.objects.create(
case_name='case_1',
casepriority=casepriority_1,
casestatus=casestatus_1,
case_is_incident=True,
case_created_by_user_id=test_user,
case_modified_by_user_id=test_user,
)
Case.objects.create(
case_name='case_2',
casepriority=casepriority_1,
casestatus=casestatus_1,
case_is_incident=True,
case_created_by_user_id=test_user,
case_modified_by_user_id=test_user,
)
case_3 = Case.objects.create(
case_name='case_3',
casepriority=casepriority_1,
casestatus=casestatus_1,
case_is_incident=True,
case_created_by_user_id=test_user,
case_modified_by_user_id=test_user,
)
case_3.tag.add(tag_1)
Case.objects.create(
case_name='case_4',
casepriority=casepriority_1,
casestatus=casestatus_1,
case_is_incident=True,
case_created_by_user_id=test_user,
case_modified_by_user_id=test_user,
case_assigned_to_user_id=test_user,
)
# create objects
Note.objects.create(
note_title='note_1',
note_content='note_1',
notestatus=notestatus_1,
note_created_by_user_id=test_user,
note_modified_by_user_id=test_user,
)
Note.objects.create(
note_title='note_2',
note_content='note_2',
notestatus=notestatus_1,
note_created_by_user_id=test_user,
note_modified_by_user_id=test_user,
case=case_1,
)
note_3 = Note.objects.create(
note_title='note_3',
note_content='note_3',
notestatus=notestatus_1,
note_created_by_user_id=test_user,
note_modified_by_user_id=test_user,
)
note_3.tag.add(tag_1)
Note.objects.create(
note_title='note_4',
note_content='note_4',
notestatus=notestatus_1,
note_created_by_user_id=test_user,
note_modified_by_user_id=test_user,
note_assigned_to_user_id=test_user,
)
# create objects
system_1 = System.objects.create(
system_name='system_1',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
system_2 = System.objects.create(
system_name='system_2',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
system_2.case.add(case_1)
system_3 = System.objects.create(
system_name='system_3',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
)
system_3.tag.add(tag_1)
System.objects.create(
system_name='system_4',
systemstatus=systemstatus_1,
system_created_by_user_id=test_user,
system_modified_by_user_id=test_user,
system_assigned_to_user_id=test_user,
)
# create objects
Task.objects.create(
taskname=taskname_1,
task_note='task_1',
taskpriority=taskpriority_1,
taskstatus=taskstatus_1,
task_created_by_user_id=test_user,
task_modified_by_user_id=test_user,
)
Task.objects.create(
taskname=taskname_1,
task_note='task_2',
taskpriority=taskpriority_1,
taskstatus=taskstatus_1,
task_created_by_user_id=test_user,
task_modified_by_user_id=test_user,
case=case_1,
)
task_3 = Task.objects.create(
taskname=taskname_1,
task_note='task_3',
taskpriority=taskpriority_1,
taskstatus=taskstatus_1,
task_created_by_user_id=test_user,
task_modified_by_user_id=test_user,
)
task_3.tag.add(tag_1)
Task.objects.create(
taskname=taskname_1,
task_note='task_4',
taskpriority=taskpriority_1,
taskstatus=taskstatus_1,
task_created_by_user_id=test_user,
task_modified_by_user_id=test_user,
task_assigned_to_user_id=test_user,
)
# create objects
Artifact.objects.create(
artifact_name='artifact_1',
artifactstatus=artifactstatus_1,
artifacttype=artifacttype_1,
system=system_1,
artifact_created_by_user_id=test_user,
artifact_modified_by_user_id=test_user,
)
Artifact.objects.create(
artifact_name='artifact_2',
artifactstatus=artifactstatus_1,
artifacttype=artifacttype_1,
system=system_1,
artifact_created_by_user_id=test_user,
artifact_modified_by_user_id=test_user,
case=case_1,
)
artifact_3 = Artifact.objects.create(
artifact_name='artifact_3',
artifactstatus=artifactstatus_1,
artifacttype=artifacttype_1,
system=system_1,
artifact_created_by_user_id=test_user,
artifact_modified_by_user_id=test_user,
)
artifact_3.tag.add(tag_1)
Artifact.objects.create(
artifact_name='artifact_4',
artifactstatus=artifactstatus_1,
artifacttype=artifacttype_1,
system=system_1,
artifact_created_by_user_id=test_user,
artifact_modified_by_user_id=test_user,
artifact_assigned_to_user_id=test_user,
)
# create objects
Reportitem.objects.create(
reportitem_note='reportitem_1',
headline=headline_1,
notestatus=notestatus_1,
system=system_1,
reportitem_created_by_user_id=test_user,
reportitem_modified_by_user_id=test_user,
)
Reportitem.objects.create(
reportitem_note='reportitem_2',
headline=headline_1,
notestatus=notestatus_1,
system=system_1,
reportitem_created_by_user_id=test_user,
reportitem_modified_by_user_id=test_user,
case=case_1,
)
reportitem_3 = Reportitem.objects.create(
reportitem_note='reportitem_3',
headline=headline_1,
notestatus=notestatus_1,
system=system_1,
reportitem_created_by_user_id=test_user,
reportitem_modified_by_user_id=test_user,
)
reportitem_3.tag.add(tag_1)
Reportitem.objects.create(
reportitem_note='reportitem_4',
headline=headline_1,
notestatus=notestatus_1,
system=system_1,
reportitem_created_by_user_id=test_user,
reportitem_modified_by_user_id=test_user,
reportitem_assigned_to_user_id=test_user,
)
def test_assignment_view_no_filter_context(self):
"""no filter applied"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
artifact_1 = Artifact.objects.get(artifact_name='artifact_1')
artifact_2 = Artifact.objects.get(artifact_name='artifact_2')
artifact_3 = Artifact.objects.get(artifact_name='artifact_3')
artifact_4 = Artifact.objects.get(artifact_name='artifact_4')
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
case_3 = Case.objects.get(case_name='case_3')
case_4 = Case.objects.get(case_name='case_4')
note_1 = Note.objects.get(note_title='note_1')
note_2 = Note.objects.get(note_title='note_2')
note_3 = Note.objects.get(note_title='note_3')
note_4 = Note.objects.get(note_title='note_4')
reportitem_1 = Reportitem.objects.get(reportitem_note='reportitem_1')
reportitem_2 = Reportitem.objects.get(reportitem_note='reportitem_2')
reportitem_3 = Reportitem.objects.get(reportitem_note='reportitem_3')
reportitem_4 = Reportitem.objects.get(reportitem_note='reportitem_4')
system_1 = System.objects.get(system_name='system_1')
system_2 = System.objects.get(system_name='system_2')
system_3 = System.objects.get(system_name='system_3')
system_4 = System.objects.get(system_name='system_4')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
tag_3 = Tag.objects.get(tag_name='tag_3')
tag_4 = Tag.objects.get(tag_name='tag_4')
task_1 = Task.objects.get(task_note='task_1')
task_2 = Task.objects.get(task_note='task_2')
task_3 = Task.objects.get(task_note='task_3')
task_4 = Task.objects.get(task_note='task_4')
# change config
set_user_config(test_user, None, None, None)
# get response
response = self.client.get('/config/assignment/')
# compare
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_1.artifact_name)
.exists()
)
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_2.artifact_name)
.exists()
)
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_3.artifact_name)
.exists()
)
self.assertTrue(
response.context['case'].filter(case_name=case_1.case_name).exists()
)
self.assertTrue(
response.context['case'].filter(case_name=case_2.case_name).exists()
)
self.assertTrue(
response.context['case'].filter(case_name=case_3.case_name).exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_1.note_title).exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_2.note_title).exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_3.note_title).exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_1.reportitem_note)
.exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_2.reportitem_note)
.exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_3.reportitem_note)
.exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_1.system_name).exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_2.system_name).exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_3.system_name).exists()
)
self.assertTrue(
response.context['tag'].filter(tag_name=tag_1.tag_name).exists()
)
self.assertTrue(
response.context['tag'].filter(tag_name=tag_2.tag_name).exists()
)
self.assertTrue(
response.context['tag'].filter(tag_name=tag_3.tag_name).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_1.task_note).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_2.task_note).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_3.task_note).exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_4.artifact_name)
.exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_4.case_name).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_4.note_title).exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_4.reportitem_note)
.exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_4.system_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_4.tag_name).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_4.task_note).exists()
)
def test_assignment_view_case_filter_context(self):
"""case filter applied"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
artifact_1 = Artifact.objects.get(artifact_name='artifact_1')
artifact_2 = Artifact.objects.get(artifact_name='artifact_2')
artifact_3 = Artifact.objects.get(artifact_name='artifact_3')
artifact_4 = Artifact.objects.get(artifact_name='artifact_4')
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
case_3 = Case.objects.get(case_name='case_3')
case_4 = Case.objects.get(case_name='case_4')
note_1 = Note.objects.get(note_title='note_1')
note_2 = Note.objects.get(note_title='note_2')
note_3 = Note.objects.get(note_title='note_3')
note_4 = Note.objects.get(note_title='note_4')
reportitem_1 = Reportitem.objects.get(reportitem_note='reportitem_1')
reportitem_2 = Reportitem.objects.get(reportitem_note='reportitem_2')
reportitem_3 = Reportitem.objects.get(reportitem_note='reportitem_3')
reportitem_4 = Reportitem.objects.get(reportitem_note='reportitem_4')
system_1 = System.objects.get(system_name='system_1')
system_2 = System.objects.get(system_name='system_2')
system_3 = System.objects.get(system_name='system_3')
system_4 = System.objects.get(system_name='system_4')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
tag_3 = Tag.objects.get(tag_name='tag_3')
tag_4 = Tag.objects.get(tag_name='tag_4')
task_1 = Task.objects.get(task_note='task_1')
task_2 = Task.objects.get(task_note='task_2')
task_3 = Task.objects.get(task_note='task_3')
task_4 = Task.objects.get(task_note='task_4')
# change config
set_user_config(test_user, case_1, None, None)
# get response
response = self.client.get('/config/assignment/')
# compare
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_2.artifact_name)
.exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_2.note_title).exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_2.reportitem_note)
.exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_2.system_name).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_2.task_note).exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_1.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_3.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_4.artifact_name)
.exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_3.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_4.case_name).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_1.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_3.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_4.note_title).exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_1.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_3.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_4.reportitem_note)
.exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_3.system_name).exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_4.system_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_1.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_3.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_4.tag_name).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_1.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_3.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_4.task_note).exists()
)
# special case 'case' - filtering for case 1 returns only case 1 itself
self.assertTrue(
response.context['case'].filter(case_name=case_1.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_2.case_name).exists()
)
# special case 'system' - system is added to case 1 because of signal for artifact 2 and reportitem 2
self.assertTrue(
response.context['system'].filter(system_name=system_1.system_name).exists()
)
# special case 'tag' - tag has no case relation so no cases are returned
self.assertFalse(
response.context['tag'].filter(tag_name=tag_2.tag_name).exists()
)
def test_assignment_view_tag_filter_context(self):
"""tag filter applied"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
artifact_1 = Artifact.objects.get(artifact_name='artifact_1')
artifact_2 = Artifact.objects.get(artifact_name='artifact_2')
artifact_3 = Artifact.objects.get(artifact_name='artifact_3')
artifact_4 = Artifact.objects.get(artifact_name='artifact_4')
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
case_3 = Case.objects.get(case_name='case_3')
case_4 = Case.objects.get(case_name='case_4')
note_1 = Note.objects.get(note_title='note_1')
note_2 = Note.objects.get(note_title='note_2')
note_3 = Note.objects.get(note_title='note_3')
note_4 = Note.objects.get(note_title='note_4')
reportitem_1 = Reportitem.objects.get(reportitem_note='reportitem_1')
reportitem_2 = Reportitem.objects.get(reportitem_note='reportitem_2')
reportitem_3 = Reportitem.objects.get(reportitem_note='reportitem_3')
reportitem_4 = Reportitem.objects.get(reportitem_note='reportitem_4')
system_1 = System.objects.get(system_name='system_1')
system_2 = System.objects.get(system_name='system_2')
system_3 = System.objects.get(system_name='system_3')
system_4 = System.objects.get(system_name='system_4')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
tag_3 = Tag.objects.get(tag_name='tag_3')
tag_4 = Tag.objects.get(tag_name='tag_4')
task_1 = Task.objects.get(task_note='task_1')
task_2 = Task.objects.get(task_note='task_2')
task_3 = Task.objects.get(task_note='task_3')
task_4 = Task.objects.get(task_note='task_4')
# change config
set_user_config(test_user, None, tag_1, None)
# get response
response = self.client.get('/config/assignment/')
# compare
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_3.artifact_name)
.exists()
)
self.assertTrue(
response.context['case'].filter(case_name=case_3.case_name).exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_3.note_title).exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_3.reportitem_note)
.exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_3.system_name).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_3.task_note).exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_1.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_2.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_4.artifact_name)
.exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_1.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_2.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_4.case_name).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_1.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_2.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_4.note_title).exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_1.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_2.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_4.reportitem_note)
.exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_1.system_name).exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_2.system_name).exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_4.system_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_2.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_4.tag_name).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_1.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_2.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_4.task_note).exists()
)
# special case 'tag' - filtering for tag 1 returns only tag 1 itself
self.assertTrue(
response.context['tag'].filter(tag_name=tag_1.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_3.tag_name).exists()
)
def test_assignment_view_user_filter_context(self):
"""user filter applied"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
artifact_1 = Artifact.objects.get(artifact_name='artifact_1')
artifact_2 = Artifact.objects.get(artifact_name='artifact_2')
artifact_3 = Artifact.objects.get(artifact_name='artifact_3')
artifact_4 = Artifact.objects.get(artifact_name='artifact_4')
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
case_3 = Case.objects.get(case_name='case_3')
case_4 = Case.objects.get(case_name='case_4')
note_1 = Note.objects.get(note_title='note_1')
note_2 = Note.objects.get(note_title='note_2')
note_3 = Note.objects.get(note_title='note_3')
note_4 = Note.objects.get(note_title='note_4')
reportitem_1 = Reportitem.objects.get(reportitem_note='reportitem_1')
reportitem_2 = Reportitem.objects.get(reportitem_note='reportitem_2')
reportitem_3 = Reportitem.objects.get(reportitem_note='reportitem_3')
reportitem_4 = Reportitem.objects.get(reportitem_note='reportitem_4')
system_1 = System.objects.get(system_name='system_1')
system_2 = System.objects.get(system_name='system_2')
system_3 = System.objects.get(system_name='system_3')
system_4 = System.objects.get(system_name='system_4')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
tag_3 = Tag.objects.get(tag_name='tag_3')
tag_4 = Tag.objects.get(tag_name='tag_4')
task_1 = Task.objects.get(task_note='task_1')
task_2 = Task.objects.get(task_note='task_2')
task_3 = Task.objects.get(task_note='task_3')
task_4 = Task.objects.get(task_note='task_4')
# change config
set_user_config(test_user, None, None, test_user)
# get response
response = self.client.get('/config/assignment/')
# compare
self.assertTrue(
response.context['artifact']
.filter(artifact_name=artifact_4.artifact_name)
.exists()
)
self.assertTrue(
response.context['case'].filter(case_name=case_4.case_name).exists()
)
self.assertTrue(
response.context['note'].filter(note_title=note_4.note_title).exists()
)
self.assertTrue(
response.context['reportitem']
.filter(reportitem_note=reportitem_4.reportitem_note)
.exists()
)
self.assertTrue(
response.context['system'].filter(system_name=system_4.system_name).exists()
)
self.assertTrue(
response.context['tag'].filter(tag_name=tag_4.tag_name).exists()
)
self.assertTrue(
response.context['task'].filter(task_note=task_4.task_note).exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_1.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_2.artifact_name)
.exists()
)
self.assertFalse(
response.context['artifact']
.filter(artifact_name=artifact_3.artifact_name)
.exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_1.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_2.case_name).exists()
)
self.assertFalse(
response.context['case'].filter(case_name=case_3.case_name).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_1.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_2.note_title).exists()
)
self.assertFalse(
response.context['note'].filter(note_title=note_3.note_title).exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_1.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_2.reportitem_note)
.exists()
)
self.assertFalse(
response.context['reportitem']
.filter(reportitem_note=reportitem_3.reportitem_note)
.exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_1.system_name).exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_2.system_name).exists()
)
self.assertFalse(
response.context['system'].filter(system_name=system_3.system_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_1.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_2.tag_name).exists()
)
self.assertFalse(
response.context['tag'].filter(tag_name=tag_3.tag_name).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_1.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_2.task_note).exists()
)
self.assertFalse(
response.context['task'].filter(task_note=task_3.task_note).exists()
)
def test_assignment_view_post_keep_false(self):
"""all filter applied, keep False"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
# change config
set_user_config(test_user, case_1, tag_1, test_user, True)
# get config
user_config = UserConfigModel.objects.get(user_config_username=test_user)
# compare - config before POST
self.assertTrue(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, case_1)
self.assertEqual(user_config.filter_assignment_view_tag, tag_1)
self.assertEqual(user_config.filter_assignment_view_user, test_user)
# create post data
data_dict = {
'case': case_2.case_id,
'tag': tag_2.tag_id,
'user': test_user.id,
}
# post filter data
self.client.post('/config/assignment/', data_dict)
# reload page manually to avoid runtime issues
self.client.get('/config/assignment/')
# update config
user_config.refresh_from_db()
# compare - config after POST
self.assertFalse(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, None)
self.assertEqual(user_config.filter_assignment_view_tag, None)
self.assertEqual(user_config.filter_assignment_view_user, None)
def test_assignment_view_post_keep_true(self):
"""all filters applied, keep True"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
case_1 = Case.objects.get(case_name='case_1')
case_2 = Case.objects.get(case_name='case_2')
tag_1 = Tag.objects.get(tag_name='tag_1')
tag_2 = Tag.objects.get(tag_name='tag_2')
# change config
set_user_config(test_user, case_1, tag_1, None, False)
# get config
user_config = UserConfigModel.objects.get(user_config_username=test_user)
# compare - config before POST
self.assertFalse(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, case_1)
self.assertEqual(user_config.filter_assignment_view_tag, tag_1)
self.assertEqual(user_config.filter_assignment_view_user, None)
# create post data
data_dict = {
'case': case_2.case_id,
'tag': tag_2.tag_id,
'user': test_user.id,
'filter_assignment_view_keep': 'on',
}
# post filter data
self.client.post('/config/assignment/', data_dict)
# reload page manually to avoid runtime issues
self.client.get('/config/assignment/')
# update config
user_config.refresh_from_db()
# compare - config after POST
self.assertTrue(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, case_2)
self.assertEqual(user_config.filter_assignment_view_tag, tag_2)
self.assertEqual(user_config.filter_assignment_view_user, test_user)
def test_assignment_view_post_empty(self):
"""no filters applied, keep True"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
case_1 = Case.objects.get(case_name='case_1')
tag_1 = Tag.objects.get(tag_name='tag_1')
# change config
set_user_config(test_user, case_1, tag_1, test_user, False)
# get config
user_config = UserConfigModel.objects.get(user_config_username=test_user)
# compare - config before POST
self.assertFalse(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, case_1)
self.assertEqual(user_config.filter_assignment_view_tag, tag_1)
self.assertEqual(user_config.filter_assignment_view_user, test_user)
# create post data
data_dict = {
'case': '',
'tag': '',
'user': '',
'filter_assignment_view_keep': 'on',
}
# post filter data
self.client.post('/config/assignment/', data_dict)
# reload page manually to avoid runtime issues
self.client.get('/config/assignment/')
# update config
user_config.refresh_from_db()
# compare - config after POST
self.assertTrue(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, None)
self.assertEqual(user_config.filter_assignment_view_tag, None)
self.assertEqual(user_config.filter_assignment_view_user, None)
def test_assignment_view_clear_filter(self):
"""test clear filter view"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get objects
case_1 = Case.objects.get(case_name='case_1')
tag_1 = Tag.objects.get(tag_name='tag_1')
# change config
set_user_config(test_user, case_1, tag_1, test_user, True)
# get config
user_config = UserConfigModel.objects.get(user_config_username=test_user)
# compare - config before clearing the filter
self.assertTrue(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, case_1)
self.assertEqual(user_config.filter_assignment_view_tag, tag_1)
self.assertEqual(user_config.filter_assignment_view_user, test_user)
# clear filter
self.client.get('/config/assignment/clear_filter/')
# update config
user_config.refresh_from_db()
# compare - config after clearing the filter
self.assertTrue(user_config.filter_assignment_view_keep)
self.assertEqual(user_config.filter_assignment_view_case, None)
self.assertEqual(user_config.filter_assignment_view_tag, None)
self.assertEqual(user_config.filter_assignment_view_user, None)
def test_dt_referer_wo_search_wo_filter(self):
"""test system datatables processing: w/o search, w/o filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# change config
set_user_config(test_user, None, None, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': '',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 3)
self.assertTrue(check_data_for_system_name(data, 'system_1'))
self.assertTrue(check_data_for_system_name(data, 'system_2'))
self.assertTrue(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_w_search_wo_filter(self):
"""test system datatables processing: w/ search, w/o filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# change config
set_user_config(test_user, None, None, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': 'system_1',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 1)
self.assertTrue(check_data_for_system_name(data, 'system_1'))
self.assertFalse(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_wo_search_case_filter(self):
"""test system datatables processing: w/o search, w/ case filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get object
case_1 = Case.objects.get(case_name='case_1')
# change config
set_user_config(test_user, case_1, None, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': '',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 2)
# special case 'system' - a system is added to case 1 by the signals for artifact 2 and reportitem 2
self.assertTrue(check_data_for_system_name(data, 'system_1'))
self.assertTrue(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_w_search_case_filter(self):
"""test system datatables processing: w/ search, w/ case filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get object
case_1 = Case.objects.get(case_name='case_1')
# change config
set_user_config(test_user, case_1, None, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': 'system_2',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 1)
self.assertFalse(check_data_for_system_name(data, 'system_1'))
self.assertTrue(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_wo_search_tag_filter(self):
"""test system datatables processing: w/o search, w/ tag filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get object
tag_1 = Tag.objects.get(tag_name='tag_1')
# change config
set_user_config(test_user, None, tag_1, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': '',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 1)
self.assertFalse(check_data_for_system_name(data, 'system_1'))
self.assertFalse(check_data_for_system_name(data, 'system_2'))
self.assertTrue(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_w_search_tag_filter(self):
"""test system datatables processing: w/ search, w/ tag filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# get object
tag_1 = Tag.objects.get(tag_name='tag_1')
# change config
set_user_config(test_user, None, tag_1, None)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': 'system_1',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 0)
self.assertFalse(check_data_for_system_name(data, 'system_1'))
self.assertFalse(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertFalse(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_wo_search_user_filter(self):
"""test system datatables processing: w/o search, w/ user filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# change config
set_user_config(test_user, None, None, test_user)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': '',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 1)
self.assertFalse(check_data_for_system_name(data, 'system_1'))
self.assertFalse(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertTrue(check_data_for_system_name(data, 'system_4'))
def test_dt_referer_w_search_user_filter(self):
"""test system datatables processing: w/ search, w/ user filter"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# change config
set_user_config(test_user, None, None, test_user)
# get response
response = self.client.get(
'/system/json/',
{
'order[0][column]': '1',
'order[0][dir]': 'asc',
'start': '0',
'length': '25',
'search[value]': 'system_4',
'columns[1][data]': 'system_name',
'columns[2][data]': 'systemstatus',
'draw': '1',
},
HTTP_REFERER='/assignment/',
)
data = json.loads(response.content)
# compare
self.assertEqual(int(data['recordsFiltered']), 1)
self.assertFalse(check_data_for_system_name(data, 'system_1'))
self.assertFalse(check_data_for_system_name(data, 'system_2'))
self.assertFalse(check_data_for_system_name(data, 'system_3'))
self.assertTrue(check_data_for_system_name(data, 'system_4'))
def test_assignment_view_filter_message(self):
"""test filter warning message"""
# login testuser
self.client.login(
username='testuser_assignment_filter', password='B1z2nn60R4XUMmRoqcA7'
)
# get user
test_user = User.objects.get(username='testuser_assignment_filter')
# change config
case_1 = Case.objects.get(case_name='case_1')
set_user_config(test_user, case_1, None, None)
# get response
response = self.client.get('/config/assignment/')
# get messages
messages = list(get_messages(response.wsgi_request))
# compare
self.assertEqual(
str(messages[0]), 'Filter is active. Entities might be incomplete.'
)

# File: beacon/request/routes.py (repo: elixir-luxembourg/BH2021-beacon-2.x-omop, license: Apache-2.0)
from aiohttp import web
from beacon.db import analyses, biosamples, cohorts, datasets, g_variants, individuals, runs
from beacon.db.backends.postgres import get_dummy_value, count_individuals
from beacon.request.handlers import dummy_pg_handler
from beacon.response import framework, filtering_terms, info
routes = [
# DB Test TODO: Remove
web.get('/db_test/', dummy_pg_handler('db test', db_fn=get_dummy_value)),
########################################
# CONFIG
########################################
web.get('/api', info.handler),
web.get('/api/info', info.handler),
web.get('/api/filtering_terms', dummy_pg_handler(log_name='filtering terms', db_fn=filtering_terms.handler)),
web.get('/api/configuration', framework.configuration),
web.get('/api/entry_types', framework.entry_types),
web.get('/api/map', framework.beacon_map),
########################################
# GET
########################################
# TODO: Uncomment
# web.get('/api/analyses/', generic_handler(db_fn=analyses.get_analyses)),
# web.get('/api/analyses/{id}/', generic_handler(db_fn=analyses.get_analysis_with_id)),
# web.get('/api/analyses/{id}/g_variants/', generic_handler(db_fn=analyses.get_variants_of_analysis)),
# web.get('/api/biosamples/', generic_handler(db_fn=biosamples.get_biosamples)),
# web.get('/api/biosamples/{id}/', generic_handler(db_fn=biosamples.get_biosample_with_id)),
# web.get('/api/biosamples/{id}/g_variants/', generic_handler(db_fn=biosamples.get_variants_of_biosample)),
# web.get('/api/biosamples/{id}/analyses/', generic_handler(db_fn=biosamples.get_analyses_of_biosample)),
# web.get('/api/biosamples/{id}/runs/', generic_handler(db_fn=biosamples.get_runs_of_biosample)),
# web.get('/api/cohorts/', generic_handler(db_fn=cohorts.get_cohorts)),
# web.get('/api/cohorts/{id}/', generic_handler(db_fn=cohorts.get_cohort_with_id)),
# web.get('/api/cohorts/{id}/individuals/', generic_handler(db_fn=cohorts.get_individuals_of_cohort)),
# web.get('/api/cohorts/{id}/filtering_terms/', generic_handler(db_fn=cohorts.get_filtering_terms_of_cohort)),
# web.get('/api/cohorts/{id}/g_variants/', generic_handler(db_fn=cohorts.get_variants_of_cohort)),
# web.get('/api/cohorts/{id}/biosamples/', generic_handler(db_fn=cohorts.get_biosamples_of_cohort)),
# web.get('/api/cohorts/{id}/runs/', generic_handler(db_fn=cohorts.get_runs_of_cohort)),
# web.get('/api/cohorts/{id}/analyses/', generic_handler(db_fn=cohorts.get_analyses_of_cohort)),
# web.get('/api/datasets/', generic_handler(db_fn=datasets.get_datasets)),
# web.get('/api/datasets/{id}/', generic_handler(db_fn=datasets.get_dataset_with_id)),
# web.get('/api/datasets/{id}/g_variants/', generic_handler(db_fn=datasets.get_variants_of_dataset)),
# web.get('/api/datasets/{id}/biosamples/', generic_handler(db_fn=datasets.get_biosamples_of_dataset)),
# web.get('/api/datasets/{id}/individuals/', generic_handler(db_fn=datasets.get_individuals_of_dataset)),
# web.get('/api/datasets/{id}/filtering_terms/', generic_handler(db_fn=datasets.get_filtering_terms_of_dataset)),
# web.get('/api/datasets/{id}/runs/', generic_handler(db_fn=datasets.get_runs_of_dataset)),
# web.get('/api/datasets/{id}/analyses/', generic_handler(db_fn=datasets.get_analyses_of_dataset)),
# web.get('/api/g_variants/', generic_handler(db_fn=g_variants.get_variants)),
# web.get('/api/g_variants/{id}/', generic_handler(db_fn=g_variants.get_variant_with_id)),
# web.get('/api/g_variants/{id}/biosamples/', generic_handler(db_fn=g_variants.get_biosamples_of_variant)),
# web.get('/api/g_variants/{id}/individuals/', generic_handler(db_fn=g_variants.get_individuals_of_variant)),
# web.get('/api/g_variants/{id}/runs/', generic_handler(db_fn=g_variants.get_runs_of_variant)),
# web.get('/api/g_variants/{id}/analyses/', generic_handler(db_fn=g_variants.get_analyses_of_variant)),
# web.get('/api/individuals/', dummy_pg_handler(log_name='GET /api/individuals', db_fn=individuals.get_individuals)),
# web.get('/api/individuals/{id}/', generic_handler(db_fn=individuals.get_individual_with_id)),
# web.get('/api/individuals/{id}/g_variants/', generic_handler(db_fn=individuals.get_variants_of_individual)),
# web.get('/api/individuals/{id}/biosamples/', generic_handler(db_fn=individuals.get_biosamples_of_individual)),
# web.get('/api/individuals/{id}/filtering_terms/', generic_handler(db_fn=individuals.get_filtering_terms_of_individual)),
# web.get('/api/individuals/{id}/runs/', generic_handler(db_fn=individuals.get_runs_of_individual)),
# web.get('/api/individuals/{id}/analyses/', generic_handler(db_fn=individuals.get_analyses_of_individual)),
# web.get('/api/runs/', generic_handler(db_fn=runs.get_runs)),
# web.get('/api/runs/{id}/', generic_handler(db_fn=runs.get_run_with_id)),
# web.get('/api/runs/{id}/g_variants/', generic_handler(db_fn=runs.get_variants_of_run)),
# web.get('/api/runs/{id}/analyses/', generic_handler(db_fn=runs.get_analyses_of_run)),
########################################
# POST
########################################
# web.post('/api/analyses/', generic_handler(db_fn=analyses.get_analyses)),
# web.post('/api/analyses/{id}/', generic_handler(db_fn=analyses.get_analysis_with_id)),
# web.post('/api/analyses/{id}/g_variants/', generic_handler(db_fn=analyses.get_variants_of_analysis)),
# web.post('/api/biosamples/', generic_handler(db_fn=biosamples.get_biosamples)),
# web.post('/api/biosamples/{id}/', generic_handler(db_fn=biosamples.get_biosample_with_id)),
# web.post('/api/biosamples/{id}/g_variants/', generic_handler(db_fn=biosamples.get_variants_of_biosample)),
# web.post('/api/biosamples/{id}/analyses/', generic_handler(db_fn=biosamples.get_analyses_of_biosample)),
# web.post('/api/biosamples/{id}/runs/', generic_handler(db_fn=biosamples.get_runs_of_biosample)),
# web.post('/api/cohorts/', generic_handler(db_fn=cohorts.get_cohorts)),
# web.post('/api/cohorts/{id}/', generic_handler(db_fn=cohorts.get_cohort_with_id)),
# web.post('/api/cohorts/{id}/individuals/', generic_handler(db_fn=cohorts.get_individuals_of_cohort)),
# web.post('/api/cohorts/{id}/filtering_terms/', generic_handler(db_fn=cohorts.get_filtering_terms_of_cohort)),
# web.post('/api/cohorts/{id}/g_variants/', generic_handler(db_fn=cohorts.get_variants_of_cohort)),
# web.post('/api/cohorts/{id}/biosamples/', generic_handler(db_fn=cohorts.get_biosamples_of_cohort)),
# web.post('/api/cohorts/{id}/runs/', generic_handler(db_fn=cohorts.get_runs_of_cohort)),
# web.post('/api/cohorts/{id}/analyses/', generic_handler(db_fn=cohorts.get_analyses_of_cohort)),
# web.post('/api/datasets/', generic_handler(db_fn=datasets.get_datasets)),
# web.post('/api/datasets/{id}/', generic_handler(db_fn=datasets.get_dataset_with_id)),
# web.post('/api/datasets/{id}/g_variants/', generic_handler(db_fn=datasets.get_variants_of_dataset)),
# web.post('/api/datasets/{id}/biosamples/', generic_handler(db_fn=datasets.get_biosamples_of_dataset)),
# web.post('/api/datasets/{id}/individuals/', generic_handler(db_fn=datasets.get_individuals_of_dataset)),
# web.post('/api/datasets/{id}/filtering_terms/', generic_handler(db_fn=datasets.get_filtering_terms_of_dataset)),
# web.post('/api/datasets/{id}/runs/', generic_handler(db_fn=datasets.get_runs_of_dataset)),
# web.post('/api/datasets/{id}/analyses/', generic_handler(db_fn=datasets.get_analyses_of_dataset)),
# web.post('/api/g_variants/', generic_handler(db_fn=g_variants.get_variants)),
# web.post('/api/g_variants/{id}/', generic_handler(db_fn=g_variants.get_variant_with_id)),
# web.post('/api/g_variants/{id}/biosamples/', generic_handler(db_fn=g_variants.get_biosamples_of_variant)),
# web.post('/api/g_variants/{id}/individuals/', generic_handler(db_fn=g_variants.get_individuals_of_variant)),
# web.post('/api/g_variants/{id}/runs/', generic_handler(db_fn=g_variants.get_runs_of_variant)),
# web.post('/api/g_variants/{id}/analyses/', generic_handler(db_fn=g_variants.get_analyses_of_variant)),
web.post('/api/individuals/', dummy_pg_handler(log_name='/api/individuals/', db_fn=count_individuals)),
# web.post('/api/individuals/', dummy_pg_handler(log_name='post /api/individuals', db_fn=individuals.get_individuals)),
# web.post('/api/individuals/{id}/', generic_handler(db_fn=individuals.get_individual_with_id)),
# web.post('/api/individuals/{id}/g_variants/', generic_handler(db_fn=individuals.get_variants_of_individual)),
# web.post('/api/individuals/{id}/biosamples/', generic_handler(db_fn=individuals.get_biosamples_of_individual)),
# web.post('/api/individuals/{id}/filtering_terms/', generic_handler(db_fn=individuals.get_filtering_terms_of_individual)),
# web.post('/api/individuals/{id}/runs/', generic_handler(db_fn=individuals.get_runs_of_individual)),
# web.post('/api/individuals/{id}/analyses/', generic_handler(db_fn=individuals.get_analyses_of_individual)),
# web.post('/api/runs/', generic_handler(db_fn=runs.get_runs)),
# web.post('/api/runs/{id}/', generic_handler(db_fn=runs.get_run_with_id)),
# web.post('/api/runs/{id}/g_variants/', generic_handler(db_fn=runs.get_variants_of_run)),
# web.post('/api/runs/{id}/analyses/', generic_handler(db_fn=runs.get_analyses_of_run)),
]
| 73.3 | 127 | 0.72526 | 1,351 | 9,529 | 4.774241 | 0.040711 | 0.052713 | 0.19845 | 0.223256 | 0.910233 | 0.880155 | 0.877985 | 0.855194 | 0.837674 | 0.825116 | 0 | 0 | 0.078812 | 9,529 | 129 | 128 | 73.868217 | 0.734792 | 0.824641 | 0 | 0 | 0 | 0 | 0.104244 | 0 | 0 | 0 | 0 | 0.007752 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |

# File: tests/routes/test_categories.py (repo: suneettipirneni/hackathon-2021-backend, license: MIT)
# flake8: noqa
import json
from src.models.category import Category
from src.models.user import ROLES
from src.models.sponsor import Sponsor
from tests.base import BaseTestCase
class TestCategoriesBlueprint(BaseTestCase):
"""Tests for the categories Endpoints"""
"""create_category"""
def test_create_category(self):
Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
res = self.client.post(
"/api/categories/",
data=json.dumps({
"name": "new_category",
"sponsor": "new_sponsor",
"description": "new_description"
}),
content_type="application/json")
self.assertEqual(res.status_code, 201)
self.assertEqual(Category.objects.count(), 1)
def test_create_category_invalid_json(self):
res = self.client.post(
"/api/categories/",
data=json.dumps({}),
content_type="application/json"
)
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 400)
self.assertEqual(data["name"], "Bad Request")
self.assertEqual(Category.objects.count(), 0)
def test_create_category_sponsor_not_found(self):
res = self.client.post(
"/api/categories/",
data=json.dumps({
"name": "new_category",
"sponsor": "random_sponsor1",
"description": "new_description"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertEqual(data["name"], "Not Found")
self.assertEqual(Category.objects.count(), 0)
def test_create_category_duplicate_category(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.post(
"/api/categories/",
data=json.dumps({
"name": "new_category",
"sponsor": "new_sponsor",
"description": "new_description"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 409)
self.assertIn("Sorry, a category with that name already exists.", data["description"])
self.assertEqual(Category.objects.count(), 1)
def test_create_category_invalid_datatypes(self):
Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
res = self.client.post(
"/api/categories/",
data=json.dumps({
"name": "new_category",
"sponsor": "new_sponsor",
"description": 123456
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 400)
self.assertEqual(data["name"], "Bad Request")
self.assertEqual(Category.objects.count(), 0)
"""edit_category"""
def test_edit_category(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.put(
"/api/categories/?name=new_category",
data=json.dumps({
"name": "another_category"
}),
content_type="application/json")
self.assertEqual(res.status_code, 201)
updated = Category.findOne(name="another_category")
self.assertEqual(updated.name, "another_category")
def test_edit_category_sponsor_not_found_query(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.put(
"/api/categories/?name=new_category&sponsor=another_sponsor",
data=json.dumps({
"name": "another_category"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertEqual(data["description"], "A sponsor with that name does not exist!")
def test_edit_category_not_found(self):
Sponsor.createOne(username="new_sponsor",
email="new@sponsor.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
res = self.client.put(
"/api/categories/?name=new_category&sponsor=new_sponsor",
data=json.dumps({
"name": "another_category"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertEqual(data["description"], "Sorry, no categories exist that match the query.")
def test_edit_category_sponsor_not_found_update(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.put(
"/api/categories/?name=new_category",
data=json.dumps({
"name": "another_category",
"sponsor": "another_sponsor"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertEqual(data["description"], "A sponsor with that name does not exist!")
def test_edit_category_duplicate_category(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@sponsor.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
Category.createOne(name="another_category",
sponsor=sponsor,
description="new_description")
res = self.client.put(
"/api/categories/?name=new_category&sponsor=new_sponsor",
data=json.dumps({
"name": "another_category"
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 409)
self.assertEqual(data["description"], "Sorry, a category already exists with that name.")
def test_edit_category_invalid_datatypes(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@sponsor.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.put(
"/api/categories/?name=new_category&sponsor=new_sponsor",
data=json.dumps({
"description": 123456
}),
content_type="application/json")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 400)
self.assertEqual(data["name"], "Bad Request")
"""delete_category"""
def test_delete_category(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
token = self.login_user(ROLES.ADMIN)
res = self.client.delete("/api/categories/?name=new_category&sponsor=new_sponsor",
headers=[("sid", token)])
self.assertEqual(res.status_code, 201)
self.assertEqual(Category.objects.count(), 0)
def test_delete_category_sponsor_not_found(self):
Category.createOne(name="new_category",
sponsor="new_sponsor",
description="new_description")
token = self.login_user(ROLES.ADMIN)
res = self.client.delete("/api/categories/?name=new_category&sponsor=new_sponsor",
headers=[("sid", token)])
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertIn("A sponsor with that name does not exist!", data["description"])
self.assertEqual(Category.objects.count(), 1)
def test_delete_category_not_found(self):
token = self.login_user(ROLES.ADMIN)
res = self.client.delete("/api/categories/?name=new_category",
headers=[("sid", token)])
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertIn("Sorry, no categories exist that match the query.", data["description"])
self.assertEqual(Category.objects.count(), 0)
"""get_category"""
def test_get_category(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
cat = Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
res = self.client.get("/api/categories/?name=new_category&sponsor=new_sponsor")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 201)
self.assertEqual(cat.name, data["categories"][0]["name"])
self.assertEqual(cat.description, data["categories"][0]["description"])
def test_get_category_sponsor_not_found(self):
Category.createOne(name="new_category",
sponsor="new_sponsor",
description="new_description")
res = self.client.get("/api/categories/?name=new_category&sponsor=new_sponsor/")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertIn("A sponsor with that name does not exist!", data["description"])
def test_get_category_not_found(self):
res = self.client.get("/api/categories/?name=new_category/")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertIn("Sorry, no categories exist that match the query.", data["description"])
"""get_all_categories"""
def test_get_all_categories(self):
sponsor = Sponsor.createOne(username="new_sponsor",
email="new@email.com",
password="new_password",
roles=ROLES.SPONSOR,
sponsor_name="new_sponsor")
Category.createOne(name="new_category",
sponsor=sponsor,
description="new_description")
Category.createOne(name="another_new_category",
sponsor=sponsor,
description="new_description")
res = self.client.get("api/categories/get_all_categories/")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 200)
self.assertEqual(data["categories"][0]["name"], "new_category")
self.assertEqual(data["categories"][1]["name"], "another_new_category")
def test_get_all_categories_not_found(self):
res = self.client.get("api/categories/get_all_categories/")
data = json.loads(res.data.decode())
self.assertEqual(res.status_code, 404)
self.assertEqual(data["name"], "Not Found")
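The PUT/GET/DELETE tests above all filter categories through `?name=...&sponsor=...` query strings. As an illustrative sketch (not the app's actual server-side code), the stdlib parses such a URL into a flat filter dict like this:

```python
from urllib.parse import urlsplit, parse_qs

def category_filters(url):
    """Turn ?name=...&sponsor=... into a flat filter dict (sketch only)."""
    qs = parse_qs(urlsplit(url).query)
    # parse_qs returns a list per key; keep the first value of each
    return {key: values[0] for key, values in qs.items()}

print(category_filters("/api/categories/?name=new_category&sponsor=new_sponsor"))
# {'name': 'new_category', 'sponsor': 'new_sponsor'}
```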
# File: Code/Geometry/Wrap/testGeometry.py (repo: docking-org/rdk, license: PostgreSQL)
from __future__ import print_function
import os, sys
import unittest
import copy
import math
from rdkit.six.moves import cPickle
from rdkit import RDConfig
from rdkit import DataStructs
from rdkit.Geometry import rdGeometry as geom
def feq(v1, v2, tol=1.0e-4):
return abs(v1 - v2) < tol
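For reference, `feq` above is an absolute-tolerance float comparison; the stdlib equivalent is `math.isclose` with `abs_tol` (passing `rel_tol=0.0` matches `feq`'s behaviour exactly):

```python
import math

def feq(v1, v2, tol=1.0e-4):  # same helper as in the test module
    return abs(v1 - v2) < tol

a, b = 7.07106, 7.0711
print(feq(a, b))                                        # True
print(math.isclose(a, b, rel_tol=0.0, abs_tol=1.0e-4))  # True
```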
class TestCase(unittest.TestCase):
def setUp(self):
pass
def test1aPoint3D(self):
pt = geom.Point3D()
self.assertTrue(feq(pt.x, 0.0))
self.assertTrue(feq(pt.y, 0.0))
self.assertTrue(feq(pt.z, 0.0))
pt = geom.Point3D(3., 4., 5.)
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt.z, 5.0))
self.assertTrue(feq(pt[0], 3.0))
self.assertTrue(feq(pt[1], 4.0))
self.assertTrue(feq(pt[2], 5.0))
self.assertTrue(feq(pt[-3], 3.0))
self.assertTrue(feq(pt[-2], 4.0))
self.assertTrue(feq(pt[-1], 5.0))
lst = list(pt)
self.assertTrue(feq(lst[0], 3.0))
self.assertTrue(feq(lst[1], 4.0))
self.assertTrue(feq(lst[2], 5.0))
pt2 = geom.Point3D(1., 1., 1.)
pt3 = pt + pt2
self.assertTrue(feq(pt3.x, 4.0))
self.assertTrue(feq(pt3.y, 5.0))
self.assertTrue(feq(pt3.z, 6.0))
pt += pt2
self.assertTrue(feq(pt.x, 4.0))
self.assertTrue(feq(pt.y, 5.0))
self.assertTrue(feq(pt.z, 6.0))
pt3 = pt - pt2
self.assertTrue(feq(pt3.x, 3.0))
self.assertTrue(feq(pt3.y, 4.0))
self.assertTrue(feq(pt3.z, 5.0))
pt -= pt2
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt.z, 5.0))
pt *= 2.0
self.assertTrue(feq(pt.x, 6.0))
self.assertTrue(feq(pt.y, 8.0))
self.assertTrue(feq(pt.z, 10.0))
pt /= 2
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt.z, 5.0))
self.assertTrue(feq(pt.Length(), 7.0711))
self.assertTrue(feq(pt.LengthSq(), 50.0))
pt.Normalize()
self.assertTrue(feq(pt.Length(), 1.0))
pt1 = geom.Point3D(1.0, 0.0, 0.0)
pt2 = geom.Point3D(2.0 * math.cos(math.pi / 6), 2.0 * math.sin(math.pi / 6), 0.0)
ang = pt1.AngleTo(pt2)
self.assertTrue(feq(ang, math.pi / 6))
prod = pt1.DotProduct(pt2)
self.assertTrue(feq(prod, 2.0 * math.cos(math.pi / 6)))
pt3 = pt1.CrossProduct(pt2)
self.assertTrue(feq(pt3.x, 0.0))
self.assertTrue(feq(pt3.y, 0.0))
self.assertTrue(feq(pt3.z, 1.0))
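The cross-product expectation at the end of `test1aPoint3D` can be checked with plain Python (an illustrative sketch, independent of rdkit):

```python
import math

def cross(a, b):  # right-handed 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

p1 = (1.0, 0.0, 0.0)
p2 = (2.0 * math.cos(math.pi / 6), 2.0 * math.sin(math.pi / 6), 0.0)
x, y, z = cross(p1, p2)
# x and y vanish; z = 2*sin(pi/6) ~= 1.0, matching the assertions above
print(x, y, round(z, 6))
```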
def test1bPoint2D(self):
pt = geom.Point2D()
self.assertTrue(feq(pt.x, 0.0))
self.assertTrue(feq(pt.y, 0.0))
pt = geom.Point2D(3., 4.)
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt[0], 3.0))
self.assertTrue(feq(pt[1], 4.0))
self.assertTrue(feq(pt[-2], 3.0))
self.assertTrue(feq(pt[-1], 4.0))
lst = list(pt)
self.assertTrue(feq(lst[0], 3.0))
self.assertTrue(feq(lst[1], 4.0))
pt2 = geom.Point2D(1., 1.)
pt3 = pt + pt2
self.assertTrue(feq(pt3.x, 4.0))
self.assertTrue(feq(pt3.y, 5.0))
pt += pt2
self.assertTrue(feq(pt.x, 4.0))
self.assertTrue(feq(pt.y, 5.0))
pt3 = pt - pt2
self.assertTrue(feq(pt3.x, 3.0))
self.assertTrue(feq(pt3.y, 4.0))
pt -= pt2
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
pt *= 2.0
self.assertTrue(feq(pt.x, 6.0))
self.assertTrue(feq(pt.y, 8.0))
pt /= 2
self.assertTrue(feq(pt.x, 3.0))
self.assertTrue(feq(pt.y, 4.0))
self.assertTrue(feq(pt.Length(), 5.0))
self.assertTrue(feq(pt.LengthSq(), 25.0))
pt.Normalize()
self.assertTrue(feq(pt.Length(), 1.0))
pt1 = geom.Point2D(1.0, 0.0)
pt2 = geom.Point2D(2.0 * math.cos(math.pi / 6), 2.0 * math.sin(math.pi / 6))
ang = pt1.AngleTo(pt2)
self.assertTrue(feq(ang, math.pi / 6))
prod = pt1.DotProduct(pt2)
self.assertTrue(feq(prod, 2.0 * math.cos(math.pi / 6)))
def test1cPointND(self):
dim = 4
pt = geom.PointND(4)
for i in range(dim):
self.assertTrue(feq(pt[i], 0.0))
pt[0] = 3
pt[3] = 4
self.assertTrue(feq(pt[0], 3.0))
self.assertTrue(feq(pt[3], 4.0))
self.assertTrue(feq(pt[-4], 3.0))
self.assertTrue(feq(pt[-1], 4.0))
lst = list(pt)
self.assertTrue(feq(lst[0], 3.0))
self.assertTrue(feq(lst[3], 4.0))
pt2 = geom.PointND(4)
pt2[0] = 1.
pt2[2] = 1.
pt3 = pt + pt2
self.assertTrue(feq(pt3[0], 4.0))
self.assertTrue(feq(pt3[2], 1.0))
self.assertTrue(feq(pt3[3], 4.0))
pt += pt2
self.assertTrue(feq(pt[0], 4.0))
self.assertTrue(feq(pt[2], 1.0))
self.assertTrue(feq(pt[3], 4.0))
pt3 = pt - pt2
self.assertTrue(feq(pt3[0], 3.0))
self.assertTrue(feq(pt3[2], 0.0))
self.assertTrue(feq(pt3[3], 4.0))
pt -= pt2
self.assertTrue(feq(pt[0], 3.0))
self.assertTrue(feq(pt[2], 0.0))
self.assertTrue(feq(pt[3], 4.0))
pt *= 2.0
self.assertTrue(feq(pt[0], 6.0))
self.assertTrue(feq(pt[1], 0.0))
self.assertTrue(feq(pt[2], 0.0))
self.assertTrue(feq(pt[3], 8.0))
pt /= 2
self.assertTrue(feq(pt[0], 3.0))
self.assertTrue(feq(pt[1], 0.0))
self.assertTrue(feq(pt[2], 0.0))
self.assertTrue(feq(pt[3], 4.0))
self.assertTrue(feq(pt.Length(), 5.0))
self.assertTrue(feq(pt.LengthSq(), 25.0))
pt.Normalize()
self.assertTrue(feq(pt.Length(), 1.0))
pkl = cPickle.dumps(pt)
pt2 = cPickle.loads(pkl)
self.assertTrue(len(pt) == len(pt2))
for i in range(len(pt)):
self.assertTrue(feq(pt2[i], pt[i]))
def test3UniformGrid(self):
ugrid = geom.UniformGrid3D(20, 18, 15)
self.assertTrue(ugrid.GetNumX() == 40)
self.assertTrue(ugrid.GetNumY() == 36)
self.assertTrue(ugrid.GetNumZ() == 30)
dvect = ugrid.GetOccupancyVect()
ugrid = geom.UniformGrid3D(20, 18, 15, 0.5, DataStructs.DiscreteValueType.TWOBITVALUE)
dvect = ugrid.GetOccupancyVect()
self.assertTrue(dvect.GetValueType() == DataStructs.DiscreteValueType.TWOBITVALUE)
grd = geom.UniformGrid3D(10.0, 10.0, 10.0, 0.5)
grd.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(2.0, 2.0, 0.0), 1.5, 0.25)
geom.WriteGridToFile(grd, "junk.grd")
grd2 = geom.UniformGrid3D(10.0, 10.0, 10.0, 0.5)
grd2.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.5, 0.25)
grd2.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.5, 0.25)
grd2.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.5, 0.25)
dist = geom.TanimotoDistance(grd, grd2)
self.assertTrue(dist == 0.25)
dist = geom.ProtrudeDistance(grd, grd2)
self.assertTrue(dist == 0.25)
dist = geom.ProtrudeDistance(grd2, grd)
self.assertTrue(dist == 0.0)
grd2 = geom.UniformGrid3D(10.0, 10.0, 10.0, 0.5, DataStructs.DiscreteValueType.FOURBITVALUE)
grd2.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.5, 0.25, 3)
grd2.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.5, 0.25, 3)
grd2.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.5, 0.25, 3)
self.assertRaises(ValueError, lambda: geom.TanimotoDistance(grd, grd2))
grd2 = geom.UniformGrid3D(10.0, 10.0, 10.0, 1.0)
self.assertRaises(ValueError, lambda: geom.TanimotoDistance(grd, grd2))
grd2 = geom.UniformGrid3D(11.0, 10.0, 10.0, 1.0)
self.assertRaises(ValueError, lambda: geom.TanimotoDistance(grd, grd2))
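The grid-distance assertions above follow from the set identities behind the two metrics: Tanimoto distance is 1 - |A∩B|/|A∪B|, and protrude distance is |A\B|/|A|. An illustrative sketch with occupied-sphere sets standing in for rdkit grids (two grids sharing 3 of 4 spheres give 1 - 3/4 = 0.25, as in the test):

```python
def tanimoto_distance(a, b):
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def protrude_distance(a, b):
    a, b = set(a), set(b)
    return len(a - b) / len(a)

grd  = {"s1", "s2", "s3", "s4"}  # four occupied spheres
grd2 = {"s1", "s2", "s3"}        # the same grid minus one sphere
print(tanimoto_distance(grd, grd2))  # 0.25
print(protrude_distance(grd, grd2))  # 0.25
print(protrude_distance(grd2, grd))  # 0.0
```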
def testSymmetry(self):
grd = geom.UniformGrid3D(10.0, 10.0, 10.0, 0.5)
grd.SetSphereOccupancy(geom.Point3D(-2.2, -2.0, 0.0), 1.65, 0.25)
grd.SetSphereOccupancy(geom.Point3D(2.2, -2.0, 0.0), 1.65, 0.25)
bPt1 = geom.Point3D(-4.0, -2.0, -2.0)
bPt2 = geom.Point3D(4.0, -2.0, -2.0)
for k in range(8):
bPt1 += geom.Point3D(0.0, 0.0, 0.5)
bPt2 += geom.Point3D(0.0, 0.0, 0.5)
for j in range(8):
bPt1 += geom.Point3D(0.0, 0.5, 0.0)
bPt2 += geom.Point3D(0.0, 0.5, 0.0)
for i in range(8):
bPt1 += geom.Point3D(0.5, 0.0, 0.0)
bPt2 -= geom.Point3D(0.5, 0.0, 0.0)
self.assertTrue(grd.GetValPoint(bPt1) == grd.GetValPoint(bPt2))
bPt1.x = -4.0
bPt2.x = 4.0
bPt1.y = -2.0
bPt2.y = -2.0
def testPointPickles(self):
pt = geom.Point3D(2.0, -3.0, 1.0)
pt2 = cPickle.loads(cPickle.dumps(pt))
self.assertTrue(feq(pt.x, pt2.x, 1e-6))
self.assertTrue(feq(pt.y, pt2.y, 1e-6))
self.assertTrue(feq(pt.z, pt2.z, 1e-6))
pt = geom.Point2D(2.0, -4.0)
pt2 = cPickle.loads(cPickle.dumps(pt))
self.assertTrue(feq(pt.x, pt2.x, 1e-6))
self.assertTrue(feq(pt.y, pt2.y, 1e-6))
def test4GridPickles(self):
grd = geom.UniformGrid3D(10.0, 9.0, 8.0, 0.5)
self.assertTrue(grd.GetNumX() == 20)
self.assertTrue(grd.GetNumY() == 18)
self.assertTrue(grd.GetNumZ() == 16)
grd.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.5, 0.25)
grd.SetSphereOccupancy(geom.Point3D(2.0, 2.0, 0.0), 1.5, 0.25)
self.assertTrue(geom.TanimotoDistance(grd, grd) == 0.0)
grd2 = cPickle.loads(cPickle.dumps(grd))
self.assertTrue(grd2.GetNumX() == 20)
self.assertTrue(grd2.GetNumY() == 18)
self.assertTrue(grd2.GetNumZ() == 16)
self.assertTrue(geom.TanimotoDistance(grd, grd2) == 0.0)
def test5GridOps(self):
grd = geom.UniformGrid3D(10, 10, 10)
grd.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.0, 0.25)
grd.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.0, 0.25)
grd2 = geom.UniformGrid3D(10, 10, 10)
grd2.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.0, 0.25)
grd2.SetSphereOccupancy(geom.Point3D(2.0, 2.0, 0.0), 1.0, 0.25)
self.assertTrue(geom.TanimotoDistance(grd, grd) == 0.0)
self.assertTrue(geom.TanimotoDistance(grd, grd2) == 1.0)
grd3 = copy.deepcopy(grd)
grd3 |= grd2
self.assertTrue(geom.TanimotoDistance(grd3, grd) == .5)
self.assertTrue(geom.TanimotoDistance(grd3, grd2) == .5)
grd3 = copy.deepcopy(grd)
grd3 += grd2
self.assertTrue(geom.TanimotoDistance(grd3, grd) == .5)
self.assertTrue(geom.TanimotoDistance(grd3, grd2) == .5)
grd3 -= grd
self.assertTrue(geom.TanimotoDistance(grd3, grd) == 1.0)
self.assertTrue(geom.TanimotoDistance(grd3, grd2) == 0)
grd4 = geom.UniformGrid3D(10, 10, 10)
grd4.SetSphereOccupancy(geom.Point3D(-2.0, -2.0, 0.0), 1.0, 0.25)
grd4.SetSphereOccupancy(geom.Point3D(-2.0, 2.0, 0.0), 1.0, 0.25)
grd4.SetSphereOccupancy(geom.Point3D(2.0, -2.0, 0.0), 1.0, 0.25)
self.assertTrue(feq(geom.TanimotoDistance(grd4, grd), .3333))
self.assertTrue(feq(geom.TanimotoDistance(grd4, grd2), .75))
grd4 &= grd2
self.assertTrue(feq(geom.TanimotoDistance(grd4, grd), 1.0))
self.assertTrue(feq(geom.TanimotoDistance(grd4, grd2), .5))
def test6Dihedrals(self):
p1 = geom.Point3D(1, 0, 0)
p2 = geom.Point3D(0, 0, 0)
p3 = geom.Point3D(0, 1, 0)
p4 = geom.Point3D(.5, 1, .5)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 4, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, -math.pi / 4, 4)
p4 = geom.Point3D(-.5, 1, .5)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, 3 * math.pi / 4, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, -3 * math.pi / 4, 4)
p4 = geom.Point3D(.5, 1, -.5)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 4, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 4, 4)
p4 = geom.Point3D(-.5, 1, -.5)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, 3 * math.pi / 4, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, 3 * math.pi / 4, 4)
p4 = geom.Point3D(0, 1, 1)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 2, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, -math.pi / 2, 4)
p4 = geom.Point3D(0, 1, -1)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 2, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi / 2, 4)
p4 = geom.Point3D(1, 1, 0)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, 0, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, 0, 4)
p4 = geom.Point3D(-1, 1, 0)
ang = geom.ComputeDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi, 4)
ang = geom.ComputeSignedDihedralAngle(p1, p2, p3, p4)
self.assertAlmostEqual(ang, math.pi, 4)
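The expected values in `test6Dihedrals` can be reproduced with a plain-Python dihedral computation (an unsigned-angle sketch for illustration; rdkit's `ComputeDihedralAngle` is the function actually under test). The first case, points (1,0,0), (0,0,0), (0,1,0), (.5,1,.5), gives pi/4:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dihedral(p1, p2, p3, p4):
    # unsigned angle between the planes spanned by consecutive bond vectors
    b1, b2, b3 = sub(p2, p1), sub(p3, p2), sub(p4, p3)
    n1, n2 = cross(b1, b2), cross(b2, b3)
    return math.acos(dot(n1, n2) / math.sqrt(dot(n1, n1) * dot(n2, n2)))

ang = dihedral((1, 0, 0), (0, 0, 0), (0, 1, 0), (.5, 1, .5))
print(round(ang, 4) == round(math.pi / 4, 4))  # True
```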
def test7UniformGridIndices(self):
ugrid = geom.UniformGrid3D(20, 18, 15)
idx = ugrid.GetGridIndex(3, 2, 1)
xi, yi, zi = ugrid.GetGridIndices(idx)
self.assertEqual(xi, 3)
self.assertEqual(yi, 2)
self.assertEqual(zi, 1)
if __name__ == '__main__':
print("Testing Geometry wrapper")
unittest.main()
# File: nyoka/tests/test_xgboost_to_pmml_UnitTest.py (repo: maxibor/nyoka, license: Apache-2.0)
import sys, os
import unittest
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor, XGBClassifier
from nyoka import xgboost_to_pmml
from nyoka import PMML44 as pml
import json
class TestMethods(unittest.TestCase):
def test_xgboost_01(self):
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data, columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species').to_numpy()
target = 'Species'
f_name = "xgbc_pmml.pmml"
model = XGBClassifier()
pipeline_obj = Pipeline([
('xgbc', model)
])
pipeline_obj.fit(irisd[features], irisd[target])
xgboost_to_pmml(pipeline_obj, features, target, f_name, model_name="testModel")
pmml_obj = pml.parse(f_name, True)
pmml_value_list = []
model_value_list = []
pmml_score_list = []
model_score_list = []
list_seg_score1 = []
list_seg_score2 = []
list_seg_score3 = []
list_seg_val1 = []
list_seg_val2 = []
list_seg_val3 = []
get_nodes_in_json_format = []
for i in range(model.n_estimators * model.n_classes_):
get_nodes_in_json_format.append(json.loads(model._Booster.get_dump(dump_format='json')[i]))
n = 1
for i in range(len(get_nodes_in_json_format)):
list_score_temp = []
list_val_temp = []
node_list = get_nodes_in_json_format[i]
if n == 1:
n = 2
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score1 = list_seg_score1 + list_score_temp
list_seg_val1 = list_seg_val1 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
elif n == 2:
n = 3
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score2 = list_seg_score2 + list_score_temp
list_seg_val2 = list_seg_val2 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
elif n == 3:
n = 1
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score3 = list_seg_score3 + list_score_temp
list_seg_val3 = list_seg_val3 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
model_score_list = list_seg_score1 + list_seg_score2 + list_seg_score3
model_value_list = list_seg_val1 + list_seg_val2 + list_seg_val3
seg_tab = pmml_obj.MiningModel[0].Segmentation.Segment
for seg in seg_tab:
if int(seg.id) <= 3:
for segment in seg.MiningModel.Segmentation.Segment:
node_tab = segment.TreeModel.Node.Node
if not node_tab:
pmml_score_list.append(segment.TreeModel.Node.score)
else:
for node in node_tab:
varlen = node.get_Node().__len__()
if varlen > 0:
pmml_value_list.append(node.SimplePredicate.value)
self.extractValues(node, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(node.SimplePredicate.value)
pmml_score_list.append(node.score)
        # 1: leaf scores extracted from the booster dump match the PMML node scores
for model_val, pmml_val in zip(model_score_list, pmml_score_list):
self.assertEqual(model_val, float(pmml_val))
        # 2: split-condition values match
for model_val, pmml_val in zip(model_value_list, pmml_value_list):
self.assertEqual(model_val, pmml_val)
        # 3: the PMML file was written to disk
self.assertEqual(os.path.isfile(f_name), True)
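The `create_node`/`extractValues` helpers used above (defined elsewhere in this test module) recursively walk the booster's JSON dump, collecting split conditions from internal nodes and scores from leaves. A hedged, self-contained sketch of that walk over a hand-written sample tree (not a real dump):

```python
def collect(node, scores, values):
    """Depth-first walk: internal nodes carry 'split_condition', leaves 'leaf'."""
    if "leaf" in node:
        scores.append(node["leaf"])
    else:
        values.append(node["split_condition"])
        for child in node.get("children", []):
            collect(child, scores, values)

# hand-written stand-in for json.loads(booster.get_dump(dump_format='json')[i])
sample = {
    "split_condition": 2.45,
    "children": [
        {"leaf": 0.43},
        {"split_condition": 1.75,
         "children": [{"leaf": -0.21}, {"leaf": 0.12}]},
    ],
}
scores, values = [], []
collect(sample, scores, values)
print(values)  # [2.45, 1.75]
print(scores)  # [0.43, -0.21, 0.12]
```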
def test_xgboost_02(self):
auto = pd.read_csv('nyoka/tests/auto-mpg.csv')
feature_names = [name for name in auto.columns if name not in ('mpg', 'car name')]
target_name = 'mpg'
f_name = "xgbr_pmml.pmml"
model = XGBRegressor()
pipeline_obj = Pipeline([
('xgbr', model)
])
pipeline_obj.fit(auto[feature_names], auto[target_name])
xgboost_to_pmml(pipeline_obj, feature_names, target_name, f_name, description="A test model")
pmml_obj = pml.parse(f_name, True)
pmml_value_list = []
model_value_list = []
pmml_score_list = []
model_score_list = []
seg_tab = pmml_obj.MiningModel[0].Segmentation.Segment
for seg in seg_tab:
for node in seg.TreeModel.Node.Node:
varlen = node.get_Node().__len__()
if varlen > 0:
pmml_value_list.append(node.SimplePredicate.value)
self.extractValues(node, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(node.SimplePredicate.value)
pmml_score_list.append(node.score)
get_nodes_in_json_format = []
for i in range(model.n_estimators):
get_nodes_in_json_format.append(json.loads(model._Booster.get_dump(dump_format='json')[i]))
for i in range(len(get_nodes_in_json_format)):
list_score_temp = []
list_val_temp = []
node_list = get_nodes_in_json_format[i]
self.create_node(node_list, list_score_temp, list_val_temp)
model_score_list = model_score_list + list_score_temp
model_value_list = model_value_list + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
        # 1: leaf scores extracted from the booster dump match the PMML node scores
for model_val, pmml_val in zip(model_score_list, pmml_score_list):
self.assertEqual(model_val, float(pmml_val))
        # 2: split-condition values match
for model_val, pmml_val in zip(model_value_list, pmml_value_list):
self.assertEqual(model_val, pmml_val)
        # 3: the PMML file was written to disk
self.assertEqual(os.path.isfile(f_name), True)
def test_xgboost_03(self):
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data, columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
f_name = "xgbc_pmml_preprocess.pmml"
model = XGBClassifier(n_estimators=5)
pipeline_obj = Pipeline([
('scaling', StandardScaler()),
('xgbc', model)
])
pipeline_obj.fit(irisd[features], irisd[target])
xgboost_to_pmml(pipeline_obj, features, target, f_name)
pmml_obj = pml.parse(f_name, True)
pmml_value_list = []
model_value_list = []
pmml_score_list = []
model_score_list = []
list_seg_score1 = []
list_seg_score2 = []
list_seg_score3 = []
list_seg_val1 = []
list_seg_val2 = []
list_seg_val3 = []
get_nodes_in_json_format = []
for i in range(model.n_estimators * model.n_classes_):
get_nodes_in_json_format.append(json.loads(model._Booster.get_dump(dump_format='json')[i]))
n = 1
for i in range(len(get_nodes_in_json_format)):
list_score_temp = []
list_val_temp = []
node_list = get_nodes_in_json_format[i]
if n == 1:
n = 2
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score1 = list_seg_score1 + list_score_temp
list_seg_val1 = list_seg_val1 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
elif n == 2:
n = 3
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score2 = list_seg_score2 + list_score_temp
list_seg_val2 = list_seg_val2 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
elif n == 3:
n = 1
self.create_node(node_list, list_score_temp, list_val_temp)
list_seg_score3 = list_seg_score3 + list_score_temp
list_seg_val3 = list_seg_val3 + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
model_score_list = list_seg_score1 + list_seg_score2 + list_seg_score3
model_value_list = list_seg_val1 + list_seg_val2 + list_seg_val3
seg_tab = pmml_obj.MiningModel[0].Segmentation.Segment
for seg in seg_tab:
if int(seg.id) <= 3:
for segment in seg.MiningModel.Segmentation.Segment:
node_tab = segment.TreeModel.Node.Node
if not node_tab:
pmml_score_list.append(segment.TreeModel.Node.score)
else:
for node in node_tab:
varlen = node.get_Node().__len__()
if varlen > 0:
pmml_value_list.append(node.SimplePredicate.value)
self.extractValues(node, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(node.SimplePredicate.value)
pmml_score_list.append(node.score)
        # 1: leaf scores extracted from the booster dump match the PMML node scores
for model_val, pmml_val in zip(model_score_list, pmml_score_list):
self.assertEqual(model_val, float(pmml_val))
        # 2: split-condition values match
for model_val, pmml_val in zip(model_value_list, pmml_value_list):
self.assertEqual(model_val, pmml_val)
        # 3: the PMML file was written to disk
self.assertEqual(os.path.isfile(f_name), True)
def test_xgboost_04(self):
auto = pd.read_csv('nyoka/tests/auto-mpg.csv')
X = auto.drop(['mpg'], axis=1)
y = auto['mpg']
feature_names = [name for name in auto.columns if name not in 'mpg']
f_name = "xgbr_pmml_preprocess2.pmml"
target_name = 'mpg'
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=101)
model = XGBRegressor()
pipeline_obj = Pipeline([
('mapper', DataFrameMapper([
('car name', CountVectorizer()),
(['displacement'], [StandardScaler()])
])),
('xgbr', model)
])
pipeline_obj.fit(x_train, y_train)
xgboost_to_pmml(pipeline_obj, feature_names, target_name, f_name)
pmml_obj = pml.parse(f_name, True)
pmml_value_list = []
model_value_list = []
pmml_score_list = []
model_score_list = []
seg_tab = pmml_obj.MiningModel[0].Segmentation.Segment
for seg in seg_tab:
for node in seg.TreeModel.Node.Node:
varlen = len(node.get_Node())
if varlen > 0:
pmml_value_list.append(node.SimplePredicate.value)
self.extractValues(node, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(node.SimplePredicate.value)
pmml_score_list.append(node.score)
# Dump the booster once and parse each tree's JSON representation.
tree_dumps = model._Booster.get_dump(dump_format='json')
get_nodes_in_json_format = [json.loads(tree_dumps[i]) for i in range(model.n_estimators)]
for i in range(len(get_nodes_in_json_format)):
list_score_temp = []
list_val_temp = []
node_list = get_nodes_in_json_format[i]
self.create_node(node_list, list_score_temp, list_val_temp)
model_score_list = model_score_list + list_score_temp
model_value_list = model_value_list + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
# 1. Leaf scores extracted from the model must match the PMML scores.
for model_val, pmml_val in zip(model_score_list, pmml_score_list):
self.assertEqual(model_val, float(pmml_val))
# 2. Split threshold values must match.
for model_val, pmml_val in zip(model_value_list, pmml_value_list):
self.assertEqual(model_val, pmml_val)
# 3. The PMML file must have been written to disk.
self.assertTrue(os.path.isfile(f_name))
def test_xgboost_05(self):
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data, columns=iris.feature_names)
irisd['target'] = [i % 2 for i in range(iris.data.shape[0])]
features = irisd.columns.drop('target')
target = 'target'
f_name = "xgbc_bin_pmml.pmml"
model = XGBClassifier(min_child_weight=6, n_estimators=10, scale_pos_weight=10, deterministic_histogram=False)
pipeline_obj = Pipeline([
('xgbc', model)
])
pipeline_obj.fit(irisd[features], irisd[target])
xgboost_to_pmml(pipeline_obj, features, target, f_name)
pmml_obj = pml.parse(f_name, True)
pmml_value_list = []
model_value_list = []
pmml_score_list = []
model_score_list = []
seg_tab = pmml_obj.MiningModel[0].Segmentation.Segment
for seg in seg_tab:
if int(seg.id) == 1:
for segment in seg.MiningModel.Segmentation.Segment:
node_tab = segment.TreeModel.Node.Node
if not node_tab:
pmml_score_list.append(segment.TreeModel.Node.score)
else:
for node in node_tab:
varlen = len(node.get_Node())
if varlen > 0:
pmml_value_list.append(node.SimplePredicate.value)
self.extractValues(node, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(node.SimplePredicate.value)
pmml_score_list.append(node.score)
# Dump the booster once and parse each tree's JSON representation.
tree_dumps = model._Booster.get_dump(dump_format='json')
get_nodes_in_json_format = [json.loads(tree_dumps[i]) for i in range(model.n_estimators)]
for i in range(len(get_nodes_in_json_format)):
list_score_temp = []
list_val_temp = []
node_list = get_nodes_in_json_format[i]
self.create_node(node_list, list_score_temp, list_val_temp)
model_score_list = model_score_list + list_score_temp
model_value_list = model_value_list + list_val_temp
list_val_temp.clear()
list_score_temp.clear()
# 1. Leaf scores extracted from the model must match the PMML scores.
for model_val, pmml_val in zip(model_score_list, pmml_score_list):
self.assertEqual(model_val, float(pmml_val))
# 2. Split threshold values must match.
for model_val, pmml_val in zip(model_value_list, pmml_value_list):
self.assertEqual(model_val, pmml_val)
# 3. The PMML file must have been written to disk.
self.assertTrue(os.path.isfile(f_name))
def test_xgboost_06(self):
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data, columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
f_name = "xgbc_pmml.pmml"
model = XGBClassifier()
model.fit(irisd[features], irisd[target])
with self.assertRaises(TypeError):
xgboost_to_pmml(model, features, target, f_name, model_name="testModel")
def extractValues(self, node, pmml_value_list, pmml_score_list):
# Recursively walk the PMML tree, collecting split values and leaf scores.
for nsample in node.Node:
varlen = len(nsample.get_Node())
if varlen > 0:
pmml_value_list.append(nsample.SimplePredicate.value)
self.extractValues(nsample, pmml_value_list, pmml_score_list)
else:
pmml_value_list.append(nsample.SimplePredicate.value)
pmml_score_list.append(nsample.score)
def create_node(self, obj, list_score_temp, list_val_temp):
# Leaf nodes carry a 'leaf' score; split nodes carry a condition and two children.
if 'split' not in obj:
list_score_temp.append(obj['leaf'])
else:
self.create_left_node(obj, list_score_temp, list_val_temp)
self.create_right_node(obj, list_score_temp, list_val_temp)
def create_left_node(self, children_list, list_score_temp, list_val_temp):
value = "{:.16f}".format(children_list['split_condition'])
list_val_temp.append(value)
self.create_node(children_list['children'][0], list_score_temp, list_val_temp)
def create_right_node(self, children_list, list_score_temp, list_val_temp):
value = "{:.16f}".format(children_list['split_condition'])
list_val_temp.append(value)
self.create_node(children_list['children'][1], list_score_temp, list_val_temp)
if __name__ == '__main__':
unittest.main(warnings='ignore')
# ---- File: greenbot/web/common/__init__.py (EMorf/greenbot, MIT license) ----
import greenbot.web.common.assets
import greenbot.web.common.filters
import greenbot.web.common.menu
# ---- File: ultracart/apis/coupon_api.py (gstingy/uc_python_api, Apache-2.0 license) ----
# coding: utf-8
"""
UltraCart Rest API V2
UltraCart REST API Version 2
OpenAPI spec version: 2.0.0
Contact: support@ultracart.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..api_client import ApiClient
class CouponApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def delete_coupon(self, coupon_oid, **kwargs):
"""
Delete a coupon
Delete a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_coupon(coupon_oid, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon_oid to delete. (required)
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_coupon_with_http_info(coupon_oid, **kwargs)
else:
(data) = self.delete_coupon_with_http_info(coupon_oid, **kwargs)
return data
def delete_coupon_with_http_info(self, coupon_oid, **kwargs):
"""
Delete a coupon
Delete a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_coupon_with_http_info(coupon_oid, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon_oid to delete. (required)
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon_oid']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_coupon" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon_oid' is set
if ('coupon_oid' not in params) or (params['coupon_oid'] is None):
raise ValueError("Missing the required parameter `coupon_oid` when calling `delete_coupon`")
collection_formats = {}
path_params = {}
if 'coupon_oid' in params:
path_params['coupon_oid'] = params['coupon_oid']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/{coupon_oid}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def generate_coupon_codes(self, coupon_oid, coupon_codes_request, **kwargs):
"""
Generates one time codes for a coupon
Generate one time codes for a coupon
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.generate_coupon_codes(coupon_oid, coupon_codes_request, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon oid to generate codes. (required)
:param CouponCodesRequest coupon_codes_request: Coupon code generation parameters (required)
:return: CouponCodesResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.generate_coupon_codes_with_http_info(coupon_oid, coupon_codes_request, **kwargs)
else:
(data) = self.generate_coupon_codes_with_http_info(coupon_oid, coupon_codes_request, **kwargs)
return data
def generate_coupon_codes_with_http_info(self, coupon_oid, coupon_codes_request, **kwargs):
"""
Generates one time codes for a coupon
Generate one time codes for a coupon
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.generate_coupon_codes_with_http_info(coupon_oid, coupon_codes_request, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon oid to generate codes. (required)
:param CouponCodesRequest coupon_codes_request: Coupon code generation parameters (required)
:return: CouponCodesResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon_oid', 'coupon_codes_request']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method generate_coupon_codes" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon_oid' is set
if ('coupon_oid' not in params) or (params['coupon_oid'] is None):
raise ValueError("Missing the required parameter `coupon_oid` when calling `generate_coupon_codes`")
# verify the required parameter 'coupon_codes_request' is set
if ('coupon_codes_request' not in params) or (params['coupon_codes_request'] is None):
raise ValueError("Missing the required parameter `coupon_codes_request` when calling `generate_coupon_codes`")
collection_formats = {}
path_params = {}
if 'coupon_oid' in params:
path_params['coupon_oid'] = params['coupon_oid']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'coupon_codes_request' in params:
body_params = params['coupon_codes_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/{coupon_oid}/generate_codes', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponCodesResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def generate_one_time_codes_by_merchant_code(self, merchant_code, coupon_codes_request, **kwargs):
"""
Generates one time codes by merchant code
Generate one time codes by merchant code
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.generate_one_time_codes_by_merchant_code(merchant_code, coupon_codes_request, async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: The merchant code to generate one time codes. (required)
:param CouponCodesRequest coupon_codes_request: Coupon code generation parameters (required)
:return: CouponCodesResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.generate_one_time_codes_by_merchant_code_with_http_info(merchant_code, coupon_codes_request, **kwargs)
else:
(data) = self.generate_one_time_codes_by_merchant_code_with_http_info(merchant_code, coupon_codes_request, **kwargs)
return data
def generate_one_time_codes_by_merchant_code_with_http_info(self, merchant_code, coupon_codes_request, **kwargs):
"""
Generates one time codes by merchant code
Generate one time codes by merchant code
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.generate_one_time_codes_by_merchant_code_with_http_info(merchant_code, coupon_codes_request, async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: The merchant code to generate one time codes. (required)
:param CouponCodesRequest coupon_codes_request: Coupon code generation parameters (required)
:return: CouponCodesResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['merchant_code', 'coupon_codes_request']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method generate_one_time_codes_by_merchant_code" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'merchant_code' is set
if ('merchant_code' not in params) or (params['merchant_code'] is None):
raise ValueError("Missing the required parameter `merchant_code` when calling `generate_one_time_codes_by_merchant_code`")
# verify the required parameter 'coupon_codes_request' is set
if ('coupon_codes_request' not in params) or (params['coupon_codes_request'] is None):
raise ValueError("Missing the required parameter `coupon_codes_request` when calling `generate_one_time_codes_by_merchant_code`")
collection_formats = {}
path_params = {}
if 'merchant_code' in params:
path_params['merchant_code'] = params['merchant_code']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'coupon_codes_request' in params:
body_params = params['coupon_codes_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/merchant_code/{merchant_code}/generate_codes', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponCodesResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_coupon(self, coupon_oid, **kwargs):
"""
Retrieve a coupon
Retrieves a single coupon using the specified coupon profile oid.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupon(coupon_oid, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon oid to retrieve. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_coupon_with_http_info(coupon_oid, **kwargs)
else:
(data) = self.get_coupon_with_http_info(coupon_oid, **kwargs)
return data
def get_coupon_with_http_info(self, coupon_oid, **kwargs):
"""
Retrieve a coupon
Retrieves a single coupon using the specified coupon profile oid.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupon_with_http_info(coupon_oid, async=True)
>>> result = thread.get()
:param async bool
:param int coupon_oid: The coupon oid to retrieve. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon_oid', 'expand']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_coupon" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon_oid' is set
if ('coupon_oid' not in params) or (params['coupon_oid'] is None):
raise ValueError("Missing the required parameter `coupon_oid` when calling `get_coupon`")
collection_formats = {}
path_params = {}
if 'coupon_oid' in params:
path_params['coupon_oid'] = params['coupon_oid']
query_params = []
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/{coupon_oid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_coupon_by_merchant_code(self, merchant_code, **kwargs):
"""
Retrieve a coupon by merchant code
Retrieves a single coupon using the specified merchant code.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupon_by_merchant_code(merchant_code, async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: The coupon merchant code to retrieve. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_coupon_by_merchant_code_with_http_info(merchant_code, **kwargs)
else:
(data) = self.get_coupon_by_merchant_code_with_http_info(merchant_code, **kwargs)
return data
def get_coupon_by_merchant_code_with_http_info(self, merchant_code, **kwargs):
"""
Retrieve a coupon by merchant code
Retrieves a single coupon using the specified merchant code.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupon_by_merchant_code_with_http_info(merchant_code, async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: The coupon merchant code to retrieve. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['merchant_code', 'expand']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_coupon_by_merchant_code" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'merchant_code' is set
if ('merchant_code' not in params) or (params['merchant_code'] is None):
raise ValueError("Missing the required parameter `merchant_code` when calling `get_coupon_by_merchant_code`")
collection_formats = {}
path_params = {}
if 'merchant_code' in params:
path_params['merchant_code'] = params['merchant_code']
query_params = []
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/merchant_code/{merchant_code}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_coupons(self, **kwargs):
"""
Retrieve coupons
Retrieves coupons for this account. If no parameters are specified, all coupons will be returned. You will need to make multiple API calls in order to retrieve the entire result set since this API performs result set pagination.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupons(async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: Merchant code
:param str description: Description
:param str coupon_type: Coupon type
:param str start_date_begin: Start date begin
:param str start_date_end: Start date end
:param str expiration_date_begin: Expiration date begin
:param str expiration_date_end: Expiration date end
:param int affiliate_oid: Affiliate oid
:param bool exclude_expired: Exclude expired
:param int limit: The maximum number of records to return on this one API call. (Max 200)
:param int offset: Pagination of the record set. Offset is a zero based index.
:param str sort: The sort order of the coupons. See Sorting documentation for examples of using multiple values and sorting by ascending and descending.
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponsResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_coupons_with_http_info(**kwargs)
else:
(data) = self.get_coupons_with_http_info(**kwargs)
return data
def get_coupons_with_http_info(self, **kwargs):
"""
Retrieve coupons
Retrieves coupons for this account. If no parameters are specified, all coupons will be returned. You will need to make multiple API calls in order to retrieve the entire result set since this API performs result set pagination.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupons_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str merchant_code: Merchant code
:param str description: Description
:param str coupon_type: Coupon type
:param str start_date_begin: Start date begin
:param str start_date_end: Start date end
:param str expiration_date_begin: Expiration date begin
:param str expiration_date_end: Expiration date end
:param int affiliate_oid: Affiliate oid
:param bool exclude_expired: Exclude expired
:param int limit: The maximum number of records to return on this one API call. (Max 200)
:param int offset: Pagination of the record set. Offset is a zero based index.
:param str sort: The sort order of the coupons. See Sorting documentation for examples of using multiple values and sorting by ascending and descending.
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponsResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['merchant_code', 'description', 'coupon_type', 'start_date_begin', 'start_date_end', 'expiration_date_begin', 'expiration_date_end', 'affiliate_oid', 'exclude_expired', 'limit', 'offset', 'sort', 'expand']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_coupons" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'merchant_code' in params:
query_params.append(('merchant_code', params['merchant_code']))
if 'description' in params:
query_params.append(('description', params['description']))
if 'coupon_type' in params:
query_params.append(('coupon_type', params['coupon_type']))
if 'start_date_begin' in params:
query_params.append(('start_date_begin', params['start_date_begin']))
if 'start_date_end' in params:
query_params.append(('start_date_end', params['start_date_end']))
if 'expiration_date_begin' in params:
query_params.append(('expiration_date_begin', params['expiration_date_begin']))
if 'expiration_date_end' in params:
query_params.append(('expiration_date_end', params['expiration_date_end']))
if 'affiliate_oid' in params:
query_params.append(('affiliate_oid', params['affiliate_oid']))
if 'exclude_expired' in params:
query_params.append(('exclude_expired', params['exclude_expired']))
if 'limit' in params:
query_params.append(('_limit', params['limit']))
if 'offset' in params:
query_params.append(('_offset', params['offset']))
if 'sort' in params:
query_params.append(('_sort', params['sort']))
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponsResponse',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_coupons_by_query(self, coupon_query, **kwargs):
"""
Retrieve coupons by query
Retrieves coupons from the account. If no parameters are specified, all coupons will be returned. You will need to make multiple API calls in order to retrieve the entire result set since this API performs result set pagination.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_coupons_by_query(coupon_query, async=True)
>>> result = thread.get()
:param async bool
:param CouponQuery coupon_query: Coupon query (required)
:param int limit: The maximum number of records to return on this one API call. (Max 200)
:param int offset: Pagination of the record set. Offset is a zero based index.
:param str sort: The sort order of the coupons. See Sorting documentation for examples of using multiple values and sorting by ascending and descending.
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponsResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_coupons_by_query_with_http_info(coupon_query, **kwargs)
else:
(data) = self.get_coupons_by_query_with_http_info(coupon_query, **kwargs)
return data
def get_coupons_by_query_with_http_info(self, coupon_query, **kwargs):
"""
Retrieve coupons by query
Retrieves coupons from the account. If no parameters are specified, all coupons will be returned. You will need to make multiple API calls in order to retrieve the entire result set since this API performs result set pagination.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_coupons_by_query_with_http_info(coupon_query, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param CouponQuery coupon_query: Coupon query (required)
:param int limit: The maximum number of records to return on this one API call. (Max 200)
:param int offset: Pagination of the record set. Offset is a zero-based index.
:param str sort: The sort order of the coupons. See Sorting documentation for examples of using multiple values and sorting by ascending and descending.
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponsResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon_query', 'limit', 'offset', 'sort', 'expand']
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_coupons_by_query" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon_query' is set
if ('coupon_query' not in params) or (params['coupon_query'] is None):
raise ValueError("Missing the required parameter `coupon_query` when calling `get_coupons_by_query`")
collection_formats = {}
path_params = {}
query_params = []
if 'limit' in params:
query_params.append(('_limit', params['limit']))
if 'offset' in params:
query_params.append(('_offset', params['offset']))
if 'sort' in params:
query_params.append(('_sort', params['sort']))
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'coupon_query' in params:
body_params = params['coupon_query']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/query', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponsResponse',
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_editor_values(self, **kwargs):
"""
Retrieve values needed for a coupon editor
Retrieve values needed for a coupon editor
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_editor_values(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:return: CouponEditorValues
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_editor_values_with_http_info(**kwargs)
else:
(data) = self.get_editor_values_with_http_info(**kwargs)
return data
def get_editor_values_with_http_info(self, **kwargs):
"""
Retrieve values needed for a coupon editor
Retrieve values needed for a coupon editor
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_editor_values_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:return: CouponEditorValues
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_editor_values" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/editor_values', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponEditorValues',
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
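Every `*_with_http_info` method in this file repeats the same keyword-validation idiom: anything not listed in `all_params` raises `TypeError` instead of being silently dropped. A standalone sketch of that idiom (`validate_kwargs` is an illustrative helper, not part of the generated client):

```python
def validate_kwargs(method_name, kwargs, allowed):
    """Reject any keyword the generated method does not recognise."""
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name))
    return dict(kwargs)

ALLOWED = ['async_req', '_return_http_data_only',
           '_preload_content', '_request_timeout']
print(validate_kwargs('get_editor_values', {'_request_timeout': 5}, ALLOWED))
# {'_request_timeout': 5}
```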
def insert_coupon(self, coupon, **kwargs):
"""
Insert a coupon
Insert a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.insert_coupon(coupon, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Coupon coupon: Coupon to insert (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.insert_coupon_with_http_info(coupon, **kwargs)
else:
(data) = self.insert_coupon_with_http_info(coupon, **kwargs)
return data
def insert_coupon_with_http_info(self, coupon, **kwargs):
"""
Insert a coupon
Insert a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.insert_coupon_with_http_info(coupon, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Coupon coupon: Coupon to insert (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon', 'expand']
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method insert_coupon" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon' is set
if ('coupon' not in params) or (params['coupon'] is None):
raise ValueError("Missing the required parameter `coupon` when calling `insert_coupon`")
collection_formats = {}
path_params = {}
query_params = []
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'coupon' in params:
body_params = params['coupon']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json; charset=UTF-8'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponResponse',
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_coupon(self, coupon, coupon_oid, **kwargs):
"""
Update a coupon
Update a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_coupon(coupon, coupon_oid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Coupon coupon: Coupon to update (required)
:param int coupon_oid: The coupon_oid to update. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_coupon_with_http_info(coupon, coupon_oid, **kwargs)
else:
(data) = self.update_coupon_with_http_info(coupon, coupon_oid, **kwargs)
return data
def update_coupon_with_http_info(self, coupon, coupon_oid, **kwargs):
"""
Update a coupon
Update a coupon on the UltraCart account.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_coupon_with_http_info(coupon, coupon_oid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Coupon coupon: Coupon to update (required)
:param int coupon_oid: The coupon_oid to update. (required)
:param str expand: The object expansion to perform on the result. See documentation for examples
:return: CouponResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['coupon', 'coupon_oid', 'expand']
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_coupon" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'coupon' is set
if ('coupon' not in params) or (params['coupon'] is None):
raise ValueError("Missing the required parameter `coupon` when calling `update_coupon`")
# verify the required parameter 'coupon_oid' is set
if ('coupon_oid' not in params) or (params['coupon_oid'] is None):
raise ValueError("Missing the required parameter `coupon_oid` when calling `update_coupon`")
collection_formats = {}
path_params = {}
if 'coupon_oid' in params:
path_params['coupon_oid'] = params['coupon_oid']
query_params = []
if 'expand' in params:
query_params.append(('_expand', params['expand']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'coupon' in params:
body_params = params['coupon']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json; charset=UTF-8'])
# Authentication setting
auth_settings = ['ultraCartOauth', 'ultraCartSimpleApiKey']
return self.api_client.call_api('/coupon/coupons/{coupon_oid}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CouponResponse',
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
# File: notebooks/xskillscore/tests/test_deterministic.py
# Repo: brian-rose/cmip6hack-multigen (MIT license)
import numpy as np
import pandas as pd
import pytest
import xarray as xr
from xarray.tests import assert_allclose
from xskillscore.core.deterministic import (
_preprocess_dims, _preprocess_weights, mae, mse, pearson_r, pearson_r_p_value, rmse)
from xskillscore.core.np_deterministic import (
_mae, _mse, _pearson_r, _pearson_r_p_value, _rmse)
AXES = ('time', 'lat', 'lon', ('lat', 'lon'), ('time', 'lat', 'lon'))
@pytest.fixture
def a():
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.random.rand(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon'])
@pytest.fixture
def b():
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.random.rand(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon'])
@pytest.fixture
def weights_ones():
"""
Weighting array of all ones, i.e. no weighting.
"""
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.ones((len(dates), len(lats), len(lons)))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon'])
@pytest.fixture
def weights_latitude():
"""
Weighting array by cosine of the latitude.
"""
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
cos = np.abs(np.cos(lats))
data = np.tile(cos, (len(dates), len(lons), 1)).reshape(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon'])
@pytest.fixture
def a_dask():
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.random.rand(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon']).chunk()
@pytest.fixture
def b_dask():
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.random.rand(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon']).chunk()
@pytest.fixture
def weights_ones_dask():
"""
Weighting array of all ones, i.e. no weighting.
"""
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
data = np.ones((len(dates), len(lats), len(lons)))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon']).chunk()
@pytest.fixture
def weights_latitude_dask():
"""
Weighting array by cosine of the latitude.
"""
dates = pd.date_range('1/1/2000', '1/3/2000', freq='D')
lats = np.arange(4)
lons = np.arange(5)
cos = np.abs(np.cos(lats))
data = np.tile(cos, (len(dates), len(lons), 1)).reshape(len(dates), len(lats), len(lons))
return xr.DataArray(data,
coords=[dates, lats, lons],
dims=['time', 'lat', 'lon']).chunk()
def adjust_weights(weight, dim, weights_ones, weights_latitude):
"""
Adjust the weights test data to only span the core dimension
that the function is being applied over.
"""
drop_dims = [i for i in weights_ones.dims if i not in dim]
drop_dims = {k: 0 for k in drop_dims}
if weight:
weights_arg = weights_latitude.isel(drop_dims)
weights_np = weights_latitude.isel(drop_dims)
else:
weights_arg = None
weights_np = weights_ones.isel(drop_dims)
return weights_arg, weights_np
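The `drop_dims` computation above maps every weights dimension that is not a core dimension of the metric to index 0, which `.isel` then uses to collapse it. A plain-Python sketch of just that indexer-building step (`core_dim_selector` is an illustrative name, not from the test suite; it also normalises a bare string `dim` to a tuple, sidestepping the substring behaviour of `in` on strings):

```python
def core_dim_selector(all_dims, core_dims):
    """Build the .isel indexer that collapses every non-core dimension."""
    if isinstance(core_dims, str):
        core_dims = (core_dims,)
    return {d: 0 for d in all_dims if d not in core_dims}

print(core_dim_selector(('time', 'lat', 'lon'), 'time'))          # {'lat': 0, 'lon': 0}
print(core_dim_selector(('time', 'lat', 'lon'), ('lat', 'lon')))  # {'time': 0}
```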
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_pearson_r_xr(a, b, dim, weight, weights_ones, weights_latitude):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones, weights_latitude)
actual = pearson_r(a, b, dim, weights=weights_arg)
assert actual.chunks is None
dim, _ = _preprocess_dims(dim)
if len(dim) > 1:
new_dim = '_'.join(dim)
_a = a.stack(**{new_dim: dim})
_b = b.stack(**{new_dim: dim})
_weights_np = weights_np.stack(**{new_dim: dim})
else:
new_dim = dim[0]
_a = a
_b = b
_weights_np = weights_np
_weights_np = _preprocess_weights(_a, dim, new_dim, _weights_np)
axis = _a.dims.index(new_dim)
res = _pearson_r(_a.values, _b.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
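For reference, the weighted Pearson correlation these tests exercise can be written directly in NumPy. This is an assumed textbook implementation, not the library's private `_pearson_r`; with uniform weights it must agree with `np.corrcoef`:

```python
import numpy as np

def weighted_pearson_r(a, b, w, axis=0):
    """Weighted Pearson correlation of a and b along one axis."""
    a = np.moveaxis(a, axis, -1)
    b = np.moveaxis(b, axis, -1)
    w = np.moveaxis(w, axis, -1)
    w = w / w.sum(axis=-1, keepdims=True)      # normalise the weights
    ma = (w * a).sum(axis=-1, keepdims=True)   # weighted means
    mb = (w * b).sum(axis=-1, keepdims=True)
    cov = (w * (a - ma) * (b - mb)).sum(axis=-1)
    va = (w * (a - ma) ** 2).sum(axis=-1)
    vb = (w * (b - mb) ** 2).sum(axis=-1)
    return cov / np.sqrt(va * vb)

rng = np.random.default_rng(0)
x, y = rng.random(50), rng.random(50)
ones = np.ones(50)
print(np.allclose(weighted_pearson_r(x, y, ones), np.corrcoef(x, y)[0, 1]))  # True
```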
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_pearson_r_xr_dask(a_dask, b_dask, dim, weight, weights_ones_dask, weights_latitude_dask):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones_dask, weights_latitude_dask)
actual = pearson_r(a_dask, b_dask, dim, weights=weights_arg)
assert actual.chunks is not None
dim, _ = _preprocess_dims(dim)
if len(dim) > 1:
new_dim = '_'.join(dim)
_a_dask = a_dask.stack(**{new_dim: dim})
_b_dask = b_dask.stack(**{new_dim: dim})
_weights_np = weights_np.stack(**{new_dim: dim})
else:
new_dim = dim[0]
_a_dask = a_dask
_b_dask = b_dask
_weights_np = weights_np
_weights_np = _preprocess_weights(_a_dask, dim, new_dim, _weights_np)
axis = _a_dask.dims.index(new_dim)
res = _pearson_r(_a_dask.values, _b_dask.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_pearson_r_p_value_xr(a, b, dim, weight, weights_ones, weights_latitude):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones, weights_latitude)
actual = pearson_r_p_value(a, b, dim, weights=weights_arg)
assert actual.chunks is None
dim, _ = _preprocess_dims(dim)
if len(dim) > 1:
new_dim = '_'.join(dim)
_a = a.stack(**{new_dim: dim})
_b = b.stack(**{new_dim: dim})
_weights_np = weights_np.stack(**{new_dim: dim})
else:
new_dim = dim[0]
_a = a
_b = b
_weights_np = weights_np
_weights_np = _preprocess_weights(_a, dim, new_dim, _weights_np)
axis = _a.dims.index(new_dim)
res = _pearson_r_p_value(_a.values, _b.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_pearson_r_p_value_xr_dask(a_dask, b_dask, dim, weight, weights_ones_dask, weights_latitude_dask):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones_dask, weights_latitude_dask)
actual = pearson_r_p_value(a_dask, b_dask, dim, weights=weights_arg)
assert actual.chunks is not None
dim, _ = _preprocess_dims(dim)
if len(dim) > 1:
new_dim = '_'.join(dim)
_a_dask = a_dask.stack(**{new_dim: dim})
_b_dask = b_dask.stack(**{new_dim: dim})
_weights_np = weights_np.stack(**{new_dim: dim})
else:
new_dim = dim[0]
_a_dask = a_dask
_b_dask = b_dask
_weights_np = weights_np
_weights_np = _preprocess_weights(_a_dask, dim, new_dim, _weights_np)
axis = _a_dask.dims.index(new_dim)
res = _pearson_r_p_value(_a_dask.values, _b_dask.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_rmse_r_xr(a, b, dim, weight, weights_ones, weights_latitude):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones, weights_latitude)
actual = rmse(a, b, dim, weights=weights_arg)
assert actual.chunks is None
dim, axis = _preprocess_dims(dim)
_a = a
_b = b
_weights_np = _preprocess_weights(_a, dim, dim, weights_np)
axis = tuple(a.dims.index(d) for d in dim)
res = _rmse(_a.values, _b.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_rmse_r_xr_dask(a_dask, b_dask, dim, weight, weights_ones_dask, weights_latitude_dask):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones_dask, weights_latitude_dask)
actual = rmse(a_dask, b_dask, dim, weights=weights_arg)
assert actual.chunks is not None
dim, axis = _preprocess_dims(dim)
_a_dask = a_dask
_b_dask = b_dask
_weights_np = _preprocess_weights(_a_dask, dim, dim, weights_np)
axis = tuple(a_dask.dims.index(d) for d in dim)
res = _rmse(_a_dask.values, _b_dask.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_mse_r_xr(a, b, dim, weight, weights_ones, weights_latitude):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones, weights_latitude)
actual = mse(a, b, dim, weights=weights_arg)
assert actual.chunks is None
dim, axis = _preprocess_dims(dim)
_a = a
_b = b
_weights_np = _preprocess_weights(_a, dim, dim, weights_np)
axis = tuple(a.dims.index(d) for d in dim)
res = _mse(_a.values, _b.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_mse_r_xr_dask(a_dask, b_dask, dim, weight, weights_ones_dask, weights_latitude_dask):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones_dask, weights_latitude_dask)
actual = mse(a_dask, b_dask, dim, weights=weights_arg)
assert actual.chunks is not None
dim, axis = _preprocess_dims(dim)
_a_dask = a_dask
_b_dask = b_dask
_weights_np = _preprocess_weights(_a_dask, dim, dim, weights_np)
axis = tuple(a_dask.dims.index(d) for d in dim)
res = _mse(_a_dask.values, _b_dask.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_mae_r_xr(a, b, dim, weight, weights_ones, weights_latitude):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones, weights_latitude)
actual = mae(a, b, dim, weights=weights_arg)
assert actual.chunks is None
dim, axis = _preprocess_dims(dim)
_a = a
_b = b
_weights_np = _preprocess_weights(_a, dim, dim, weights_np)
axis = tuple(a.dims.index(d) for d in dim)
res = _mae(_a.values, _b.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
@pytest.mark.parametrize('dim', AXES)
@pytest.mark.parametrize('weight', [True, False])
def test_mae_r_xr_dask(a_dask, b_dask, dim, weight, weights_ones_dask, weights_latitude_dask):
# Generates subsetted weights to pass in as arg to main function and for the numpy testing.
weights_arg, weights_np = adjust_weights(weight, dim, weights_ones_dask, weights_latitude_dask)
actual = mae(a_dask, b_dask, dim, weights=weights_arg)
assert actual.chunks is not None
dim, axis = _preprocess_dims(dim)
_a_dask = a_dask
_b_dask = b_dask
_weights_np = _preprocess_weights(_a_dask, dim, dim, weights_np)
axis = tuple(a_dask.dims.index(d) for d in dim)
res = _mae(_a_dask.values, _b_dask.values, _weights_np.values, axis)
expected = actual.copy()
expected.values = res
assert_allclose(actual, expected)
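The rmse/mse/mae tests above all check the same shape of computation. The assumed textbook definitions of the weighted metrics, in plain NumPy (these mirror but are not the library's private `_rmse`/`_mse`/`_mae`):

```python
import numpy as np

def w_mse(a, b, w, axis=None):
    """Weighted mean squared error."""
    w = w / w.sum(axis=axis, keepdims=True)
    return (w * (a - b) ** 2).sum(axis=axis)

def w_rmse(a, b, w, axis=None):
    """Weighted root mean squared error."""
    return np.sqrt(w_mse(a, b, w, axis=axis))

def w_mae(a, b, w, axis=None):
    """Weighted mean absolute error."""
    w = w / w.sum(axis=axis, keepdims=True)
    return (w * np.abs(a - b)).sum(axis=axis)

a = np.array([1.0, 2.0, 4.0])
b = np.array([1.0, 1.0, 1.0])
w = np.ones(3)
print(w_mse(a, b, w))  # (0 + 1 + 9) / 3, i.e. about 3.333
print(w_mae(a, b, w))  # (0 + 1 + 3) / 3, i.e. about 1.333
```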
# coding: utf-8
# File: nexus_api_python_client/api/routing_rules_api.py
# Repo: simonebruzzechesse/nexus-api-python-client (MIT license)
"""
Nexus Repository Manager REST API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 3.20.1-01
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from nexus_api_python_client.api_client import ApiClient
from nexus_api_python_client.exceptions import (
ApiTypeError,
ApiValueError
)
class RoutingRulesApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_routing_rule(self, body, **kwargs): # noqa: E501
"""Create a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_routing_rule(body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param RoutingRuleXO body: A routing rule configuration (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_routing_rule_with_http_info(body, **kwargs) # noqa: E501
def create_routing_rule_with_http_info(self, body, **kwargs): # noqa: E501
"""Create a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_routing_rule_with_http_info(body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param RoutingRuleXO body: A routing rule configuration (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_routing_rule" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'body' is set
if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501
local_var_params['body'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `body` when calling `create_routing_rule`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/beta/routing-rules', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_routing_rule(self, name, **kwargs): # noqa: E501
"""Delete a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_routing_rule(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to delete (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_routing_rule_with_http_info(name, **kwargs) # noqa: E501
def delete_routing_rule_with_http_info(self, name, **kwargs): # noqa: E501
"""Delete a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_routing_rule_with_http_info(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to delete (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_routing_rule" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'name' is set
if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501
local_var_params['name'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `name` when calling `delete_routing_rule`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in local_var_params:
path_params['name'] = local_var_params['name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/beta/routing-rules/{name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_routing_rule(self, name, **kwargs): # noqa: E501
"""Get a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_routing_rule(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to get (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: RoutingRuleXO
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_routing_rule_with_http_info(name, **kwargs) # noqa: E501
def get_routing_rule_with_http_info(self, name, **kwargs): # noqa: E501
"""Get a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_routing_rule_with_http_info(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to get (required)
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(RoutingRuleXO, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_routing_rule" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'name' is set
if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501
local_var_params['name'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `name` when calling `get_routing_rule`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in local_var_params:
path_params['name'] = local_var_params['name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/beta/routing-rules/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RoutingRuleXO', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_routing_rules(self, **kwargs): # noqa: E501
"""List routing rules # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_routing_rules(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[RoutingRuleXO]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_routing_rules_with_http_info(**kwargs) # noqa: E501
def get_routing_rules_with_http_info(self, **kwargs): # noqa: E501
"""List routing rules # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_routing_rules_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[RoutingRuleXO], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_routing_rules" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/beta/routing-rules', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[RoutingRuleXO]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def update_routing_rule(self, name, body, **kwargs): # noqa: E501
"""Update a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_routing_rule(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to update (required)
:param RoutingRuleXO body: A routing rule configuration (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_routing_rule_with_http_info(name, body, **kwargs) # noqa: E501
def update_routing_rule_with_http_info(self, name, body, **kwargs): # noqa: E501
"""Update a single routing rule # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_routing_rule_with_http_info(name, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: The name of the routing rule to update (required)
:param RoutingRuleXO body: A routing rule configuration (required)
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['name', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_routing_rule" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'name' is set
if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501
local_var_params['name'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `name` when calling `update_routing_rule`") # noqa: E501
# verify the required parameter 'body' is set
if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501
local_var_params['body'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `body` when calling `update_routing_rule`") # noqa: E501
collection_formats = {}
path_params = {}
if 'name' in local_var_params:
path_params['name'] = local_var_params['name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in local_var_params:
body_params = local_var_params['body']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/beta/routing-rules/{name}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
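Every generated method above follows the same kwargs-whitelisting convention: each keyword argument is checked against an explicit list of accepted parameters before the request is dispatched, and anything unexpected raises `ApiTypeError`. A self-contained sketch of that pattern follows; `delete_rule` and its return value are hypothetical stand-ins for `delete_routing_rule_with_http_info`, not the real client API.

```python
class ApiTypeError(TypeError):
    """Raised when a method receives an unexpected keyword argument."""
    pass


def delete_rule(name, **kwargs):
    # Whitelist of accepted keyword arguments, built the same way as in
    # the generated methods above.
    all_params = ['name']
    all_params.append('async_req')
    all_params.append('_return_http_data_only')
    all_params.append('_preload_content')
    all_params.append('_request_timeout')
    for key in kwargs:
        if key not in all_params:
            raise ApiTypeError(
                "Got an unexpected keyword argument '%s'"
                " to method delete_rule" % key
            )
    # The real method would now build path/header params and call the API;
    # here we just return the HTTP verb and resolved path for illustration.
    return 'DELETE', '/beta/routing-rules/{name}'.replace('{name}', name)


method, path = delete_rule('stale-rule', async_req=False)
assert (method, path) == ('DELETE', '/beta/routing-rules/stale-rule')
```

The explicit whitelist trades flexibility for early, readable failures: a typo such as `asynch_req=True` fails immediately with a named error instead of being silently ignored.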
# mtg_ssm/serialization/__init__.py
"""Ensure that all serializers are imported to properly set up interface."""
import mtg_ssm.serialization.csv
import mtg_ssm.serialization.interface
import mtg_ssm.serialization.xlsx
# tests/providers/data5u_provider_test.py
from scylla.providers import Data5uProvider
from tests.providers.helpers import assert_provider
def test_cool_proxy_provider():
assert_provider(Data5uProvider())
# tests/test_import.py
import atmopy
def test_atmopy_version():
assert atmopy.__version__ == "0.1.dev0"
# tests/unit_tests/test_component.py
import pytest
import rubric
from rubric import *
def test_nest_dict():
    # Simple nesting
schema = {
'a': int,
'b': str,
'c': {
'c1': Int(validator=lambda x: x < 10)
}
}
rubric.validate(schema, {
'a': 1,
'b': '2',
'c': {
'c1': 3
}
})
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': 1,
'b': '2',
'c': {
'c1': 11
}
})
def test_nest_dict1():
    # Extra keys are not allowed
schema = {
'a': int,
'b': str
}
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': 1,
'b': '2',
'c': 1
})
def test_nest_dict2():
    # Keys may be omitted, but only if a default value is provided
schema = {
'a': int,
'b': str
}
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': 1,
})
schema = {
'a': int,
'b': Str(default='1')
}
rubric.validate(schema, {
'a': 1
})
def test_nest_dict3():
    # Deeper nesting
schema = {
'a': int,
'b': str,
'c': {
'c1': Int(validator=lambda x: x < 10),
'd': {
'd1': Int(validator=lambda x: x < 10)
}
}
}
rubric.validate(schema, {
'a': 1,
'b': '2',
'c': {
'c1': 3,
'd': {
'd1': 9
}
}
})
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': 1,
'b': '2',
'c': {
'c1': 3,
'd': {
'd1': 11
}
}
})
def test_dict_list():
    # Dicts and lists
schema = {
'a': [],
'b': [
{
'c': int,
'd': 3,
'e': [9]
}
]
}
rubric.validate(schema, {
'a': [],
'b': [
{
'c': 1,
'd': 3,
'e': [9]
}
]
})
rubric.validate(schema, {
'a': [],
'b': [
{
'c': 1,
'd': 3,
'e': [9, 9]
}
]
})
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': [1],
'b': [
{
'c': 1,
'd': 3,
'e': [9]
}
]
})
with pytest.raises(ValidateError):
rubric.validate(schema, {
'a': [],
'b': [
{
'c': 1,
'd': 3,
'e': [8]
}
]
})
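The tests above exercise recursive schema validation: dicts recurse per key, lists apply their element schema to every element, types constrain values, and anything else is matched as a literal. Below is a minimal, self-contained sketch of that idea. It is a toy, not the real `rubric` library: it requires exact key matches and does not model the `Int`/`Str` wrappers, defaults, or custom validators seen in the tests.

```python
class ValidateError(Exception):
    """Raised when data does not conform to the schema."""
    pass


def validate(schema, data):
    if isinstance(schema, dict):
        # Dicts: keys must match exactly, then recurse into each value.
        if not isinstance(data, dict) or set(data) != set(schema):
            raise ValidateError('dict keys do not match schema')
        for key, sub_schema in schema.items():
            validate(sub_schema, data[key])
    elif isinstance(schema, list):
        # Lists: the first element schema constrains every data element;
        # an empty schema list means the data list must also be empty.
        if not isinstance(data, list):
            raise ValidateError('expected a list')
        if schema:
            for item in data:
                validate(schema[0], item)
        elif data:
            raise ValidateError('expected an empty list')
    elif isinstance(schema, type):
        # Bare types constrain the value's type.
        if not isinstance(data, schema):
            raise ValidateError('expected %s' % schema.__name__)
    elif schema != data:
        # Everything else is matched as a literal value.
        raise ValidateError('expected literal %r' % (schema,))


validate({'a': int, 'b': [9]}, {'a': 1, 'b': [9, 9]})  # passes
```

Keeping the dispatch in one recursive function makes each rule easy to extend — adding defaults, for example, would only touch the dict branch.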
# sknni/internals/__init__.py
from ._nni_config_generator import generate as nni_config_generator
from ._pipeline_builder import PipelineBuilder
from ._utils import get_class
| 36.25 | 67 | 0.882759 | 20 | 145 | 5.95 | 0.65 | 0.151261 | 0.302521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096552 | 145 | 3 | 68 | 48.333333 | 0.908397 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e23e01ba27d957314c60d6c79e81612fba633ebe | 184 | py | Python | Trakttv.bundle/Contents/Libraries/Shared/plugin/core/libraries/helpers/__init__.py | disrupted/Trakttv.bundle | 24712216c71f3b22fd58cb5dd89dad5bb798ed60 | [
"RSA-MD"
] | 1,346 | 2015-01-01T14:52:24.000Z | 2022-03-28T12:50:48.000Z | Trakttv.bundle/Contents/Libraries/Shared/plugin/core/libraries/helpers/__init__.py | alcroito/Plex-Trakt-Scrobbler | 4f83fb0860dcb91f860d7c11bc7df568913c82a6 | [
"RSA-MD"
] | 474 | 2015-01-01T10:27:46.000Z | 2022-03-21T12:26:16.000Z | Trakttv.bundle/Contents/Libraries/Shared/plugin/core/libraries/helpers/__init__.py | alcroito/Plex-Trakt-Scrobbler | 4f83fb0860dcb91f860d7c11bc7df568913c82a6 | [
"RSA-MD"
] | 191 | 2015-01-02T18:27:22.000Z | 2022-03-29T10:49:48.000Z | from plugin.core.libraries.helpers.path import PathHelper
from plugin.core.libraries.helpers.storage import StorageHelper
from plugin.core.libraries.helpers.system import SystemHelper
# dtech_instagram/worker/__init__.py
import dtech_instagram.worker.post
import dtech_instagram.worker.analytics
# example/tests/test_app.py
from app import app
def test_hello_world():
assert app.hello_world() == "Hello world!"
# tests/equipment/thorlabs_stage.py
"""
Test that photons/equipment/thorlabs_stage.py is working properly.
"""
from time import sleep
import connect
app, stage = connect.device('stage-y', 'Is it safe to move?')
info = stage.info()
assert info == {'unit': ' mm', 'minimum': 0.0, 'maximum': 13.0}, str(info)
stage.home(wait=False)
while stage.is_moving():
app.logger.info(f'homing, at position {stage.get_position()}')
sleep(0.05)
assert stage.get_position() == 0.0
sleep(1)
stage.home()
assert stage.get_position() == 0.0
sleep(1)
stage.set_position(5)
assert stage.get_position() == 5
sleep(1)
stage.set_position(5.01)
assert stage.get_position() == 5.01
sleep(1)
stage.set_position(5.01)
assert stage.get_position() == 5.01
sleep(1)
stage.set_position(2.0, wait=False)
while stage.is_moving():
app.logger.info(f'moving stage to position 2.0, at position {stage.get_position()}')
sleep(0.05)
assert stage.get_position() == 2.0
sleep(1)
stage.set_position(2.0, wait=False)
while stage.is_moving():
app.logger.info(f'moving stage to position 2.0, at position {stage.get_position()}')
sleep(0.05)
assert stage.get_position() == 2.0
sleep(1)
stage.set_position(1.99, wait=False)
while stage.is_moving():
app.logger.info(f'moving stage to position 1.99, at position {stage.get_position()}')
sleep(0.05)
assert stage.get_position() == 1.99
app.disconnect_equipment()
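The non-blocking moves above all follow one pattern: issue the command with `wait=False`, then poll `is_moving()` until the stage settles. The sketch below illustrates that pattern with a self-contained fake; `FakeStage` is a hypothetical stand-in for the real Thorlabs driver, with motion emulated in one-unit steps per poll.

```python
class FakeStage:
    """Toy stage that moves toward its target one unit per position poll."""

    def __init__(self):
        self._position = 0.0
        self._target = 0.0

    def set_position(self, value, wait=True):
        self._target = value
        if wait:
            # Blocking move: jump straight to the target.
            self._position = value

    def is_moving(self):
        return self._position != self._target

    def get_position(self):
        # Step toward the target to emulate gradual motion.
        if self._position < self._target:
            self._position = min(self._position + 1.0, self._target)
        elif self._position > self._target:
            self._position = max(self._position - 1.0, self._target)
        return self._position


stage = FakeStage()
stage.set_position(2.0, wait=False)  # non-blocking move
while stage.is_moving():             # poll until the stage settles
    stage.get_position()
assert stage.get_position() == 2.0
```

Polling like this keeps the caller responsive (it can log positions, as the script above does), whereas `wait=True` blocks until the move completes.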
# python/testData/inspections/PyUnresolvedReferencesInspection3K/FromNamespacePackageImportInManySourceRoots/a.py
from nspkg1 import m1
from nspkg1 import m2
print(m1, m2)
# sabueso/forms/api_string_pdb.py
form_name='string:pdb'
is_form = {
'string:pdb': form_name,
}
###### Get
def get_entity_index(item, indices='all'):
raise NotImplementedError
def get_entity_name(item, indices='all'):
raise NotImplementedError
def get_entity_id(item, indices='all'):
raise NotImplementedError
def get_entity_type(item, indices='all'):
raise NotImplementedError
def get_n_entities(item, indices='all'):
raise NotImplementedError
def is_ion(item, indices='all'):
raise NotImplementedError
def is_water(item, indices='all'):
raise NotImplementedError
def is_cosolute(item, indices='all'):
raise NotImplementedError
def is_small_molecule(item, indices='all'):
raise NotImplementedError
def is_lipid(item, indices='all'):
raise NotImplementedError
def is_peptide(item, indices='all'):
raise NotImplementedError
def is_protein(item, indices='all'):
raise NotImplementedError
def is_rna(item, indices='all'):
raise NotImplementedError
def is_dna(item, indices='all'):
raise NotImplementedError
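The `is_form` dict at the top of this module maps a form identifier to the module that handles it, so callers can dispatch an item to the right backend. A hypothetical sketch of that registry pattern follows; the `forms` mapping and `form_of` heuristic are assumptions for illustration, not sabueso's actual API.

```python
# Hypothetical registry: form identifier -> handler module name.
forms = {'string:pdb': 'api_string_pdb'}


def form_of(item):
    # A PDB id is four characters: a digit followed by three alphanumerics.
    if isinstance(item, str) and len(item) == 4 and item[0].isdigit():
        return 'string:pdb'
    raise ValueError('unknown form: %r' % (item,))


assert forms[form_of('1BRS')] == 'api_string_pdb'
```

With such a registry, adding support for a new form only requires registering one entry rather than editing every dispatch site.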
# Hello World Programs/hello_world.py
# basic python program
print("double quoted: hello world")
print('single quoted: hello world')
# tests/test_submission_builder.py
import unittest
import os.path
import shutil
import numpy as np
import scoring_program.tests.test_helpers as th
import scoring_program.submission_loader as submission_loader
import scoring_program.class_list as class_list
import starter_kit.submission_builder as submission_builder
class TestMakeDetection(unittest.TestCase):
def test_makes_valid_detection_without_covars(self):
confidences = [0.1, 0.2, 0.3, 0.4]
det = submission_builder.make_detection(confidences, 1, 3, 12, 14)
self.assertIn('bbox', det)
self.assertIn('label_probs', det)
self.assertNotIn('covars', det)
self.assertEqual([1, 3, 12, 14], det['bbox'])
self.assertEqual(confidences, det['label_probs'])
def test_makes_valid_detection_with_covars(self):
confidences = [0.1, 0.2, 0.3, 0.4]
upper_left = [[3, 1], [1, 4]]
lower_right = [[10, 0], [0, 15]]
det = submission_builder.make_detection(confidences, 1, 3, 12, 14, upper_left, lower_right)
self.assertIn('bbox', det)
self.assertIn('label_probs', det)
self.assertIn('covars', det)
self.assertEqual([1, 3, 12, 14], det['bbox'])
self.assertEqual(confidences, det['label_probs'])
self.assertEqual([upper_left, lower_right], det['covars'])
def test_errors_if_xmax_less_than_xmin(self):
with self.assertRaises(ValueError) as cm:
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4], 15, 3, 2, 14)
msg = str(cm.exception)
self.assertIn('xmax', msg)
self.assertIn('xmin', msg)
def test_errors_if_ymax_less_than_ymin(self):
with self.assertRaises(ValueError) as cm:
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4], 1, 31, 12, 14)
msg = str(cm.exception)
self.assertIn('ymax', msg)
self.assertIn('ymin', msg)
def test_normalizes_probabilities_greater_than_1(self):
probs = [0.1, 0.2, 0.3, 0.4, 0.5]
total_prob = sum(probs)
normalized_probs = [v / total_prob for v in probs]
detection = submission_builder.make_detection(probs, 1, 3, 12, 14)
self.assertEqual(detection['label_probs'], normalized_probs)
def test_errors_if_only_one_covar_given(self):
cov = [[3, 1], [1, 4]]
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14, upper_left_cov=cov)
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14, lower_right_cov=cov)
def test_errors_if_covar_is_not_2x2(self):
cov1 = [[3, 1, 2], [1, 4, 3]]
cov2 = [[3, 1], [1, 4]]
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov1, lower_right_cov=cov2)
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov2, lower_right_cov=cov1)
def test_errors_if_covar_is_not_symmetric(self):
cov1 = [[3, 2], [1, 3]]
cov2 = [[3, 1], [1, 4]]
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov1, lower_right_cov=cov2)
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov2, lower_right_cov=cov1)
def test_errors_if_covar_is_not_positive_definite(self):
cov1 = [[1, 4], [4, 1]]
cov2 = [[3, 1], [1, 4]]
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov1, lower_right_cov=cov2)
with self.assertRaises(ValueError):
submission_builder.make_detection([0.1, 0.2, 0.3, 0.4, 0.5], 1, 3, 12, 14,
upper_left_cov=cov2, lower_right_cov=cov1)
class TestSubmissionBuilder(th.ExtendedTestCase):
temp_dir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'temp')
def tearDown(self):
if os.path.isdir(self.temp_dir):
shutil.rmtree(self.temp_dir)
def test_integration(self):
# Make an example submission
submission = {
'000000': [
[{
'classes': [0.1, 0.4, 0.2, 0.3],
'bbox': [1, 2, 14, 15]
}, {
'classes': [0.8, 0.1, 0.05, 0.05],
'bbox': [1, 2, 14, 15],
'covars': [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]
}],
[],
[{
'classes': [0.4, 0.1, 0.3, 0.2],
'bbox': [1, 2, 14, 15],
'covars': [[[10, 2], [2, 10]], [[1, 0], [0, 100]]]
}, {
'classes': [0.1, 0.7, 0.1, 0.1],
'bbox': [1, 2, 14, 15],
'covars': [[[5, 0], [0, 15]], [[16, 1], [1, 8]]]
}],
[{
'classes': [0.4, 0.1, 0.4, 0.1],
'bbox': [11, 12, 44, 55],
'covars': [[[15, 0], [0, 21]], [[126, 2], [2, 18]]]
}],
[{
'classes': [0.9, 0.01, 0.04, 0.05],
'bbox': [13, 14, 46, 57],
'covars': [[[51, 0], [0, 15]], [[1, 1], [1, 1]]]
}]
],
'000006': [
[],
[{
'classes': [0.1, 0.4, 0.2, 0.3],
'bbox': [1, 2, 14, 15],
'covars': [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]
}, {
'classes': [0.8, 0.1, 0.05, 0.05],
'bbox': [1, 2, 14, 15]
}],
[],
[{
'classes': [0.4, 0.1, 0.3, 0.2],
'bbox': [11, 12, 44, 55],
'covars': [[[10, 2], [2, 10]], [[1, 0], [0, 100]]]
}, {
'classes': [0.1, 0.7, 0.1, 0.1],
'bbox': [1, 2, 14, 15],
'covars': [[[5, 0], [0, 15]], [[16, 1], [1, 8]]]
}],
[],
[{
'classes': [0.4, 0.1, 0.4, 0.1],
'bbox': [1, 2, 14, 15]
}, {
'classes': [0.9, 0.01, 0.04, 0.05],
'bbox': [13, 14, 46, 57],
'covars': [[[51, 0], [0, 15]], [[2, 1], [1, 2]]]
}],
[]
]
}
classes = class_list.CLASSES[1:5]
# Write our submission to file
writer = submission_builder.SubmissionWriter(self.temp_dir, classes)
for sequence_name, sequence_data in submission.items():
for detections in sequence_data:
for detection in detections:
if 'covars' in detection:
writer.add_detection(
class_probabilities=detection['classes'],
xmin=detection['bbox'][0],
ymin=detection['bbox'][1],
xmax=detection['bbox'][2],
ymax=detection['bbox'][3],
upper_left_cov=detection['covars'][0],
lower_right_cov=detection['covars'][1]
)
else:
writer.add_detection(
class_probabilities=detection['classes'],
xmin=detection['bbox'][0],
ymin=detection['bbox'][1],
xmax=detection['bbox'][2],
ymax=detection['bbox'][3]
)
writer.next_image()
writer.save_sequence(sequence_name)
# Read that submission using the submission loader, and check that it's the same
loaded_sequences = submission_loader.read_submission(self.temp_dir, set(submission.keys()))
self.assertEqual(set(submission.keys()), set(loaded_sequences.keys()))
for sequence_name, generator in loaded_sequences.items():
detections = list(generator)
self.assertEqual(len(submission[sequence_name]), len(detections))
for img_idx in range(len(detections)):
img_detections = list(detections[img_idx])
self.assertEqual(len(submission[sequence_name][img_idx]), len(img_detections))
for det_idx in range(len(img_detections)):
sub_det = submission[sequence_name][img_idx][det_idx]
expected_classes = np.zeros(len(class_list.CLASSES), dtype=np.float32)
expected_classes[1:5] = sub_det['classes']
self.assertNPEqual(expected_classes, img_detections[det_idx].class_list)
self.assertNPEqual(sub_det['bbox'], img_detections[det_idx].box)
if 'covars' in sub_det:
self.assertNPEqual(sub_det['covars'], img_detections[det_idx].covs)
def test_integration_numpy(self):
# Make an example submission
submission = {
'000000': [
[{
'classes': np.array([0.1, 0.4, 0.2, 0.3]),
'bbox': np.array([1, 2, 14, 15])
}, {
'classes': np.array([0.8, 0.1, 0.05, 0.05]),
'bbox': np.array([1, 2, 14, 15]),
'covars': np.array([[[1, 0], [0, 1]], [[1, 0], [0, 1]]])
}],
[],
[{
'classes': np.array([0.4, 0.1, 0.3, 0.2]),
'bbox': np.array([1, 2, 14, 15]),
'covars': np.array([[[10, 2], [2, 10]], [[1, 0], [0, 100]]])
}, {
'classes': np.array([0.1, 0.7, 0.1, 0.1]),
'bbox': np.array([1, 2, 14, 15]),
'covars': np.array([[[5, 0], [0, 15]], [[16, 1], [1, 8]]])
}],
[{
'classes': np.array([0.4, 0.1, 0.4, 0.1]),
'bbox': np.array([11, 12, 44, 55]),
'covars': np.array([[[15, 0], [0, 21]], [[126, 2], [2, 18]]])
}],
[{
'classes': np.array([0.9, 0.01, 0.04, 0.05]),
'bbox': np.array([13, 14, 46, 57]),
'covars': np.array([[[51, 0], [0, 15]], [[1, 1], [1, 1]]])
}]
],
'000006': [
[],
[{
'classes': np.array([0.1, 0.4, 0.2, 0.3]),
'bbox': np.array([1, 2, 14, 15]),
'covars': np.array([[[1, 0], [0, 1]], [[1, 0], [0, 1]]])
}, {
'classes': np.array([0.8, 0.1, 0.05, 0.05]),
'bbox': np.array([1, 2, 14, 15])
}],
[],
[{
'classes': np.array([0.4, 0.1, 0.3, 0.2]),
'bbox': np.array([11, 12, 44, 55]),
'covars': np.array([[[10, 2], [2, 10]], [[1, 0], [0, 100]]])
}, {
'classes': np.array([0.1, 0.7, 0.1, 0.1]),
'bbox': np.array([1, 2, 14, 15]),
'covars': np.array([[[5, 0], [0, 15]], [[16, 1], [1, 8]]])
}],
[],
[{
'classes': np.array([0.4, 0.1, 0.4, 0.1]),
'bbox': np.array([1, 2, 14, 15])
}, {
'classes': np.array([0.9, 0.01, 0.04, 0.05]),
'bbox': np.array([13, 14, 46, 57]),
'covars': np.array([[[51, 0], [0, 15]], [[2, 1], [1, 2]]])
}],
[]
]
}
classes = class_list.CLASSES[1:5]
# Write our submission to file
writer = submission_builder.SubmissionWriter(self.temp_dir, classes)
for sequence_name, sequence_data in submission.items():
for detections in sequence_data:
for detection in detections:
if 'covars' in detection:
writer.add_detection(
class_probabilities=detection['classes'],
xmin=detection['bbox'][0],
ymin=detection['bbox'][1],
xmax=detection['bbox'][2],
ymax=detection['bbox'][3],
upper_left_cov=detection['covars'][0],
lower_right_cov=detection['covars'][1]
)
else:
writer.add_detection(
class_probabilities=detection['classes'],
xmin=detection['bbox'][0],
ymin=detection['bbox'][1],
xmax=detection['bbox'][2],
ymax=detection['bbox'][3]
)
writer.next_image()
writer.save_sequence(sequence_name)
# Read that submission using the submission loader, and check that it's the same
loaded_sequences = submission_loader.read_submission(self.temp_dir, set(submission.keys()))
self.assertEqual(set(submission.keys()), set(loaded_sequences.keys()))
for sequence_name, generator in loaded_sequences.items():
detections = list(generator)
self.assertEqual(len(submission[sequence_name]), len(detections))
for img_idx in range(len(detections)):
img_detections = list(detections[img_idx])
self.assertEqual(len(submission[sequence_name][img_idx]), len(img_detections))
for det_idx in range(len(img_detections)):
sub_det = submission[sequence_name][img_idx][det_idx]
expected_classes = np.zeros(len(class_list.CLASSES), dtype=np.float32)
expected_classes[1:5] = sub_det['classes']
self.assertNPEqual(expected_classes, img_detections[det_idx].class_list)
self.assertNPEqual(sub_det['bbox'], img_detections[det_idx].box)
if 'covars' in sub_det:
self.assertNPEqual(sub_det['covars'], img_detections[det_idx].covs)
| 46.769939 | 107 | 0.459828 | 1,787 | 15,247 | 3.781198 | 0.099049 | 0.015687 | 0.016427 | 0.010064 | 0.848157 | 0.838982 | 0.833506 | 0.819742 | 0.807015 | 0.803167 | 0 | 0.094811 | 0.385715 | 15,247 | 325 | 108 | 46.913846 | 0.626628 | 0.017512 | 0 | 0.788591 | 0 | 0 | 0.047753 | 0 | 0 | 0 | 0 | 0 | 0.127517 | 1 | 0.040268 | false | 0 | 0.026846 | 0 | 0.077181 | 0.006711 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1aadd2c9c31e79e860ac1505f38474adf7c6bba7 | 155 | py | Python | cases/unary.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | 1 | 2019-04-30T16:27:19.000Z | 2019-04-30T16:27:19.000Z | cases/unary.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | null | null | null | cases/unary.py | minakoyang/YY_python2.7_interpreter_in_CPP | e949f4bbd27752e6dbfef0a887d9567345d512f4 | [
"MIT"
] | null | null | null | print -1/2
print -1/2 - 1/2
print -1/2 + (-1/2)
print -1/2 - 1/3
print -1.5/2 + 42.3/23
print -1---------2
print -1--------2
print +++++++10
print -----8
| 15.5 | 22 | 0.477419 | 35 | 155 | 2.114286 | 0.257143 | 0.216216 | 0.567568 | 0.432432 | 0.635135 | 0.635135 | 0.378378 | 0.378378 | 0.378378 | 0.378378 | 0 | 0.226563 | 0.174194 | 155 | 9 | 23 | 17.222222 | 0.351563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
200aed62fd480f6661ca2ea9944434fc4e7de8c7 | 44,313 | py | Python | uitests/tests.py | wichmannpas/todoscheduler-webclient | ee8256b9bc8cba475f18721c92550b3dd9023ca4 | [
"Apache-2.0"
] | null | null | null | uitests/tests.py | wichmannpas/todoscheduler-webclient | ee8256b9bc8cba475f18721c92550b3dd9023ca4 | [
"Apache-2.0"
] | 75 | 2018-08-20T11:45:14.000Z | 2021-03-03T04:16:00.000Z | uitests/tests.py | wichmannpas/todoscheduler-webclient | ee8256b9bc8cba475f18721c92550b3dd9023ca4 | [
"Apache-2.0"
] | null | null | null | import os
from base64 import urlsafe_b64encode
from datetime import date, timedelta
from decimal import Decimal
from time import sleep
from subprocess import DEVNULL, Popen
from django.contrib.auth import authenticate, get_user_model
from django.core.exceptions import ObjectDoesNotExist
from django.core.servers.basehttp import ThreadedWSGIServer
from django.db.models import Q
from django.test import override_settings, LiveServerTestCase
from django.test.testcases import LiveServerThread, QuietWSGIRequestHandler
from rest_authtoken.models import AuthToken
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from task.models import Task, TaskChunk
@override_settings(STATIC_ROOT='nonexistent', STATIC_URL='nonexistent')
class SeleniumTest(LiveServerTestCase):
host = '127.0.0.1'
port = 8000
frontend_port = 8080
def setUp(self):
# ensure local storage is cleared
self.selenium.get(self.frontend_url)
self.selenium.execute_script('window.localStorage.clear()')
@classmethod
def setUpClass(cls):
super().setUpClass()
options = webdriver.ChromeOptions()
options.add_argument('--headless')
cls.selenium = webdriver.Chrome(options=options)
cls.selenium.implicitly_wait(10)
cls._frontend_server = Popen([
'python',
'-m', 'http.server',
str(cls.frontend_port),
], cwd=os.environ.get('DIST_DIR'), stdout=DEVNULL, stderr=DEVNULL)
sleep(0.2)
cls.frontend_url = 'http://127.0.0.1:{}'.format(cls.frontend_port)
@classmethod
def tearDownClass(cls):
cls.selenium.quit()
cls._frontend_server.kill()
super().tearDownClass()
class ReusableLiveServerThread(LiveServerThread):
def _create_server(self):
return ThreadedWSGIServer(
(self.host, self.port),
QuietWSGIRequestHandler,
allow_reuse_address=True
)
server_thread_class = ReusableLiveServerThread
class LoginPageTest(SeleniumTest):
"""
Test the login page.
"""
def test_login(self):
user = get_user_model().objects.create(
username='admin',
workhours_weekday=Decimal(8),
workhours_weekend=Decimal(4))
user.set_password('foobar123')
user.save()
self.selenium.get(self.frontend_url)
sleep(0.5)
username_input = self.selenium.find_element_by_id('login-username')
username_input.send_keys('admin')
password_input = self.selenium.find_element_by_id('login-password')
password_input.send_keys('foobar123')
login_button = self.selenium.find_element_by_xpath('//button[contains(.,"Login")]')
login_button.click()
sleep(0.5)
self.assertNotIn(
'landing',
self.selenium.current_url)
self.assertIn(
'NEW TASK',
self.selenium.find_element_by_tag_name('body').text)
def test_redirection_when_not_authenticated(self):
self.selenium.get(self.frontend_url)
sleep(1)
# the hash location should contain "landing" now
self.assertIn(
'landing',
self.selenium.current_url)
def test_registration(self):
self.selenium.get(self.frontend_url)
sleep(0.5)
self.assertEqual(
get_user_model().objects.count(),
0)
username_input = self.selenium.find_element_by_id('register-username')
username_input.send_keys('admin')
password_input = self.selenium.find_element_by_id('register-password')
password_input.send_keys('foobar123')
password_input2 = self.selenium.find_element_by_id('register-password2')
password_input2.send_keys('foobar123')
register_button = self.selenium.find_element_by_xpath('//button[contains(.,"Register")]')
register_button.click()
sleep(3)
self.assertNotIn(
'landing',
self.selenium.current_url)
self.assertIn(
'NEW TASK',
self.selenium.find_element_by_tag_name('body').text)
self.assertEqual(
get_user_model().objects.count(),
1)
user = get_user_model().objects.first()
self.assertEqual(
user.username,
'admin')
self.assertEqual(
authenticate(username='admin', password='foobar123'),
user)
def test_registration_username_taken(self):
user = get_user_model().objects.create(
username='admin',
workhours_weekday=Decimal(8),
workhours_weekend=Decimal(4))
user.set_password('foobar123')
user.save()
self.selenium.get(self.frontend_url)
sleep(0.5)
self.assertEqual(
get_user_model().objects.count(),
1)
username_input = self.selenium.find_element_by_id('register-username')
username_input.send_keys('admin')
password_input = self.selenium.find_element_by_id('register-password')
password_input.send_keys('bazqux')
password_input2 = self.selenium.find_element_by_id('register-password2')
password_input2.send_keys('bazqux')
register_button = self.selenium.find_element_by_xpath('//button[contains(.,"Register")]')
register_button.click()
sleep(3)
self.assertIn(
'landing',
self.selenium.current_url)
self.assertIn(
'already taken',
self.selenium.find_element_by_tag_name('body').text)
self.assertEqual(
get_user_model().objects.count(),
1)
class AuthenticatedSeleniumTest(SeleniumTest):
def setUp(self):
super().setUp()
self.user = get_user_model().objects.create(
username='admin',
email='admin@localhost',
workhours_weekday=Decimal(8),
workhours_weekend=Decimal(4))
self.selenium.get(self.frontend_url)
sleep(0.2)
token = urlsafe_b64encode(AuthToken.create_token_for_user(self.user)).decode()
self.selenium.execute_script(
'window.localStorage.setItem("authToken", "{}")'.format(token))
class MainPageTest(AuthenticatedSeleniumTest):
def test_new_task(self):
self.assertEqual(Task.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, None)
def test_new_task_submit_with_enter_duration(self):
self.assertEqual(Task.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
duration_input.send_keys(Keys.ENTER)
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, None)
def test_new_task_submit_with_enter_name(self):
self.assertEqual(Task.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
name_input.send_keys(Keys.ENTER)
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, None)
def test_new_task_scheduling_today(self):
"""Test creating a new task and instantly scheduling it."""
self.assertEqual(Task.objects.count(), 0)
self.assertEqual(TaskChunk.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
schedule_checkbox = self.selenium.find_element_by_id('task-schedule')
schedule_checkbox.click()
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, None)
self.assertEqual(TaskChunk.objects.count(), 1)
chunk = TaskChunk.objects.first()
self.assertEqual(chunk.task, task)
self.assertEqual(chunk.day, date.today())
def test_new_task_scheduling_tomorrow(self):
"""Test creating a new task and instantly scheduling it."""
self.assertEqual(Task.objects.count(), 0)
self.assertEqual(TaskChunk.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
schedule_checkbox = self.selenium.find_element_by_id('task-schedule')
schedule_checkbox.click()
schedule_for = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]//select')
Select(schedule_for).select_by_visible_text(
'Tomorrow')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, None)
self.assertEqual(TaskChunk.objects.count(), 1)
chunk = TaskChunk.objects.first()
self.assertEqual(chunk.task, task)
self.assertEqual(chunk.day, date.today() + timedelta(days=1))
def test_new_task_invalid_duration(self):
self.assertEqual(Task.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('-42.2')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertIn(
'This duration is invalid.',
self.selenium.find_element_by_class_name('mdc-dialog__surface').get_attribute('innerHTML'))
self.assertEqual(Task.objects.count(), 0)
def test_new_task_with_start_date(self):
self.assertEqual(Task.objects.count(), 0)
self.selenium.get(self.frontend_url)
sleep(0.5)
new_task_link = self.selenium.find_element_by_xpath('//button[contains(., "New Task")]')
new_task_link.click()
sleep(0.1)
name_input = self.selenium.find_element_by_id('task-name')
name_input.send_keys('Testtask')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42.2')
start_input = self.selenium.find_element_by_id('task-start')
start_input.send_keys('05/02/2018')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(Task.objects.count(), 1)
task = Task.objects.first()
self.assertEqual(task.name, 'Testtask')
self.assertEqual(task.duration, Decimal('42.2'))
self.assertEqual(task.start, date(2018, 5, 2))
def test_edit_task_duration_too_low(self):
"""
Test that it is not possible to set the total duration of a task
to a value lower than the duration that is already scheduled.
"""
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=1,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
edit_task_link = self.selenium.find_elements_by_xpath('//a[@data-tooltip="Edit task"]')[0]
edit_task_link.click()
sleep(0.1)
scheduled_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][1]')
self.assertIn(
'3h',
scheduled_display.get_attribute('innerHTML'))
finished_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][2]')
self.assertIn(
'1h',
finished_display.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1') # invalid, 3 hours are already scheduled
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertIn(
'This duration is invalid.',
self.selenium.find_element_by_class_name('mdc-dialog__surface').get_attribute('innerHTML'))
task.refresh_from_db()
# the duration was not changed
self.assertEqual(
task.duration,
Decimal(5))
def test_edit_task_duration_incomplete(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=1,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
edit_task_link = self.selenium.find_elements_by_xpath('//a[@data-tooltip="Edit task"]')[0]
edit_task_link.click()
sleep(0.1)
scheduled_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][1]')
self.assertIn(
'3h',
scheduled_display.get_attribute('innerHTML'))
finished_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][2]')
self.assertIn(
'1h',
finished_display.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
task.refresh_from_db()
self.assertEqual(
task.name,
'Testtask')
self.assertEqual(
task.duration,
Decimal(42))
def test_edit_task_name_incomplete(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=1,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
edit_task_link = self.selenium.find_elements_by_xpath('//a[@data-tooltip="Edit task"]')[0]
edit_task_link.click()
sleep(0.1)
scheduled_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][1]')
self.assertIn(
'3h',
scheduled_display.get_attribute('innerHTML'))
finished_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][2]')
self.assertIn(
'1h',
finished_display.get_attribute('innerHTML'))
name_input = self.selenium.find_element_by_id('task-name')
name_input.clear()
name_input.send_keys('Edited Task')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
task.refresh_from_db()
self.assertEqual(
task.name,
'Edited Task')
self.assertEqual(
task.duration,
Decimal(5))
def test_edit_task_name_duration_incomplete(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=1,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
edit_task_link = self.selenium.find_elements_by_xpath('//a[@data-tooltip="Edit task"]')[0]
edit_task_link.click()
sleep(0.1)
scheduled_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][1]')
self.assertIn(
'3h',
scheduled_display.get_attribute('innerHTML'))
finished_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][2]')
self.assertIn(
'1h',
finished_display.get_attribute('innerHTML'))
name_input = self.selenium.find_element_by_id('task-name')
name_input.clear()
name_input.send_keys('Edited Task')
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('42')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
task.refresh_from_db()
self.assertEqual(
task.name,
'Edited Task')
self.assertEqual(
task.duration,
Decimal(42))
def test_edit_task_start_incomplete(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
TaskChunk.objects.create(
day=date.today(),
task=task,
duration=1,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
edit_task_link = self.selenium.find_elements_by_xpath('//a[@data-tooltip="Edit task"]')[0]
edit_task_link.click()
sleep(0.1)
scheduled_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][1]')
self.assertIn(
'3h',
scheduled_display.get_attribute('innerHTML'))
finished_display = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]/section/div/div/div[contains(@class, "mdc-layout-grid__cell--span-7")][2]')
self.assertIn(
'1h',
finished_display.get_attribute('innerHTML'))
start_input = self.selenium.find_element_by_id('task-start')
start_input.send_keys('05/02/2018')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
task.refresh_from_db()
self.assertEqual(
task.name,
'Testtask')
self.assertEqual(
task.duration,
Decimal(5))
self.assertEqual(
task.start,
date(2018, 5, 2))
def test_schedule_task_for_today(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date.today())
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_for_today_submit_with_enter_duration(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
duration_input.send_keys(Keys.ENTER)
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date.today())
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_for_tomorrow(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
schedule_for = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]//select')
Select(schedule_for).select_by_visible_text(
'Tomorrow')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date.today() + timedelta(days=1))
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_for_next_free_capacity(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
other_task = Task.objects.create(
user=self.user,
name='Placeholder Testtask',
duration=30)
# create task chunks to fill current and next 2 days
TaskChunk.objects.bulk_create([
TaskChunk(
task=other_task, duration=10, day=date.today(), day_order=1),
TaskChunk(
task=other_task, duration=10, day=date.today() + timedelta(days=1), day_order=1),
TaskChunk(
task=other_task, duration=10, day=date.today() + timedelta(days=2), day_order=1)])
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
schedule_for = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]//select')
Select(schedule_for).select_by_visible_text(
'Next Free Capacity')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date.today() + timedelta(days=3))
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_for_another_time(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
schedule_for = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]//select')
Select(schedule_for).select_by_visible_text(
'Another Time')
date_input = self.selenium.find_element_by_xpath(
'//div[@class="mdc-dialog__surface"]//input[@type="date"]')
date_input.send_keys(Keys.DELETE)
date_input.send_keys('01/02/2017')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date(2017, 1, 2))
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_for_another_time_submit_with_enter_date(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('1')
schedule_for = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]//select')
Select(schedule_for).select_by_visible_text(
'Another Time')
date_input = self.selenium.find_element_by_xpath(
'//div[@class="mdc-dialog__surface"]//input[@type="date"]')
date_input.send_keys(Keys.DELETE)
date_input.send_keys('01/02/2017')
date_input.send_keys(Keys.ENTER)
sleep(0.5)
self.assertEqual(task.chunks.count(), 1)
chunk = task.chunks.first()
self.assertEqual(chunk.day, date(2017, 1, 2))
self.assertEqual(chunk.duration, Decimal(1))
self.assertFalse(chunk.finished)
def test_schedule_task_invalid_duration(self):
self.assertEqual(TaskChunk.objects.count(), 0)
# create dummy task
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
schedule_link = self.selenium.find_element_by_xpath('//a[@data-tooltip="Schedule"]')
schedule_link.click()
sleep(0.1)
modal_body = self.selenium.find_element_by_xpath('//div[@class="mdc-dialog__surface"]')
self.assertIn(
'Testtask',
modal_body.get_attribute('innerHTML'))
self.assertIn(
'5h',
modal_body.get_attribute('innerHTML'))
duration_input = self.selenium.find_element_by_id('task-duration')
duration_input.clear()
duration_input.send_keys('-1')
self.selenium.find_element_by_xpath('//button[contains(@class, "mdc-dialog__footer__button--accept")]').click()
sleep(0.5)
self.assertIn(
'This duration is invalid.',
self.selenium.find_element_by_class_name('mdc-dialog__surface').get_attribute('innerHTML'))
self.assertEqual(task.chunks.count(), 0)
def test_task_unscheduled_finish(self):
"""
Finish a task from the incomplete list that has no task chunks.
"""
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Complete task"]').click()
sleep(0.5)
self.assertRaises(
ObjectDoesNotExist,
task.refresh_from_db)
def test_task_scheduled_finish(self):
"""
Finish a task from the incomplete list that has task chunks.
"""
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Complete task"]').click()
sleep(0.5)
task.refresh_from_db()
self.assertEqual(
task.duration,
Decimal('2'))
def test_task_chunk_increase_time(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Takes 30 more minutes"]').click()
sleep(0.5)
chunk.refresh_from_db()
self.assertEqual(
chunk.duration,
Decimal('2.5'))
def test_task_chunk_decrease_time(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Takes 30 less minutes"]').click()
sleep(0.5)
chunk.refresh_from_db()
self.assertEqual(
chunk.duration,
Decimal('1.5'))
def test_task_chunk_finish(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Done"]').click()
sleep(0.5)
chunk.refresh_from_db()
self.assertTrue(chunk.finished)
def test_task_chunk_undo(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Not done"]').click()
sleep(0.5)
chunk.refresh_from_db()
self.assertFalse(chunk.finished)
def test_task_chunk_delete(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="No time needed on this day"]').click()
alert = self.selenium.switch_to.alert
alert.accept()
sleep(0.5)
self.assertRaises(ObjectDoesNotExist, chunk.refresh_from_db)
task.refresh_from_db()
self.assertEqual(
task.duration,
Decimal(3))
def test_task_chunk_postpone(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today(),
task=task,
duration=2,
day_order=1,
finished=True)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_element_by_css_selector('[data-tooltip="Postpone to another day"]').click()
sleep(0.5)
self.assertRaises(ObjectDoesNotExist, chunk.refresh_from_db)
task.refresh_from_db()
self.assertEqual(
task.duration,
Decimal(5))
def test_task_chunk_split(self):
task1 = Task.objects.create(
user=self.user,
name='Task 1',
duration=5)
chunk1 = TaskChunk.objects.create(
day=date.today(),
task=task1,
duration=Decimal(2.5),
day_order=1)
self.assertEqual(
TaskChunk.objects.count(),
1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_elements_by_css_selector('[data-tooltip="Split task chunk"]')[0].click()
sleep(0.5)
self.assertEqual(
TaskChunk.objects.count(),
2)
chunk1.refresh_from_db()
self.assertEqual(
chunk1.duration,
Decimal(1))
chunk2 = TaskChunk.objects.get(~Q(pk=chunk1.pk))
self.assertEqual(
chunk2.duration,
Decimal('1.5'))
def test_task_chunk_left(self):
task1 = Task.objects.create(
user=self.user,
name='Task 1',
duration=5)
chunk1 = TaskChunk.objects.create(
day=date.today(),
task=task1,
duration=2,
day_order=1)
self.selenium.execute_script('window.localStorage.setItem("drag-and-drop", "never")')
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_elements_by_css_selector('[data-tooltip="Move to previous day"]')[0].click()
sleep(0.5)
chunk1.refresh_from_db()
self.assertEqual(
chunk1.day,
date.today() - timedelta(days=1))
def test_task_chunk_right(self):
task1 = Task.objects.create(
user=self.user,
name='Task 1',
duration=5)
chunk1 = TaskChunk.objects.create(
day=date.today(),
task=task1,
duration=2,
day_order=1)
self.selenium.execute_script('window.localStorage.setItem("drag-and-drop", "never")')
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_elements_by_css_selector('[data-tooltip="Move to next day"]')[0].click()
sleep(0.5)
chunk1.refresh_from_db()
self.assertEqual(
chunk1.day,
date.today() + timedelta(days=1))
def test_task_chunk_up(self):
task1 = Task.objects.create(
user=self.user,
name='Task 1',
duration=5)
chunk1 = TaskChunk.objects.create(
day=date.today(),
task=task1,
duration=2,
day_order=1)
task2 = Task.objects.create(
user=self.user,
name='Task 2',
duration=5)
chunk2 = TaskChunk.objects.create(
day=date.today(),
task=task2,
duration=1,
day_order=2)
self.selenium.execute_script('window.localStorage.setItem("drag-and-drop", "never")')
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_elements_by_css_selector('[data-tooltip="Needs time earlier"]')[1].click()
sleep(0.5)
chunk1.refresh_from_db()
chunk2.refresh_from_db()
self.assertLess(
chunk2.day_order,
chunk1.day_order)
def test_task_chunk_down(self):
task1 = Task.objects.create(
user=self.user,
name='Task 1',
duration=5)
chunk1 = TaskChunk.objects.create(
day=date.today(),
task=task1,
duration=2,
day_order=1)
task2 = Task.objects.create(
user=self.user,
name='Task 2',
duration=5)
chunk2 = TaskChunk.objects.create(
day=date.today(),
task=task2,
duration=1,
day_order=2)
self.selenium.execute_script('window.localStorage.setItem("drag-and-drop", "never")')
self.selenium.get(self.frontend_url)
sleep(0.5)
self.selenium.find_elements_by_css_selector('[data-tooltip="Needs time later"]')[0].click()
sleep(0.5)
chunk1.refresh_from_db()
chunk2.refresh_from_db()
self.assertLess(
chunk2.day_order,
chunk1.day_order)
def test_missed_task_chunk_finish(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today() - timedelta(days=4),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.assertIn(
'You missed these task chunks!',
self.selenium.execute_script('return document.documentElement.innerHTML'))
self.selenium.find_element_by_css_selector('[data-tooltip="Done"]').click()
sleep(0.5)
chunk.refresh_from_db()
self.assertTrue(chunk.finished)
def test_missed_task_chunk_postpone(self):
task = Task.objects.create(
user=self.user,
name='Testtask',
duration=5)
chunk = TaskChunk.objects.create(
day=date.today() - timedelta(days=4),
task=task,
duration=2,
day_order=1)
self.selenium.get(self.frontend_url)
sleep(0.5)
self.assertIn(
'You missed these task chunks!',
self.selenium.page_source)
self.selenium.find_element_by_css_selector('[data-tooltip="Postpone to another day"]').click()
sleep(0.5)
self.assertRaises(ObjectDoesNotExist, chunk.refresh_from_db)
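These tests pace the browser with fixed `sleep(0.5)` calls, which is fragile on slow machines and wastes time on fast ones. A more robust pattern is to poll a condition until it holds or a timeout expires. The sketch below is a generic, hypothetical helper in pure Python (`wait_until` is not part of this test suite; the real tests could wrap Selenium element lookups or `refresh_from_db` checks in it):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse; return the value, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError('condition not met within %.1fs' % timeout)
        time.sleep(interval)

# Example: wait for a counter to reach a target value.
counter = {'n': 0}
def tick():
    counter['n'] += 1
    return counter['n'] >= 3

print(wait_until(tick))  # True once tick() has been called three times
```

Unlike a fixed sleep, the helper returns as soon as the condition is met, and fails loudly with a `TimeoutError` instead of letting a stale assertion fire.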
# third_party/tests/Opentitan/util/wavegen/wavesvg_data.py
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
# portions adapted from the javascript wavedrom.js
# https://github.com/drom/wavedrom/blob/master/wavedrom.js
# see LICENSE.wavedrom
head1 = """
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
overflow="hidden"
"""
# Styles are from wavedrom.js
head2 = """
<style type="text/css">
text { font-size: 11pt; font-style: normal; font-variant:
normal; font-weight: normal; font-stretch: normal;
text-align: center; fill-opacity: 1; font-family:
Helvetica }
.muted {
fill: #aaa
}
.warning {
fill: #f6b900
}
.error {
fill: #f60000
}
.info {
fill: #0041c4
}
.success {
fill: #00ab00
}
.h1 {
font-size: 33pt;
font-weight: bold
}
.h2 {
font-size: 27pt;
font-weight: bold
}
.h3 {
font-size: 20pt;
font-weight: bold
}
.h4 {
font-size: 14pt;
font-weight: bold
}
.h5 {
font-size: 11pt;
font-weight: bold
}
.h6 {
font-size: 8pt;
font-weight: bold
}
.s1 {
fill: none;
stroke: #000;
stroke-width: 1;
stroke-linecap: round;
stroke-linejoin: miter;
stroke-miterlimit: 4;
stroke-opacity: 1;
stroke-dasharray: none
}
.s2 {
fill: none;
stroke: #000;
stroke-width: 0.5;
stroke-linecap: round;
stroke-linejoin: miter;
stroke-miterlimit: 4;
stroke-opacity: 1;
stroke-dasharray: none
}
.s3 {
color: #000;
fill: none;
stroke: #000;
stroke-width: 1;
stroke-linecap: round;
stroke-linejoin: miter;
stroke-miterlimit: 4;
stroke-opacity: 1;
stroke-dasharray: 1, 3;
stroke-dashoffset: 0;
marker: none;
visibility: visible;
display: inline;
overflow: visible;
enable-background: accumulate
}
.s4 {
color: #000;
fill: none;
stroke: #000;
stroke-width: 1;
stroke-linecap: round;
stroke-linejoin: miter;
stroke-miterlimit: 4;
stroke-opacity: 1;
stroke-dasharray: none;
stroke-dashoffset: 0;
marker: none;
visibility: visible;
display: inline;
overflow: visible
}
.s5 {
fill: #fff;
stroke: none
}
.s6 {
color: #000;
fill: #ffffb4;
fill-opacity: 1;
fill-rule: nonzero;
stroke: none;
stroke-width: 1px;
marker: none;
visibility: visible;
display: inline;
overflow: visible;
enable-background: accumulate
}
.s7 {
color: #000;
fill: #ffe0b9;
fill-opacity: 1;
fill-rule: nonzero;
stroke: none;
stroke-width: 1px;
marker: none;
visibility: visible;
display: inline;
overflow: visible;
enable-background: accumulate
}
.s8 {
color: #000;
fill: #b9e0ff;
fill-opacity: 1;
fill-rule: nonzero;
stroke: none;
stroke-width: 1px;
marker: none;
visibility: visible;
display: inline;
overflow: visible;
enable-background: accumulate
}
.s9 {
fill: #000;
fill-opacity: 1;
stroke: none
}
.s10 {
color: #000;
fill: #fff;
fill-opacity: 1;
fill-rule: nonzero;
stroke: none;
stroke-width: 1px;
marker: none;
visibility: visible;
display: inline;
overflow: visible;
enable-background: accumulate
}
.s11 {
fill: #0041c4;
fill-opacity: 1;
stroke: none
}
.s12 {
fill: none;
stroke: #0041c4;
stroke-width: 1;
stroke-linecap: round;
stroke-linejoin: miter;
stroke-miterlimit: 4;
stroke-opacity: 1;
stroke-dasharray: none
}
</style>
"""
defs_head = """
<defs>
"""
defs_tail = """
</defs>
"""
tail = """
</svg>
"""
# Brick definitions from wavedrom.js
# Split out here so only the ones that are used are inserted in the svg
use_defs = {
'arrows':
''' <marker id="arrowhead" style="fill: rgb(0, 65, 196);" markerHeight="7" markerWidth="10" markerUnits="strokeWidth" viewBox="0 -4 11 8" refX="15" refY="0" orient="auto">
<path d="M0 -4 11 0 0 4z"></path>
</marker>
<marker id="arrowtail" style="fill: rgb(0, 65, 196);" markerHeight="7" markerWidth="10" markerUnits="strokeWidth" viewBox="-11 -4 11 8" refX="-15" refY="0" orient="auto">
<path d="M0 -4 -11 0 0 4z"></path>
</marker>
''',
'socket':
''' <g id="socket">
<rect y="15" x="6" height="20" width="20"></rect>
</g>''',
'pclk':
''' <g id="pclk">
<path d="M0,20 0,0 20,0" class="s1"></path>
</g>''',
'nclk':
''' <g id="nclk">
<path d="m0,0 0,20 20,0" class="s1"></path>
</g>''',
'000':
''' <g id="000">
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'0m0':
''' <g id="0m0">
<path d="m0,20 3,0 3,-10 3,10 11,0" class="s1"></path>
</g>''',
'0m1':
''' <g id="0m1">
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'0mx':
''' <g id="0mx">
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 5,20" class="s2"></path>
<path d="M20,0 4,16" class="s2"></path>
<path d="M15,0 6,9" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'0md':
''' <g id="0md">
<path d="m8,20 10,0" class="s3"></path>
<path d="m0,20 5,0" class="s1"></path>
</g>''',
'0mu':
''' <g id="0mu">
<path d="m0,20 3,0 C 7,10 10.107603,0 20,0" class="s1"></path>
</g>''',
'0mz':
''' <g id="0mz">
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'111':
''' <g id="111">
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'1m0':
''' <g id="1m0">
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
</g>''',
'1m1':
''' <g id="1m1">
<path d="M0,0 3,0 6,10 9,0 20,0" class="s1"></path>
</g>''',
'1mx':
''' <g id="1mx">
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 6,9" class="s2"></path>
<path d="M10,0 5,5" class="s2"></path>
<path d="M3.5,1.5 5,0" class="s2"></path>
</g>''',
'1md':
''' <g id="1md">
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
</g>''',
'1mu':
''' <g id="1mu">
<path d="M0,0 5,0" class="s1"></path>
<path d="M8,0 18,0" class="s3"></path>
</g>''',
'1mz':
''' <g id="1mz">
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s1"></path>
</g>''',
'xxx':
''' <g id="xxx">
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,5 5,0" class="s2"></path>
<path d="M0,10 10,0" class="s2"></path>
<path d="M0,15 15,0" class="s2"></path>
<path d="M0,20 20,0" class="s2"></path>
<path d="M5,20 20,5" class="s2"></path>
<path d="M10,20 20,10" class="s2"></path>
<path d="m15,20 5,-5" class="s2"></path>
</g>''',
'xm0':
''' <g id="xm0">
<path d="M0,0 4,0 9,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,5 4,1" class="s2"></path>
<path d="M0,10 5,5" class="s2"></path>
<path d="M0,15 6,9" class="s2"></path>
<path d="M0,20 7,13" class="s2"></path>
<path d="M5,20 8,17" class="s2"></path>
</g>''',
'xm1':
''' <g id="xm1">
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,20 4,20 9,0" class="s1"></path>
<path d="M0,5 5,0" class="s2"></path>
<path d="M0,10 9,1" class="s2"></path>
<path d="M0,15 7,8" class="s2"></path>
<path d="M0,20 5,15" class="s2"></path>
</g>''',
'xmx':
''' <g id="xmx">
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,5 5,0" class="s2"></path>
<path d="M0,10 10,0" class="s2"></path>
<path d="M0,15 15,0" class="s2"></path>
<path d="M0,20 20,0" class="s2"></path>
<path d="M5,20 20,5" class="s2"></path>
<path d="M10,20 20,10" class="s2"></path>
<path d="m15,20 5,-5" class="s2"></path>
</g>''',
'xmd':
''' <g id="xmd">
<path d="m0,0 4,0 c 3,10 6,20 16,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,5 4,1" class="s2"></path>
<path d="M0,10 5.5,4.5" class="s2"></path>
<path d="M0,15 6.5,8.5" class="s2"></path>
<path d="M0,20 8,12" class="s2"></path>
<path d="m5,20 5,-5" class="s2"></path>
<path d="m10,20 2.5,-2.5" class="s2"></path>
</g>''',
'xmu':
''' <g id="xmu">
<path d="M0,0 20,0" class="s1"></path>
<path d="m0,20 4,0 C 7,10 10,0 20,0" class="s1"></path>
<path d="M0,5 5,0" class="s2"></path>
<path d="M0,10 10,0" class="s2"></path>
<path d="M0,15 10,5" class="s2"></path>
<path d="M0,20 6,14" class="s2"></path>
</g>''',
'xmz':
''' <g id="xmz">
<path d="m0,0 4,0 c 6,10 11,10 16,10" class="s1"></path>
<path d="m0,20 4,0 C 10,10 15,10 20,10" class="s1"></path>
<path d="M0,5 4.5,0.5" class="s2"></path>
<path d="M0,10 6.5,3.5" class="s2"></path>
<path d="M0,15 8.5,6.5" class="s2"></path>
<path d="M0,20 11.5,8.5" class="s2"></path>
</g>''',
'ddd':
''' <g id="ddd">
<path d="m0,20 20,0" class="s3"></path>
</g>''',
'dm0':
''' <g id="dm0">
<path d="m0,20 10,0" class="s3"></path>
<path d="m12,20 8,0" class="s1"></path>
</g>''',
'dm1':
''' <g id="dm1">
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'dmx':
''' <g id="dmx">
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 5,20" class="s2"></path>
<path d="M20,0 4,16" class="s2"></path>
<path d="M15,0 6,9" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'dmd':
''' <g id="dmd">
<path d="m0,20 20,0" class="s3"></path>
</g>''',
'dmu':
''' <g id="dmu">
<path d="m0,20 3,0 C 7,10 10.107603,0 20,0" class="s1"></path>
</g>''',
'dmz':
''' <g id="dmz">
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'uuu':
''' <g id="uuu">
<path d="M0,0 20,0" class="s3"></path>
</g>''',
'um0':
''' <g id="um0">
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
</g>''',
'um1':
''' <g id="um1">
<path d="M0,0 10,0" class="s3"></path>
<path d="m12,0 8,0" class="s1"></path>
</g>''',
'umx':
''' <g id="umx">
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 6,9" class="s2"></path>
<path d="M10,0 5,5" class="s2"></path>
<path d="M3.5,1.5 5,0" class="s2"></path>
</g>''',
'umd':
''' <g id="umd">
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
</g>''',
'umu':
''' <g id="umu">
<path d="M0,0 20,0" class="s3"></path>
</g>''',
'umz':
''' <g id="umz">
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s4"></path>
</g>''',
'zzz':
''' <g id="zzz">
<path d="m0,10 20,0" class="s1"></path>
</g>''',
'zm0':
''' <g id="zm0">
<path d="m0,10 6,0 3,10 11,0" class="s1"></path>
</g>''',
'zm1':
''' <g id="zm1">
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
</g>''',
'zmx':
''' <g id="zmx">
<path d="m6,10 3,10 11,0" class="s1"></path>
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 6.5,8.5" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
</g>''',
'zmd':
''' <g id="zmd">
<path d="m0,10 7,0 c 3,5 8,10 13,10" class="s1"></path>
</g>''',
'zmu':
''' <g id="zmu">
<path d="m0,10 7,0 C 10,5 15,0 20,0" class="s1"></path>
</g>''',
'zmz':
''' <g id="zmz">
<path d="m0,10 20,0" class="s1"></path>
</g>''',
'gap':
''' <g id="gap">
<path d="m7,-2 -4,0 c -5,0 -5,24 -10,24 l 4,0 C 2,22 2,-2 7,-2 z" class="s5"></path>
<path d="M-7,22 C -2,22 -2,-2 3,-2" class="s1"></path>
<path d="M-3,22 C 2,22 2,-2 7,-2" class="s1"></path>
</g>''',
'0mv-3':
''' <g id="0mv-3">
<path d="M9,0 20,0 20,20 3,20 z" class="s6"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'1mv-3':
''' <g id="1mv-3">
<path d="M2.875,0 20,0 20,20 9,20 z" class="s6"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'xmv-3':
''' <g id="xmv-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,5 3.5,1.5" class="s2"></path>
<path d="M0,10 4.5,5.5" class="s2"></path>
<path d="M0,15 6,9" class="s2"></path>
<path d="M0,20 4,16" class="s2"></path>
</g>''',
'dmv-3':
''' <g id="dmv-3">
<path d="M9,0 20,0 20,20 3,20 z" class="s6"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'umv-3':
''' <g id="umv-3">
<path d="M3,0 20,0 20,20 9,20 z" class="s6"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'zmv-3':
''' <g id="zmv-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="m6,10 3,10 11,0" class="s1"></path>
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
</g>''',
'vvv-3':
''' <g id="vvv-3">
<path d="M20,20 0,20 0,0 20,0" class="s6"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vm0-3':
''' <g id="vm0-3">
<path d="M0,20 0,0 3,0 9,20" class="s6"></path>
<path d="M0,0 3,0 9,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vm1-3':
''' <g id="vm1-3">
<path d="M0,0 0,20 3,20 9,0" class="s6"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,20 3,20 9,0" class="s1"></path>
</g>''',
'vmx-3':
''' <g id="vmx-3">
<path d="M0,0 0,20 3,20 6,10 3,0" class="s6"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 7,8" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
</g>''',
'vmd-3':
''' <g id="vmd-3">
<path d="m0,0 0,20 20,0 C 10,20 7,10 3,0" class="s6"></path>
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vmu-3':
''' <g id="vmu-3">
<path d="m0,0 0,20 3,0 C 7,10 10,0 20,0" class="s6"></path>
<path d="m0,20 3,0 C 7,10 10,0 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vmz-3':
''' <g id="vmz-3">
<path d="M0,0 3,0 C 10,10 15,10 20,10 15,10 10,10 3,20 L 0,20" class="s6"></path>
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s1"></path>
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'vmv-3-3':
''' <g id="vmv-3-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s6"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-3-4':
''' <g id="vmv-3-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s6"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-3-5':
''' <g id="vmv-3-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s6"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-4-3':
''' <g id="vmv-4-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s7"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-4-4':
''' <g id="vmv-4-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s7"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-4-5':
''' <g id="vmv-4-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s7"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-5-3':
''' <g id="vmv-5-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s8"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-5-4':
''' <g id="vmv-5-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s8"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-5-5':
''' <g id="vmv-5-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s8"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'0mv-4':
''' <g id="0mv-4">
<path d="M9,0 20,0 20,20 3,20 z" class="s7"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'1mv-4':
''' <g id="1mv-4">
<path d="M2.875,0 20,0 20,20 9,20 z" class="s7"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'xmv-4':
''' <g id="xmv-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,5 3.5,1.5" class="s2"></path>
<path d="M0,10 4.5,5.5" class="s2"></path>
<path d="M0,15 6,9" class="s2"></path>
<path d="M0,20 4,16" class="s2"></path>
</g>''',
'dmv-4':
''' <g id="dmv-4">
<path d="M9,0 20,0 20,20 3,20 z" class="s7"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'umv-4':
''' <g id="umv-4">
<path d="M3,0 20,0 20,20 9,20 z" class="s7"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'zmv-4':
''' <g id="zmv-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="m6,10 3,10 11,0" class="s1"></path>
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
</g>''',
'0mv-5':
''' <g id="0mv-5">
<path d="M9,0 20,0 20,20 3,20 z" class="s8"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'1mv-5':
''' <g id="1mv-5">
<path d="M2.875,0 20,0 20,20 9,20 z" class="s8"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'xmv-5':
''' <g id="xmv-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,5 3.5,1.5" class="s2"></path>
<path d="M0,10 4.5,5.5" class="s2"></path>
<path d="M0,15 6,9" class="s2"></path>
<path d="M0,20 4,16" class="s2"></path>
</g>''',
'dmv-5':
''' <g id="dmv-5">
<path d="M9,0 20,0 20,20 3,20 z" class="s8"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'umv-5':
''' <g id="umv-5">
<path d="M3,0 20,0 20,20 9,20 z" class="s8"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'zmv-5':
''' <g id="zmv-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="m6,10 3,10 11,0" class="s1"></path>
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
</g>''',
'vvv-4':
''' <g id="vvv-4">
<path d="M20,20 0,20 0,0 20,0" class="s7"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vm0-4':
''' <g id="vm0-4">
<path d="M0,20 0,0 3,0 9,20" class="s7"></path>
<path d="M0,0 3,0 9,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vm1-4':
''' <g id="vm1-4">
<path d="M0,0 0,20 3,20 9,0" class="s7"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,20 3,20 9,0" class="s1"></path>
</g>''',
'vmx-4':
''' <g id="vmx-4">
<path d="M0,0 0,20 3,20 6,10 3,0" class="s7"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 7,8" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
</g>''',
'vmd-4':
''' <g id="vmd-4">
<path d="m0,0 0,20 20,0 C 10,20 7,10 3,0" class="s7"></path>
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vmu-4':
''' <g id="vmu-4">
<path d="m0,0 0,20 3,0 C 7,10 10,0 20,0" class="s7"></path>
<path d="m0,20 3,0 C 7,10 10,0 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vmz-4':
''' <g id="vmz-4">
<path d="M0,0 3,0 C 10,10 15,10 20,10 15,10 10,10 3,20 L 0,20" class="s7"></path>
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s1"></path>
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'vvv-5':
''' <g id="vvv-5">
<path d="M20,20 0,20 0,0 20,0" class="s8"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vm0-5':
''' <g id="vm0-5">
<path d="M0,20 0,0 3,0 9,20" class="s8"></path>
<path d="M0,0 3,0 9,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vm1-5':
''' <g id="vm1-5">
<path d="M0,0 0,20 3,20 9,0" class="s8"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,20 3,20 9,0" class="s1"></path>
</g>''',
'vmx-5':
''' <g id="vmx-5">
<path d="M0,0 0,20 3,20 6,10 3,0" class="s8"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 7,8" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
</g>''',
'vmd-5':
''' <g id="vmd-5">
<path d="m0,0 0,20 20,0 C 10,20 7,10 3,0" class="s8"></path>
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vmu-5':
''' <g id="vmu-5">
<path d="m0,0 0,20 3,0 C 7,10 10,0 20,0" class="s8"></path>
<path d="m0,20 3,0 C 7,10 10,0 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vmz-5':
''' <g id="vmz-5">
<path d="M0,0 3,0 C 10,10 15,10 20,10 15,10 10,10 3,20 L 0,20" class="s8"></path>
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s1"></path>
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'Pclk':
''' <g id="Pclk">
<path d="M-3,12 0,3 3,12 C 1,11 -1,11 -3,12 z" class="s9"></path>
<path d="M0,20 0,0 20,0" class="s1"></path>
</g>''',
'Nclk':
''' <g id="Nclk">
<path d="M-3,8 0,17 3,8 C 1,9 -1,9 -3,8 z" class="s9"></path>
<path d="m0,0 0,20 20,0" class="s1"></path>
</g>''',
'vvv-2':
''' <g id="vvv-2">
<path d="M20,20 0,20 0,0 20,0" class="s10"></path>
<path d="m0,20 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vm0-2':
''' <g id="vm0-2">
<path d="M0,20 0,0 3,0 9,20" class="s10"></path>
<path d="M0,0 3,0 9,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vm1-2':
''' <g id="vm1-2">
<path d="M0,0 0,20 3,20 9,0" class="s10"></path>
<path d="M0,0 20,0" class="s1"></path>
<path d="M0,20 3,20 9,0" class="s1"></path>
</g>''',
'vmx-2':
''' <g id="vmx-2">
<path d="M0,0 0,20 3,20 6,10 3,0" class="s10"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m20,15 -5,5" class="s2"></path>
<path d="M20,10 10,20" class="s2"></path>
<path d="M20,5 8,17" class="s2"></path>
<path d="M20,0 7,13" class="s2"></path>
<path d="M15,0 7,8" class="s2"></path>
<path d="M10,0 9,1" class="s2"></path>
</g>''',
'vmd-2':
''' <g id="vmd-2">
<path d="m0,0 0,20 20,0 C 10,20 7,10 3,0" class="s10"></path>
<path d="m0,0 3,0 c 4,10 7,20 17,20" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'vmu-2':
''' <g id="vmu-2">
<path d="m0,0 0,20 3,0 C 7,10 10,0 20,0" class="s10"></path>
<path d="m0,20 3,0 C 7,10 10,0 20,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'vmz-2':
''' <g id="vmz-2">
<path d="M0,0 3,0 C 10,10 15,10 20,10 15,10 10,10 3,20 L 0,20" class="s10"></path>
<path d="m0,0 3,0 c 7,10 12,10 17,10" class="s1"></path>
<path d="m0,20 3,0 C 10,10 15,10 20,10" class="s1"></path>
</g>''',
'0mv-2':
''' <g id="0mv-2">
<path d="M9,0 20,0 20,20 3,20 z" class="s10"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'1mv-2':
''' <g id="1mv-2">
<path d="M2.875,0 20,0 20,20 9,20 z" class="s10"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'xmv-2':
''' <g id="xmv-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,5 3.5,1.5" class="s2"></path>
<path d="M0,10 4.5,5.5" class="s2"></path>
<path d="M0,15 6,9" class="s2"></path>
<path d="M0,20 4,16" class="s2"></path>
</g>''',
'dmv-2':
''' <g id="dmv-2">
<path d="M9,0 20,0 20,20 3,20 z" class="s10"></path>
<path d="M3,20 9,0 20,0" class="s1"></path>
<path d="m0,20 20,0" class="s1"></path>
</g>''',
'umv-2':
''' <g id="umv-2">
<path d="M3,0 20,0 20,20 9,20 z" class="s10"></path>
<path d="m3,0 6,20 11,0" class="s1"></path>
<path d="M0,0 20,0" class="s1"></path>
</g>''',
'zmv-2':
''' <g id="zmv-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="m6,10 3,10 11,0" class="s1"></path>
<path d="M0,10 6,10 9,0 20,0" class="s1"></path>
</g>''',
'vmv-3-2':
''' <g id="vmv-3-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s6"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-4-2':
''' <g id="vmv-4-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s7"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-5-2':
''' <g id="vmv-5-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s8"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-2-3':
''' <g id="vmv-2-3">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s6"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s10"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-2-4':
''' <g id="vmv-2-4">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s7"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s10"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-2-5':
''' <g id="vmv-2-5">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s8"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s10"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
'vmv-2-2':
''' <g id="vmv-2-2">
<path d="M9,0 20,0 20,20 9,20 6,10 z" class="s10"></path>
<path d="M3,0 0,0 0,20 3,20 6,10 z" class="s10"></path>
<path d="m0,0 3,0 6,20 11,0" class="s1"></path>
<path d="M0,20 3,20 9,0 20,0" class="s1"></path>
</g>''',
}
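The entries above are 20×20 SVG "brick" templates keyed by signal-state name (WaveDrom-style skin fragments). A minimal sketch of how such bricks can be tiled into one waveform row by translating each `<g>` template horizontally; the `skin` dict and `render_row` helper here are illustrative stand-ins, not part of the file above:

```python
# Tile 20px-wide SVG "brick" templates into one row by wrapping each
# <g> template in a horizontally translated group. `skin` mimics the
# shape of the dict above with two tiny placeholder bricks.
skin = {
    'vvv-2': '<g id="vvv-2"><path d="M0,0 20,0" class="s1"></path></g>',
    'vm0-2': '<g id="vm0-2"><path d="M0,0 3,0 9,20" class="s1"></path></g>',
}

def render_row(brick_names, skin, brick_width=20, height=20):
    parts = []
    for i, name in enumerate(brick_names):
        # Shift each brick into its horizontal slot.
        parts.append('<g transform="translate(%d,0)">%s</g>'
                     % (i * brick_width, skin[name]))
    width = brick_width * len(brick_names)
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">%s</svg>'
            % (width, len(brick_names) and height, ''.join(parts)))

svg = render_row(['vvv-2', 'vm0-2', 'vvv-2'], skin)
```

Because every brick shares the same 20-unit width, concatenating translated copies is all a renderer needs to do to draw a full signal lane.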
# ---- api/schema/__init__.py (domenic-corso/kris-kringle-python-web-app @ 57d209ed, MIT) ----
from .ParticipantSchema import ParticipantSchema
# ---- sdk/python/pulumi_spotinst/azure/ocean.py (pulumi/pulumi-spotinst @ 75592d62, ECL-2.0 / Apache-2.0) ----
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['OceanArgs', 'Ocean']
@pulumi.input_type
class OceanArgs:
def __init__(__self__, *,
acd_identifier: pulumi.Input[str],
aks_name: pulumi.Input[str],
aks_resource_group_name: pulumi.Input[str],
ssh_public_key: pulumi.Input[str],
autoscaler: Optional[pulumi.Input['OceanAutoscalerArgs']] = None,
controller_cluster_id: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]] = None,
health: Optional[pulumi.Input['OceanHealthArgs']] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]] = None,
managed_service_identities: Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input['OceanNetworkArgs']] = None,
os_disk: Optional[pulumi.Input['OceanOsDiskArgs']] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
strategies: Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]] = None,
user_name: Optional[pulumi.Input[str]] = None,
vm_sizes: Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]] = None):
"""
        The set of arguments for constructing an Ocean resource.
:param pulumi.Input[str] acd_identifier: The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
:param pulumi.Input[str] aks_name: The AKS cluster name.
:param pulumi.Input[str] aks_resource_group_name: Name of the Azure Resource Group where the AKS cluster is located.
:param pulumi.Input[str] ssh_public_key: SSH public key for admin access to Linux VMs.
:param pulumi.Input['OceanAutoscalerArgs'] autoscaler: The Ocean Kubernetes Autoscaler object.
:param pulumi.Input[str] controller_cluster_id: A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
:param pulumi.Input[str] custom_data: Must contain a valid Base64 encoded string.
:param pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]] extensions: List of Azure extension objects.
:param pulumi.Input['OceanHealthArgs'] health: The Ocean AKS Health object.
:param pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]] images: Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
:param pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]] load_balancers: Configure Load Balancer.
:param pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]] managed_service_identities: List of Managed Service Identity objects.
:param pulumi.Input[str] name: Name of the Load Balancer.
:param pulumi.Input['OceanNetworkArgs'] network: Define the Virtual Network and Subnet.
:param pulumi.Input['OceanOsDiskArgs'] os_disk: OS disk specifications.
:param pulumi.Input[str] resource_group_name: The Resource Group name of the Load Balancer.
:param pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]] strategies: The Ocean AKS strategy object.
:param pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]] tags: Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
:param pulumi.Input[str] user_name: Username for admin access to VMs.
:param pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]] vm_sizes: The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
pulumi.set(__self__, "acd_identifier", acd_identifier)
pulumi.set(__self__, "aks_name", aks_name)
pulumi.set(__self__, "aks_resource_group_name", aks_resource_group_name)
pulumi.set(__self__, "ssh_public_key", ssh_public_key)
if autoscaler is not None:
pulumi.set(__self__, "autoscaler", autoscaler)
if controller_cluster_id is not None:
pulumi.set(__self__, "controller_cluster_id", controller_cluster_id)
if custom_data is not None:
pulumi.set(__self__, "custom_data", custom_data)
if extensions is not None:
pulumi.set(__self__, "extensions", extensions)
if health is not None:
pulumi.set(__self__, "health", health)
if images is not None:
pulumi.set(__self__, "images", images)
if load_balancers is not None:
pulumi.set(__self__, "load_balancers", load_balancers)
if managed_service_identities is not None:
pulumi.set(__self__, "managed_service_identities", managed_service_identities)
if name is not None:
pulumi.set(__self__, "name", name)
if network is not None:
pulumi.set(__self__, "network", network)
if os_disk is not None:
pulumi.set(__self__, "os_disk", os_disk)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if strategies is not None:
pulumi.set(__self__, "strategies", strategies)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
if vm_sizes is not None:
pulumi.set(__self__, "vm_sizes", vm_sizes)
@property
@pulumi.getter(name="acdIdentifier")
def acd_identifier(self) -> pulumi.Input[str]:
"""
The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
"""
return pulumi.get(self, "acd_identifier")
@acd_identifier.setter
def acd_identifier(self, value: pulumi.Input[str]):
pulumi.set(self, "acd_identifier", value)
@property
@pulumi.getter(name="aksName")
def aks_name(self) -> pulumi.Input[str]:
"""
The AKS cluster name.
"""
return pulumi.get(self, "aks_name")
@aks_name.setter
def aks_name(self, value: pulumi.Input[str]):
pulumi.set(self, "aks_name", value)
@property
@pulumi.getter(name="aksResourceGroupName")
def aks_resource_group_name(self) -> pulumi.Input[str]:
"""
Name of the Azure Resource Group where the AKS cluster is located.
"""
return pulumi.get(self, "aks_resource_group_name")
@aks_resource_group_name.setter
def aks_resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "aks_resource_group_name", value)
@property
@pulumi.getter(name="sshPublicKey")
def ssh_public_key(self) -> pulumi.Input[str]:
"""
SSH public key for admin access to Linux VMs.
"""
return pulumi.get(self, "ssh_public_key")
@ssh_public_key.setter
def ssh_public_key(self, value: pulumi.Input[str]):
pulumi.set(self, "ssh_public_key", value)
@property
@pulumi.getter
def autoscaler(self) -> Optional[pulumi.Input['OceanAutoscalerArgs']]:
"""
The Ocean Kubernetes Autoscaler object.
"""
return pulumi.get(self, "autoscaler")
@autoscaler.setter
def autoscaler(self, value: Optional[pulumi.Input['OceanAutoscalerArgs']]):
pulumi.set(self, "autoscaler", value)
@property
@pulumi.getter(name="controllerClusterId")
def controller_cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
"""
return pulumi.get(self, "controller_cluster_id")
@controller_cluster_id.setter
def controller_cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "controller_cluster_id", value)
@property
@pulumi.getter(name="customData")
def custom_data(self) -> Optional[pulumi.Input[str]]:
"""
Must contain a valid Base64 encoded string.
"""
return pulumi.get(self, "custom_data")
@custom_data.setter
def custom_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "custom_data", value)
@property
@pulumi.getter
def extensions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]]:
"""
List of Azure extension objects.
"""
return pulumi.get(self, "extensions")
@extensions.setter
def extensions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]]):
pulumi.set(self, "extensions", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input['OceanHealthArgs']]:
"""
The Ocean AKS Health object.
"""
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input['OceanHealthArgs']]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]]:
"""
Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="loadBalancers")
def load_balancers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]]:
"""
Configure Load Balancer.
"""
return pulumi.get(self, "load_balancers")
@load_balancers.setter
def load_balancers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]]):
pulumi.set(self, "load_balancers", value)
@property
@pulumi.getter(name="managedServiceIdentities")
def managed_service_identities(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]]:
"""
List of Managed Service Identity objects.
"""
return pulumi.get(self, "managed_service_identities")
@managed_service_identities.setter
def managed_service_identities(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]]):
pulumi.set(self, "managed_service_identities", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Load Balancer.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['OceanNetworkArgs']]:
"""
Define the Virtual Network and Subnet.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['OceanNetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> Optional[pulumi.Input['OceanOsDiskArgs']]:
"""
OS disk specifications.
"""
return pulumi.get(self, "os_disk")
@os_disk.setter
def os_disk(self, value: Optional[pulumi.Input['OceanOsDiskArgs']]):
pulumi.set(self, "os_disk", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The Resource Group name of the Load Balancer.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def strategies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]]:
"""
The Ocean AKS strategy object.
"""
return pulumi.get(self, "strategies")
@strategies.setter
def strategies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]]):
pulumi.set(self, "strategies", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]]:
"""
Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[pulumi.Input[str]]:
"""
Username for admin access to VMs.
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "user_name", value)
@property
@pulumi.getter(name="vmSizes")
def vm_sizes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]]:
"""
The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
return pulumi.get(self, "vm_sizes")
@vm_sizes.setter
def vm_sizes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]]):
pulumi.set(self, "vm_sizes", value)
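The `acd_identifier` docstring above pins the accepted format to `acd-nnnnnnnn`. A minimal client-side sanity check of that constraint is sketched below; the helper name is hypothetical, the "exactly eight digits" reading of `nnnnnnnn` is an assumption from the docstring, and the SDK itself does not ship such a validator (the Spot service performs its own validation):

```python
import re

# Assumed shape of a valid AKS cluster identifier per the docstring:
# the literal `acd-` prefix followed by exactly eight digits.
_ACD_PATTERN = re.compile(r"^acd-\d{8}$")

def looks_like_acd_identifier(value: str) -> bool:
    """Return True if `value` matches the documented `acd-nnnnnnnn` shape."""
    return bool(_ACD_PATTERN.match(value))
```

Checking the format before creating the resource fails fast locally instead of waiting for the provider to reject the deployment.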
@pulumi.input_type
class _OceanState:
def __init__(__self__, *,
acd_identifier: Optional[pulumi.Input[str]] = None,
aks_name: Optional[pulumi.Input[str]] = None,
aks_resource_group_name: Optional[pulumi.Input[str]] = None,
autoscaler: Optional[pulumi.Input['OceanAutoscalerArgs']] = None,
controller_cluster_id: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]] = None,
health: Optional[pulumi.Input['OceanHealthArgs']] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]] = None,
managed_service_identities: Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input['OceanNetworkArgs']] = None,
os_disk: Optional[pulumi.Input['OceanOsDiskArgs']] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
ssh_public_key: Optional[pulumi.Input[str]] = None,
strategies: Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]] = None,
user_name: Optional[pulumi.Input[str]] = None,
vm_sizes: Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]] = None):
"""
Input properties used for looking up and filtering Ocean resources.
:param pulumi.Input[str] acd_identifier: The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
:param pulumi.Input[str] aks_name: The AKS cluster name.
:param pulumi.Input[str] aks_resource_group_name: Name of the Azure Resource Group where the AKS cluster is located.
:param pulumi.Input['OceanAutoscalerArgs'] autoscaler: The Ocean Kubernetes Autoscaler object.
:param pulumi.Input[str] controller_cluster_id: A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
:param pulumi.Input[str] custom_data: Must contain a valid Base64 encoded string.
:param pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]] extensions: List of Azure extension objects.
:param pulumi.Input['OceanHealthArgs'] health: The Ocean AKS Health object.
:param pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]] images: Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
:param pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]] load_balancers: Configure Load Balancer.
:param pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]] managed_service_identities: List of Managed Service Identity objects.
:param pulumi.Input[str] name: Name of the Load Balancer.
:param pulumi.Input['OceanNetworkArgs'] network: Define the Virtual Network and Subnet.
:param pulumi.Input['OceanOsDiskArgs'] os_disk: OS disk specifications.
:param pulumi.Input[str] resource_group_name: The Resource Group name of the Load Balancer.
:param pulumi.Input[str] ssh_public_key: SSH public key for admin access to Linux VMs.
:param pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]] strategies: The Ocean AKS strategy object.
:param pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]] tags: Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
:param pulumi.Input[str] user_name: Username for admin access to VMs.
:param pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]] vm_sizes: The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
if acd_identifier is not None:
pulumi.set(__self__, "acd_identifier", acd_identifier)
if aks_name is not None:
pulumi.set(__self__, "aks_name", aks_name)
if aks_resource_group_name is not None:
pulumi.set(__self__, "aks_resource_group_name", aks_resource_group_name)
if autoscaler is not None:
pulumi.set(__self__, "autoscaler", autoscaler)
if controller_cluster_id is not None:
pulumi.set(__self__, "controller_cluster_id", controller_cluster_id)
if custom_data is not None:
pulumi.set(__self__, "custom_data", custom_data)
if extensions is not None:
pulumi.set(__self__, "extensions", extensions)
if health is not None:
pulumi.set(__self__, "health", health)
if images is not None:
pulumi.set(__self__, "images", images)
if load_balancers is not None:
pulumi.set(__self__, "load_balancers", load_balancers)
if managed_service_identities is not None:
pulumi.set(__self__, "managed_service_identities", managed_service_identities)
if name is not None:
pulumi.set(__self__, "name", name)
if network is not None:
pulumi.set(__self__, "network", network)
if os_disk is not None:
pulumi.set(__self__, "os_disk", os_disk)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if ssh_public_key is not None:
pulumi.set(__self__, "ssh_public_key", ssh_public_key)
if strategies is not None:
pulumi.set(__self__, "strategies", strategies)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
if vm_sizes is not None:
pulumi.set(__self__, "vm_sizes", vm_sizes)
@property
@pulumi.getter(name="acdIdentifier")
def acd_identifier(self) -> Optional[pulumi.Input[str]]:
"""
The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
"""
return pulumi.get(self, "acd_identifier")
@acd_identifier.setter
def acd_identifier(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "acd_identifier", value)
@property
@pulumi.getter(name="aksName")
def aks_name(self) -> Optional[pulumi.Input[str]]:
"""
The AKS cluster name.
"""
return pulumi.get(self, "aks_name")
@aks_name.setter
def aks_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "aks_name", value)
@property
@pulumi.getter(name="aksResourceGroupName")
def aks_resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Azure Resource Group where the AKS cluster is located.
"""
return pulumi.get(self, "aks_resource_group_name")
@aks_resource_group_name.setter
def aks_resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "aks_resource_group_name", value)
@property
@pulumi.getter
def autoscaler(self) -> Optional[pulumi.Input['OceanAutoscalerArgs']]:
"""
The Ocean Kubernetes Autoscaler object.
"""
return pulumi.get(self, "autoscaler")
@autoscaler.setter
def autoscaler(self, value: Optional[pulumi.Input['OceanAutoscalerArgs']]):
pulumi.set(self, "autoscaler", value)
@property
@pulumi.getter(name="controllerClusterId")
def controller_cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
"""
return pulumi.get(self, "controller_cluster_id")
@controller_cluster_id.setter
def controller_cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "controller_cluster_id", value)
@property
@pulumi.getter(name="customData")
def custom_data(self) -> Optional[pulumi.Input[str]]:
"""
Must contain a valid Base64 encoded string.
"""
return pulumi.get(self, "custom_data")
@custom_data.setter
def custom_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "custom_data", value)
@property
@pulumi.getter
def extensions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]]:
"""
List of Azure extension objects.
"""
return pulumi.get(self, "extensions")
@extensions.setter
def extensions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanExtensionArgs']]]]):
pulumi.set(self, "extensions", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input['OceanHealthArgs']]:
"""
The Ocean AKS Health object.
"""
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input['OceanHealthArgs']]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def images(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]]:
"""
Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
"""
return pulumi.get(self, "images")
@images.setter
def images(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanImageArgs']]]]):
pulumi.set(self, "images", value)
@property
@pulumi.getter(name="loadBalancers")
def load_balancers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]]:
"""
Configure Load Balancer.
"""
return pulumi.get(self, "load_balancers")
@load_balancers.setter
def load_balancers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanLoadBalancerArgs']]]]):
pulumi.set(self, "load_balancers", value)
@property
@pulumi.getter(name="managedServiceIdentities")
def managed_service_identities(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]]:
"""
List of Managed Service Identity objects.
"""
return pulumi.get(self, "managed_service_identities")
@managed_service_identities.setter
def managed_service_identities(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanManagedServiceIdentityArgs']]]]):
pulumi.set(self, "managed_service_identities", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Load Balancer.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['OceanNetworkArgs']]:
"""
Define the Virtual Network and Subnet.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['OceanNetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> Optional[pulumi.Input['OceanOsDiskArgs']]:
"""
OS disk specifications.
"""
return pulumi.get(self, "os_disk")
@os_disk.setter
def os_disk(self, value: Optional[pulumi.Input['OceanOsDiskArgs']]):
pulumi.set(self, "os_disk", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The Resource Group name of the Load Balancer.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="sshPublicKey")
def ssh_public_key(self) -> Optional[pulumi.Input[str]]:
"""
SSH public key for admin access to Linux VMs.
"""
return pulumi.get(self, "ssh_public_key")
@ssh_public_key.setter
def ssh_public_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ssh_public_key", value)
@property
@pulumi.getter
def strategies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]]:
"""
The Ocean AKS strategy object.
"""
return pulumi.get(self, "strategies")
@strategies.setter
def strategies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanStrategyArgs']]]]):
pulumi.set(self, "strategies", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]]:
"""
Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanTagArgs']]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[pulumi.Input[str]]:
"""
Username for admin access to VMs.
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "user_name", value)
@property
@pulumi.getter(name="vmSizes")
def vm_sizes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]]:
"""
The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
return pulumi.get(self, "vm_sizes")
@vm_sizes.setter
def vm_sizes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['OceanVmSizeArgs']]]]):
pulumi.set(self, "vm_sizes", value)
class Ocean(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acd_identifier: Optional[pulumi.Input[str]] = None,
aks_name: Optional[pulumi.Input[str]] = None,
aks_resource_group_name: Optional[pulumi.Input[str]] = None,
autoscaler: Optional[pulumi.Input[pulumi.InputType['OceanAutoscalerArgs']]] = None,
controller_cluster_id: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanExtensionArgs']]]]] = None,
health: Optional[pulumi.Input[pulumi.InputType['OceanHealthArgs']]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanImageArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanLoadBalancerArgs']]]]] = None,
managed_service_identities: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanManagedServiceIdentityArgs']]]]] = None,
name: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input[pulumi.InputType['OceanNetworkArgs']]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['OceanOsDiskArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
ssh_public_key: Optional[pulumi.Input[str]] = None,
strategies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanStrategyArgs']]]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanTagArgs']]]]] = None,
user_name: Optional[pulumi.Input[str]] = None,
vm_sizes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanVmSizeArgs']]]]] = None,
__props__=None):
"""
Create an Ocean resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] acd_identifier: The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
:param pulumi.Input[str] aks_name: The AKS cluster name.
:param pulumi.Input[str] aks_resource_group_name: Name of the Azure Resource Group where the AKS cluster is located.
:param pulumi.Input[pulumi.InputType['OceanAutoscalerArgs']] autoscaler: The Ocean Kubernetes Autoscaler object.
:param pulumi.Input[str] controller_cluster_id: A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
:param pulumi.Input[str] custom_data: Must contain a valid Base64 encoded string.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanExtensionArgs']]]] extensions: List of Azure extension objects.
:param pulumi.Input[pulumi.InputType['OceanHealthArgs']] health: The Ocean AKS Health object.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanImageArgs']]]] images: Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanLoadBalancerArgs']]]] load_balancers: Configure Load Balancer.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanManagedServiceIdentityArgs']]]] managed_service_identities: List of Managed Service Identity objects.
:param pulumi.Input[str] name: Name of the Load Balancer.
:param pulumi.Input[pulumi.InputType['OceanNetworkArgs']] network: Define the Virtual Network and Subnet.
:param pulumi.Input[pulumi.InputType['OceanOsDiskArgs']] os_disk: OS disk specifications.
:param pulumi.Input[str] resource_group_name: The Resource Group name of the Load Balancer.
:param pulumi.Input[str] ssh_public_key: SSH public key for admin access to Linux VMs.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanStrategyArgs']]]] strategies: The Ocean AKS strategy object.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanTagArgs']]]] tags: Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
:param pulumi.Input[str] user_name: Username for admin access to VMs.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanVmSizeArgs']]]] vm_sizes: The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: OceanArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Create an Ocean resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param OceanArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(OceanArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acd_identifier: Optional[pulumi.Input[str]] = None,
aks_name: Optional[pulumi.Input[str]] = None,
aks_resource_group_name: Optional[pulumi.Input[str]] = None,
autoscaler: Optional[pulumi.Input[pulumi.InputType['OceanAutoscalerArgs']]] = None,
controller_cluster_id: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanExtensionArgs']]]]] = None,
health: Optional[pulumi.Input[pulumi.InputType['OceanHealthArgs']]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanImageArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanLoadBalancerArgs']]]]] = None,
managed_service_identities: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanManagedServiceIdentityArgs']]]]] = None,
name: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input[pulumi.InputType['OceanNetworkArgs']]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['OceanOsDiskArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
ssh_public_key: Optional[pulumi.Input[str]] = None,
strategies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanStrategyArgs']]]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanTagArgs']]]]] = None,
user_name: Optional[pulumi.Input[str]] = None,
vm_sizes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanVmSizeArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = OceanArgs.__new__(OceanArgs)
if acd_identifier is None and not opts.urn:
raise TypeError("Missing required property 'acd_identifier'")
__props__.__dict__["acd_identifier"] = acd_identifier
if aks_name is None and not opts.urn:
raise TypeError("Missing required property 'aks_name'")
__props__.__dict__["aks_name"] = aks_name
if aks_resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'aks_resource_group_name'")
__props__.__dict__["aks_resource_group_name"] = aks_resource_group_name
__props__.__dict__["autoscaler"] = autoscaler
__props__.__dict__["controller_cluster_id"] = controller_cluster_id
__props__.__dict__["custom_data"] = custom_data
__props__.__dict__["extensions"] = extensions
__props__.__dict__["health"] = health
__props__.__dict__["images"] = images
__props__.__dict__["load_balancers"] = load_balancers
__props__.__dict__["managed_service_identities"] = managed_service_identities
__props__.__dict__["name"] = name
__props__.__dict__["network"] = network
__props__.__dict__["os_disk"] = os_disk
__props__.__dict__["resource_group_name"] = resource_group_name
if ssh_public_key is None and not opts.urn:
raise TypeError("Missing required property 'ssh_public_key'")
__props__.__dict__["ssh_public_key"] = ssh_public_key
__props__.__dict__["strategies"] = strategies
__props__.__dict__["tags"] = tags
__props__.__dict__["user_name"] = user_name
__props__.__dict__["vm_sizes"] = vm_sizes
super(Ocean, __self__).__init__(
'spotinst:azure/ocean:Ocean',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
acd_identifier: Optional[pulumi.Input[str]] = None,
aks_name: Optional[pulumi.Input[str]] = None,
aks_resource_group_name: Optional[pulumi.Input[str]] = None,
autoscaler: Optional[pulumi.Input[pulumi.InputType['OceanAutoscalerArgs']]] = None,
controller_cluster_id: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanExtensionArgs']]]]] = None,
health: Optional[pulumi.Input[pulumi.InputType['OceanHealthArgs']]] = None,
images: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanImageArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanLoadBalancerArgs']]]]] = None,
managed_service_identities: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanManagedServiceIdentityArgs']]]]] = None,
name: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input[pulumi.InputType['OceanNetworkArgs']]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['OceanOsDiskArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
ssh_public_key: Optional[pulumi.Input[str]] = None,
strategies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanStrategyArgs']]]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanTagArgs']]]]] = None,
user_name: Optional[pulumi.Input[str]] = None,
vm_sizes: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanVmSizeArgs']]]]] = None) -> 'Ocean':
"""
Get an existing Ocean resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] acd_identifier: The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
:param pulumi.Input[str] aks_name: The AKS cluster name.
:param pulumi.Input[str] aks_resource_group_name: Name of the Azure Resource Group where the AKS cluster is located.
:param pulumi.Input[pulumi.InputType['OceanAutoscalerArgs']] autoscaler: The Ocean Kubernetes Autoscaler object.
:param pulumi.Input[str] controller_cluster_id: A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
:param pulumi.Input[str] custom_data: Must contain a valid Base64 encoded string.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanExtensionArgs']]]] extensions: List of Azure extension objects.
:param pulumi.Input[pulumi.InputType['OceanHealthArgs']] health: The Ocean AKS Health object.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanImageArgs']]]] images: Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanLoadBalancerArgs']]]] load_balancers: Configure Load Balancer.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanManagedServiceIdentityArgs']]]] managed_service_identities: List of Managed Service Identity objects.
:param pulumi.Input[str] name: Name of the Load Balancer.
:param pulumi.Input[pulumi.InputType['OceanNetworkArgs']] network: Define the Virtual Network and Subnet.
:param pulumi.Input[pulumi.InputType['OceanOsDiskArgs']] os_disk: OS disk specifications.
:param pulumi.Input[str] resource_group_name: The Resource Group name of the Load Balancer.
:param pulumi.Input[str] ssh_public_key: SSH public key for admin access to Linux VMs.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanStrategyArgs']]]] strategies: The Ocean AKS strategy object.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanTagArgs']]]] tags: Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
:param pulumi.Input[str] user_name: Username for admin access to VMs.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['OceanVmSizeArgs']]]] vm_sizes: The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _OceanState.__new__(_OceanState)
__props__.__dict__["acd_identifier"] = acd_identifier
__props__.__dict__["aks_name"] = aks_name
__props__.__dict__["aks_resource_group_name"] = aks_resource_group_name
__props__.__dict__["autoscaler"] = autoscaler
__props__.__dict__["controller_cluster_id"] = controller_cluster_id
__props__.__dict__["custom_data"] = custom_data
__props__.__dict__["extensions"] = extensions
__props__.__dict__["health"] = health
__props__.__dict__["images"] = images
__props__.__dict__["load_balancers"] = load_balancers
__props__.__dict__["managed_service_identities"] = managed_service_identities
__props__.__dict__["name"] = name
__props__.__dict__["network"] = network
__props__.__dict__["os_disk"] = os_disk
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["ssh_public_key"] = ssh_public_key
__props__.__dict__["strategies"] = strategies
__props__.__dict__["tags"] = tags
__props__.__dict__["user_name"] = user_name
__props__.__dict__["vm_sizes"] = vm_sizes
return Ocean(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="acdIdentifier")
def acd_identifier(self) -> pulumi.Output[str]:
"""
The AKS identifier. A valid identifier should be formatted as `acd-nnnnnnnn` and previously used identifiers cannot be reused.
"""
return pulumi.get(self, "acd_identifier")
@property
@pulumi.getter(name="aksName")
def aks_name(self) -> pulumi.Output[str]:
"""
The AKS cluster name.
"""
return pulumi.get(self, "aks_name")
@property
@pulumi.getter(name="aksResourceGroupName")
def aks_resource_group_name(self) -> pulumi.Output[str]:
"""
Name of the Azure Resource Group where the AKS cluster is located.
"""
return pulumi.get(self, "aks_resource_group_name")
@property
@pulumi.getter
def autoscaler(self) -> pulumi.Output[Optional['outputs.OceanAutoscaler']]:
"""
The Ocean Kubernetes Autoscaler object.
"""
return pulumi.get(self, "autoscaler")
@property
@pulumi.getter(name="controllerClusterId")
def controller_cluster_id(self) -> pulumi.Output[str]:
"""
A unique identifier used for connecting the Ocean SaaS platform and the Kubernetes cluster. Typically, the cluster name is used as its identifier.
"""
return pulumi.get(self, "controller_cluster_id")
@property
@pulumi.getter(name="customData")
def custom_data(self) -> pulumi.Output[str]:
"""
Must contain a valid Base64 encoded string.
"""
return pulumi.get(self, "custom_data")
@property
@pulumi.getter
def extensions(self) -> pulumi.Output[Sequence['outputs.OceanExtension']]:
"""
List of Azure extension objects.
"""
return pulumi.get(self, "extensions")
@property
@pulumi.getter
def health(self) -> pulumi.Output['outputs.OceanHealth']:
"""
The Ocean AKS Health object.
"""
return pulumi.get(self, "health")
@property
@pulumi.getter
def images(self) -> pulumi.Output[Sequence['outputs.OceanImage']]:
"""
Image of VM. An image is a template for creating new VMs. Choose from Azure image catalogue (marketplace).
"""
return pulumi.get(self, "images")
@property
@pulumi.getter(name="loadBalancers")
def load_balancers(self) -> pulumi.Output[Sequence['outputs.OceanLoadBalancer']]:
"""
Configure Load Balancer.
"""
return pulumi.get(self, "load_balancers")
@property
@pulumi.getter(name="managedServiceIdentities")
def managed_service_identities(self) -> pulumi.Output[Optional[Sequence['outputs.OceanManagedServiceIdentity']]]:
"""
List of Managed Service Identity objects.
"""
return pulumi.get(self, "managed_service_identities")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the Load Balancer.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def network(self) -> pulumi.Output['outputs.OceanNetwork']:
"""
Define the Virtual Network and Subnet.
"""
return pulumi.get(self, "network")
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> pulumi.Output[Optional['outputs.OceanOsDisk']]:
"""
OS disk specifications.
"""
return pulumi.get(self, "os_disk")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The Resource Group name of the Load Balancer.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter(name="sshPublicKey")
def ssh_public_key(self) -> pulumi.Output[str]:
"""
SSH public key for admin access to Linux VMs.
"""
return pulumi.get(self, "ssh_public_key")
@property
@pulumi.getter
def strategies(self) -> pulumi.Output[Optional[Sequence['outputs.OceanStrategy']]]:
"""
The Ocean AKS strategy object.
"""
return pulumi.get(self, "strategies")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Sequence['outputs.OceanTag']]]:
"""
Unique key-value pairs that will be used to tag VMs that are launched in the cluster.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="userName")
def user_name(self) -> pulumi.Output[str]:
"""
Username for admin access to VMs.
"""
return pulumi.get(self, "user_name")
@property
@pulumi.getter(name="vmSizes")
def vm_sizes(self) -> pulumi.Output[Optional[Sequence['outputs.OceanVmSize']]]:
"""
The types of virtual machines that may or may not be a part of the Ocean cluster.
"""
return pulumi.get(self, "vm_sizes")
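The `@pulumi.getter(name=...)` decorators throughout the Ocean resource map Python snake_case attribute names to the provider's camelCase keys (e.g. `aks_resource_group_name` maps to `aksResourceGroupName`). A minimal sketch of that mapping; the helper name is an illustrative assumption, not part of the Pulumi SDK:

```python
def snake_to_camel(name: str) -> str:
    # Mirrors the naming convention used by the @pulumi.getter(name=...)
    # decorators above: "aks_resource_group_name" -> "aksResourceGroupName"
    head, *rest = name.split('_')
    return head + ''.join(part.capitalize() for part in rest)
```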
# problem_13/large_sum.py (plilja/project-euler, Apache-2.0)
def large_sum(numbers, wanted_digits):
    return str(sum(numbers))[:wanted_digits]
# heinsen_routing.py (ericsengithub/heinsen_routing, MIT)
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeakySoftmax(nn.Module):
def __init__(self, dim):
super(LeakySoftmax, self).__init__()
self.dim = dim
def forward(self, inp):
# leak = torch.zeros_like(b_ij).sum(dim=2, keepdim=True)
# leaky_logits = torch.cat((leak, b_ij),2)
# leaky_routing = F.softmax(leaky_logits, dim=2)
# c_ij = leaky_routing[:,:,1:,:].unsqueeze(4)
maximum, _ = torch.max(inp, dim=self.dim, keepdim=True)  # torch.max returns (values, indices)
power = torch.exp(inp - maximum)
return power/torch.sum(power, dim=self.dim, keepdim=True)
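A pure-Python mirror of the `LeakySoftmax` forward pass (an illustrative sketch, not part of the module): subtract the maximum for numerical stability, exponentiate, and normalize. Note that without the commented-out extra "leak" logit, this reduces to an ordinary softmax.

```python
import math

def leaky_softmax(xs):
    # Subtract the max before exponentiating (numerical stability), then normalize.
    # Without the extra "leak" logit sketched in the comments above, this is a
    # plain softmax over the input values.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```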
class Routing(nn.Module):
"""
Official implementation of the routing algorithm proposed by "An
Algorithm for Routing Capsules in All Domains" (Heinsen, 2019),
https://arxiv.org/abs/1911.00792.
Args:
d_cov: int, dimension 1 of input and output capsules.
d_inp: int, dimension 2 of input capsules.
d_out: int, dimension 2 of output capsules.
n_inp: (optional) int, number of input capsules. If not provided, any
number of input capsules will be accepted, limited by memory.
n_out: (optional) int, number of output capsules. If not provided, it
can be passed to the forward method; otherwise it will be equal
to the number of input capsules, limited by memory.
n_iters: (optional) int, number of routing iterations. Default is 3.
single_beta: (optional) bool; if True, beta_use and beta_ign are the
same parameter, otherwise they are distinct. Default: False.
p_model: (optional) str, specifies how to compute probability of input
votes at each output capsule. Choices are 'gaussian' for Gaussian
mixtures and 'skm' for soft k-means. Default: 'gaussian'.
eps: (optional) small positive float << 1.0 for numerical stability.
Input:
a_inp: [..., n_inp] input scores.
mu_inp: [..., n_inp, d_cov, d_inp] capsules of shape d_cov x d_inp.
return_R: (optional) bool, if True, return routing probabilities R
in addition to other outputs. Default: False.
n_out: (optional) int, number of output capsules. Valid as an input
only if not already specified as an argument at initialization.
Output:
a_out: [..., n_out] output scores.
mu_out: [..., n_out, d_cov, d_out] capsules of shape d_cov x d_out.
sig2_out: [..., n_out, d_cov, d_out] variances of shape d_cov x d_out.
Sample usage:
>>> a_inp = torch.randn(100) # 100 input scores
>>> mu_inp = torch.randn(100, 4, 4) # 100 capsules of shape 4 x 4
>>> m = Routing(d_cov=4, d_inp=4, d_out=4, n_inp=100, n_out=10)
>>> a_out, mu_out, sig2_out = m(a_inp, mu_inp)
>>> print(mu_out) # 10 capsules of shape 4 x 4
"""
# Note: the upstream implementation defaults to p_model='gaussian'; this version defaults to 'skm'.
def __init__(self, d_cov, d_inp, d_out, n_inp=-1, n_out=-1, n_iters=3, single_beta=False, p_model='skm', eps=1e-5):
super().__init__()
assert p_model in ['gaussian', 'skm'], 'Unrecognized value for p_model.'
self.n_iters, self.p_model, self.eps = (n_iters, p_model, eps)
self.n_inp_is_fixed, self.n_out_is_fixed = (n_inp > 0, n_out > 0)
one_or_n_inp, one_or_n_out = (max(1, n_inp), max(1, n_out))
self.register_buffer('CONST_one', torch.tensor(1.0))
self.W = nn.Parameter(torch.empty(one_or_n_inp, one_or_n_out, d_inp, d_out).normal_() / d_inp)
self.B = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out, d_cov, d_out))
if not self.n_out_is_fixed: self.B_brk = nn.Parameter(torch.zeros(1, d_cov, d_out))
self.beta_use = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.beta_ign = self.beta_use if single_beta else nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.f, self.log_f = (nn.Sigmoid(), nn.LogSigmoid())
self.softmax, self.log_softmax = (nn.Softmax(dim=-1), nn.LogSoftmax(dim=-1))
def forward(self, a_inp, mu_inp, return_R=False, **kwargs):
n_inp = a_inp.shape[-1]
W = self.W if self.n_inp_is_fixed else self.W.expand(n_inp, -1, -1, -1)
B = self.B
if self.n_out_is_fixed:
if ('n_out' in kwargs): raise ValueError('n_out is fixed!')
n_out = W.shape[1]
else:
n_out = kwargs['n_out'] if ('n_out' in kwargs) else n_inp
W = W.expand(-1, n_out, -1, -1)
B = B + self.B_brk * torch.linspace(-1, 1, n_out, device=B.device)[:, None, None] # break symmetry
V = torch.einsum('ijdh,...icd->...ijch', W, mu_inp) + B
f_a_inp = self.f(a_inp).unsqueeze(-1) # [...i1]
if self.n_iters > 0:
for iter_num in range(self.n_iters):
# E-step.
if iter_num == 0:
R = (self.CONST_one / n_out).expand(V.shape[:-2]) # [...ij]
else:
log_p_simplified = \
- torch.einsum('...ijch,...jch->...ij', V_less_mu_out_2, 1.0 / (2.0 * sig2_out)) \
- sig2_out.sqrt().log().sum((-2, -1)).unsqueeze(-2) if (self.p_model == 'gaussian') \
else self.log_softmax(-V_less_mu_out_2.sum((-2, -1))) # soft k-means otherwise
R = self.softmax(self.log_f(a_out).unsqueeze(-2) + log_p_simplified) # [...ij]
# D-step.
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# M-step.
a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
ret_a = a_out
else:
R = (self.CONST_one / n_out).expand(V.shape[:-2]) # [...ij]
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# M-step.
a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
# last_a = a_out
# loss = F.log_softmax(a_out, dim=-1)
values, _ = torch.max(self.softmax(a_out), dim=1)
last_a = torch.mean(values)
ret_a = a_out
count = 0
while count < 7:
count += 1
log_p_simplified = \
- torch.einsum('...ijch,...jch->...ij', V_less_mu_out_2, 1.0 / (2.0 * sig2_out)) \
- sig2_out.sqrt().log().sum((-2, -1)).unsqueeze(-2) if (self.p_model == 'gaussian') \
else self.log_softmax(-V_less_mu_out_2.sum((-2, -1))) # soft k-means otherwise
R = self.softmax(self.log_f(a_out).unsqueeze(-2) + log_p_simplified) # [...ij]
# D-step.
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# M-step.
a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
values, _ = torch.max(self.softmax(a_out), dim=1)
candidate_a = torch.mean(values)
# print("last_a:", last_a)
# print("candidate_a:", candidate_a)
# Alternative stopping rule to try:
# if candidate_a - last_a < 0:
#     break
# Could also try a version that iterates until the max score goes down.
if candidate_a > last_a:
ret_a = a_out
# cur_loss = -target * F.log_softmax(a_out, dim=-1)
# print(cur_loss)
# print(loss)
# TODO: 0.05 is an arbitrary epsilon, following: https://github.com/andyweizhao/NLP-Capsule/blob/master/layer.py
if candidate_a - last_a < 0.05 and candidate_a > (1/n_out + 0.05):
a_out = ret_a
break
else:
last_a = candidate_a
# return (a_out, mu_out, sig2_out, R) if return_R else (a_out, mu_out, sig2_out)
return (ret_a, mu_out, sig2_out, R) if return_R else (ret_a, mu_out, sig2_out)
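# The adaptive loop above stops once the mean top softmax score plateaus while
# still clearly beating the uniform baseline 1/n_out. A minimal plain-Python
# sketch of that stopping rule (the helper name and signature are illustrative,
# not part of the original code):

```python
def should_stop(last_a, candidate_a, n_out, eps=0.05):
    # Stop when the improvement over the previous iteration falls below eps
    # AND the score clearly exceeds the uniform baseline 1 / n_out.
    plateaued = candidate_a - last_a < eps
    above_uniform = candidate_a > (1.0 / n_out + eps)
    return plateaued and above_uniform
```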
class RoutingRNN(nn.Module):
"""
Variant of the routing algorithm proposed by "An Algorithm for
Routing Capsules in All Domains" (Heinsen, 2019),
https://arxiv.org/abs/1911.00792, in which the output scores are
produced by an LSTM cell instead of the closed-form beta update.
Args:
d_cov: int, dimension 1 of input and output capsules.
d_inp: int, dimension 2 of input capsules.
d_out: int, dimension 2 of output capsules.
n_inp: (optional) int, number of input capsules. If not provided, any
number of input capsules will be accepted, limited by memory.
n_out: (optional) int, number of output capsules. If not provided, it
can be passed to the forward method; otherwise it will be equal
to the number of input capsules, limited by memory.
n_iters: (optional) int, number of routing iterations. Default is 3.
single_beta: (optional) bool; if True, beta_use and beta_ign are the
same parameter, otherwise they are distinct. Default: False.
p_model: (optional) str, specifies how to compute probability of input
votes at each output capsule. Choices are 'gaussian' for Gaussian
mixtures and 'skm' for soft k-means. Default: 'gaussian'.
eps: (optional) small positive float << 1.0 for numerical stability.
Input:
a_inp: [..., n_inp] input scores.
mu_inp: [..., n_inp, d_cov, d_inp] capsules of shape d_cov x d_inp.
return_R: (optional) bool, if True, return routing probabilities R
in addition to other outputs. Default: False.
n_out: (optional) int, number of output capsules. Valid as an input
only if not already specified as an argument at initialization.
Output:
a_out: [..., n_out] output scores.
mu_out: [..., n_out, d_cov, d_out] capsules of shape d_cov x d_out.
sig2_out: [..., n_out, d_cov, d_out] variances of shape d_cov x d_out.
Sample usage:
>>> a_inp = torch.randn(100) # 100 input scores
>>> mu_inp = torch.randn(100, 4, 4) # 100 capsules of shape 4 x 4
>>> m = RoutingRNN(d_cov=4, d_inp=4, d_out=4, n_inp=100, n_out=10)
>>> a_out, mu_out, sig2_out = m(a_inp, mu_inp)
>>> print(mu_out) # 10 capsules of shape 4 x 4
"""
def __init__(self, d_cov, d_inp, d_out, n_inp=-1, n_out=-1, n_iters=3, single_beta=False, p_model='gaussian', eps=1e-5):
super().__init__()
assert p_model in ['gaussian', 'skm'], 'Unrecognized value for p_model.'
self.n_iters, self.p_model, self.eps = (n_iters, p_model, eps)
self.n_inp_is_fixed, self.n_out_is_fixed = (n_inp > 0, n_out > 0)
one_or_n_inp, one_or_n_out = (max(1, n_inp), max(1, n_out))
self.register_buffer('CONST_one', torch.tensor(1.0))
self.W = nn.Parameter(torch.empty(one_or_n_inp, one_or_n_out, d_inp, d_out).normal_() / d_inp)
self.B = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out, d_cov, d_out))
if not self.n_out_is_fixed: self.B_brk = nn.Parameter(torch.zeros(1, d_cov, d_out))
self.beta_use = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.beta_ign = self.beta_use if single_beta else nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.f, self.log_f = (nn.Sigmoid(), nn.LogSigmoid())
self.softmax, self.log_softmax = (nn.Softmax(dim=-1), nn.LogSoftmax(dim=-1))
# If this works, abstract the hidden size out into a parameter
self.hidden_size = 64
self.rnnCell = nn.LSTMCell(n_out, self.hidden_size)
self.output = nn.Linear(self.hidden_size, n_out)
def forward(self, a_inp, mu_inp, return_R=False, **kwargs):
n_inp = a_inp.shape[-1]
batch_size = a_inp.shape[0]
W = self.W if self.n_inp_is_fixed else self.W.expand(n_inp, -1, -1, -1)
B = self.B
if self.n_out_is_fixed:
if ('n_out' in kwargs): raise ValueError('n_out is fixed!')
n_out = W.shape[1]
else:
n_out = kwargs['n_out'] if ('n_out' in kwargs) else n_inp
W = W.expand(-1, n_out, -1, -1)
B = B + self.B_brk * torch.linspace(-1, 1, n_out, device=B.device)[:, None, None] # break symmetry
V = torch.einsum('ijdh,...icd->...ijch', W, mu_inp) + B
f_a_inp = self.f(a_inp).unsqueeze(-1) # [...i1]
hidden = self.init_hidden(batch_size).to(B.device)
cell = self.init_hidden(batch_size).to(B.device)
for iter_num in range(self.n_iters):
# E-step.
if iter_num == 0:
R = (self.CONST_one / n_out).expand(V.shape[:-2]) # [...ij]
else:
log_p_simplified = \
- torch.einsum('...ijch,...jch->...ij', V_less_mu_out_2, 1.0 / (2.0 * sig2_out)) \
- sig2_out.sqrt().log().sum((-2, -1)).unsqueeze(-2) if (self.p_model == 'gaussian') \
else self.log_softmax(-V_less_mu_out_2.sum((-2, -1))) # soft k-means otherwise
R = self.softmax(self.log_f(a_out).unsqueeze(-2) + log_p_simplified) # [...ij]
# D-step.
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# M-step.
a_temp = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2)
# a_feed = torch.cat((a_temp, a_inp),dim=1)
hidden, cell = self.rnnCell(a_temp, (hidden, cell))
a_out = self.output(hidden)
# a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
return (a_out, mu_out, sig2_out, R) if return_R else (a_out, mu_out, sig2_out)
def init_hidden(self, bs):
"""Creates a tensor of zeros to represent the initial hidden states
of a batch of sequences.
Arguments:
bs: The batch size for the initial hidden state.
Returns:
hidden: An initial hidden state of all zeros. (batch_size x hidden_size)
"""
return torch.zeros(bs, self.hidden_size)
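# The M-step in the loop above computes each output capsule as a D_use-weighted
# mean of the votes V via torch.einsum('...ij,...ijch,...j->...jch', ...). A
# dependency-free sketch of that weighted mean for a single unbatched example
# (hypothetical helper, for illustration only):

```python
def m_step_mean(D_use, V, eps=1e-5):
    # D_use: nested list [n_inp][n_out] of routing weights.
    # V: nested list [n_inp][n_out][d_cov][d_out] of votes.
    # Returns mu_out as a nested list [n_out][d_cov][d_out]: the
    # D_use-weighted mean of the votes, the pure-Python analogue of
    # einsum('...ij,...ijch,...j->...jch', D_use, V, 1 / D_use.sum(-2)).
    n_inp, n_out = len(D_use), len(D_use[0])
    d_cov, d_out = len(V[0][0]), len(V[0][0][0])
    mu = [[[0.0] * d_out for _ in range(d_cov)] for _ in range(n_out)]
    for j in range(n_out):
        denom = sum(D_use[i][j] for i in range(n_inp)) + eps
        for c in range(d_cov):
            for h in range(d_out):
                num = sum(D_use[i][j] * V[i][j][c][h] for i in range(n_inp))
                mu[j][c][h] = num / denom
    return mu
```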
class RoutingRNNCombo(nn.Module):
"""
Variant of the routing algorithm proposed by "An Algorithm for
Routing Capsules in All Domains" (Heinsen, 2019),
https://arxiv.org/abs/1911.00792, in which an LSTM cell produces the
output scores from the beta-based scores concatenated with the
flattened mu_out and sig2_out.
Args:
d_cov: int, dimension 1 of input and output capsules.
d_inp: int, dimension 2 of input capsules.
d_out: int, dimension 2 of output capsules.
n_inp: (optional) int, number of input capsules. If not provided, any
number of input capsules will be accepted, limited by memory.
n_out: (optional) int, number of output capsules. If not provided, it
can be passed to the forward method; otherwise it will be equal
to the number of input capsules, limited by memory.
n_iters: (optional) int, number of routing iterations. Default is 3.
single_beta: (optional) bool; if True, beta_use and beta_ign are the
same parameter, otherwise they are distinct. Default: False.
p_model: (optional) str, specifies how to compute probability of input
votes at each output capsule. Choices are 'gaussian' for Gaussian
mixtures and 'skm' for soft k-means. Default: 'gaussian'.
eps: (optional) small positive float << 1.0 for numerical stability.
Input:
a_inp: [..., n_inp] input scores.
mu_inp: [..., n_inp, d_cov, d_inp] capsules of shape d_cov x d_inp.
return_R: (optional) bool, if True, return routing probabilities R
in addition to other outputs. Default: False.
n_out: (optional) int, number of output capsules. Valid as an input
only if not already specified as an argument at initialization.
Output:
a_out: [..., n_out] output scores.
mu_out: [..., n_out, d_cov, d_out] capsules of shape d_cov x d_out.
sig2_out: [..., n_out, d_cov, d_out] variances of shape d_cov x d_out.
Sample usage:
>>> a_inp = torch.randn(100) # 100 input scores
>>> mu_inp = torch.randn(100, 4, 4) # 100 capsules of shape 4 x 4
>>> m = RoutingRNNCombo(d_cov=4, d_inp=4, d_out=4, n_inp=100, n_out=10)
>>> a_out, mu_out, sig2_out = m(a_inp, mu_inp)
>>> print(mu_out) # 10 capsules of shape 4 x 4
"""
def __init__(self, d_cov, d_inp, d_out, n_inp=-1, n_out=-1, n_iters=3, single_beta=False, p_model='gaussian', eps=1e-5):
super().__init__()
assert p_model in ['gaussian', 'skm'], 'Unrecognized value for p_model.'
self.n_iters, self.p_model, self.eps = (n_iters, p_model, eps)
self.n_inp_is_fixed, self.n_out_is_fixed = (n_inp > 0, n_out > 0)
one_or_n_inp, one_or_n_out = (max(1, n_inp), max(1, n_out))
self.register_buffer('CONST_one', torch.tensor(1.0))
self.W = nn.Parameter(torch.empty(one_or_n_inp, one_or_n_out, d_inp, d_out).normal_() / d_inp)
self.B = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out, d_cov, d_out))
if not self.n_out_is_fixed: self.B_brk = nn.Parameter(torch.zeros(1, d_cov, d_out))
self.beta_use = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.beta_ign = self.beta_use if single_beta else nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.f, self.log_f = (nn.Sigmoid(), nn.LogSigmoid())
self.softmax, self.log_softmax = (nn.Softmax(dim=-1), nn.LogSoftmax(dim=-1))
# If this works, abstract the hidden size out into a parameter
self.hidden_size = 128
self.rnnCell = nn.LSTMCell(n_out + 4*(n_out), self.hidden_size)  # a_temp (n_out) plus flattened mu_out and sig2_out; the 4*(n_out) term assumes d_cov * d_out == 2
self.output = nn.Linear(self.hidden_size, n_out)
def forward(self, a_inp, mu_inp, return_R=False, **kwargs):
n_inp = a_inp.shape[-1]
batch_size = a_inp.shape[0]
W = self.W if self.n_inp_is_fixed else self.W.expand(n_inp, -1, -1, -1)
B = self.B
if self.n_out_is_fixed:
if ('n_out' in kwargs): raise ValueError('n_out is fixed!')
n_out = W.shape[1]
else:
n_out = kwargs['n_out'] if ('n_out' in kwargs) else n_inp
W = W.expand(-1, n_out, -1, -1)
B = B + self.B_brk * torch.linspace(-1, 1, n_out, device=B.device)[:, None, None] # break symmetry
V = torch.einsum('ijdh,...icd->...ijch', W, mu_inp) + B
f_a_inp = self.f(a_inp).unsqueeze(-1) # [...i1]
hidden = self.init_hidden(batch_size).to(B.device)
cell = self.init_hidden(batch_size).to(B.device)
for iter_num in range(self.n_iters):
# E-step.
if iter_num == 0:
R = (self.CONST_one / n_out).expand(V.shape[:-2]) # [...ij]
else:
log_p_simplified = \
- torch.einsum('...ijch,...jch->...ij', V_less_mu_out_2, 1.0 / (2.0 * sig2_out)) \
- sig2_out.sqrt().log().sum((-2, -1)).unsqueeze(-2) if (self.p_model == 'gaussian') \
else self.log_softmax(-V_less_mu_out_2.sum((-2, -1))) # soft k-means otherwise
R = self.softmax(self.log_f(a_out).unsqueeze(-2) + log_p_simplified) # [...ij]
# D-step.
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# M-step.
a_temp = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2)
# a_feed = torch.cat((a_temp, a_inp),dim=1)
# a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
a_feed = torch.cat((a_temp, mu_out.reshape(batch_size, -1), sig2_out.reshape(batch_size, -1)),dim=1)
hidden, cell = self.rnnCell(a_feed, (hidden, cell))
a_out = self.output(hidden)
return (a_out, mu_out, sig2_out, R) if return_R else (a_out, mu_out, sig2_out)
def init_hidden(self, bs):
"""Creates a tensor of zeros to represent the initial hidden states
of a batch of sequences.
Arguments:
bs: The batch size for the initial hidden state.
Returns:
hidden: An initial hidden state of all zeros. (batch_size x hidden_size)
"""
return torch.zeros(bs, self.hidden_size)
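# RoutingRNNCombo feeds the LSTM cell a_temp concatenated with the flattened
# mu_out and sig2_out, so the cell's input size is
# n_out + 2 * n_out * d_cov * d_out; the constructor's hard-coded
# n_out + 4*(n_out) only matches this when d_cov * d_out == 2. A small sketch
# of the size computation (hypothetical helper, not part of the class):

```python
def combo_lstm_input_size(n_out, d_cov, d_out):
    # a_temp contributes n_out features; mu_out and sig2_out each flatten
    # to n_out * d_cov * d_out features, giving the LSTMCell input size.
    return n_out + 2 * n_out * d_cov * d_out
```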
class RoutingRNNLearnedRouting(nn.Module):
"""
Variant of the routing algorithm proposed by "An Algorithm for
Routing Capsules in All Domains" (Heinsen, 2019),
https://arxiv.org/abs/1911.00792, with a learnable initial R and
per-iteration learned sigmoid gates on the output scores and capsules.
Args:
d_cov: int, dimension 1 of input and output capsules.
d_inp: int, dimension 2 of input capsules.
d_out: int, dimension 2 of output capsules.
n_inp: (optional) int, number of input capsules. If not provided, any
number of input capsules will be accepted, limited by memory.
n_out: (optional) int, number of output capsules. If not provided, it
can be passed to the forward method; otherwise it will be equal
to the number of input capsules, limited by memory.
n_iters: (optional) int, number of routing iterations. Default is 3.
single_beta: (optional) bool; if True, beta_use and beta_ign are the
same parameter, otherwise they are distinct. Default: False.
p_model: (optional) str, specifies how to compute probability of input
votes at each output capsule. Choices are 'gaussian' for Gaussian
mixtures and 'skm' for soft k-means. Default: 'gaussian'.
eps: (optional) small positive float << 1.0 for numerical stability.
Input:
a_inp: [..., n_inp] input scores.
mu_inp: [..., n_inp, d_cov, d_inp] capsules of shape d_cov x d_inp.
return_R: (optional) bool, if True, return routing probabilities R
in addition to other outputs. Default: False.
n_out: (optional) int, number of output capsules. Valid as an input
only if not already specified as an argument at initialization.
Output:
a_out: [..., n_out] output scores.
mu_out: [..., n_out, d_cov, d_out] capsules of shape d_cov x d_out.
sig2_out: [..., n_out, d_cov, d_out] variances of shape d_cov x d_out.
Sample usage:
>>> a_inp = torch.randn(100) # 100 input scores
>>> mu_inp = torch.randn(100, 4, 4) # 100 capsules of shape 4 x 4
>>> m = RoutingRNNLearnedRouting(d_cov=4, d_inp=4, d_out=4, n_inp=100, n_out=10)
>>> a_out, mu_out, sig2_out = m(a_inp, mu_inp)
>>> print(mu_out) # 10 capsules of shape 4 x 4
"""
def __init__(self, d_cov, d_inp, d_out, n_inp=-1, n_out=-1, n_iters=3, single_beta=False, p_model='gaussian', eps=1e-5):
super().__init__()
assert p_model in ['gaussian', 'skm'], 'Unrecognized value for p_model.'
self.n_iters, self.p_model, self.eps = (n_iters, p_model, eps)
self.n_inp_is_fixed, self.n_out_is_fixed = (n_inp > 0, n_out > 0)
one_or_n_inp, one_or_n_out = (max(1, n_inp), max(1, n_out))
self.register_buffer('CONST_one', torch.tensor(1.0))
self.W = nn.Parameter(torch.empty(one_or_n_inp, one_or_n_out, d_inp, d_out).normal_() / d_inp)
self.B = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out, d_cov, d_out))
if not self.n_out_is_fixed: self.B_brk = nn.Parameter(torch.zeros(1, d_cov, d_out))
self.beta_use = nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.beta_ign = self.beta_use if single_beta else nn.Parameter(torch.zeros(one_or_n_inp, one_or_n_out))
self.f, self.log_f = (nn.Sigmoid(), nn.LogSigmoid())
self.softmax, self.log_softmax = (nn.Softmax(dim=-1), nn.LogSoftmax(dim=-1))
# If this works, abstract these out into parameters
self.R = nn.Parameter((self.CONST_one / n_out))
self.a_1 = nn.Linear(n_out, n_out)
self.a_2 = nn.Linear(n_out, n_out)
self.a_3 = nn.Linear(n_out, n_out)
self.mu_1 = nn.Linear(n_out*4, n_out*2)  # in/out sizes assume d_cov * d_out == 2 (input is flattened mu_out plus sig2_out)
self.mu_2 = nn.Linear(n_out*4, n_out*2)
self.mu_3 = nn.Linear(n_out*4, n_out*2)
self.a_scaler = [self.a_1, self.a_2, self.a_3]
self.mu_scaler = [self.mu_1, self.mu_2, self.mu_3]
def forward(self, a_inp, mu_inp, return_R=False, **kwargs):
n_inp = a_inp.shape[-1]
batch_size = a_inp.shape[0]
W = self.W if self.n_inp_is_fixed else self.W.expand(n_inp, -1, -1, -1)
B = self.B
if self.n_out_is_fixed:
if ('n_out' in kwargs): raise ValueError('n_out is fixed!')
n_out = W.shape[1]
else:
n_out = kwargs['n_out'] if ('n_out' in kwargs) else n_inp
W = W.expand(-1, n_out, -1, -1)
B = B + self.B_brk * torch.linspace(-1, 1, n_out, device=B.device)[:, None, None] # break symmetry
V = torch.einsum('ijdh,...icd->...ijch', W, mu_inp) + B
f_a_inp = self.f(a_inp).unsqueeze(-1) # [...i1]
for iter_num in range(self.n_iters):
# E-step.
if iter_num == 0:
R = self.R.expand(V.shape[:-2]) # [...ij]
else:
log_p_simplified = \
- torch.einsum('...ijch,...jch->...ij', V_less_mu_out_2, 1.0 / (2.0 * sig2_out)) \
- sig2_out.sqrt().log().sum((-2, -1)).unsqueeze(-2) if (self.p_model == 'gaussian') \
else self.log_softmax(-V_less_mu_out_2.sum((-2, -1))) # soft k-means otherwise
R = self.softmax(self.log_f(a_out).unsqueeze(-2) + log_p_simplified) # [...ij]
# D-step.
D_use = f_a_inp * R
D_ign = f_a_inp - D_use
# a_out = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2) # [...j]
# M-step.
a_temp = (self.beta_use * D_use).sum(dim=-2) - (self.beta_ign * D_ign).sum(dim=-2)
a_out = a_temp * self.f(self.a_scaler[iter_num](a_temp))
over_D_use_sum = 1.0 / (D_use.sum(dim=-2) + self.eps) # [...j]
mu_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V, over_D_use_sum)
V_less_mu_out_2 = (V - mu_out.unsqueeze(-4)) ** 2 # [...ijch]
sig2_out = torch.einsum('...ij,...ijch,...j->...jch', D_use, V_less_mu_out_2, over_D_use_sum) + self.eps
mu_shape = mu_out.shape
ins = torch.cat((mu_out.reshape(batch_size,-1), sig2_out.reshape(batch_size,-1)), dim=1)
mu_out = mu_out * self.f(self.mu_scaler[iter_num](ins)).reshape(mu_shape)
return (a_out, mu_out, sig2_out, R) if return_R else (a_out, mu_out, sig2_out)
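# RoutingRNNLearnedRouting rescales the beta-based scores with a per-iteration
# sigmoid gate: a_out = a_temp * sigmoid(Linear(a_temp)). A dependency-free
# sketch of one gate application with an explicit weight matrix and bias
# (helper name and arguments are illustrative, not part of the class):

```python
import math

def gate_scores(a_temp, W, b):
    # a_temp: list of n_out scores; W: [n_out][n_out] weights; b: [n_out]
    # biases. Returns a_temp * sigmoid(W @ a_temp + b), the per-iteration
    # learned gate applied to the output scores.
    n_out = len(a_temp)
    out = []
    for j in range(n_out):
        z = sum(W[j][k] * a_temp[k] for k in range(n_out)) + b[j]
        out.append(a_temp[j] * (1.0 / (1.0 + math.exp(-z))))
    return out
```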
# woopi/server.py (repo: augustogoulart/woopi, license: MIT)
def main():
    return 1
# geeksbot/imports/message_logging.py (repo: dustinpianalto/geeksbot_v2, license: MIT)
# noinspection PyPackageRequirements
import discord
async def on_message(bot, message, user_info):
if not user_info.get('disable_logging'):
if message.guild:
msg_data = {
'author': str(message.author.id),
'channel': str(message.channel.id),
'mention_everyone': message.mention_everyone,
'created_at': message.created_at
}
if message.mentions:
msg_data['mentions'] = [str(user.id) for user in message.mentions]
if message.channel_mentions:
msg_data['channel_mentions'] = [str(channel.id) for channel in message.channel_mentions]
if message.role_mentions:
msg_data['role_mentions'] = [str(role.id) for role in message.role_mentions]
if message.embeds:
msg_data['embeds'] = [e.to_dict() for e in message.embeds]
if message.content:
msg_data['content'] = message.content
if message.webhook_id:
msg_data['webhook_id'] = message.webhook_id
if message.tts:
msg_data['tts'] = message.tts
if message.attachments:
msg_data['attachments'] = [{
'id': str(a.id),
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in message.attachments]
bot.fs_db.document(f'guilds/{message.guild.id}/messages/{message.id}').set(msg_data)
else:
msg_data = {
'author': str(message.author.id),
'created_at': message.created_at
}
if message.mentions:
msg_data['mentions'] = [str(user.id) for user in message.mentions]
if message.embeds:
msg_data['embeds'] = [e.to_dict() for e in message.embeds]
if message.content:
msg_data['content'] = message.content
if message.webhook_id:
msg_data['webhook_id'] = message.webhook_id
if message.tts:
msg_data['tts'] = message.tts
if message.attachments:
msg_data['attachments'] = [{
'id': str(a.id),
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in message.attachments]
bot.fs_db.document(f'dm_channels/{message.channel.id}/messages/{message.id}').set(msg_data)
async def on_message_edit(bot, before: discord.Message, after: discord.Message, user_config):
if not user_config.get('disable_logging'):
if after.guild:
msg_ref = bot.fs_db.document(f'guilds/{after.guild.id}/messages/{after.id}')
msg_data = (await bot.loop.run_in_executor(bot.tpe, msg_ref.get)).to_dict()
if before.content != after.content:
if before.content:
if msg_data.get('previous_content') and isinstance(msg_data['previous_content'], list):
msg_data['previous_content'].append(before.content)
else:
msg_data['previous_content'] = [before.content, ]
msg_data['content'] = after.content
if before.embeds != after.embeds:
if before.embeds:
if msg_data.get('previous_embeds') and isinstance(msg_data['previous_embeds'], list):
msg_data['previous_embeds'].append(before.embeds[0].to_dict())
else:
msg_data['previous_embeds'] = [before.embeds[0].to_dict(), ]
msg_data['embeds'] = [e.to_dict() for e in after.embeds]
if before.pinned != after.pinned:
msg_data['pinned'] = after.pinned
if before.mentions != after.mentions:
msg_data['mentions'] = [str(user.id) for user in after.mentions]
if before.channel_mentions != after.channel_mentions:
msg_data['channel_mentions'] = [str(user.id) for user in after.channel_mentions]
if before.role_mentions != after.role_mentions:
msg_data['role_mentions'] = [str(user.id) for user in after.role_mentions]
if before.attachments != after.attachments:
if before.attachments:
if msg_data.get('previous_attachments') and isinstance(msg_data['previous_attachments'], list):
msg_data['previous_attachments'].append([{
'id': str(a.id),
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in before.attachments])
else:
msg_data['previous_attachments'] = [[{
'id': a.id,
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in before.attachments], ]
msg_data['attachments'] = [{
'id': a.id,
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in after.attachments]
bot.fs_db.document(f'guilds/{after.guild.id}/messages/{after.id}').set(msg_data)
else:
msg_ref = bot.fs_db.document(f'dm_channels/{after.channel.id}/messages/{after.id}')
msg_data = (await bot.loop.run_in_executor(bot.tpe, msg_ref.get)).to_dict()
if before.content != after.content:
if before.content:
if msg_data.get('previous_content') and isinstance(msg_data['previous_content'], list):
msg_data['previous_content'].append(before.content)
else:
msg_data['previous_content'] = [before.content, ]
msg_data['content'] = after.content
if before.embeds != after.embeds:
if before.embeds:
if msg_data.get('previous_embeds') and isinstance(msg_data['previous_embeds'], list):
msg_data['previous_embeds'].append(before.embeds[0].to_dict())
else:
msg_data['previous_embeds'] = [before.embeds[0].to_dict(), ]
msg_data['embeds'] = [e.to_dict() for e in after.embeds]
if before.pinned != after.pinned:
msg_data['pinned'] = after.pinned
if before.mentions != after.mentions:
msg_data['mentions'] = [str(user.id) for user in after.mentions]
if before.attachments != after.attachments:
if before.attachments:
if msg_data.get('previous_attachments') and isinstance(msg_data['previous_attachments'], list):
msg_data['previous_attachments'].append([{
'id': str(a.id),
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in before.attachments])
else:
msg_data['previous_attachments'] = [[{
'id': a.id,
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in before.attachments], ]
msg_data['attachments'] = [{
'id': a.id,
'size': a.size,
'filename': a.filename,
'url': a.url
} for a in after.attachments]
bot.fs_db.document(f'dm_channels/{after.channel.id}/messages/{after.id}').set(msg_data)
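# on_message_edit repeats the same bookkeeping for each field: keep the newest
# value under the field name and append the superseded value to a
# 'previous_<field>' history list. That pattern can be factored as follows
# (hypothetical helper, not part of the bot):

```python
def record_edit(msg_data, field, old_value, new_value):
    # Store new_value under `field` and push the superseded old_value onto
    # the 'previous_<field>' history list, creating it on the first edit.
    if old_value != new_value:
        if old_value:
            key = f'previous_{field}'
            if isinstance(msg_data.get(key), list):
                msg_data[key].append(old_value)
            else:
                msg_data[key] = [old_value]
        msg_data[field] = new_value
    return msg_data
```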
# pywork/py2.py (repo: infinityman8/pythonwork-uni, license: MIT)
print(3+4+5/3)
# mantisshrimp/parsers/__init__.py (repo: ramaneswaran/mantisshrimp, license: Apache-2.0)
from mantisshrimp.parsers.splits import *
from mantisshrimp.parsers.mixins import *
from mantisshrimp.parsers.parser import *
from mantisshrimp.parsers.combined_parser import *
from mantisshrimp.parsers.defaults import *
# ImageToEmojiArtBot/constants.py (repo: crscillitoe/DiscordBotsToCleanseYourSoul, license: Apache-2.0)
color_dictionary = {
(222,143,156): ':100:',
(112,168,211): ':1234:',
(104,109,112): ':8ball:',
(234,128,141): ':ab:',
(104,163,209): ':abc:',
(117,171,213): ':abcd:',
(248,182,94): ':accept:',
(193,176,191): ':aerial_tramway:',
(196,214,210): ':airplane_arriving:',
(196,214,210): ':airplane_departure:',
(225,179,187): ':airplane_small:',
(227,175,169): ':alarm_clock:',
(178,185,190): ':alien:',
(204,187,198): ':ambulance:',
(186,155,145): ':amphora:',
(232,213,162): ':angel:',
(188,188,191): ':angel::skin-tone-1:',
(229,221,175): ':angel::skin-tone-2:',
(201,178,170): ':angel::skin-tone-3:',
(180,166,159): ':angel::skin-tone-4:',
(147,143,143): ':angel::skin-tone-5:',
(236,187,194): ':anger:',
(206,212,216): ':anger_right:',
(233,192,92): ':angry:',
(233,192,91): ':anguished:',
(147,150,152): ':ant:',
(224,113,123): ':apple:',
(115,170,212): ':arrow_double_down:',
(114,170,212): ':arrow_double_up:',
(110,167,211): ':arrow_down_small:',
(110,167,211): ':arrow_up_small:',
(116,170,212): ':arrows_clockwise:',
(121,173,214): ':arrows_counterclockwise:',
(220,183,163): ':art:',
(177,196,156): ':articulated_lorry:',
(228,189,93): ':astonished:',
(206,228,245): ':athletic_shoe:',
(98,158,203): ':atm:',
(162,179,134): ':avocado:',
(242,211,129): ':baby:',
(236,212,200): ':baby::skin-tone-1:',
(233,205,173): ':baby::skin-tone-2:',
(213,180,156): ':baby::skin-tone-3:',
(188,151,124): ':baby::skin-tone-4:',
(147,120,107): ':baby::skin-tone-5:',
(211,220,225): ':baby_bottle:',
(253,224,160): ':baby_chick:',
(247,176,82): ':baby_symbol:',
(156,159,161): ':back:',
(249,233,223): ':back_of_hand::skin-tone-1:',
(247,225,195): ':back_of_hand::skin-tone-2:',
(228,201,179): ':back_of_hand::skin-tone-3:',
(204,173,149): ':back_of_hand::skin-tone-4:',
(172,146,133): ':back_of_hand::skin-tone-5:',
(247,200,200): ':bacon:',
(212,225,236): ':badminton:',
(145,178,204): ':baggage_claim:',
(236,189,192): ':balloon:',
(166,191,210): ':ballot_box:',
(177,200,147): ':bamboo:',
(254,238,201): ':banana:',
(180,190,197): ':bank:',
(192,185,196): ':bar_chart:',
(224,224,232): ':barber:',
(190,130,52): ':basketball:',
(182,190,196): ':bat:',
(226,229,228): ':bath:',
(220,225,229): ':bath::skin-tone-1:',
(226,230,228): ':bath::skin-tone-2:',
(223,225,228): ':bath::skin-tone-3:',
(221,224,227): ':bath::skin-tone-4:',
(218,222,226): ':bath::skin-tone-5:',
(226,231,235): ':bathtub:',
(194,219,182): ':battery:',
(177,175,191): ':beach:',
(235,193,187): ':beach_umbrella:',
(193,129,110): ':bear:',
(212,217,222): ':bed:',
(194,193,183): ':bee:',
(247,208,136): ':beer:',
(244,212,148): ':beers:',
(174,114,123): ':beetle:',
(188,224,196): ':beginner:',
(255,208,139): ':bell:',
(246,200,146): ':bellhop:',
(155,103,93): ':bento:',
(188,199,199): ':bicyclist:',
(187,198,207): ':bicyclist::skin-tone-1:',
(188,198,204): ':bicyclist::skin-tone-2:',
(185,195,202): ':bicyclist::skin-tone-3:',
(182,191,198): ':bicyclist::skin-tone-4:',
(177,188,196): ':bicyclist::skin-tone-5:',
(215,184,189): ':bike:',
(216,206,230): ':bikini:',
(233,144,149): ':bird:',
(188,150,138): ':birthday:',
(98,103,106): ':black_heart:',
(214,197,202): ':black_joker:',
(144,147,149): ':black_square_button:',
(225,226,208): ':blossom:',
(224,184,170): ':blowfish:',
(95,164,216): ':blue_book:',
(161,195,220): ':blue_car:',
(136,195,241): ':blue_heart:',
(238,185,103): ':blush:',
(173,110,93): ':boar:',
(128,129,128): ':bomb:',
(199,212,221): ':book:',
(213,182,192): ':bookmark:',
(189,195,210): ':bookmark_tabs:',
(185,179,180): ':books:',
(234,173,164): ':boom:',
(215,167,153): ':boot:',
(205,187,146): ':bouquet:',
(211,203,160): ':bow:',
(174,183,191): ':bow::skin-tone-1:',
(208,209,176): ':bow::skin-tone-2:',
(182,170,169): ':bow::skin-tone-3:',
(162,157,157): ':bow::skin-tone-4:',
(132,136,142): ':bow::skin-tone-5:',
(238,218,213): ':bow_and_arrow:',
(151,149,154): ':bowling:',
(210,120,134): ':boxing_glove:',
(249,206,116): ':boy:',
(187,173,166): ':boy::skin-tone-1:',
(244,218,141): ':boy::skin-tone-2:',
(202,154,130): ':boy::skin-tone-3:',
(169,132,111): ':boy::skin-tone-4:',
(118,97,87): ':boy::skin-tone-5:',
(238,203,172): ':bread:',
(247,210,141): ':bride_with_veil:',
(185,176,173): ':bride_with_veil::skin-tone-1:',
(243,223,155): ':bride_with_veil::skin-tone-2:',
(206,165,148): ':bride_with_veil::skin-tone-3:',
(180,151,137): ':bride_with_veil::skin-tone-4:',
(140,124,117): ':bride_with_veil::skin-tone-5:',
(106,84,87): ':bridge_at_night:',
(166,104,65): ':briefcase:',
(239,148,162): ':broken_heart:',
(210,194,231): ':bug:',
(248,233,198): ':bulb:',
(176,199,216): ':bullettrain_front:',
(159,182,199): ':bullettrain_side:',
(230,214,184): ':burrito:',
(159,175,188): ':bus:',
(227,208,206): ':busstop:',
(120,124,126): ':bust_in_silhouette:',
(131,138,142): ':busts_in_silhouette:',
(148,178,200): ':butterfly:',
(164,202,142): ':cactus:',
(244,222,198): ':cake:',
(185,159,167): ':calendar:',
(218,189,196): ':calendar_spiral:',
(254,232,166): ':call_me:',
(250,234,225): ':call_me_hand::skin-tone-1:',
(247,228,202): ':call_me_hand::skin-tone-2:',
(231,208,188): ':call_me_hand::skin-tone-3:',
(210,183,162): ':call_me_hand::skin-tone-4:',
(182,160,149): ':call_me_hand::skin-tone-5:',
(122,170,206): ':calling:',
(220,172,157): ':camel:',
(111,122,130): ':camera:',
(140,139,131): ':camera_with_flash:',
(158,170,124): ':camping:',
(245,235,218): ':candle:',
(239,172,181): ':candy:',
(201,202,218): ':canoe:',
(127,178,216): ':capital_abcd:',
(183,187,193): ':card_box:',
(165,182,194): ':card_index:',
(162,194,226): ':carousel_horse:',
(239,203,145): ':carrot:',
(225,189,179): ':cat2:',
(237,190,108): ':cat:',
(182,193,202): ':cd:',
(233,239,241): ':chains:',
(208,210,193): ':champagne:',
(224,212,218): ':champagne_glass:',
(160,201,137): ':chart:',
(183,208,227): ':chart_with_downwards_trend:',
(217,187,196): ':chart_with_upwards_trend:',
(188,193,196): ':checkered_flag:',
(254,199,116): ':cheese:',
(208,163,156): ':cherries:',
(247,194,196): ':cherry_blossom:',
(190,120,99): ':chestnut:',
(238,223,214): ':chicken:',
(179,161,113): ':children_crossing:',
(203,166,137): ':chipmunk:',
(221,167,169): ':chocolate_bar:',
(194,200,166): ':christmas_tree:',
(158,196,225): ':cinema:',
(213,187,194): ':circus_tent:',
(123,96,53): ':city_dusk:',
(171,143,82): ':city_sunset:',
(101,122,132): ':cityscape:',
(232,112,127): ':cl:',
(253,221,137): ':clap:',
(247,224,208): ':clap::skin-tone-1:',
(243,216,181): ':clap::skin-tone-2:',
(224,192,164): ':clap::skin-tone-3:',
(198,163,133): ':clap::skin-tone-4:',
(164,134,116): ':clap::skin-tone-5:',
(131,151,127): ':clapper:',
(184,193,198): ':classical_building:',
(217,206,204): ':clipboard:',
(202,212,218): ':clock1030:',
(202,212,218): ':clock10:',
(202,212,218): ':clock1130:',
(203,212,218): ':clock11:',
(202,212,218): ':clock1230:',
(204,214,220): ':clock12:',
(202,211,218): ':clock130:',
(203,212,218): ':clock1:',
(202,212,218): ':clock230:',
(202,212,218): ':clock2:',
(202,211,218): ':clock330:',
(202,212,218): ':clock3:',
(203,212,218): ':clock430:',
(202,212,218): ':clock4:',
(203,212,218): ':clock530:',
(202,212,218): ':clock5:',
(205,214,220): ':clock630:',
(202,212,218): ':clock6:',
(203,212,218): ':clock730:',
(202,212,218): ':clock7:',
(203,212,218): ':clock830:',
(202,212,218): ':clock8:',
(202,212,218): ':clock930:',
(202,212,218): ':clock9:',
(209,176,168): ':clock:',
(206,66,86): ':closed_book:',
(231,175,124): ':closed_lock_with_key:',
(202,188,223): ':closed_umbrella:',
(240,234,224): ':cloud_lightning:',
(223,235,244): ':cloud_rain:',
(221,234,244): ':cloud_snow:',
(184,193,199): ':cloud_tornado:',
(226,185,163): ':clown:',
(227,230,231): ':cocktail:',
(204,192,140): ':cold_sweat:',
(225,162,172): ':compression:',
(162,201,231): ':computer:',
(218,177,161): ':confetti_ball:',
(224,184,87): ':confounded:',
(242,200,96): ':confused:',
(189,181,155): ':construction:',
(155,154,190): ':construction_site:',
(243,224,159): ':construction_worker:',
(240,223,181): ':construction_worker::skin-tone-1:',
(240,222,172): ':construction_worker::skin-tone-2:',
(234,213,167): ':construction_worker::skin-tone-3:',
(224,204,157): ':construction_worker::skin-tone-4:',
(210,194,151): ':construction_worker::skin-tone-5:',
(150,151,156): ':control_knobs:',
(185,164,183): ':convenience_store:',
(216,169,148): ':cookie:',
(172,172,168): ':cooking:',
(97,159,207): ':cool:',
(193,196,174): ':cop:',
(185,193,197): ':cop::skin-tone-1:',
(191,196,188): ':cop::skin-tone-2:',
(181,183,183): ':cop::skin-tone-3:',
(171,173,173): ':cop::skin-tone-4:',
(155,161,165): ':cop::skin-tone-5:',
(197,194,119): ':corn:',
(177,197,147): ':couch:',
(226,194,156): ':couple:',
(217,184,149): ':couple_mm:',
(225,175,136): ':couple_with_heart:',
(231,168,127): ':couple_ww:',
(245,181,107): ':couplekiss:',
(174,178,181): ':cow2:',
(212,188,188): ':cow:',
(198,166,92): ':cowboy:',
(207,110,125): ':crab:',
(229,168,177): ':crayon:',
(209,175,124): ':credit_card:',
(240,243,245): ':crescent_moon:',
(228,196,187): ':cricket:',
(196,215,184): ':crocodile:',
(251,208,150): ':croissant:',
(224,215,218): ':crossed_flags:',
(241,213,140): ':crown:',
(153,191,220): ':cruise_ship:',
(220,190,105): ':cry:',
(202,180,110): ':crying_cat_face:',
(173,198,223): ':crystal_ball:',
(190,216,174): ':cucumber:',
(212,150,171): ':cupid:',
(196,198,199): ':curly_loop:',
(182,185,188): ':currency_exchange:',
(241,227,206): ':curry:',
(223,199,172): ':custard:',
(110,155,188): ':customs:',
(157,207,245): ':cyclone:',
(212,216,218): ':dagger:',
(246,191,178): ':dancer:',
(232,183,188): ':dancer::skin-tone-1:',
(245,193,183): ':dancer::skin-tone-2:',
(236,179,181): ':dancer::skin-tone-3:',
(229,175,177): ':dancer::skin-tone-4:',
(220,169,173): ':dancer::skin-tone-5:',
(211,188,137): ':dancers:',
(242,234,217): ':dango:',
(197,199,200): ':dark_sunglasses:',
(171,146,152): ':dart:',
(210,232,248): ':dash:',
(206,170,179): ':date:',
(139,171,114): ':deciduous_tree:',
(199,161,151): ':deer:',
(160,186,198): ':department_store:',
(173,190,141): ':desert:',
(157,199,232): ':desktop:',
(165,209,243): ':diamond_shape_with_a_dot_inside:',
(241,199,95): ':disappointed:',
(220,188,99): ':disappointed_relieved:',
(244,222,180): ':dividers:',
(255,232,195): ':dizzy:',
(223,183,86): ':dizzy_face:',
(152,105,111): ':do_not_litter:',
(227,199,187): ':dog2:',
(157,163,168): ':dog:',
(184,207,179): ':dollar:',
(164,159,149): ':dolls:',
(141,185,218): ':dolphin:',
(206,104,117): ':door:',
(198,156,128): ':doughnut:',
(215,224,221): ':dove:',
(168,202,148): ':dragon:',
(165,199,145): ':dragon_face:',
(156,205,243): ':dress:',
(223,178,164): ':dromedary_camel:',
(232,195,99): ':drooling_face:',
(179,216,246): ':droplet:',
(209,128,117): ':drum:',
(221,215,200): ':duck:',
(255,219,148): ':dvd:',
(208,216,222): ':e_mail:',
(200,201,195): ':eagle:',
(253,223,151): ':ear:',
(247,227,216): ':ear::skin-tone-1:',
(243,220,191): ':ear::skin-tone-2:',
(224,198,175): ':ear::skin-tone-3:',
(201,171,147): ':ear::skin-tone-4:',
(168,144,132): ':ear::skin-tone-5:',
(222,232,198): ':ear_of_rice:',
(136,187,177): ':earth_africa:',
(139,191,189): ':earth_americas:',
(136,187,176): ':earth_asia:',
(248,231,221): ':egg:',
(186,170,210): ':eggplant:',
(120,173,214): ':eject:',
(187,190,191): ':electric_plug:',
(169,183,191): ':elephant:',
(137,140,142): ':end:',
(192,213,229): ':envelope_with_arrow:',
(198,216,198): ':euro:',
(176,195,209): ':european_castle:',
(216,141,137): ':european_post_office:',
(165,188,146): ':evergreen_tree:',
(201,188,213): ':expecting_woman::skin-tone-1:',
(223,206,209): ':expecting_woman::skin-tone-2:',
(210,186,207): ':expecting_woman::skin-tone-3:',
(202,183,205): ':expecting_woman::skin-tone-4:',
(190,175,199): ':expecting_woman::skin-tone-5:',
(242,200,96): ':expressionless:',
(203,219,231): ':eye:',
(147,150,152): ':eye_in_speech_bubble:',
(177,191,202): ':eyeglasses:',
(230,233,236): ':eyes:',
(204,139,147): ':factory:',
(225,180,164): ':fallen_leaf:',
(214,174,108): ':family:',
(206,179,113): ':family_mmb:',
(220,185,112): ':family_mmbb:',
(216,176,101): ':family_mmg:',
(226,183,105): ':family_mmgb:',
(232,180,98): ':family_mmgg:',
(224,181,106): ':family_mwbb:',
(222,172,97): ':family_mwg:',
(230,178,98): ':family_mwgb:',
(235,177,91): ':family_mwgg:',
(223,168,103): ':family_wwb:',
(229,176,97): ':family_wwbb:',
(230,167,92): ':family_wwg:',
(234,174,90): ':family_wwgb:',
(238,172,83): ':family_wwgg:',
(115,170,212): ':fast_forward:',
(153,166,169): ':fax:',
(216,196,133): ':fearful:',
(211,190,186): ':feet:',
(231,235,237): ':fencer:',
(159,181,209): ':ferris_wheel:',
(173,190,215): ':ferry:',
(241,202,191): ':field_hockey:',
(142,150,155): ':file_cabinet:',
(112,176,224): ':file_folder:',
(176,189,199): ':film_frames:',
(254,237,184): ':fingers_crossed:',
(250,197,113): ':fire:',
(213,159,171): ':fire_engine:',
(127,154,150): ':fireworks:',
(195,206,205): ':first_place:',
(164,175,183): ':first_quarter_moon:',
(234,238,240): ':first_quarter_moon_with_face:',
(151,197,232): ':fish:',
(228,214,230): ':fish_cake:',
(186,201,216): ':fishing_pole_and_fish:',
(254,224,124): ':fist:',
(247,225,212): ':fist::skin-tone-1:',
(243,216,178): ':fist::skin-tone-2:',
(219,186,157): ':fist::skin-tone-3:',
(189,149,118): ':fist::skin-tone-4:',
(148,115,99): ':fist::skin-tone-5:',
(134,139,142): ':flag_black:',
(211,194,208): ':flags:',
(208,212,212): ':flashlight:',
(140,152,161): ':floppy_disk:',
(182,107,117): ':flower_playing_cards:',
(237,194,128): ':flushed:',
(228,200,192): ':fog:',
(151,174,187): ':foggy:',
(154,112,104): ':football:',
(180,147,140): ':footprints:',
(211,218,223): ':fork_and_knife:',
(206,215,222): ':fork_knife_plate:',
(165,204,143): ':four_leaf_clover:',
(233,174,108): ':fox:',
(176,166,173): ':frame_photo:',
(117,171,213): ':free:',
(248,227,203): ':french_bread:',
(254,191,125): ':fried_shrimp:',
(234,135,104): ':fries:',
(169,206,147): ':frog:',
(240,198,95): ':frowning:',
(208,217,223): ':full_moon:',
(203,213,219): ':full_moon_with_face:',
(232,145,158): ':game_die:',
(172,212,242): ':gem:',
(206,211,214): ':ghost:',
(242,167,132): ':gift:',
(245,175,144): ':gift_heart:',
(250,203,114): ':girl:',
(180,165,159): ':girl::skin-tone-1:',
(246,217,135): ':girl::skin-tone-2:',
(200,148,126): ':girl::skin-tone-3:',
(167,128,109): ':girl::skin-tone-4:',
(117,93,84): ':girl::skin-tone-5:',
(154,194,224): ':globe_with_meridians:',
(207,184,191): ':goal:',
(229,232,233): ':goat:',
(208,225,225): ':golfer::skin-tone-1:',
(208,225,223): ':golfer::skin-tone-2:',
(207,223,222): ':golfer::skin-tone-3:',
(205,222,220): ':golfer::skin-tone-4:',
(203,220,219): ':golfer::skin-tone-5:',
(108,115,120): ':gorilla:',
(205,191,187): ':grandma::skin-tone-1:',
(203,187,171): ':grandma::skin-tone-2:',
(193,173,162): ':grandma::skin-tone-3:',
(179,157,145): ':grandma::skin-tone-4:',
(158,141,135): ':grandma::skin-tone-5:',
(171,145,208): ':grapes:',
(157,195,132): ':green_apple:',
(122,170,96): ':green_book:',
(156,198,133): ':green_heart:',
(246,248,249): ':grey_exclamation:',
(240,243,245): ':grey_question:',
(226,189,100): ':grimacing:',
(228,190,98): ':grin:',
(225,187,96): ':grinning:',
(175,161,154): ':guardsman:',
(174,161,163): ':guardsman::skin-tone-1:',
(174,160,160): ':guardsman::skin-tone-2:',
(172,157,158): ':guardsman::skin-tone-3:',
(168,154,154): ':guardsman::skin-tone-4:',
(162,150,151): ':guardsman::skin-tone-5:',
(239,201,163): ':guitar:',
(212,207,208): ':gun:',
(224,183,128): ':haircut:',
(153,142,152): ':haircut::skin-tone-1:',
(221,199,136): ':haircut::skin-tone-2:',
(181,135,132): ':haircut::skin-tone-3:',
(155,124,124): ':haircut::skin-tone-4:',
(115,97,104): ':haircut::skin-tone-5:',
(214,167,139): ':hamburger:',
(234,220,201): ':hammer:',
(210,184,149): ':hammer_pick:',
(237,184,123): ':hamster:',
(255,232,158): ':hand_splayed:',
(251,239,232): ':hand_with_index_and_middle_finger_crossed::skin-tone-1:',
(249,234,213): ':hand_with_index_and_middle_finger_crossed::skin-tone-2:',
(236,218,202): ':hand_with_index_and_middle_finger_crossed::skin-tone-3:',
(219,198,181): ':hand_with_index_and_middle_finger_crossed::skin-tone-4:',
(197,180,171): ':hand_with_index_and_middle_finger_crossed::skin-tone-5:',
(182,147,204): ':handbag:',
(255,232,168): ':handshake:',
(252,218,142): ':hatched_chick:',
(245,219,159): ':hatching_chick:',
(235,204,129): ':head_bandage:',
(175,197,214): ':headphones:',
(195,132,111): ':hear_no_evil:',
(238,167,179): ':heart_decoration:',
(228,157,97): ':heart_eyes:',
(217,154,115): ':heart_eyes_cat:',
(240,152,160): ':heartbeat:',
(230,142,157): ':heartpulse:',
(194,196,197): ':heavy_division_sign:',
(196,198,199): ':heavy_dollar_sign:',
(215,216,217): ':heavy_minus_sign:',
(184,186,187): ':heavy_plus_sign:',
(207,207,180): ':helicopter:',
(231,153,160): ':helmet_with_cross:',
(204,218,187): ':herb:',
(232,174,171): ':hibiscus:',
(255,221,172): ':high_brightness:',
(221,173,180): ':high_heel:',
(212,202,187): ':hockey:',
(187,190,192): ':hole:',
(155,173,149): ':homes:',
(247,199,129): ':honey_pot:',
(198,142,125): ':horse:',
(214,188,174): ':horse_racing:',
(213,188,175): ':horse_racing::skin-tone-1:',
(214,188,175): ':horse_racing::skin-tone-2:',
(213,187,174): ':horse_racing::skin-tone-3:',
(213,187,174): ':horse_racing::skin-tone-4:',
(212,186,173): ':horse_racing::skin-tone-5:',
(170,200,227): ':hospital:',
(234,182,184): ':hot_pepper:',
(241,191,174): ':hotdog:',
(193,195,174): ':hotel:',
(228,220,198): ':hourglass_flowing_sand:',
(222,219,205): ':house:',
(204,172,160): ':house_abandoned:',
(200,206,173): ':house_with_garden:',
(237,183,81): ':hugging:',
(234,192,92): ':hushed:',
(186,196,209): ':ice_cream:',
(235,231,233): ':ice_skate:',
(255,236,195): ':icecream:',
(186,158,223): ':id:',
(240,162,171): ':ideograph_advantage:',
(165,139,205): ':imp:',
(203,184,157): ':inbox_tray:',
(222,230,235): ':incoming_envelope:',
(229,186,141): ':information_desk_person:',
(176,157,174): ':information_desk_person::skin-tone-1:',
(226,197,156): ':information_desk_person::skin-tone-2:',
(192,145,149): ':information_desk_person::skin-tone-3:',
(168,131,137): ':information_desk_person::skin-tone-4:',
(131,106,119): ':information_desk_person::skin-tone-5:',
(218,191,107): ':innocent:',
(130,171,202): ':iphone:',
(139,165,158): ':island:',
(198,67,85): ':izakaya_lantern:',
(205,139,74): ':jack_o_lantern:',
(125,188,216): ':japan:',
(183,180,180): ':japanese_castle:',
(202,129,134): ':japanese_goblin:',
(167,87,87): ':japanese_ogre:',
(144,187,219): ':jeans:',
(199,182,120): ':joy:',
(193,175,113): ':joy_cat:',
(186,175,179): ':joystick:',
(89,76,54): ':kaaba:',
(197,199,201): ':key2:',
(227,187,175): ':key:',
(132,180,216): ':keycap_ten:',
(127,177,215): ':kimono:',
(242,164,176): ':kiss:',
(245,187,123): ':kiss_mm:',
(242,173,98): ':kiss_ww:',
(240,198,95): ':kissing:',
(220,186,96): ':kissing_cat:',
(236,184,102): ':kissing_closed_eyes:',
(232,185,93): ':kissing_heart:',
(241,199,95): ':kissing_smiling_eyes:',
(187,183,142): ':kiwi:',
(219,222,224): ':knife:',
(178,188,194): ':koala:',
(94,157,206): ':koko:',
(252,228,182): ':label:',
(121,189,242): ':large_blue_circle:',
(153,205,245): ':large_blue_diamond:',
(255,205,133): ':large_orange_diamond:',
(165,176,184): ':last_quarter_moon:',
(233,237,240): ':last_quarter_moon_with_face:',
(221,184,94): ':laughing:',
(185,218,177): ':leaves:',
(234,187,94): ':ledger:',
(254,232,165): ':left_facing_fist:',
(249,234,225): ':left_fist::skin-tone-1:',
(247,228,202): ':left_fist::skin-tone-2:',
(230,207,188): ':left_fist::skin-tone-3:',
(210,183,162): ':left_fist::skin-tone-4:',
(182,160,149): ':left_fist::skin-tone-5:',
(139,175,201): ':left_luggage:',
(235,211,118): ':lemon:',
(252,227,175): ':leopard:',
(182,196,207): ':level_slider:',
(206,208,207): ':levitate:',
(178,176,169): ':light_rail:',
(206,213,218): ':link:',
(184,140,94): ':lion_face:',
(228,173,178): ':lips:',
(247,187,173): ':lipstick:',
(186,208,172): ':lizard:',
(245,210,159): ':lock:',
(200,185,160): ':lock_with_ink_pen:',
(247,186,139): ':lollipop:',
(120,161,193): ':loop:',
(210,218,223): ':loud_sound:',
(218,176,185): ':loudspeaker:',
(198,162,188): ':love_hotel:',
(220,194,203): ':love_letter:',
(255,228,190): ':low_brightness:',
(243,202,112): ':lying_face:',
(204,215,223): ':mag:',
(204,215,223): ':mag_right:',
(193,180,184): ':mailbox:',
(193,181,185): ':mailbox_closed:',
(204,188,192): ':mailbox_with_mail:',
(168,153,156): ':mailbox_with_no_mail:',
(197,192,194): ':male_dancer::skin-tone-1:',
(201,196,192): ':male_dancer::skin-tone-2:',
(198,191,191): ':male_dancer::skin-tone-3:',
(196,190,190): ':male_dancer::skin-tone-4:',
(192,187,188): ':male_dancer::skin-tone-5:',
(251,209,121): ':man:',
(181,171,165): ':man::skin-tone-1:',
(246,223,141): ':man::skin-tone-2:',
(202,155,133): ':man::skin-tone-3:',
(170,135,116): ':man::skin-tone-4:',
(120,101,92): ':man::skin-tone-5:',
(201,195,190): ':man_dancing:',
(206,208,209): ':man_in_business_suit_levitating::skin-tone-1:',
(206,207,208): ':man_in_business_suit_levitating::skin-tone-2:',
(204,205,206): ':man_in_business_suit_levitating::skin-tone-4:',
(202,204,206): ':man_in_business_suit_levitating::skin-tone-5:',
(202,190,162): ':man_in_tuxedo:',
(182,179,178): ':man_in_tuxedo::skin-tone-1:',
(200,193,170): ':man_in_tuxedo::skin-tone-2:',
(186,173,167): ':man_in_tuxedo::skin-tone-3:',
(175,166,160): ':man_in_tuxedo::skin-tone-4:',
(159,154,152): ':man_in_tuxedo::skin-tone-5:',
(218,191,142): ':man_with_gua_pi_mao:',
(215,192,178): ':man_with_gua_pi_mao::skin-tone-1:',
(214,188,164): ':man_with_gua_pi_mao::skin-tone-2:',
(204,176,156): ':man_with_gua_pi_mao::skin-tone-3:',
(191,162,140): ':man_with_gua_pi_mao::skin-tone-4:',
(170,145,131): ':man_with_gua_pi_mao::skin-tone-5:',
(229,222,179): ':man_with_turban:',
(208,211,196): ':man_with_turban::skin-tone-1:',
(227,226,188): ':man_with_turban::skin-tone-2:',
(213,204,184): ':man_with_turban::skin-tone-3:',
(202,197,177): ':man_with_turban::skin-tone-4:',
(184,184,169): ':man_with_turban::skin-tone-5:',
(218,194,187): ':mans_shoe:',
(154,199,204): ':map:',
(236,139,151): ':maple_leaf:',
(194,201,206): ':martial_arts_uniform:',
(245,219,154): ':mask:',
(235,196,137): ':massage:',
(187,165,172): ':massage::skin-tone-1:',
(230,203,161): ':massage::skin-tone-2:',
(199,156,151): ':massage::skin-tone-3:',
(173,137,133): ':massage::skin-tone-4:',
(135,109,115): ':massage::skin-tone-5:',
(224,183,165): ':meat_on_bone:',
(197,214,215): ':medal:',
(153,198,232): ':mega:',
(180,215,158): ':melon:',
(192,166,225): ':menorah:',
(86,138,177): ':mens:',
(254,236,177): ':metal:',
(121,111,120): ':metro:',
(171,178,183): ':microphone2:',
(175,189,199): ':microphone:',
(239,212,172): ':microscope:',
(255,238,179): ':middle_finger:',
(236,206,195): ':military_medal:',
(223,228,232): ':milk:',
(103,79,145): ':milky_way:',
(170,194,212): ':minibus:',
(193,171,125): ':minidisc:',
(248,183,98): ':mobile_phone_off:',
(219,189,97): ':money_mouth:',
(203,220,200): ':money_with_wings:',
(235,214,172): ':moneybag:',
(217,168,154): ':monkey:',
(198,140,121): ':monkey_face:',
(179,196,194): ':monorail:',
(139,138,134): ':mortar_board:',
(238,195,134): ':mosque:',
(239,205,203): ':mother_christmas::skin-tone-1:',
(238,201,191): ':mother_christmas::skin-tone-2:',
(230,191,184): ':mother_christmas::skin-tone-3:',
(219,179,171): ':mother_christmas::skin-tone-4:',
(203,166,163): ':mother_christmas::skin-tone-5:',
(224,185,190): ':motor_scooter:',
(194,213,221): ':motorboat:',
(191,204,189): ':motorcycle:',
(134,172,162): ':motorway:',
(125,161,188): ':mount_fuji:',
(87,121,134): ':mountain:',
(142,174,137): ':mountain_bicyclist:',
(140,174,146): ':mountain_bicyclist::skin-tone-1:',
(138,170,141): ':mountain_bicyclist::skin-tone-3:',
(135,166,137): ':mountain_bicyclist::skin-tone-4:',
(131,163,135): ':mountain_bicyclist::skin-tone-5:',
(162,188,200): ':mountain_cableway:',
(188,198,177): ':mountain_railway:',
(108,141,153): ':mountain_snow:',
(237,232,236): ':mouse2:',
(184,182,190): ':mouse:',
(179,189,196): ':mouse_three_button:',
(143,149,152): ':movie_camera:',
(189,198,204): ':moyai:',
(241,204,172): ':mrs_claus:',
(254,231,155): ':muscle:',
(249,233,223): ':muscle::skin-tone-1:',
(246,226,196): ':muscle::skin-tone-2:',
(228,202,181): ':muscle::skin-tone-3:',
(205,175,151): ':muscle::skin-tone-4:',
(173,149,136): ':muscle::skin-tone-5:',
(224,153,165): ':mushroom:',
(116,123,127): ':musical_keyboard:',
(180,217,246): ':musical_note:',
(165,173,178): ':musical_score:',
(225,209,215): ':mute:',
(238,195,181): ':nail_care:',
(237,195,199): ':nail_care::skin-tone-1:',
(236,194,192): ':nail_care::skin-tone-2:',
(232,187,188): ':nail_care::skin-tone-3:',
(226,180,180): ':nail_care::skin-tone-4:',
(218,174,176): ':nail_care::skin-tone-5:',
(235,147,159): ':name_badge:',
(129,175,103): ':nauseated_face:',
(142,141,182): ':necktie:',
(160,201,137): ':negative_squared_cross_mark:',
(203,172,96): ':nerd:',
(239,198,95): ':neutral_face:',
(112,168,211): ':new:',
(122,135,144): ':new_moon:',
(120,132,140): ':new_moon_with_face:',
(214,224,231): ':newspaper2:',
(171,192,207): ':newspaper:',
(123,175,215): ':ng:',
(66,86,94): ':night_with_stars:',
(251,191,136): ':no_bell:',
(156,109,115): ':no_bicycles:',
(237,143,155): ':no_entry_sign:',
(213,173,155): ':no_good:',
(169,149,179): ':no_good::skin-tone-1:',
(211,182,166): ':no_good::skin-tone-2:',
(183,140,161): ':no_good::skin-tone-3:',
(164,129,152): ':no_good::skin-tone-4:',
(134,109,138): ':no_good::skin-tone-5:',
(134,87,93): ':no_mobile_phones:',
(245,203,98): ':no_mouth:',
(132,85,91): ':no_pedestrians:',
(133,87,93): ':no_smoking:',
(142,95,101): ':non_potable_water:',
(252,232,171): ':nose:',
(247,233,225): ':nose::skin-tone-1:',
(245,227,204): ':nose::skin-tone-2:',
(231,209,191): ':nose::skin-tone-3:',
(212,187,168): ':nose::skin-tone-4:',
(185,165,155): ':nose::skin-tone-5:',
(140,149,155): ':notebook:',
(212,161,142): ':notebook_with_decorative_cover:',
(186,200,209): ':notepad_spiral:',
(191,223,247): ':notes:',
(219,224,227): ':nut_and_bolt:',
(146,197,235): ':ocean:',
(224,115,130): ':octagonal_sign:',
(170,140,211): ':octopus:',
(226,219,216): ':oden:',
(187,202,209): ':office:',
(120,169,207): ':oil:',
(119,172,213): ':ok:',
(254,235,174): ':ok_hand:',
(250,236,228): ':ok_hand::skin-tone-1:',
(248,231,207): ':ok_hand::skin-tone-2:',
(233,212,194): ':ok_hand::skin-tone-3:',
(214,190,171): ':ok_hand::skin-tone-4:',
(189,169,159): ':ok_hand::skin-tone-5:',
(210,164,139): ':ok_woman:',
(171,143,175): ':ok_woman::skin-tone-1:',
(206,171,157): ':ok_woman::skin-tone-2:',
(178,128,150): ':ok_woman::skin-tone-3:',
(156,113,135): ':ok_woman::skin-tone-4:',
(123,89,119): ':ok_woman::skin-tone-5:',
(246,220,140): ':older_man:',
(241,221,210): ':older_man::skin-tone-1:',
(239,214,183): ':older_man::skin-tone-2:',
(220,190,167): ':older_man::skin-tone-3:',
(195,162,136): ':older_man::skin-tone-4:',
(158,132,119): ':older_man::skin-tone-5:',
(208,190,147): ':older_woman:',
(179,149,220): ':om_symbol:',
(137,141,143): ':on:',
(141,181,206): ':oncoming_automobile:',
(177,184,177): ':oncoming_bus:',
(148,167,184): ':oncoming_police_car:',
(209,200,157): ':oncoming_taxi:',
(106,169,217): ':open_file_folder:',
(254,236,175): ':open_hands:',
(250,237,229): ':open_hands::skin-tone-1:',
(248,231,208): ':open_hands::skin-tone-2:',
(234,213,196): ':open_hands::skin-tone-3:',
(215,191,172): ':open_hands::skin-tone-4:',
(190,170,160): ':open_hands::skin-tone-5:',
(235,194,92): ':open_mouth:',
(176,143,218): ':ophiuchus:',
(247,174,70): ':orange_book:',
(221,162,155): ':outbox_tray:',
(204,155,129): ':owl:',
(219,171,158): ':ox:',
(208,175,164): ':package:',
(198,208,215): ':page_facing_up:',
(206,215,222): ':page_with_curl:',
(165,182,157): ':pager:',
(206,216,228): ':paintbrush:',
(200,203,174): ':palm_tree:',
(238,195,137): ':pancakes:',
(192,193,194): ':panda_face:',
(225,230,233): ':paperclip:',
(209,216,221): ':paperclips:',
(99,145,143): ':park:',
(115,158,190): ':passport_control:',
(108,166,210): ':pause_button:',
(236,155,128): ':peach:',
(230,192,177): ':peanuts:',
(207,228,190): ':pear:',
(195,191,180): ':pen_ballpoint:',
(181,184,187): ':pen_fountain:',
(203,198,183): ':pencil:',
(200,197,191): ':penguin:',
(236,195,93): ':pensive:',
(201,207,204): ':performing_arts:',
(226,186,88): ':persevere:',
(236,195,143): ':person_frowning:',
(175,160,171): ':person_frowning::skin-tone-1:',
(233,208,155): ':person_frowning::skin-tone-2:',
(197,151,149): ':person_frowning::skin-tone-3:',
(172,138,139): ':person_frowning::skin-tone-4:',
(134,113,121): ':person_frowning::skin-tone-5:',
(215,219,223): ':person_with_ball::skin-tone-1:',
(222,225,218): ':person_with_ball::skin-tone-2:',
(216,215,215): ':person_with_ball::skin-tone-3:',
(210,210,210): ':person_with_ball::skin-tone-4:',
(202,204,206): ':person_with_ball::skin-tone-5:',
(250,225,129): ':person_with_blond_hair:',
(247,226,173): ':person_with_blond_hair::skin-tone-1:',
(245,222,157): ':person_with_blond_hair::skin-tone-2:',
(234,207,147): ':person_with_blond_hair::skin-tone-3:',
(217,190,128): ':person_with_blond_hair::skin-tone-4:',
(193,171,117): ':person_with_blond_hair::skin-tone-5:',
(236,196,144): ':person_with_pouting_face:',
(175,162,172): ':person_with_pouting_face::skin-tone-1:',
(233,209,155): ':person_with_pouting_face::skin-tone-2:',
(197,152,150): ':person_with_pouting_face::skin-tone-3:',
(171,139,140): ':person_with_pouting_face::skin-tone-4:',
(133,113,121): ':person_with_pouting_face::skin-tone-5:',
(234,217,193): ':pick:',
(248,208,216): ':pig2:',
(233,154,168): ':pig:',
(221,184,182): ':pig_nose:',
(233,166,137): ':pill:',
(225,220,170): ':pineapple:',
(235,146,151): ':ping_pong:',
(240,186,152): ':pizza:',
(180,150,220): ':place_of_worship:',
(124,175,215): ':play_pause:',
(255,238,176): ':point_down:',
(251,239,231): ':point_down::skin-tone-1:',
(249,233,210): ':point_down::skin-tone-2:',
(235,214,197): ':point_down::skin-tone-3:',
(216,192,173): ':point_down::skin-tone-4:',
(191,171,161): ':point_down::skin-tone-5:',
(255,238,177): ':point_left:',
(251,239,231): ':point_left::skin-tone-1:',
(249,233,210): ':point_left::skin-tone-2:',
(235,214,197): ':point_left::skin-tone-3:',
(216,193,174): ':point_left::skin-tone-4:',
(192,172,162): ':point_left::skin-tone-5:',
(255,238,176): ':point_right:',
(251,239,231): ':point_right::skin-tone-1:',
(249,233,210): ':point_right::skin-tone-2:',
(234,214,197): ':point_right::skin-tone-3:',
(216,192,173): ':point_right::skin-tone-4:',
(191,171,161): ':point_right::skin-tone-5:',
(251,240,233): ':point_up::skin-tone-1:',
(249,235,216): ':point_up::skin-tone-2:',
(237,220,205): ':point_up::skin-tone-3:',
(221,201,186): ':point_up::skin-tone-4:',
(201,184,176): ':point_up::skin-tone-5:',
(255,238,177): ':point_up_2:',
(251,239,231): ':point_up_2::skin-tone-1:',
(249,233,210): ':point_up_2::skin-tone-2:',
(235,215,198): ':point_up_2::skin-tone-3:',
(216,193,174): ':point_up_2::skin-tone-4:',
(192,172,162): ':point_up_2::skin-tone-5:',
(195,205,213): ':police_car:',
(198,206,212): ':poodle:',
(199,150,138): ':poop:',
(235,191,179): ':popcorn:',
(185,195,211): ':post_office:',
(251,216,179): ':postal_horn:',
(209,146,157): ':postbox:',
(99,147,183): ':potable_water:',
(229,188,170): ':potato:',
(202,225,182): ':pouch:',
(231,195,180): ':poultry_leg:',
(199,196,220): ':pound:',
(218,184,96): ':pouting_cat:',
(230,220,166): ':pray:',
(226,220,217): ':pray::skin-tone-1:',
(223,215,198): ':pray::skin-tone-2:',
(210,197,186): ':pray::skin-tone-3:',
(191,175,163): ':pray::skin-tone-4:',
(167,155,151): ':pray::skin-tone-5:',
(200,190,210): ':prayer_beads:',
(224,201,206): ':pregnant_woman:',
(239,213,152): ':prince:',
(212,200,195): ':prince::skin-tone-1:',
(234,216,176): ':prince::skin-tone-2:',
(211,182,166): ':prince::skin-tone-3:',
(189,165,148): ':prince::skin-tone-4:',
(155,140,133): ':prince::skin-tone-5:',
(246,198,112): ':princess:',
(162,151,147): ':princess::skin-tone-1:',
(242,217,125): ':princess::skin-tone-2:',
(193,139,119): ':princess::skin-tone-3:',
(160,123,107): ':princess::skin-tone-4:',
(108,89,82): ':princess::skin-tone-5:',
(158,189,212): ':printer:',
(128,133,137): ':projector:',
(252,220,146): ':punch:',
(246,225,213): ':punch::skin-tone-1:',
(242,217,187): ':punch::skin-tone-2:',
(223,195,171): ':punch::skin-tone-3:',
(198,166,141): ':punch::skin-tone-4:',
(164,138,126): ':punch::skin-tone-5:',
(193,172,225): ':purple_heart:',
(235,119,136): ':purse:',
(232,160,170): ':pushpin:',
(121,173,214): ':put_litter_in_its_place:',
(236,188,195): ':question:',
(200,205,212): ':rabbit2:',
(212,210,216): ':rabbit:',
(206,199,184): ':race_car:',
(223,190,180): ':racehorse:',
(152,163,170): ':radio:',
(97,146,183): ':radio_button:',
(198,77,94): ':rage:',
(201,210,182): ':railway_car:',
(143,182,182): ':railway_track:',
(161,152,144): ':rainbow:',
(255,231,152): ':raised_back_of_hand:',
(254,231,152): ':raised_hand:',
(249,233,222): ':raised_hand::skin-tone-1:',
(246,225,195): ':raised_hand::skin-tone-2:',
(228,201,179): ':raised_hand::skin-tone-3:',
(204,173,148): ':raised_hand::skin-tone-4:',
(171,146,133): ':raised_hand::skin-tone-5:',
(250,234,224): ':raised_hand_with_fingers_splayed::skin-tone-1:',
(247,227,199): ':raised_hand_with_fingers_splayed::skin-tone-2:',
(229,204,183): ':raised_hand_with_fingers_splayed::skin-tone-3:',
(207,177,154): ':raised_hand_with_fingers_splayed::skin-tone-4:',
(176,152,140): ':raised_hand_with_fingers_splayed::skin-tone-5:',
(249,233,223): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-1:',
(246,226,197): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-2:',
(228,202,181): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-3:',
(205,175,152): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-4:',
(174,149,136): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-5:',
(255,233,162): ':raised_hands:',
(251,234,221): ':raised_hands::skin-tone-1:',
(248,228,198): ':raised_hands::skin-tone-2:',
(233,207,184): ':raised_hands::skin-tone-3:',
(212,183,158): ':raised_hands::skin-tone-4:',
(185,161,145): ':raised_hands::skin-tone-5:',
(223,179,143): ':raising_hand:',
(177,155,176): ':raising_hand::skin-tone-1:',
(219,188,159): ':raising_hand::skin-tone-2:',
(189,142,152): ':raising_hand::skin-tone-3:',
(166,128,140): ':raising_hand::skin-tone-4:',
(132,104,123): ':raising_hand::skin-tone-5:',
(229,227,214): ':ram:',
(228,193,157): ':ramen:',
(184,184,189): ':rat:',
(135,182,218): ':record_button:',
(226,177,186): ':red_car:',
(228,90,107): ':red_circle:',
(233,192,92): ':relieved:',
(171,205,231): ':reminder_ribbon:',
(116,170,212): ':repeat:',
(116,170,212): ':repeat_one:',
(144,188,221): ':restroom:',
(251,239,231): ':reversed_hand_with_middle_finger_extended::skin-tone-1:',
(249,233,211): ':reversed_hand_with_middle_finger_extended::skin-tone-2:',
(235,215,199): ':reversed_hand_with_middle_finger_extended::skin-tone-3:',
(217,194,176): ':reversed_hand_with_middle_finger_extended::skin-tone-4:',
(193,175,165): ':reversed_hand_with_middle_finger_extended::skin-tone-5:',
(241,162,175): ':revolving_hearts:',
(115,170,212): ':rewind:',
(182,192,200): ':rhino:',
(226,104,120): ':ribbon:',
(229,189,196): ':rice:',
(201,203,204): ':rice_ball:',
(173,125,112): ':rice_cracker:',
(109,148,143): ':rice_scene:',
(254,232,165): ':right_facing_fist:',
(249,234,225): ':right_fist::skin-tone-1:',
(247,228,202): ':right_fist::skin-tone-2:',
(230,207,187): ':right_fist::skin-tone-3:',
(209,183,161): ':right_fist::skin-tone-4:',
(181,159,148): ':right_fist::skin-tone-5:',
(208,222,233): ':ring:',
(152,184,211): ':robot:',
(181,173,189): ':rocket:',
(212,178,95): ':rofl:',
(153,159,190): ':roller_coaster:',
(243,208,123): ':rolling_eyes:',
(237,230,229): ':rooster:',
(210,159,159): ':rose:',
(132,173,223): ':rosette:',
(218,137,150): ':rotating_light:',
(238,207,212): ':round_pushpin:',
(195,200,193): ':rowboat:',
(192,198,194): ':rowboat::skin-tone-1:',
(194,200,193): ':rowboat::skin-tone-2:',
(193,198,193): ':rowboat::skin-tone-3:',
(192,197,192): ':rowboat::skin-tone-4:',
(190,196,192): ':rowboat::skin-tone-5:',
(196,216,231): ':rugby_football:',
(219,225,210): ':runner:',
(211,221,218): ':runner::skin-tone-1:',
(218,226,214): ':runner::skin-tone-2:',
(213,218,213): ':runner::skin-tone-3:',
(208,215,210): ':runner::skin-tone-4:',
(202,210,207): ':runner::skin-tone-5:',
(150,193,215): ':running_shirt_with_sash:',
(189,219,243): ':sake:',
(188,199,174): ':salad:',
(237,202,199): ':sandal:',
(237,187,158): ':santa:',
(235,187,187): ':santa::skin-tone-1:',
(234,184,176): ':santa::skin-tone-2:',
(226,174,169): ':santa::skin-tone-3:',
(215,162,156): ':santa::skin-tone-4:',
(199,149,149): ':santa::skin-tone-5:',
(191,194,192): ':satellite:',
(208,223,232): ':satellite_orbital:',
(250,226,189): ':saxophone:',
(197,198,180): ':school:',
(187,73,83): ':school_satchel:',
(220,225,236): ':scooter:',
(198,181,172): ':scorpion:',
(217,198,151): ':scream:',
(225,192,117): ':scream_cat:',
(240,198,143): ':scroll:',
(154,188,214): ':seat:',
(184,213,235): ':second_place:',
(202,136,115): ':see_no_evil:',
(216,233,206): ':seedling:',
(211,221,200): ':selfie:',
(210,221,208): ':selfie::skin-tone-1:',
(210,221,205): ':selfie::skin-tone-2:',
(208,218,203): ':selfie::skin-tone-3:',
(206,215,200): ':selfie::skin-tone-4:',
(202,212,198): ':selfie::skin-tone-5:',
(183,145,89): ':shallow_pan_of_food:',
(161,170,176): ':shark:',
(204,193,218): ':shaved_ice:',
(237,234,221): ':sheep:',
(207,215,220): ':shell:',
(119,171,211): ':shield:',
(210,148,156): ':shinto_shrine:',
(188,192,204): ':ship:',
(107,166,212): ':shirt:',
(194,124,152): ':shopping_bags:',
(185,192,196): ':shopping_cart:',
(189,211,227): ':shower:',
(215,181,171): ':shrimp:',
(250,238,230): ':sign_of_the_horns::skin-tone-1:',
(248,232,209): ':sign_of_the_horns::skin-tone-2:',
(234,214,197): ':sign_of_the_horns::skin-tone-3:',
(216,193,175): ':sign_of_the_horns::skin-tone-4:',
(192,172,163): ':sign_of_the_horns::skin-tone-5:',
(112,168,211): ':signal_strength:',
(193,171,226): ':six_pointed_star:',
(196,205,223): ':ski:',
(194,204,221): ':skier:',
(183,191,196): ':skull:',
(221,189,101): ':sleeping:',
(201,207,212): ':sleeping_accommodation:',
(227,192,98): ':sleepy:',
(148,154,159): ':sleuth_or_spy::skin-tone-1:',
(149,155,156): ':sleuth_or_spy::skin-tone-2:',
(147,152,155): ':sleuth_or_spy::skin-tone-3:',
(145,149,152): ':sleuth_or_spy::skin-tone-4:',
(141,147,151): ':sleuth_or_spy::skin-tone-5:',
(239,197,94): ':slight_frown:',
(239,197,94): ':slight_smile:',
(173,160,165): ':slot_machine:',
(212,234,251): ':small_blue_diamond:',
(255,234,203): ':small_orange_diamond:',
(248,209,216): ':small_red_triangle:',
(248,209,215): ':small_red_triangle_down:',
(227,189,97): ':smile:',
(206,178,102): ':smile_cat:',
(222,184,94): ':smiley:',
(213,183,103): ':smiley_cat:',
(166,141,206): ':smiling_imp:',
(235,194,92): ':smirk:',
(219,185,96): ':smirk_cat:',
(235,232,231): ':smoking:',
(236,200,169): ':snail:',
(184,212,164): ':snake:',
(230,199,121): ':sneezing_face:',
(201,197,208): ':snowboarder:',
(202,183,118): ':sob:',
(231,107,122): ':sos:',
(220,226,230): ':sound:',
(165,151,191): ':space_invader:',
(235,203,181): ':spaghetti:',
(119,152,155): ':sparkler:',
(224,215,234): ':sparkles:',
(239,143,145): ':sparkling_heart:',
(201,140,120): ':speak_no_evil:',
(223,229,233): ':speaker:',
(130,130,128): ':speaking_head:',
(200,225,243): ':speech_balloon:',
(183,191,203): ':speedboat:',
(170,173,174): ':spider:',
(196,202,206): ':spider_web:',
(228,232,235): ':spoon:',
(237,182,192): ':squid:',
(144,174,183): ':stadium:',
(255,211,145): ':star2:',
(138,146,126): ':stars:',
(171,181,178): ':station:',
(58,159,159): ':statue_of_liberty:',
(163,126,127): ':steam_locomotive:',
(176,172,145): ':stew:',
(152,193,224): ':stop_button:',
(198,198,202): ':stopwatch:',
(239,218,166): ':straight_ruler:',
(209,123,128): ':strawberry:',
(234,181,98): ':stuck_out_tongue:',
(229,177,95): ':stuck_out_tongue_closed_eyes:',
(234,187,118): ':stuck_out_tongue_winking_eye:',
(218,186,143): ':stuffed_flatbread:',
(251,198,103): ':sun_with_face:',
(201,168,105): ':sunflower:',
(193,164,91): ':sunglasses:',
(176,169,124): ':sunrise:',
(192,177,81): ':sunrise_over_mountains:',
(181,197,192): ':surfer:',
(175,194,198): ':surfer::skin-tone-1:',
(181,198,195): ':surfer::skin-tone-2:',
(176,192,194): ':surfer::skin-tone-3:',
(173,190,192): ':surfer::skin-tone-4:',
(168,186,190): ':surfer::skin-tone-5:',
(199,134,138): ':sushi:',
(152,146,130): ':suspension_railway:',
(229,197,107): ':sweat:',
(158,206,244): ':sweat_drops:',
(214,186,108): ':sweat_smile:',
(238,180,161): ':sweet_potato:',
(190,211,197): ':swimmer:',
(187,211,231): ':swimmer::skin-tone-1:',
(186,208,217): ':swimmer::skin-tone-2:',
(177,196,210): ':swimmer::skin-tone-3:',
(165,182,195): ':swimmer::skin-tone-4:',
(150,169,187): ':swimmer::skin-tone-5:',
(128,178,216): ':symbols:',
(182,182,176): ':synagogue:',
(235,202,208): ':syringe:',
(248,204,139): ':taco:',
(214,153,156): ':tada:',
(200,192,168): ':tanabata_tree:',
(235,165,66): ':tangerine:',
(227,215,183): ':taxi:',
(169,196,161): ':tea:',
(174,177,178): ':telephone_receiver:',
(214,183,190): ':telescope:',
(165,206,141): ':tennis:',
(227,206,210): ':thermometer:',
(229,189,101): ':thermometer_face:',
(240,195,104): ':thinking:',
(192,199,207): ':third_place:',
(213,233,248): ':thought_balloon:',
(245,223,211): ':thumbdown::skin-tone-1:',
(241,215,184): ':thumbdown::skin-tone-2:',
(221,192,167): ':thumbdown::skin-tone-3:',
(195,162,136): ':thumbdown::skin-tone-4:',
(159,133,120): ':thumbdown::skin-tone-5:',
(252,218,141): ':thumbsdown:',
(252,219,143): ':thumbsup:',
(246,224,212): ':thumbup::skin-tone-1:',
(241,216,186): ':thumbup::skin-tone-2:',
(222,193,169): ':thumbup::skin-tone-3:',
(196,164,139): ':thumbup::skin-tone-4:',
(162,135,123): ':thumbup::skin-tone-5:',
(226,227,222): ':thunder_cloud_rain:',
(219,226,230): ':ticket:',
(231,141,154): ':tickets:',
(236,208,166): ':tiger2:',
(203,180,128): ':tiger:',
(195,197,202): ':timer:',
(218,180,88): ':tired_face:',
(220,226,231): ':toilet:',
(155,181,223): ':tokyo_tower:',
(221,102,111): ':tomato:',
(231,161,169): ':tongue:',
(206,195,177): ':tools:',
(169,171,173): ':top:',
(100,97,117): ':tophat:',
(129,179,217): ':track_next:',
(129,179,217): ':track_previous:',
(153,149,178): ':trackball:',
(174,184,152): ':tractor:',
(159,153,143): ':traffic_light:',
(199,204,207): ':train2:',
(187,148,168): ':train:',
(179,191,188): ':tram:',
(255,229,163): ':triangular_ruler:',
(255,229,164): ':trident:',
(232,199,117): ':triumph:',
(169,195,186): ':trolleybus:',
(250,209,137): ':trophy:',
(216,206,172): ':tropical_drink:',
(252,213,132): ':tropical_fish:',
(214,196,191): ':truck:',
(252,224,186): ':trumpet:',
(227,206,196): ':tulip:',
(221,211,193): ':tumbler_glass:',
(136,130,126): ':turkey:',
(190,211,177): ':turtle:',
(104,145,176): ':tv:',
(109,167,210): ':twisted_rightwards_arrows:',
(243,169,181): ':two_hearts:',
(210,205,166): ':two_men_holding_hands:',
(235,187,149): ':two_women_holding_hands:',
(231,107,122): ':u5272:',
(228,90,107): ':u5408:',
(247,174,77): ':u55b6:',
(247,171,71): ':u6709:',
(231,107,122): ':u6e80:',
(246,169,66): ':u7533:',
(231,105,121): ':u7981:',
(149,120,190): ':u7a7a:',
(233,192,91): ':unamused:',
(156,110,116): ':underage:',
(181,165,218): ':unicorn:',
(246,211,160): ':unlock:',
(110,167,211): ':up:',
(237,196,93): ':upside_down:',
(251,238,231): ':v::skin-tone-1:',
(248,233,211): ':v::skin-tone-2:',
(235,216,200): ':v::skin-tone-3:',
(218,195,178): ':v::skin-tone-4:',
(194,176,167): ':v::skin-tone-5:',
(159,152,143): ':vertical_traffic_light:',
(115,124,131): ':vhs:',
(248,180,91): ':vibration_mode:',
(130,137,141): ':video_camera:',
(147,144,139): ':video_game:',
(235,204,172): ':violin:',
(172,80,52): ':volcano:',
(202,212,219): ':volleyball:',
(248,184,100): ':vs:',
(254,231,155): ':vulcan:',
(223,228,216): ':walking:',
(216,224,223): ':walking::skin-tone-1:',
(222,229,220): ':walking::skin-tone-2:',
(217,222,218): ':walking::skin-tone-3:',
(213,219,216): ':walking::skin-tone-4:',
(207,215,213): ':walking::skin-tone-5:',
(138,150,158): ':waning_crescent_moon:',
(191,201,208): ':waning_gibbous_moon:',
(177,188,194): ':wastebasket:',
(144,148,150): ':water_buffalo:',
(225,177,172): ':watermelon:',
(243,224,138): ':wave:',
(237,226,207): ':wave::skin-tone-1:',
(235,218,180): ':wave::skin-tone-2:',
(216,194,164): ':wave::skin-tone-3:',
(192,166,134): ':wave::skin-tone-4:',
(160,139,118): ':wave::skin-tone-5:',
(137,149,158): ':waxing_crescent_moon:',
(190,200,207): ':waxing_gibbous_moon:',
(97,146,182): ':wc:',
(219,182,92): ':weary:',
(231,175,181): ':wedding:',
(211,215,220): ':weight_lifter::skin-tone-1:',
(217,219,215): ':weight_lifter::skin-tone-2:',
(211,210,212): ':weight_lifter::skin-tone-3:',
(205,205,207): ':weight_lifter::skin-tone-4:',
(196,198,203): ':weight_lifter::skin-tone-5:',
(156,196,227): ':whale2:',
(174,159,210): ':whale:',
(144,192,116): ':white_check_mark:',
(246,181,191): ':white_flower:',
(128,132,135): ':white_square_button:',
(242,234,221): ':white_sun_cloud:',
(231,230,221): ':white_sun_rain_cloud:',
(247,220,178): ':white_sun_small_cloud:',
(224,206,192): ':wilted_rose:',
(226,233,238): ':wind_blowing_face:',
(212,226,212): ':wind_chime:',
(222,203,209): ':wine_glass:',
(233,192,91): ':wink:',
(153,160,166): ':wolf:',
(251,198,105): ':woman:',
(156,144,141): ':woman::skin-tone-1:',
(247,219,118): ':woman::skin-tone-2:',
(192,133,111): ':woman::skin-tone-3:',
(156,117,99): ':woman::skin-tone-4:',
(100,80,72): ':woman::skin-tone-5:',
(199,179,228): ':womans_clothes:',
(248,232,208): ':womans_hat:',
(239,132,148): ':womens:',
(233,191,91): ':worried:',
(212,218,223): ':wrench:',
(228,223,223): ':writing_hand::skin-tone-1:',
(225,219,207): ':writing_hand::skin-tone-2:',
(213,204,196): ':writing_hand::skin-tone-3:',
(197,186,177): ':writing_hand::skin-tone-4:',
(176,168,167): ':writing_hand::skin-tone-5:',
(242,177,185): ':x:',
(254,217,133): ':yellow_heart:',
(222,210,178): ':yen:',
(235,188,92): ':yum:',
(232,197,106): ':zipper_mouth:',
(206,225,239): ':zzz:',
(112,168,211): ':1234:',
(104,109,112): ':8ball:',
(234,128,141): ':ab:',
(104,163,209): ':abc:',
(117,171,213): ':abcd:',
(248,182,94): ':accept:',
(193,176,191): ':aerial_tramway:',
(196,214,210): ':airplane_arriving:',
(196,214,210): ':airplane_departure:',
(225,179,187): ':airplane_small:',
(227,175,169): ':alarm_clock:',
(178,185,190): ':alien:',
(204,187,198): ':ambulance:',
(186,155,145): ':amphora:',
(232,213,162): ':angel:',
(188,188,191): ':angel::skin-tone-1:',
(229,221,175): ':angel::skin-tone-2:',
(201,178,170): ':angel::skin-tone-3:',
(180,166,159): ':angel::skin-tone-4:',
(147,143,143): ':angel::skin-tone-5:',
(236,187,194): ':anger:',
(206,212,216): ':anger_right:',
(233,192,92): ':angry:',
(233,192,91): ':anguished:',
(147,150,152): ':ant:',
(224,113,123): ':apple:',
(115,170,212): ':arrow_double_down:',
(114,170,212): ':arrow_double_up:',
(110,167,211): ':arrow_down_small:',
(110,167,211): ':arrow_up_small:',
(116,170,212): ':arrows_clockwise:',
(121,173,214): ':arrows_counterclockwise:',
(220,183,163): ':art:',
(177,196,156): ':articulated_lorry:',
(228,189,93): ':astonished:',
(206,228,245): ':athletic_shoe:',
(98,158,203): ':atm:',
(162,179,134): ':avocado:',
(242,211,129): ':baby:',
(236,212,200): ':baby::skin-tone-1:',
(233,205,173): ':baby::skin-tone-2:',
(213,180,156): ':baby::skin-tone-3:',
(188,151,124): ':baby::skin-tone-4:',
(147,120,107): ':baby::skin-tone-5:',
(211,220,225): ':baby_bottle:',
(253,224,160): ':baby_chick:',
(247,176,82): ':baby_symbol:',
(156,159,161): ':back:',
(249,233,223): ':back_of_hand::skin-tone-1:',
(247,225,195): ':back_of_hand::skin-tone-2:',
(228,201,179): ':back_of_hand::skin-tone-3:',
(204,173,149): ':back_of_hand::skin-tone-4:',
(172,146,133): ':back_of_hand::skin-tone-5:',
(247,200,200): ':bacon:',
(212,225,236): ':badminton:',
(145,178,204): ':baggage_claim:',
(236,189,192): ':balloon:',
(166,191,210): ':ballot_box:',
(177,200,147): ':bamboo:',
(254,238,201): ':banana:',
(180,190,197): ':bank:',
(192,185,196): ':bar_chart:',
(224,224,232): ':barber:',
(190,130,52): ':basketball:',
(182,190,196): ':bat:',
(226,229,228): ':bath:',
(220,225,229): ':bath::skin-tone-1:',
(226,230,228): ':bath::skin-tone-2:',
(223,225,228): ':bath::skin-tone-3:',
(221,224,227): ':bath::skin-tone-4:',
(218,222,226): ':bath::skin-tone-5:',
(226,231,235): ':bathtub:',
(194,219,182): ':battery:',
(177,175,191): ':beach:',
(235,193,187): ':beach_umbrella:',
(193,129,110): ':bear:',
(212,217,222): ':bed:',
(194,193,183): ':bee:',
(247,208,136): ':beer:',
(244,212,148): ':beers:',
(174,114,123): ':beetle:',
(188,224,196): ':beginner:',
(255,208,139): ':bell:',
(246,200,146): ':bellhop:',
(155,103,93): ':bento:',
(188,199,199): ':bicyclist:',
(187,198,207): ':bicyclist::skin-tone-1:',
(188,198,204): ':bicyclist::skin-tone-2:',
(185,195,202): ':bicyclist::skin-tone-3:',
(182,191,198): ':bicyclist::skin-tone-4:',
(177,188,196): ':bicyclist::skin-tone-5:',
(215,184,189): ':bike:',
(216,206,230): ':bikini:',
(233,144,149): ':bird:',
(188,150,138): ':birthday:',
(98,103,106): ':black_heart:',
(214,197,202): ':black_joker:',
(144,147,149): ':black_square_button:',
(225,226,208): ':blossom:',
(224,184,170): ':blowfish:',
(95,164,216): ':blue_book:',
(161,195,220): ':blue_car:',
(136,195,241): ':blue_heart:',
(238,185,103): ':blush:',
(173,110,93): ':boar:',
(128,129,128): ':bomb:',
(199,212,221): ':book:',
(213,182,192): ':bookmark:',
(189,195,210): ':bookmark_tabs:',
(185,179,180): ':books:',
(234,173,164): ':boom:',
(215,167,153): ':boot:',
(205,187,146): ':bouquet:',
(211,203,160): ':bow:',
(174,183,191): ':bow::skin-tone-1:',
(208,209,176): ':bow::skin-tone-2:',
(182,170,169): ':bow::skin-tone-3:',
(162,157,157): ':bow::skin-tone-4:',
(132,136,142): ':bow::skin-tone-5:',
(238,218,213): ':bow_and_arrow:',
(151,149,154): ':bowling:',
(210,120,134): ':boxing_glove:',
(249,206,116): ':boy:',
(187,173,166): ':boy::skin-tone-1:',
(244,218,141): ':boy::skin-tone-2:',
(202,154,130): ':boy::skin-tone-3:',
(169,132,111): ':boy::skin-tone-4:',
(118,97,87): ':boy::skin-tone-5:',
(238,203,172): ':bread:',
(247,210,141): ':bride_with_veil:',
(185,176,173): ':bride_with_veil::skin-tone-1:',
(243,223,155): ':bride_with_veil::skin-tone-2:',
(206,165,148): ':bride_with_veil::skin-tone-3:',
(180,151,137): ':bride_with_veil::skin-tone-4:',
(140,124,117): ':bride_with_veil::skin-tone-5:',
(106,84,87): ':bridge_at_night:',
(166,104,65): ':briefcase:',
(239,148,162): ':broken_heart:',
(210,194,231): ':bug:',
(248,233,198): ':bulb:',
(176,199,216): ':bullettrain_front:',
(159,182,199): ':bullettrain_side:',
(230,214,184): ':burrito:',
(159,175,188): ':bus:',
(227,208,206): ':busstop:',
(120,124,126): ':bust_in_silhouette:',
(131,138,142): ':busts_in_silhouette:',
(148,178,200): ':butterfly:',
(164,202,142): ':cactus:',
(244,222,198): ':cake:',
(185,159,167): ':calendar:',
(218,189,196): ':calendar_spiral:',
(254,232,166): ':call_me:',
(250,234,225): ':call_me_hand::skin-tone-1:',
(247,228,202): ':call_me_hand::skin-tone-2:',
(231,208,188): ':call_me_hand::skin-tone-3:',
(210,183,162): ':call_me_hand::skin-tone-4:',
(182,160,149): ':call_me_hand::skin-tone-5:',
(122,170,206): ':calling:',
(220,172,157): ':camel:',
(111,122,130): ':camera:',
(140,139,131): ':camera_with_flash:',
(158,170,124): ':camping:',
(245,235,218): ':candle:',
(239,172,181): ':candy:',
(201,202,218): ':canoe:',
(127,178,216): ':capital_abcd:',
(183,187,193): ':card_box:',
(165,182,194): ':card_index:',
(162,194,226): ':carousel_horse:',
(239,203,145): ':carrot:',
(225,189,179): ':cat2:',
(237,190,108): ':cat:',
(182,193,202): ':cd:',
(233,239,241): ':chains:',
(208,210,193): ':champagne:',
(224,212,218): ':champagne_glass:',
(160,201,137): ':chart:',
(183,208,227): ':chart_with_downwards_trend:',
(217,187,196): ':chart_with_upwards_trend:',
(188,193,196): ':checkered_flag:',
(254,199,116): ':cheese:',
(208,163,156): ':cherries:',
(247,194,196): ':cherry_blossom:',
(190,120,99): ':chestnut:',
(238,223,214): ':chicken:',
(179,161,113): ':children_crossing:',
(203,166,137): ':chipmunk:',
(221,167,169): ':chocolate_bar:',
(194,200,166): ':christmas_tree:',
(158,196,225): ':cinema:',
(213,187,194): ':circus_tent:',
(123,96,53): ':city_dusk:',
(171,143,82): ':city_sunset:',
(101,122,132): ':cityscape:',
(232,112,127): ':cl:',
(253,221,137): ':clap:',
(247,224,208): ':clap::skin-tone-1:',
(243,216,181): ':clap::skin-tone-2:',
(224,192,164): ':clap::skin-tone-3:',
(198,163,133): ':clap::skin-tone-4:',
(164,134,116): ':clap::skin-tone-5:',
(131,151,127): ':clapper:',
(184,193,198): ':classical_building:',
(217,206,204): ':clipboard:',
(202,212,218): ':clock1030:',
(202,212,218): ':clock10:',
(202,212,218): ':clock1130:',
(203,212,218): ':clock11:',
(202,212,218): ':clock1230:',
(204,214,220): ':clock12:',
(202,211,218): ':clock130:',
(203,212,218): ':clock1:',
(202,212,218): ':clock230:',
(202,212,218): ':clock2:',
(202,211,218): ':clock330:',
(202,212,218): ':clock3:',
(203,212,218): ':clock430:',
(202,212,218): ':clock4:',
(203,212,218): ':clock530:',
(202,212,218): ':clock5:',
(205,214,220): ':clock630:',
(202,212,218): ':clock6:',
(203,212,218): ':clock730:',
(202,212,218): ':clock7:',
(203,212,218): ':clock830:',
(202,212,218): ':clock8:',
(202,212,218): ':clock930:',
(202,212,218): ':clock9:',
(209,176,168): ':clock:',
(206,66,86): ':closed_book:',
(231,175,124): ':closed_lock_with_key:',
(202,188,223): ':closed_umbrella:',
(240,234,224): ':cloud_lightning:',
(223,235,244): ':cloud_rain:',
(221,234,244): ':cloud_snow:',
(184,193,199): ':cloud_tornado:',
(226,185,163): ':clown:',
(227,230,231): ':cocktail:',
(204,192,140): ':cold_sweat:',
(225,162,172): ':compression:',
(162,201,231): ':computer:',
(218,177,161): ':confetti_ball:',
(224,184,87): ':confounded:',
(242,200,96): ':confused:',
(189,181,155): ':construction:',
(155,154,190): ':construction_site:',
(243,224,159): ':construction_worker:',
(240,223,181): ':construction_worker::skin-tone-1:',
(240,222,172): ':construction_worker::skin-tone-2:',
(234,213,167): ':construction_worker::skin-tone-3:',
(224,204,157): ':construction_worker::skin-tone-4:',
(210,194,151): ':construction_worker::skin-tone-5:',
(150,151,156): ':control_knobs:',
(185,164,183): ':convenience_store:',
(216,169,148): ':cookie:',
(172,172,168): ':cooking:',
(97,159,207): ':cool:',
(193,196,174): ':cop:',
(185,193,197): ':cop::skin-tone-1:',
(191,196,188): ':cop::skin-tone-2:',
(181,183,183): ':cop::skin-tone-3:',
(171,173,173): ':cop::skin-tone-4:',
(155,161,165): ':cop::skin-tone-5:',
(197,194,119): ':corn:',
(177,197,147): ':couch:',
(226,194,156): ':couple:',
(217,184,149): ':couple_mm:',
(225,175,136): ':couple_with_heart:',
(231,168,127): ':couple_ww:',
(245,181,107): ':couplekiss:',
(174,178,181): ':cow2:',
(212,188,188): ':cow:',
(198,166,92): ':cowboy:',
(207,110,125): ':crab:',
(229,168,177): ':crayon:',
(209,175,124): ':credit_card:',
(240,243,245): ':crescent_moon:',
(228,196,187): ':cricket:',
(196,215,184): ':crocodile:',
(251,208,150): ':croissant:',
(224,215,218): ':crossed_flags:',
(241,213,140): ':crown:',
(153,191,220): ':cruise_ship:',
(220,190,105): ':cry:',
(202,180,110): ':crying_cat_face:',
(173,198,223): ':crystal_ball:',
(190,216,174): ':cucumber:',
(212,150,171): ':cupid:',
(196,198,199): ':curly_loop:',
(182,185,188): ':currency_exchange:',
(241,227,206): ':curry:',
(223,199,172): ':custard:',
(110,155,188): ':customs:',
(157,207,245): ':cyclone:',
(212,216,218): ':dagger:',
(246,191,178): ':dancer:',
(232,183,188): ':dancer::skin-tone-1:',
(245,193,183): ':dancer::skin-tone-2:',
(236,179,181): ':dancer::skin-tone-3:',
(229,175,177): ':dancer::skin-tone-4:',
(220,169,173): ':dancer::skin-tone-5:',
(211,188,137): ':dancers:',
(242,234,217): ':dango:',
(197,199,200): ':dark_sunglasses:',
(171,146,152): ':dart:',
(210,232,248): ':dash:',
(206,170,179): ':date:',
(139,171,114): ':deciduous_tree:',
(199,161,151): ':deer:',
(160,186,198): ':department_store:',
(173,190,141): ':desert:',
(157,199,232): ':desktop:',
(165,209,243): ':diamond_shape_with_a_dot_inside:',
(241,199,95): ':disappointed:',
(220,188,99): ':disappointed_relieved:',
(244,222,180): ':dividers:',
(255,232,195): ':dizzy:',
(223,183,86): ':dizzy_face:',
(152,105,111): ':do_not_litter:',
(227,199,187): ':dog2:',
(157,163,168): ':dog:',
(184,207,179): ':dollar:',
(164,159,149): ':dolls:',
(141,185,218): ':dolphin:',
(206,104,117): ':door:',
(198,156,128): ':doughnut:',
(215,224,221): ':dove:',
(168,202,148): ':dragon:',
(165,199,145): ':dragon_face:',
(156,205,243): ':dress:',
(223,178,164): ':dromedary_camel:',
(232,195,99): ':drooling_face:',
(179,216,246): ':droplet:',
(209,128,117): ':drum:',
(221,215,200): ':duck:',
(255,219,148): ':dvd:',
(208,216,222): ':e_mail:',
(200,201,195): ':eagle:',
(253,223,151): ':ear:',
(247,227,216): ':ear::skin-tone-1:',
(243,220,191): ':ear::skin-tone-2:',
(224,198,175): ':ear::skin-tone-3:',
(201,171,147): ':ear::skin-tone-4:',
(168,144,132): ':ear::skin-tone-5:',
(222,232,198): ':ear_of_rice:',
(136,187,177): ':earth_africa:',
(139,191,189): ':earth_americas:',
(136,187,176): ':earth_asia:',
(248,231,221): ':egg:',
(186,170,210): ':eggplant:',
(120,173,214): ':eject:',
(187,190,191): ':electric_plug:',
(169,183,191): ':elephant:',
(137,140,142): ':end:',
(192,213,229): ':envelope_with_arrow:',
(198,216,198): ':euro:',
(176,195,209): ':european_castle:',
(216,141,137): ':european_post_office:',
(165,188,146): ':evergreen_tree:',
(201,188,213): ':expecting_woman::skin-tone-1:',
(223,206,209): ':expecting_woman::skin-tone-2:',
(210,186,207): ':expecting_woman::skin-tone-3:',
(202,183,205): ':expecting_woman::skin-tone-4:',
(190,175,199): ':expecting_woman::skin-tone-5:',
(242,200,96): ':expressionless:',
(203,219,231): ':eye:',
(147,150,152): ':eye_in_speech_bubble:',
(177,191,202): ':eyeglasses:',
(230,233,236): ':eyes:',
(204,139,147): ':factory:',
(225,180,164): ':fallen_leaf:',
(214,174,108): ':family:',
(206,179,113): ':family_mmb:',
(220,185,112): ':family_mmbb:',
(216,176,101): ':family_mmg:',
(226,183,105): ':family_mmgb:',
(232,180,98): ':family_mmgg:',
(224,181,106): ':family_mwbb:',
(222,172,97): ':family_mwg:',
(230,178,98): ':family_mwgb:',
(235,177,91): ':family_mwgg:',
(223,168,103): ':family_wwb:',
(229,176,97): ':family_wwbb:',
(230,167,92): ':family_wwg:',
(234,174,90): ':family_wwgb:',
(238,172,83): ':family_wwgg:',
(115,170,212): ':fast_forward:',
(153,166,169): ':fax:',
(216,196,133): ':fearful:',
(211,190,186): ':feet:',
(231,235,237): ':fencer:',
(159,181,209): ':ferris_wheel:',
(173,190,215): ':ferry:',
(241,202,191): ':field_hockey:',
(142,150,155): ':file_cabinet:',
(112,176,224): ':file_folder:',
(176,189,199): ':film_frames:',
(254,237,184): ':fingers_crossed:',
(250,197,113): ':fire:',
(213,159,171): ':fire_engine:',
(127,154,150): ':fireworks:',
(195,206,205): ':first_place:',
(164,175,183): ':first_quarter_moon:',
(234,238,240): ':first_quarter_moon_with_face:',
(151,197,232): ':fish:',
(228,214,230): ':fish_cake:',
(186,201,216): ':fishing_pole_and_fish:',
(254,224,124): ':fist:',
(247,225,212): ':fist::skin-tone-1:',
(243,216,178): ':fist::skin-tone-2:',
(219,186,157): ':fist::skin-tone-3:',
(189,149,118): ':fist::skin-tone-4:',
(148,115,99): ':fist::skin-tone-5:',
(134,139,142): ':flag_black:',
(211,194,208): ':flags:',
(208,212,212): ':flashlight:',
(140,152,161): ':floppy_disk:',
(182,107,117): ':flower_playing_cards:',
(237,194,128): ':flushed:',
(228,200,192): ':fog:',
(151,174,187): ':foggy:',
(154,112,104): ':football:',
(180,147,140): ':footprints:',
(211,218,223): ':fork_and_knife:',
(206,215,222): ':fork_knife_plate:',
(165,204,143): ':four_leaf_clover:',
(233,174,108): ':fox:',
(176,166,173): ':frame_photo:',
(117,171,213): ':free:',
(248,227,203): ':french_bread:',
(254,191,125): ':fried_shrimp:',
(234,135,104): ':fries:',
(169,206,147): ':frog:',
(240,198,95): ':frowning:',
(208,217,223): ':full_moon:',
(203,213,219): ':full_moon_with_face:',
(232,145,158): ':game_die:',
(172,212,242): ':gem:',
(206,211,214): ':ghost:',
(242,167,132): ':gift:',
(245,175,144): ':gift_heart:',
(250,203,114): ':girl:',
(180,165,159): ':girl::skin-tone-1:',
(246,217,135): ':girl::skin-tone-2:',
(200,148,126): ':girl::skin-tone-3:',
(167,128,109): ':girl::skin-tone-4:',
(117,93,84): ':girl::skin-tone-5:',
(154,194,224): ':globe_with_meridians:',
(207,184,191): ':goal:',
(229,232,233): ':goat:',
(208,225,225): ':golfer::skin-tone-1:',
(208,225,223): ':golfer::skin-tone-2:',
(207,223,222): ':golfer::skin-tone-3:',
(205,222,220): ':golfer::skin-tone-4:',
(203,220,219): ':golfer::skin-tone-5:',
(108,115,120): ':gorilla:',
(205,191,187): ':grandma::skin-tone-1:',
(203,187,171): ':grandma::skin-tone-2:',
(193,173,162): ':grandma::skin-tone-3:',
(179,157,145): ':grandma::skin-tone-4:',
(158,141,135): ':grandma::skin-tone-5:',
(171,145,208): ':grapes:',
(157,195,132): ':green_apple:',
(122,170,96): ':green_book:',
(156,198,133): ':green_heart:',
(246,248,249): ':grey_exclamation:',
(240,243,245): ':grey_question:',
(226,189,100): ':grimacing:',
(228,190,98): ':grin:',
(225,187,96): ':grinning:',
(175,161,154): ':guardsman:',
(174,161,163): ':guardsman::skin-tone-1:',
(174,160,160): ':guardsman::skin-tone-2:',
(172,157,158): ':guardsman::skin-tone-3:',
(168,154,154): ':guardsman::skin-tone-4:',
(162,150,151): ':guardsman::skin-tone-5:',
(239,201,163): ':guitar:',
(212,207,208): ':gun:',
(224,183,128): ':haircut:',
(153,142,152): ':haircut::skin-tone-1:',
(221,199,136): ':haircut::skin-tone-2:',
(181,135,132): ':haircut::skin-tone-3:',
(155,124,124): ':haircut::skin-tone-4:',
(115,97,104): ':haircut::skin-tone-5:',
(214,167,139): ':hamburger:',
(234,220,201): ':hammer:',
(210,184,149): ':hammer_pick:',
(237,184,123): ':hamster:',
(255,232,158): ':hand_splayed:',
(251,239,232): ':hand_with_index_and_middle_finger_crossed::skin-tone-1:',
(249,234,213): ':hand_with_index_and_middle_finger_crossed::skin-tone-2:',
(236,218,202): ':hand_with_index_and_middle_finger_crossed::skin-tone-3:',
(219,198,181): ':hand_with_index_and_middle_finger_crossed::skin-tone-4:',
(197,180,171): ':hand_with_index_and_middle_finger_crossed::skin-tone-5:',
(182,147,204): ':handbag:',
(255,232,168): ':handshake:',
(252,218,142): ':hatched_chick:',
(245,219,159): ':hatching_chick:',
(235,204,129): ':head_bandage:',
(175,197,214): ':headphones:',
(195,132,111): ':hear_no_evil:',
(238,167,179): ':heart_decoration:',
(228,157,97): ':heart_eyes:',
(217,154,115): ':heart_eyes_cat:',
(240,152,160): ':heartbeat:',
(230,142,157): ':heartpulse:',
(194,196,197): ':heavy_division_sign:',
(196,198,199): ':heavy_dollar_sign:',
(215,216,217): ':heavy_minus_sign:',
(184,186,187): ':heavy_plus_sign:',
(207,207,180): ':helicopter:',
(231,153,160): ':helmet_with_cross:',
(204,218,187): ':herb:',
(232,174,171): ':hibiscus:',
(255,221,172): ':high_brightness:',
(221,173,180): ':high_heel:',
(212,202,187): ':hockey:',
(187,190,192): ':hole:',
(155,173,149): ':homes:',
(247,199,129): ':honey_pot:',
(198,142,125): ':horse:',
(214,188,174): ':horse_racing:',
(213,188,175): ':horse_racing::skin-tone-1:',
(214,188,175): ':horse_racing::skin-tone-2:',
(213,187,174): ':horse_racing::skin-tone-3:',
(213,187,174): ':horse_racing::skin-tone-4:',
(212,186,173): ':horse_racing::skin-tone-5:',
(170,200,227): ':hospital:',
(234,182,184): ':hot_pepper:',
(241,191,174): ':hotdog:',
(193,195,174): ':hotel:',
(228,220,198): ':hourglass_flowing_sand:',
(222,219,205): ':house:',
(204,172,160): ':house_abandoned:',
(200,206,173): ':house_with_garden:',
(237,183,81): ':hugging:',
(234,192,92): ':hushed:',
(186,196,209): ':ice_cream:',
(235,231,233): ':ice_skate:',
(255,236,195): ':icecream:',
(186,158,223): ':id:',
(240,162,171): ':ideograph_advantage:',
(165,139,205): ':imp:',
(203,184,157): ':inbox_tray:',
(222,230,235): ':incoming_envelope:',
(229,186,141): ':information_desk_person:',
(176,157,174): ':information_desk_person::skin-tone-1:',
(226,197,156): ':information_desk_person::skin-tone-2:',
(192,145,149): ':information_desk_person::skin-tone-3:',
(168,131,137): ':information_desk_person::skin-tone-4:',
(131,106,119): ':information_desk_person::skin-tone-5:',
(218,191,107): ':innocent:',
(130,171,202): ':iphone:',
(139,165,158): ':island:',
(198,67,85): ':izakaya_lantern:',
(205,139,74): ':jack_o_lantern:',
(125,188,216): ':japan:',
(183,180,180): ':japanese_castle:',
(202,129,134): ':japanese_goblin:',
(167,87,87): ':japanese_ogre:',
(144,187,219): ':jeans:',
(199,182,120): ':joy:',
(193,175,113): ':joy_cat:',
(186,175,179): ':joystick:',
(89,76,54): ':kaaba:',
(197,199,201): ':key2:',
(227,187,175): ':key:',
(132,180,216): ':keycap_ten:',
(127,177,215): ':kimono:',
(242,164,176): ':kiss:',
(245,187,123): ':kiss_mm:',
(242,173,98): ':kiss_ww:',
(240,198,95): ':kissing:',
(220,186,96): ':kissing_cat:',
(236,184,102): ':kissing_closed_eyes:',
(232,185,93): ':kissing_heart:',
(241,199,95): ':kissing_smiling_eyes:',
(187,183,142): ':kiwi:',
(219,222,224): ':knife:',
(178,188,194): ':koala:',
(94,157,206): ':koko:',
(252,228,182): ':label:',
(121,189,242): ':large_blue_circle:',
(153,205,245): ':large_blue_diamond:',
(255,205,133): ':large_orange_diamond:',
(165,176,184): ':last_quarter_moon:',
(233,237,240): ':last_quarter_moon_with_face:',
(221,184,94): ':laughing:',
(185,218,177): ':leaves:',
(234,187,94): ':ledger:',
(254,232,165): ':left_facing_fist:',
(249,234,225): ':left_fist::skin-tone-1:',
(247,228,202): ':left_fist::skin-tone-2:',
(230,207,188): ':left_fist::skin-tone-3:',
(210,183,162): ':left_fist::skin-tone-4:',
(182,160,149): ':left_fist::skin-tone-5:',
(139,175,201): ':left_luggage:',
(235,211,118): ':lemon:',
(252,227,175): ':leopard:',
(182,196,207): ':level_slider:',
(206,208,207): ':levitate:',
(178,176,169): ':light_rail:',
(206,213,218): ':link:',
(184,140,94): ':lion_face:',
(228,173,178): ':lips:',
(247,187,173): ':lipstick:',
(186,208,172): ':lizard:',
(245,210,159): ':lock:',
(200,185,160): ':lock_with_ink_pen:',
(247,186,139): ':lollipop:',
(120,161,193): ':loop:',
(210,218,223): ':loud_sound:',
(218,176,185): ':loudspeaker:',
(198,162,188): ':love_hotel:',
(220,194,203): ':love_letter:',
(255,228,190): ':low_brightness:',
(243,202,112): ':lying_face:',
(204,215,223): ':mag:',
(204,215,223): ':mag_right:',
(193,180,184): ':mailbox:',
(193,181,185): ':mailbox_closed:',
(204,188,192): ':mailbox_with_mail:',
(168,153,156): ':mailbox_with_no_mail:',
(197,192,194): ':male_dancer::skin-tone-1:',
(201,196,192): ':male_dancer::skin-tone-2:',
(198,191,191): ':male_dancer::skin-tone-3:',
(196,190,190): ':male_dancer::skin-tone-4:',
(192,187,188): ':male_dancer::skin-tone-5:',
(251,209,121): ':man:',
(181,171,165): ':man::skin-tone-1:',
(246,223,141): ':man::skin-tone-2:',
(202,155,133): ':man::skin-tone-3:',
(170,135,116): ':man::skin-tone-4:',
(120,101,92): ':man::skin-tone-5:',
(201,195,190): ':man_dancing:',
(206,208,209): ':man_in_business_suit_levitating::skin-tone-1:',
(206,207,208): ':man_in_business_suit_levitating::skin-tone-2:',
(204,205,206): ':man_in_business_suit_levitating::skin-tone-4:',
(202,204,206): ':man_in_business_suit_levitating::skin-tone-5:',
(202,190,162): ':man_in_tuxedo:',
(182,179,178): ':man_in_tuxedo::skin-tone-1:',
(200,193,170): ':man_in_tuxedo::skin-tone-2:',
(186,173,167): ':man_in_tuxedo::skin-tone-3:',
(175,166,160): ':man_in_tuxedo::skin-tone-4:',
(159,154,152): ':man_in_tuxedo::skin-tone-5:',
(218,191,142): ':man_with_gua_pi_mao:',
(215,192,178): ':man_with_gua_pi_mao::skin-tone-1:',
(214,188,164): ':man_with_gua_pi_mao::skin-tone-2:',
(204,176,156): ':man_with_gua_pi_mao::skin-tone-3:',
(191,162,140): ':man_with_gua_pi_mao::skin-tone-4:',
(170,145,131): ':man_with_gua_pi_mao::skin-tone-5:',
(229,222,179): ':man_with_turban:',
(208,211,196): ':man_with_turban::skin-tone-1:',
(227,226,188): ':man_with_turban::skin-tone-2:',
(213,204,184): ':man_with_turban::skin-tone-3:',
(202,197,177): ':man_with_turban::skin-tone-4:',
(184,184,169): ':man_with_turban::skin-tone-5:',
(218,194,187): ':mans_shoe:',
(154,199,204): ':map:',
(236,139,151): ':maple_leaf:',
(194,201,206): ':martial_arts_uniform:',
(245,219,154): ':mask:',
(235,196,137): ':massage:',
(187,165,172): ':massage::skin-tone-1:',
(230,203,161): ':massage::skin-tone-2:',
(199,156,151): ':massage::skin-tone-3:',
(173,137,133): ':massage::skin-tone-4:',
(135,109,115): ':massage::skin-tone-5:',
(224,183,165): ':meat_on_bone:',
(197,214,215): ':medal:',
(153,198,232): ':mega:',
(180,215,158): ':melon:',
(192,166,225): ':menorah:',
(86,138,177): ':mens:',
(254,236,177): ':metal:',
(121,111,120): ':metro:',
(171,178,183): ':microphone2:',
(175,189,199): ':microphone:',
(239,212,172): ':microscope:',
(255,238,179): ':middle_finger:',
(236,206,195): ':military_medal:',
(223,228,232): ':milk:',
(103,79,145): ':milky_way:',
(170,194,212): ':minibus:',
(193,171,125): ':minidisc:',
(248,183,98): ':mobile_phone_off:',
(219,189,97): ':money_mouth:',
(203,220,200): ':money_with_wings:',
(235,214,172): ':moneybag:',
(217,168,154): ':monkey:',
(198,140,121): ':monkey_face:',
(179,196,194): ':monorail:',
(139,138,134): ':mortar_board:',
(238,195,134): ':mosque:',
(239,205,203): ':mother_christmas::skin-tone-1:',
(238,201,191): ':mother_christmas::skin-tone-2:',
(230,191,184): ':mother_christmas::skin-tone-3:',
(219,179,171): ':mother_christmas::skin-tone-4:',
(203,166,163): ':mother_christmas::skin-tone-5:',
(224,185,190): ':motor_scooter:',
(194,213,221): ':motorboat:',
(191,204,189): ':motorcycle:',
(134,172,162): ':motorway:',
(125,161,188): ':mount_fuji:',
(87,121,134): ':mountain:',
(142,174,137): ':mountain_bicyclist:',
(140,174,146): ':mountain_bicyclist::skin-tone-1:',
(138,170,141): ':mountain_bicyclist::skin-tone-3:',
(135,166,137): ':mountain_bicyclist::skin-tone-4:',
(131,163,135): ':mountain_bicyclist::skin-tone-5:',
(162,188,200): ':mountain_cableway:',
(188,198,177): ':mountain_railway:',
(108,141,153): ':mountain_snow:',
(237,232,236): ':mouse2:',
(184,182,190): ':mouse:',
(179,189,196): ':mouse_three_button:',
(143,149,152): ':movie_camera:',
(189,198,204): ':moyai:',
(241,204,172): ':mrs_claus:',
(254,231,155): ':muscle:',
(249,233,223): ':muscle::skin-tone-1:',
(246,226,196): ':muscle::skin-tone-2:',
(228,202,181): ':muscle::skin-tone-3:',
(205,175,151): ':muscle::skin-tone-4:',
(173,149,136): ':muscle::skin-tone-5:',
(224,153,165): ':mushroom:',
(116,123,127): ':musical_keyboard:',
(180,217,246): ':musical_note:',
(165,173,178): ':musical_score:',
(225,209,215): ':mute:',
(238,195,181): ':nail_care:',
(237,195,199): ':nail_care::skin-tone-1:',
(236,194,192): ':nail_care::skin-tone-2:',
(232,187,188): ':nail_care::skin-tone-3:',
(226,180,180): ':nail_care::skin-tone-4:',
(218,174,176): ':nail_care::skin-tone-5:',
(235,147,159): ':name_badge:',
(129,175,103): ':nauseated_face:',
(142,141,182): ':necktie:',
(160,201,137): ':negative_squared_cross_mark:',
(203,172,96): ':nerd:',
(239,198,95): ':neutral_face:',
(112,168,211): ':new:',
(122,135,144): ':new_moon:',
(120,132,140): ':new_moon_with_face:',
(214,224,231): ':newspaper2:',
(171,192,207): ':newspaper:',
(123,175,215): ':ng:',
(66,86,94): ':night_with_stars:',
(251,191,136): ':no_bell:',
(156,109,115): ':no_bicycles:',
(237,143,155): ':no_entry_sign:',
(213,173,155): ':no_good:',
(169,149,179): ':no_good::skin-tone-1:',
(211,182,166): ':no_good::skin-tone-2:',
(183,140,161): ':no_good::skin-tone-3:',
(164,129,152): ':no_good::skin-tone-4:',
(134,109,138): ':no_good::skin-tone-5:',
(134,87,93): ':no_mobile_phones:',
(245,203,98): ':no_mouth:',
(132,85,91): ':no_pedestrians:',
(133,87,93): ':no_smoking:',
(142,95,101): ':non_potable_water:',
(252,232,171): ':nose:',
(247,233,225): ':nose::skin-tone-1:',
(245,227,204): ':nose::skin-tone-2:',
(231,209,191): ':nose::skin-tone-3:',
(212,187,168): ':nose::skin-tone-4:',
(185,165,155): ':nose::skin-tone-5:',
(140,149,155): ':notebook:',
(212,161,142): ':notebook_with_decorative_cover:',
(186,200,209): ':notepad_spiral:',
(191,223,247): ':notes:',
(219,224,227): ':nut_and_bolt:',
(146,197,235): ':ocean:',
(224,115,130): ':octagonal_sign:',
(170,140,211): ':octopus:',
(226,219,216): ':oden:',
(187,202,209): ':office:',
(120,169,207): ':oil:',
(119,172,213): ':ok:',
(254,235,174): ':ok_hand:',
(250,236,228): ':ok_hand::skin-tone-1:',
(248,231,207): ':ok_hand::skin-tone-2:',
(233,212,194): ':ok_hand::skin-tone-3:',
(214,190,171): ':ok_hand::skin-tone-4:',
(189,169,159): ':ok_hand::skin-tone-5:',
(210,164,139): ':ok_woman:',
(171,143,175): ':ok_woman::skin-tone-1:',
(206,171,157): ':ok_woman::skin-tone-2:',
(178,128,150): ':ok_woman::skin-tone-3:',
(156,113,135): ':ok_woman::skin-tone-4:',
(123,89,119): ':ok_woman::skin-tone-5:',
(246,220,140): ':older_man:',
(241,221,210): ':older_man::skin-tone-1:',
(239,214,183): ':older_man::skin-tone-2:',
(220,190,167): ':older_man::skin-tone-3:',
(195,162,136): ':older_man::skin-tone-4:',
(158,132,119): ':older_man::skin-tone-5:',
(208,190,147): ':older_woman:',
(179,149,220): ':om_symbol:',
(137,141,143): ':on:',
(141,181,206): ':oncoming_automobile:',
(177,184,177): ':oncoming_bus:',
(148,167,184): ':oncoming_police_car:',
(209,200,157): ':oncoming_taxi:',
(106,169,217): ':open_file_folder:',
(254,236,175): ':open_hands:',
(250,237,229): ':open_hands::skin-tone-1:',
(248,231,208): ':open_hands::skin-tone-2:',
(234,213,196): ':open_hands::skin-tone-3:',
(215,191,172): ':open_hands::skin-tone-4:',
(190,170,160): ':open_hands::skin-tone-5:',
(235,194,92): ':open_mouth:',
(176,143,218): ':ophiuchus:',
(247,174,70): ':orange_book:',
(221,162,155): ':outbox_tray:',
(204,155,129): ':owl:',
(219,171,158): ':ox:',
(208,175,164): ':package:',
(198,208,215): ':page_facing_up:',
(206,215,222): ':page_with_curl:',
(165,182,157): ':pager:',
(206,216,228): ':paintbrush:',
(200,203,174): ':palm_tree:',
(238,195,137): ':pancakes:',
(192,193,194): ':panda_face:',
(225,230,233): ':paperclip:',
(209,216,221): ':paperclips:',
(99,145,143): ':park:',
(115,158,190): ':passport_control:',
(108,166,210): ':pause_button:',
(236,155,128): ':peach:',
(230,192,177): ':peanuts:',
(207,228,190): ':pear:',
(195,191,180): ':pen_ballpoint:',
(181,184,187): ':pen_fountain:',
(203,198,183): ':pencil:',
(200,197,191): ':penguin:',
(236,195,93): ':pensive:',
(201,207,204): ':performing_arts:',
(226,186,88): ':persevere:',
(236,195,143): ':person_frowning:',
(175,160,171): ':person_frowning::skin-tone-1:',
(233,208,155): ':person_frowning::skin-tone-2:',
(197,151,149): ':person_frowning::skin-tone-3:',
(172,138,139): ':person_frowning::skin-tone-4:',
(134,113,121): ':person_frowning::skin-tone-5:',
(215,219,223): ':person_with_ball::skin-tone-1:',
(222,225,218): ':person_with_ball::skin-tone-2:',
(216,215,215): ':person_with_ball::skin-tone-3:',
(210,210,210): ':person_with_ball::skin-tone-4:',
(202,204,206): ':person_with_ball::skin-tone-5:',
(250,225,129): ':person_with_blond_hair:',
(247,226,173): ':person_with_blond_hair::skin-tone-1:',
(245,222,157): ':person_with_blond_hair::skin-tone-2:',
(234,207,147): ':person_with_blond_hair::skin-tone-3:',
(217,190,128): ':person_with_blond_hair::skin-tone-4:',
(193,171,117): ':person_with_blond_hair::skin-tone-5:',
(236,196,144): ':person_with_pouting_face:',
(175,162,172): ':person_with_pouting_face::skin-tone-1:',
(233,209,155): ':person_with_pouting_face::skin-tone-2:',
(197,152,150): ':person_with_pouting_face::skin-tone-3:',
(171,139,140): ':person_with_pouting_face::skin-tone-4:',
(133,113,121): ':person_with_pouting_face::skin-tone-5:',
(234,217,193): ':pick:',
(248,208,216): ':pig2:',
(233,154,168): ':pig:',
(221,184,182): ':pig_nose:',
(233,166,137): ':pill:',
(225,220,170): ':pineapple:',
(235,146,151): ':ping_pong:',
(240,186,152): ':pizza:',
(180,150,220): ':place_of_worship:',
(124,175,215): ':play_pause:',
(255,238,176): ':point_down:',
(251,239,231): ':point_down::skin-tone-1:',
(249,233,210): ':point_down::skin-tone-2:',
(235,214,197): ':point_down::skin-tone-3:',
(216,192,173): ':point_down::skin-tone-4:',
(191,171,161): ':point_down::skin-tone-5:',
(255,238,177): ':point_left:',
(251,239,231): ':point_left::skin-tone-1:',
(249,233,210): ':point_left::skin-tone-2:',
(235,214,197): ':point_left::skin-tone-3:',
(216,193,174): ':point_left::skin-tone-4:',
(192,172,162): ':point_left::skin-tone-5:',
(255,238,176): ':point_right:',
(251,239,231): ':point_right::skin-tone-1:',
(249,233,210): ':point_right::skin-tone-2:',
(234,214,197): ':point_right::skin-tone-3:',
(216,192,173): ':point_right::skin-tone-4:',
(191,171,161): ':point_right::skin-tone-5:',
(251,240,233): ':point_up::skin-tone-1:',
(249,235,216): ':point_up::skin-tone-2:',
(237,220,205): ':point_up::skin-tone-3:',
(221,201,186): ':point_up::skin-tone-4:',
(201,184,176): ':point_up::skin-tone-5:',
(255,238,177): ':point_up_2:',
(251,239,231): ':point_up_2::skin-tone-1:',
(249,233,210): ':point_up_2::skin-tone-2:',
(235,215,198): ':point_up_2::skin-tone-3:',
(216,193,174): ':point_up_2::skin-tone-4:',
(192,172,162): ':point_up_2::skin-tone-5:',
(195,205,213): ':police_car:',
(198,206,212): ':poodle:',
(199,150,138): ':poop:',
(235,191,179): ':popcorn:',
(185,195,211): ':post_office:',
(251,216,179): ':postal_horn:',
(209,146,157): ':postbox:',
(99,147,183): ':potable_water:',
(229,188,170): ':potato:',
(202,225,182): ':pouch:',
(231,195,180): ':poultry_leg:',
(199,196,220): ':pound:',
(218,184,96): ':pouting_cat:',
(230,220,166): ':pray:',
(226,220,217): ':pray::skin-tone-1:',
(223,215,198): ':pray::skin-tone-2:',
(210,197,186): ':pray::skin-tone-3:',
(191,175,163): ':pray::skin-tone-4:',
(167,155,151): ':pray::skin-tone-5:',
(200,190,210): ':prayer_beads:',
(224,201,206): ':pregnant_woman:',
(239,213,152): ':prince:',
(212,200,195): ':prince::skin-tone-1:',
(234,216,176): ':prince::skin-tone-2:',
(211,182,166): ':prince::skin-tone-3:',
(189,165,148): ':prince::skin-tone-4:',
(155,140,133): ':prince::skin-tone-5:',
(246,198,112): ':princess:',
(162,151,147): ':princess::skin-tone-1:',
(242,217,125): ':princess::skin-tone-2:',
(193,139,119): ':princess::skin-tone-3:',
(160,123,107): ':princess::skin-tone-4:',
(108,89,82): ':princess::skin-tone-5:',
(158,189,212): ':printer:',
(128,133,137): ':projector:',
(252,220,146): ':punch:',
(246,225,213): ':punch::skin-tone-1:',
(242,217,187): ':punch::skin-tone-2:',
(223,195,171): ':punch::skin-tone-3:',
(198,166,141): ':punch::skin-tone-4:',
(164,138,126): ':punch::skin-tone-5:',
(193,172,225): ':purple_heart:',
(235,119,136): ':purse:',
(232,160,170): ':pushpin:',
(121,173,214): ':put_litter_in_its_place:',
(236,188,195): ':question:',
(200,205,212): ':rabbit2:',
(212,210,216): ':rabbit:',
(206,199,184): ':race_car:',
(223,190,180): ':racehorse:',
(152,163,170): ':radio:',
(97,146,183): ':radio_button:',
(198,77,94): ':rage:',
(201,210,182): ':railway_car:',
(143,182,182): ':railway_track:',
(161,152,144): ':rainbow:',
(255,231,152): ':raised_back_of_hand:',
(254,231,152): ':raised_hand:',
(249,233,222): ':raised_hand::skin-tone-1:',
(246,225,195): ':raised_hand::skin-tone-2:',
(228,201,179): ':raised_hand::skin-tone-3:',
(204,173,148): ':raised_hand::skin-tone-4:',
(171,146,133): ':raised_hand::skin-tone-5:',
(250,234,224): ':raised_hand_with_fingers_splayed::skin-tone-1:',
(247,227,199): ':raised_hand_with_fingers_splayed::skin-tone-2:',
(229,204,183): ':raised_hand_with_fingers_splayed::skin-tone-3:',
(207,177,154): ':raised_hand_with_fingers_splayed::skin-tone-4:',
(176,152,140): ':raised_hand_with_fingers_splayed::skin-tone-5:',
(249,233,223): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-1:',
(246,226,197): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-2:',
(228,202,181): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-3:',
(205,175,152): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-4:',
(174,149,136): ':raised_hand_with_part_between_middle_and_ring_fingers::skin-tone-5:',
(255,233,162): ':raised_hands:',
(251,234,221): ':raised_hands::skin-tone-1:',
(248,228,198): ':raised_hands::skin-tone-2:',
(233,207,184): ':raised_hands::skin-tone-3:',
(212,183,158): ':raised_hands::skin-tone-4:',
(185,161,145): ':raised_hands::skin-tone-5:',
(223,179,143): ':raising_hand:',
(177,155,176): ':raising_hand::skin-tone-1:',
(219,188,159): ':raising_hand::skin-tone-2:',
(189,142,152): ':raising_hand::skin-tone-3:',
(166,128,140): ':raising_hand::skin-tone-4:',
(132,104,123): ':raising_hand::skin-tone-5:',
(229,227,214): ':ram:',
(228,193,157): ':ramen:',
(184,184,189): ':rat:',
(135,182,218): ':record_button:',
(226,177,186): ':red_car:',
(228,90,107): ':red_circle:',
(233,192,92): ':relieved:',
(171,205,231): ':reminder_ribbon:',
(116,170,212): ':repeat:',
(116,170,212): ':repeat_one:',
(144,188,221): ':restroom:',
(251,239,231): ':reversed_hand_with_middle_finger_extended::skin-tone-1:',
(249,233,211): ':reversed_hand_with_middle_finger_extended::skin-tone-2:',
(235,215,199): ':reversed_hand_with_middle_finger_extended::skin-tone-3:',
(217,194,176): ':reversed_hand_with_middle_finger_extended::skin-tone-4:',
(193,175,165): ':reversed_hand_with_middle_finger_extended::skin-tone-5:',
(241,162,175): ':revolving_hearts:',
(115,170,212): ':rewind:',
(182,192,200): ':rhino:',
(226,104,120): ':ribbon:',
(229,189,196): ':rice:',
(201,203,204): ':rice_ball:',
(173,125,112): ':rice_cracker:',
(109,148,143): ':rice_scene:',
(254,232,165): ':right_facing_fist:',
(249,234,225): ':right_fist::skin-tone-1:',
(247,228,202): ':right_fist::skin-tone-2:',
(230,207,187): ':right_fist::skin-tone-3:',
(209,183,161): ':right_fist::skin-tone-4:',
(181,159,148): ':right_fist::skin-tone-5:',
(208,222,233): ':ring:',
(152,184,211): ':robot:',
(181,173,189): ':rocket:',
(212,178,95): ':rofl:',
(153,159,190): ':roller_coaster:',
(243,208,123): ':rolling_eyes:',
(237,230,229): ':rooster:',
(210,159,159): ':rose:',
(132,173,223): ':rosette:',
(218,137,150): ':rotating_light:',
(238,207,212): ':round_pushpin:',
(195,200,193): ':rowboat:',
(192,198,194): ':rowboat::skin-tone-1:',
(194,200,193): ':rowboat::skin-tone-2:',
(193,198,193): ':rowboat::skin-tone-3:',
(192,197,192): ':rowboat::skin-tone-4:',
(190,196,192): ':rowboat::skin-tone-5:',
(196,216,231): ':rugby_football:',
(219,225,210): ':runner:',
(211,221,218): ':runner::skin-tone-1:',
(218,226,214): ':runner::skin-tone-2:',
(213,218,213): ':runner::skin-tone-3:',
(208,215,210): ':runner::skin-tone-4:',
(202,210,207): ':runner::skin-tone-5:',
(150,193,215): ':running_shirt_with_sash:',
(189,219,243): ':sake:',
(188,199,174): ':salad:',
(237,202,199): ':sandal:',
(237,187,158): ':santa:',
(235,187,187): ':santa::skin-tone-1:',
(234,184,176): ':santa::skin-tone-2:',
(226,174,169): ':santa::skin-tone-3:',
(215,162,156): ':santa::skin-tone-4:',
(199,149,149): ':santa::skin-tone-5:',
(191,194,192): ':satellite:',
(208,223,232): ':satellite_orbital:',
(250,226,189): ':saxophone:',
(197,198,180): ':school:',
(187,73,83): ':school_satchel:',
(220,225,236): ':scooter:',
(198,181,172): ':scorpion:',
(217,198,151): ':scream:',
(225,192,117): ':scream_cat:',
(240,198,143): ':scroll:',
(154,188,214): ':seat:',
(184,213,235): ':second_place:',
(202,136,115): ':see_no_evil:',
(216,233,206): ':seedling:',
(211,221,200): ':selfie:',
(210,221,208): ':selfie::skin-tone-1:',
(210,221,205): ':selfie::skin-tone-2:',
(208,218,203): ':selfie::skin-tone-3:',
(206,215,200): ':selfie::skin-tone-4:',
(202,212,198): ':selfie::skin-tone-5:',
(183,145,89): ':shallow_pan_of_food:',
(161,170,176): ':shark:',
(204,193,218): ':shaved_ice:',
(237,234,221): ':sheep:',
(207,215,220): ':shell:',
(119,171,211): ':shield:',
(210,148,156): ':shinto_shrine:',
(188,192,204): ':ship:',
(107,166,212): ':shirt:',
(194,124,152): ':shopping_bags:',
(185,192,196): ':shopping_cart:',
(189,211,227): ':shower:',
(215,181,171): ':shrimp:',
(250,238,230): ':sign_of_the_horns::skin-tone-1:',
(248,232,209): ':sign_of_the_horns::skin-tone-2:',
(234,214,197): ':sign_of_the_horns::skin-tone-3:',
(216,193,175): ':sign_of_the_horns::skin-tone-4:',
(192,172,163): ':sign_of_the_horns::skin-tone-5:',
(112,168,211): ':signal_strength:',
(193,171,226): ':six_pointed_star:',
(196,205,223): ':ski:',
(194,204,221): ':skier:',
(183,191,196): ':skull:',
(221,189,101): ':sleeping:',
(201,207,212): ':sleeping_accommodation:',
(227,192,98): ':sleepy:',
(148,154,159): ':sleuth_or_spy::skin-tone-1:',
(149,155,156): ':sleuth_or_spy::skin-tone-2:',
(147,152,155): ':sleuth_or_spy::skin-tone-3:',
(145,149,152): ':sleuth_or_spy::skin-tone-4:',
(141,147,151): ':sleuth_or_spy::skin-tone-5:',
(239,197,94): ':slight_frown:',
(239,197,94): ':slight_smile:',
(173,160,165): ':slot_machine:',
(212,234,251): ':small_blue_diamond:',
(255,234,203): ':small_orange_diamond:',
(248,209,216): ':small_red_triangle:',
(248,209,215): ':small_red_triangle_down:',
(227,189,97): ':smile:',
(206,178,102): ':smile_cat:',
(222,184,94): ':smiley:',
(213,183,103): ':smiley_cat:',
(166,141,206): ':smiling_imp:',
(235,194,92): ':smirk:',
(219,185,96): ':smirk_cat:',
(235,232,231): ':smoking:',
(236,200,169): ':snail:',
(184,212,164): ':snake:',
(230,199,121): ':sneezing_face:',
(201,197,208): ':snowboarder:',
(202,183,118): ':sob:',
(231,107,122): ':sos:',
(220,226,230): ':sound:',
(165,151,191): ':space_invader:',
(235,203,181): ':spaghetti:',
(119,152,155): ':sparkler:',
(224,215,234): ':sparkles:',
(239,143,145): ':sparkling_heart:',
(201,140,120): ':speak_no_evil:',
(223,229,233): ':speaker:',
(130,130,128): ':speaking_head:',
(200,225,243): ':speech_balloon:',
(183,191,203): ':speedboat:',
(170,173,174): ':spider:',
(196,202,206): ':spider_web:',
(228,232,235): ':spoon:',
(237,182,192): ':squid:',
(144,174,183): ':stadium:',
(255,211,145): ':star2:',
(138,146,126): ':stars:',
(171,181,178): ':station:',
(58,159,159): ':statue_of_liberty:',
(163,126,127): ':steam_locomotive:',
(176,172,145): ':stew:',
(152,193,224): ':stop_button:',
(198,198,202): ':stopwatch:',
(239,218,166): ':straight_ruler:',
(209,123,128): ':strawberry:',
(234,181,98): ':stuck_out_tongue:',
(229,177,95): ':stuck_out_tongue_closed_eyes:',
(234,187,118): ':stuck_out_tongue_winking_eye:',
(218,186,143): ':stuffed_flatbread:',
(251,198,103): ':sun_with_face:',
(201,168,105): ':sunflower:',
(193,164,91): ':sunglasses:',
(176,169,124): ':sunrise:',
(192,177,81): ':sunrise_over_mountains:',
(181,197,192): ':surfer:',
(175,194,198): ':surfer::skin-tone-1:',
(181,198,195): ':surfer::skin-tone-2:',
(176,192,194): ':surfer::skin-tone-3:',
(173,190,192): ':surfer::skin-tone-4:',
(168,186,190): ':surfer::skin-tone-5:',
(199,134,138): ':sushi:',
(152,146,130): ':suspension_railway:',
(229,197,107): ':sweat:',
(158,206,244): ':sweat_drops:',
(214,186,108): ':sweat_smile:',
(238,180,161): ':sweet_potato:',
(190,211,197): ':swimmer:',
(187,211,231): ':swimmer::skin-tone-1:',
(186,208,217): ':swimmer::skin-tone-2:',
(177,196,210): ':swimmer::skin-tone-3:',
(165,182,195): ':swimmer::skin-tone-4:',
(150,169,187): ':swimmer::skin-tone-5:',
(128,178,216): ':symbols:',
(182,182,176): ':synagogue:',
(235,202,208): ':syringe:',
(248,204,139): ':taco:',
(214,153,156): ':tada:',
(200,192,168): ':tanabata_tree:',
(235,165,66): ':tangerine:',
(227,215,183): ':taxi:',
(169,196,161): ':tea:',
(174,177,178): ':telephone_receiver:',
(214,183,190): ':telescope:',
(165,206,141): ':tennis:',
(227,206,210): ':thermometer:',
(229,189,101): ':thermometer_face:',
(240,195,104): ':thinking:',
(192,199,207): ':third_place:',
(213,233,248): ':thought_balloon:',
(245,223,211): ':thumbdown::skin-tone-1:',
(241,215,184): ':thumbdown::skin-tone-2:',
(221,192,167): ':thumbdown::skin-tone-3:',
(195,162,136): ':thumbdown::skin-tone-4:',
(159,133,120): ':thumbdown::skin-tone-5:',
(252,218,141): ':thumbsdown:',
(252,219,143): ':thumbsup:',
(246,224,212): ':thumbup::skin-tone-1:',
(241,216,186): ':thumbup::skin-tone-2:',
(222,193,169): ':thumbup::skin-tone-3:',
(196,164,139): ':thumbup::skin-tone-4:',
(162,135,123): ':thumbup::skin-tone-5:',
(226,227,222): ':thunder_cloud_rain:',
(219,226,230): ':ticket:',
(231,141,154): ':tickets:',
(236,208,166): ':tiger2:',
(203,180,128): ':tiger:',
(195,197,202): ':timer:',
(218,180,88): ':tired_face:',
(220,226,231): ':toilet:',
(155,181,223): ':tokyo_tower:',
(221,102,111): ':tomato:',
(231,161,169): ':tongue:',
(206,195,177): ':tools:',
(169,171,173): ':top:',
(100,97,117): ':tophat:',
(129,179,217): ':track_next:',
(129,179,217): ':track_previous:',
(153,149,178): ':trackball:',
(174,184,152): ':tractor:',
(159,153,143): ':traffic_light:',
(199,204,207): ':train2:',
(187,148,168): ':train:',
(179,191,188): ':tram:',
(255,229,163): ':triangular_ruler:',
(255,229,164): ':trident:',
(232,199,117): ':triumph:',
(169,195,186): ':trolleybus:',
(250,209,137): ':trophy:',
(216,206,172): ':tropical_drink:',
(252,213,132): ':tropical_fish:',
(214,196,191): ':truck:',
(252,224,186): ':trumpet:',
(227,206,196): ':tulip:',
(221,211,193): ':tumbler_glass:',
(136,130,126): ':turkey:',
(190,211,177): ':turtle:',
(104,145,176): ':tv:',
(109,167,210): ':twisted_rightwards_arrows:',
(243,169,181): ':two_hearts:',
(210,205,166): ':two_men_holding_hands:',
(235,187,149): ':two_women_holding_hands:',
(231,107,122): ':u5272:',
(228,90,107): ':u5408:',
(247,174,77): ':u55b6:',
(247,171,71): ':u6709:',
(231,107,122): ':u6e80:',
(246,169,66): ':u7533:',
(231,105,121): ':u7981:',
(149,120,190): ':u7a7a:',
(233,192,91): ':unamused:',
(156,110,116): ':underage:',
(181,165,218): ':unicorn:',
(246,211,160): ':unlock:',
(110,167,211): ':up:',
(237,196,93): ':upside_down:',
(251,238,231): ':v::skin-tone-1:',
(248,233,211): ':v::skin-tone-2:',
(235,216,200): ':v::skin-tone-3:',
(218,195,178): ':v::skin-tone-4:',
(194,176,167): ':v::skin-tone-5:',
(159,152,143): ':vertical_traffic_light:',
(115,124,131): ':vhs:',
(248,180,91): ':vibration_mode:',
(130,137,141): ':video_camera:',
(147,144,139): ':video_game:',
(235,204,172): ':violin:',
(172,80,52): ':volcano:',
(202,212,219): ':volleyball:',
(248,184,100): ':vs:',
(254,231,155): ':vulcan:',
(223,228,216): ':walking:',
(216,224,223): ':walking::skin-tone-1:',
(222,229,220): ':walking::skin-tone-2:',
(217,222,218): ':walking::skin-tone-3:',
(213,219,216): ':walking::skin-tone-4:',
(207,215,213): ':walking::skin-tone-5:',
(138,150,158): ':waning_crescent_moon:',
(191,201,208): ':waning_gibbous_moon:',
(177,188,194): ':wastebasket:',
(144,148,150): ':water_buffalo:',
(225,177,172): ':watermelon:',
(243,224,138): ':wave:',
(237,226,207): ':wave::skin-tone-1:',
(235,218,180): ':wave::skin-tone-2:',
(216,194,164): ':wave::skin-tone-3:',
(192,166,134): ':wave::skin-tone-4:',
(160,139,118): ':wave::skin-tone-5:',
(137,149,158): ':waxing_crescent_moon:',
(190,200,207): ':waxing_gibbous_moon:',
(97,146,182): ':wc:',
(219,182,92): ':weary:',
(231,175,181): ':wedding:',
(211,215,220): ':weight_lifter::skin-tone-1:',
(217,219,215): ':weight_lifter::skin-tone-2:',
(211,210,212): ':weight_lifter::skin-tone-3:',
(205,205,207): ':weight_lifter::skin-tone-4:',
(196,198,203): ':weight_lifter::skin-tone-5:',
(156,196,227): ':whale2:',
(174,159,210): ':whale:',
(144,192,116): ':white_check_mark:',
(246,181,191): ':white_flower:',
(128,132,135): ':white_square_button:',
(242,234,221): ':white_sun_cloud:',
(231,230,221): ':white_sun_rain_cloud:',
(247,220,178): ':white_sun_small_cloud:',
(224,206,192): ':wilted_rose:',
(226,233,238): ':wind_blowing_face:',
(212,226,212): ':wind_chime:',
(222,203,209): ':wine_glass:',
(233,192,91): ':wink:',
(153,160,166): ':wolf:',
(251,198,105): ':woman:',
(156,144,141): ':woman::skin-tone-1:',
(247,219,118): ':woman::skin-tone-2:',
(192,133,111): ':woman::skin-tone-3:',
(156,117,99): ':woman::skin-tone-4:',
(100,80,72): ':woman::skin-tone-5:',
(199,179,228): ':womans_clothes:',
(248,232,208): ':womans_hat:',
(239,132,148): ':womens:',
(233,191,91): ':worried:',
(212,218,223): ':wrench:',
(228,223,223): ':writing_hand::skin-tone-1:',
(225,219,207): ':writing_hand::skin-tone-2:',
(213,204,196): ':writing_hand::skin-tone-3:',
(197,186,177): ':writing_hand::skin-tone-4:',
(176,168,167): ':writing_hand::skin-tone-5:',
(242,177,185): ':x:',
(254,217,133): ':yellow_heart:',
(222,210,178): ':yen:',
(235,188,92): ':yum:',
(232,197,106): ':zipper_mouth:',
(206,225,239): ':zzz:',
}
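The table above pairs each emoji shortcode with the average RGB colour of its glyph. A common use for such a mapping is nearest-colour lookup, e.g. choosing the emoji whose average colour best matches a pixel. A minimal sketch, using a small hypothetical subset of the table rather than the full mapping:

```python
import math

# A few entries in the same shape as the table above (illustrative subset).
EMOJI_BY_RGB = {
    (246, 223, 141): ':man::skin-tone-2:',
    (228, 90, 107): ':red_circle:',
    (254, 217, 133): ':yellow_heart:',
}

def closest_emoji(rgb):
    # Plain Euclidean distance in RGB space; crude but serviceable here.
    return min(EMOJI_BY_RGB.items(), key=lambda kv: math.dist(kv[0], rgb))[1]

print(closest_emoji((230, 95, 110)))  # nearest entry is (228, 90, 107)
```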
# Soluciones/Python/Entrada_salida/Ejercicio04.py (TheInventorist/Material-Programacion, MIT license)
numero = float(input("Ingrese numero: "))
print(f"Parte entera: {int(numero)}")
print(f"Parte decimal: {numero - int(numero)}")
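The standard library offers `math.modf` for the same split; it returns the fractional and integral parts as a tuple of floats. A sketch with a fixed value standing in for `input()`:

```python
import math

numero = 3.75  # fixed value instead of input() for the example
fraccion, entero = math.modf(numero)  # returns (fractional, integral)
print(f"Parte entera: {int(entero)}")  # 3
print(f"Parte decimal: {fraccion}")    # 0.75
```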
# swagger_client/api/faction_warfare_api.py (rseichter/bootini-star, MIT license)
# coding: utf-8
"""
EVE Swagger Interface
An OpenAPI for EVE Online # noqa: E501
OpenAPI spec version: 0.8.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# Python 2 and Python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class FactionWarfareApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_characters_character_id_fw_stats(self, character_id, **kwargs): # noqa: E501
"""Overview of a character involved in faction warfare # noqa: E501
Statistical overview of a character involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_characters_character_id_fw_stats(character_id, async=True)
>>> result = thread.get()
:param async bool
:param int character_id: An EVE character ID (required)
:param str datasource: The server name you would like data from
:param str token: Access token to use if unable to set a header
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetCharactersCharacterIdFwStatsOk
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_characters_character_id_fw_stats_with_http_info(character_id, **kwargs) # noqa: E501
else:
(data) = self.get_characters_character_id_fw_stats_with_http_info(character_id, **kwargs) # noqa: E501
return data
def get_characters_character_id_fw_stats_with_http_info(self, character_id, **kwargs): # noqa: E501
"""Overview of a character involved in faction warfare # noqa: E501
Statistical overview of a character involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_characters_character_id_fw_stats_with_http_info(character_id, async=True)
>>> result = thread.get()
:param async bool
:param int character_id: An EVE character ID (required)
:param str datasource: The server name you would like data from
:param str token: Access token to use if unable to set a header
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetCharactersCharacterIdFwStatsOk
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['character_id', 'datasource', 'token', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_characters_character_id_fw_stats" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'character_id' is set
if ('character_id' not in params or
params['character_id'] is None):
raise ValueError("Missing the required parameter `character_id` when calling `get_characters_character_id_fw_stats`") # noqa: E501
if 'character_id' in params and params['character_id'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `character_id` when calling `get_characters_character_id_fw_stats`, must be a value greater than or equal to `1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'character_id' in params:
path_params['character_id'] = params['character_id'] # noqa: E501
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'token' in params:
query_params.append(('token', params['token'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['evesso'] # noqa: E501
return self.api_client.call_api(
'/v1/characters/{character_id}/fw/stats/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetCharactersCharacterIdFwStatsOk', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
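The docstrings above describe the generated client's calling convention: passing `async=True` returns a worker-thread handle whose `.get()` blocks for the result. A self-contained stand-in for that pattern follows; this is not the real `swagger_client`, the class and method names are invented, the character ID is arbitrary, and the keyword is spelled `async_req` here because `async` became a reserved word in Python 3.7 (later generator versions renamed the parameter for the same reason):

```python
from multiprocessing.pool import ThreadPool

class MiniApi:
    """Hypothetical stand-in mimicking the generated sync/async convention."""

    def __init__(self):
        self.pool = ThreadPool(1)  # the real client keeps a pool on ApiClient

    def get_stats(self, character_id, async_req=False):
        if async_req:
            # Returns an AsyncResult; the caller blocks via .get(),
            # mirroring ">>> thread = api.call(async=True); thread.get()".
            return self.pool.apply_async(self._fetch, (character_id,))
        return self._fetch(character_id)

    def _fetch(self, character_id):
        # Placeholder for the HTTP round trip done by call_api().
        return {'character_id': character_id, 'kills': 0}

api = MiniApi()
thread = api.get_stats(95465499, async_req=True)  # arbitrary example ID
result = thread.get()  # blocks until the worker thread finishes
print(result)
```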
def get_corporations_corporation_id_fw_stats(self, corporation_id, **kwargs): # noqa: E501
"""Overview of a corporation involved in faction warfare # noqa: E501
Statistics about a corporation involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_corporations_corporation_id_fw_stats(corporation_id, async=True)
>>> result = thread.get()
:param async bool
:param int corporation_id: An EVE corporation ID (required)
:param str datasource: The server name you would like data from
:param str token: Access token to use if unable to set a header
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetCorporationsCorporationIdFwStatsOk
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_corporations_corporation_id_fw_stats_with_http_info(corporation_id, **kwargs) # noqa: E501
else:
(data) = self.get_corporations_corporation_id_fw_stats_with_http_info(corporation_id, **kwargs) # noqa: E501
return data
def get_corporations_corporation_id_fw_stats_with_http_info(self, corporation_id, **kwargs): # noqa: E501
"""Overview of a corporation involved in faction warfare # noqa: E501
Statistics about a corporation involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_corporations_corporation_id_fw_stats_with_http_info(corporation_id, async=True)
>>> result = thread.get()
:param async bool
:param int corporation_id: An EVE corporation ID (required)
:param str datasource: The server name you would like data from
:param str token: Access token to use if unable to set a header
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetCorporationsCorporationIdFwStatsOk
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['corporation_id', 'datasource', 'token', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_corporations_corporation_id_fw_stats" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'corporation_id' is set
if ('corporation_id' not in params or
params['corporation_id'] is None):
raise ValueError("Missing the required parameter `corporation_id` when calling `get_corporations_corporation_id_fw_stats`") # noqa: E501
if 'corporation_id' in params and params['corporation_id'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `corporation_id` when calling `get_corporations_corporation_id_fw_stats`, must be a value greater than or equal to `1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'corporation_id' in params:
path_params['corporation_id'] = params['corporation_id'] # noqa: E501
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'token' in params:
query_params.append(('token', params['token'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['evesso'] # noqa: E501
return self.api_client.call_api(
'/v1/corporations/{corporation_id}/fw/stats/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetCorporationsCorporationIdFwStatsOk', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_leaderboards(self, **kwargs): # noqa: E501
"""List of the top factions in faction warfare # noqa: E501
Top 4 leaderboard of factions for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsOk
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_leaderboards_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_leaderboards_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_leaderboards_with_http_info(self, **kwargs): # noqa: E501
"""List of the top factions in faction warfare # noqa: E501
Top 4 leaderboard of factions for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsOk
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_leaderboards" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/leaderboards/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetFwLeaderboardsOk', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_leaderboards_characters(self, **kwargs): # noqa: E501
"""List of the top pilots in faction warfare # noqa: E501
Top 100 leaderboard of pilots for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards_characters(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsCharactersOk
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_leaderboards_characters_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_leaderboards_characters_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_leaderboards_characters_with_http_info(self, **kwargs): # noqa: E501
"""List of the top pilots in faction warfare # noqa: E501
Top 100 leaderboard of pilots for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards_characters_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsCharactersOk
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_leaderboards_characters" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/leaderboards/characters/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetFwLeaderboardsCharactersOk', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_leaderboards_corporations(self, **kwargs): # noqa: E501
"""List of the top corporations in faction warfare # noqa: E501
Top 10 leaderboard of corporations for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards_corporations(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsCorporationsOk
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_leaderboards_corporations_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_leaderboards_corporations_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_leaderboards_corporations_with_http_info(self, **kwargs): # noqa: E501
"""List of the top corporations in faction warfare # noqa: E501
Top 10 leaderboard of corporations for kills and victory points separated by total, last week and yesterday. --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_leaderboards_corporations_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: GetFwLeaderboardsCorporationsOk
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_leaderboards_corporations" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/leaderboards/corporations/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GetFwLeaderboardsCorporationsOk', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_stats(self, **kwargs): # noqa: E501
"""An overview of statistics about factions involved in faction warfare # noqa: E501
Statistical overviews of factions involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_stats(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwStats200Ok]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_stats_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_stats_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_stats_with_http_info(self, **kwargs): # noqa: E501
"""An overview of statistics about factions involved in faction warfare # noqa: E501
Statistical overviews of factions involved in faction warfare --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_stats_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwStats200Ok]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_stats" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/stats/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GetFwStats200Ok]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_systems(self, **kwargs): # noqa: E501
"""Ownership of faction warfare systems # noqa: E501
An overview of the current ownership of faction warfare solar systems --- This route is cached for up to 1800 seconds # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_systems(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwSystems200Ok]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_systems_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_systems_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_systems_with_http_info(self, **kwargs): # noqa: E501
"""Ownership of faction warfare systems # noqa: E501
An overview of the current ownership of faction warfare solar systems --- This route is cached for up to 1800 seconds # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_systems_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwSystems200Ok]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_systems" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/systems/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GetFwSystems200Ok]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_fw_wars(self, **kwargs): # noqa: E501
"""Data about which NPC factions are at war # noqa: E501
Data about which NPC factions are at war --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_wars(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwWars200Ok]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_fw_wars_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_fw_wars_with_http_info(**kwargs) # noqa: E501
return data
def get_fw_wars_with_http_info(self, **kwargs): # noqa: E501
"""Data about which NPC factions are at war # noqa: E501
Data about which NPC factions are at war --- This route expires daily at 11:05 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_fw_wars_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str datasource: The server name you would like data from
:param str user_agent: Client identifier, takes precedence over headers
:param str x_user_agent: Client identifier, takes precedence over User-Agent
:return: list[GetFwWars200Ok]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['datasource', 'user_agent', 'x_user_agent'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_fw_wars" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'datasource' in params:
query_params.append(('datasource', params['datasource'])) # noqa: E501
if 'user_agent' in params:
query_params.append(('user_agent', params['user_agent'])) # noqa: E501
header_params = {}
if 'x_user_agent' in params:
header_params['X-User-Agent'] = params['x_user_agent'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v1/fw/wars/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GetFwWars200Ok]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
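# The docstrings above all describe the same calling convention: synchronous by
# default, or pass async=True to get back a worker whose .get() yields the
# result. A minimal stdlib-only sketch of that pattern follows; the FwApi and
# _fetch names are hypothetical stand-ins, not part of this generated client.
# Note that `async` became a reserved word in Python 3.7, so the keyword has to
# be passed via a dict when calling on modern interpreters.

```python
from multiprocessing.pool import ThreadPool


class FwApi:
    """Toy stand-in for the generated API class (hypothetical)."""

    def __init__(self):
        self._pool = ThreadPool(1)

    def _fetch(self):
        # A real client would issue the HTTP request here.
        return {'kills': {'yesterday': 10}}

    def get_fw_wars(self, **kwargs):
        # Mirrors the generated code's `async=params.get('async')` dispatch.
        if kwargs.get('async'):
            return self._pool.apply_async(self._fetch)  # caller does .get()
        return self._fetch()


api = FwApi()
# `api.get_fw_wars(async=True)` is a SyntaxError on Python 3.7+,
# so the keyword is supplied through dict unpacking instead:
thread = api.get_fw_wars(**{'async': True})
result = thread.get()
print(result)
```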
# File: src/genie/libs/parser/nxos/tests/ShowLoggingLogfile/cli/equal/golden_output_expected.py
# Repo: balmasea/genieparser (license: Apache-2.0)
expected_output = {
'logs': [
'2019 May 22 16:20:45 ha01-n7010-01 %ACLLOG-5-ACLLOG_FLOW_INTERVAL: Src IP: 172.30.10.100, Dst IP: 10.135.15.2, Src Port: 0, Dst Port: 0, Src Intf: Ethernet3/3, Protocol: "IP"(253), ACL Name: match-ef-acl, ACE Action: Permit, Appl Intf: Vlan10, Hit-count: 600',
'2019 May 22 16:20:50 ha01-n7010-01 %ACLLOG-5-ACLLOG_FLOW_INTERVAL: Src IP: 172.30.10.100, Dst IP: 10.135.15.2, Src Port: 0, Dst Port: 0, Src Intf: Ethernet3/3, Protocol: "IP"(253), ACL Name: match-ef-acl, ACE Action: Permit, Appl Intf: Vlan10, Hit-count: 500',
],
}
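# The golden dict above pins down the exact log lines a parser is expected to
# emit for `show logging logfile`. A hedged sketch of how the key fields of one
# such ACLLOG line can be extracted with a regex — the pattern here is
# illustrative only, not the parser's actual implementation:

```python
import re

line = ('2019 May 22 16:20:45 ha01-n7010-01 %ACLLOG-5-ACLLOG_FLOW_INTERVAL: '
        'Src IP: 172.30.10.100, Dst IP: 10.135.15.2, Src Port: 0, Dst Port: 0, '
        'Src Intf: Ethernet3/3, Protocol: "IP"(253), ACL Name: match-ef-acl, '
        'ACE Action: Permit, Appl Intf: Vlan10, Hit-count: 600')

# Named groups pull out source/destination IPs and the hit counter.
m = re.search(
    r'Src IP: (?P<src_ip>\S+), Dst IP: (?P<dst_ip>\S+),.*Hit-count: (?P<hits>\d+)',
    line)
print(m.group('src_ip'), m.group('dst_ip'), m.group('hits'))
# → 172.30.10.100 10.135.15.2 600
```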
# File: src/climsoft_api/api/form_monthly/schema.py
# Repo: opencdms/climsoft-api (license: MIT)
import datetime
from pydantic import BaseModel, constr, Field
from typing import Optional, List
from climsoft_api.api.schema import Response
field_mapping = {
"stationId": "station_id",
"elementId": "element_id",
"entryDatetime": "entry_datetime"
}
class CreateFormMonthly(BaseModel):
stationId: constr(max_length=255)
elementId: int
yyyy: int
mm_01: Optional[constr(max_length=255)]
mm_02: Optional[constr(max_length=255)]
mm_03: Optional[constr(max_length=255)]
mm_04: constr(max_length=255)
mm_05: Optional[constr(max_length=255)]
mm_06: Optional[constr(max_length=255)]
mm_07: Optional[constr(max_length=255)]
mm_08: Optional[constr(max_length=255)]
mm_09: Optional[constr(max_length=255)]
mm_10: Optional[constr(max_length=255)]
mm_11: Optional[constr(max_length=255)]
mm_12: Optional[constr(max_length=255)]
flag01: Optional[constr(max_length=255)]
flag02: Optional[constr(max_length=255)]
flag03: Optional[constr(max_length=255)]
flag04: Optional[constr(max_length=255)]
flag05: Optional[constr(max_length=255)]
flag06: Optional[constr(max_length=255)]
flag07: Optional[constr(max_length=255)]
flag08: Optional[constr(max_length=255)]
flag09: Optional[constr(max_length=255)]
flag10: Optional[constr(max_length=255)]
flag11: Optional[constr(max_length=255)]
flag12: Optional[constr(max_length=255)]
period01: Optional[constr(max_length=255)]
period02: Optional[constr(max_length=255)]
period03: Optional[constr(max_length=255)]
period04: Optional[constr(max_length=255)]
period05: Optional[constr(max_length=255)]
period06: Optional[constr(max_length=255)]
period07: Optional[constr(max_length=255)]
period08: Optional[constr(max_length=255)]
period09: Optional[constr(max_length=255)]
period10: Optional[constr(max_length=255)]
period11: Optional[constr(max_length=255)]
period12: Optional[constr(max_length=255)]
signature: Optional[constr(max_length=50)]
entryDatetime: Optional[datetime.datetime]
class Config:
orm_mode = True
allow_population_by_field_name = True
class UpdateFormMonthly(BaseModel):
mm_01: Optional[constr(max_length=255)]
mm_02: Optional[constr(max_length=255)]
mm_03: Optional[constr(max_length=255)]
mm_04: Optional[constr(max_length=255)]
mm_05: Optional[constr(max_length=255)]
mm_06: Optional[constr(max_length=255)]
mm_07: Optional[constr(max_length=255)]
mm_08: Optional[constr(max_length=255)]
mm_09: Optional[constr(max_length=255)]
mm_10: Optional[constr(max_length=255)]
mm_11: Optional[constr(max_length=255)]
mm_12: Optional[constr(max_length=255)]
flag01: Optional[constr(max_length=255)]
flag02: Optional[constr(max_length=255)]
flag03: Optional[constr(max_length=255)]
flag04: Optional[constr(max_length=255)]
flag05: Optional[constr(max_length=255)]
flag06: Optional[constr(max_length=255)]
flag07: Optional[constr(max_length=255)]
flag08: Optional[constr(max_length=255)]
flag09: Optional[constr(max_length=255)]
flag10: Optional[constr(max_length=255)]
flag11: Optional[constr(max_length=255)]
flag12: Optional[constr(max_length=255)]
period01: Optional[constr(max_length=255)]
period02: Optional[constr(max_length=255)]
period03: Optional[constr(max_length=255)]
period04: Optional[constr(max_length=255)]
period05: Optional[constr(max_length=255)]
period06: Optional[constr(max_length=255)]
period07: Optional[constr(max_length=255)]
period08: Optional[constr(max_length=255)]
period09: Optional[constr(max_length=255)]
period10: Optional[constr(max_length=255)]
period11: Optional[constr(max_length=255)]
period12: Optional[constr(max_length=255)]
signature: Optional[constr(max_length=50)]
entryDatetime: Optional[datetime.datetime]
class Config:
fields = field_mapping
allow_population_by_field_name = True
class FormMonthly(CreateFormMonthly):
class Config:
fields = field_mapping
allow_population_by_field_name = True
orm_mode = True
class FormMonthlyResponse(Response):
result: List[FormMonthly] = Field(title="Result")
class FormMonthlyQueryResponse(FormMonthlyResponse):
limit: int = Field(title="Limit")
page: int = Field(title="Page")
pages: int = Field(title="Pages")
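# The `field_mapping` dict and the `constr(max_length=...)` annotations above do
# two jobs: rename camelCase payload keys to the snake_case column names, and cap
# string lengths. A stdlib-only sketch of both ideas, without pydantic — the
# `normalize` helper is illustrative, not part of this schema module:

```python
FIELD_MAPPING = {
    'stationId': 'station_id',
    'elementId': 'element_id',
    'entryDatetime': 'entry_datetime',
}


def normalize(payload, max_length=255):
    """Alias camelCase keys to snake_case and enforce a max string length."""
    out = {}
    for key, value in payload.items():
        name = FIELD_MAPPING.get(key, key)  # fall back to the original key
        if isinstance(value, str) and len(value) > max_length:
            raise ValueError('%s exceeds %d characters' % (name, max_length))
        out[name] = value
    return out


print(normalize({'stationId': '67774010', 'yyyy': 2022}))
# → {'station_id': '67774010', 'yyyy': 2022}
```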
#!/usr/bin/env python
# File: pynos/versions/ver_7/ver_7_1_0/yang/brocade_http_config.py
# Repo: bdeetz/pynos (license: Apache-2.0)
import xml.etree.ElementTree as ET
class brocade_http_config(object):
"""Auto generated class.
"""
def __init__(self, **kwargs):
self._callback = kwargs.pop('callback')
def http_sa_http_server_shutdown(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
http_sa = ET.SubElement(config, "http-sa", xmlns="urn:brocade.com:mgmt:brocade-http")
http = ET.SubElement(http_sa, "http")
server = ET.SubElement(http, "server")
shutdown = ET.SubElement(server, "shutdown")
callback = kwargs.pop('callback', self._callback)
return callback(config)
def http_sa_http_server_http_vrf_cont_use_vrf_use_vrf_name(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
http_sa = ET.SubElement(config, "http-sa", xmlns="urn:brocade.com:mgmt:brocade-http")
http = ET.SubElement(http_sa, "http")
server = ET.SubElement(http, "server")
http_vrf_cont = ET.SubElement(server, "http-vrf-cont")
use_vrf = ET.SubElement(http_vrf_cont, "use-vrf")
use_vrf_name = ET.SubElement(use_vrf, "use-vrf-name")
use_vrf_name.text = kwargs.pop('use_vrf_name')
callback = kwargs.pop('callback', self._callback)
return callback(config)
def http_sa_http_server_http_vrf_cont_use_vrf_http_vrf_shutdown(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
http_sa = ET.SubElement(config, "http-sa", xmlns="urn:brocade.com:mgmt:brocade-http")
http = ET.SubElement(http_sa, "http")
server = ET.SubElement(http, "server")
http_vrf_cont = ET.SubElement(server, "http-vrf-cont")
use_vrf = ET.SubElement(http_vrf_cont, "use-vrf")
use_vrf_name_key = ET.SubElement(use_vrf, "use-vrf-name")
use_vrf_name_key.text = kwargs.pop('use_vrf_name')
http_vrf_shutdown = ET.SubElement(use_vrf, "http-vrf-shutdown")
callback = kwargs.pop('callback', self._callback)
return callback(config)
| 42.164948 | 93 | 0.638386 | 531 | 4,090 | 4.6629 | 0.07533 | 0.087237 | 0.072698 | 0.077544 | 0.945073 | 0.945073 | 0.945073 | 0.945073 | 0.945073 | 0.945073 | 0 | 0 | 0.227628 | 4,090 | 97 | 94 | 42.164948 | 0.783792 | 0.054034 | 0 | 0.939394 | 1 | 0 | 0.161189 | 0.051643 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106061 | false | 0 | 0.015152 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
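Each auto-generated method above follows the same pattern: build a nested `<config>` element tree with `xml.etree.ElementTree`, fill any key leaves from `kwargs`, and hand the tree to a pluggable callback. A condensed, hand-written sketch of that pattern — the `to_xml` function is a stand-in for the device callback, not part of pynos:

```python
import xml.etree.ElementTree as ET

def build_http_shutdown_config():
    # Nesting mirrors the YANG path: config -> http-sa -> http -> server -> shutdown
    config = ET.Element("config")
    http_sa = ET.SubElement(config, "http-sa",
                            xmlns="urn:brocade.com:mgmt:brocade-http")
    server = ET.SubElement(ET.SubElement(http_sa, "http"), "server")
    ET.SubElement(server, "shutdown")
    return config

def to_xml(config):
    # stand-in for self._callback, which would push the config to the device
    return ET.tostring(config, encoding="unicode")

print(to_xml(build_http_shutdown_config()))
```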
03cd4df621286e857815d5de95246f371b32e71c | 1,728 | py | Python | extaboada.py | wcalazans81/Exercicios-Python | d9abb9505bbf9151d3515cc9ca5b9bd32435699e | [
"MIT"
] | null | null | null | extaboada.py | wcalazans81/Exercicios-Python | d9abb9505bbf9151d3515cc9ca5b9bd32435699e | [
"MIT"
] | null | null | null | extaboada.py | wcalazans81/Exercicios-Python | d9abb9505bbf9151d3515cc9ca5b9bd32435699e | [
"MIT"
] | null | null | null | n = int(input('Enter a number to see its times table '))
for i in range(1, 11):
    print(n, '+ {} = {}'.format(i, n + i))
print('=0=' * 4)
for i in range(1, 11):
    print('{} - {:2} = {}'.format(n + i, n, n + i - n))
print('=^=' * 4)
for i in range(1, 11):
    print(n, 'X {} = {}'.format(i, n * i))
print('=^=' * 4)
for i in range(1, 11):
    print('{:2} / {:2} = {}'.format(n * i, n, n * i / n))
print('=^=' * 4) | 38.4 | 56 | 0.430556 | 330 | 1,728 | 2.254545 | 0.072727 | 0.376344 | 0.236559 | 0.331989 | 0.897849 | 0.854839 | 0.491935 | 0.491935 | 0.491935 | 0.491935 | 0 | 0.078458 | 0.159144 | 1,728 | 45 | 57 | 38.4 | 0.433586 | 0 | 0 | 0.066667 | 0 | 0 | 0.296125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.977778 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
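One detail worth noting in the script above: the `'{:2}'` format spec pads its argument to a minimum width of two characters, which keeps the operand column aligned once results reach two digits. A quick illustration:

```python
# '{:2}' right-aligns a number in a field at least 2 characters wide,
# while plain '{}' uses only as much space as the number needs.
print('{} - {:2} = {}'.format(12, 5, 7))     # "12 -  5 = 7"
print('{:2} / {:2} = {}'.format(5, 5, 1.0))  # " 5 /  5 = 1.0"
```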
03e6948c2061236793347f2b68c1f3f9c80f167a | 6,818 | py | Python | menu/migrations/0001_initial.py | arabindamahato/CanteenMS | 3e7b592798f62fba2c12405ab0d9b4f2fe89a248 | [
"MIT"
] | null | null | null | menu/migrations/0001_initial.py | arabindamahato/CanteenMS | 3e7b592798f62fba2c12405ab0d9b4f2fe89a248 | [
"MIT"
] | 6 | 2021-03-19T03:34:04.000Z | 2021-09-22T19:03:24.000Z | menu/migrations/0001_initial.py | arabindamahato/CanteenMS | 3e7b592798f62fba2c12405ab0d9b4f2fe89a248 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-05-18 13:35
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Menu',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, default="Today's Menu", max_length=255, null=True)),
('description', models.TextField(blank=True, null=True)),
('is_active', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='TodaySpecial',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_name', models.CharField(default='', max_length=255)),
('item_price', models.DecimalField(decimal_places=2, default='', max_digits=6)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('menu', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='today_special', to='menu.Menu')),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created'],
},
),
migrations.CreateModel(
name='Snacks',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_name', models.CharField(default='', max_length=255)),
('item_price', models.DecimalField(decimal_places=2, default='', max_digits=6)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('menu', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='snacks', to='menu.Menu')),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created'],
},
),
migrations.CreateModel(
name='Lunch',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_name', models.CharField(default='', max_length=255)),
('item_price', models.DecimalField(decimal_places=2, default='', max_digits=6)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('menu', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='lunch', to='menu.Menu')),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created'],
},
),
migrations.CreateModel(
name='Dinner',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_name', models.CharField(default='', max_length=255)),
('item_price', models.DecimalField(decimal_places=2, default='', max_digits=6)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('menu', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='dinner', to='menu.Menu')),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created'],
},
),
migrations.CreateModel(
name='Breakfast',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_name', models.CharField(blank=True, default='', max_length=250)),
('item_price', models.DecimalField(decimal_places=2, default='', max_digits=6)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('menu', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='breakfast', to='menu.Menu')),
('updated_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
),
]
| 60.336283 | 153 | 0.609123 | 732 | 6,818 | 5.479508 | 0.113388 | 0.037896 | 0.062827 | 0.098729 | 0.887559 | 0.887559 | 0.874844 | 0.874844 | 0.864622 | 0.864622 | 0 | 0.008287 | 0.238926 | 6,818 | 112 | 154 | 60.875 | 0.764695 | 0.0066 | 0 | 0.733333 | 1 | 0 | 0.092453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
03fc373964f6a76e42f7d1665cfffc85d98e644f | 6,209 | py | Python | tests/data/jsonrpc/invalid/__init__.py | gnexcoin/jussi | 02c1044aa38bf4236075c5ec7cf80485eeacda60 | [
"MIT"
] | 26 | 2017-04-05T02:39:37.000Z | 2020-11-03T02:13:56.000Z | tests/data/jsonrpc/invalid/__init__.py | gnexcoin/jussi | 02c1044aa38bf4236075c5ec7cf80485eeacda60 | [
"MIT"
] | 122 | 2017-04-04T18:33:13.000Z | 2021-05-17T02:02:29.000Z | tests/data/jsonrpc/invalid/__init__.py | gnexcoin/jussi | 02c1044aa38bf4236075c5ec7cf80485eeacda60 | [
"MIT"
] | 37 | 2017-08-07T22:55:44.000Z | 2022-01-21T22:56:30.000Z | # -*- coding: utf-8 -*-
batch = [
[],
[{'Id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]}],
[{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': 1000}, {}],
[{'id': 1, 'json_rpc': '2.0', 'method': ['get_block'], 'params': '1000'},
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'jsonrpc': None, 'method': 'get_block', 'params': [1000]}],
[{'METHOD': 'get_block', 'id': 1, 'json_rpc': '2.0', 'params': '1000'},
{'id': 1, 'json_rpc': ['2.0'], 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]}],
[None,
{'METHOD': 'get_block', 'id': 1, 'json_rpc': '2.0', 'params': '1000'},
b'',
{},
{'id': 1, 'method': 'get_block', 'params': [1000]}],
['',
{'ID': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': '2.0', 'method': ['get_block'], 'params': '1000'},
{'id': 1, 'jsonrpc': 2.0, 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
''],
[{'METHOD': 'get_block', 'id': 1, 'json_rpc': '2.0', 'params': '1000'},
{'id': 1, 'jsonrpc': 2.0, 'method': 'get_block', 'params': [1000]},
[],
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': 1000},
{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': None},
{'id': [1], 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
False],
[{'id': 1, 'jsonrpc': None, 'method': 'get_block', 'params': [1000]},
{'id': 1, 'jsonrpc': 2.0, 'method': 'get_block', 'params': [1000]},
{'id': 1, 'jsonrpc': None, 'method': 'get_block', 'params': [1000]},
{'id': 1, 'jsonrpc': 2.0, 'method': 'get_block', 'params': [1000]},
{'Id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': [1], 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json-rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': None, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]}],
[{'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': 1000},
{'Id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': '2.0', 'method': ['get_block'], 'params': '1000'},
{'id': 1, 'json_rpc': '2.0', 'method': ['get_block'], 'params': '1000'},
{'id': 1, 'json-rpc': '2.0', 'method': 'get_block', 'params': [1000]},
b'',
{'id': 1, 'json_rpc': ['2.0'], 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json-rpc': '2.0', 'method': 'get_block', 'params': [1000]},
{'id': 1, 'json_rpc': ['2.0'], 'method': 'get_block', 'params': [1000]}]
]
requests = [
# bad/missing jsonrpc
{
'id': 1,
'method': 'get_block',
'params': [1000]
},
{
'id': 1,
'jsonrpc': 2.0,
'method': 'get_block',
'params': [1000]
},
{
'id': 1,
'json-rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'id': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'id': 1,
'json_rpc': ['2.0'],
'method': 'get_block',
'params': [1000]
},
{
'id': 1,
'jsonrpc': None,
'method': 'get_block',
'params': [1000]
},
# bad/missing id
{
'id': None,
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'ID': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'Id': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'id': [1],
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
{
'id': None,
'json_rpc': '2.0',
'method': 'get_block',
'params': [1000]
},
# bad params
{
'id': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': 1000
},
{
'id': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': '1000'
},
{
'id': 1,
'json_rpc': '2.0',
'method': 'get_block',
'params': None
},
# bad/missing method
{
'id': 1,
'json_rpc': '2.0',
'params': [1000]
},
{
'id': 1,
'json_rpc': '2.0',
'METHOD': 'get_block',
'params': '1000'
},
{
'id': 1,
'json_rpc': '2.0',
'method': ['get_block'],
'params': '1000'
},
{
'id': 1,
'json_rpc': '2.0',
'method': None,
'params': '1000'
},
# invalid
False,
'False',
b'False',
'false',
b'false',
True,
'True',
b'True',
'true',
b'true',
None,
'None',
b'None',
'null',
b'null',
1,
'1',
b'1',
1.0,
'1.0',
b'1.0',
{},
'{}',
b'{}',
[],
'[]',
b'[]',
'',
b'',
]
responses = [
False,
'False',
b'False',
'false',
b'false',
True,
'True',
b'True',
'true',
b'true',
# None,
#'None',
# b'None',
#'null',
# b'null',
1,
'1',
b'1',
1.0,
'1.0',
b'1.0',
{},
'{}',
b'{}',
[],
'[]',
b'[]',
'',
b'',
# bad id
{"id": False, "jsonrpc": "2.0", "result": 1},
{"id": [1], "jsonrpc":"2.0", "result":1},
{"id": {}, "jsonrpc": "2.0", "result": 1},
# bad jsonrpc
{"id": 1, "jsonrpc": 2.0, "result": 1},
{"id": 1, "jsonrpc": 2, "result": 1},
{"id": 1, "result": 1},
    # missing result and error
{"id": 1, "jsonrpc": 2},
# both result and error
{"id": 1, "jsonrpc": "2.0", "result": 1, "error": {"code": -32600, "message": "Invalid Request"}}
]
| 23.608365 | 101 | 0.405218 | 727 | 6,209 | 3.335626 | 0.048143 | 0.070515 | 0.300206 | 0.404124 | 0.903505 | 0.896082 | 0.893608 | 0.885773 | 0.880412 | 0.85567 | 0 | 0.095553 | 0.319053 | 6,209 | 262 | 102 | 23.698473 | 0.478004 | 0.031889 | 0 | 0.605505 | 0 | 0 | 0.344954 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
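The fixtures above enumerate requests that break JSON-RPC 2.0 in different ways: a missing or misspelled `jsonrpc` key, a wrong version type, a non-string `method`, unstructured `params`, or a malformed `id`. A small validator sketch — ours for illustration, not jussi's actual validation code — that rejects each class of fixture:

```python
def is_valid_jsonrpc_request(request):
    """Return True only for a well-formed JSON-RPC 2.0 request object."""
    if not isinstance(request, dict):
        return False  # catches lists, strings, bytes, numbers, None, ...
    if request.get('jsonrpc') != '2.0':
        return False  # bad/missing 'jsonrpc' key, or wrong version/type
    if not isinstance(request.get('method'), str):
        return False  # bad/missing method
    if 'params' in request and not isinstance(request['params'], (list, dict)):
        return False  # params, if present, must be structured
    if 'id' in request and not isinstance(request['id'], (str, int, float)):
        return False  # id, if present, must be a string or number
    return True

good = {'id': 1, 'jsonrpc': '2.0', 'method': 'get_block', 'params': [1000]}
bad = {'id': 1, 'json_rpc': '2.0', 'method': 'get_block', 'params': [1000]}
print(is_valid_jsonrpc_request(good), is_valid_jsonrpc_request(bad))  # True False
```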
205e326a269ce2720aae0eab7dc3c7fa5e4f3aa1 | 6,085 | py | Python | plots_comparative.py | hekun520/MEC_offloading | 42b17c4172f10ae15d13cc1c30f1389904be647f | [
"MIT"
] | 73 | 2019-04-08T15:04:01.000Z | 2022-03-26T09:44:00.000Z | plots_comparative.py | Abednego97/MEC_offloading | 42b17c4172f10ae15d13cc1c30f1389904be647f | [
"MIT"
] | 4 | 2019-11-15T14:29:45.000Z | 2021-01-18T15:55:27.000Z | plots_comparative.py | Abednego97/MEC_offloading | 42b17c4172f10ae15d13cc1c30f1389904be647f | [
"MIT"
] | 22 | 2019-06-02T08:54:00.000Z | 2022-03-17T02:22:15.000Z | # -*- coding: utf-8 -*-
"""
MEC_offloading.plots_comparative
~~~~~~~~~~~~~~~~~~~~~~~~~
Generate comparative plots for the MEC_offloading
:copyright: (c) 2018 by Giorgos Mitsis.
:license: MIT License, see LICENSE for more details.
"""
import dill
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from create_plots import *
SAVE_FIGS = True
# different cases
# Select which case to run
cases = [{"users": "hetero", "servers": "hetero", "offload": "dyn"}, {"users": "hetero", "servers": "hetero", "offload": "25"}, {"users": "hetero", "servers": "hetero", "offload": "58.6"}, {"users": "hetero", "servers": "hetero", "offload": "100"}]
results = {}
params = {}
keys = []
a = []
for case in cases:
key = case["users"] + "_" + case["servers"] + "_offload_" + case["offload"]
keys.append(key)
infile = "saved_runs/parameters/" + case["users"] + "_" + case["servers"] + "_lr_" + "0.20"
with open(infile, 'rb') as in_strm:
params[key] = dill.load(in_strm)
a.append(params[key]["a"])
infile = "saved_runs/results/" + key + "_lr_" + "0.20"
with open(infile, 'rb') as in_strm:
results[key] = dill.load(in_strm)
# if not np.all(a == a[1]):
# raise ValueError("Parameters are not equal for different cases")
color_sequence = ['#1f77b4', '#aec7e8', '#ffbb78', '#2ca02c', '#c0c0c0', '#ff00ff', '#00ffff', '#ffff00']
index = 0
offload = ["dynamic offloading", "25% offloading", "58.6% offloading", "100% offloading"]
suptitle = "Average servers' welfare for different cases"
fig, ax = setup_plots(suptitle)
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]):
item.set_fontsize(30)
for item in (ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(26)
item.set_fontweight("bold")
font = {'weight' : 'bold'}
matplotlib.rc('font', **font)
# set offset so that text on the figures does not collide
y_offset = [-700, 0, 300, 0]
for key in keys:
average_welfare = np.mean(results[key]["all_server_welfare"][:results[key]["median_timeslots"]], axis=1)
plt.plot(average_welfare, lw=5, color=color_sequence[index])
y_pos = average_welfare[-1]
plt.text(len(average_welfare) + 5, y_pos+y_offset[index], offload[index], fontsize=24, color=color_sequence[index])
index += 1
xlabel = "timeslots"
ylabel = "servers' welfare"
plt.xlabel(xlabel, fontweight='bold')
plt.ylabel(ylabel, fontweight='bold')
path_name = "all_server_welfare"
if SAVE_FIGS == True:
plt.savefig("plots/" + path_name + ".png")
plt.show(block=False)
index = 0
suptitle = "Average users' utility for different cases"
fig, ax = setup_plots(suptitle)
for key in keys:
average_utility = np.mean(results[key]["all_user_utility"][:results[key]["median_timeslots"]], axis=1)
plt.plot(average_utility, lw=5, color=color_sequence[index])
y_pos = average_utility[-1]
plt.text(len(average_utility) + 5, y_pos, offload[index], fontsize=24, color=color_sequence[index])
index += 1
xlabel = "timeslots"
ylabel = "users' utility"
plt.xlabel(xlabel, fontweight='bold')
plt.ylabel(ylabel, fontweight='bold')
path_name = "all_user_utility"
if SAVE_FIGS == True:
plt.savefig("plots/" + path_name + ".png")
plt.show(block=False)
# different learning rates
# Select which case to run
case = {"users": "hetero", "servers": "hetero"}
learning_rates = ["0.10", "0.20", "0.30", "0.40", "0.50"]
results = {}
params = {}
keys = []
a = []
for learning_rate in learning_rates:
key = case["users"] + "_" + case["servers"] + "_lr_" + learning_rate
keys.append(key)
infile = "saved_runs/parameters/" + key
with open(infile, 'rb') as in_strm:
params[key] = dill.load(in_strm)
a.append(params[key]["a"])
infile = "saved_runs/results/" + key
with open(infile, 'rb') as in_strm:
results[key] = dill.load(in_strm)
if not np.all(a == a[1]):
raise ValueError("Parameters are not equal for different cases")
color_sequence = ['#1f77b4', '#aec7e8', '#ff7f0e', '#ffbb78', '#2ca02c']
index = 0
suptitle = "Average servers' welfare for different learning rates"
fig, ax = setup_plots(suptitle)
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]):
item.set_fontsize(30)
for item in (ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(26)
item.set_fontweight("bold")
font = {'weight' : 'bold'}
matplotlib.rc('font', **font)
for key in keys:
average_welfare = np.mean(results[key]["all_server_welfare"][:results[key]["median_timeslots"]], axis=1)
plt.plot(average_welfare, lw=5, color=color_sequence[index])
y_pos = average_welfare[-1]
name = "b = " + key[-4:]
plt.text(len(average_welfare) + 5, y_pos, name, fontsize=24, color=color_sequence[index])
index += 1
xlabel = "timeslots"
ylabel = "servers' welfare"
plt.xlabel(xlabel, fontweight='bold')
plt.ylabel(ylabel, fontweight='bold')
path_name = "all_server_welfare_different_learning_rates"
if SAVE_FIGS == True:
plt.savefig("plots/" + path_name + ".png")
plt.show(block=False)
index = 0
suptitle = "Average users' utility for different learning rates"
fig, ax = setup_plots(suptitle)
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]):
item.set_fontsize(30)
for item in (ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(26)
item.set_fontweight("bold")
font = {'weight' : 'bold'}
matplotlib.rc('font', **font)
for key in keys:
average_utility = np.mean(results[key]["all_user_utility"][:results[key]["median_timeslots"]], axis=1)
plt.plot(average_utility, lw=5, color=color_sequence[index])
y_pos = average_utility[-1]
name = "b = " + key[-4:]
plt.text(len(average_utility) + 5, y_pos, name, fontsize=24, color=color_sequence[index])
index += 1
xlabel = "timeslots"
ylabel = "users' utility"
plt.xlabel(xlabel, fontweight='bold')
plt.ylabel(ylabel, fontweight='bold')
path_name = "all_user_utility_different_learning_rates"
if SAVE_FIGS == True:
plt.savefig("plots/" + path_name + ".png")
plt.show(block=False)
if SAVE_FIGS == False:
plt.show()
| 29.975369 | 248 | 0.669844 | 857 | 6,085 | 4.607935 | 0.18203 | 0.030387 | 0.036465 | 0.046594 | 0.838187 | 0.759686 | 0.74196 | 0.722715 | 0.703722 | 0.690555 | 0 | 0.025526 | 0.156615 | 6,085 | 202 | 249 | 30.123762 | 0.743959 | 0.076582 | 0 | 0.751825 | 1 | 0 | 0.215974 | 0.022923 | 0.014599 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043796 | 0 | 0.043796 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
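The plotting script above repeatedly reduces a timeslots-by-agents matrix with `np.mean(..., axis=1)` after truncating it to the first `median_timeslots` rows. A small sketch of that reduction, with made-up values:

```python
import numpy as np

# rows = timeslots, columns = servers (values are illustrative)
all_server_welfare = np.array([[1.0, 3.0],
                               [2.0, 4.0],
                               [3.0, 5.0]])

median_timeslots = 2  # keep only the first 2 timeslots, as the script does
average_welfare = np.mean(all_server_welfare[:median_timeslots], axis=1)
print(average_welfare)  # [2. 3.]
```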
20944360b8b37d12229229e739ca318b37976d62 | 11,973 | py | Python | healthcareai/tests/test_dataframe_transformers_Dataframe_Imputer.py | Eastwinds99/healthcareai-py | 6667e3485aaefea90b203dffd859f0d76c593d90 | [
"MIT"
] | 263 | 2017-05-04T17:00:33.000Z | 2022-03-31T20:57:27.000Z | healthcareai/tests/test_dataframe_transformers_Dataframe_Imputer.py | Eastwinds99/healthcareai-py | 6667e3485aaefea90b203dffd859f0d76c593d90 | [
"MIT"
] | 290 | 2017-05-03T05:04:35.000Z | 2020-08-14T20:18:23.000Z | healthcareai/tests/test_dataframe_transformers_Dataframe_Imputer.py | Eastwinds99/healthcareai-py | 6667e3485aaefea90b203dffd859f0d76c593d90 | [
"MIT"
] | 168 | 2017-05-18T19:44:20.000Z | 2022-03-16T19:55:51.000Z | import pandas as pd
import numpy as np
import unittest
import healthcareai.common.transformers as transformers
from healthcareai.common.healthcareai_error import HealthcareAIError
class TestDataframeImputer(unittest.TestCase):
def test_imputation_false_returns_unmodified(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
['a', None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
['a', None, None]
])
result = transformers.DataFrameImputer(impute=False).fit_transform(df)
self.assertEqual(len(result), 4)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputation_removes_nans(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
[np.nan, np.nan, np.nan]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
['b', 4 / 3.0, 5 / 3.0]
])
result = transformers.DataFrameImputer().fit_transform(df)
self.assertEqual(len(result), 4)
# Assert no NANs
self.assertFalse(result.isnull().values.any())
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputation_removes_nones(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
['b', 4 / 3.0, 5 / 3.0]
])
result = transformers.DataFrameImputer().fit_transform(df)
self.assertEqual(len(result), 4)
self.assertFalse(result.isnull().values.any())
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputation_for_mean_of_numeric_and_mode_for_categorical(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
[None, None, None]
])
result = transformers.DataFrameImputer().fit_transform(df)
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
['b', 4. / 3, 5. / 3]
])
self.assertEqual(len(result), 4)
# Assert imputed values
self.assertTrue(expected.equals(result))
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
class TestAdvanceImputer(unittest.TestCase):
def test_imputation_false_and_imputeStrategy_None_returns_unmodified(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
result = transformers.DataFrameImputer(impute=False, imputeStrategy=None ).fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_None_impute_for_None(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 22 / 9.0, 30 / 9.0]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy=None ).fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_None_impute_for_NaN(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[np.NaN, np.NaN, np.NaN]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 22 / 9.0, 30 / 9.0]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy=None ).fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputation_false_and_imputeStrategy_MeanMedian_returns_unmodified(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
result = transformers.DataFrameImputer(impute=False, imputeStrategy='MeanMedian' ).fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_MeanMedian_impute_for_None(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 22 / 9.0, 30 / 9.0]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy='MeanMode' ).fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_MeanMode_impute_for_NaN(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[np.nan, np.nan, np.nan]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 22 / 9.0, 30 / 9.0]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy='MeanMode').fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputation_false_and_imputeStrategy_RandomForest_returns_unmodified(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
result = transformers.DataFrameImputer(impute=False, imputeStrategy='RandomForest').fit_transform(df)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_RandomForest_impute_for_None(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[None, None, None]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 1.567, 6.032]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy='RandomForest').fit_transform(df)
result = round(result, 3)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
def test_imputeStrategy_RandomForest_impute_for_NaN(self):
df = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
[np.nan, np.nan, np.nan]
])
expected = pd.DataFrame([
['a', 1, 2],
['b', 1, 1],
['b', 4, 1],
['a', 2, 8],
['b', 2, 6],
['b', 1, 2],
['a', 6, 2],
['b', 3, 1],
['b', 2, 7],
['b', 1.567, 6.032]
])
result = transformers.DataFrameImputer(impute=True, imputeStrategy='RandomForest').fit_transform(df)
result = round(result, 3)
self.assertEqual(len(result), 10)
# Assert column types remain identical
self.assertTrue(list(result.dtypes) == list(df.dtypes))
self.assertTrue(expected.equals(result))
if __name__ == '__main__':
unittest.main()
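The expected fill values used throughout the tests above (`22 / 9.0`, `30 / 9.0`, and the categorical `'b'`) are plain column means and the column mode over the nine complete rows. A standalone pandas check of that arithmetic, independent of `transformers.DataFrameImputer`:

```python
import pandas as pd

# The nine complete rows shared by all of the test fixtures above.
col_cat = pd.Series(['a', 'b', 'b', 'a', 'b', 'b', 'a', 'b', 'b'])
col_x = pd.Series([1, 1, 4, 2, 2, 1, 6, 3, 2])
col_y = pd.Series([2, 1, 1, 8, 6, 2, 2, 1, 7])

assert col_cat.mode()[0] == 'b'           # categorical column: mode
assert abs(col_x.mean() - 22 / 9.0) < 1e-12  # numeric column: mean
assert abs(col_y.mean() - 30 / 9.0) < 1e-12
```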
# === File: mayan/apps/document_states/tests/test_workflow_template_state_api.py
# === Repo: atitaya1412/Mayan-EDMS | License: Apache-2.0
from rest_framework import status
from mayan.apps.rest_api.tests.base import BaseAPITestCase
from ..events import event_workflow_template_edited
from ..permissions import (
permission_workflow_template_edit, permission_workflow_template_view
)
from .literals import TEST_WORKFLOW_TEMPLATE_STATE_LABEL
from .mixins.workflow_template_mixins import WorkflowTemplateTestMixin
from .mixins.workflow_template_state_mixins import WorkflowTemplateStateAPIViewTestMixin
class WorkflowTemplateStatesAPIViewTestCase(
WorkflowTemplateStateAPIViewTestMixin, WorkflowTemplateTestMixin,
BaseAPITestCase
):
def setUp(self):
super().setUp()
self._create_test_workflow_template()
def test_workflow_template_state_create_api_view_no_permission(self):
self._clear_events()
response = self._request_test_workflow_template_state_create_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.test_workflow_template.refresh_from_db()
self.assertEqual(self.test_workflow_template.states.count(), 0)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_create_api_view_with_access(self):
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_edit
)
self._clear_events()
response = self._request_test_workflow_template_state_create_api_view()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.test_workflow_template.refresh_from_db()
self.assertEqual(
self.test_workflow_template.states.first().label,
TEST_WORKFLOW_TEMPLATE_STATE_LABEL
)
events = self._get_test_events()
self.assertEqual(events.count(), 1)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(
events[0].action_object, self.test_workflow_template_state
)
self.assertEqual(events[0].target, self.test_workflow_template)
self.assertEqual(events[0].verb, event_workflow_template_edited.id)
def test_workflow_template_state_delete_api_view_no_permission(self):
self._create_test_workflow_template_state()
self._clear_events()
response = self._request_test_workflow_template_state_delete_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.test_workflow_template.refresh_from_db()
self.assertEqual(self.test_workflow_template.states.count(), 1)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_delete_api_view_with_access(self):
self._create_test_workflow_template_state()
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_edit
)
self._clear_events()
response = self._request_test_workflow_template_state_delete_api_view()
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.test_workflow_template.refresh_from_db()
self.assertEqual(self.test_workflow_template.states.count(), 0)
events = self._get_test_events()
self.assertEqual(events.count(), 1)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(events[0].action_object, None)
self.assertEqual(events[0].target, self.test_workflow_template)
self.assertEqual(events[0].verb, event_workflow_template_edited.id)
def test_workflow_template_state_detail_api_view_no_permission(self):
self._create_test_workflow_template_state()
self._clear_events()
response = self._request_test_workflow_template_state_detail_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_detail_api_view_with_access(self):
self._create_test_workflow_template_state()
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_view
)
self._clear_events()
response = self._request_test_workflow_template_state_detail_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['id'], self.test_workflow_template_state.pk
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_list_api_view_no_permission(self):
self._create_test_workflow_template_state()
self._clear_events()
response = self._request_test_workflow_template_state_list_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_list_api_view_with_access(self):
self._create_test_workflow_template_state()
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_view
)
self._clear_events()
response = self._request_test_workflow_template_state_list_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['results'][0]['id'],
self.test_workflow_template_state.pk
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_edit_api_view_via_patch_no_permission(self):
self._create_test_workflow_template_state()
test_workflow_template_state_label = self.test_workflow_template_state.label
self._clear_events()
response = self._request_test_workflow_template_state_edit_patch_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.test_workflow_template_state.refresh_from_db()
self.assertEqual(
self.test_workflow_template_state.label,
test_workflow_template_state_label
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_edit_api_view_via_patch_with_access(self):
self._create_test_workflow_template_state()
test_workflow_template_state_label = self.test_workflow_template_state.label
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_edit
)
self._clear_events()
response = self._request_test_workflow_template_state_edit_patch_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_workflow_template_state.refresh_from_db()
self.assertNotEqual(
self.test_workflow_template_state.label,
test_workflow_template_state_label
)
events = self._get_test_events()
self.assertEqual(events.count(), 1)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(
events[0].action_object, self.test_workflow_template_state
)
self.assertEqual(events[0].target, self.test_workflow_template)
self.assertEqual(events[0].verb, event_workflow_template_edited.id)
def test_workflow_template_state_edit_api_view_via_put_no_permission(self):
self._create_test_workflow_template_state()
test_workflow_template_state_label = self.test_workflow_template_state.label
self._clear_events()
response = self._request_test_workflow_template_state_edit_put_api_view()
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.test_workflow_template_state.refresh_from_db()
self.assertEqual(
self.test_workflow_template_state.label,
test_workflow_template_state_label
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_workflow_template_state_edit_api_view_via_put_with_access(self):
self._create_test_workflow_template_state()
test_workflow_template_state_label = self.test_workflow_template_state.label
self.grant_access(
obj=self.test_workflow_template,
permission=permission_workflow_template_edit
)
self._clear_events()
response = self._request_test_workflow_template_state_edit_put_api_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_workflow_template_state.refresh_from_db()
self.assertNotEqual(
self.test_workflow_template_state.label,
test_workflow_template_state_label
)
events = self._get_test_events()
self.assertEqual(events.count(), 1)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(
events[0].action_object, self.test_workflow_template_state
)
self.assertEqual(events[0].target, self.test_workflow_template)
self.assertEqual(events[0].verb, event_workflow_template_edited.id)
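Each no-permission/with-access pair above follows the same access-control contract: without the permission the API responds as if the object does not exist (404 rather than 403), and with it the action succeeds. A toy standalone version of that contract (the function and permission names are illustrative, not Mayan APIs):

```python
def view_state(user_perms, required='workflow_template_view'):
    # Objects the caller cannot see are reported as missing, not forbidden,
    # so unauthorized callers cannot probe for object existence.
    return 200 if required in user_perms else 404

assert view_state(set()) == 404
assert view_state({'workflow_template_view'}) == 200
```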
# === File: port/modules/font/digiface_30.py
# === Repo: diskman88/mpython-desktop-robot | License: MIT
# Code generated by font-to-py.py.
# Font: digiface.ttf Char set: .0123456789:
version = '0.26'
def height():
return 30
def max_width():
return 20
def hmap():
return True
def reverse():
return False
def monospaced():
return False
def min_ch():
return 32
def max_ch():
return 63
_font =\
b'\x11\x00\x00\x00\x00\x7f\xe0\x00\xff\xf0\x00\xff\xfc\x00\x7f\xfe'\
b'\x00\x00\x1e\x00\x00\x1e\x00\x00\x1e\x00\x00\x1e\x00\x00\x1e\x00'\
b'\x00\x1e\x00\x00\x1e\x00\x00\x1e\x00\x07\xee\x00\x03\xfa\x00\x09'\
b'\xf8\x00\x0d\xe0\x00\x0e\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00'\
b'\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x0f\x00\x00'\
b'\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0c\x00\x00\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'\
b'\x00\x00\x00\x00\xf0\xf0\xf0\xf0\xf0\xf0\x14\x00\x00\x00\x00\x0f'\
b'\xf8\x00\x1f\xfc\x00\x7f\xff\x80\xff\xff\x80\xf0\x07\x80\xf0\x07'\
b'\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80'\
b'\xf0\x03\x80\xe0\x00\x80\x80\x00\x00\x80\x00\x00\xe0\x01\x80\xf0'\
b'\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07'\
b'\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\x4f\xff\x80\x1f\xff\x00'\
b'\x1f\xfc\x00\x0f\xf8\x00\x14\x00\x00\x00\x00\x10\x00\x00\x30\x00'\
b'\x00\x70\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00'\
b'\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\x70'\
b'\x00\x00\x10\x00\x00\x00\x00\x00\x30\x00\x00\xf0\x00\x00\xf0\x00'\
b'\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00'\
b'\xf0\x00\x00\xf0\x00\x00\x70\x00\x00\x30\x00\x00\x10\x00\x00\x00'\
b'\x00\x00\x14\x00\x00\x00\x00\x0f\xf8\x00\x1f\xfc\x00\x3f\xff\x00'\
b'\x7f\xff\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00'\
b'\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x0f\xff\x80\x3f\xfe'\
b'\x80\x7f\xff\x00\xdf\xfc\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00'\
b'\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0'\
b'\x00\x00\xff\xfc\x00\x7f\xfe\x00\x1f\xff\x00\x0f\xff\x80\x14\x00'\
b'\x00\x00\x00\xff\xf0\x00\x7f\xf8\x00\x3f\xfe\x00\x1f\xff\x00\x00'\
b'\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f'\
b'\x00\x00\x0f\x00\x00\x0f\x00\x1f\xff\x00\x7f\xfd\x00\x7f\xfe\x00'\
b'\x3f\xfb\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00'\
b'\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x1f\xff'\
b'\x00\x3f\xfe\x00\x7f\xf8\x00\xff\xf0\x00\x14\x00\x00\x00\x00\x80'\
b'\x00\x80\xc0\x01\x80\xe0\x03\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07'\
b'\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80'\
b'\xf0\x07\x80\xef\xff\x80\xbf\xfe\x80\x7f\xff\x00\x1f\xfd\x80\x00'\
b'\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07'\
b'\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x03\x80\x00\x01\x80'\
b'\x00\x00\x80\x00\x00\x00\x14\x00\x00\x00\x00\x0f\xff\x00\x1f\xfe'\
b'\x00\x7f\xfc\x00\xff\xf8\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00'\
b'\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xef'\
b'\xfc\x00\xbf\xfe\x00\x7f\xff\x00\x1f\xfd\x80\x00\x07\x80\x00\x07'\
b'\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80'\
b'\x00\x07\x80\x00\x07\x80\x1f\xff\x80\x3f\xff\x00\x7f\xfc\x00\xff'\
b'\xf8\x00\x14\x00\x00\x00\x00\x0f\xff\x00\x1f\xfe\x00\x7f\xfc\x00'\
b'\xff\xf8\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0'\
b'\x00\x00\xf0\x00\x00\xf0\x00\x00\xf0\x00\x00\xef\xfc\x00\xbf\xfe'\
b'\x00\x7f\xff\x00\xdf\xfd\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80'\
b'\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0'\
b'\x07\x80\x6f\xff\x80\x1f\xff\x00\x1f\xfc\x00\x0f\xf8\x00\x14\x00'\
b'\x00\x00\x00\xff\xfd\x00\x7f\xfb\x00\x3f\xf7\x00\x1f\xef\x00\x00'\
b'\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f'\
b'\x00\x00\x0f\x00\x00\x0f\x00\x00\x07\x00\x00\x01\x00\x00\x00\x00'\
b'\x00\x03\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00'\
b'\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x0f\x00\x00\x07'\
b'\x00\x00\x03\x00\x00\x01\x00\x00\x00\x00\x14\x00\x00\x00\x00\x0f'\
b'\xf8\x00\x1f\xfc\x00\x7f\xff\x80\xff\xff\x80\xf0\x07\x80\xf0\x07'\
b'\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80'\
b'\xf0\x03\x80\xef\xfc\x80\xbf\xff\x00\xff\xfe\x00\xff\xfd\x80\xf0'\
b'\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07'\
b'\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\x4f\xff\x80\x1f\xff\x00'\
b'\x1f\xfc\x00\x0f\xf8\x00\x14\x00\x00\x00\x00\x0f\xf8\x00\x1f\xfc'\
b'\x00\x7f\xff\x00\xff\xff\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80'\
b'\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xf0\x07\x80\xef'\
b'\xff\x80\xbf\xfe\x80\x7f\xff\x00\x1f\xfd\x80\x00\x03\x80\x00\x07'\
b'\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80\x00\x07\x80'\
b'\x00\x07\x80\x00\x07\x80\x0f\xff\x80\x1f\xff\x00\x3f\xfc\x00\x7f'\
b'\xf8\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0'\
b'\xf0\xf0\xf0\xf0\xf0\x00\x00\x00\x00\x00\x00\x00\xf0\xf0\xf0\xf0'\
b'\xf0\xf0'
_index =\
b'\x00\x00\x5c\x00\x5c\x00\x9a\x00\x00\x00\x5c\x00\x00\x00\x5c\x00'\
b'\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00'\
b'\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00'\
b'\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00\x9a\x00\xba\x00'\
b'\x00\x00\x5c\x00\xba\x00\x16\x01\x16\x01\x72\x01\x72\x01\xce\x01'\
b'\xce\x01\x2a\x02\x2a\x02\x86\x02\x86\x02\xe2\x02\xe2\x02\x3e\x03'\
b'\x3e\x03\x9a\x03\x9a\x03\xf6\x03\xf6\x03\x52\x04\x52\x04\x72\x04'\
b'\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00\x00\x00\x5c\x00'\
b'\x00\x00\x5c\x00'
_mvfont = memoryview(_font)
def get_ch(ch):
# Return (glyph bitmap, height, width) for ch; characters outside the
# supported range fall back to the default glyph.
ordch = ord(ch)
ordch = ordch + 1 if ordch >= 32 and ordch <= 63 else 63
idx_offs = 4 * (ordch - 32)
offset = int.from_bytes(_index[idx_offs : idx_offs + 2], 'little')
next_offs = int.from_bytes(_index[idx_offs + 2 : idx_offs + 4], 'little')
# The first two bytes of each glyph record store its pixel width.
width = int.from_bytes(_font[offset:offset + 2], 'little')
return _mvfont[offset + 2:next_offs], 30, width
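Since this is a horizontally mapped (`hmap`) font, each glyph row occupies `ceil(width / 8)` bytes, packed MSB-first. A standalone sketch of how such bytes decode to pixel rows; the glyph bytes below are made up for illustration, not taken from `_font`:

```python
def render_rows(data, height, width):
    """Decode MSB-first horizontally mapped glyph bytes into text rows."""
    bytes_per_row = (width + 7) // 8
    rows = []
    for r in range(height):
        row_bytes = data[r * bytes_per_row:(r + 1) * bytes_per_row]
        bits = ''.join(format(b, '08b') for b in row_bytes)[:width]
        rows.append(bits.replace('1', '#').replace('0', '.'))
    return rows

# A made-up 8x4 box glyph: solid top/bottom rows, hollow middle.
glyph = bytes([0xFF, 0x81, 0x81, 0xFF])
for line in render_rows(glyph, 4, 8):
    print(line)
# prints:
# ########
# #......#
# #......#
# ########
```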
# === File: tests/slippinj/cli/scripts/test_anabasii.py
# === Repo: scm-spain/slippin-jimmy | License: Apache-2.0
import logging
from mock import Mock
from slippinj.cli.objects.wf_configuration_object import WfConfigurationObject
from slippinj.cli.scripts.anabasii import Anabasii
class TestAnabasii:
def test_script_can_be_configured(self):
mocked_args_parser = Mock()
mocked_args_parser.add_parser = Mock(return_value=mocked_args_parser)
mocked_args_parser.add_argument = Mock(return_value=True)
Anabasii(mocked_args_parser).configure()
assert 4 == mocked_args_parser.add_argument.call_count
def test_script_is_executable_when_cluster_id_has_not_been_provided_not_standalone_run(self):
mocked_interactive_cluster_id = Mock()
mocked_interactive_cluster_id.get = Mock(return_value=True)
mocked_emr_deploy = Mock()
mocked_emr_deploy.upload_code = Mock(return_value=True)
mocked_injector = Mock()
mocked_injector.get = Mock(
side_effect=[self.__generate_test_logger(), self.__get_mocked_wf_configuration(),
mocked_interactive_cluster_id, mocked_emr_deploy])
mocked_args = Mock()
mocked_args.cluster_id = False
mocked_args.wf_dir = 'test'
mocked_args.hdfs_deploy_folder = 'test'
mocked_args.local_mode = False
mocked_args.script = 'hersir'
Anabasii(Mock()).run(mocked_args, mocked_injector)
assert mocked_interactive_cluster_id.get.called
def test_script_is_executable_when_cluster_id_has_not_been_provided_but_added_on_config_not_standalone_run(self):
mocked_interactive_cluster_id = Mock()
mocked_interactive_cluster_id.get = Mock(return_value=True)
mocked_emr_deploy = Mock()
mocked_emr_deploy.upload_code = Mock(return_value=True)
mocked_injector = Mock()
mocked_injector.get = Mock(
side_effect=[self.__generate_test_logger(), self.__get_mocked_wf_configuration_with_cluster_properties(),
mocked_emr_deploy])
mocked_args = Mock()
mocked_args.cluster_id = False
mocked_args.wf_dir = 'test'
mocked_args.hdfs_deploy_folder = 'test'
mocked_args.local_mode = False
mocked_args.script = 'hersir'
Anabasii(Mock()).run(mocked_args, mocked_injector)
assert not mocked_interactive_cluster_id.get.called
def test_script_is_executable_when_cluster_id_has_not_been_provided_standalone_run(self):
mocked_interactive_cluster_id = Mock()
mocked_interactive_cluster_id.get = Mock(return_value=True)
mocked_emr_deploy = Mock()
mocked_emr_deploy.upload_code = Mock(return_value=True)
mocked_injector = Mock()
mocked_injector.get = Mock(
side_effect=[self.__generate_test_logger(), mocked_interactive_cluster_id, mocked_emr_deploy])
mocked_args = Mock()
mocked_args.cluster_id = False
mocked_args.wf_dir = 'test'
mocked_args.hdfs_deploy_folder = 'test'
mocked_args.local_mode = False
mocked_args.script = 'anabasii'
Anabasii(Mock()).run(mocked_args, mocked_injector)
assert mocked_interactive_cluster_id.get.called
def test_script_is_executable_when_cluster_id_has_been_provided_not_standalone_run(self):
mocked_interactive_cluster_id = Mock()
mocked_interactive_cluster_id.get = Mock(return_value=True)
mocked_emr_deploy = Mock()
mocked_emr_deploy.upload_code = Mock(return_value=True)
mocked_injector = Mock()
mocked_injector.get = Mock(
side_effect=[self.__generate_test_logger(), self.__get_mocked_wf_configuration(), mocked_emr_deploy])
mocked_args = Mock()
mocked_args.cluster_id = 'test'
mocked_args.wf_dir = 'test'
mocked_args.hdfs_deploy_folder = 'test'
mocked_args.local_mode = False
mocked_args.script = 'hersir'
Anabasii(Mock()).run(mocked_args, mocked_injector)
mocked_interactive_cluster_id.get.assert_not_called()
assert mocked_emr_deploy.upload_code.called
def test_script_is_executable_when_cluster_id_has_been_provided_standalone_run(self):
mocked_interactive_cluster_id = Mock()
mocked_interactive_cluster_id.get = Mock(return_value=True)
mocked_emr_deploy = Mock()
mocked_emr_deploy.upload_code = Mock(return_value=True)
mocked_injector = Mock()
mocked_injector.get = Mock(
side_effect=[self.__generate_test_logger(), mocked_emr_deploy])
mocked_args = Mock()
mocked_args.cluster_id = 'test'
mocked_args.wf_dir = 'test'
mocked_args.hdfs_deploy_folder = 'test'
mocked_args.local_mode = False
mocked_args.script = 'anabasii'
Anabasii(Mock()).run(mocked_args, mocked_injector)
mocked_interactive_cluster_id.get.assert_not_called()
assert mocked_emr_deploy.upload_code.called
def __generate_test_logger(self):
logger = logging.getLogger('test')
logger.addHandler(logging.NullHandler())
return logger
def __get_mocked_wf_configuration(self):
wf_configuration = WfConfigurationObject()
wf_configuration.output_directory = 'test'
wf_configuration.wf_dir = 'test'
wf_configuration.template = 'test'
wf_configuration.incremental_tables = []
wf_configuration.hdfs_deploy_folder = 'test'
return wf_configuration
def __get_mocked_wf_configuration_with_cluster_properties(self):
wf_configuration = WfConfigurationObject()
wf_configuration.output_directory = 'test'
wf_configuration.wf_dir = 'test'
wf_configuration.template = 'test'
wf_configuration.incremental_tables = []
wf_configuration.hdfs_deploy_folder = 'test'
wf_configuration.cluster_id = 'test'
return wf_configuration
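The injector stubbing above relies on `Mock(side_effect=[...])`, which hands out one item from the list per call, in order, so each `injector.get(...)` inside `Anabasii.run` receives the next stubbed dependency. A minimal standalone illustration:

```python
from unittest.mock import Mock

injector = Mock()
# Each call to injector.get(...) consumes the next item in the list.
injector.get = Mock(side_effect=['logger', 'wf_configuration', 'emr_deploy'])

assert injector.get() == 'logger'
assert injector.get() == 'wf_configuration'
assert injector.get() == 'emr_deploy'
```

Once the list is exhausted, a further call raises `StopIteration`, which is why each test builds the `side_effect` list to match exactly the `injector.get` calls its code path makes.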
# === File: py3/tensorflow_code/initializers.py
# === Repo: gulamungon/SEQUENS | License: Apache-1.1
import numpy as np
def init_params_simple_he(sizes, input_mean=[], input_std=[], floatX='float32'):
params_dict = {}
# Default [] means "not supplied"; len() avoids ambiguous array comparisons.
if len(input_mean):
params_dict["input_mean"] = input_mean.astype(floatX)
if len(input_std):
params_dict["input_std"] = input_std.astype(floatX)
for ii in range(1, len(sizes)):
# He initialization: std = sqrt(2 / fan_in)
s = 1.0/np.sqrt(sizes[ii-1]/2.0)
params_dict['W_'+str(ii)] = np.random.randn(sizes[ii-1], sizes[ii]).astype(floatX)*s
for ii in range(1, len(sizes )):
#params_dict[ 'b_bfr_pool_'+str(ii) ] = np.random.random(sizes[ii]).astype(T.config.floatX)*0.0
params_dict[ 'b_'+str(ii) ] = np.zeros(sizes[ii]).astype( floatX )
return params_dict
def init_params_simple_he_uniform(sizes, input_mean=[], input_std=[], floatX='float32', use_bug=False):
params_dict = {}
# Default [] means "not supplied"; len() avoids ambiguous array comparisons.
if len(input_mean):
params_dict["input_mean"] = input_mean.astype(floatX)
if len(input_std):
params_dict["input_std"] = input_std.astype(floatX)
if use_bug:
for ii in range(1, len( sizes )):
print("Buggy init")
params_dict[ 'W_'+str(ii) ] = np.random.uniform(-np.sqrt(6)/sizes[ii-1], np.sqrt(6)/sizes[ii-1],
(sizes[ii-1], sizes[ii]) ).astype( floatX )
else:
print("Bug free init")
for ii in range(1, len( sizes )):
params_dict[ 'W_'+str(ii) ] = np.random.uniform(-np.sqrt(6.0/sizes[ii-1]), np.sqrt(6.0/sizes[ii-1]),
(sizes[ii-1], sizes[ii]) ).astype( floatX )
for ii in range(1, len(sizes )):
params_dict[ 'b_'+str(ii) ] = np.zeros(sizes[ii]).astype( floatX )
return params_dict
def init_params_simple_he_uniform_full_spec(sizes, input_mean=[], input_std=[], floatX='float32', use_bug=False):
params_dict = {}
# Default [] means "not supplied"; len() avoids ambiguous array comparisons.
if len(input_mean):
params_dict["input_mean"] = input_mean.astype(floatX)
if len(input_std):
params_dict["input_std"] = input_std.astype(floatX)
if use_bug:
print("Buggy init")
for ii in range(0, len( sizes ) ):
params_dict[ 'W_'+str(ii +1 ) ] = np.random.uniform(-np.sqrt(6)/sizes[ii][0], np.sqrt(6)/sizes[ii][0],
(sizes[ii][0], sizes[ii][1]) ).astype( floatX )
else:
print("Bug free init")
for ii in range(0, len( sizes ) ):
params_dict[ 'W_'+str(ii +1 ) ] = np.random.uniform(-np.sqrt(6.0/sizes[ii][0]), np.sqrt(6.0/sizes[ii][0]),
(sizes[ii][0], sizes[ii][1]) ).astype( floatX )
for ii in range(0, len(sizes )):
params_dict[ 'b_'+str(ii + 1) ] = np.zeros(sizes[ii][1]).astype( floatX )
#for ii in range(0, len(sizes )):
# params_dict[ 'b_'+str(ii + 1) ] = np.random.uniform(-1, 1, ( sizes[ii][1]) ).astype( floatX )
return params_dict
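A quick standalone sanity check of the scaling used above: the Gaussian He initializer (std `sqrt(2/fan_in)`) and the bug-free uniform variant (limit `sqrt(6/fan_in)`, whose std is `limit/sqrt(3)`) produce weights with approximately the same standard deviation. The layer sizes here are illustrative:

```python
import numpy as np

np.random.seed(0)
fan_in, fan_out = 1000, 1000
target = np.sqrt(2.0 / fan_in)  # He std

# Gaussian He init, as in init_params_simple_he
w_gauss = np.random.randn(fan_in, fan_out) * (1.0 / np.sqrt(fan_in / 2.0))

# Uniform He init, as in the bug-free branch of init_params_simple_he_uniform
limit = np.sqrt(6.0 / fan_in)
w_unif = np.random.uniform(-limit, limit, (fan_in, fan_out))

# Both empirical stds land near the target of sqrt(2/1000) ~= 0.0447.
print(round(float(w_gauss.std()), 3), round(float(w_unif.std()), 3))
```

This also makes the `use_bug=True` branches easy to interpret: dividing `sqrt(6)` by `fan_in` instead of taking `sqrt(6/fan_in)` shrinks the limit by a factor of `sqrt(fan_in)`, which is presumably why the buggy behavior was kept reproducible behind a flag.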
# === File: tests/games/four_keys/allowable_actions_test.py
# === Repo: upkoi/skypond | License: MIT
import math
import skypond
import numpy as np
from skypond.games.four_keys.four_keys_environment import FourKeysEnvironment
from skypond.games.four_keys.four_keys_shared_state import FourKeysSharedState
from skypond.games.four_keys.four_keys_board_items import FourKeysBoardItems
from skypond.games.four_keys.four_keys_actions import FourKeysActions
from common import setup, assert_position, count_keys
def test_top_left_empty_move_down_ok():
envs,shared_state = setup()
env = envs[0]
assert_position(env,shared_state,(0,0))
env.step(FourKeysActions.DOWN)
assert_position(env,shared_state,(1,0))
def test_top_left_empty_move_left_not_ok():
envs,shared_state = setup()
env = envs[0]
assert_position(env,shared_state,(0,0))
env.step(FourKeysActions.LEFT)
assert_position(env,shared_state,(0,0))
def test_top_left_empty_move_up_not_ok():
envs,shared_state = setup()
env = envs[0]
assert_position(env,shared_state,(0,0))
env.step(FourKeysActions.UP)
assert_position(env,shared_state,(0,0))
def test_top_left_empty_move_right_ok():
envs,shared_state = setup()
env = envs[0]
assert_position(env,shared_state,(0,0))
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(0,1))
def test_inside_board_move_down_ok():
envs,shared_state = setup(positions=[(1,1)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.DOWN)
assert_position(env,shared_state,(2,1))
def test_inside_board_move_right_ok():
envs,shared_state = setup(positions=[(1,1)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(1,2))
def test_inside_board_move_up_ok():
envs,shared_state = setup(positions=[(1,1)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.UP)
assert_position(env,shared_state,(0,1))
def test_inside_board_move_left_ok():
envs,shared_state = setup(positions=[(1,1)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.LEFT)
assert_position(env,shared_state,(1,0))
def test_inside_board_wall_block():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,2)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(1,1))
def test_inside_board_wall_multi_step_block():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.RIGHT)
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(1,2))
def test_inside_board_wall_not_block():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(1,2))
def test_key_not_block():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)],additional_keys=[(1,2)])
env = envs[0]
assert_position(env,shared_state,(1,1))
env.step(FourKeysActions.RIGHT)
assert_position(env,shared_state,(1,2))
def test_key_pickup():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)],additional_keys=[(1,2)])
env = envs[0]
assert env.keys == 0
env.step(FourKeysActions.RIGHT)
assert env.keys == 1
def test_key_remove_from_game():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)],additional_keys=[(1,2)])
env = envs[0]
assert len(shared_state.keys) == 5
assert count_keys(shared_state.board) == 5
env.step(FourKeysActions.RIGHT)
assert len(shared_state.keys) == 4
assert count_keys(shared_state.board) == 4
def test_drop_key_add_to_board():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)],additional_keys=[(1,2)])
env = envs[0]
env.step(FourKeysActions.RIGHT) # Pickup key
assert count_keys(shared_state.board) == 4
env.step(FourKeysActions.DROP_KEY)
assert count_keys(shared_state.board) == 5
def test_drop_key_remove_from_inventory():
envs,shared_state = setup(positions=[(1,1)],walls=[(1,3)],additional_keys=[(1,2)])
env = envs[0]
env.step(FourKeysActions.RIGHT) # Pickup key
assert env.keys == 1
env.step(FourKeysActions.DROP_KEY)
assert env.keys == 0
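The movement tests above all exercise one rule: a step off the board or into a wall leaves the agent where it was. A minimal pure-Python sketch of that rule (a hypothetical `step` helper for illustration, not skypond's actual implementation):

```python
def step(pos, delta, size, walls):
    # Return the new (row, col), or the old position if the move is illegal
    r, c = pos[0] + delta[0], pos[1] + delta[1]
    if not (0 <= r < size and 0 <= c < size) or (r, c) in walls:
        return pos
    return (r, c)

print(step((0, 0), (-1, 0), 10, set()))    # (0, 0): blocked by the top edge
print(step((1, 1), (0, 1), 10, {(1, 2)}))  # (1, 1): blocked by a wall
print(step((1, 1), (1, 0), 10, set()))     # (2, 1): legal move down
```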
| 34.81203 | 86 | 0.719006 | 706 | 4,630 | 4.450425 | 0.086402 | 0.175048 | 0.140675 | 0.190325 | 0.88606 | 0.85105 | 0.808721 | 0.714513 | 0.714513 | 0.675366 | 0 | 0.033292 | 0.137149 | 4,630 | 132 | 87 | 35.075758 | 0.753191 | 0.004536 | 0 | 0.765217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321739 | 1 | 0.147826 | false | 0 | 0.069565 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4553dd19284ad5e34c5c04fbbd84e02d5f63eba3 | 2,042 | py | Python | Marketplace/ideafeedPythonBackend/idea/models.py | FoodStepsApp/FoodSteps | 3c048ecdfd5490f435090e50fd7638a518980823 | [
"MIT"
] | null | null | null | Marketplace/ideafeedPythonBackend/idea/models.py | FoodStepsApp/FoodSteps | 3c048ecdfd5490f435090e50fd7638a518980823 | [
"MIT"
] | null | null | null | Marketplace/ideafeedPythonBackend/idea/models.py | FoodStepsApp/FoodSteps | 3c048ecdfd5490f435090e50fd7638a518980823 | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
from mongoengine import (Document, BooleanField, DateTimeField, EmailField, IntField, ListField, StringField)
import datetime
class Ideas(Document):
Idea_Choices = {
'LOCKED_IDEA': 'LOCKED IDEA',
'OPEN_IDEA': 'OPEN IDEA'
}
status = BooleanField(required=True)
idea_owner = EmailField(required=True)
idea_owner_name = StringField(required=True)
idea_genre = StringField(required=True)
idea_headline = StringField(required=True)
idea_description = StringField(required=True)
posted_date = DateTimeField(default=datetime.datetime.utcnow)
idea_field = StringField(required=True)
idea_type = StringField(choices=list(Idea_Choices.keys()), required=True)  # choices must be subscriptable; a dict_keys view is not
comment = StringField()
price = IntField(default=0)
comments = ListField(StringField())
likes = IntField(default=0)
dislikes = IntField(default=0)
votecount = IntField(default=0)
reportAbuseUser = ListField(StringField())
reportAbuseCount = IntField(default=0)
v = IntField(default=0, db_field='__v')  # a bare '__v' class attribute would be name-mangled to _Ideas__v
class Admin(Document):
Idea_Choices = {
'LOCKED_IDEA': 'LOCKED IDEA',
'OPEN_IDEA': 'OPEN IDEA'
}
status = BooleanField(required=True)
idea_owner = EmailField(required=True)
idea_owner_name = StringField(required=True)
idea_genre = StringField(required=True)
idea_headline = StringField(required=True)
idea_description = StringField(required=True)
posted_date = DateTimeField(default=datetime.datetime.utcnow)
idea_field = StringField(required=True)
idea_type = StringField(choices=list(Idea_Choices.keys()), required=True)  # choices must be subscriptable; a dict_keys view is not
comment = StringField()
price = IntField(default=0)
comments = ListField(StringField())
likes = IntField(default=0)
dislikes = IntField(default=0)
votecount = IntField(default=0)
reportAbuseUser = ListField(StringField())
reportAbuseCount = IntField(default=0)
v = IntField(default=0, db_field='__v')  # a bare '__v' class attribute would be name-mangled to _Admin__v
similardescription = StringField()
similarheadline = StringField()
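One subtlety with the `__v` field above: a class attribute with two leading underscores is name-mangled by Python, so the attribute actually created is `_Ideas__v` (or `_Admin__v`), not `__v`. A minimal demonstration with a throwaway class:

```python
class Doc:
    __v = 0  # stored as _Doc__v via name mangling, not as __v

print('_Doc__v' in vars(Doc))  # True
print('__v' in vars(Doc))      # False
```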
| 34.033333 | 71 | 0.713026 | 218 | 2,042 | 6.541284 | 0.233945 | 0.134642 | 0.157083 | 0.151473 | 0.892006 | 0.892006 | 0.892006 | 0.892006 | 0.892006 | 0.892006 | 0 | 0.008393 | 0.183154 | 2,042 | 59 | 72 | 34.610169 | 0.846523 | 0.011753 | 0 | 0.830189 | 0 | 0 | 0.039683 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.056604 | 0 | 0.886792 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
458281b36c8c5f5d5a97126104e58945342d9375 | 172 | py | Python | lab1-ex4.py | LiliHavingFun/fun1-python | 947ec5396a65ce1a21faf31d70bb3ffb98c3304c | [
"Unlicense"
] | null | null | null | lab1-ex4.py | LiliHavingFun/fun1-python | 947ec5396a65ce1a21faf31d70bb3ffb98c3304c | [
"Unlicense"
] | null | null | null | lab1-ex4.py | LiliHavingFun/fun1-python | 947ec5396a65ce1a21faf31d70bb3ffb98c3304c | [
"Unlicense"
] | null | null | null | import re
def occurence(search_this_string, in_this_string):
return in_this_string.count(search_this_string)
print(occurence("hello", "hello hellohelloworld")) # 3
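`str.count` only counts non-overlapping occurrences. When overlapping matches matter, a zero-width lookahead regex counts every start position (a hypothetical variant shown for contrast):

```python
import re

def occurrence_overlapping(needle, haystack):
    # A zero-width lookahead matches at every start position, including overlaps
    return len(re.findall('(?=' + re.escape(needle) + ')', haystack))

print(occurrence_overlapping('aa', 'aaaa'))  # 3, while 'aaaa'.count('aa') == 2
```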
| 19.111111 | 54 | 0.784884 | 24 | 172 | 5.291667 | 0.583333 | 0.314961 | 0.251969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006579 | 0.116279 | 172 | 8 | 55 | 21.5 | 0.828947 | 0.005814 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
45d725da4b70b0180d9235b820cd8b84a95d190d | 23,096 | py | Python | tools/migrations/0001_initial.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 4 | 2019-03-28T06:42:17.000Z | 2021-06-06T13:10:51.000Z | tools/migrations/0001_initial.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 177 | 2018-09-28T14:21:56.000Z | 2022-03-30T21:45:26.000Z | tools/migrations/0001_initial.py | IATI/new-website | b90783e32d19ac4c821c5ea018a52997a11b5286 | [
"MIT"
] | 8 | 2018-10-25T20:43:10.000Z | 2022-03-17T14:19:27.000Z | # Generated by Django 2.0.12 on 2019-12-16 15:51
from django.db import migrations, models
import django.db.models.deletion
import home.models
import modelcluster.fields
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.documents.blocks
import wagtail.images.blocks
class Migration(migrations.Migration):
initial = True
dependencies = [
('wagtailcore', '0040_page_draft_title'),
('wagtailimages', '0019_delete_filter'),
]
operations = [
migrations.CreateModel(
name='FeaturedTool',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='ToolPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('heading', models.CharField(blank=True, max_length=255, null=True)),
('heading_en', models.CharField(blank=True, max_length=255, null=True)),
('heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('heading_es', models.CharField(blank=True, max_length=255, null=True)),
('heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('excerpt', models.TextField(blank=True, null=True)),
('excerpt_en', models.TextField(blank=True, null=True)),
('excerpt_fr', models.TextField(blank=True, null=True)),
('excerpt_es', models.TextField(blank=True, null=True)),
('excerpt_pt', models.TextField(blank=True, null=True)),
('content_editor', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_en', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_fr', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_es', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_pt', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('listing_description', models.CharField(blank=True, help_text='Optional: short description to appear on the listing page if this tool is featured', max_length=255)),
('listing_description_en', models.CharField(blank=True, help_text='Optional: short description to appear on the listing page if this tool is featured', max_length=255, null=True)),
('listing_description_fr', models.CharField(blank=True, help_text='Optional: short description to appear on the listing page if this tool is featured', max_length=255, null=True)),
('listing_description_es', models.CharField(blank=True, help_text='Optional: short description to appear on the listing page if this tool is featured', max_length=255, null=True)),
('listing_description_pt', models.CharField(blank=True, help_text='Optional: short description to appear on the listing page if this tool is featured', max_length=255, null=True)),
('external_url', models.URLField(blank=True, help_text='Optional: external URL of the tool', max_length=255)),
('button_label', models.CharField(blank=True, help_text='Optional: label for the external URL button', max_length=255)),
('button_label_en', models.CharField(blank=True, help_text='Optional: label for the external URL button', max_length=255, null=True)),
('button_label_fr', models.CharField(blank=True, help_text='Optional: label for the external URL button', max_length=255, null=True)),
('button_label_es', models.CharField(blank=True, help_text='Optional: label for the external URL button', max_length=255, null=True)),
('button_label_pt', models.CharField(blank=True, help_text='Optional: label for the external URL button', max_length=255, null=True)),
('logo', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('social_media_image', models.ForeignKey(blank=True, help_text='This image will be used as the image for social media sharing cards.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.CreateModel(
name='ToolsListingPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('heading', models.CharField(blank=True, max_length=255, null=True)),
('heading_en', models.CharField(blank=True, max_length=255, null=True)),
('heading_fr', models.CharField(blank=True, max_length=255, null=True)),
('heading_es', models.CharField(blank=True, max_length=255, null=True)),
('heading_pt', models.CharField(blank=True, max_length=255, null=True)),
('excerpt', models.TextField(blank=True, null=True)),
('excerpt_en', models.TextField(blank=True, null=True)),
('excerpt_fr', models.TextField(blank=True, null=True)),
('excerpt_es', models.TextField(blank=True, null=True)),
('excerpt_pt', models.TextField(blank=True, null=True)),
('content_editor', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_en', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_fr', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_es', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('content_editor_pt', wagtail.core.fields.StreamField((('h2', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h3', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('h4', wagtail.core.blocks.CharBlock(classname='title', icon='title')), ('intro', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('paragraph', wagtail.core.blocks.RichTextBlock(icon='pilcrow')), ('image_figure', wagtail.core.blocks.StructBlock((('image', wagtail.images.blocks.ImageChooserBlock()), ('alignment', home.models.ImageAlignmentChoiceBlock()), ('caption', wagtail.core.blocks.RichTextBlock(required=False))), icon='image', label='Image figure')), ('pullquote', wagtail.core.blocks.StructBlock((('quote', wagtail.core.blocks.TextBlock('quote title')),))), ('aligned_html', wagtail.core.blocks.StructBlock((('html', wagtail.core.blocks.RawHTMLBlock()), ('alignment', home.models.HTMLAlignmentChoiceBlock())), icon='code', label='Raw HTML')), ('document_box', wagtail.core.blocks.StreamBlock((('document_box_heading', wagtail.core.blocks.CharBlock(classname='title', help_text='Only one heading per box.', icon='title', required=False)), ('document', wagtail.documents.blocks.DocumentChooserBlock(icon='doc-full-inverse', required=False))), icon='doc-full-inverse')), ('anchor_point', wagtail.core.blocks.CharBlock(help_text='Custom anchor points are expected to precede other content.', icon='order-down'))), blank=True, null=True)),
('highlight_title', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255)),
('highlight_title_en', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_title_fr', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_title_es', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_title_pt', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_content', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255)),
('highlight_content_en', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_content_fr', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_content_es', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('highlight_content_pt', models.CharField(blank=True, help_text='Optional: title for the highlight panel displayed after featured tools', max_length=255, null=True)),
('header_image', models.ForeignKey(blank=True, help_text='This is the image that will appear in the header banner at the top of the page. If no image is added a placeholder image will be used.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
('social_media_image', models.ForeignKey(blank=True, help_text='This image will be used as the image for social media sharing cards.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.AddField(
model_name='featuredtool',
name='page',
field=modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='featured_tools', to='tools.ToolsListingPage'),
),
migrations.AddField(
model_name='featuredtool',
name='tool',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='tools.ToolPage'),
),
]
| 192.466667 | 1,462 | 0.712721 | 2,755 | 23,096 | 5.884211 | 0.065699 | 0.10314 | 0.147863 | 0.080192 | 0.940843 | 0.937203 | 0.930541 | 0.930541 | 0.926963 | 0.921041 | 0 | 0.007167 | 0.111967 | 23,096 | 119 | 1,463 | 194.084034 | 0.783228 | 0.001992 | 0 | 0.535714 | 1 | 0.008929 | 0.279677 | 0.005684 | 0.026786 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
45df8f2438be81c2120e19aaae9091d2e8b60e4d | 8,400 | py | Python | tests/testGlobalEM1D_TD_jac_layers.py | igotchalk/simpegEM1D | 8f2233fc86bf26f14fe9c45f28c6b22ff54fafdc | [
"MIT"
] | 11 | 2015-04-11T03:35:45.000Z | 2022-02-26T02:04:18.000Z | tests/testGlobalEM1D_TD_jac_layers.py | igotchalk/simpegEM1D | 8f2233fc86bf26f14fe9c45f28c6b22ff54fafdc | [
"MIT"
] | 38 | 2018-04-21T23:07:29.000Z | 2022-01-11T07:22:27.000Z | tests/testGlobalEM1D_TD_jac_layers.py | igotchalk/simpegEM1D | 8f2233fc86bf26f14fe9c45f28c6b22ff54fafdc | [
"MIT"
] | 13 | 2015-07-15T21:54:33.000Z | 2021-11-30T09:18:54.000Z | from __future__ import print_function
import unittest
import numpy as np
from simpegEM1D import (
GlobalEM1DProblemTD, GlobalEM1DSurveyTD,
get_vertical_discretization_time
)
from SimPEG import (
regularization, Inversion, InvProblem,
DataMisfit, Utils, Mesh, Maps, Optimization,
Tests
)
from simpegEM1D import skytem_HM_2015
wave = skytem_HM_2015()
np.random.seed(41)
class GlobalEM1DTD(unittest.TestCase):
def setUp(self, parallel=True):
time = np.logspace(-6, -3, 21)
hz = get_vertical_discretization_time(
time, facter_tmax=0.5, factor_tmin=10.
)
time_input_currents = wave.current_times[-7:]
input_currents = wave.currents[-7:]
n_sounding = 5
dx = 20.
hx = np.ones(n_sounding) * dx
mesh = Mesh.TensorMesh([hx, hz], x0='00')
inds = mesh.gridCC[:, 1] < 25
inds_1 = mesh.gridCC[:, 1] < 50
sigma = np.ones(mesh.nC) * 1./100.
sigma[inds_1] = 1./10.
sigma[inds] = 1./50.
sigma_em1d = sigma.reshape(mesh.vnC, order='F').flatten()
mSynth = np.log(sigma_em1d)
x = mesh.vectorCCx
y = np.zeros_like(x)
z = np.ones_like(x) * 30.
rx_locations = np.c_[x, y, z]
src_locations = np.c_[x, y, z]
topo = np.c_[x, y, z-30.].astype(float)
n_sounding = rx_locations.shape[0]
rx_type_global = np.array(
["dBzdt"], dtype=str
).repeat(n_sounding, axis=0)
field_type_global = np.array(
['secondary'], dtype=str
).repeat(n_sounding, axis=0)
wave_type_global = np.array(
['general'], dtype=str
).repeat(n_sounding, axis=0)
time_global = [time for i in range(n_sounding)]
src_type_global = np.array(
["CircularLoop"], dtype=str
).repeat(n_sounding, axis=0)
a_global = np.array(
[13.], dtype=float
).repeat(n_sounding, axis=0)
input_currents_global = [
input_currents for i in range(n_sounding)
]
time_input_currents_global = [
time_input_currents for i in range(n_sounding)
]
mapping = Maps.ExpMap(mesh)
survey = GlobalEM1DSurveyTD(
rx_locations=rx_locations,
src_locations=src_locations,
topo=topo,
time=time_global,
src_type=src_type_global,
rx_type=rx_type_global,
field_type=field_type_global,
wave_type=wave_type_global,
a=a_global,
input_currents=input_currents_global,
time_input_currents=time_input_currents_global
)
problem = GlobalEM1DProblemTD(
mesh, sigmaMap=mapping, hz=hz, parallel=parallel, n_cpu=2
)
problem.pair(survey)
survey.makeSyntheticData(mSynth)
# Now set up the problem to do some minimization
dmis = DataMisfit.l2_DataMisfit(survey)
reg = regularization.Tikhonov(mesh)
opt = Optimization.InexactGaussNewton(
maxIterLS=20, maxIter=10, tolF=1e-6,
tolX=1e-6, tolG=1e-6, maxIterCG=6
)
invProb = InvProblem.BaseInvProblem(dmis, reg, opt, beta=0.)
inv = Inversion.BaseInversion(invProb)
self.inv = inv
self.reg = reg
self.p = problem
self.mesh = mesh
self.m0 = mSynth
self.survey = survey
self.dmis = dmis
def test_misfit(self):
passed = Tests.checkDerivative(
lambda m: (
self.survey.dpred(m),
lambda mx: self.p.Jvec(self.m0, mx)
),
self.m0,
plotIt=False,
num=3
)
self.assertTrue(passed)
def test_adjoint(self):
# Adjoint Test
v = np.random.rand(self.mesh.nC)
w = np.random.rand(self.survey.dobs.shape[0])
wtJv = w.dot(self.p.Jvec(self.m0, v))
vtJtw = v.dot(self.p.Jtvec(self.m0, w))
passed = np.abs(wtJv - vtJtw) < 1e-10
print('Adjoint Test', np.abs(wtJv - vtJtw), passed)
self.assertTrue(passed)
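    # The adjoint test above relies on the dot-product identity
    # w.(J v) == v.(J^T w), which any consistent Jvec/Jtvec pair must
    # satisfy. A minimal self-contained sketch of the same check, using an
    # explicit matrix J in place of the implicit operators (all names here
    # are illustrative, not part of simpegEM1D):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((7, 5))   # stand-in for the sensitivity operator
v = rng.standard_normal(5)        # model-space vector
w = rng.standard_normal(7)        # data-space vector

wtJv = w.dot(J.dot(v))            # w^T (J v)
vtJtw = v.dot(J.T.dot(w))         # v^T (J^T w)

# For an exact adjoint pair the two scalars agree to machine precision,
# which is why the test uses a tight 1e-10 tolerance.
assert abs(wtJv - vtJtw) < 1e-10
```

    # The same tolerance is reasonable here because both scalars are O(1)
    # and differ only by floating-point rounding.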
def test_dataObj(self):
passed = Tests.checkDerivative(
lambda m: [self.dmis(m), self.dmis.deriv(m)],
self.m0,
plotIt=False,
num=3
)
self.assertTrue(passed)
class GlobalEM1DTD_Height(unittest.TestCase):
def setUp(self, parallel=True):
time = np.logspace(-6, -3, 21)
time_input_currents = wave.current_times[-7:]
input_currents = wave.currents[-7:]
hz = get_vertical_discretization_time(
time, facter_tmax=0.5, factor_tmin=10.
)
hz = np.r_[1.]
n_sounding = 10
dx = 20.
hx = np.ones(n_sounding) * dx
e = np.ones(n_sounding)
mSynth = np.r_[e*np.log(1./100.), e*20]
x = np.arange(n_sounding)
y = np.zeros_like(x)
z = np.ones_like(x) * 30.
rx_locations = np.c_[x, y, z]
src_locations = np.c_[x, y, z]
topo = np.c_[x, y, z-30.].astype(float)
rx_type_global = np.array(
["dBzdt"], dtype=str
).repeat(n_sounding, axis=0)
field_type_global = np.array(
['secondary'], dtype=str
).repeat(n_sounding, axis=0)
wave_type_global = np.array(
['general'], dtype=str
).repeat(n_sounding, axis=0)
time_global = [time for i in range(n_sounding)]
src_type_global = np.array(
["CircularLoop"], dtype=str
).repeat(n_sounding, axis=0)
a_global = np.array(
[13.], dtype=float
).repeat(n_sounding, axis=0)
input_currents_global = [
input_currents for i in range(n_sounding)
]
time_input_currents_global = [
time_input_currents for i in range(n_sounding)
]
wires = Maps.Wires(('sigma', n_sounding),('h', n_sounding))
expmap = Maps.ExpMap(nP=n_sounding)
sigmaMap = expmap * wires.sigma
survey = GlobalEM1DSurveyTD(
rx_locations=rx_locations,
src_locations=src_locations,
topo=topo,
time=time_global,
src_type=src_type_global,
rx_type=rx_type_global,
field_type=field_type_global,
wave_type=wave_type_global,
a=a_global,
input_currents=input_currents_global,
time_input_currents=time_input_currents_global,
half_switch=True
)
problem = GlobalEM1DProblemTD(
[], sigmaMap=sigmaMap, hMap=wires.h, hz=hz, parallel=parallel, n_cpu=2
)
problem.pair(survey)
survey.makeSyntheticData(mSynth)
# Now set up the problem to do some minimization
mesh = Mesh.TensorMesh([int(n_sounding * 2)])
dmis = DataMisfit.l2_DataMisfit(survey)
reg = regularization.Tikhonov(mesh)
opt = Optimization.InexactGaussNewton(
maxIterLS=20, maxIter=10, tolF=1e-6,
tolX=1e-6, tolG=1e-6, maxIterCG=6
)
invProb = InvProblem.BaseInvProblem(dmis, reg, opt, beta=0.)
inv = Inversion.BaseInversion(invProb)
self.inv = inv
self.reg = reg
self.p = problem
self.mesh = mesh
self.m0 = mSynth
self.survey = survey
self.dmis = dmis
def test_misfit(self):
passed = Tests.checkDerivative(
lambda m: (
self.survey.dpred(m),
lambda mx: self.p.Jvec(self.m0, mx)
),
self.m0,
plotIt=False,
num=3
)
self.assertTrue(passed)
def test_adjoint(self):
# Adjoint Test
v = np.random.rand(self.mesh.nC)
w = np.random.rand(self.survey.dobs.shape[0])
wtJv = w.dot(self.p.Jvec(self.m0, v))
vtJtw = v.dot(self.p.Jtvec(self.m0, w))
passed = np.abs(wtJv - vtJtw) < 1e-10
print('Adjoint Test', np.abs(wtJv - vtJtw), passed)
self.assertTrue(passed)
def test_dataObj(self):
passed = Tests.checkDerivative(
lambda m: [self.dmis(m), self.dmis.deriv(m)],
self.m0,
plotIt=False,
num=3
)
self.assertTrue(passed)
if __name__ == '__main__':
unittest.main()
| 30.656934 | 82 | 0.566905 | 1,032 | 8,400 | 4.434109 | 0.185078 | 0.053103 | 0.03715 | 0.041521 | 0.803322 | 0.803322 | 0.803322 | 0.803322 | 0.793269 | 0.793269 | 0 | 0.02688 | 0.322381 | 8,400 | 273 | 83 | 30.769231 | 0.777056 | 0.014167 | 0 | 0.738197 | 0 | 0 | 0.012929 | 0 | 0 | 0 | 0 | 0 | 0.025751 | 1 | 0.034335 | false | 0.060086 | 0.025751 | 0 | 0.06867 | 0.012876 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
b33d5c439b6b4c4710d3be523e0312ec53168d00 | 103 | py | Python | notion_utilities/__init__.py | thomashirtz/edit-notion | 45e48b627a377e935f43bb707b4b4baff9fc7f10 | [
"Apache-2.0"
] | 2 | 2022-03-15T01:07:00.000Z | 2022-03-19T16:41:55.000Z | notion_utilities/__init__.py | thomashirtz/edit-notion | 45e48b627a377e935f43bb707b4b4baff9fc7f10 | [
"Apache-2.0"
] | null | null | null | notion_utilities/__init__.py | thomashirtz/edit-notion | 45e48b627a377e935f43bb707b4b4baff9fc7f10 | [
"Apache-2.0"
] | 1 | 2022-03-14T10:46:25.000Z | 2022-03-14T10:46:25.000Z | from notion_utilities.apply import apply_to_database
from notion_utilities.query import query_database
| 34.333333 | 52 | 0.902913 | 15 | 103 | 5.866667 | 0.533333 | 0.227273 | 0.431818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07767 | 103 | 2 | 53 | 51.5 | 0.926316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b3422d9f52f6412cdcd5fa7a54b4994749a464f0 | 37,708 | py | Python | sdk/python/pulumi_exoscale/sks_cluster.py | secustor/pulumi-exoscale | c805e4bbf896526e46ed168bc96c9c0a3f82adf8 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_exoscale/sks_cluster.py | secustor/pulumi-exoscale | c805e4bbf896526e46ed168bc96c9c0a3f82adf8 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_exoscale/sks_cluster.py | secustor/pulumi-exoscale | c805e4bbf896526e46ed168bc96c9c0a3f82adf8 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['SKSClusterArgs', 'SKSCluster']
@pulumi.input_type
class SKSClusterArgs:
def __init__(__self__, *,
zone: pulumi.Input[str],
addons: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_upgrade: Optional[pulumi.Input[bool]] = None,
cni: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exoscale_ccm: Optional[pulumi.Input[bool]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
metrics_server: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input['SKSClusterOidcArgs']] = None,
service_level: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a SKSCluster resource.
:param pulumi.Input[str] zone: The name of the [zone][zone] to deploy the SKS cluster into.
:param pulumi.Input[bool] auto_upgrade: Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
:param pulumi.Input[str] cni: The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
:param pulumi.Input[str] description: The description of the SKS cluster.
:param pulumi.Input[bool] exoscale_ccm: Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A map of key/value labels.
:param pulumi.Input[bool] metrics_server: Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[str] name: The name of the SKS cluster.
:param pulumi.Input['SKSClusterOidcArgs'] oidc: An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
:param pulumi.Input[str] service_level: The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
:param pulumi.Input[str] version: The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
"""
pulumi.set(__self__, "zone", zone)
if addons is not None:
warnings.warn("""This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""", DeprecationWarning)
pulumi.log.warn("""addons is deprecated: This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""")
if addons is not None:
pulumi.set(__self__, "addons", addons)
if auto_upgrade is not None:
pulumi.set(__self__, "auto_upgrade", auto_upgrade)
if cni is not None:
pulumi.set(__self__, "cni", cni)
if description is not None:
pulumi.set(__self__, "description", description)
if exoscale_ccm is not None:
pulumi.set(__self__, "exoscale_ccm", exoscale_ccm)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if metrics_server is not None:
pulumi.set(__self__, "metrics_server", metrics_server)
if name is not None:
pulumi.set(__self__, "name", name)
if oidc is not None:
pulumi.set(__self__, "oidc", oidc)
if service_level is not None:
pulumi.set(__self__, "service_level", service_level)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def zone(self) -> pulumi.Input[str]:
"""
The name of the [zone][zone] to deploy the SKS cluster into.
"""
return pulumi.get(self, "zone")
@zone.setter
def zone(self, value: pulumi.Input[str]):
pulumi.set(self, "zone", value)
@property
@pulumi.getter
def addons(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "addons")
@addons.setter
def addons(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "addons", value)
@property
@pulumi.getter(name="autoUpgrade")
def auto_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
"""
return pulumi.get(self, "auto_upgrade")
@auto_upgrade.setter
def auto_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_upgrade", value)
@property
@pulumi.getter
def cni(self) -> Optional[pulumi.Input[str]]:
"""
The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
"""
return pulumi.get(self, "cni")
@cni.setter
def cni(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cni", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the SKS cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="exoscaleCcm")
def exoscale_ccm(self) -> Optional[pulumi.Input[bool]]:
"""
Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
"""
return pulumi.get(self, "exoscale_ccm")
@exoscale_ccm.setter
def exoscale_ccm(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "exoscale_ccm", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of key/value labels.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="metricsServer")
def metrics_server(self) -> Optional[pulumi.Input[bool]]:
"""
Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
"""
return pulumi.get(self, "metrics_server")
@metrics_server.setter
def metrics_server(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "metrics_server", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the SKS cluster.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def oidc(self) -> Optional[pulumi.Input['SKSClusterOidcArgs']]:
"""
An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
"""
return pulumi.get(self, "oidc")
@oidc.setter
def oidc(self, value: Optional[pulumi.Input['SKSClusterOidcArgs']]):
pulumi.set(self, "oidc", value)
@property
@pulumi.getter(name="serviceLevel")
def service_level(self) -> Optional[pulumi.Input[str]]:
"""
The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
"""
return pulumi.get(self, "service_level")
@service_level.setter
def service_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_level", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class _SKSClusterState:
def __init__(__self__, *,
addons: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_upgrade: Optional[pulumi.Input[bool]] = None,
cni: Optional[pulumi.Input[str]] = None,
created_at: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
endpoint: Optional[pulumi.Input[str]] = None,
exoscale_ccm: Optional[pulumi.Input[bool]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
metrics_server: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
nodepools: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
oidc: Optional[pulumi.Input['SKSClusterOidcArgs']] = None,
service_level: Optional[pulumi.Input[str]] = None,
state: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None,
zone: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering SKSCluster resources.
:param pulumi.Input[bool] auto_upgrade: Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
:param pulumi.Input[str] cni: The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
:param pulumi.Input[str] created_at: The creation date of the SKS cluster.
:param pulumi.Input[str] description: The description of the SKS cluster.
:param pulumi.Input[str] endpoint: The Kubernetes public API endpoint of the SKS cluster.
:param pulumi.Input[bool] exoscale_ccm: Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A map of key/value labels.
:param pulumi.Input[bool] metrics_server: Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[str] name: The name of the SKS cluster.
:param pulumi.Input[Sequence[pulumi.Input[str]]] nodepools: The list of [SKS Nodepools][r-sks_nodepool] (IDs) attached to the SKS cluster.
:param pulumi.Input['SKSClusterOidcArgs'] oidc: An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
:param pulumi.Input[str] service_level: The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
:param pulumi.Input[str] state: The current state of the SKS cluster.
:param pulumi.Input[str] version: The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
:param pulumi.Input[str] zone: The name of the [zone][zone] to deploy the SKS cluster into.
"""
if addons is not None:
warnings.warn("""This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""", DeprecationWarning)
pulumi.log.warn("""addons is deprecated: This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""")
if addons is not None:
pulumi.set(__self__, "addons", addons)
if auto_upgrade is not None:
pulumi.set(__self__, "auto_upgrade", auto_upgrade)
if cni is not None:
pulumi.set(__self__, "cni", cni)
if created_at is not None:
pulumi.set(__self__, "created_at", created_at)
if description is not None:
pulumi.set(__self__, "description", description)
if endpoint is not None:
pulumi.set(__self__, "endpoint", endpoint)
if exoscale_ccm is not None:
pulumi.set(__self__, "exoscale_ccm", exoscale_ccm)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if metrics_server is not None:
pulumi.set(__self__, "metrics_server", metrics_server)
if name is not None:
pulumi.set(__self__, "name", name)
if nodepools is not None:
pulumi.set(__self__, "nodepools", nodepools)
if oidc is not None:
pulumi.set(__self__, "oidc", oidc)
if service_level is not None:
pulumi.set(__self__, "service_level", service_level)
if state is not None:
pulumi.set(__self__, "state", state)
if version is not None:
pulumi.set(__self__, "version", version)
if zone is not None:
pulumi.set(__self__, "zone", zone)
@property
@pulumi.getter
def addons(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "addons")
@addons.setter
def addons(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "addons", value)
@property
@pulumi.getter(name="autoUpgrade")
def auto_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
"""
return pulumi.get(self, "auto_upgrade")
@auto_upgrade.setter
def auto_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_upgrade", value)
@property
@pulumi.getter
def cni(self) -> Optional[pulumi.Input[str]]:
"""
The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
"""
return pulumi.get(self, "cni")
@cni.setter
def cni(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cni", value)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[pulumi.Input[str]]:
"""
The creation date of the SKS cluster.
"""
return pulumi.get(self, "created_at")
@created_at.setter
def created_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "created_at", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the SKS cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def endpoint(self) -> Optional[pulumi.Input[str]]:
"""
The Kubernetes public API endpoint of the SKS cluster.
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter(name="exoscaleCcm")
def exoscale_ccm(self) -> Optional[pulumi.Input[bool]]:
"""
Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
"""
return pulumi.get(self, "exoscale_ccm")
@exoscale_ccm.setter
def exoscale_ccm(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "exoscale_ccm", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of key/value labels.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="metricsServer")
def metrics_server(self) -> Optional[pulumi.Input[bool]]:
"""
Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
"""
return pulumi.get(self, "metrics_server")
@metrics_server.setter
def metrics_server(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "metrics_server", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the SKS cluster.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def nodepools(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The list of [SKS Nodepools][r-sks_nodepool] (IDs) attached to the SKS cluster.
"""
return pulumi.get(self, "nodepools")
@nodepools.setter
def nodepools(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "nodepools", value)
@property
@pulumi.getter
def oidc(self) -> Optional[pulumi.Input['SKSClusterOidcArgs']]:
"""
An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
"""
return pulumi.get(self, "oidc")
@oidc.setter
def oidc(self, value: Optional[pulumi.Input['SKSClusterOidcArgs']]):
pulumi.set(self, "oidc", value)
@property
@pulumi.getter(name="serviceLevel")
def service_level(self) -> Optional[pulumi.Input[str]]:
"""
The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
"""
return pulumi.get(self, "service_level")
@service_level.setter
def service_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_level", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
The current state of the SKS cluster.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@property
@pulumi.getter
def zone(self) -> Optional[pulumi.Input[str]]:
"""
The name of the [zone][zone] to deploy the SKS cluster into.
"""
return pulumi.get(self, "zone")
@zone.setter
def zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zone", value)
class SKSCluster(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
addons: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_upgrade: Optional[pulumi.Input[bool]] = None,
cni: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exoscale_ccm: Optional[pulumi.Input[bool]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
metrics_server: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input[pulumi.InputType['SKSClusterOidcArgs']]] = None,
service_level: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None,
zone: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Exoscale [SKS][sks-doc] cluster resource. This can be used to create, modify, and delete SKS clusters.
## Example Usage
```python
import pulumi
import pulumi_exoscale as exoscale
zone = "de-fra-1"
prod = exoscale.SKSCluster("prod",
zone=zone,
version="1.20.2",
labels={
"env": "prod",
})
pulumi.export("sksEndpoint", prod.endpoint)
```
## Import
        An existing SKS cluster can be imported as a resource by specifying `ID@ZONE`:
```sh
$ pulumi import exoscale:index/sKSCluster:SKSCluster example eb556678-ec59-4be6-8c54-0406ae0f6da6@de-fra-1
```
        [cni]: https://www.cni.dev/
        [exo-ccm]: https://github.com/exoscale/exoscale-cloud-controller-manager
        [k8s-ms]: https://github.com/kubernetes-sigs/metrics-server
        [r-sks_nodepool]: sks_nodepool.html
        [sks-doc]: https://community.exoscale.com/documentation/sks/
        [zone]: https://www.exoscale.com/datacenters/
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] auto_upgrade: Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
:param pulumi.Input[str] cni: The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
:param pulumi.Input[str] description: The description of the SKS cluster.
:param pulumi.Input[bool] exoscale_ccm: Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A map of key/value labels.
:param pulumi.Input[bool] metrics_server: Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
:param pulumi.Input[str] name: The name of the SKS cluster.
:param pulumi.Input[pulumi.InputType['SKSClusterOidcArgs']] oidc: An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
:param pulumi.Input[str] service_level: The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
:param pulumi.Input[str] version: The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
:param pulumi.Input[str] zone: The name of the [zone][zone] to deploy the SKS cluster into.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SKSClusterArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Exoscale [SKS][sks-doc] cluster resource. This can be used to create, modify, and delete SKS clusters.
## Example Usage
```python
import pulumi
import pulumi_exoscale as exoscale
zone = "de-fra-1"
prod = exoscale.SKSCluster("prod",
zone=zone,
version="1.20.2",
labels={
"env": "prod",
})
pulumi.export("sksEndpoint", prod.endpoint)
```
## Import
        An existing SKS cluster can be imported as a resource by specifying `ID@ZONE`:
```sh
$ pulumi import exoscale:index/sKSCluster:SKSCluster example eb556678-ec59-4be6-8c54-0406ae0f6da6@de-fra-1
```
        [cni]: https://www.cni.dev/
        [exo-ccm]: https://github.com/exoscale/exoscale-cloud-controller-manager
        [k8s-ms]: https://github.com/kubernetes-sigs/metrics-server
        [r-sks_nodepool]: sks_nodepool.html
        [sks-doc]: https://community.exoscale.com/documentation/sks/
        [zone]: https://www.exoscale.com/datacenters/
:param str resource_name: The name of the resource.
:param SKSClusterArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SKSClusterArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
addons: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
auto_upgrade: Optional[pulumi.Input[bool]] = None,
cni: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exoscale_ccm: Optional[pulumi.Input[bool]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
metrics_server: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
oidc: Optional[pulumi.Input[pulumi.InputType['SKSClusterOidcArgs']]] = None,
service_level: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None,
zone: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SKSClusterArgs.__new__(SKSClusterArgs)
if addons is not None and not opts.urn:
warnings.warn("""This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""", DeprecationWarning)
pulumi.log.warn("""addons is deprecated: This attribute has been replaced by `exoscale_ccm`/`metrics_server` attributes, it will be removed in a future release.""")
__props__.__dict__["addons"] = addons
__props__.__dict__["auto_upgrade"] = auto_upgrade
__props__.__dict__["cni"] = cni
__props__.__dict__["description"] = description
__props__.__dict__["exoscale_ccm"] = exoscale_ccm
__props__.__dict__["labels"] = labels
__props__.__dict__["metrics_server"] = metrics_server
__props__.__dict__["name"] = name
__props__.__dict__["oidc"] = oidc
__props__.__dict__["service_level"] = service_level
__props__.__dict__["version"] = version
if zone is None and not opts.urn:
raise TypeError("Missing required property 'zone'")
__props__.__dict__["zone"] = zone
__props__.__dict__["created_at"] = None
__props__.__dict__["endpoint"] = None
__props__.__dict__["nodepools"] = None
__props__.__dict__["state"] = None
super(SKSCluster, __self__).__init__(
'exoscale:index/sKSCluster:SKSCluster',
resource_name,
__props__,
opts)

    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None,
            addons: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
            auto_upgrade: Optional[pulumi.Input[bool]] = None,
            cni: Optional[pulumi.Input[str]] = None,
            created_at: Optional[pulumi.Input[str]] = None,
            description: Optional[pulumi.Input[str]] = None,
            endpoint: Optional[pulumi.Input[str]] = None,
            exoscale_ccm: Optional[pulumi.Input[bool]] = None,
            labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
            metrics_server: Optional[pulumi.Input[bool]] = None,
            name: Optional[pulumi.Input[str]] = None,
            nodepools: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
            oidc: Optional[pulumi.Input[pulumi.InputType['SKSClusterOidcArgs']]] = None,
            service_level: Optional[pulumi.Input[str]] = None,
            state: Optional[pulumi.Input[str]] = None,
            version: Optional[pulumi.Input[str]] = None,
            zone: Optional[pulumi.Input[str]] = None) -> 'SKSCluster':
        """
        Get an existing SKSCluster resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[bool] auto_upgrade: Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
        :param pulumi.Input[str] cni: The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
        :param pulumi.Input[str] created_at: The creation date of the SKS cluster.
        :param pulumi.Input[str] description: The description of the SKS cluster.
        :param pulumi.Input[str] endpoint: The Kubernetes public API endpoint of the SKS cluster.
        :param pulumi.Input[bool] exoscale_ccm: Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A map of key/value labels.
        :param pulumi.Input[bool] metrics_server: Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
        :param pulumi.Input[str] name: The name of the SKS cluster.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] nodepools: The list of [SKS Nodepools][r-sks_nodepool] (IDs) attached to the SKS cluster.
        :param pulumi.Input[pulumi.InputType['SKSClusterOidcArgs']] oidc: An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
        :param pulumi.Input[str] service_level: The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
        :param pulumi.Input[str] state: The current state of the SKS cluster.
        :param pulumi.Input[str] version: The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
        :param pulumi.Input[str] zone: The name of the [zone][zone] to deploy the SKS cluster into.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = _SKSClusterState.__new__(_SKSClusterState)

        __props__.__dict__["addons"] = addons
        __props__.__dict__["auto_upgrade"] = auto_upgrade
        __props__.__dict__["cni"] = cni
        __props__.__dict__["created_at"] = created_at
        __props__.__dict__["description"] = description
        __props__.__dict__["endpoint"] = endpoint
        __props__.__dict__["exoscale_ccm"] = exoscale_ccm
        __props__.__dict__["labels"] = labels
        __props__.__dict__["metrics_server"] = metrics_server
        __props__.__dict__["name"] = name
        __props__.__dict__["nodepools"] = nodepools
        __props__.__dict__["oidc"] = oidc
        __props__.__dict__["service_level"] = service_level
        __props__.__dict__["state"] = state
        __props__.__dict__["version"] = version
        __props__.__dict__["zone"] = zone
        return SKSCluster(resource_name, opts=opts, __props__=__props__)

    @property
    @pulumi.getter
    def addons(self) -> pulumi.Output[Sequence[str]]:
        return pulumi.get(self, "addons")

    @property
    @pulumi.getter(name="autoUpgrade")
    def auto_upgrade(self) -> pulumi.Output[Optional[bool]]:
        """
        Enable automatic upgrading of the SKS cluster control plane Kubernetes version (default: `false`).
        """
        return pulumi.get(self, "auto_upgrade")

    @property
    @pulumi.getter
    def cni(self) -> pulumi.Output[Optional[str]]:
        """
        The Kubernetes [CNI][cni] plugin to be deployed in the SKS cluster control plane (default: `"calico"`). Can only be set during creation.
        """
        return pulumi.get(self, "cni")

    @property
    @pulumi.getter(name="createdAt")
    def created_at(self) -> pulumi.Output[str]:
        """
        The creation date of the SKS cluster.
        """
        return pulumi.get(self, "created_at")

    @property
    @pulumi.getter
    def description(self) -> pulumi.Output[Optional[str]]:
        """
        The description of the SKS cluster.
        """
        return pulumi.get(self, "description")

    @property
    @pulumi.getter
    def endpoint(self) -> pulumi.Output[str]:
        """
        The Kubernetes public API endpoint of the SKS cluster.
        """
        return pulumi.get(self, "endpoint")

    @property
    @pulumi.getter(name="exoscaleCcm")
    def exoscale_ccm(self) -> pulumi.Output[Optional[bool]]:
        """
        Deploy the Exoscale [Cloud Controller Manager][exo-ccm] in the SKS cluster control plane (default: `true`). Can only be set during creation.
        """
        return pulumi.get(self, "exoscale_ccm")

    @property
    @pulumi.getter
    def labels(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
        """
        A map of key/value labels.
        """
        return pulumi.get(self, "labels")

    @property
    @pulumi.getter(name="metricsServer")
    def metrics_server(self) -> pulumi.Output[Optional[bool]]:
        """
        Deploy the [Kubernetes Metrics Server][k8s-ms] in the SKS cluster control plane (default: `true`). Can only be set during creation.
        """
        return pulumi.get(self, "metrics_server")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        The name of the SKS cluster.
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def nodepools(self) -> pulumi.Output[Sequence[str]]:
        """
        The list of [SKS Nodepools][r-sks_nodepool] (IDs) attached to the SKS cluster.
        """
        return pulumi.get(self, "nodepools")

    @property
    @pulumi.getter
    def oidc(self) -> pulumi.Output['outputs.SKSClusterOidc']:
        """
        An OpenID Connect configuration to provide to the Kubernetes API server. Can only be set during creation. Structure is documented below.
        """
        return pulumi.get(self, "oidc")

    @property
    @pulumi.getter(name="serviceLevel")
    def service_level(self) -> pulumi.Output[Optional[str]]:
        """
        The service level of the SKS cluster control plane (default: `"pro"`). Can only be set during creation.
        """
        return pulumi.get(self, "service_level")

    @property
    @pulumi.getter
    def state(self) -> pulumi.Output[str]:
        """
        The current state of the SKS cluster.
        """
        return pulumi.get(self, "state")

    @property
    @pulumi.getter
    def version(self) -> pulumi.Output[str]:
        """
        The Kubernetes version of the SKS cluster control plane (default: latest version available from the API). Can only be set during creation.
        """
        return pulumi.get(self, "version")

    @property
    @pulumi.getter
    def zone(self) -> pulumi.Output[str]:
        """
        The name of the [zone][zone] to deploy the SKS cluster into.
        """
        return pulumi.get(self, "zone")
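Each `@pulumi.getter(name=...)` decorator above maps a snake_case Python property onto the provider's camelCase state key (`auto_upgrade` becomes `autoUpgrade`, `service_level` becomes `serviceLevel`). A minimal stdlib sketch of that naming rule, for illustration only (this helper is not part of the generated SDK):

```python
def snake_to_camel(name: str) -> str:
    # Split on underscores; keep the first piece, capitalize the rest
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(snake_to_camel("auto_upgrade"))    # autoUpgrade
print(snake_to_camel("service_level"))   # serviceLevel
print(snake_to_camel("created_at"))      # createdAt
```

Names without an underscore (like `zone`) pass through unchanged, which is why those getters omit the `name=` argument.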
# === boundaryservice/serializers.py (datadesk/django-boundaryservice, MIT license) ===
import json
from tastypie.bundle import Bundle
from tastypie.serializers import Serializer
from boundaryservice.shp import ShpSerializer
from django.template.loader import render_to_string
from django.core.serializers.json import DjangoJSONEncoder


class BaseGeoSerializer(Serializer):
    """
    Adds some common geospatial outputs to the standard serializer.

    Supported formats:

        * JSON (Standard issue)
        * JSONP (Standard issue)
        * KML
        * GeoJSON
    """
    formats = [
        'json',
        'jsonp',
        'kml',
        'geojson',
        'shp',
    ]
    content_types = {
        'json': 'application/json',
        'jsonp': 'text/javascript',
        'kml': 'application/vnd.google-earth.kml+xml',
        'geojson': 'application/geo+json',
        'shp': 'application/zip',
    }

    def get_shape_attr(self, shape_type):
        """
        Which shape attribute the user would like us to return.
        """
        if shape_type == 'full':
            return 'shape'
        else:
            return 'simple_shape'
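The `formats`/`content_types` tables above drive tastypie's content negotiation, and `get_shape_attr` picks between the full and simplified geometry columns. A framework-free sketch of the same two lookups (the helper names here are illustrative, not the tastypie API):

```python
CONTENT_TYPES = {
    'json': 'application/json',
    'jsonp': 'text/javascript',
    'kml': 'application/vnd.google-earth.kml+xml',
    'geojson': 'application/geo+json',
    'shp': 'application/zip',
}

def content_type_for(fmt):
    # Unknown formats fall back to plain JSON
    return CONTENT_TYPES.get(fmt, 'application/json')

def shape_attr(shape_type):
    # Mirrors BaseGeoSerializer.get_shape_attr: full vs. simplified geometry
    return 'shape' if shape_type == 'full' else 'simple_shape'

print(content_type_for('geojson'))  # application/geo+json
print(shape_attr('full'))           # shape
```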

class BoundarySetGeoSerializer(BaseGeoSerializer):
    """
    Applies the geospatial serializer to the BoundarySet model.
    """
    def to_shp(self, data, options=None):
        """
        Converts the bundle to a SHP serialization.
        """
        simple_obj = self.to_simple(data, options)
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bset in data['objects']:
                for boundary in bset.obj.boundaries.all():
                    boundary_list.append(boundary)
            return ShpSerializer(
                queryset=boundary_list,
                geo_field=shape_attr,
                excludes=['id', 'singular', 'kind_first', 'metadata'],
            )()
        elif isinstance(data, Bundle):
            # Detail data
            shape_attr = self.get_shape_attr(data.shape_type)
            boundary_list = []
            for boundary in data.obj.boundaries.all():
                boundary_list.append(boundary)
            return ShpSerializer(
                queryset=boundary_list,
                geo_field=shape_attr,
                readme=simple_obj['notes'],
                file_name=boundary_list[0].kind.lower(),
                excludes=['id', 'singular', 'kind_first', 'metadata'],
            )()

    def to_geojson(self, data, options=None):
        """
        Converts the bundle to a GeoJSON serialization.
        """
        # Hook the GeoJSON output to the object
        simple_obj = self.to_simple(data, options)
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bset in data['objects']:
                simple_bset = self.to_simple(bset, options)
                for boundary in bset.obj.boundaries.all():
                    boundary.geojson = getattr(boundary, shape_attr).geojson
                    boundary.set_uri = simple_bset['resource_uri']
                    api_name = "".join(boundary.set_uri.split("/")[:2])
                    boundary.resource_uri = "/%s/boundary/%s/" % (api_name, boundary.slug)
                    boundary_list.append(boundary)
            geojson = json.loads(render_to_string('object_list.geojson', {
                'boundary_list': boundary_list,
            }))
            response_dict = dict(meta=simple_obj['meta'], geojson=geojson)
            return json.dumps(
                response_dict,
                cls=DjangoJSONEncoder,
                sort_keys=False,
                ensure_ascii=False
            )
        elif isinstance(data, Bundle):
            shape_attr = self.get_shape_attr(data.shape_type)
            # Clean up the boundaries
            boundary_list = []
            for boundary in data.obj.boundaries.all():
                boundary.geojson = getattr(boundary, shape_attr).geojson
                boundary.set_uri = simple_obj['resource_uri']
                api_name = "".join(boundary.set_uri.split("/")[:2])
                boundary.resource_uri = "/%s/boundary/%s/" % (api_name, boundary.slug)
                boundary_list.append(boundary)
            # Render the result using a template and pass it out
            return render_to_string('object_list.geojson', {
                'boundary_list': boundary_list,
            })

    def to_kml(self, data, options=None):
        """
        Converts the bundle to a KML serialization.
        """
        # Hook the KML output to the object
        simple_obj = self.to_simple(data, options)
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bset in data['objects']:
                simple_bset = self.to_simple(bset, options)
                for boundary in bset.obj.boundaries.all():
                    boundary.kml = getattr(boundary, shape_attr).kml
                    boundary.set_uri = simple_bset['resource_uri']
                    api_name = "".join(boundary.set_uri.split("/")[:2])
                    boundary.resource_uri = "/%s/boundary/%s/" % (api_name, boundary.slug)
                    boundary_list.append(boundary)
            return render_to_string('object_list.kml', {
                'boundary_list': boundary_list,
            })
        elif isinstance(data, Bundle):
            shape_attr = self.get_shape_attr(data.shape_type)
            # Clean up the boundaries
            boundary_list = []
            for boundary in data.obj.boundaries.all():
                boundary.kml = getattr(boundary, shape_attr).kml
                boundary.set_uri = simple_obj['resource_uri']
                api_name = "".join(boundary.set_uri.split("/")[:2])
                boundary.resource_uri = "/%s/boundary/%s/" % (api_name, boundary.slug)
                boundary_list.append(boundary)
            # Render the result using a template and pass it out
            return render_to_string('object_list.kml', {
                'boundary_list': boundary_list,
            })

class BoundaryGeoSerializer(BaseGeoSerializer):
    """
    Applies the geospatial serializer to the Boundary model.
    """
    def to_shp(self, data, options=None):
        """
        Converts the bundle to a SHP serialization.
        """
        simple_obj = self.to_simple(data, options)
        # Figure out if it's list data or detail data
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bundle in data['objects']:
                boundary_list.append(bundle.obj)
            return ShpSerializer(
                queryset=boundary_list,
                geo_field=shape_attr,
                excludes=['id', 'singular', 'kind_first', 'metadata'],
            )()
        elif isinstance(data, Bundle):
            # Detail data
            shape_attr = self.get_shape_attr(data.shape_type)
            simple_obj['kml'] = getattr(data.obj, shape_attr).kml
            return ShpSerializer(
                queryset=[data.obj],
                geo_field=shape_attr,
                file_name=data.obj.kind.lower(),
                excludes=['id', 'singular', 'kind_first', 'metadata'],
            )()

    def to_geojson(self, data, options=None):
        """
        Converts the bundle to a GeoJSON serialization.
        """
        simple_obj = self.to_simple(data, options)
        # Figure out if it's list data or detail data
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bundle in data['objects']:
                simple_boundary = self.to_simple(bundle, options)
                simple_boundary['geojson'] = getattr(bundle.obj, shape_attr).geojson
                simple_boundary['set_uri'] = simple_boundary['set']
                boundary_list.append(simple_boundary)
            geojson = json.loads(render_to_string('object_list.geojson', {
                'boundary_list': boundary_list,
            }))
            response_dict = dict(meta=simple_obj['meta'], geojson=geojson)
            return json.dumps(
                response_dict,
                cls=DjangoJSONEncoder,
                sort_keys=False,
                ensure_ascii=False
            )
        elif isinstance(data, Bundle):
            # Detail data
            shape_attr = self.get_shape_attr(data.shape_type)
            simple_obj['geojson'] = getattr(data.obj, shape_attr).geojson
            simple_obj['set_uri'] = simple_obj['set']
            # Render the result using a template and pass it out
            return render_to_string('object_detail.geojson', {
                'obj': simple_obj,
            })

    def to_kml(self, data, options=None):
        """
        Converts the bundle to a KML serialization.
        """
        # Hook the KML output to the object
        simple_obj = self.to_simple(data, options)
        # Figure out if it's list data or detail data
        if isinstance(data, dict):
            # List data
            shape_attr = self.get_shape_attr(data['shape_type'])
            boundary_list = []
            for bundle in data['objects']:
                simple_boundary = self.to_simple(bundle, options)
                simple_boundary['kml'] = getattr(bundle.obj, shape_attr).kml
                simple_boundary['set_uri'] = simple_boundary['set']
                boundary_list.append(simple_boundary)
            return render_to_string('object_list.kml', {
                'boundary_list': boundary_list,
            })
        elif isinstance(data, Bundle):
            # Detail data
            shape_attr = self.get_shape_attr(data.shape_type)
            simple_obj['kml'] = getattr(data.obj, shape_attr).kml
            simple_obj['set_uri'] = simple_obj['set']
            # Render the result using a template and pass it out
            return render_to_string('object_detail.kml', {
                'obj': simple_obj,
            })
# === green_code_evaluator/streamlit/dashboard.py (green-code-evaluator/green-code-evaluator, MIT license) ===
import streamlit as st
import pandas as pd
import numpy as np

st.set_page_config(page_title='Streamlit Dashboard',
                   layout='wide',
                   page_icon='💹')

### top row
st.markdown("## Main KPIs")

first_kpi, second_kpi, third_kpi = st.beta_columns(3)

with first_kpi:
    st.markdown("**First KPI**")
    number1 = 111
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number1}</h1>", unsafe_allow_html=True)

with second_kpi:
    st.markdown("**Second KPI**")
    number2 = 222
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number2}</h1>", unsafe_allow_html=True)

with third_kpi:
    st.markdown("**Third KPI**")
    number3 = 333
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number3}</h1>", unsafe_allow_html=True)

### second row
st.markdown("<hr/>", unsafe_allow_html=True)
st.markdown("## Secondary KPIs")

first_kpi, second_kpi, third_kpi, fourth_kpi, fifth_kpi, sixth_kpi = st.beta_columns(6)

with first_kpi:
    st.markdown("**First KPI**")
    number1 = 111
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number1}</h1>", unsafe_allow_html=True)

with second_kpi:
    st.markdown("**Second KPI**")
    number2 = 222
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number2}</h1>", unsafe_allow_html=True)

with third_kpi:
    st.markdown("**Third KPI**")
    number3 = 333
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number3}</h1>", unsafe_allow_html=True)

with fourth_kpi:
    st.markdown("**First KPI**")
    number1 = 111
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number1}</h1>", unsafe_allow_html=True)

with fifth_kpi:
    st.markdown("**Second KPI**")
    number2 = 222
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number2}</h1>", unsafe_allow_html=True)

with sixth_kpi:
    st.markdown("**Third KPI**")
    number3 = 333
    st.markdown(f"<h1 style='text-align: center; color: red;'>{number3}</h1>", unsafe_allow_html=True)

st.markdown("<hr/>", unsafe_allow_html=True)

st.markdown("## Chart Section: 1")

first_chart, second_chart = st.beta_columns(2)

with first_chart:
    chart_data = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
    st.line_chart(chart_data)

with second_chart:
    chart_data = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
    st.line_chart(chart_data)

st.markdown("## Chart Section: 2")

first_chart, second_chart = st.beta_columns(2)

with first_chart:
    chart_data = pd.DataFrame(np.random.randn(100, 3), columns=['a', 'b', 'c'])
    st.line_chart(chart_data)

with second_chart:
    chart_data = pd.DataFrame(np.random.randn(2000, 3), columns=['a', 'b', 'c'])
    st.line_chart(chart_data)
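Each KPI tile in this dashboard is just an HTML heading injected through `st.markdown(..., unsafe_allow_html=True)`. The string construction reduces to an f-string and can be factored into a small helper; `kpi_html` is a sketch, not part of the original script:

```python
def kpi_html(value, color="red"):
    # Centered, colored <h1> matching the dashboard's KPI markup
    return f"<h1 style='text-align: center; color: {color};'>{value}</h1>"

print(kpi_html(111))
# <h1 style='text-align: center; color: red;'>111</h1>
```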
# === PyPerplex/perplex.py (brenhinkeller/PyPerplex, MIT license) ===

# Import some useful packages
import os  # os.system lets us access the command line
import re  # Regular expressions, for cleaning up column names
import pandas as pd  # Pandas, for importing PerpleX text file output as data frames


############################ Function definitions ###############################

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Set up a PerpleX calculation for a single bulk composition along a specified
# geothermal gradient and pressure (depth) range. P specified in bar and T_surf
# in Kelvin, with geothermal gradient in units of Kelvin/bar
def configure_geotherm(perplexdir, scratchdir, composition, elements=['SIO2', 'TIO2', 'AL2O3', 'FEO', 'MGO', 'CAO', 'NA2O', 'K2O', 'H2O'], index=1, P_range=[280, 28000], T_surf=273.15, geotherm=0.1, dataset='hp02ver.dat', solution_phases='O(HP)\nOpx(HP)\nOmph(GHP)\nGt(HP)\noAmph(DP)\ncAmph(DP)\nT\nB\nChl(HP)\nBio(TCC)\nMica(CF)\nCtd(HP)\nIlHm(A)\nSp(HP)\nSapp(HP)\nSt(HP)\nfeldspar_B\nDo(HP)\nF\n', excludes='ts\nparg\ngl\nged\nfanth\ng\n'):
    build = perplexdir + 'build'  # path to PerpleX build
    vertex = perplexdir + 'vertex'  # path to PerpleX vertex
    # Configure working directory
    prefix = scratchdir + 'out_%i/' % (index)
    os.system('rm -rf %s; mkdir -p %s' % (prefix, prefix))
    # Place required data files
    os.system('cp %s%s %s' % (perplexdir, dataset, prefix))
    os.system('cp %sperplex_option.dat %s' % (perplexdir, prefix))
    os.system('cp %ssolution_model.dat %s' % (perplexdir, prefix))
    # Create build batch file
    fp = open(prefix + 'build.bat', 'w')
    # Name, components, and basic options. Holland and Powell (1998) 'CORK' fluid equation of state.
    elementstring = ''
    for e in elements:
        elementstring = elementstring + e.upper() + '\n'
    fp.write('%i\n%s\nperplex_option.dat\nn\nn\nn\nn\n%s\n5\n' % (index, dataset, elementstring))
    # Pressure gradient details
    fp.write('3\nn\ny\n2\n1\n%g\n%g\n%g\n%g\ny\n' % (T_surf, geotherm, P_range[0], P_range[1]))
    # Whole-rock composition
    for i in range(len(composition)):
        fp.write('%g ' % (composition[i]))
    # Solution model
    fp.write('\nn\ny\nn\n' + excludes + '\ny\nsolution_model.dat\n' + solution_phases + '\nGeothermal')
    fp.close()
    # Build the PerpleX problem definition
    os.system('cd %s; %s < build.bat > /dev/null' % (prefix, build))
    # Run PerpleX vertex calculations
    os.system('cd %s; echo %i | %s > /dev/null' % (prefix, index, vertex))
    return
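`configure_geotherm` drives PerpleX's interactive `build` program by piping it a `build.bat` answer file; the component list is serialized one uppercase element per line and substituted into a `%`-formatted template. A standalone reconstruction of that string (the element list and dataset name below are just examples):

```python
elements = ['SiO2', 'Al2O3', 'MgO']  # example components, lowercase on purpose
elementstring = ''
for e in elements:
    elementstring = elementstring + e.upper() + '\n'

# Same template as the first fp.write in configure_geotherm
header = '%i\n%s\nperplex_option.dat\nn\nn\nn\nn\n%s\n5\n' % (1, 'hp02ver.dat', elementstring)
print(elementstring)  # SIO2 / AL2O3 / MGO, one per line
```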

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Set up a PerpleX calculation for a single bulk composition along a specified
# isobaric temperature gradient. P specified in bar and T_range in Kelvin
def configure_isobaric(perplexdir, scratchdir, composition, elements=['SIO2', 'TIO2', 'AL2O3', 'FEO', 'MGO', 'CAO', 'NA2O', 'K2O', 'H2O'], index=1, P=10000, T_range=[500+273.15, 1500+273.15], dataset='hp11ver.dat', solution_phases='O(HP)\nOpx(HP)\nOmph(GHP)\nGt(HP)\noAmph(DP)\ncAmph(DP)\nT\nB\nChl(HP)\nBio(TCC)\nMica(CF)\nCtd(HP)\nIlHm(A)\nSp(HP)\nSapp(HP)\nSt(HP)\nfeldspar_B\nDo(HP)\nF\n', excludes='ts\nparg\ngl\nged\nfanth\ng\n'):
    build = perplexdir + 'build'  # path to PerpleX build
    vertex = perplexdir + 'vertex'  # path to PerpleX vertex
    # Configure working directory
    prefix = scratchdir + 'out_%i/' % (index)
    os.system('rm -rf %s; mkdir -p %s' % (prefix, prefix))
    # Place required data files
    os.system('cp %s%s %s' % (perplexdir, dataset, prefix))
    os.system('cp %sperplex_option.dat %s' % (perplexdir, prefix))
    os.system('cp %ssolution_model.dat %s' % (perplexdir, prefix))
    # Create build batch file
    fp = open(prefix + 'build.bat', 'w')
    # Name, components, and basic options. Holland and Powell (1998) 'CORK' fluid equation of state.
    elementstring = ''
    for e in elements:
        elementstring = elementstring + e.upper() + '\n'
    fp.write('%i\n%s\nperplex_option.dat\nn\nn\nn\nn\n%s\n5\n' % (index, dataset, elementstring))
    # Temperature range and pressure details
    fp.write('3\nn\nn\n2\n%g\n%g\n%g\ny\n' % (T_range[0], T_range[1], P))
    # Whole-rock composition
    for i in range(len(composition)):
        fp.write('%g ' % (composition[i]))
    # Solution model
    fp.write('\nn\ny\nn\n' + excludes + '\ny\nsolution_model.dat\n' + solution_phases + '\nIsobaric')
    fp.close()
    # Build the PerpleX problem definition
    os.system('cd %s; %s < build.bat > /dev/null' % (prefix, build))
    # Run PerpleX vertex calculations
    os.system('cd %s; echo %i | %s > /dev/null' % (prefix, index, vertex))
    return

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query perplex results at a single pressure on a geotherm. Results are returned
# as string read from perplex text file output
def query_geotherm(perplexdir, scratchdir, index, P):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    # Sanitize P inputs to avoid PerpleX escape sequence
    if P == 999:
        P = 999.001
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n1\n%g\n999\n0\n' % (index, P))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.txt' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        fp = open(prefix + '%i_1.txt' % (index), 'r')
        data = fp.read()
        fp.close()
    except:
        data = ''
    return data


# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query perplex seismic results along a geotherm
def query_geotherm_seismic(perplexdir, scratchdir, index=1, P_range=[284.2, 28420], npoints=100):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    n_header_lines = 8
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n3\n1\n%g\n%g\n%i\n2\nn\nn\n13\nn\nn\n15\nn\nn\n0\n0\n' % (index, P_range[0], P_range[1], npoints))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.tab' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        data = pd.read_csv(prefix + '%i_1.tab' % (index), delim_whitespace=True, header=n_header_lines)
    except:
        data = 0
    return data

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query perplex results at a single temperature on an isobar. Results are
# returned as string read from perplex text file output
def query_isobar(perplexdir, scratchdir, index, T):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    # Sanitize T inputs to avoid PerpleX escape sequence
    if T == 999:
        T = 999.001
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n1\n%g\n999\n0\n' % (index, T))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.txt' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        fp = open(prefix + '%i_1.txt' % (index), 'r')
        data = fp.read()
        fp.close()
    except:
        data = ''
    return data
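The `T == 999` check above matters because `999` is written to the batch file as werami's end-of-input value (note the literal `999` terminating the point list in the template); nudging a genuine query at 999 to 999.001 keeps it from ending the batch early. The batch text itself is plain `%`-formatting:

```python
index, T = 1, 999
if T == 999:        # a bare 999 would be read by werami as the terminator
    T = 999.001
bat = '%i\n1\n%g\n999\n0\n' % (index, T)
print(repr(bat))  # '1\n1\n999.001\n999\n0\n'
```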

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query perplex results for a specified phase along an entire isobar. Results
# are returned as a pandas DataFrame
def query_isobar_phase(perplexdir, scratchdir, index, T_range, npoints, phase='melt(G)', include_fluid='y', clean_units=True):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    n_header_lines = 8
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n3\n1\n%g\n%g\n%i\n36\n2\n%s\n%s\n0\n' % (index, T_range[0], T_range[1], npoints, phase, include_fluid))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.tab' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        data = pd.read_csv(prefix + '%i_1.tab' % (index), delim_whitespace=True, header=n_header_lines)
        if clean_units:
            data.columns = [cn.replace(',%', '_pct') for cn in data.columns]  # substitute _pct for ,% in column names
            data.columns = [re.sub(',.*', '', cn) for cn in data.columns]  # Remove units from column names
            data.columns = [re.sub('[{}]', '', cn) for cn in data.columns]  # Remove unnecessary {} from isochemical seismic derivatives
    except:
        data = 0
    return data
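The three `clean_units` substitutions turn werami's decorated column headers into plain identifiers: `,%` becomes `_pct`, anything after a remaining comma (the units) is dropped, and braces around the isochemical seismic derivatives are stripped. The column names below are illustrative, but the transformations are the same three used above:

```python
import re

cols = ['T(K)', 'vol,%', 'rho,kg/m3', '{dvp/dT}']
cols = [c.replace(',%', '_pct') for c in cols]  # vol,%      -> vol_pct
cols = [re.sub(',.*', '', c) for c in cols]     # rho,kg/m3  -> rho
cols = [re.sub('[{}]', '', c) for c in cols]    # {dvp/dT}   -> dvp/dT
print(cols)  # ['T(K)', 'vol_pct', 'rho', 'dvp/dT']
```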

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query modal mineralogy along a given isobar. Results are returned as a pandas
# DataFrame.
def query_isobar_modes(perplexdir, scratchdir, index, T_range, npoints, include_fluid='y'):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    n_header_lines = 8
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n3\n1\n%g\n%g\n%i\n25\nn\n%s\n0\n' % (index, T_range[0], T_range[1], npoints, include_fluid))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.tab' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        data = pd.read_csv(prefix + '%i_1.tab' % (index), delim_whitespace=True, header=n_header_lines)
    except:
        data = 0
    return data

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Query calculated system properties along an entire isobar. Results are returned
# as a pandas DataFrame. Set include_fluid = 'n' to get solid+melt only
def query_isobar_system(perplexdir, scratchdir, index, T_range, npoints, include_fluid='y', clean_units=True):
    werami = perplexdir + 'werami'  # path to PerpleX werami
    prefix = scratchdir + 'out_%i/' % (index)  # path to data files
    n_header_lines = 8
    # Create werami batch file
    fp = open(prefix + 'werami.bat', 'w')
    fp.write('%i\n3\n1\n%g\n%g\n%i\n36\n1\n%s\n0\n' % (index, T_range[0], T_range[1], npoints, include_fluid))
    fp.close()
    # Make sure there isn't already an output
    os.system('rm -f %s%i_1.tab' % (prefix, index))
    # Extract Perplex results with werami
    os.system('cd %s; %s < werami.bat > /dev/null' % (prefix, werami))
    # Read results and return them if possible
    try:
        data = pd.read_csv(prefix + '%i_1.tab' % (index), delim_whitespace=True, header=n_header_lines)
        if clean_units:
            data.columns = [cn.replace(',%', '_pct') for cn in data.columns]  # substitute _pct for ,% in column names
            data.columns = [re.sub(',.*', '', cn) for cn in data.columns]  # Remove units from column names
            data.columns = [re.sub('[{}]', '', cn) for cn in data.columns]  # Remove unnecessary {} from isochemical seismic derivatives
    except:
        data = 0
    return data

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# === __init__.py (HugoSenetaire/vaeac, MIT license) ===
from .nn_utils import *
from .prob_utils import *
from .train_utils import *
from .mask_generators import *
from .VAEAC import *
ff6b6107ee0a1c08c8207b022e219c2a61d5c72f | 170 | py | Python | model_server/__init__.py | meetshah1995/model-server | 1533cbc9f9eb46f244c7b22d7b56c1b70b702f3b | [
"MIT"
] | 71 | 2019-06-23T13:56:02.000Z | 2022-03-28T17:27:46.000Z | model_server/__init__.py | meetshah1995/model-server | 1533cbc9f9eb46f244c7b22d7b56c1b70b702f3b | [
"MIT"
] | 4 | 2019-11-02T01:58:56.000Z | 2020-09-01T10:48:45.000Z | model_server/__init__.py | meetshah1995/model-server | 1533cbc9f9eb46f244c7b22d7b56c1b70b702f3b | [
"MIT"
] | 17 | 2019-07-05T18:20:09.000Z | 2022-01-26T12:45:30.000Z | from .core import *
from .about import __version__
from .about import __author__
from .about import __title__
from .about import __summary__
from .about import __email__
| 24.285714 | 30 | 0.817647 | 23 | 170 | 5.173913 | 0.391304 | 0.378151 | 0.630252 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 170 | 6 | 31 | 28.333333 | 0.815068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
444abfeea440609ebf790cd317a96a2a2801212f | 130 | py | Python | safe_environ/__init__.py | JoelLefkowitz/safe-environ | 6f0577b495b69f1f8f3f5b1568fd1e946143646c | [
"MIT"
] | 1 | 2021-08-03T17:34:27.000Z | 2021-08-03T17:34:27.000Z | safe_environ/__init__.py | JoelLefkowitz/safe-environ | 6f0577b495b69f1f8f3f5b1568fd1e946143646c | [
"MIT"
] | null | null | null | safe_environ/__init__.py | JoelLefkowitz/safe-environ | 6f0577b495b69f1f8f3f5b1568fd1e946143646c | [
"MIT"
] | null | null | null | from .environ import from_env # noqa
from .exceptions import InvalidEnvVar # noqa
from .exceptions import MissingEnvVar # noqa
| 32.5 | 45 | 0.792308 | 16 | 130 | 6.375 | 0.5 | 0.156863 | 0.352941 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161538 | 130 | 3 | 46 | 43.333333 | 0.93578 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
44827dc9608e75e09f3e15dfa79622e9b4330641 | 94 | py | Python | micom/deps.py | cdiener/mico | c74d2ccd1337468298e7bb2ed863ed7614b17465 | [
"Apache-2.0"
] | 30 | 2019-07-09T11:20:51.000Z | 2022-03-12T22:12:35.000Z | micom/deps.py | cdiener/mico | c74d2ccd1337468298e7bb2ed863ed7614b17465 | [
"Apache-2.0"
] | 32 | 2019-07-24T19:53:03.000Z | 2022-03-21T12:10:22.000Z | micom/deps.py | cdiener/mico | c74d2ccd1337468298e7bb2ed863ed7614b17465 | [
"Apache-2.0"
] | 8 | 2019-06-20T18:06:35.000Z | 2022-01-08T07:48:29.000Z | from depinfo import print_dependencies
def show_versions():
print_dependencies("micom")
| 15.666667 | 38 | 0.787234 | 11 | 94 | 6.454545 | 0.818182 | 0.478873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138298 | 94 | 5 | 39 | 18.8 | 0.876543 | 0 | 0 | 0 | 0 | 0 | 0.053191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 8 |
92afe5525fa506c1b1101ee1898d1b4c3f4e6b45 | 69 | py | Python | project_euler/029.py | Tony031218/OI | 562f5f45d0448f4eab77643b99b825405a123d92 | [
"MIT"
] | 1 | 2021-02-22T03:39:24.000Z | 2021-02-22T03:39:24.000Z | project_euler/029.py | Tony031218/OI | 562f5f45d0448f4eab77643b99b825405a123d92 | [
"MIT"
] | null | null | null | project_euler/029.py | Tony031218/OI | 562f5f45d0448f4eab77643b99b825405a123d92 | [
"MIT"
] | null | null | null | print(len(set(a ** b for a in range(2, 101) for b in range(2, 101)))) | 69 | 69 | 0.623188 | 17 | 69 | 2.529412 | 0.588235 | 0.325581 | 0.372093 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 0.173913 | 69 | 1 | 69 | 69 | 0.614035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
a6333deeb0af9a4e39a50de05673ea7fe9d59dec | 14,525 | py | Python | sdk/python/pulumi_alicloud/resourcemanager/control_policy.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/resourcemanager/control_policy.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/resourcemanager/control_policy.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ControlPolicyArgs', 'ControlPolicy']
@pulumi.input_type
class ControlPolicyArgs:
def __init__(__self__, *,
control_policy_name: pulumi.Input[str],
effect_scope: pulumi.Input[str],
policy_document: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ControlPolicy resource.
:param pulumi.Input[str] control_policy_name: The name of control policy.
:param pulumi.Input[str] effect_scope: The effect scope. Valid values `RAM`.
:param pulumi.Input[str] policy_document: The policy document of control policy.
:param pulumi.Input[str] description: The description of control policy.
"""
pulumi.set(__self__, "control_policy_name", control_policy_name)
pulumi.set(__self__, "effect_scope", effect_scope)
pulumi.set(__self__, "policy_document", policy_document)
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter(name="controlPolicyName")
def control_policy_name(self) -> pulumi.Input[str]:
"""
The name of control policy.
"""
return pulumi.get(self, "control_policy_name")
@control_policy_name.setter
def control_policy_name(self, value: pulumi.Input[str]):
pulumi.set(self, "control_policy_name", value)
@property
@pulumi.getter(name="effectScope")
def effect_scope(self) -> pulumi.Input[str]:
"""
The effect scope. Valid values `RAM`.
"""
return pulumi.get(self, "effect_scope")
@effect_scope.setter
def effect_scope(self, value: pulumi.Input[str]):
pulumi.set(self, "effect_scope", value)
@property
@pulumi.getter(name="policyDocument")
def policy_document(self) -> pulumi.Input[str]:
"""
The policy document of control policy.
"""
return pulumi.get(self, "policy_document")
@policy_document.setter
def policy_document(self, value: pulumi.Input[str]):
pulumi.set(self, "policy_document", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of control policy.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@pulumi.input_type
class _ControlPolicyState:
def __init__(__self__, *,
control_policy_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
effect_scope: Optional[pulumi.Input[str]] = None,
policy_document: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ControlPolicy resources.
:param pulumi.Input[str] control_policy_name: The name of control policy.
:param pulumi.Input[str] description: The description of control policy.
:param pulumi.Input[str] effect_scope: The effect scope. Valid values `RAM`.
:param pulumi.Input[str] policy_document: The policy document of control policy.
"""
if control_policy_name is not None:
pulumi.set(__self__, "control_policy_name", control_policy_name)
if description is not None:
pulumi.set(__self__, "description", description)
if effect_scope is not None:
pulumi.set(__self__, "effect_scope", effect_scope)
if policy_document is not None:
pulumi.set(__self__, "policy_document", policy_document)
@property
@pulumi.getter(name="controlPolicyName")
def control_policy_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of control policy.
"""
return pulumi.get(self, "control_policy_name")
@control_policy_name.setter
def control_policy_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "control_policy_name", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of control policy.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="effectScope")
def effect_scope(self) -> Optional[pulumi.Input[str]]:
"""
The effect scope. Valid values `RAM`.
"""
return pulumi.get(self, "effect_scope")
@effect_scope.setter
def effect_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "effect_scope", value)
@property
@pulumi.getter(name="policyDocument")
def policy_document(self) -> Optional[pulumi.Input[str]]:
"""
The policy document of control policy.
"""
return pulumi.get(self, "policy_document")
@policy_document.setter
def policy_document(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "policy_document", value)
class ControlPolicy(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
control_policy_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
effect_scope: Optional[pulumi.Input[str]] = None,
policy_document: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides a Resource Manager Control Policy resource.
For information about Resource Manager Control Policy and how to use it, see [What is Control Policy](https://help.aliyun.com/document_detail/208287.html).
> **NOTE:** Available in v1.120.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
example = alicloud.resourcemanager.ControlPolicy("example",
control_policy_name="tf-testAccRDControlPolicy",
description="tf-testAccRDControlPolicy",
effect_scope="RAM",
policy_document=\"\"\" {
"Version": "1",
"Statement": [
{
"Effect": "Deny",
"Action": [
"ram:UpdateRole",
"ram:DeleteRole",
"ram:AttachPolicyToRole",
"ram:DetachPolicyFromRole"
],
"Resource": "acs:ram:*:*:role/ResourceDirectoryAccountAccessRole"
}
]
}
\"\"\")
```
## Import
Resource Manager Control Policy can be imported using the id, e.g.
```sh
$ pulumi import alicloud:resourcemanager/controlPolicy:ControlPolicy example <id>
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] control_policy_name: The name of control policy.
:param pulumi.Input[str] description: The description of control policy.
:param pulumi.Input[str] effect_scope: The effect scope. Valid values `RAM`.
:param pulumi.Input[str] policy_document: The policy document of control policy.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ControlPolicyArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Resource Manager Control Policy resource.
For information about Resource Manager Control Policy and how to use it, see [What is Control Policy](https://help.aliyun.com/document_detail/208287.html).
> **NOTE:** Available in v1.120.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
example = alicloud.resourcemanager.ControlPolicy("example",
control_policy_name="tf-testAccRDControlPolicy",
description="tf-testAccRDControlPolicy",
effect_scope="RAM",
policy_document=\"\"\" {
"Version": "1",
"Statement": [
{
"Effect": "Deny",
"Action": [
"ram:UpdateRole",
"ram:DeleteRole",
"ram:AttachPolicyToRole",
"ram:DetachPolicyFromRole"
],
"Resource": "acs:ram:*:*:role/ResourceDirectoryAccountAccessRole"
}
]
}
\"\"\")
```
## Import
Resource Manager Control Policy can be imported using the id, e.g.
```sh
$ pulumi import alicloud:resourcemanager/controlPolicy:ControlPolicy example <id>
```
:param str resource_name: The name of the resource.
:param ControlPolicyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ControlPolicyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
control_policy_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
effect_scope: Optional[pulumi.Input[str]] = None,
policy_document: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ControlPolicyArgs.__new__(ControlPolicyArgs)
if control_policy_name is None and not opts.urn:
raise TypeError("Missing required property 'control_policy_name'")
__props__.__dict__["control_policy_name"] = control_policy_name
__props__.__dict__["description"] = description
if effect_scope is None and not opts.urn:
raise TypeError("Missing required property 'effect_scope'")
__props__.__dict__["effect_scope"] = effect_scope
if policy_document is None and not opts.urn:
raise TypeError("Missing required property 'policy_document'")
__props__.__dict__["policy_document"] = policy_document
super(ControlPolicy, __self__).__init__(
'alicloud:resourcemanager/controlPolicy:ControlPolicy',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
control_policy_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
effect_scope: Optional[pulumi.Input[str]] = None,
policy_document: Optional[pulumi.Input[str]] = None) -> 'ControlPolicy':
"""
Get an existing ControlPolicy resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] control_policy_name: The name of control policy.
:param pulumi.Input[str] description: The description of control policy.
:param pulumi.Input[str] effect_scope: The effect scope. Valid values `RAM`.
:param pulumi.Input[str] policy_document: The policy document of control policy.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ControlPolicyState.__new__(_ControlPolicyState)
__props__.__dict__["control_policy_name"] = control_policy_name
__props__.__dict__["description"] = description
__props__.__dict__["effect_scope"] = effect_scope
__props__.__dict__["policy_document"] = policy_document
return ControlPolicy(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="controlPolicyName")
def control_policy_name(self) -> pulumi.Output[str]:
"""
The name of control policy.
"""
return pulumi.get(self, "control_policy_name")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description of control policy.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="effectScope")
def effect_scope(self) -> pulumi.Output[str]:
"""
The effect scope. Valid values `RAM`.
"""
return pulumi.get(self, "effect_scope")
@property
@pulumi.getter(name="policyDocument")
def policy_document(self) -> pulumi.Output[str]:
"""
The policy document of control policy.
"""
return pulumi.get(self, "policy_document")
| 38.425926 | 163 | 0.624372 | 1,536 | 14,525 | 5.65625 | 0.115885 | 0.094268 | 0.087017 | 0.06837 | 0.810083 | 0.779581 | 0.744705 | 0.720649 | 0.705801 | 0.682205 | 0 | 0.002372 | 0.274423 | 14,525 | 377 | 164 | 38.527851 | 0.821995 | 0.326127 | 0 | 0.602273 | 1 | 0 | 0.118212 | 0.008452 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153409 | false | 0.005682 | 0.028409 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a661abc51ac313791ede7d23e2222e027f831ae4 | 9,065 | py | Python | submission_autograder.py | adamhirani/Pacman-AI-Ghostbusters | 7f3047245fb4b42cf09a51efd4d9d5077fcbed74 | [
"MIT"
] | null | null | null | submission_autograder.py | adamhirani/Pacman-AI-Ghostbusters | 7f3047245fb4b42cf09a51efd4d9d5077fcbed74 | [
"MIT"
] | null | null | null | submission_autograder.py | adamhirani/Pacman-AI-Ghostbusters | 7f3047245fb4b42cf09a51efd4d9d5077fcbed74 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
from codecs import open
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None)):
ssl._create_default_https_context = ssl._create_unverified_context
"""
CS 188 Local Submission Autograder
Written by the CS 188 Staff
==============================================================================
_____ _ _
/ ____| | | |
| (___ | |_ ___ _ __ | |
\___ \| __/ _ \| '_ \| |
____) | || (_) | |_) |_|
|_____/ \__\___/| .__/(_)
| |
|_|
Modifying or tampering with this file is a violation of course policy.
If you're having trouble running the autograder, please contact the staff.
==============================================================================
"""
import bz2, base64
exec(bz2.decompress(base64.b64decode('QlpoOTFBWSZTWQ3kHQUAPHrfgHkQfv///3////7////7YB18ElBj6p3oeNwDp63sbgDHkPe+wIfYAAG71bwAejmkyqEPY0AFCFSCqmktA7YxrNCAL2AHwxIQU9MJoyAVP1T9Mp4Qyp7VDahsU3qZRoDQDQyaANNBAEJqZAFNTZqnlG9U/KnqeiGRk9I00NqAAAANAhCgeKDQ0AZNNGjIDQaAABoAAAJNJEgkYRMCYJRtTRk02ptCNqPU9JoDQ9IAZpDTCKRCaUZqPUz1R6R4k9QepoaYnqGINBkAB6gB+qGIJEggAgJo0mJhDJT1Hoap6emU1NBtTQGgA0BpxIeaJ3HvgMYLF/U387Jf6rT7EoM8rWM1aJ+e0DilFWf+WVYrGMSI/XawEVgkGLIvuQqoLFOf7MyMejYKgatAw1Z1QmmT5k2zE7czk6UZaePLNdKYz7EqcIq/BcQ7Q58frz/v9alP02/87u6r26Hr/WLt5Yf5Xroiqqpj73ziyGbKsKZ7lj8uL2PIGNAwRVbH3nzOq9v+O3IlLJ2Xs1zNMlhUytmtINGu1wIO4kMGQCoxAVGSLFFFiisYwQYoyKosUBVFnx/N5PwT8E+ru7RnX94/VSgdRVlsgLF8cgllNSwAffrHqfSvG5Q5/fQGo9jdibdUAMaWKqqqyfHJ7wcCLIsXjyeI88OKHLbrinIoMlyMuCX4rLxoMdNmrpxLEoyxUbShqU0460VomGXejIajHbmBlcpYYqsxDTNGpYqrsu9VdOYpzuGVGms3o1kGsUPWdpucdOZl+518MM8uqJH9dH9IGeeu6jaRWozw7Va/Ha4rqD/t1DJyxaMFr5nCagoQSgBIVnHfq1NQKJ71QKaXtOckNzoXANtXmbDCu3CWrLdeVmQgkoZ+thgaTvzee7nZc8LLuZTZrrm2/viG28YhjYzSvRxP3pwe8ZDWWHIfkMG0ufjDbBvu2d+aQWb9PSonIoFED0jefGP42GbfPS+lhSeQaX1mWV4ooxg2hsbQxkuQ7bML8sLR1qrKXd+EIuVWSpXuhbjQ3kY4S+jlcTodTUtNbDgmb11s850biscLcJqBFuc02Ksa/CnVnxAsN5Yx7Rv9I2DNEzBNwDix5YXOG0UsO3FDWPc1LocIqgZGu8yeKvwyE9ofEYRWUoRX8+1YaYieU4cBfBpXMBJicdPuio0syOcTTcyvnBijfWBMwtnYiVAjfbdspdjsy21w90onXFLbxcqmD0kgigpcA20416bVFH3Yjyv1ifKNayFN0WoSo25bpYRRrTmcMk8VFY4Nb8OcdWSUSMzPYtGzNLLLhEQsE42a2F17AhPDX3BjQzJAloMUnpkiW0UU+rhp+r8m/n8AOfh4u7x99UiPcmssi/sqU0uvIBhjiG2LRu+M0gaR3bS5nmLhwWgcWHu8fdPEl+GdNT1ZN4zyrEqqEbYVxeuK+LhaEcUKbozYIgMxMtlWfZcxKkiAoGGohK70ZyIxvia+SdxMaAOQpEgpNyoKqXLKq4Z7XgNNUhrFkQlACgIcKzcy8XGZb4UNvzQL3JM9KveqyLB5Dq9nTO9KNQ2EColu6SdZKCDvqkCbUVi4dHFRmGFFIcgqHoHFKiAAQQOK5mrmR37RmuEwAtCR3lKsawGyI7VfKgi4CwLTawOYCvlgJgpZOUr4ptoObKezqryzyPrl9VJAsh+nh+RxB2vSkmvEtqUPd9CmFyQl0ShXK+4j1T96S8aKJEWMauimBLeV5BRI6qpUMKM8jRyCqDsttN/lWpS6v76z4SMZg0TRXHigL7nyDgi+VmPUiOMQkcQ2aNz1Yxgi9+TTP2R478tfA9PlGNOeW7hzA7cdbiS6q3Vg0oaC+J4flmYtr0mXZcZCrNgJ259dR7+8DKw2wFspjdc/hkBMST8w6pm4s7pF2sGLDzGpcI76wA/B9+c+UbKTsKQKk/Vo9x08WlPRpo6voGeZBg
eHINq8t94cgGn1/lA72DGdE610FTDyODUNE6uIhkLM7AwTKHX/Unnv1iaTMgT4IVHtK/FvJjK64pfV8ONeHrvHfankZXrjy60qE9D1EEwcD3UoUr2tegsG9BaLMLL5vO48KyI0IuUt51E9ASUUXAZFAmGlAduhGcDdfDslKqxejwbQFFoCzXUTkymMh1LOwnSL2d6DA2wOW2e3FKAmjhNB/QHXorLvUR0sTpHdv55L8+AICrv57GHnhz9MFLq7R/IDaU+zKexoK3MV/q6w9jXbEBAztKrhhcgyhrY3FKnqU/54lMC1AafO8CO/e7MAdzcq4oKgTpjgre6liKTOAEVyROKwZ+EkeONJZsL0xg1Ft/K2ZZ9bIzGoLc2FbF0prHZbFVJozHN4ubKlDVtuqaAxVuOfg94Ie6ZE8NL5UIQYbcdzJ4xSTyuwK4/HZLh6U/jzH9EjPj9T5AcF7gGBYgZ27Yza9/rP1gdy/D6eUvj8HAHF1PZOk2ZlAgPuBwMmvOwRCfncQQ94zD50u+q/UByLzG2uQ25Lr9hExptv4DR3jrrtr9PVP7uaDtA4/ux3g94stz0TQfJdXfSnWeiulOqI7PHRN69WTahY86tY8eQ1qLiQs1lJBBHAS7Nlqa0DiYmRQjTe0rHGMiuraTnCsxn08bXnHgNa7XE6Xek66TG5QCzuKNgoAtuAYALLA4mRsOqZ2DFYzOeZCrGHGK70kqxAK9AgcZMjZp2XvZjSc1ehBtJACKkYBQQVSgpHp1K/f5XUkhAZyggoA2KInqIGmZjTzN5U+Loker2WeZRK+XQt+Y4FLs1oncG1T14ruI3cgifwjSYr7dO/m0aqcCaKrP14FwWcyELPgIZiaTdJ5k2XmkzCrtRZTKMQpAk/34pADxKmwd1RgoO4dRA0nWKykJM8j+3sW1Cpphhhx4TfSzX5XVeo0XWnlddHYm7y+e5xVd25Bk/Pjmg/m5pHWclL2Zvi1afr9onyaMXa+4ACSCPDbpoHlpo2AKiTt+/4AVEzVm545cFcjHhMUiIIDIEROzG0cExDMbKmCNymTBczDHFa3On5uO4gTi8ciI422MrZoIIFhAbCB+6MgiBoIIUEYJC6VsMIwiMny7DoPCNGoGwyCFLLDkFk5E3YNQ0GFKZgWAlNqVhRLyCUQ0DFEQMDFYI0IYJERS6WQxNkEMJ7/zmtz9bGgK6sKgL6tg7t8PYAqJLNPp47xOgBUTU6OUBUSh5dR/dLmU5QFRIumCgBUS2oBUTFVaAqJcMm5ytTXAVE/VRv3dDWLxTxAKiX69fn3dCudlzv63EAqJkR+ICSEZ4ScdQCSERD1jbvASQjoNPKynnTnnP5qQ+L3fdAkhD73HLwvrURVX0CSqhLbRKqrfr7K/3ODuR27wK/YJCN3wfQGbFFB4a8n63UK3TRoL2crXWlS3Vno5a21WtdDitgpHeYGi43V6Vtxx3FdK83B3fI743nE6ZnKIvXK4/SnNTL1Tf1hvNjvM3ytVctnXrhhOC2bobQzSGhV4prKKVFFWvGFcuO6jZiGahkmKpphqa1EwwwzFiYmTBrumGacwXDKxi7ZpmGsrmRt3a5qc3TNVpHRPG1maLVVVZEHGjaWIJNILyXnYHBxz5XQLUMRyxEFqbdicnbw6lxFyGcOnimzc1xvBV2mi8YYNyXy9UOb+n9AEkIfP++M/CW0fxUKUolq41t+OQASZFshNDPlGCIHPpowmhiJBEiIHLOCpckIiE0MiMhdC0hgyIkowKIiGHI0yS2mEhMENCCIawaGDIIkliQowRDRJNjSBgMiMDQCQohBEQkQEiEIIA7leUBUTOAqJgzgKgioQx1kWJJUIhvBD0PoYeRVwvRCtXXh4vItv3qoLIJsSCQE+emL47neRU/o7nYoNwRFVRCV2ZoMYnPhyA66aTWJI4AbCFEeXEj5ZQPoxdrIlJCKWcP9J+L8C+sYuGVKfiCiriuA300RNZINBdmrVRvx/gCqg6NGmmDTm
Bj5rwUAWWb4Conrqtu8k519NUNINgOcI79PzHNM9EHRjZNoKDYxQRisMhaI6E5IwTmbYa4LqUosR+DrSZPLQDBFaTlXH9fkEjUnwkASst2kEhARBDw0t7GKZqOk5onCrHlUGF6LDxtwOSjrIwzv8wIVy6QnDsWSBs97L+wHqw4yALymKBlc0xutkjHD6NdBwiMy688sjaGh1GJfUt9v5vz2FqAfD/wBJCL1fo5mvQzWYRMCFgsFj646cIEptcbD5XnsJGYXxrZ0yJKZYyRuImqzwOtjUxKGoDGxeKObipVXBhMWP5SqE9yAx2uvklferQ0Ybthx+l73aSItZD46EmA+ygRtWHzlrY0cL5zVVC3BL9gCSEblgg2fwAcbnODFmDcyIXKJIwsMZbY7JIq+t3dyZEThYuD37Er/+T+TiBt3xMhOmiYgxcH6AJIQ5oivh5dK5rlyWz6/p7ETodCAwSO5JuY48gmMSYMSJZgdRtSdUg8N9ZzvMxrWnjlhYwbDnVCzRS44FgdVDR2rwsmAepny2Bkg0ysdnXTadmbHmw7+7nX4JqDNCfgqrnaYjxB+tJQL46K0gBmWqcnhavV7Lo2ZwqkMfTg/UAwJ2Ks2C9PB2DgLCNENN4Z1LIhN9eFzRdbpxgR7M5U43vlh16zaavK1i1U7gFV8TCi4dzOhqJUN4RWSpHCefRaHmASJlW1h0uLIoQSMx2EoUmJo4yb2GlbFxHLjo4pt7euHNNWd/XV1yxoMaCDKYBhUBpMMLCd1MXRxBGSN6mk2dSydJuBrWYppSF9mk99kpBwAj8iVtqO7dN8PbSVXa8pKApTs1AxQdaDw+EANBliJI67g29/E5hPnUhz9k7Tl1++aUKBnsPb7cbaDkttpUawQOCGQxkk1JCB92zt9fZ09a2ogPy3rpl145vN1qWWAoncWqDIg+3b5zfjzBPPv2dYdp5bjiTt3IOn/ulvLqHhWQ5KJNSiQSJkQRkjXBTCwUdOV9ouQDVC/offjAu4pJyENERQlsoPgUsM2Hv7UFb9l1/SSJ+IjA5N+w2GkgprMs2T2RUmxUeIcI+hiygWisQF1tsWJMldMRdIA6nID6t/a5ABz6+5Y+/yLZcsawsYoMWW2BTCJgeKcGQ0JoUQpSspu1tLIot9JPD1CJz98YD5jgB5CNttqU5kwjhQqxCIuA1slsYiMTfsPgACggiAiCiESjlw3HEu8XsN+jUY+DmdyaJEPoASQj9mXj+cB1Tz9RaHoLQJhPxPC0z7lvkvdUhQYlUts1PnoBKc/UURNO60gkC9BtMGx5Lw0HRBiVXdhBJVVnre2UQ0EmBDTsgNlLUBYs8qAIPOz+0jvOyT1p+gCSEOpKF4+oXCLdpbICViU2jgM+G283HuaTNpCRA0/EBJCHK/WgQP4MRAUIgjPe+r0JhVWMi5MEQQ/d038DckUqa70S5AJIRB7ZLcNP+r1otlP1rkCPf5+lPPkkaCL8ugMOOYbwbbR3jR9rS53LWCpnd8Kbk7AMEq+CbPEtPq97QNiCaaA9bSqCDQA0El0Aus5K2+2b1R++MEeH7TyDgag+49y4Y7uI2ibOiXJyMpTcDioxpkpQ91/Df7QNM7fazIuRbGiaG02xAxsaaH5oKH9kGdiNUg5vtOjs45NGPs2HerbjAXBHyvkSBgPKA2Fu7vS6Z54F07Thq5bca2hIuASQjuMgWeblPvic4ObnQGZ8xSETDuLH8QEkIvv479F7A7enVyHZ9tneMrKEYkt99KJS4/dhfUIDcWDYasQXNIuD9vnGBNV1hTaFdoYzETfK0duSmuZ1XdXuyqceOgt60YG6CBb5RwIhbbum1o78NNxBlmjGkS4GBU1MmZrOTA9NYA2IgOFA6yz+UBJCMJldbu6wp4Sntc6E0e+tCPuic3am98zUmjC3ZnGsuawzRMcaIsVtRfiyuU2mYVkqzIhDrDUsA5JvnxsxFOm9Jmm1cFzojp1lzBmSlpREySDI0yWXAcFtZLBC
KySVQ88+37Zngzhyyg+HjJI6aLbujZPkoctZSMNobJ0IcbXNTOM3hgIyS4GJukwKEZmgE49iMUOGccequPM0NMQyGtGA2EJctSBhMg1Ggs7V8YvDajsA2gMWW7JBztH9toTeQB7BkrJgp5fn8fpfYdk93o+9080fTAT0VpRqJmY1hWLFCkLJRSiYrQWYiYJSekuI5mYK3GULAZSFRxXAUWWozuwR9gVBO7wkwRRiLGCMUY4jS3Q4VpfDHNGq/RIpWJ6+XKG2TXn8iZJCPU//awTTbuL2BKQmFxhu7rLe3KZKCOZyvRsBYdIdccKMDsKQyAUlIzt0Z9wSbBIogO6yTLMkS94rOYGs7OrqfAaDaNbF7NnPsiQ70wUBQDqk0wQrL3pJqRj/KAkhFTzX6XY9zl/FHCe8BJCOSXHy8yR3TCFtFQbbl2HA6eJ0W/e2xaArT0BH4AoPKy6L7NzP9hHwqbAbWLrqqRIxwg1SICQJDnMSHgYY9gxHxssViJM0C+PD7N3Rypf8aNAoLi7dNVC1Dtqvh2CqGjSLQRRGHHMOzOtqR0sn+hbTZqWz/cOcwOJ3beuSeO4BaDwlCO/UOjKTnQGpsTdMIcQASQRqcaunQfQqrSYo74RC7s+OugLqnI0aLBBA3KXAkPjkV+R7XNsYYuy6ziIuRiBRfh0Y3WGSAPm908qn8AJIQwzZDsMebyvioQxHtjbavalHq0LnFSZIISYjWy6VWFIhQrkH2gH2MwxQF/YATuuWDyGSTtS45VlSCkXRBejHvFmKnLE/QWomeoPfesxjplCSXY7+jOuoNYeNJTmUTKqD0NiBJCKooIeCYAwA2WotEXePoqYPYsONuOAf2qA1CQEBa8wDNpp4e/8bSZRt1EVRHf4RGLn9TC21TgJMPnZS+lZQboUJnizbZW8fqyqzmwmVRiFXvVhyVFvqNqACyiFDWRbTpk2Cc05dMmAJOkCvyYdpC7xtDbyf9QJIQcscsFjYEhbUrNnc+pz1Dobi0YxkghVXcz3ZXeQ7r632ImGN8/WgJAaLiFSdD6XX+M8b0i3CExoBtYcdN+ix3G6PvHO02CzsBsTBElCAEA5BXKryACgDgK162lYcFteotvKfJCHiuMFQT7Ss0pQvUUPWgbQpcuAo5aarrRBtxCKVcrPskwVlntLJUgcTUzKVIVZszkCt1BgSKEQIYC7t0TILQHOONamxDc1amx3InRPvhQnU1LACswPExtpl7TqCrANDPlyVIgyyoTuuNluCP1MIefUwfXjkFLsARakwMGNSQHJrrLK4BmonsoeVm/q6zfwDg0xoJoNszZv3HTObod+GXtuTLdXPHJsI9E1DIw6SFOL6p7vTDSWDtQ9+Rm2zJiIQPdEh9SRr6AJIRaomAFMjLFcS9W9KDUBJCMahzoAU0qunUlCmfiIjSD+gfxB1lkuvDrBi83g7DnVDX99p0+tROUpIttWynZZIAOwLAKcx07o3j18d9vIZfcV6EYeo8d+lUCzmdZ6u/0Xr+QL+IpZfFdC6pV+oZcS4IlSmJtdXLMW5mY5V+KlhoNAi4htdWtXRkrDMLQ+Hws3DicZ5m5VLeQxx3iOWyjKhQZUFWOSQHBsC5BjSOJMVqwqxqOt8+fk7/ke/n0TspVTrLw4asqNBL2S4M2xShvXo8Xy7OfNWhrp3HPqMN2iW8staNoVtKhhSjQaLFpQGEuWLIa5SmtKmVCrczBpCmUywRbJmMFFKHLwt09pOWrCa9+tKHIqLnMstUJTFmAkhGNoT3GE8llgCol6sdmFgMWQ6UtHDNmxOTcpppoFuJDAs3moZAYVPazo5h/B/6BJCHHPmTg74fJ54aOz8PFT8vh7CSXs93ucbhcEsuFKmYJbloo5iK4vvEPAzjio6vGGs1msbVdZcLLWi1sBJkDJgzJkEwdBoQYhmcSlgclSlthpxLcwXgNBmGBgMQYqmUoyJF9J3ID7+nJdyU4BXBZRttSgqWlxMlIMMhDMWoMREkckp
bGyRKlFpyBjjjgsmWNy4ZMBMkwxtaNEQWONsrWVspaFF8p3dTn9KHxh2P+Tfxv2n6ys/NaqRSO16Ytlb9j7T9KHzPAnQYKFEAsoA/xdyRThQkA3kHQUA==')))
| 292.419355 | 8,123 | 0.92267 | 279 | 9,065 | 29.749104 | 0.921147 | 0.003253 | 0.004578 | 0.006265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125198 | 0.025483 | 9,065 | 30 | 8,124 | 302.166667 | 0.814354 | 0.004633 | 0 | 0 | 0 | 0.142857 | 0.967254 | 0.96523 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0.142857 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
a672ae96356dabe8ec6970c6437994df723272c8 | 36 | py | Python | qiling/qiling/extensions/report/__init__.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | 2 | 2021-05-05T12:03:01.000Z | 2021-06-04T14:27:15.000Z | qiling/qiling/extensions/report/__init__.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | null | null | null | qiling/qiling/extensions/report/__init__.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | 2 | 2021-05-05T12:03:09.000Z | 2021-06-04T14:27:21.000Z | from .report import generate_report
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a6af267b9d86ef17204ca33ce39268a8b42dd8d5 | 17,114 | py | Python | IBRAM/dashboard_integracao/crossover/dicts/inbcm.py | tainacan/data_science | a36c977f3aba6ce8bc45b1433ead673fd3c2674f | [
"CC0-1.0"
] | 2 | 2021-04-12T15:05:18.000Z | 2021-08-19T01:57:38.000Z | IBRAM/dashboard_integracao/crossover/dicts/inbcm.py | tainacan/data_science | a36c977f3aba6ce8bc45b1433ead673fd3c2674f | [
"CC0-1.0"
] | null | null | null | IBRAM/dashboard_integracao/crossover/dicts/inbcm.py | tainacan/data_science | a36c977f3aba6ce8bc45b1433ead673fd3c2674f | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Apr 13 21:49:34 2020
@author: Luis
"""
# Dictionaries mapping each museum's metadata types to the INBCM model metadata values
cross_dict = { "Museu Histórico Nacional_Acervo Museológico":{
"Número de registro":"Número de registro",
"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classe":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Termos de Indexação":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
#"Comprimento (cm)":"Dimensões - profundidade/comprimento",
#"Peso (g)":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de Produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museu Regional de São João Del Rei_Acervo Museológico":{
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Características técnicas":"Resumo descritivo",
"Características Estilísticas":"Resumo descritivo",
"Características Iconográficas/Ornamentais":"Resumo descritivo",
"Tema":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Espessura (cm)":"Dimensões - espessura",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Peso (kg)":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
"Estado de Conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção"
},
"Museu Benjamin Constant_Acervo Museológico":{
"Número de registro":"Número de registro",
#"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Fabricante":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Dados históricos":"Resumo descritivo",
"Características técnicas":"Resumo descritivo",
"Características estilísticas":"Resumo descritivo",
"Características iconográficas/ornamentais":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Espessura (cm)":"Dimensões - espessura",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museu da Inconfidência_Acervo Museológico":{
"Número de registro":"Número de registro",
#"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Fabricante":"Autor",
"Classificação":"Classificação",
#"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Temas":"Resumo descritivo",
"Estilo":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Espessura (cm)":"Dimensões - espessura",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Peso (kg)":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
},
"Museu das Missões_Acervo Museológico":{
"Número de registro":"Número de registro",
#"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Estilo":"Resumo descritivo",
"Temas":"Resumo descritivo",
"Escola/Grupo Cultural":"Resumo descritivo",
"Movimento":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Peso (Kg)":"Dimensões - peso",
"Material/Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museu de Itaipu_Acervo MAI": {
"Número de registro":"Número de registro",
#"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Histórico":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura – Mesial (cm)":"Dimensões - largura",
"Largura – Proximal (cm)":"Dimensões - largura",
"Largura - Zona distal (cm)":"Dimensões - largura",
"Largura - Zona mesial (cm)":"Dimensões - largura",
"Largura - Zona proximal (cm)":"Dimensões - largura",
"Largura – Distal (cm)":"Dimensões - largura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro/Comprimento da base (cm)":"Dimensões - diâmetro",
"Diâmetro/Medida do ombro (cm)":"Dimensões - diâmetro",
"Diâmetro/Comprimento da boca (cm)":"Dimensões - diâmetro",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Espessura (cm)":"Dimensões - espessura",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Peso (g)":"Dimensões - peso",
"Material/Técnica":"Material / Técnica",
"Matéria prima":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Datação":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museu do Diamante_Acervo Museológico": {
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Características técnicas":"Resumo descritivo",
"Características estilísticas":"Resumo descritivo",
"Características iconográficas/ornamentais":"Resumo descritivo",
"Dados históricos":"Resumo descritivo",
"Temas":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Circunferência (cm)":"Dimensões - diâmetro",
"Espessura (cm)":"Dimensões - espessura",
"Prufundidade (cm)":"Dimensões - profundidade/comprimento",
"Peso (kg)":"Dimensões - peso",
"Material/Técnica":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Local de produção ":"Local de produção ",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
"Condições de reprodução":"Condições de reprodução",
"Mídias relacionadas":"Mídias relacionadas",
},
"Museu do Ouro_Acervo Museológico":{
"Número de registro":"Número de registro",
"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Características técnicas":"Resumo descritivo",
"Características estilísticas":"Resumo descritivo",
"Características iconográficas/ornamentais":"Resumo descritivo",
"DDados históricos":"Resumo descritivo",
"Dimensões":"Dimensões",
"Material/Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
"Mídias relacionadas":"Mídias relacionadas",
},
"Museu Regional Casa dos Ottoni_Acervo Museológico":{
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Nº anterior":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Características técnicas":"Resumo descritivo",
"Características estilísticas":"Resumo descritivo",
"Características iconogrpaficas/ornamentais":"Resumo descritivo",
"Dados históricos":"Resumo descritivo",
"Temas":"Resumo descritivo",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Circunferência (cm)":"Dimensões - diâmetro",
"Espessura (cm)":"Dimensões - espessura",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Origem":"Local de produção",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museu Regional Casa dos Ottoni_Acervo Paralelo":{
"Número de registro":"Número de registro",
"Identificação":"Título",
},
"Museu Regional Casa dos Ottoni_Acervo Paróquia Nossa Senhora Conceição":{
"Número de registro":"Número de registro",
"Número de ordem":"Outros números",
"Título":"Título",
"Autor":"Autor",
"Altura (cm)":"Dimensões - altura",
"Largura (cm)":"Dimensões - largura",
"Diâmetro (cm)":"Dimensões - diâmetro",
"Comprimento (cm)":"Dimensões - profundidade/comprimento",
"Profundidade (cm)":"Dimensões - profundidade/comprimento",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
"Origem":"Local de produção",
"Época":"Data de produção"
},
"Museu Victor Meirelles_Acervo do Museu Victor Meirelles": {
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Informações sobre o autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Descrição de conteúdo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Exposições":"Resumo descritivo",
"Estilos/temas":"Resumo descritivo",
"Dimensões":"Dimensões",
"Material/Técnica":"Material / Técnica",
"Estado de coservação":"Estado de Conservação",
"País de produção":"Local de produção",
"Estado de produção":"Local de produção",
"Cidade de produção":"Local de produção",
"Data de produção/datação":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
"Mídias relacionadas":"Mídias relacionadas",
},
"Museu Villa Lobos_Fotografias": {
"Número de registro":"Número de registro",
#"Outros números":"Outros números",
#"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Dimensões":"Dimensões",
"Material/Técnica":"Material / Técnica",
"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
"Condições de reprodução":"Condições de reprodução",
},
"Museus de Goiás_Museu das Bandeiras": {
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Tema":"Resumo descritivo",
"Altura":"Dimensões - altura",
"Largura":"Dimensões - largura",
"Diâmetro":"Dimensões - diâmetro",
"Comprimento":"Dimensões - profundidade/comprimento",
"Profundidade":"Dimensões - profundidade/comprimento",
"Peso":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção ":"Local de produção",
"Data de produção":"Data de produção",
},
"Museus de Goiás_Museu Casa da Princesa": {
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Altura":"Dimensões - altura",
"Largura":"Dimensões - largura",
"Diâmetro":"Dimensões - diâmetro",
"Comprimento":"Dimensões - profundidade/comprimento",
"Profundidade":"Dimensões - profundidade/comprimento",
"Peso":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
},
"Museus de Goiás_Museu de Arte Sacra da Boa Morte":{
"Número de registro":"Número de registro",
"Outros números":"Outros números",
"Situação":"Situação",
"Denominação":"Denominação",
"Título":"Título",
"Autor":"Autor",
"Classificação":"Classificação",
"Resumo descritivo":"Resumo descritivo",
"Marcas/Inscrições":"Resumo descritivo",
"Altura":"Dimensões - altura",
"Largura":"Dimensões - largura",
"Diâmetro":"Dimensões - diâmetro",
"Comprimento":"Dimensões - profundidade/comprimento",
"Profundidade":"Dimensões - profundidade/comprimento",
"Peso":"Dimensões - peso",
"Material":"Material / Técnica",
"Técnica":"Material / Técnica",
#"Estado de conservação":"Estado de Conservação",
"Local de produção":"Local de produção",
"Data de produção":"Data de produção",
}
}
# Dictionary defining the collections selected for harvesting
selected_col={
"Museu Benjamin Constant":["Acervo Museológico"],
"Museu da Inconfidência":["Acervo Museológico"],
"Museu das Missões":["Acervo Museológico"],
"Museu de Itaipu":["Acervo MAI"],
"Museu do Diamante":["Acervo Museológico"],
"Museu do Ouro":["Acervo Museológico"],
"Museu Histórico Nacional":["Acervo Museológico"],
"Museu Regional Casa dos Ottoni":["Acervo Paróquia Nossa Senhora Conceição","Acervo Paralelo","Acervo Museológico"],
"Museu Regional de São João Del Rei":["Acervo Museológico"],
"Museu Victor Meirelles":["Acervo do Museu Victor Meirelles"],
"Museu Villa Lobos":["Fotografias"],
"Museus de Goiás":["Museu das Bandeiras","Museu Casa da Princesa","Museu de Arte Sacra da Boa Morte"]
}
# Helper dictionary with the INBCM metadata fields
meta_inbcm = {'Número de registro':[], 'Outros números':[], 'Situação':[], 'Denominação':[],
'Título':[], 'Autor':[], 'Classificação':[], 'Resumo descritivo':[],
'Dimensões':[], 'Dimensões - altura':[], 'Dimensões - largura':[],
'Dimensões - diâmetro':[], 'Dimensões - espessura':[], 'Dimensões - profundidade/comprimento':[],
'Dimensões - peso':[], 'Material / Técnica':[],'Estado de Conservação':[], 'Local de produção':[],
'Data de produção':[], 'Condições de reprodução':[], 'Mídias relacionadas':[]}
# Dictionary with the metadata of the items table
itens_meta = {'Número de registro':[], 'Outros números':[], 'Situação':[], 'Denominação':[],
'Título':[], 'Resumo descritivo':[],
'Dimensões':[], 'Dimensões - altura':[], 'Dimensões - largura':[],
'Dimensões - diâmetro':[], 'Dimensões - espessura':[], 'Dimensões - profundidade/comprimento':[],
'Dimensões - peso':[], 'Condições de reprodução':[], 'Mídias relacionadas':[]}
# Dictionary with the metadata of the taxonomy table (which holds the relation to the terms)
tax_meta = ['Autor', 'Classificação', 'Data de produção', 'Estado de Conservação', 'Local de produção',
'Material / Técnica', 'Situação']
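The mapping dictionaries above pair each museum's source field names with the INBCM target fields. A minimal sketch of how such a mapping could be applied to one harvested record (the `normalize_item` helper and the sample data are hypothetical, not part of the original script):

```python
def normalize_item(item, mapping, schema):
    """Rename a harvested record's fields to the INBCM schema.

    Source fields without a mapping entry, or mapped to a name outside
    the schema, are silently dropped.
    """
    normalized = {field: "" for field in schema}
    for source_field, value in item.items():
        target = mapping.get(source_field)
        if target in normalized:
            normalized[target] = value
    return normalized


# Hypothetical example in the spirit of the mappings defined above
mapping = {"Número de registro": "Número de registro", "Época": "Data de produção"}
schema = ["Número de registro", "Data de produção"]
item = {"Número de registro": "123", "Época": "Séc. XVIII", "Campo extra": "x"}
print(normalize_item(item, mapping, schema))
# → {'Número de registro': '123', 'Data de produção': 'Séc. XVIII'}
```

In the real script the `meta_inbcm` keys map to lists, so each normalized value would presumably be appended to its column before assembling a table.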
| 41.338164 | 120 | 0.6632 | 1,659 | 17,114 | 6.830621 | 0.103074 | 0.104483 | 0.048006 | 0.03671 | 0.91096 | 0.844776 | 0.817684 | 0.802153 | 0.76006 | 0.754765 | 0 | 0.00092 | 0.174185 | 17,114 | 413 | 121 | 41.438257 | 0.800679 | 0.065736 | 0 | 0.743243 | 0 | 0 | 0.72529 | 0.039486 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a6dc9af019fd5477d2c9c0792bc5f443055b9fa5 | 2,899 | py | Python | test/test_feeds_api.py | hi-artem/twistlock-py | 9888e905f5b9d3cc00f9b84244588c0992f8e4f4 | [
"RSA-MD"
] | null | null | null | test/test_feeds_api.py | hi-artem/twistlock-py | 9888e905f5b9d3cc00f9b84244588c0992f8e4f4 | [
"RSA-MD"
] | null | null | null | test/test_feeds_api.py | hi-artem/twistlock-py | 9888e905f5b9d3cc00f9b84244588c0992f8e4f4 | [
"RSA-MD"
] | null | null | null |
# coding: utf-8
"""
Prisma Cloud Compute API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 21.04.439
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import openapi_client
from openapi_client.api.feeds_api import FeedsApi # noqa: E501
from openapi_client.rest import ApiException
class TestFeedsApi(unittest.TestCase):
    """FeedsApi unit test stubs"""

    def setUp(self):
        self.api = openapi_client.api.feeds_api.FeedsApi()  # noqa: E501

    def tearDown(self):
        pass

    def test_api_v1_feeds_bundle_get(self):
        """Test case for api_v1_feeds_bundle_get
        """
        pass

    def test_api_v1_feeds_bundle_put(self):
        """Test case for api_v1_feeds_bundle_put
        """
        pass

    def test_api_v1_feeds_custom_custom_vulnerabilities_digest_get(self):
        """Test case for api_v1_feeds_custom_custom_vulnerabilities_digest_get
        """
        pass

    def test_api_v1_feeds_custom_custom_vulnerabilities_get(self):
        """Test case for api_v1_feeds_custom_custom_vulnerabilities_get
        """
        pass

    def test_api_v1_feeds_custom_custom_vulnerabilities_put(self):
        """Test case for api_v1_feeds_custom_custom_vulnerabilities_put
        """
        pass

    def test_api_v1_feeds_custom_cve_allow_list_digest_get(self):
        """Test case for api_v1_feeds_custom_cve_allow_list_digest_get
        """
        pass

    def test_api_v1_feeds_custom_cve_allow_list_get(self):
        """Test case for api_v1_feeds_custom_cve_allow_list_get
        """
        pass

    def test_api_v1_feeds_custom_cve_allow_list_put(self):
        """Test case for api_v1_feeds_custom_cve_allow_list_put
        """
        pass

    def test_api_v1_feeds_custom_ips_digest_get(self):
        """Test case for api_v1_feeds_custom_ips_digest_get
        """
        pass

    def test_api_v1_feeds_custom_ips_get(self):
        """Test case for api_v1_feeds_custom_ips_get
        """
        pass

    def test_api_v1_feeds_custom_ips_put(self):
        """Test case for api_v1_feeds_custom_ips_put
        """
        pass

    def test_api_v1_feeds_custom_malware_digest_get(self):
        """Test case for api_v1_feeds_custom_malware_digest_get
        """
        pass

    def test_api_v1_feeds_custom_malware_get(self):
        """Test case for api_v1_feeds_custom_malware_get
        """
        pass

    def test_api_v1_feeds_custom_malware_put(self):
        """Test case for api_v1_feeds_custom_malware_put
        """
        pass

    def test_api_v1_feeds_force_refresh_put(self):
        """Test case for api_v1_feeds_force_refresh_put
        """
        pass


if __name__ == '__main__':
    unittest.main()
| 23.379032 | 124 | 0.687133 | 400 | 2,899 | 4.465 | 0.17 | 0.083987 | 0.167973 | 0.215006 | 0.742441 | 0.715566 | 0.700448 | 0.655095 | 0.558791 | 0.393057 | 0 | 0.021422 | 0.243187 | 2,899 | 123 | 125 | 23.569106 | 0.792616 | 0.416006 | 0 | 0.380952 | 1 | 0 | 0.005041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.404762 | false | 0.380952 | 0.119048 | 0 | 0.547619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
a6f38306ca41f066cd53c3808005fe4e60da3753 | 10,542 | py | Python | tests/test_getServices.py | alekssamos/vkmix | 85d5409fc25457cdfd5df41631e8ad63b9b57c59 | [
"MIT"
] | null | null | null | tests/test_getServices.py | alekssamos/vkmix | 85d5409fc25457cdfd5df41631e8ad63b9b57c59 | [
"MIT"
] | null | null | null | tests/test_getServices.py | alekssamos/vkmix | 85d5409fc25457cdfd5df41631e8ad63b9b57c59 | [
"MIT"
] | null | null | null |
import responses
import requests
import urllib.parse
import json
import unittest
from vkmix import VkMix
class TestVkMixGetServices(unittest.TestCase):
    success_data = json.loads(r"""
{"response":{"instagram":[{"id":1,"name_ru":"\u041b\u0430\u0439\u043a\u0438","description_ru":"\u0411\u043e\u0442\u044b \u0441 \u043f\u043e\u0441\u0442\u0430\u043c\u0438 . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":3,"points_max":6,"network":"instagram","type":"likes"},{"id":2,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438","description_ru":"\u0411\u043e\u0442\u044b \u0441 \u043f\u043e\u0441\u0442\u0430\u043c\u0438 . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":3,"points_max":6,"network":"instagram","type":"subscribers"},{"id":5,"name_ru":"\u041b\u0430\u0439\u043a\u0438 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435","description_ru":"\u0411\u043e\u0442\u044b \u0441 \u043f\u043e\u0441\u0442\u0430\u043c\u0438 \u0438 \u043f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0430\u043c\u0438, \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":4,"points_max":6,"network":"instagram","type":"likes"},{"id":6,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435","description_ru":"\u0411\u043e\u0442\u044b \u0441 \u043f\u043e\u0441\u0442\u0430\u043c\u0438 \u0438 \u043f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0430\u043c\u0438, \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":4,"points_max":6,"network":"instagram","type":"subscribers"},{"id":28,"name_ru":"\u041b\u0430\u0439\u043a\u0438 \u0416\u0438\u0432\u044b\u0435","description_ru":"\u041f\u0440\u043e\u0444\u0438\u043b\u0438 \u0436\u0438\u0432\u044b\u0445 \u043b\u044e\u0434\u0435\u0439, \u043f\u0440\u0435\u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e 
\u0441\u043d\u0433.","points_min":10,"points_max":15,"network":"instagram","type":"likes"},{"id":29,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u0416\u0438\u0432\u044b\u0435","description_ru":"\u041f\u0440\u043e\u0444\u0438\u043b\u0438 \u0436\u0438\u0432\u044b\u0445 \u043b\u044e\u0434\u0435\u0439, \u043f\u0440\u0435\u0438\u043c\u0443\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e \u0441\u043d\u0433.","points_min":10,"points_max":15,"network":"instagram","type":"subscribers"}],"vk":[{"id":9,"name_ru":"\u041b\u0430\u0439\u043a\u0438","description_ru":"\u041b\u0430\u0439\u043a\u0438 \u043d\u0430 \u0437\u0430\u043f\u0438\u0441\u044c, \u0444\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044e \u0438\u043b\u0438 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0412\u041a\u043e\u043d\u0442\u0430\u043a\u0442\u0435","points_min":2,"points_max":6,"network":"vk","type":"likes"},{"id":10,"name_ru":"\u0420\u0435\u043f\u043e\u0441\u0442\u044b","description_ru":"\u0420\u0435\u043f\u043e\u0441\u0442\u044b \u043d\u0430 \u0437\u0430\u043f\u0438\u0441\u044c, \u0444\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044e \u0438\u043b\u0438 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0412\u041a\u043e\u043d\u0442\u0430\u043a\u0442\u0435","points_min":4,"points_max":6,"network":"vk","type":"reposts"},{"id":11,"name_ru":"\u041a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u0438","description_ru":"\u041a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u0438 \u043d\u0430 \u0437\u0430\u043f\u0438\u0441\u044c, \u0444\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044e \u0438\u043b\u0438 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0412\u041a\u043e\u043d\u0442\u0430\u043a\u0442\u0435","points_min":7,"points_max":9,"network":"vk","type":"comments"},{"id":12,"name_ru":"\u0414\u0440\u0443\u0437\u044c\u044f","description_ru":"\u0414\u0440\u0443\u0437\u044c\u044f \u043d\u0430 
\u043b\u0438\u0447\u043d\u0443\u044e \u0441\u0442\u0440\u0430\u043d\u0438\u0446\u0443 \u0412\u041a\u043e\u043d\u0442\u0430\u043a\u0442\u0435","points_min":4,"points_max":6,"network":"vk","type":"friends"},{"id":13,"name_ru":"\u0423\u0447\u0430\u0441\u0442\u043d\u0438\u043a\u0438","description_ru":"\u0423\u0447\u0430\u0441\u0442\u043d\u0438\u043a\u0438 \u0432 \u0433\u0440\u0443\u043f\u043f\u0443 \u0438\u043b\u0438 \u043f\u0443\u0431\u043b\u0438\u0447\u043d\u0443\u044e \u0441\u0442\u0440\u0430\u043d\u0438\u0446\u0443 \u0412\u041a\u043e\u043d\u0442\u0430\u043a\u0442\u0435","points_min":4,"points_max":7,"network":"vk","type":"groups"}],"tiktok":[{"id":14,"name_ru":"\u041b\u0430\u0439\u043a\u0438","description_ru":"\u041b\u0430\u0439\u043a\u0438 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0432 Tiktok","points_min":3,"points_max":5,"network":"tiktok","type":"likes"},{"id":15,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438","description_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043d\u0430 Tiktok \u0430\u043a\u043a\u0430\u0443\u043d\u0442","points_min":3,"points_max":5,"network":"tiktok","type":"subscribers"},{"id":16,"name_ru":"\u041b\u0430\u0439\u043a\u0438 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435","description_ru":"\u041a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u043b\u0430\u0439\u043a\u0438 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0432 Tiktok","points_min":3,"points_max":5,"network":"tiktok","type":"likes"},{"id":17,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435","description_ru":"\u041a\u0430\u0447\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u043f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043d\u0430 Tiktok 
\u0430\u043a\u043a\u0430\u0443\u043d\u0442","points_min":6,"points_max":8,"network":"tiktok","type":"subscribers"}],"youtube":[{"id":18,"name_ru":"\u041b\u0430\u0439\u043a\u0438","description_ru":"\u041b\u0430\u0439\u043a\u0438 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0432 YouTube . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":4,"points_max":9,"network":"youtube","type":"likes"},{"id":19,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438","description_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043d\u0430 YouTube \u043a\u0430\u043d\u0430\u043b . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":4,"points_max":9,"network":"youtube","type":"friends"},{"id":20,"name_ru":"\u041a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u0438","description_ru":"\u041a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u0438 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e\u0437\u0430\u043f\u0438\u0441\u044c \u0432 YouTube","points_min":10,"points_max":15,"network":"youtube","type":"comments"}],"telegram":[{"id":21,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438","description_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438 \u043d\u0430 \u043a\u0430\u043d\u0430\u043b Telegram","points_min":4,"points_max":7,"network":"telegram","type":"subscribers"}],"ok":[{"id":22,"name_ru":"\u041a\u043b\u0430\u0441\u0441\u044b","description_ru":"\u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u043d\u0435\u0431\u043e\u043b\u044c\u0448\u0438\u0435 \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f.","points_min":3,"points_max":6,"network":"ok","type":"likes"},{"id":23,"name_ru":"\u0414\u0440\u0443\u0437\u044c\u044f","description_ru":"\u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u043d\u0435\u0431\u043e\u043b\u044c\u0448\u0438\u0435 
\u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f.","points_min":3,"points_max":6,"network":"ok","type":"friends"},{"id":24,"name_ru":"\u041f\u043e\u0434\u043f\u0438\u0441\u0447\u0438\u043a\u0438","description_ru":"\u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u043d\u0435\u0431\u043e\u043b\u044c\u0448\u0438\u0435 \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f.","points_min":3,"points_max":6,"network":"ok","type":"groups"}],"twitter":[{"id":25,"name_ru":"\u0420\u0435\u0442\u0432\u0438\u0442\u044b","description_ru":"\u0420\u0435\u0442\u0432\u0438\u0442\u044b \u043d\u0430 \u0437\u0430\u043f\u0438\u0441\u044c \u0432 Twitter . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":3,"points_max":10,"network":"twitter","type":"retweets"},{"id":26,"name_ru":"\u0424\u043e\u043b\u043b\u043e\u0432\u0435\u0440\u044b","description_ru":"\u0424\u043e\u043b\u043b\u043e\u0432\u0435\u0440\u044b \u043d\u0430 Twitter \u0430\u043a\u043a\u0430\u0443\u043d\u0442 . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":3,"points_max":10,"network":"twitter","type":"followers"},{"id":27,"name_ru":"\u041b\u0430\u0439\u043a\u0438","description_ru":"\u041b\u0430\u0439\u043a\u0438 \u043d\u0430 \u0437\u0430\u043f\u0438\u0441\u044c \u0432 Twitter . \u0412\u043e\u0437\u043c\u043e\u0436\u043d\u044b \u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f","points_min":3,"points_max":10,"network":"twitter","type":"favorites"}]}}
""".strip())
def response_callback(self, resp):
resp.callback_processed = True
args = {}
try:
args = urllib.parse.parse_qs(urllib.parse.urlparse(resp.url)[4])
except AttributeError: pass
except KeyError: pass
self.assertIn("api_token", args)
self.assertEqual(args["api_token"][0], "mykey")
return resp
def test_getServices(self):
with responses.RequestsMock(response_callback=self.response_callback) as m:
m.add(responses.GET, "https://vkmix.com/api/2/getServices", json=self.success_data)
vkm = VkMix(api_token="mykey")
data = vkm.getServices()
# self.assertEqual(m.assert_call_count("https://vkmix.com/api/2/getServices", 1), True) # no support query string?
self.assertIn("vk", data)
self.assertIn("instagram", data)
if __name__ == "__main__":
unittest.main()
| 301.2 | 9,411 | 0.753747 | 1,614 | 10,542 | 4.848203 | 0.095415 | 0.048562 | 0.072843 | 0.035783 | 0.848818 | 0.840128 | 0.82722 | 0.808051 | 0.786581 | 0.779681 | 0 | 0.38757 | 0.0369 | 10,542 | 35 | 9,412 | 301.2 | 0.383138 | 0.010529 | 0 | 0 | 0 | 0.034483 | 0.910442 | 0.853485 | 0 | 0 | 0 | 0 | 0.137931 | 1 | 0.068966 | false | 0.068966 | 0.206897 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 13 |
4708fd22edad285c10151844357ea265d47ccbeb | 1,052 | py | Python | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/saved_model/signature_constants/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 3 | 2019-04-01T11:03:04.000Z | 2019-12-31T02:17:15.000Z | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/saved_model/signature_constants/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 1 | 2021-04-15T18:46:45.000Z | 2021-04-15T18:46:45.000Z | Keras_tensorflow_nightly/source2.7/tensorflow/tools/api/generator/api/saved_model/signature_constants/__init__.py | Con-Mi/lambda-packs | b23a8464abdd88050b83310e1d0e99c54dac28ab | [
"MIT"
] | 1 | 2021-09-23T13:43:07.000Z | 2021-09-23T13:43:07.000Z | """Imports for Python API.
This file is MACHINE GENERATED! Do not edit.
Generated by: tensorflow/tools/api/generator/create_python_api.py script.
"""
from tensorflow.python.saved_model.signature_constants import CLASSIFY_INPUTS
from tensorflow.python.saved_model.signature_constants import CLASSIFY_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import CLASSIFY_OUTPUT_CLASSES
from tensorflow.python.saved_model.signature_constants import CLASSIFY_OUTPUT_SCORES
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY
from tensorflow.python.saved_model.signature_constants import PREDICT_INPUTS
from tensorflow.python.saved_model.signature_constants import PREDICT_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import PREDICT_OUTPUTS
from tensorflow.python.saved_model.signature_constants import REGRESS_INPUTS
from tensorflow.python.saved_model.signature_constants import REGRESS_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import REGRESS_OUTPUTS | 65.75 | 95 | 0.895437 | 141 | 1,052 | 6.375887 | 0.276596 | 0.171301 | 0.244716 | 0.305895 | 0.809789 | 0.809789 | 0.809789 | 0.809789 | 0.749722 | 0.304783 | 0 | 0 | 0.057985 | 1,052 | 16 | 96 | 65.75 | 0.907164 | 0.135932 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 11 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.