hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c788d631d1d237cc626cf0a634afe19d5a987c8c | 109 | py | Python | reference/shape/primitives_2d/arc_open.py | abhikpal/p5-examples | b7ac2f7713c4b724cf99579d1249141007a4619a | [
"MIT"
] | 16 | 2018-03-05T07:09:28.000Z | 2022-03-12T11:44:10.000Z | reference/shape/primitives_2d/arc_open.py | abhikpal/p5-examples | b7ac2f7713c4b724cf99579d1249141007a4619a | [
"MIT"
] | 5 | 2017-08-14T07:58:30.000Z | 2019-01-10T05:40:07.000Z | reference/shape/primitives_2d/arc_open.py | abhikpal/p5-examples | b7ac2f7713c4b724cf99579d1249141007a4619a | [
"MIT"
] | 13 | 2017-08-21T10:23:01.000Z | 2021-07-31T00:03:42.000Z | from p5 import *
def draw():
    no_loop()
    arc((180, 180), 251, 251, 0, PI + QUARTER_PI, 'OPEN')

run()
| 13.625 | 57 | 0.559633 | 18 | 109 | 3.277778 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 0.247706 | 109 | 7 | 58 | 15.571429 | 0.54878 | 0 | 0 | 0 | 0 | 0 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c7a52a5477b6b6b922a6ad4d9e9564760ba841dc | 392 | py | Python | prompts/date.py | Robert-96/py-prompts | 683185aea28f589e89cc80225a7326e60131b3c0 | [
"MIT"
] | null | null | null | prompts/date.py | Robert-96/py-prompts | 683185aea28f589e89cc80225a7326e60131b3c0 | [
"MIT"
] | null | null | null | prompts/date.py | Robert-96/py-prompts | 683185aea28f589e89cc80225a7326e60131b3c0 | [
"MIT"
] | null | null | null | import sys
from datetime import datetime
class DatePromptPS1(object):
    def __str__(self):
        return "[%s] >>> " % (datetime.now().strftime('%d/%m'))

    def __len__(self):
        return len(str(self))


class DatePromptPS2(object):
    def __str__(self):
        width = len(sys.ps1)
        return "... ".rjust(width, " ")


sys.ps1 = DatePromptPS1()
sys.ps2 = DatePromptPS2()
| 17.043478 | 63 | 0.604592 | 45 | 392 | 5 | 0.488889 | 0.093333 | 0.106667 | 0.142222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 0.232143 | 392 | 22 | 64 | 17.818182 | 0.724252 | 0 | 0 | 0.153846 | 0 | 0 | 0.048469 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.153846 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c7a6e0c740de6f3ae0d493c05e960edddf05b34a | 338 | py | Python | todoapp/models.py | shrionit/todo_api | 2864d5521d788e8f0f160279b1486b1155be192a | [
"MIT"
] | null | null | null | todoapp/models.py | shrionit/todo_api | 2864d5521d788e8f0f160279b1486b1155be192a | [
"MIT"
] | null | null | null | todoapp/models.py | shrionit/todo_api | 2864d5521d788e8f0f160279b1486b1155be192a | [
"MIT"
] | null | null | null | from django.db import models
from django.urls import reverse
class TodoItem(models.Model):
    note = models.TextField(max_length=500)
    date = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.note

    def get_absolute_url(self):
        return reverse("TodoItem_detail", kwargs={"pk": self.pk}) | 28.166667 | 65 | 0.707101 | 46 | 338 | 4.978261 | 0.673913 | 0.087336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010949 | 0.189349 | 338 | 12 | 65 | 28.166667 | 0.824818 | 0 | 0 | 0 | 0 | 0 | 0.050147 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0.222222 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c7a9d44e9f378d3ce127b341cc3db17701d2edba | 108 | py | Python | C_Data_resources/solutions/ex2_1.py | oercompbiomed/CBM101 | 20010dcb99fbf218c4789eb5918dcff8ceb94898 | [
"MIT"
] | 7 | 2019-07-03T07:41:55.000Z | 2022-02-06T20:25:37.000Z | C_Data_resources/solutions/ex2_1.py | oercompbiomed/CBM101 | 20010dcb99fbf218c4789eb5918dcff8ceb94898 | [
"MIT"
] | 9 | 2019-03-14T15:15:09.000Z | 2019-08-01T14:18:21.000Z | C_Data_resources/solutions/ex2_1.py | oercompbiomed/CBM101 | 20010dcb99fbf218c4789eb5918dcff8ceb94898 | [
"MIT"
] | 11 | 2019-03-12T10:43:11.000Z | 2021-10-05T12:15:00.000Z | def plot(k):
    plt.imshow(X[k].reshape(8,8), cmap='gray')
    plt.title(f"Number = {y[k]}")
    plt.show() | 27 | 46 | 0.555556 | 20 | 108 | 3 | 0.75 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 0.175926 | 108 | 4 | 47 | 27 | 0.651685 | 0 | 0 | 0 | 0 | 0 | 0.174312 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c7bab59e8f10dbc3976b497427e2131399dc5b4d | 655 | py | Python | tests/fixtures/model.py | nickderobertis/github-secrets | 43c599c012b18fd8bb98cd3c61064dfce42512bb | [
"MIT"
] | null | null | null | tests/fixtures/model.py | nickderobertis/github-secrets | 43c599c012b18fd8bb98cd3c61064dfce42512bb | [
"MIT"
] | 1 | 2021-02-21T18:50:06.000Z | 2021-02-21T18:50:06.000Z | tests/fixtures/model.py | nickderobertis/github-secrets | 43c599c012b18fd8bb98cd3c61064dfce42512bb | [
"MIT"
] | null | null | null | import pytest
from github_secrets.app import GithubSecretsApp
from github_secrets.manager import SecretsManager
from tests.config import CONFIG_FILE_PATH_YAML, APP_CONFIG_FILE_PATH_YAML
def get_secrets_manager(**kwargs) -> SecretsManager:
    manager = SecretsManager(config_path=CONFIG_FILE_PATH_YAML, **kwargs)
    return manager


def get_secrets_app(**kwargs) -> GithubSecretsApp:
    app = GithubSecretsApp(config_path=APP_CONFIG_FILE_PATH_YAML, **kwargs)
    return app


@pytest.fixture(scope="function")
def secrets_manager():
    return get_secrets_manager()


@pytest.fixture(scope="function")
def secrets_app():
    return get_secrets_app()
| 25.192308 | 75 | 0.79542 | 84 | 655 | 5.869048 | 0.25 | 0.081136 | 0.11359 | 0.146045 | 0.31643 | 0.267748 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119084 | 655 | 25 | 76 | 26.2 | 0.854419 | 0 | 0 | 0.125 | 0 | 0 | 0.024427 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c7d0fc847327bb3036a38063378dfc70076abeb7 | 1,133 | py | Python | src/z3c/formui/interfaces.py | zopefoundation/z3c.formu | 978abd2d0cb28a536da59dc27377605193fe41da | [
"ZPL-2.1"
] | 1 | 2018-11-11T14:04:11.000Z | 2018-11-11T14:04:11.000Z | src/z3c/formui/interfaces.py | zopefoundation/z3c.formu | 978abd2d0cb28a536da59dc27377605193fe41da | [
"ZPL-2.1"
] | 5 | 2017-12-05T14:29:28.000Z | 2021-09-20T06:36:31.000Z | src/z3c/formui/interfaces.py | zopefoundation/z3c.formu | 978abd2d0cb28a536da59dc27377605193fe41da | [
"ZPL-2.1"
] | 2 | 2015-04-03T06:02:45.000Z | 2017-12-05T15:13:03.000Z | ##############################################################################
#
# Copyright (c) 2007 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Form UI Interfaces."""
from zope.publisher.interfaces.browser import IBrowserRequest
from zope.viewlet.interfaces import IViewletManager
class IFormUILayer(IBrowserRequest):
    """A basic layer for the Form UI package."""


class IDivFormLayer(IFormUILayer):
    """A layer that supports forms created only using DIV elements."""


class ITableFormLayer(IFormUILayer):
    """A layer that supports forms created using tables."""


class ICSS(IViewletManager):
    """CSS viewlet manager."""
| 34.333333 | 78 | 0.653133 | 127 | 1,133 | 5.826772 | 0.622047 | 0.032432 | 0.037838 | 0.059459 | 0.113514 | 0.113514 | 0.113514 | 0 | 0 | 0 | 0 | 0.006135 | 0.136805 | 1,133 | 32 | 79 | 35.40625 | 0.750511 | 0.57105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
c7ee633318b472a21ffc52b8f5190c8945ac6494 | 282 | py | Python | plugins/commands/start.py | oktest145/TorrentSearcherBot | d68dba2bd980394baadf0c8e9b3fb19d1cc2a905 | [
"MIT"
] | 26 | 2020-08-05T06:51:23.000Z | 2021-07-12T09:56:57.000Z | plugins/commands/start.py | oktest145/TorrentSearcherBot | d68dba2bd980394baadf0c8e9b3fb19d1cc2a905 | [
"MIT"
] | 2 | 2020-08-18T06:36:55.000Z | 2021-02-03T10:36:50.000Z | plugins/commands/start.py | oktest145/TorrentSearcherBot | d68dba2bd980394baadf0c8e9b3fb19d1cc2a905 | [
"MIT"
] | 65 | 2020-08-17T17:43:04.000Z | 2021-10-02T08:01:59.000Z | from pyrogram import Client, filters
from pyrogram.types import Message
from config import START_MESSAGE
@Client.on_message(filters.command("start"))
async def start_message(c: Client, m: Message):
    await m.reply_text(START_MESSAGE, reply_to_message_id=m.message_id)
| 28.2 | 72 | 0.780142 | 42 | 282 | 5.02381 | 0.47619 | 0.170616 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138298 | 282 | 9 | 73 | 31.333333 | 0.868313 | 0 | 0 | 0 | 0 | 0 | 0.018315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
1be7beb5a2a66aa3735a3574bec2559e0dc9b2cf | 681 | py | Python | jesse/strategies/Test16/__init__.py | noenfugler/jesse | 217a3168620a755c1a9576d9deb27105db7dccf8 | [
"MIT"
] | 3,999 | 2018-11-09T10:38:51.000Z | 2022-03-31T12:29:12.000Z | jesse/strategies/Test16/__init__.py | noenfugler/jesse | 217a3168620a755c1a9576d9deb27105db7dccf8 | [
"MIT"
] | 172 | 2020-04-16T16:19:08.000Z | 2022-03-28T13:28:55.000Z | jesse/strategies/Test16/__init__.py | noenfugler/jesse | 217a3168620a755c1a9576d9deb27105db7dccf8 | [
"MIT"
] | 495 | 2019-03-01T21:48:53.000Z | 2022-03-30T15:35:19.000Z | from jesse.strategies import Strategy
# test_increasing_position_size_after_opening
class Test16(Strategy):
    def should_long(self):
        return self.price < 7

    def go_long(self):
        qty = 1
        self.buy = qty, 7
        self.stop_loss = qty, 5
        self.take_profit = qty, 15

    def update_position(self):
        # buy 1 more at current price
        if self.price == 10:
            self.buy = 1, 10
            self.take_profit = 2, 15
            self.stop_loss = 2, 5

    def go_short(self):
        pass

    def should_cancel(self):
        return False

    def filters(self):
        return []

    def should_short(self):
        return False
| 20.029412 | 45 | 0.580029 | 91 | 681 | 4.175824 | 0.461538 | 0.105263 | 0.063158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042316 | 0.340675 | 681 | 33 | 46 | 20.636364 | 0.804009 | 0.104258 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.318182 | false | 0.045455 | 0.045455 | 0.181818 | 0.590909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
402da257fac91a1822b14500650292149c4bc858 | 608 | py | Python | setup.py | GlennHD/py-suricataparser | 6b19175b05cc2f6af67271c6f74bae5e3dd827e3 | [
"Apache-2.0"
] | 10 | 2021-03-29T21:45:37.000Z | 2022-03-27T15:42:28.000Z | setup.py | GlennHD/py-suricataparser | 6b19175b05cc2f6af67271c6f74bae5e3dd827e3 | [
"Apache-2.0"
] | null | null | null | setup.py | GlennHD/py-suricataparser | 6b19175b05cc2f6af67271c6f74bae5e3dd827e3 | [
"Apache-2.0"
] | 3 | 2021-07-22T21:28:22.000Z | 2022-03-18T17:44:44.000Z | from setuptools import setup
import suricataparser
setup(
    name="suricataparser",
    version=suricataparser.__version__,
    author="Michail Tsyganov",
    url="https://github.com/m-chrome/py-suricataparser",
    description="Suricata rule parser",
    packages=["suricataparser"],
    python_requires=">=3.6",
    classifiers=[
        "License :: OSI Approved :: Apache Software License",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
    ]
)
| 27.636364 | 61 | 0.641447 | 60 | 608 | 6.416667 | 0.616667 | 0.197403 | 0.25974 | 0.27013 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018947 | 0.21875 | 608 | 21 | 62 | 28.952381 | 0.791579 | 0 | 0 | 0 | 0 | 0 | 0.523026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4069f6be3aedc735dc09c87e11aca6aa2f6f5c12 | 17,693 | py | Python | applications/popart/bert/tests/unit/pytorch/full_graph_utils.py | kew96/GraphcoreExamples | 22dc0d7e3755b0a7f16cdf694c6d10c0f91ee8eb | [
"MIT"
] | null | null | null | applications/popart/bert/tests/unit/pytorch/full_graph_utils.py | kew96/GraphcoreExamples | 22dc0d7e3755b0a7f16cdf694c6d10c0f91ee8eb | [
"MIT"
] | null | null | null | applications/popart/bert/tests/unit/pytorch/full_graph_utils.py | kew96/GraphcoreExamples | 22dc0d7e3755b0a7f16cdf694c6d10c0f91ee8eb | [
"MIT"
] | null | null | null | # Copyright (c) 2019 Graphcore Ltd. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
import numpy as np
import torch
import popart
import onnx
from bert_model import ExecutionMode
from tests.utils import run_py, copy_weights_to_torch, run_fwd_model, check_tensors, check_model
from tests import torch_lamb
def get_mapping(config, init=None):
    if init is None:
        init = {}

    if config.execution_mode == ExecutionMode.DEFAULT:
        embedding_proj = {
            "bert.embeddings.word_embeddings.weight": "Embedding/Embedding_Dict",
            "bert.embeddings.position_embeddings.weight": "Embedding/Positional_Dict",
            "bert.embeddings.token_type_embeddings.weight": "Embedding/Segment_Dict",
            "bert.embeddings.LayerNorm.weight": "Embedding/Gamma",
            "bert.embeddings.LayerNorm.bias": "Embedding/Beta",
        }
        init.update(**embedding_proj)

        if config.split_qkv:
            for i in range(config.num_layers):
                layer = {
                    f"bert.encoder.layer.{i}.attention.self.query.weight": f"Layer{i}/Attention/Q",
                    f"bert.encoder.layer.{i}.attention.self.key.weight": f"Layer{i}/Attention/K",
                    f"bert.encoder.layer.{i}.attention.self.value.weight": f"Layer{i}/Attention/V",
                    f"bert.encoder.layer.{i}.attention.output.dense.weight": f"Layer{i}/Attention/Out",
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.weight": f"Layer{i}/Attention/Gamma",
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.bias": f"Layer{i}/Attention/Beta",
                    f"bert.encoder.layer.{i}.intermediate.dense.weight": f"Layer{i}/FF/1/W",
                    f"bert.encoder.layer.{i}.intermediate.dense.bias": f"Layer{i}/FF/1/B",
                    f"bert.encoder.layer.{i}.output.dense.weight": f"Layer{i}/FF/2/W",
                    f"bert.encoder.layer.{i}.output.dense.bias": f"Layer{i}/FF/2/B",
                    f"bert.encoder.layer.{i}.output.LayerNorm.weight": f"Layer{i}/FF/Gamma",
                    f"bert.encoder.layer.{i}.output.LayerNorm.bias": f"Layer{i}/FF/Beta",
                }
                init.update(**layer)
        else:
            for i in range(config.num_layers):
                layer = {
                    f"bert.encoder.layer.{i}.attention.self.query.weight": f"Layer{i}/Attention/QKV",
                    f"bert.encoder.layer.{i}.attention.self.key.weight": f"Layer{i}/Attention/QKV",
                    f"bert.encoder.layer.{i}.attention.self.value.weight": f"Layer{i}/Attention/QKV",
                    f"bert.encoder.layer.{i}.attention.output.dense.weight": f"Layer{i}/Attention/Out",
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.weight": f"Layer{i}/Attention/Gamma",
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.bias": f"Layer{i}/Attention/Beta",
                    f"bert.encoder.layer.{i}.intermediate.dense.weight": f"Layer{i}/FF/1/W",
                    f"bert.encoder.layer.{i}.intermediate.dense.bias": f"Layer{i}/FF/1/B",
                    f"bert.encoder.layer.{i}.output.dense.weight": f"Layer{i}/FF/2/W",
                    f"bert.encoder.layer.{i}.output.dense.bias": f"Layer{i}/FF/2/B",
                    f"bert.encoder.layer.{i}.output.LayerNorm.weight": f"Layer{i}/FF/Gamma",
                    f"bert.encoder.layer.{i}.output.LayerNorm.bias": f"Layer{i}/FF/Beta",
                }
                init.update(**layer)
    else:
        embedding_proj = {
            "bert.embeddings.word_embeddings.weight": "BertModel/Encoder/Embeddings/Token/weight",
            "bert.embeddings.position_embeddings.weight": "BertModel/Encoder/Embeddings/Position/weight",
            "bert.embeddings.token_type_embeddings.weight": "BertModel/Encoder/Embeddings/Segment/weight",
            "bert.embeddings.LayerNorm.weight": "BertModel/Encoder/Embeddings/Norm/Gamma",
            "bert.embeddings.LayerNorm.bias": "BertModel/Encoder/Embeddings/Norm/Beta",
        }
        init.update(**embedding_proj)

        if config.split_qkv:
            for i in range(config.num_layers):
                layer = {
                    f"bert.encoder.layer.{i}.attention.self.query.weight": f'BertModel/Encoder/Layer{i}/Attention/Q',
                    f"bert.encoder.layer.{i}.attention.self.key.weight": f'BertModel/Encoder/Layer{i}/Attention/K',
                    f"bert.encoder.layer.{i}.attention.self.value.weight": f'BertModel/Encoder/Layer{i}/Attention/V',
                    f"bert.encoder.layer.{i}.attention.output.dense.weight": f'BertModel/Encoder/Layer{i}/Attention/Out',
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.weight": f'BertModel/Encoder/Layer{i}/Attention/Norm/Gamma',
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.bias": f'BertModel/Encoder/Layer{i}/Attention/Norm/Beta',
                    f"bert.encoder.layer.{i}.intermediate.dense.weight": f'BertModel/Encoder/Layer{i}/FF/1/Dense/Weight',
                    f"bert.encoder.layer.{i}.intermediate.dense.bias": f'BertModel/Encoder/Layer{i}/FF/1/Dense/Bias',
                    f"bert.encoder.layer.{i}.output.dense.weight": f'BertModel/Encoder/Layer{i}/FF/2/Dense/Weight',
                    f"bert.encoder.layer.{i}.output.dense.bias": f'BertModel/Encoder/Layer{i}/FF/2/Dense/Bias',
                    f"bert.encoder.layer.{i}.output.LayerNorm.weight": f'BertModel/Encoder/Layer{i}/FF/Norm/Gamma',
                    f"bert.encoder.layer.{i}.output.LayerNorm.bias": f'BertModel/Encoder/Layer{i}/FF/Norm/Beta',
                }
                init.update(**layer)
        else:
            for i in range(config.num_layers):
                layer = {
                    f"bert.encoder.layer.{i}.attention.self.query.weight": f'BertModel/Encoder/Layer{i}/Attention/QKV',
                    f"bert.encoder.layer.{i}.attention.self.key.weight": f'BertModel/Encoder/Layer{i}/Attention/QKV',
                    f"bert.encoder.layer.{i}.attention.self.value.weight": f'BertModel/Encoder/Layer{i}/Attention/QKV',
                    f"bert.encoder.layer.{i}.attention.output.dense.weight": f'BertModel/Encoder/Layer{i}/Attention/Out',
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.weight": f'BertModel/Encoder/Layer{i}/Attention/Norm/Gamma',
                    f"bert.encoder.layer.{i}.attention.output.LayerNorm.bias": f'BertModel/Encoder/Layer{i}/Attention/Norm/Beta',
                    f"bert.encoder.layer.{i}.intermediate.dense.weight": f'BertModel/Encoder/Layer{i}/FF/1/Dense/Weight',
                    f"bert.encoder.layer.{i}.intermediate.dense.bias": f'BertModel/Encoder/Layer{i}/FF/1/Dense/Bias',
                    f"bert.encoder.layer.{i}.output.dense.weight": f'BertModel/Encoder/Layer{i}/FF/2/Dense/Weight',
                    f"bert.encoder.layer.{i}.output.dense.bias": f'BertModel/Encoder/Layer{i}/FF/2/Dense/Bias',
                    f"bert.encoder.layer.{i}.output.LayerNorm.weight": f'BertModel/Encoder/Layer{i}/FF/Norm/Gamma',
                    f"bert.encoder.layer.{i}.output.LayerNorm.bias": f'BertModel/Encoder/Layer{i}/FF/Norm/Beta',
                }
                init.update(**layer)
    return init
def get_transform(config, init=None):
    if init is None:
        init = {}

    def q_transform(arr):
        return arr[:, 0:config.hidden_size].T

    def k_transform(arr):
        return arr[:, config.hidden_size:config.hidden_size * 2].T

    def v_transform(arr):
        return arr[:, config.hidden_size * 2:config.hidden_size * 3].T

    if config.split_qkv:
        for i in range(config.num_layers):
            layer = {
                f"bert.encoder.layer.{i}.attention.self.query.weight": np.transpose,
                f"bert.encoder.layer.{i}.attention.self.key.weight": np.transpose,
                f"bert.encoder.layer.{i}.attention.self.value.weight": np.transpose,
                f"bert.encoder.layer.{i}.attention.output.dense.weight": np.transpose,
                f"bert.encoder.layer.{i}.intermediate.dense.weight": np.transpose,
                f"bert.encoder.layer.{i}.output.dense.weight": np.transpose,
            }
            init.update(**layer)
    else:
        for i in range(config.num_layers):
            layer = {
                f"bert.encoder.layer.{i}.attention.self.query.weight": q_transform,
                f"bert.encoder.layer.{i}.attention.self.key.weight": k_transform,
                f"bert.encoder.layer.{i}.attention.self.value.weight": v_transform,
                f"bert.encoder.layer.{i}.attention.output.dense.weight": np.transpose,
                f"bert.encoder.layer.{i}.intermediate.dense.weight": np.transpose,
                f"bert.encoder.layer.{i}.output.dense.weight": np.transpose,
            }
            init.update(**layer)
    return init
def fwd_graph(popart_model, torch_model, mode, mapping=None, transform=None, replication_factor=1, replicated_tensor_sharding=False):
    # ------------------- PopART --------------------
    config = popart_model.config
    builder = popart_model.builder

    sequence_info = popart.TensorInfo(
        "UINT32", [config.micro_batch_size * config.sequence_length])
    indices = builder.addInputTensor(sequence_info)
    positions = builder.addInputTensor(sequence_info)
    segments = builder.addInputTensor(sequence_info)
    data = {
        indices: np.random.randint(
            0, config.vocab_length, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32),
        positions: np.random.randint(
            0, config.sequence_length, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32),
        segments: np.random.randint(
            0, 2, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32)
    }

    user_options = {}
    if mode == ExecutionMode.PHASED:
        user_options = {
            "batchSerializationFactor": 1,
            "executionPhases": popart_model.total_execution_phases
        }
        output = popart_model(indices, positions, segments)
        ipus = 2
    else:
        output = popart_model.build_graph(indices, positions, segments)
        ipus = popart_model.total_ipus

    proto = builder.getModelProto()

    outputs, _ = run_py(proto,
                        data,
                        output,
                        user_options=user_options,
                        execution_mode=mode,
                        replication_factor=replication_factor,
                        replicated_tensor_sharding=replicated_tensor_sharding,
                        ipus=ipus)

    # ----------------- PopART -> PyTorch ----------------
    proto = onnx.load_model_from_string(proto)

    inputs = {
        "input_ids": data[indices].reshape(replication_factor * config.micro_batch_size,
                                           config.sequence_length).astype(np.int32),
        "position_ids": data[positions].reshape(replication_factor * config.micro_batch_size,
                                                config.sequence_length).astype(np.int32),
        "token_type_ids": data[segments].reshape(replication_factor * config.micro_batch_size,
                                                 config.sequence_length).astype(np.int32)
    }

    torch_to_onnx = get_mapping(config, init=mapping)
    transform_weights = get_transform(config, init=transform)

    # ------------------- PyTorch -------------------------
    # Turn off dropout
    torch_model.eval()

    copy_weights_to_torch(torch_model, proto,
                          torch_to_onnx, transform_weights)
    torch_outputs = run_fwd_model(inputs, torch_model)

    check_tensors(torch_outputs, outputs)
def bwd_graph(popart_model,
              torch_model,
              mode,
              popart_loss_fn,
              torch_loss_fn,
              mapping=None,
              transform=None,
              replication_factor=1,
              replicated_tensor_sharding=False,
              opt_type="SGD"):
    np.random.seed(1984)
    random.seed(1984)
    torch.manual_seed(1984)

    # ------------------- PopART --------------------
    config = popart_model.config
    builder = popart_model.builder

    sequence_info = popart.TensorInfo(
        "UINT32", [config.micro_batch_size * config.sequence_length])
    indices = builder.addInputTensor(sequence_info)
    positions = builder.addInputTensor(sequence_info)
    segments = builder.addInputTensor(sequence_info)
    data = {
        indices: np.random.randint(
            0, config.vocab_length, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32),
        positions: np.random.randint(
            0, config.sequence_length, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32),
        segments: np.random.randint(
            0, 2, (replication_factor, config.micro_batch_size * config.sequence_length)).astype(np.uint32)
    }

    num_reps = 5
    user_options = {}
    if mode == ExecutionMode.PHASED:
        user_options = {
            "batchSerializationFactor": 1,
            "executionPhases": popart_model.total_execution_phases
        }
        output = popart_model(indices, positions, segments)
        ipus = 2
    else:
        output = popart_model.build_graph(indices, positions, segments)
        ipus = popart_model.total_ipus

    loss = popart_loss_fn(output)

    proto = builder.getModelProto()

    if opt_type == "SGD":
        optimizer = popart.ConstSGD(1e-3)
    elif opt_type == "LAMB":
        optMap = {
            "defaultLearningRate": (1e-3, True),
            "defaultBeta1": (0.9, True),
            "defaultBeta2": (0.999, True),
            "defaultWeightDecay": (0.0, True),
            "maxWeightNorm": (10.0, True),
            "defaultEps": (1e-8, True),
            "lossScaling": (1.0, True),
        }
        optimizer = popart.Adam(optMap,
                                mode=popart.AdamMode.Lamb)
    elif opt_type == "LAMB_NO_BIAS":
        optMap = {
            "defaultLearningRate": (1, False),
            "defaultBeta1": (0, False),
            "defaultBeta2": (0, False),
            "defaultWeightDecay": (0.0, False),
            "defaultEps": (1e-8, False),
            "lossScaling": (1.0, False),
        }
        optimizer = popart.Adam(optMap,
                                mode=popart.AdamMode.LambNoBias)
    else:
        raise ValueError(f"Unknown opt_type={opt_type}")

    patterns = popart.Patterns()
    if mode == ExecutionMode.PHASED:
        patterns.enablePattern("TiedGatherPattern", False)
        patterns.enablePattern("SparseAccumulatePattern", False)

    outputs, post_proto = run_py(proto,
                                 data,
                                 output,
                                 loss=loss,
                                 optimizer=optimizer,
                                 user_options=user_options,
                                 execution_mode=mode,
                                 patterns=patterns,
                                 replication_factor=replication_factor,
                                 replicated_tensor_sharding=replicated_tensor_sharding,
                                 ipus=ipus,
                                 num_reps=num_reps)

    # ----------------- PopART -> PyTorch ----------------
    proto = onnx.load_model_from_string(proto)

    inputs = {
        "input_ids": data[indices].reshape(replication_factor * config.micro_batch_size, config.sequence_length).astype(np.int32),
        "position_ids": data[positions].reshape(replication_factor * config.micro_batch_size, config.sequence_length).astype(np.int32),
        "token_type_ids": data[segments].reshape(replication_factor * config.micro_batch_size, config.sequence_length).astype(np.int32)
    }

    torch_to_onnx = get_mapping(config, init=mapping)
    transform_weights = get_transform(config, init=transform)

    # ------------------- PyTorch -------------------------
    # Turn off dropout
    torch_model.eval()

    copy_weights_to_torch(torch_model, proto,
                          torch_to_onnx, transform_weights)

    if opt_type == "SGD":
        optim = torch.optim.SGD(torch_model.parameters(), 1e-3,
                                weight_decay=0.0, momentum=0.0)
    elif opt_type == "LAMB":
        optim = torch_lamb.Lamb(torch_model.parameters(),
                                lr=1e-3, weight_decay=0.0, biasCorrection=True)

    for _ in range(num_reps):
        torch_outputs = torch_model(
            **{k: torch.from_numpy(t).long() for k, t in inputs.items()})
        torch_loss = torch_loss_fn(torch_outputs)
        torch_loss.backward()
        optim.step()
        optim.zero_grad()

    check_tensors([output.detach().numpy()
                   for output in torch_outputs], outputs, margin=1.5e-06)
    check_model(torch_model, post_proto,
                torch_to_onnx, transform_weights,
                margin=5e-5)
# === recipes/Python/577944_Random_Binary_List/recipe-577944.py (tdiprima/code, MIT) ===
from random import *
randBinList = lambda n: [randint(0,1) for b in range(1,n+1)]
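A quick, self-contained sanity check of the recipe above (illustrative only; it uses an explicit `randint` import rather than the star import, and `_` for the unused loop variable):

```python
from random import randint

# Same recipe as above: a list of n random bits.
randBinList = lambda n: [randint(0, 1) for _ in range(n)]

bits = randBinList(8)
print(len(bits), set(bits) <= {0, 1})  # 8 True
```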
# === src/gretel_synthetics/errors.py (DLPerf/gretel-synthetics, Apache-2.0) ===
"""
Custom error classes
"""
class GenerationError(Exception):
    pass


class TooManyInvalidError(RuntimeError):
    pass
# === doc/source/image/brightness.py (ppawlak/pystacia, MIT) ===
from os.path import dirname, join
from pystacia import lena
dest = join(dirname(__file__), '../_static/generated')
image = lena(128)
image.brightness(-1)
image.write(join(dest, 'lena_brightness-1.jpg'))
image.close()
image = lena(128)
image.brightness(-0.6)
image.write(join(dest, 'lena_brightness-0.6.jpg'))
image.close()
image = lena(128)
image.brightness(-0.25)
image.write(join(dest, 'lena_brightness-0.25.jpg'))
image.close()
image = lena(128)
image.brightness(0.25)
image.write(join(dest, 'lena_brightness0.25.jpg'))
image.close()
image = lena(128)
image.brightness(0.75)
image.write(join(dest, 'lena_brightness0.75.jpg'))
image.close()
# === project/blog/templatetags/str_to_lowercase.py (ivanprytula/django-celery-telegram-api, MIT) ===
from django import template
from django.template.defaultfilters import stringfilter
register = template.Library()
@register.filter(name='str_to_lowercase')
@stringfilter
def string_value_to_lowercase(value):
    return value.lower()
# === code/5/function_types.py (TeamLab/introduction_to_pythoy_TEAMLAB_MOOC, MIT) ===
# def a_calculateRectangleArea():
# print (5 * 7)
# def b_calculateRectangleArea(x, y):
# print (x * y)
# print(b_calculateRectangleArea(4, 8))
# def c_calculateRectangleArea():
# return 5 * 7
def d_calculateRectangleArea(x, y):
    print("Inside the function")  # translated from Korean: "함수 안에 있습니다"
    return x * y


print(d_calculateRectangleArea(5, 7))
# === db.py (TitusKirch/justgamingbot, MIT) ===
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from dotenv import load_dotenv
load_dotenv()
# setup database engine, create tables and setup session
db_engine = create_engine('mysql+mysqldb://' + os.getenv('DATABASE_USER') + ':' + os.getenv('DATABASE_PASSWORD') +
                          '@' + os.getenv('DATABASE_HOST') + '/' + os.getenv('DATABASE_NAME'),
                          encoding="utf8", echo=False)
Session = sessionmaker(bind=db_engine)
db_session = Session()
# === tests/conftest.py (nxet/skidcoinz-factory, MIT) ===
import pytest
import brownie
#
# utils
#
@pytest.fixture(scope='module')
def deployer(accounts):
    return accounts[-1]


@pytest.fixture(scope='module')
def UniswapV2Pair(pm):
    def fn(address):
        uniPair = pm('Uniswap/v2-core@1.0.1').UniswapV2Pair
        return uniPair.at(address)
    return fn


#
# v1
#
from v1.conftest import Config as _ConfigV1
from v1.conftest import deploy_fixture as deploy_fixture_v1


@pytest.fixture(scope='module')
def ConfigV1():
    return _ConfigV1


@pytest.fixture(scope='module')
def ContractFixtureV1(GenericSkidCoinV1, deployer):
    return deploy_fixture_v1(GenericSkidCoinV1, deployer)


#
# v2
#
from v2.conftest import Config as _ConfigV2
from v2.conftest import deploy_fixture as deploy_fixture_v2


@pytest.fixture(scope='module')
def ConfigV2():
    return _ConfigV2


@pytest.fixture(scope='module')
def ContractFixtureV2(GenericSkidCoinV2, deployer):
    return deploy_fixture_v2(GenericSkidCoinV2, deployer)


#
# apply fn_isolation to all future tests
#
@pytest.fixture(autouse=True)
def isolation(fn_isolation):
    pass
# === src/game/parents/base_objects/exit.py (gtaylor/dott, BSD-3-Clause) ===
"""
Contains exit-related stuff.
"""
from src.game.parents.base_objects.base import BaseObject
class ExitObject(BaseObject):
    """
    An 'Exit' is used for moving from one location to another. The command
    handler checks a player's location for an exit's name/alias that matches
    the user's input. If a match is found, the player moves to the exit's
    destination.
    """

    #
    ## Begin properties.
    #

    def get_destination(self):
        """
        Returns the object's destination.

        :rtype: BaseObject or ``None``.
        :returns: A reference to the exit's destination BaseObject. If no
            destination is set, or the destination has been destroyed, this
            returns ``None``.
        :raises: NoSuchObject if the ID can't be found in the DB.
        """
        return self._object_store.get_object(self.destination_id)

    def set_destination(self, obj_or_id):
        """
        Sets the object's destination.

        :type obj_or_id: int or BaseObject
        :param obj_or_id: The new destination for the object in ID or
            BaseObject instance form.
        """
        self._generic_baseobject_to_id_property_setter('destination_id', obj_or_id)

    destination = property(get_destination, set_destination)

    @property
    def base_type(self):
        """
        Returns this object's type lineage.

        :rtype: str
        :returns: ``'exit'``
        """
        return 'exit'

    #
    ## Begin methods
    #

    def pass_object_through(self, obj):
        """
        Attempts to pass an object through this exit. Takes into consideration
        any additional locks/permissions.

        :param BaseObject obj: The object to attempt to pass through this
            exit.
        """
        if not self.destination:
            obj.emit_to('That exit leads to nowhere.')
            return

        # Move the object on through to destination.
        obj.move_to(self.destination)
# === ALGORITHM/large.py (Nerdcode/NEWTOHACK, MIT) ===
# Python program to find largest
# number in a list
# list of numbers
list1 = [10, 20, 4, 45, 99]
# printing the maximum element
print("Largest element is:", max(list1))
# === simulation/events.py (alexpod1000/WoW-Spell-Casting-Simulation, Apache-2.0) ===
class Event:
    def __init__(self, event_name, sender, params=None):
        """
        Defines an event.

        name: event name
        sender: who has sent the event
        params: dictionary of event parameters
        """
        self.event_name = event_name
        self.sender = sender
        self.params = params

    @property
    def name(self):
        return self.event_name

    def handle(self, simulation, sender, params):
        """
        Method that will be called when the event will need to be handled.
        Defined by a function: (simulation_instance, sender, params) -> resulting_parameters
        """
        pass


class EventEmitter:
    def __init__(self, emitter):
        """
        Emit events with some policy (for implementing automatic event generation).

        emitter: entity that will be the sender of the events emitted by this emitter
        """
        self.emitter = emitter

    def emit(self, model):
        pass


# === malcolm/modules/zebra/blocks/__init__.py (aaron-parsons/pymalcolm, Apache-2.0) ===
from malcolm.yamlutil import make_block_creator, check_yaml_names
zebra_driver_block = make_block_creator(
    __file__, "zebra_driver_block.yaml")

zebra_runnable_block = make_block_creator(
    __file__, "zebra_runnable_block.yaml")
__all__ = check_yaml_names(globals())
# === democelery/security/apps.py (alexdzul/democelery, MIT) ===
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.apps import AppConfig
class SecurityConfig(AppConfig):
    name = 'democelery.security'
# === filter_plugins/sets.py (maruina/ansible-playbooks, MIT) ===
"""To be deleted once https://github.com/ansible/ansible/pull/15062 has been merged"""
def issubset(a, b):
    return set(a) <= set(b)


def issuperset(a, b):
    return set(a) >= set(b)


class FilterModule(object):
    def filters(self):
        return {
            'issubset': issubset,
            'issuperset': issuperset,
        }
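A minimal sketch of how these two filter functions behave when called directly (outside of Ansible, no template engine is involved; the inputs below are arbitrary examples):

```python
def issubset(a, b):
    # Same logic as the plugin above: every element of a is in b.
    return set(a) <= set(b)

def issuperset(a, b):
    # Every element of b is in a (or equal).
    return set(a) >= set(b)

print(issubset([1, 2], [1, 2, 3]))    # True
print(issuperset([1, 2], [1, 2, 3]))  # False
```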
# === dbt_dag_factory/__version__.py (tomasfarias/dbt-dag-factory, MIT) ===
"""The module's version information."""
__author__ = "Tomás Farías Santana"
__copyright__ = "Copyright 2021 Tomás Farías Santana"
__title__ = "dbt-dag-factory"
__version__ = "0.1.0"
# === Chapter 01/Chap01_Example1.53.py (Anancha/Programming-Techniques-using-Python, MIT) ===
a=4
b=2
# addition operator
print(a+b)
# === day_bot/__init__.py (zenmaldives/instagram-bot, MIT) ===
__author__ = 'gipmon'
# === refinery/units/compression/bz2.py (larsborn/refinery, BSD-3-Clause) ===
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import bz2 as bz2_
from .. import arg, Unit
from ...lib.argformats import number
class bz2(Unit):
    """
    BZip2 compression and decompression.
    """
    def __init__(self, level: arg('-l', type=number[1:9], help='compression level preset between 1 and 9') = 9):
        super().__init__(level=level)

    def process(self, data):
        return bz2_.decompress(data)

    def reverse(self, data):
        return bz2_.compress(data, self.args.level)
# === projects/admin.py (akshaya9/fosswebsite, MIT) ===
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.contrib import admin
# Register your models here.
from .models import *
admin.site.register(Project)
admin.site.register(ProjectMembers)
admin.site.register(ProjectScreenShot)
admin.site.register(Language)
# === test/basic/test_geojson.py (deeplook/gdp, MIT) ===
def test_point():
    "Test point."
    from geojson import Point
    p = Point((-115.81, 37.24))
    assert p == {"coordinates": [-115.81, 37.24], "type": "Point"}
# === physiossl/utils/profile.py (larryshaw0079/PhysioSSL, MIT) ===
"""
@Time : 2021/11/26 11:38
@File : profile.py
@Software: PyCharm
@Desc :
"""
import numpy as np
def embedding_visualize(embeddings: np.ndarray, labels: np.ndarray = None, proj_dim: int = 3):
    pass


def knn_monitor():
    pass


def logits_accuracy():
    pass
# === smtpc/__init__.py (msztolcman/smtpc, MIT) ===
__all__ = ('__version__', )
__version__ = '0.9.2'
# === rpm_package_explorer/exceptions.py (chong601/rpm-package-explorer, MIT) ===
# TODO: add/import version support here
class InvalidState(Exception):
    """Used when repomd data reaches an unexpected state"""
    pass


class UnsupportedFileListException(Exception):
    """Used when the file list version is unsupported"""
    def __init__(self, version):
        super().__init__(f'This file list database version is unsupported. Please raise an issue. '
                         f'Currently support version {version} only.')


class UnsupportedPrimaryDatabaseException(Exception):
    """Used when the primary version is unsupported"""
    def __init__(self, version) -> None:
        super().__init__(f'This primary database version is unsupported. Please raise an issue. '
                         f'Currently support version {version} only.')


class UnsupportedOtherDatabaseException(Exception):
    """Used when the other version is unsupported"""
    def __init__(self, version) -> None:
        super().__init__(f'This other database version is unsupported. Please raise an issue. '
                         f'Currently support version {version} only.')
# === jangli/test/list_of_object_test.py (AbhimanyuHK/Json_Object_Conv, MIT) ===
from jangli.list_of_object import ListObject
class A:
    def __init__(self, b):
        self.b = b


def append_test():
    lt = ListObject(A)
    lt.append(A(7))
    print(lt)


def insert_test():
    lt = ListObject(A)
    lt.insert(1, A(8))
    print(lt)
# === 03-sdp-inro/hanoi.py (iproduct/intro-python, Apache-2.0) ===
def hanoi(n, from_tower, to_tower, other_tower):
    if n == 1:  # base case: a single disk moves directly
        print(f'{from_tower} -> {to_tower}')
    else:  # recursive step
hanoi(n - 1, from_tower, other_tower, to_tower)
print(f'{from_tower} -> {to_tower}')
hanoi(n - 1, other_tower, to_tower, from_tower)
hanoi(4, 1, 3, 2)
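For a quick sanity check, the two-disk case traces as follows (the function is reproduced from above so the snippet runs standalone):

```python
def hanoi(n, from_tower, to_tower, other_tower):
    if n == 1:  # base case: move a single disk directly
        print(f'{from_tower} -> {to_tower}')
    else:  # recursive step: move n-1 disks aside, move the largest, move n-1 back on top
        hanoi(n - 1, from_tower, other_tower, to_tower)
        print(f'{from_tower} -> {to_tower}')
        hanoi(n - 1, other_tower, to_tower, from_tower)

hanoi(2, 1, 3, 2)  # prints: 1 -> 2, then 1 -> 3, then 2 -> 3
```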
297ec9262ccd2c9f2563334c49d7df9880bbcc1a | 97 | py | Python | package/__init__.py | shahriyardx/discord-cog-package-template | a85658153846a075501921c682263d13e9194cbd | [
"MIT"
] | null | null | null | package/__init__.py | shahriyardx/discord-cog-package-template | a85658153846a075501921c682263d13e9194cbd | [
"MIT"
] | null | null | null | package/__init__.py | shahriyardx/discord-cog-package-template | a85658153846a075501921c682263d13e9194cbd | [
"MIT"
] | null | null | null | from .cog import CogName
def setup(bot):
bot.add_cog(CogName(bot))
__version__ = "0.0.1"
462ebf84062bebd84d94ee7518cd884e221e55ed | 601 | py | Python | flute/apps/amber/const.py | gumupaier/flute | 2a6816355ead2cab26a606cae304d275216eaa16 | [
"Apache-2.0"
] | 4 | 2020-10-30T12:00:28.000Z | 2021-05-10T05:51:13.000Z | flute/apps/amber/const.py | gumupaier/flute | 2a6816355ead2cab26a606cae304d275216eaa16 | [
"Apache-2.0"
] | null | null | null | flute/apps/amber/const.py | gumupaier/flute | 2a6816355ead2cab26a606cae304d275216eaa16 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# @Time : 2020/12/11 10:13 AM
# @File : const.py
VERSION_IMAGE_MAP = {
'storaged': {
'nightly': 'vesoft/nebula-storaged:nightly',
'1.2': 'vesoft/nebula-storaged:v1.2.0',
'1.1': 'vesoft/nebula-storaged:v1.1.0'
},
'metad': {
'nightly': 'vesoft/nebula-metad:nightly',
'1.2': 'vesoft/nebula-metad:v1.2.0',
'1.1': 'vesoft/nebula-metad:v1.1.0'
},
'graphd': {
'nightly': 'vesoft/nebula-graphd:nightly',
'1.2': 'vesoft/nebula-graphd:v1.2.0',
'1.1': 'vesoft/nebula-graphd:v1.1.0'
}
}
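As a sketch of how this map might be consumed (the `image_for` helper is hypothetical, and only a subset of the map is reproduced so the snippet runs standalone):

```python
# Subset of VERSION_IMAGE_MAP reproduced from const.py above.
VERSION_IMAGE_MAP = {
    'graphd': {
        'nightly': 'vesoft/nebula-graphd:nightly',
        '1.2': 'vesoft/nebula-graphd:v1.2.0',
        '1.1': 'vesoft/nebula-graphd:v1.1.0',
    },
}


def image_for(component, version):
    """Hypothetical helper: resolve a Docker image tag; raises KeyError for unknown pairs."""
    return VERSION_IMAGE_MAP[component][version]


print(image_for('graphd', '1.2'))  # vesoft/nebula-graphd:v1.2.0
```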
4630f29f361645b2935ec2cedfaa7ca1a50e4c72 | 6,123 | py | Python | conans/test/functional/editable/graph_related_test.py | Ignition/conan | 84a38590987ecb9f3011f73babc95598ea62535f | [
"MIT"
] | null | null | null | conans/test/functional/editable/graph_related_test.py | Ignition/conan | 84a38590987ecb9f3011f73babc95598ea62535f | [
"MIT"
] | null | null | null | conans/test/functional/editable/graph_related_test.py | Ignition/conan | 84a38590987ecb9f3011f73babc95598ea62535f | [
"MIT"
] | null | null | null | # coding=utf-8
import os
import textwrap
import unittest
from parameterized import parameterized
from conans.model.ref import ConanFileReference
from conans.test.utils.tools import TestClient, TestServer
conanfile_base = textwrap.dedent("""\
from conans import ConanFile
class APck(ConanFile):
{body}
""")
conanfile = conanfile_base.format(body="pass")
conan_package_layout = textwrap.dedent("""\
[includedirs]
src/include
""")
class EmptyCacheTestMixin(object):
""" Will check that the cache after using the link is empty """
def setUp(self):
self.servers = {"default": TestServer()}
self.t = TestClient(servers=self.servers, users={"default": [("lasote", "mypass")]},
path_with_spaces=False)
self.ref = ConanFileReference.loads('lib/version@user/channel')
self.assertFalse(os.path.exists(self.t.cache.base_folder(self.ref)))
def tearDown(self):
self.t.run('editable remove {}'.format(self.ref))
self.assertFalse(self.t.cache.installed_as_editable(self.ref))
class ExistingCacheTestMixin(object):
""" Will check that the cache after using the link contains the same data as before """
def setUp(self):
self.servers = {"default": TestServer()}
self.t = TestClient(servers=self.servers, users={"default": [("lasote", "mypass")]},
path_with_spaces=False)
self.ref = ConanFileReference.loads('lib/version@user/channel')
self.t.save(files={'conanfile.py': conanfile})
self.t.run('create . {}'.format(self.ref))
self.assertTrue(os.path.exists(self.t.cache.base_folder(self.ref)))
self.assertListEqual(sorted(os.listdir(self.t.cache.base_folder(self.ref))),
['build', 'export', 'export_source', 'locks', 'metadata.json',
'metadata.json.lock', 'package', 'source'])
def tearDown(self):
self.t.run('editable remove {}'.format(self.ref))
self.assertTrue(os.path.exists(self.t.cache.base_folder(self.ref)))
self.assertListEqual(sorted(os.listdir(self.t.cache.base_folder(self.ref))),
['build', 'export', 'export_source', 'locks', 'metadata.json',
'metadata.json.lock', 'package', 'source'])
class RelatedToGraphBehavior(object):
def test_do_nothing(self):
self.t.save(files={'conanfile.py': conanfile,
"mylayout": conan_package_layout, })
self.t.run('editable add . {}'.format(self.ref))
self.assertTrue(self.t.cache.installed_as_editable(self.ref))
@parameterized.expand([(True, ), (False, )])
def test_install_requirements(self, update):
# Create a parent and remove it from cache
ref_parent = ConanFileReference.loads("parent/version@lasote/channel")
self.t.save(files={'conanfile.py': conanfile})
self.t.run('create . {}'.format(ref_parent))
self.t.run('upload {} --all'.format(ref_parent))
self.t.run('remove {} --force'.format(ref_parent))
self.assertFalse(os.path.exists(self.t.cache.base_folder(ref_parent)))
# Create our project and link it
self.t.save(files={'conanfile.py':
conanfile_base.format(body='requires = "{}"'.format(ref_parent)),
"mylayout": conan_package_layout, })
self.t.run('editable add . {}'.format(self.ref))
# Install our project and check that everything is in place
update = ' --update' if update else ''
self.t.run('install {}{}'.format(self.ref, update))
self.assertIn(" lib/version@user/channel from user folder - Editable", self.t.out)
self.assertIn(" parent/version@lasote/channel from 'default' - Downloaded",
self.t.out)
self.assertTrue(os.path.exists(self.t.cache.base_folder(ref_parent)))
@parameterized.expand([(True,), (False,)])
def test_middle_graph(self, update):
# Create a parent and remove it from cache
ref_parent = ConanFileReference.loads("parent/version@lasote/channel")
self.t.save(files={'conanfile.py': conanfile})
self.t.run('create . {}'.format(ref_parent))
self.t.run('upload {} --all'.format(ref_parent))
self.t.run('remove {} --force'.format(ref_parent))
self.assertFalse(os.path.exists(self.t.cache.base_folder(ref_parent)))
# Create our project and link it
path_to_lib = os.path.join(self.t.current_folder, 'lib')
self.t.save(files={'conanfile.py':
conanfile_base.format(body='requires = "{}"'.format(ref_parent)),
"mylayout": conan_package_layout, },
path=path_to_lib)
self.t.run('editable add "{}" {}'.format(path_to_lib, self.ref))
        # Create a child and install it (in another folder, do not override the link!)
path_to_child = os.path.join(self.t.current_folder, 'child')
ref_child = ConanFileReference.loads("child/version@lasote/channel")
self.t.save(files={'conanfile.py': conanfile_base.
format(body='requires = "{}"'.format(self.ref)), },
path=path_to_child)
update = ' --update' if update else ''
self.t.run('create "{}" {} {}'.format(path_to_child, ref_child, update))
child_remote = 'No remote' if update else 'Cache'
self.assertIn(" child/version@lasote/channel from local cache - {}".format(child_remote),
self.t.out)
self.assertIn(" lib/version@user/channel from user folder - Editable", self.t.out)
self.assertIn(" parent/version@lasote/channel from 'default' - Downloaded", self.t.out)
self.assertTrue(os.path.exists(self.t.cache.base_folder(ref_parent)))
class CreateLinkOverEmptyCache(EmptyCacheTestMixin, RelatedToGraphBehavior, unittest.TestCase):
pass
class CreateLinkOverExistingCache(ExistingCacheTestMixin, RelatedToGraphBehavior, unittest.TestCase):
pass
467e579db0cf8efc321b38a34d5b42cf4ccd3fd2 | 3,224 | py | Python | extractstopwords.py | npedrazzini/PreModernSlavic-NLP | ab17849c5b2dfb8f4733db13d2259c97b9180974 | [
"MIT"
] | 1 | 2021-09-20T08:41:24.000Z | 2021-09-20T08:41:24.000Z | extractstopwords.py | npedrazzini/PreModernSlavic-NLP | ab17849c5b2dfb8f4733db13d2259c97b9180974 | [
"MIT"
] | null | null | null | extractstopwords.py | npedrazzini/PreModernSlavic-NLP | ab17849c5b2dfb8f4733db13d2259c97b9180974 | [
"MIT"
] | 1 | 2021-08-07T08:34:07.000Z | 2021-08-07T08:34:07.000Z | import pandas as pd
df = pd.read_csv('/Users/nilo/Desktop/_TOROT_/stopwords.csv')
stopwordsdf = pd.DataFrame(df, columns=['lemma_id','pos','form','lemma'])
# The POS tags to extract; each tag gets its own printed stopword set.
pos_tags = ['C-', 'Dq', 'Du', 'G-', 'I-', 'Ma', 'Pd', 'Pi', 'Pk', 'Pp',
            'Pr', 'Ps', 'Pt', 'Px', 'R-', 'V-']

for pos in pos_tags:
    # Select forms for this part-of-speech tag, skipping unresolved lemmas.
    subset = stopwordsdf[(stopwordsdf["pos"] == pos) & (stopwordsdf["lemma"] != 'FIXME')]
    forms = sorted(subset["form"].unique())
    print(f'{pos} =')
    print("set(\n\"\"\"\n" + " ".join(forms) + "\n\"\"\".split()\n)")
468fd50fd813893776c245dff35b32e1ca4bf944 | 123 | py | Python | Django_Developer/HyperJob Agency/resume/urls.py | dimk00z/JetBrains-Academy | 04304e6221f464292a2687d5b0a0260f8b557da4 | [
"MIT"
] | null | null | null | Django_Developer/HyperJob Agency/resume/urls.py | dimk00z/JetBrains-Academy | 04304e6221f464292a2687d5b0a0260f8b557da4 | [
"MIT"
] | null | null | null | Django_Developer/HyperJob Agency/resume/urls.py | dimk00z/JetBrains-Academy | 04304e6221f464292a2687d5b0a0260f8b557da4 | [
"MIT"
] | null | null | null | from django.urls import path
from .views import ResumeListView
urlpatterns = [
path("", ResumeListView.as_view()),
]
469c3c1b619806380cbea384ee1a3a2113c10a84 | 2,167 | py | Python | src/cp_request/visitor.py | aquariumbio/experiment-request | 026e3eb767c47f980a35004e9ded5e4e33553693 | [
"MIT"
] | null | null | null | src/cp_request/visitor.py | aquariumbio/experiment-request | 026e3eb767c47f980a35004e9ded5e4e33553693 | [
"MIT"
] | null | null | null | src/cp_request/visitor.py | aquariumbio/experiment-request | 026e3eb767c47f980a35004e9ded5e4e33553693 | [
"MIT"
] | null | null | null | import abc
from cp_request import (
Attribute,
Control,
ExperimentalRequest,
Measurement,
NamedEntity,
Sample,
Treatment,
Unit,
Value,
Version
)
from cp_request.design import (
DesignBlock,
BlockReference,
GenerateBlock,
ProductBlock,
ReplicateBlock,
SumBlock,
SubjectReference,
TreatmentReference,
TreatmentValueReference
)
class RequestVisitor(abc.ABC):
"""
Abstract visitor for structured request classes.
Includes stubbed visit methods for each class, with each simply returning.
To create a visitor, inherit from this class, define an initializer, and
each appropriate visit method.
"""
@abc.abstractmethod
def __init__(self):
pass
def visit_design_block(self, block: DesignBlock):
return
def visit_product_block(self, block: ProductBlock):
return
def visit_block_reference(self, reference: BlockReference):
return
def visit_sum_block(self, block: SumBlock):
return
def visit_subject_reference(self, reference: SubjectReference):
return
def visit_treatment_reference(self, reference: TreatmentReference):
return
def visit_treatment_value_reference(self,
reference: TreatmentValueReference):
return
def visit_replicate_block(self, block: ReplicateBlock):
return
def visit_generate_block(self, block: GenerateBlock):
return
def visit_attribute(self, attribute: Attribute):
return
def visit_version(self, version: Version):
return
def visit_treatment(self, treatment: Treatment):
return
def visit_sample(self, sample: Sample):
return
def visit_control(self, control: Control):
return
def visit_measurement(self, measurement: Measurement):
return
def visit_unit(self, unit: Unit):
return
def visit_value(self, value: Value):
return
def visit_named_entity(self, entity: NamedEntity):
return
def visit_experiment(self, experiment: ExperimentalRequest):
return
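As a sketch of the usage the docstring describes (the `TreatmentCounter` subclass is hypothetical, and a trimmed stand-in for `RequestVisitor` is included so the snippet runs standalone):

```python
import abc


class RequestVisitor(abc.ABC):
    # Trimmed stand-in mirroring the RequestVisitor above, kept minimal for the sketch.
    @abc.abstractmethod
    def __init__(self):
        pass

    def visit_treatment(self, treatment):
        return


class TreatmentCounter(RequestVisitor):
    """Hypothetical visitor that counts the treatments it is shown."""

    def __init__(self):
        self.count = 0

    def visit_treatment(self, treatment):
        self.count += 1


counter = TreatmentCounter()
counter.visit_treatment("treatment-1")  # stand-in for a Treatment object
counter.visit_treatment("treatment-2")
print(counter.count)  # 2
```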
469f3dd1590c6f187f62bbab790e848b4bd2722f | 151 | py | Python | Jan17/SimpleDictionary2.py | RoyalBiharCoders/bootcampJan2021 | 31d1cae15d9dca3d0a57157b3ee115c575c7c2b6 | [
"Apache-2.0"
] | 3 | 2021-01-22T09:16:34.000Z | 2021-02-06T10:07:40.000Z | Jan17/SimpleDictionary2.py | RoyalBiharCoders/bootcampJan2021 | 31d1cae15d9dca3d0a57157b3ee115c575c7c2b6 | [
"Apache-2.0"
] | null | null | null | Jan17/SimpleDictionary2.py | RoyalBiharCoders/bootcampJan2021 | 31d1cae15d9dca3d0a57157b3ee115c575c7c2b6 | [
"Apache-2.0"
] | 2 | 2021-01-10T15:46:35.000Z | 2021-02-01T13:24:57.000Z | #Defining a simple dictionary where key is "A" and its value is a list i.e. ["apple", "animal"]
myDict = {"A": ["apple", "animal"]}
print(myDict["A"])
46a72b918890beea76e84fb418bc485830acbc7a | 208 | py | Python | yunionclient/api/dnsrecords.py | tb365/mcclient_python | 06647e7496b9e2c3aeb5ade1276c81871063159b | [
"Apache-2.0"
] | 3 | 2021-09-22T11:34:08.000Z | 2022-03-13T04:55:17.000Z | yunionclient/api/dnsrecords.py | xhw20190116/python_yunionsdk | eb7c8c08300d38dac204ec4980a775abc9c7083a | [
"Apache-2.0"
] | 13 | 2019-06-06T08:25:41.000Z | 2021-07-16T07:26:10.000Z | yunionclient/api/dnsrecords.py | xhw20190116/python_yunionsdk | eb7c8c08300d38dac204ec4980a775abc9c7083a | [
"Apache-2.0"
] | 7 | 2019-03-31T05:43:36.000Z | 2021-03-04T09:59:05.000Z | from yunionclient.common import base
class DNSRecordManager(base.StandaloneManager):
keyword = 'dnsrecord'
keyword_plural = 'dnsrecords'
_columns = ['ID', 'Name', 'Records', 'TTL', 'is_public']
46ac2c4badf766b0620ffb58b32db74c2321e9f7 | 1,715 | py | Python | sample/helpers.py | itsbetuliremsedef/vulnerability-analyzer | 94f6c7ba4fe823fba9b814142d01a3cbde15372c | [
"BSD-2-Clause"
] | null | null | null | sample/helpers.py | itsbetuliremsedef/vulnerability-analyzer | 94f6c7ba4fe823fba9b814142d01a3cbde15372c | [
"BSD-2-Clause"
] | null | null | null | sample/helpers.py | itsbetuliremsedef/vulnerability-analyzer | 94f6c7ba4fe823fba9b814142d01a3cbde15372c | [
"BSD-2-Clause"
] | null | null | null | import json
### SWAGGER DOCS DETAILS
__API_LABEL = "Scout24"
__API_DEFINITION = "dependency vulnerability check"
### JSON KEY DETAILS
__JSON_KEY_VULS = "vulnerabilities"
__JSON_KEY_DEPS = "dependencies"
__JSON_KEY_NAME = "name"
__JSON_KEY_SEVERITY = "severity"
__JSON_KEY_FILE_NAME = "fileName"
__JSON_KEY_OUT_NUM_VULS = "num_vulnerabilities"
__JSON_KEY_OUT_FILE_NAMES = "file_names"
__JSON_KEY_INDEX_SPLITTER = "####"
__JSON_KEY_OUTPUT_NAME = "vulnerability_name"
### ENDPOINT DETAILS
__ENDPOINT_EXERCISE1 = "/exercise1/about-owasp"
__ENDPOINT_EXERCISE2 = "/exercise2/filter-sort"
__ENDPOINT_EXERCISE3 = "/exercise3/histogram"
### JSON FILE
__FILEPATH = "./file/report.json"
### EXERCISE1 DEFINITION FILE IN JSON FORMAT
__FILEPATH_EXERCISE1 = "./file/exercise1(owasp-def).json"
### HELPERS
def api_label():
return __API_LABEL
def api_def():
return __API_DEFINITION
def json_key_v():
return __JSON_KEY_VULS
def json_key_name():
return __JSON_KEY_NAME
def json_key_severity():
return __JSON_KEY_SEVERITY
def json_splitter():
return __JSON_KEY_INDEX_SPLITTER
def json_key_out_name():
return __JSON_KEY_OUTPUT_NAME
def json_key_out_file_names():
return __JSON_KEY_OUT_FILE_NAMES
def json_key_out_num_vul():
return __JSON_KEY_OUT_NUM_VULS
def json_key_file_name():
return __JSON_KEY_FILE_NAME
def json_key_file_deps():
return __JSON_KEY_DEPS
def json_file_path():
return __FILEPATH
def json_file_exercise1_path():
return __FILEPATH_EXERCISE1
def endpoint_exercise1():
return __ENDPOINT_EXERCISE1
def endpoint_exercise2():
return __ENDPOINT_EXERCISE2
def endpoint_exercise3():
return __ENDPOINT_EXERCISE3
46b0b7f85cd8e61cb2a9359f711fe4226d5aaed9 | 160 | py | Python | PiWriter/config.py | GammaGames/piwriter | 101a8ebec5e98a98216d553d04a63fda20f40ff7 | [
"MIT"
] | null | null | null | PiWriter/config.py | GammaGames/piwriter | 101a8ebec5e98a98216d553d04a63fda20f40ff7 | [
"MIT"
] | null | null | null | PiWriter/config.py | GammaGames/piwriter | 101a8ebec5e98a98216d553d04a63fda20f40ff7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from configparser import ConfigParser
def get_config():
config = ConfigParser()
config.read('piwriter.ini')
return config
d3b8bde476ba1d035b74a4cba14959304b5e702e | 813 | py | Python | tests/test_task_processing.py | sobolevn/paasta | 8b87e0b13816c09b3d063b6d3271e6c7627fd264 | [
"Apache-2.0"
] | 1,711 | 2015-11-10T18:04:56.000Z | 2022-03-23T08:53:16.000Z | tests/test_task_processing.py | sobolevn/paasta | 8b87e0b13816c09b3d063b6d3271e6c7627fd264 | [
"Apache-2.0"
] | 1,689 | 2015-11-10T17:59:04.000Z | 2022-03-31T20:46:46.000Z | tests/test_task_processing.py | sobolevn/paasta | 8b87e0b13816c09b3d063b6d3271e6c7627fd264 | [
"Apache-2.0"
] | 267 | 2015-11-10T19:17:16.000Z | 2022-02-08T20:59:52.000Z | # Copyright 2015-2017 Yelp Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# We just want to test that task_processing is available in the virtualenv
def test_import():
from task_processing.task_processor import TaskProcessor
tp = TaskProcessor()
tp.load_plugin("task_processing.plugins.mesos")
313d4337e8c5d5af080d77317ac9375a5683858d | 10,057 | py | Python | pyvisdk/mo/host_datastore_system.py | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null | pyvisdk/mo/host_datastore_system.py | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null | pyvisdk/mo/host_datastore_system.py | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null |
from pyvisdk.base.managed_object_types import ManagedObjectTypes
from pyvisdk.base.base_entity import BaseEntity
import logging
########################################
# Automatically generated, do not edit.
########################################
log = logging.getLogger(__name__)
class HostDatastoreSystem(BaseEntity):
    '''This managed object creates and removes datastores from the host. To a host, a
    datastore is a storage abstraction that is backed by one of several types of
    storage volumes. An ESX Server system automatically discovers the VMFS volume on
    attached Logical Unit Numbers (LUNs) on startup and after re-scanning the host
    bus adapter. Datastores are automatically created. The datastore label is based
    on the VMFS volume label. If there is a conflict with an existing datastore, it
    is made unique by appending a suffix. The VMFS volume label will be
    unchanged. Destroying the datastore removes the partitions that compose the VMFS
    volume. Datastores are never automatically removed because transient storage
    connection outages may occur. They must be removed from the host using this
    interface. See Datastore.'''
def __init__(self, core, name=None, ref=None, type=ManagedObjectTypes.HostDatastoreSystem):
super(HostDatastoreSystem, self).__init__(core, name=name, ref=ref, type=type)
@property
def capabilities(self):
'''Capability vector indicating the available product features.'''
return self.update('capabilities')
@property
def datastore(self):
'''List of datastores on this host.'''
return self.update('datastore')
def ConfigureDatastorePrincipal(self, userName, password=None):
        '''Configures the datastore principal user for the host.
:param userName: Datastore principal user name.
:param password: Optional password for systems that require password for user impersonation.
'''
return self.delegate("ConfigureDatastorePrincipal")(userName, password)
def CreateLocalDatastore(self, name, path):
'''Creates a new local datastore.
:param name: The name of a datastore to create on the local host.
:param path: The file path for a directory in which the virtual machine data will be stored.
'''
return self.delegate("CreateLocalDatastore")(name, path)
def CreateNasDatastore(self, spec):
'''Creates a new network-attached storage datastore.
:param spec: The specification for creating a network-attached storage volume.
'''
return self.delegate("CreateNasDatastore")(spec)
def CreateVmfsDatastore(self, spec):
'''Creates a new VMFS datastore.
:param spec: The specification for creating a datastore backed by a VMFS.
'''
return self.delegate("CreateVmfsDatastore")(spec)
def ExpandVmfsDatastore(self, datastore, spec):
'''Increases the capacity of an existing VMFS datastore by expanding (increasing
the size of) an existing extent of the datastore.
:param datastore: The datastore whose capacity should be increased.
:param spec: The specification describing which extent of the VMFS datastore to expand.
'''
return self.delegate("ExpandVmfsDatastore")(datastore, spec)
def ExtendVmfsDatastore(self, datastore, spec):
'''Increases the capacity of an existing VMFS datastore by adding new extents to
the datastore.
:param datastore: The datastore whose capacity should be increased.
:param spec: The specification describing what extents to add to a VMFS datastore.
'''
return self.delegate("ExtendVmfsDatastore")(datastore, spec)
def QueryAvailableDisksForVmfs(self, datastore=None):
        '''Query to list disks that can be used to contain VMFS datastore extents. If the
        optional parameter name is supplied, queries for disks that can be used to
        contain extents for a VMFS datastore identified by the supplied name.
        Otherwise, the method retrieves disks that can be used to contain new VMFS
        datastores.
:param datastore: The managed object reference of the VMFS datastore you want extents for.
'''
return self.delegate("QueryAvailableDisksForVmfs")(datastore)
def QueryUnresolvedVmfsVolumes(self):
'''Get the list of unbound VMFS volumes. For sharing a volume across hosts, a VMFS
volume is bound to its underlying block device storage. When a low level block
copy is performed to copy or move the VMFS volume, the copied volume will be
unbound.
'''
return self.delegate("QueryUnresolvedVmfsVolumes")()
def QueryVmfsDatastoreCreateOptions(self, devicePath, vmfsMajorVersion=None):
        '''Queries options for creating a new VMFS datastore for a disk. See devicePath.
        :param devicePath: The devicePath of the disk on which datastore creation options are generated. See devicePath.
        :param vmfsMajorVersion: Major version of VMFS to be used for formatting the datastore. If this parameter is not specified, then the default VMFS version for the host is used. See devicePath. (vSphere API 5.0)
'''
return self.delegate("QueryVmfsDatastoreCreateOptions")(devicePath, vmfsMajorVersion)
def QueryVmfsDatastoreExpandOptions(self, datastore):
'''Queries for options for increasing the capacity of an existing VMFS datastore
by expanding (increasing the size of) an existing extent of the datastore.
:param datastore: The datastore to be expanded.
'''
return self.delegate("QueryVmfsDatastoreExpandOptions")(datastore)
def QueryVmfsDatastoreExtendOptions(self, datastore, devicePath, suppressExpandCandidates=None):
'''Queries for options for increasing the capacity of an existing VMFS datastore
by adding new extents using space from the specified disk. See devicePath.
:param datastore: The datastore to be extended.
:param devicePath: The devicePath of the disk on which datastore extension options are generated.
:param suppressExpandCandidates: Indicates whether to exclude options that can also be used for extent expansion. Free space can be used for adding an extent or expanding an existing extent. If this parameter is set to true, the list of options returned will not include free space that can be used for expansion. Since vSphere API 4.0.
'''
return self.delegate("QueryVmfsDatastoreExtendOptions")(datastore, devicePath, suppressExpandCandidates)
def RemoveDatastore(self, datastore):
'''Removes a datastore from a host.
:param datastore: The datastore to be removed.
'''
return self.delegate("RemoveDatastore")(datastore)
def ResignatureUnresolvedVmfsVolume_Task(self, resolutionSpec):
'''Resignature an unbound VMFS volume. To safely enable sharing of the volume
across hosts, a VMFS volume is bound to its underlying block device storage.
When a low level block copy is performed to copy or move the VMFS volume, the
copied volume will be unbound. In order for the VMFS volume to be usable, a
resolution operation is needed to determine whether the VMFS volume should be
treated as a new volume or not and what extents compose that volume in the
event there is more than one unbound volume.
:param resolutionSpec: A data object that describes the disk extents to be used for creating the new VMFS volume.
'''
return self.delegate("ResignatureUnresolvedVmfsVolume_Task")(resolutionSpec)
def UpdateLocalSwapDatastore(self, datastore=None):
'''Choose the localSwapDatastore for this host. Any change to this setting will
affect virtual machines that subsequently power on or resume from a suspended
state at this host, or that migrate to this host while powered on; virtual
machines that are currently powered on at this host will not yet be affected.
:param datastore: The selected datastore. If this argument is unset, then the localSwapDatastore property becomes unset. Otherwise, the host must have read/write access to the indicated datastore.
'''
return self.delegate("UpdateLocalSwapDatastore")(datastore) | 50.537688 | 350 | 0.698021 | 1,247 | 10,057 | 5.615878 | 0.222935 | 0.024275 | 0.035985 | 0.02042 | 0.410253 | 0.400257 | 0.380551 | 0.368556 | 0.355419 | 0.355419 | 0 | 0.000528 | 0.246992 | 10,057 | 199 | 351 | 50.537688 | 0.924204 | 0.634384 | 0 | 0.04878 | 1 | 0 | 0.138603 | 0.088583 | 0 | 0 | 0 | 0 | 0 | 1 | 0.414634 | false | 0.04878 | 0.073171 | 0 | 0.902439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
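All of the methods above route through `self.delegate(name)(args)`. A minimal, self-contained sketch of that delegation pattern is shown below; `StorageDelegate`, its method table, and the `resolve` name are illustrative stand-ins, not the real vSphere SOAP binding:

```python
class StorageDelegate:
    """Hypothetical stand-in that resolves wire-level method names to callables."""

    def _query_available_disks(self, datastore):
        # A real binding would issue a SOAP call; here we just echo the input.
        return ["disk-for-%s" % datastore]

    def resolve(self, name):
        # Map the wire-level method name onto a local implementation.
        table = {"QueryAvailableDisksForVmfs": self._query_available_disks}
        return table[name]


class HostStorageSystemWrapper:
    def __init__(self, delegate_obj):
        # Mirror the wrapper above: self.delegate(name) returns a callable.
        self.delegate = delegate_obj.resolve

    def QueryAvailableDisksForVmfs(self, datastore):
        return self.delegate("QueryAvailableDisksForVmfs")(datastore)


wrapper = HostStorageSystemWrapper(StorageDelegate())
print(wrapper.QueryAvailableDisksForVmfs("datastore-42"))  # → ['disk-for-datastore-42']
```

The indirection keeps the wrapper a thin façade: every public method is one line, and the delegate owns the transport details.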
3158b6e6998e8d5c8e4deb3fdeab7c76580c4f1a | 108 | py | Python | win/devkit/other/pymel/extras/completion/py/maya/app/mayabullet/Trace.py | leegoonz/Maya-devkit | b81fe799b58e854e4ef16435426d60446e975871 | [
"ADSL"
] | 10 | 2018-03-30T16:09:02.000Z | 2021-12-07T07:29:19.000Z | win/devkit/other/pymel/extras/completion/py/maya/app/mayabullet/Trace.py | leegoonz/Maya-devkit | b81fe799b58e854e4ef16435426d60446e975871 | [
"ADSL"
] | null | null | null | win/devkit/other/pymel/extras/completion/py/maya/app/mayabullet/Trace.py | leegoonz/Maya-devkit | b81fe799b58e854e4ef16435426d60446e975871 | [
"ADSL"
] | 9 | 2018-06-02T09:18:49.000Z | 2021-12-20T09:24:35.000Z | def Trace(tag=''):
pass
def TracePrint(strMsg):
pass
_traceEnabled = False
_traceIndent = 0
| 7.2 | 23 | 0.638889 | 12 | 108 | 5.583333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.259259 | 108 | 14 | 24 | 7.714286 | 0.825 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
315b13fc668e1d229343673d8b58e7c2598b4424 | 238 | py | Python | src/refdoc/__init__.py | novopl/sphinx-refdoc | ca26b374bdb20db18801b8db6f909e9118a67864 | [
"MIT"
] | null | null | null | src/refdoc/__init__.py | novopl/sphinx-refdoc | ca26b374bdb20db18801b8db6f909e9118a67864 | [
"MIT"
] | 14 | 2017-10-11T10:22:34.000Z | 2021-06-01T22:37:14.000Z | src/refdoc/__init__.py | novopl/sphinx-refdoc | ca26b374bdb20db18801b8db6f909e9118a67864 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
This module implements python reference documentation generator for sphinx.
"""
from __future__ import absolute_import
from .logic import generate_docs
__version__ = '0.3.1'
__all__ = [
'generate_docs'
]
| 18.307692 | 75 | 0.726891 | 29 | 238 | 5.448276 | 0.827586 | 0.151899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.159664 | 238 | 12 | 76 | 19.833333 | 0.77 | 0.411765 | 0 | 0 | 1 | 0 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
318a53285747d99ff8546ea8d86c75cd0f1cf01b | 369 | py | Python | fatf/accountability/__init__.py | So-Cool/fat-forensics | 6fa252a1d90fe543242ef030a5f8a3f9c9f692fe | [
"BSD-3-Clause"
] | 48 | 2019-09-12T04:54:48.000Z | 2022-02-27T01:49:55.000Z | fatf/accountability/__init__.py | So-Cool/fat-forensics | 6fa252a1d90fe543242ef030a5f8a3f9c9f692fe | [
"BSD-3-Clause"
] | 4 | 2019-11-04T00:01:15.000Z | 2021-01-27T16:35:29.000Z | fatf/accountability/__init__.py | So-Cool/fat-forensics | 6fa252a1d90fe543242ef030a5f8a3f9c9f692fe | [
"BSD-3-Clause"
] | 11 | 2019-09-17T13:39:43.000Z | 2021-07-27T11:04:33.000Z | """
The :mod:`fatf.accountability` module holds a range of accountability methods.
This module holds a variety of techniques that can be used to assess *privacy*,
*security* and *robustness* of artificial intelligence pipelines and the
machine learning process: *data*, *models* and *predictions*.
"""
# Author: Kacper Sokol <k.sokol@bristol.ac.uk>
# License: new BSD
| 36.9 | 79 | 0.758808 | 52 | 369 | 5.384615 | 0.807692 | 0.078571 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138211 | 369 | 9 | 80 | 41 | 0.880503 | 0.96748 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
31b07bed3c61d5cafcb3fc73df6e8e6b7e47e032 | 473 | py | Python | src/Selenium2Library/keywords/__init__.py | piaoransk/selenium2libForMyself | b6bb880c3aa8ab6e5ddaffdb574aab6150ae3604 | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2017-09-30T05:47:28.000Z | 2019-04-15T11:58:40.000Z | src/Selenium2Library/keywords/__init__.py | piaoransk/selenium2libForMyself | b6bb880c3aa8ab6e5ddaffdb574aab6150ae3604 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | src/Selenium2Library/keywords/__init__.py | piaoransk/selenium2libForMyself | b6bb880c3aa8ab6e5ddaffdb574aab6150ae3604 | [
"ECL-2.0",
"Apache-2.0"
] | 7 | 2018-02-13T10:22:39.000Z | 2019-07-04T07:39:28.000Z | from .alert import AlertKeywords
from .browsermanagement import BrowserManagementKeywords
from .cookie import CookieKeywords
from .element import ElementKeywords
from .formelement import FormElementKeywords
from .javascript import JavaScriptKeywords
from .runonfailure import RunOnFailureKeywords
from .screenshot import ScreenshotKeywords
from .selectelement import SelectElementKeywords
from .tableelement import TableElementKeywords
from .waiting import WaitingKeywords
| 39.416667 | 56 | 0.883721 | 44 | 473 | 9.5 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 473 | 11 | 57 | 43 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
31b4db5426c98c266c7765d06d53ea430178b58f | 1,563 | py | Python | cysecuretools/targets/cyb06xx7/maps/memory_map.py | cypresssemiconductorco/cysecuretools | f27b6a7a5d5829427d746bac046c496bfe2b5898 | [
"Apache-2.0"
] | 9 | 2019-09-16T19:33:20.000Z | 2020-11-05T00:56:20.000Z | cysecuretools/targets/cyb06xx7/maps/memory_map.py | Infineon/cysecuretools | f27b6a7a5d5829427d746bac046c496bfe2b5898 | [
"Apache-2.0"
] | 1 | 2021-04-16T08:17:16.000Z | 2021-05-21T05:55:58.000Z | cysecuretools/targets/cyb06xx7/maps/memory_map.py | Infineon/cysecuretools | f27b6a7a5d5829427d746bac046c496bfe2b5898 | [
"Apache-2.0"
] | 1 | 2019-10-03T17:24:24.000Z | 2019-10-03T17:24:24.000Z | """
Copyright (c) 2019-2020 Cypress Semiconductor Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cysecuretools.core import MemoryMapBase
class MemoryMap_cyb06xx7(MemoryMapBase):
@property
def FLASH_ADDRESS(self):
return 0x10000000
@property
def FLASH_SIZE(self):
return 0x000E0000
@property
def PROVISION_JWT_PACKET_ADDRESS(self):
return 0x100FB600
@property
def PROVISION_JWT_PACKET_SIZE(self):
return 0x4A00
@property
def SPE_IMAGE_ID(self):
return 1
@property
def NSPE_IMAGE_ID(self):
return 16
@property
def SMIF_MEM_MAP_START(self):
return 0x18000000
@property
def VECTOR_TABLE_ADDR_ALIGNMENT(self):
return 0x400
# SFB addresses
@property
def TOC1_ADDRESS(self):
return 0x16007800
@property
def TOC1_SFB_ADDRESS_OFFSET(self):
return 0x14
@property
def TOC1_HASH_OBJ_OFFSET(self):
return 0x08
@property
def SYSCALL_TABLE_ADDR(self):
return 0x16002400
| 22.985294 | 72 | 0.707614 | 203 | 1,563 | 5.310345 | 0.566502 | 0.122449 | 0.04731 | 0.029685 | 0.053803 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072148 | 0.237364 | 1,563 | 67 | 73 | 23.328358 | 0.832215 | 0.381958 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082463 | 0 | 0 | 1 | 0.315789 | false | 0 | 0.026316 | 0.315789 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
31d7b16a61e93f151f32930afc808c7133e2dceb | 900 | py | Python | polyaxon_schemas/specs/__init__.py | orf/polyaxon-schemas | dce55df25ae752fc3fbf465ea53add126746d630 | [
"MIT"
] | null | null | null | polyaxon_schemas/specs/__init__.py | orf/polyaxon-schemas | dce55df25ae752fc3fbf465ea53add126746d630 | [
"MIT"
] | null | null | null | polyaxon_schemas/specs/__init__.py | orf/polyaxon-schemas | dce55df25ae752fc3fbf465ea53add126746d630 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function
from polyaxon_schemas.specs import kinds
from polyaxon_schemas.specs.build import BuildSpecification
from polyaxon_schemas.specs.experiment import ExperimentSpecification
from polyaxon_schemas.specs.group import GroupSpecification
from polyaxon_schemas.specs.job import JobSpecification
from polyaxon_schemas.specs.notebook import NotebookSpecification
from polyaxon_schemas.specs.pipelines import PipelineSpecification
from polyaxon_schemas.specs.tensorboard import TensorboardSpecification
SPECIFICATION_BY_KIND = {
kinds.BUILD: BuildSpecification,
kinds.EXPERIMENT: ExperimentSpecification,
kinds.GROUP: GroupSpecification,
kinds.JOB: JobSpecification,
kinds.NOTEBOOK: NotebookSpecification,
kinds.TENSORBOARD: TensorboardSpecification,
kinds.PIPELINE: PipelineSpecification,
}
| 40.909091 | 71 | 0.843333 | 90 | 900 | 8.255556 | 0.355556 | 0.129206 | 0.204576 | 0.258412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001236 | 0.101111 | 900 | 21 | 72 | 42.857143 | 0.917182 | 0.023333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.055556 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
31dc6cedee0fbc975dcb4293784c57cf774c51b7 | 712 | py | Python | library/record_shutit_build/record_shutit_build.py | aidanhs/shutit | e8b1aa0f0df4941cc759a76837dbee3ef88e5a60 | [
"MIT"
] | null | null | null | library/record_shutit_build/record_shutit_build.py | aidanhs/shutit | e8b1aa0f0df4941cc759a76837dbee3ef88e5a60 | [
"MIT"
] | null | null | null | library/record_shutit_build/record_shutit_build.py | aidanhs/shutit | e8b1aa0f0df4941cc759a76837dbee3ef88e5a60 | [
"MIT"
] | null | null | null | """ShutIt module. See http://shutit.tk
"""
from shutit_module import ShutItModule
class record_shutit_build(ShutItModule):
def is_installed(self, shutit):
# Always run this
return False
def build(self, shutit):
# default the delivery to bash here
shutit.add_to_bashrc('''export SHUTIT_OPTIONS="$SHUTIT_OPTIONS --delivery bash"''')
return True
def module():
return record_shutit_build(
'shutit.tk.record_shutit_build.record_shutit_build', 0.39952141313136,
description='Module to record a shutit build. See README.md in the source folder.',
maintainer='ian.miell@gmail.com',
depends=['shutit.tk.setup','shutit.tk.shutit.shutit','shutit.tk.ttygif.ttygif','shutit.tk.docker.docker']
)
| 26.37037 | 107 | 0.75 | 100 | 712 | 5.2 | 0.49 | 0.092308 | 0.130769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024 | 0.122191 | 712 | 26 | 108 | 27.384615 | 0.808 | 0.120787 | 0 | 0 | 0 | 0 | 0.445705 | 0.241491 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
31e5336a533c97580ce8a988f1ba5aecae4f458d | 1,223 | py | Python | platform/polycommon/polycommon/options/option_manager.py | admariner/polyaxon | ba355c38166047eb11e60de4cee4d7c3b48db323 | [
"Apache-2.0"
] | 3,200 | 2017-05-09T11:35:31.000Z | 2022-03-28T05:43:22.000Z | platform/polycommon/polycommon/options/option_manager.py | admariner/polyaxon | ba355c38166047eb11e60de4cee4d7c3b48db323 | [
"Apache-2.0"
] | 1,324 | 2017-06-29T07:21:27.000Z | 2022-03-27T12:41:10.000Z | platform/polycommon/polycommon/options/option_manager.py | admariner/polyaxon | ba355c38166047eb11e60de4cee4d7c3b48db323 | [
"Apache-2.0"
] | 341 | 2017-01-10T23:06:53.000Z | 2022-03-10T08:15:18.000Z | #!/usr/bin/python
#
# Copyright 2018-2021 Polyaxon, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Tuple
from polyaxon.utils.manager_interface import ManagerInterface
from polycommon.options.option import Option
class OptionManager(ManagerInterface):
def _get_state_data( # pylint:disable=arguments-differ
self, option: Option
) -> Tuple[str, Option]:
return option.key, option
def subscribe(self, option: Option) -> None: # pylint:disable=arguments-differ
"""
>>> subscribe(SomeOption)
"""
super().subscribe(obj=option)
def get(self, key: str) -> Option: # pylint:disable=arguments-differ
return super().get(key=key)
| 33.054054 | 83 | 0.715454 | 163 | 1,223 | 5.343558 | 0.595092 | 0.068886 | 0.075775 | 0.096441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012133 | 0.191333 | 1,223 | 36 | 84 | 33.972222 | 0.868554 | 0.569092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.166667 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
9ec3d5d30407a5cebacf33d77713d29a95453c96 | 5,461 | py | Python | sources/app/routes.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | 1 | 2021-06-23T20:19:45.000Z | 2021-06-23T20:19:45.000Z | sources/app/routes.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | null | null | null | sources/app/routes.py | pablintino/Altium-DBlib-source | 65e85572f84048a7e7c5a116b429e09ac9a33e82 | [
"MIT"
] | null | null | null | #
# MIT License
#
# Copyright (c) 2020 Pablo Rodriguez Nava, @pablintino
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
from app import api
from rest_layer.component_list_resource import ComponentListResource
from rest_layer.component_resource import ComponentResource
from rest_layer.footprint_component_reference_resource import FootprintComponentReferenceResource
from rest_layer.footprint_data_resource import FootprintDataResource
from rest_layer.footprint_element_component_reference_resource import FootprintElementComponentReferenceResource
from rest_layer.footprint_list_resource import FootprintListResource
from rest_layer.footprint_resource import FootprintResource
from rest_layer.inventory.inventory_category_item_list_resource import InventoryCategoryItemListResource
from rest_layer.inventory.inventory_category_list_resource import InventoryCategoryListResource
from rest_layer.inventory.inventory_category_parent_resource import InventoryCategoryParentResource
from rest_layer.inventory.inventory_category_resource import InventoryCategoryResource
from rest_layer.inventory.inventory_item_category_resource import InventoryItemCategoryResource
from rest_layer.inventory.inventory_item_list_resource import InventoryItemListResource
from rest_layer.inventory.inventory_item_location_resource import InventoryItemLocationResource
from rest_layer.inventory.inventory_item_property_element_resource import InventoryItemPropertyElementResource
from rest_layer.inventory.inventory_item_property_list_resource import InventoryItemPropertyListResource
from rest_layer.inventory.inventory_item_resource import InventoryItemResource
from rest_layer.inventory.inventory_item_stock_location_resource import InventoryItemStockLocationResource
from rest_layer.inventory.inventory_location_list_resource import InventoryLocationListResource
from rest_layer.inventory.inventory_location_resource import InventoryLocationResource
from rest_layer.inventory.inventory_stocks_mass_update_resource import InventoryStocksMassUpdateResource
from rest_layer.metadata_api import MetadataResource
from rest_layer.symbol_component_reference_resource import SymbolComponentReferenceResource
from rest_layer.symbol_data_resource import SymbolDataResource
from rest_layer.symbol_list_resource import SymbolListResource
from rest_layer.symbol_resource import SymbolResource
api.add_resource(MetadataResource, '/metadata')
api.add_resource(ComponentListResource, '/components')
api.add_resource(ComponentResource, '/components/<int:id>')
api.add_resource(SymbolComponentReferenceResource, '/components/<int:id>/symbol')
api.add_resource(FootprintComponentReferenceResource, '/components/<int:id>/footprints')
api.add_resource(FootprintElementComponentReferenceResource, '/components/<int:id>/footprints/<int:id_f>')
api.add_resource(SymbolListResource, '/symbols')
api.add_resource(SymbolResource, '/symbols/<int:id>')
api.add_resource(SymbolDataResource, '/symbols/<int:id>/data')
api.add_resource(FootprintListResource, '/footprints')
api.add_resource(FootprintResource, '/footprints/<int:id>')
api.add_resource(FootprintDataResource, '/footprints/<int:id>/data')
# Items endpoints
api.add_resource(InventoryItemListResource, '/inventory/items')
api.add_resource(InventoryItemResource, '/inventory/items/<int:id>')
api.add_resource(InventoryItemLocationResource, '/inventory/items/<int:id>/locations')
api.add_resource(InventoryItemPropertyListResource, '/inventory/items/<int:id>/properties')
api.add_resource(InventoryItemPropertyElementResource, '/inventory/items/<int:id>/properties/<int:prop_id>')
api.add_resource(InventoryItemStockLocationResource, '/inventory/items/<int:id>/locations/<int:id_loc>/stock')
api.add_resource(InventoryItemCategoryResource, '/inventory/items/<int:id>/category')
# Locations endpoints
api.add_resource(InventoryLocationListResource, '/inventory/locations')
api.add_resource(InventoryLocationResource, '/inventory/locations/<int:id>')
# Stock management endpoints
api.add_resource(InventoryStocksMassUpdateResource, '/inventory/stocks/updates')
# Categories endpoints
api.add_resource(InventoryCategoryListResource, '/inventory/categories')
api.add_resource(InventoryCategoryResource, '/inventory/categories/<int:id>')
api.add_resource(InventoryCategoryItemListResource, '/inventory/categories/<int:id>/items')
api.add_resource(InventoryCategoryParentResource, '/inventory/categories/<int:id>/parent')
| 61.359551 | 112 | 0.855704 | 626 | 5,461 | 7.268371 | 0.263578 | 0.045714 | 0.074286 | 0.067692 | 0.161538 | 0.108791 | 0.018901 | 0 | 0 | 0 | 0 | 0.000789 | 0.071415 | 5,461 | 88 | 113 | 62.056818 | 0.89647 | 0.21681 | 0 | 0 | 0 | 0 | 0.162627 | 0.13156 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.509434 | 0 | 0.509434 | 0.188679 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
9ec4ce9d1f53a12ca2717ff07b632bca8aae1f96 | 30,454 | py | Python | tests/meshes.py | cbcoutinho/learn_dg | b22bf91d1a0daedb6b48590c7361c3a9c3c7f371 | [
"BSD-2-Clause"
] | 6 | 2017-03-08T09:26:10.000Z | 2020-06-25T01:25:12.000Z | tests/meshes.py | cbcoutinho/learn_dg | b22bf91d1a0daedb6b48590c7361c3a9c3c7f371 | [
"BSD-2-Clause"
] | null | null | null | tests/meshes.py | cbcoutinho/learn_dg | b22bf91d1a0daedb6b48590c7361c3a9c3c7f371 | [
"BSD-2-Clause"
] | 1 | 2018-01-03T05:51:10.000Z | 2018-01-03T05:51:10.000Z | """
All of these meshes were created using either the test1D.geo or
test2D.geo files in the test directory. To create a mesh using
`gmsh`, run the following from the main source dir:
$ gmsh test/test2D.geo -order 5 -2
This will create a test/test2D.msh file. Steps are similar for
other orders of quadrilaterals, etc.
"""
import textwrap
""" Global 1D meshes """
def mesh_Linear1DAdvDiffEqual():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$Nodes
53
1 0 0 0
2 1 0 0
3 0.005030611117855227 0 0
4 0.01028898505360888 0 0
5 0.01578543473744853 0 0
6 0.02153073886104991 0 0
7 0.0275361641320945 0 0
8 0.03381348747337311 0 0
9 0.04037502051188173 0 0
10 0.04723363071047033 0 0
11 0.05440276741538541 0 0
12 0.06189649069088329 0 0
13 0.06972949691528545 0 0
14 0.0779171446731899 0 0
15 0.08647549147831562 0 0
16 0.09542132338499346 0 0
17 0.1047721814948864 0 0
18 0.1145464039321533 0 0
19 0.1247631588183544 0 0
20 0.1354424829767921 0 0
21 0.1466053185760331 0 0
22 0.1582735583739696 0 0
23 0.1704700819318919 0 0
24 0.1832188106214766 0 0
25 0.1965447437932882 0 0
26 0.2104740155104575 0 0
27 0.2250339402607451 0 0
28 0.2402530764359224 0 0
29 0.2561612661172213 0 0
30 0.2727897036477114 0 0
31 0.2901710001687771 0 0
32 0.3083392463940801 0 0
33 0.3273300718524898 0 0
34 0.3471807106396735 0 0
35 0.3679300994701113 0 0
36 0.3896189290639822 0 0
37 0.4122897303969599 0 0
38 0.4359869623522767 0 0
39 0.4607570985513004 0 0
40 0.4866487146275841 0 0
41 0.5137125865497462 0 0
42 0.5420017871557876 0 0
43 0.5715717987934756 0 0
44 0.6024806105980948 0 0
45 0.6347888244962619 0 0
46 0.6685598190666602 0 0
47 0.7038597970735351 0 0
48 0.7407580124632287 0 0
49 0.7793268079814842 0 0
50 0.8196418280784888 0 0
51 0.8617821258089199 0 0
52 0.9058303511173761 0 0
53 0.9518728765844403 0 0
$EndNodes
$Elements
54
1 15 2 0 1 1
2 15 2 0 2 2
3 1 2 0 1 1 3
4 1 2 0 1 3 4
5 1 2 0 1 4 5
6 1 2 0 1 5 6
7 1 2 0 1 6 7
8 1 2 0 1 7 8
9 1 2 0 1 8 9
10 1 2 0 1 9 10
11 1 2 0 1 10 11
12 1 2 0 1 11 12
13 1 2 0 1 12 13
14 1 2 0 1 13 14
15 1 2 0 1 14 15
16 1 2 0 1 15 16
17 1 2 0 1 16 17
18 1 2 0 1 17 18
19 1 2 0 1 18 19
20 1 2 0 1 19 20
21 1 2 0 1 20 21
22 1 2 0 1 21 22
23 1 2 0 1 22 23
24 1 2 0 1 23 24
25 1 2 0 1 24 25
26 1 2 0 1 25 26
27 1 2 0 1 26 27
28 1 2 0 1 27 28
29 1 2 0 1 28 29
30 1 2 0 1 29 30
31 1 2 0 1 30 31
32 1 2 0 1 31 32
33 1 2 0 1 32 33
34 1 2 0 1 33 34
35 1 2 0 1 34 35
36 1 2 0 1 35 36
37 1 2 0 1 36 37
38 1 2 0 1 37 38
39 1 2 0 1 38 39
40 1 2 0 1 39 40
41 1 2 0 1 40 41
42 1 2 0 1 41 42
43 1 2 0 1 42 43
44 1 2 0 1 43 44
45 1 2 0 1 44 45
46 1 2 0 1 45 46
47 1 2 0 1 46 47
48 1 2 0 1 47 48
49 1 2 0 1 48 49
50 1 2 0 1 49 50
51 1 2 0 1 50 51
52 1 2 0 1 51 52
53 1 2 0 1 52 53
54 1 2 0 1 53 2
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
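Each helper returns a raw Gmsh MSH 2.2 ASCII buffer like the one above. A quick way to sanity-check such a buffer is to read the node count declared on the line after the `$Nodes` tag; the parser below is a standalone sketch with a tiny inline buffer in the same format:

```python
def count_nodes(gmsh_buffer):
    """Return the node count declared after the $Nodes tag (MSH 2.2 ASCII)."""
    lines = gmsh_buffer.splitlines()
    return int(lines[lines.index("$Nodes") + 1])


# Tiny inline example in the same format as the buffers above.
sample = (
    "$MeshFormat\n2.2 0 8\n$EndMeshFormat\n"
    "$Nodes\n3\n1 0 0 0\n2 1 0 0\n3 0.5 0 0\n$EndNodes\n"
)
print(count_nodes(sample))  # → 3
```

For the buffers in this module the same call works after `textwrap.dedent` has been applied, since the section tags then start at column zero.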
def mesh_Quad1DAdvDiffEqual():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$Nodes
53
1 0 0 0
2 1 0 0
3 0.01028898512093517 0 0
4 0.0215307390081578 0 0
5 0.0338134875014839 0 0
6 0.04723363127339462 0 0
7 0.06189649036599121 0 0
8 0.07791714442309114 0 0
9 0.09542132322625055 0 0
10 0.1145464038838541 0 0
11 0.1354424830607478 0 0
12 0.1582735586150919 0 0
13 0.1832188114257096 0 0
14 0.2104740173701267 0 0
15 0.2402530757044003 0 0
16 0.2727897030614678 0 0
17 0.3083392459861587 0 0
18 0.3471807104480421 0 0
19 0.3896189291322947 0 0
20 0.4359869680476123 0 0
21 0.4866487219042465 0 0
22 0.5420017867318138 0 0
23 0.6024806089626876 0 0
24 0.6685598177119734 0 0
25 0.7407580114554201 0 0
26 0.8196418274935637 0 0
27 0.9058303510422514 0 0
28 0.005144492560467496 0 0
29 0.01590986206454648 0 0
30 0.02767211325482123 0 0
31 0.04052355938743926 0 0
32 0.05456506081969292 0 0
33 0.06990681739454088 0 0
34 0.08666923382467084 0 0
35 0.1049838635550523 0 0
36 0.1249944434723082 0 0
37 0.1468580208379198 0 0
38 0.1707461850204007 0 0
39 0.1968464143979182 0 0
40 0.2253635465372635 0 0
41 0.2565213893829341 0 0
42 0.2905644745238133 0 0
43 0.3277599782171004 0 0
44 0.3683998197901684 0 0
45 0.4128029485899535 0 0
46 0.4613178449759294 0 0
47 0.5143252543179917 0 0
48 0.5722411978472507 0 0
49 0.6355202133373304 0 0
50 0.7046589145836968 0 0
51 0.7801999194744919 0 0
52 0.8627360892679076 0 0
53 0.9529151755211257 0 0
$EndNodes
$Elements
28
1 15 2 0 1 1
2 15 2 0 2 2
3 8 2 0 1 1 3 28
4 8 2 0 1 3 4 29
5 8 2 0 1 4 5 30
6 8 2 0 1 5 6 31
7 8 2 0 1 6 7 32
8 8 2 0 1 7 8 33
9 8 2 0 1 8 9 34
10 8 2 0 1 9 10 35
11 8 2 0 1 10 11 36
12 8 2 0 1 11 12 37
13 8 2 0 1 12 13 38
14 8 2 0 1 13 14 39
15 8 2 0 1 14 15 40
16 8 2 0 1 15 16 41
17 8 2 0 1 16 17 42
18 8 2 0 1 17 18 43
19 8 2 0 1 18 19 44
20 8 2 0 1 19 20 45
21 8 2 0 1 20 21 46
22 8 2 0 1 21 22 47
23 8 2 0 1 22 23 48
24 8 2 0 1 23 24 49
25 8 2 0 1 24 25 50
26 8 2 0 1 25 26 51
27 8 2 0 1 26 27 52
28 8 2 0 1 27 2 53
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Cub1DAdvDiffEqual():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$Nodes
79
1 0 0 0
2 1 0 0
3 0.01028898512093517 0 0
4 0.0215307390081578 0 0
5 0.0338134875014839 0 0
6 0.04723363127339462 0 0
7 0.06189649036599121 0 0
8 0.07791714442309114 0 0
9 0.09542132322625055 0 0
10 0.1145464038838541 0 0
11 0.1354424830607478 0 0
12 0.1582735586150919 0 0
13 0.1832188114257096 0 0
14 0.2104740173701267 0 0
15 0.2402530757044003 0 0
16 0.2727897030614678 0 0
17 0.3083392459861587 0 0
18 0.3471807104480421 0 0
19 0.3896189291322947 0 0
20 0.4359869680476123 0 0
21 0.4866487219042465 0 0
22 0.5420017867318138 0 0
23 0.6024806089626876 0 0
24 0.6685598177119734 0 0
25 0.7407580114554201 0 0
26 0.8196418274935637 0 0
27 0.9058303510422514 0 0
28 0.003429661706978295 0 0
29 0.006859323413956723 0 0
30 0.01403623641667604 0 0
31 0.01778348771241692 0 0
32 0.02562498850593348 0 0
33 0.02971923800370915 0 0
34 0.03828686875878747 0 0
35 0.04276025001609104 0 0
36 0.05212125097092682 0 0
37 0.05700887066845901 0 0
38 0.06723670838502425 0 0
39 0.0725769264040577 0 0
40 0.08375187069081094 0 0
41 0.08958659695853075 0 0
42 0.1017963501121184 0 0
43 0.1081713769979863 0 0
44 0.1215117636094902 0 0
45 0.1284771233351214 0 0
46 0.1430528415788625 0 0
47 0.1506632000969772 0 0
48 0.1665886428852978 0 0
49 0.1749037271555037 0 0
50 0.1923038800738486 0 0
51 0.2013889487219877 0 0
52 0.2204003701482179 0 0
53 0.2303267229263091 0 0
54 0.2510986181567562 0 0
55 0.261944160609112 0 0
56 0.2846395507030314 0 0
57 0.296489398344595 0 0
58 0.3212864008067865 0 0
59 0.3342335556274143 0 0
60 0.361326783342793 0 0
61 0.3754728562375439 0 0
62 0.4050749421040672 0 0
63 0.4205309550758398 0 0
64 0.452874219333157 0 0
65 0.4697614706187017 0 0
66 0.5050997435133822 0 0
67 0.5235507651225979 0 0
68 0.5621613941421051 0 0
69 0.5823210015523964 0 0
70 0.6245070118791162 0 0
71 0.6465334147955447 0 0
72 0.6926258822931223 0 0
73 0.7166919468742712 0 0
74 0.767052616801468 0 0
75 0.7933472221475159 0 0
76 0.8483713353431263 0 0
77 0.8771008431926888 0 0
78 0.9372202340281676 0 0
79 0.9686101170140837 0 0
$EndNodes
$Elements
28
1 15 2 0 1 1
2 15 2 0 2 2
3 26 2 0 1 1 3 28 29
4 26 2 0 1 3 4 30 31
5 26 2 0 1 4 5 32 33
6 26 2 0 1 5 6 34 35
7 26 2 0 1 6 7 36 37
8 26 2 0 1 7 8 38 39
9 26 2 0 1 8 9 40 41
10 26 2 0 1 9 10 42 43
11 26 2 0 1 10 11 44 45
12 26 2 0 1 11 12 46 47
13 26 2 0 1 12 13 48 49
14 26 2 0 1 13 14 50 51
15 26 2 0 1 14 15 52 53
16 26 2 0 1 15 16 54 55
17 26 2 0 1 16 17 56 57
18 26 2 0 1 17 18 58 59
19 26 2 0 1 18 19 60 61
20 26 2 0 1 19 20 62 63
21 26 2 0 1 20 21 64 65
22 26 2 0 1 21 22 66 67
23 26 2 0 1 22 23 68 69
24 26 2 0 1 23 24 70 71
25 26 2 0 1 24 25 72 73
26 26 2 0 1 25 26 74 75
27 26 2 0 1 26 27 76 77
28 26 2 0 1 27 2 78 79
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
""" Single 2D meshes """
def mesh_Single2D_quadquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
9
1 0 0 0
2 1 0 0
3 1 1 0
4 0 1 0
5 0.5 0 0
6 1 0.5 0
7 0.5 1 0
8 0 0.5 0
9 0.5 0.5 0
$EndNodes
$Elements
5
1 8 2 1 1 1 2 5
2 8 2 2 2 2 3 6
3 8 2 3 3 3 4 7
4 8 2 4 4 4 1 8
7 10 2 5 1 1 2 3 4 5 6 7 8 9
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
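# A hypothetical helper (not part of this module) showing how the $Nodes block
# of these MSH 2.2 buffers could be parsed; the tiny two-node demo buffer is
# made up for illustration:

```python
import textwrap

def parse_nodes(gmsh_buffer):
    """Return {node id: (x, y, z)} from the $Nodes section of an MSH 2.2 buffer."""
    lines = gmsh_buffer.splitlines()
    start = lines.index('$Nodes')
    count = int(lines[start + 1])
    nodes = {}
    for line in lines[start + 2:start + 2 + count]:
        idx, x, y, z = line.split()
        nodes[int(idx)] = (float(x), float(y), float(z))
    return nodes

demo = textwrap.dedent("""\
    $MeshFormat
    2.2 0 8
    $EndMeshFormat
    $Nodes
    2
    1 0 0 0
    2 1 0 0
    $EndNodes
    """)
print(parse_nodes(demo))  # {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0)}
```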
def mesh_Single2D_cubquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
16
1 0 0 0
2 1 0 0
3 1 1 0
4 0 1 0
5 0.3333333333333333 0 0
6 0.6666666666666667 0 0
7 1 0.3333333333333333 0
8 1 0.6666666666666667 0
9 0.6666666666666667 1 0
10 0.3333333333333333 1 0
11 0 0.6666666666666667 0
12 0 0.3333333333333333 0
13 0.3333333333333333 0.3333333333333333 0
14 0.6666666666666667 0.3333333333333333 0
15 0.6666666666666667 0.6666666666666667 0
16 0.3333333333333333 0.6666666666666667 0
$EndNodes
$Elements
5
1 26 2 1 1 1 2 5 6
2 26 2 2 2 2 3 7 8
3 26 2 3 3 3 4 9 10
4 26 2 4 4 4 1 11 12
7 36 2 5 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Single2D_quarquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
25
1 0 0 0
2 1 0 0
3 1 1 0
4 0 1 0
5 0.25 0 0
6 0.5 0 0
7 0.75 0 0
8 1 0.25 0
9 1 0.5 0
10 1 0.75 0
11 0.75 1 0
12 0.5 1 0
13 0.25 1 0
14 0 0.75 0
15 0 0.5 0
16 0 0.25 0
17 0.25 0.25 0
18 0.75 0.25 0
19 0.75 0.75 0
20 0.25 0.75 0
21 0.5 0.25 0
22 0.75 0.5 0
23 0.5 0.75 0
24 0.25 0.5 0
25 0.5 0.5 0
$EndNodes
$Elements
5
1 27 2 1 1 1 2 5 6 7
2 27 2 2 2 2 3 8 9 10
3 27 2 3 3 3 4 11 12 13
4 27 2 4 4 4 1 14 15 16
5 37 2 5 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
""" Global 2D meshes """
def mesh_Multiple2D_biquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
9
1 0 0 0
2 1 0 0
3 1 1 0
4 0 1 0
5 0.499999999998694 0 0
6 1 0.499999999998694 0
7 0.5000000000020591 1 0
8 0 0.5000000000020591 0
9 0.5000000000003766 0.5000000000003766 0
$EndNodes
$Elements
12
1 1 2 1 1 1 5
2 1 2 1 1 5 2
3 1 2 2 2 2 6
4 1 2 2 2 6 3
5 1 2 3 3 3 7
6 1 2 3 3 7 4
7 1 2 4 4 4 8
8 1 2 4 4 8 1
9 3 2 5 1 1 5 9 8
10 3 2 5 1 8 9 7 4
11 3 2 5 1 5 2 6 9
12 3 2 5 1 9 6 3 7
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Multiple2D_quadquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
15
1 0 0 0
2 1 0 0
3 1 2 0
4 0 2 0
5 0.4999999999986718 0 0
6 1 0.999999999997388 0
7 1 0.4999999999988388 0
8 1 1.499999999998694 0
9 0.5000000000013305 2 0
10 0 1.000000000004118 0
11 0 1.500000000001978 0
12 0 0.5000000000020592 0
13 0.5 1.000000000000753 0
14 0.4999999999993359 0.500000000000449 0
15 0.5000000000006652 1.500000000000336 0
$EndNodes
$Elements
8
1 8 2 1 1 1 2 5
2 8 2 2 2 2 6 7
3 8 2 2 2 6 3 8
4 8 2 3 3 3 4 9
5 8 2 4 4 4 10 11
6 8 2 4 4 10 1 12
7 10 2 5 1 1 2 6 10 5 7 13 12 14
8 10 2 5 1 10 6 3 4 13 8 9 11 15
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Multiple2D_cubquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
28
1 0 0 0
2 2 0 0
3 2 1 0
4 0 1 0
5 0.999999999997388 0 0
6 0.3333333333326662 0 0
7 0.6666666666650272 0 0
8 1.333333333331592 0 0
9 1.666666666665796 0 0
10 2 0.3333333333324915 0
11 2 0.6666666666657831 0
12 1.000000000004118 1 0
13 1.666666666667929 1 0
14 1.333333333336069 1 0
15 0.6666666666694121 1 0
16 0.3333333333347059 1 0
17 0 0.6666666666668164 0
18 0 0.3333333333341704 0
19 0.9999999999996314 0.3333333333333333 0
20 1.000000000001875 0.6666666666666666 0
21 0.3333333333333461 0.3333333333338913 0
22 0.6666666666664889 0.3333333333336123 0
23 0.6666666666679507 0.6666666666667167 0
24 0.333333333334026 0.6666666666667667 0
25 1.333333333333084 0.3333333333330527 0
26 1.666666666666507 0.3333333333327721 0
27 1.666666666667219 0.6666666666660778 0
28 1.333333333334577 0.6666666666663723 0
$EndNodes
$Elements
8
1 26 2 1 1 1 5 6 7
2 26 2 1 1 5 2 8 9
3 26 2 2 2 2 3 10 11
4 26 2 3 3 3 12 13 14
5 26 2 3 3 12 4 15 16
6 26 2 4 4 4 1 17 18
7 36 2 5 1 1 5 12 4 6 7 19 20 15 16 17 18 21 22 23 24
8 36 2 5 1 5 2 3 12 8 9 10 11 13 14 20 19 25 26 27 28
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Multiple2D_quarquad():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
81
1 0 0 0
2 1 0 0
3 1 1 0
4 0 1 0
5 0.499999999998694 0 0
6 0.124999999999778 0 0
7 0.2499999999994167 0 0
8 0.3749999999990553 0 0
9 0.6249999999990205 0 0
10 0.749999999999347 0 0
11 0.8749999999996735 0 0
12 1 0.499999999998694 0
13 1 0.124999999999778 0
14 1 0.2499999999994167 0
15 1 0.3749999999990553 0
16 1 0.6249999999990205 0
17 1 0.749999999999347 0
18 1 0.8749999999996735 0
19 0.5000000000020591 1 0
20 0.8750000000003566 1 0
21 0.7500000000008733 1 0
22 0.6250000000015228 1 0
23 0.3750000000015443 1 0
24 0.2500000000010296 1 0
25 0.1250000000005148 1 0
26 0 0.5000000000020591 0
27 0 0.8750000000003566 0
28 0 0.7500000000008733 0
29 0 0.6250000000015228 0
30 0 0.3750000000015443 0
31 0 0.2500000000010296 0
32 0 0.1250000000005148 0
33 0.5000000000003766 0.5000000000003766 0
34 0.4999999999991147 0.1250000000000941 0
35 0.4999999999995353 0.2500000000001883 0
36 0.4999999999999559 0.3750000000002824 0
37 0.3750000000002824 0.5000000000007973 0
38 0.2500000000001883 0.5000000000012179 0
39 0.1250000000000941 0.5000000000016385 0
40 0.1249999999998568 0.1250000000004091 0
41 0.374999999999362 0.1250000000001989 0
42 0.3749999999999754 0.3750000000005972 0
43 0.1250000000000152 0.3750000000012285 0
44 0.2499999999996095 0.1250000000003041 0
45 0.3749999999996689 0.2500000000003982 0
46 0.2499999999999954 0.375000000000913 0
47 0.124999999999936 0.2500000000008189 0
48 0.2499999999998024 0.2500000000006086 0
49 0.5000000000007973 0.6250000000002824 0
50 0.5000000000012179 0.7500000000001883 0
51 0.5000000000016385 0.8750000000000941 0
52 0.1250000000001991 0.6250000000012115 0
53 0.3750000000005979 0.6250000000005911 0
54 0.3750000000012286 0.875000000000158 0
55 0.1250000000004097 0.8750000000002901 0
56 0.2500000000003986 0.6250000000009015 0
57 0.3750000000009135 0.7500000000003584 0
58 0.2500000000008193 0.8750000000002242 0
59 0.1250000000003044 0.750000000000701 0
60 0.250000000000609 0.7500000000005298 0
61 0.8750000000000941 0.4999999999991147 0
62 0.7500000000001883 0.4999999999995353 0
63 0.6250000000002824 0.4999999999999559 0
64 0.6249999999993349 0.1250000000000147 0
65 0.8749999999997776 0.1249999999998566 0
66 0.8749999999999873 0.3749999999993615 0
67 0.6249999999999665 0.3749999999999752 0
68 0.7499999999995564 0.1249999999999357 0
69 0.8749999999998831 0.2499999999996092 0
70 0.7499999999999771 0.3749999999996685 0
71 0.6249999999996507 0.249999999999995 0
72 0.7499999999997669 0.2499999999998022 0
73 0.6250000000005915 0.6249999999999656 0
74 0.8750000000001588 0.6249999999993346 0
75 0.8750000000002894 0.8749999999997766 0
76 0.6250000000012124 0.874999999999988 0
77 0.7500000000003588 0.6249999999996503 0
78 0.8750000000002246 0.7499999999995559 0
79 0.7500000000007013 0.8749999999998826 0
80 0.6250000000009021 0.749999999999977 0
81 0.7500000000005302 0.7499999999997666 0
$EndNodes
$Elements
12
1 27 2 1 1 1 5 6 7 8
2 27 2 1 1 5 2 9 10 11
3 27 2 2 2 2 12 13 14 15
4 27 2 2 2 12 3 16 17 18
5 27 2 3 3 3 19 20 21 22
6 27 2 3 3 19 4 23 24 25
7 27 2 4 4 4 26 27 28 29
8 27 2 4 4 26 1 30 31 32
9 37 2 5 1 1 5 33 26 6 7 8 34 35 36 37 38 39 30 31 32 40 41 42 43 44 45 46 47 48
10 37 2 5 1 26 33 19 4 39 38 37 49 50 51 23 24 25 27 28 29 52 53 54 55 56 57 58 59 60
11 37 2 5 1 5 2 12 33 9 10 11 13 14 15 61 62 63 36 35 34 64 65 66 67 68 69 70 71 72
12 37 2 5 1 33 12 3 19 63 62 61 16 17 18 20 21 22 51 50 49 73 74 75 76 77 78 79 80 81
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
def mesh_Multiple2D_cubquad_BIG():
gmsh_buffer = """\
$MeshFormat
2.2 0 8
$EndMeshFormat
$PhysicalNames
5
1 1 "lower"
1 2 "right"
1 3 "upper"
1 4 "left"
2 5 "domain"
$EndPhysicalNames
$Nodes
217
1 0 0 0
2 10 0 0
3 10 1 0
4 0 1 0
5 0.9999999999991893 0 0
6 1.999999999996827 0 0
7 2.99999999999391 0 0
8 3.999999999991027 0 0
9 4.999999999992411 0 0
10 5.999999999993932 0 0
11 6.999999999995455 0 0
12 7.999999999996978 0 0
13 8.99999999999849 0 0
14 0.3333333333330583 0 0
15 0.6666666666660547 0 0
16 1.33333333333196 0 0
17 1.666666666664427 0 0
18 2.333333333329206 0 0
19 2.666666666661519 0 0
20 3.333333333326038 0 0
21 3.666666666658527 0 0
22 4.333333333324638 0 0
23 4.666666666658328 0 0
24 5.333333333326355 0 0
25 5.66666666666031 0 0
26 6.333333333327753 0 0
27 6.666666666661572 0 0
28 7.333333333328992 0 0
29 7.666666666663239 0 0
30 8.333333333331113 0 0
31 8.666666666664689 0 0
32 9.333333333331863 0 0
33 9.666666666665456 0 0
34 10 0.499999999998694 0
35 10 0.1666666666663331 0
36 10 0.3333333333325136 0
37 10 0.666666666665796 0
38 10 0.8333333333328979 0
39 8.999999999999998 1 0
40 7.999999999999996 1 0
41 6.999999999999994 1 0
42 5.999999999999991 1 0
43 4.999999999999989 1 0
44 3.999999999999988 1 0
45 2.999999999999985 1 0
46 1.999999999999982 1 0
47 0.9999999999999911 1 0
48 9.666666666666673 1 0
49 9.333333333333345 1 0
50 8.666666666666368 1 0
51 8.33333333333335 1 0
52 7.666666666666633 1 0
53 7.333333333333157 1 0
54 6.66666666666631 1 0
55 6.333333333333216 1 0
56 5.66666666666689 1 0
57 5.333333333333732 1 0
58 4.666666666666902 1 0
59 4.33333333333347 1 0
60 3.666666666666637 1 0
61 3.333333333333336 1 0
62 2.6666666666668 1 0
63 2.333333333333366 1 0
64 1.666666666666762 1 0
65 1.333333333333098 1 0
66 0.6666666666672185 1 0
67 0.3333333333340462 1 0
68 0 0.5000000000020591 0
69 0 0.8333333333339644 0
70 0 0.6666666666680346 0
71 0 0.3333333333347061 0
72 0 0.166666666667353 0
73 0.9999999999995903 0.5000000000017226 0
74 1.999999999998405 0.5000000000013862 0
75 2.999999999996947 0.5000000000010496 0
76 3.999999999995508 0.5000000000007131 0
77 4.999999999996199 0.5000000000003766 0
78 5.999999999996961 0.5000000000000402 0
79 6.999999999997724 0.4999999999997035 0
80 7.999999999998488 0.4999999999993671 0
81 8.999999999999243 0.4999999999990305 0
82 0.999999999999323 0.1666666666672409 0
83 0.9999999999994567 0.3333333333344817 0
84 0.6666666666663936 0.5000000000018348 0
85 0.3333333333331968 0.500000000001947 0
86 0.3333333333331046 0.1666666666673156 0
87 0.6666666666661679 0.1666666666672783 0
88 0.666666666666281 0.3333333333345566 0
89 0.3333333333331508 0.3333333333346314 0
90 0.9999999999997239 0.666666666667815 0
91 0.9999999999998576 0.8333333333339075 0
92 0.3333333333334801 0.6666666666679614 0
93 0.6666666666666687 0.6666666666678883 0
94 0.6666666666669441 0.8333333333339267 0
95 0.3333333333337634 0.8333333333339457 0
96 1.999999999997353 0.1666666666671287 0
97 1.999999999997879 0.3333333333342575 0
98 1.666666666665467 0.5000000000014984 0
99 1.333333333332528 0.5000000000016105 0
100 1.33333333333215 0.1666666666672034 0
101 1.666666666664774 0.1666666666671662 0
102 1.666666666665121 0.3333333333343322 0
103 1.333333333332339 0.3333333333344071 0
104 1.99999999999893 0.6666666666675908 0
105 1.999999999999456 0.8333333333337953 0
106 1.333333333332718 0.6666666666677402 0
107 1.666666666665899 0.6666666666676657 0
108 1.666666666666331 0.833333333333833 0
109 1.333333333332908 0.8333333333338703 0
110 2.999999999994922 0.1666666666670165 0
111 2.999999999995935 0.333333333334033 0
112 2.6666666666641 0.5000000000011618 0
113 2.333333333331252 0.500000000001274 0
114 2.333333333329888 0.1666666666670913 0
115 2.66666666666238 0.166666666667054 0
116 2.666666666663241 0.3333333333341079 0
117 2.33333333333057 0.3333333333341827 0
118 2.99999999999796 0.6666666666673664 0
119 2.999999999998972 0.8333333333336832 0
120 2.333333333331957 0.6666666666675158 0
121 2.666666666665 0.6666666666674413 0
122 2.666666666665901 0.8333333333337207 0
123 2.333333333332661 0.8333333333337583 0
124 3.99999999999252 0.1666666666669044 0
125 3.999999999994014 0.3333333333338087 0
126 3.666666666662654 0.5000000000008252 0
127 3.333333333329801 0.5000000000009375 0
128 3.333333333327292 0.1666666666669791 0
129 3.666666666659903 0.1666666666669418 0
130 3.66666666666128 0.3333333333338835 0
131 3.333333333328547 0.3333333333339584 0
132 3.999999999997001 0.666666666667142 0
133 3.999999999998494 0.8333333333335711 0
134 3.333333333330978 0.6666666666672915 0
135 3.666666666663983 0.6666666666672169 0
136 3.66666666666531 0.8333333333336087 0
137 3.333333333332158 0.8333333333336459 0
138 4.999999999993674 0.1666666666667922 0
139 4.999999999994936 0.3333333333335844 0
140 4.666666666662635 0.5000000000004887 0
141 4.333333333329072 0.500000000000601 0
142 4.333333333326115 0.166666666666867 0
143 4.666666666659764 0.1666666666668296 0
144 4.666666666661202 0.3333333333336592 0
145 4.333333333327593 0.333333333333734 0
146 4.999999999997462 0.6666666666669177 0
147 4.999999999998725 0.8333333333334588 0
148 4.333333333330538 0.6666666666670671 0
149 4.666666666664058 0.6666666666669925 0
150 4.666666666665482 0.8333333333334965 0
151 4.333333333332003 0.833333333333534 0
152 5.999999999994942 0.1666666666666801 0
153 5.999999999995951 0.3333333333333601 0
154 5.666666666663374 0.5000000000001523 0
155 5.333333333329786 0.5000000000002645 0
156 5.333333333327497 0.1666666666667548 0
157 5.666666666661333 0.1666666666667175 0
158 5.666666666662355 0.333333333333435 0
159 5.333333333328642 0.3333333333335097 0
160 5.999999999997971 0.6666666666666934 0
161 5.999999999998981 0.8333333333333467 0
162 5.333333333331101 0.6666666666668427 0
163 5.666666666664547 0.6666666666667682 0
164 5.666666666665719 0.8333333333333843 0
165 5.333333333332417 0.8333333333334217 0
166 6.999999999996212 0.1666666666665678 0
167 6.999999999996968 0.3333333333331356 0
168 6.666666666664137 0.4999999999998158 0
169 6.333333333330549 0.4999999999999279 0
170 6.333333333328685 0.1666666666666426 0
171 6.666666666662429 0.1666666666666053 0
172 6.666666666663283 0.3333333333332105 0
173 6.333333333329618 0.3333333333332854 0
174 6.999999999998481 0.666666666666469 0
175 6.999999999999237 0.8333333333332344 0
176 6.333333333331436 0.6666666666666184 0
177 6.666666666664862 0.6666666666665438 0
178 6.666666666665588 0.8333333333332722 0
179 6.333333333332329 0.8333333333333095 0
180 7.999999999997481 0.1666666666664557 0
181 7.999999999997985 0.3333333333329114 0
182 7.6666666666649 0.4999999999994792 0
183 7.333333333331312 0.4999999999995914 0
184 7.333333333329763 0.1666666666665304 0
185 7.666666666663795 0.1666666666664931 0
186 7.666666666664351 0.3333333333329862 0
187 7.333333333330538 0.333333333333061 0
188 7.999999999998991 0.6666666666662446 0
189 7.999999999999493 0.8333333333331223 0
190 7.333333333331926 0.6666666666663942 0
191 7.666666666665479 0.6666666666663195 0
192 7.666666666666059 0.8333333333331598 0
193 7.333333333332543 0.8333333333331974 0
194 8.999999999998741 0.1666666666663435 0
195 8.999999999998993 0.333333333332687 0
196 8.666666666665659 0.4999999999991427 0
197 8.333333333332073 0.4999999999992549 0
198 8.333333333331431 0.1666666666664183 0
199 8.666666666665014 0.1666666666663809 0
200 8.666666666665339 0.3333333333327618 0
201 8.333333333331753 0.3333333333328367 0
202 8.999999999999496 0.6666666666660204 0
203 8.999999999999746 0.8333333333330102 0
204 8.333333333332497 0.6666666666661698 0
205 8.666666666665897 0.6666666666660952 0
206 8.666666666666135 0.8333333333330478 0
207 8.333333333332922 0.833333333333085 0
208 9.666666666666414 0.4999999999988062 0
209 9.333333333332829 0.4999999999989183 0
210 9.333333333332185 0.16666666666634 0
211 9.66666666666578 0.1666666666663366 0
212 9.666666666666099 0.3333333333325714 0
213 9.333333333332506 0.3333333333326292 0
214 9.333333333333 0.6666666666659455 0
215 9.666666666666503 0.6666666666658709 0
216 9.66666666666659 0.8333333333329357 0
217 9.333333333333172 0.8333333333329729 0
$EndNodes
$Elements
44
1 26 2 1 1 1 5 14 15
2 26 2 1 1 5 6 16 17
3 26 2 1 1 6 7 18 19
4 26 2 1 1 7 8 20 21
5 26 2 1 1 8 9 22 23
6 26 2 1 1 9 10 24 25
7 26 2 1 1 10 11 26 27
8 26 2 1 1 11 12 28 29
9 26 2 1 1 12 13 30 31
10 26 2 1 1 13 2 32 33
11 26 2 2 2 2 34 35 36
12 26 2 2 2 34 3 37 38
13 26 2 3 3 3 39 48 49
14 26 2 3 3 39 40 50 51
15 26 2 3 3 40 41 52 53
16 26 2 3 3 41 42 54 55
17 26 2 3 3 42 43 56 57
18 26 2 3 3 43 44 58 59
19 26 2 3 3 44 45 60 61
20 26 2 3 3 45 46 62 63
21 26 2 3 3 46 47 64 65
22 26 2 3 3 47 4 66 67
23 26 2 4 4 4 68 69 70
24 26 2 4 4 68 1 71 72
25 36 2 5 1 1 5 73 68 14 15 82 83 84 85 71 72 86 87 88 89
26 36 2 5 1 68 73 47 4 85 84 90 91 66 67 69 70 92 93 94 95
27 36 2 5 1 5 6 74 73 16 17 96 97 98 99 83 82 100 101 102 103
28 36 2 5 1 73 74 46 47 99 98 104 105 64 65 91 90 106 107 108 109
29 36 2 5 1 6 7 75 74 18 19 110 111 112 113 97 96 114 115 116 117
30 36 2 5 1 74 75 45 46 113 112 118 119 62 63 105 104 120 121 122 123
31 36 2 5 1 7 8 76 75 20 21 124 125 126 127 111 110 128 129 130 131
32 36 2 5 1 75 76 44 45 127 126 132 133 60 61 119 118 134 135 136 137
33 36 2 5 1 8 9 77 76 22 23 138 139 140 141 125 124 142 143 144 145
34 36 2 5 1 76 77 43 44 141 140 146 147 58 59 133 132 148 149 150 151
35 36 2 5 1 9 10 78 77 24 25 152 153 154 155 139 138 156 157 158 159
36 36 2 5 1 77 78 42 43 155 154 160 161 56 57 147 146 162 163 164 165
37 36 2 5 1 10 11 79 78 26 27 166 167 168 169 153 152 170 171 172 173
38 36 2 5 1 78 79 41 42 169 168 174 175 54 55 161 160 176 177 178 179
39 36 2 5 1 11 12 80 79 28 29 180 181 182 183 167 166 184 185 186 187
40 36 2 5 1 79 80 40 41 183 182 188 189 52 53 175 174 190 191 192 193
41 36 2 5 1 12 13 81 80 30 31 194 195 196 197 181 180 198 199 200 201
42 36 2 5 1 80 81 39 40 197 196 202 203 50 51 189 188 204 205 206 207
43 36 2 5 1 13 2 34 81 32 33 35 36 208 209 195 194 210 211 212 213
44 36 2 5 1 81 34 3 39 209 208 37 38 48 49 203 202 214 215 216 217
$EndElements
"""
gmsh_buffer = textwrap.dedent(gmsh_buffer)
return gmsh_buffer
# Python/Sets/py-introduction-to-sets.py (kamleshmugdiya/Hackerrank, MIT license)
def average(arr):
    x = set(arr)
    return sum(x) / len(x)
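# A hedged usage example for the snippet above (Python 3 division assumed,
# so the result is a float):

```python
def average(arr):
    # mean of the distinct values in arr
    x = set(arr)
    return sum(x) / len(x)

print(average([1, 1, 2, 2, 3]))  # (1 + 2 + 3) / 3 = 2.0
```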
# analysis/analyseGroundFlatfields.py (Borlaff/EuclidVisibleInstrument, BSD-2-Clause)
"""
A simple script to analyse ground/lab flat fields.
"""
import matplotlib
#matplotlib.use('pdf')
matplotlib.rc('text', usetex=True)
matplotlib.rcParams['font.size'] = 17
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('axes', linewidth=1.1)
matplotlib.rcParams['legend.fontsize'] = 11
matplotlib.rcParams['legend.handlelength'] = 3
matplotlib.rcParams['xtick.major.size'] = 5
matplotlib.rcParams['ytick.major.size'] = 5
matplotlib.rcParams['image.interpolation'] = 'none'
import matplotlib.pyplot as plt
import pyfits as pf
import numpy as np
import glob as g
from scipy import fftpack
from scipy import ndimage
from scipy import signal
import cPickle
from support import files as fileIO
import scipy.optimize as optimize
from matplotlib import animation
def makeFlat(files):
shape = pf.getdata(files[0]).shape
summed = np.zeros(shape)
for file in files:
data = pf.getdata(file)
prescan = data[11:2056, 9:51].mean()
overscan = data[11:2056, 4150:4192].mean()
Q0 = data[:, 51:2098]
Q1 = data[:, 2098:4145]
#subtract the bias levels
Q0 -= prescan
Q1 -= overscan
data[:, 51:2098] = Q0
data[:, 2098:4145] = Q1
fileIO.writeFITS(data, file.replace('.fits', 'biasremoved.fits'), int=False)
summed += data
summed /= summed[11:2056, 56:4131].mean()
#write out FITS file
fileIO.writeFITS(summed, 'combined.fits', int=False)
avg = np.average(np.asarray([pf.getdata(file) for file in files]), axis=0)
avg /= avg[11:2056, 56:4131].mean()
fileIO.writeFITS(avg, 'averaged.fits', int=False)
avg = np.median(np.asarray([pf.getdata(file) for file in files]), axis=0)
avg /= avg[11:2056, 56:4131].mean()
fileIO.writeFITS(avg, 'median.fits', int=False)
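# The per-quadrant bias subtraction and normalisation performed above can be
# checked on a synthetic frame; the window coordinates are the CCD273 layout
# hard-coded in makeFlat, while the uniform 1000 ADU test signal is an
# assumption made for this sketch:

```python
import numpy as np

def debias_and_normalise(frame):
    frame = frame.astype(float).copy()
    prescan = frame[11:2056, 9:51].mean()        # left-quadrant bias level
    overscan = frame[11:2056, 4150:4192].mean()  # right-quadrant bias level
    frame[:, 51:2098] -= prescan
    frame[:, 2098:4145] -= overscan
    return frame / frame[11:2056, 56:4131].mean()

frame = np.full((2066, 4192), 500.0)   # 500 ADU bias pedestal everywhere
frame[:, 51:4145] += 1000.0            # 1000 ADU uniform flat signal in the image area
norm = debias_and_normalise(frame)
print(round(float(norm[11:2056, 56:4131].mean()), 6))  # 1.0
```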
def measureNoise(data, size, file, gain=3.5, flat='combined.fits', debug=False):
"""
Measure average signal level and variance in several patches within a single image.
.. Warning:: One must flat field the data before calculating the variance. Hence, the results are uncertain.
It is better to use a pairwise analysis if at least two exposures at a given flux level are
available.
    :param data: full-frame image data in ADU (converted to electrons using the gain)
    :param size: edge length of the square patches in pixels
    :return: dictionary with the patch fluxes and variances
"""
#move to electrons
data *= gain
#means of prescan and overscan
prescan = data[11:2056, 9:51].mean()
overscan = data[11:2056, 4150:4192].mean()
#take out pre and overscan
#x should start from 55 and go to 2090 for first Q
#from 2110 to 4130 for the second Q
# y should range from 10 to 2055 to have clean area...
Q0 = data[:, 51:2098].copy()
Q1 = data[:, 2098:4145].copy()
if debug:
print prescan, overscan
#subtract the bias levels
Q0 -= prescan
Q1 -= overscan
#load a flat and remove-pixel-to-pixel variation due to the flat...
flat = pf.getdata(flat)
Q0 /= flat[:, 51:2098]
Q1 /= flat[:, 2098:4145]
data[:, 51:2098] = Q0
data[:, 2098:4145] = Q1
fileIO.writeFITS(data, file.replace('.fits', 'flattened.fits'), int=False)
Q0 = data[11:2056, 56:2091].copy()
Q1 = data[11:2056, 2111:4131].copy()
#number of pixels in new areas
Q0y, Q0x = Q0.shape
Q1y, Q1x = Q1.shape
#number of patches
Q0y = int(np.floor(Q0y / size))
Q0x = int(np.floor(Q0x / size))
Q1y = int(np.floor(Q1y / size))
Q1x = int(np.floor(Q1x / size))
flux = []
variance = []
for i in range(Q0y):
for j in range(Q0x):
minidy = i*int(size)
maxidy = minidy + int(size)
minidx = j*int(size)
maxidx = minidx + int(size)
patch = Q0[minidy:maxidy, minidx:maxidx]
avg = np.mean(patch)
var = np.var(patch)
#filter out stuff too close to saturation
if avg < 300000 and var/avg < 2.5 and avg/var < 2.5:
flux.append(avg)
variance.append(var)
for i in range(Q1y):
for j in range(Q1x):
minidy = i * int(size)
maxidy = minidy + int(size)
minidx = j * int(size)
maxidx = minidx + int(size)
patch = Q1[minidy:maxidy, minidx:maxidx]
avg = np.mean(patch)
var = np.var(patch)
#filter out stuff too close to saturation
if avg < 300000 and var/avg < 2.5 and avg/var < 2.5:
flux.append(avg)
variance.append(var)
flux = np.asarray(flux)
variance = np.asarray(variance)
print file, np.mean(flux), np.mean(variance)
results = dict(flux=flux, variance=variance)
return results
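# For pure shot noise the patch variance should equal the patch mean (in
# electrons), which is what measureNoise probes patch by patch. A synthetic
# Poisson check of that relation (the 5000 e- level is made up):

```python
import numpy as np

rng = np.random.RandomState(0)
electrons = rng.poisson(lam=5000.0, size=(400, 400)).astype(float)

size = 100
fluxes, variances = [], []
for i in range(electrons.shape[0] // size):
    for j in range(electrons.shape[1] // size):
        patch = electrons[i * size:(i + 1) * size, j * size:(j + 1) * size]
        fluxes.append(patch.mean())
        variances.append(patch.var())

ratio = np.mean(variances) / np.mean(fluxes)
print(abs(ratio - 1.0) < 0.05)  # variance tracks the mean for Poisson data
```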
def measureNoiseRandomPositions(data, size, file, flat='combined.fits', rands=50, debug=False):
"""
    Measure the average signal level and variance in several randomly placed patches within a single image.
    :param data: full-frame image data in ADU
    :param size: edge length of the square patches in pixels
    :return: dictionary with the patch fluxes and variances
"""
#means of prescan and overscan
prescan = data[11:2056, 9:51].mean()
overscan = data[11:2056, 4150:4192].mean()
#take out pre and overscan
#x should start from 55 and go to 2090 for first Q
#from 2110 to 4130 for the second Q
# y should range from 10 to 2055 to have clean area...
Q0 = data[:, 51:2098].copy()
Q1 = data[:, 2098:4145].copy()
if debug:
print prescan, overscan
#subtract the bias levels
Q0 -= prescan
Q1 -= overscan
#load a flat and remove-pixel-to-pixel variation due to the flat...
flat = pf.getdata(flat)
Q0 /= flat[:, 51:2098]
Q1 /= flat[:, 2098:4145]
data[:, 51:2098] = Q0
data[:, 2098:4145] = Q1
fileIO.writeFITS(data, file.replace('.fits', 'flattened.fits'), int=False)
Q0 = data[11:2056, 56:2091].copy()
Q1 = data[11:2056, 2111:4131].copy()
#number of pixels in new areas
Q0y, Q0x = Q0.shape
Q1y, Q1x = Q1.shape
    h = int(size // 2)  # half size must be an integer to be usable as a slice index
Q0y -= h
Q0x -= h
Q1y -= h
Q1x -= h
xpos = np.random.random_integers(h, min(Q0x, Q1x), size=rands)
ypos = np.random.random_integers(h, min(Q0y, Q1y), size=rands)
flux = []
variance = []
for i in xpos:
for j in ypos:
patch = Q0[j-h:j+h, i-h:i+h]
avg = np.mean(patch)
var = np.var(patch)
#filter out stuff too close to saturation
if avg < 62000:# and var/avg < 2.0 and avg/var < 2.0:
flux.append(avg)
variance.append(var)
#print avg, var
for i in xpos:
for j in ypos:
patch = Q1[j-h:j+h, i-h:i+h]
avg = np.mean(patch)
var = np.var(patch)
#filter out stuff too close to saturation
if avg < 62000:# and var/avg < 2.0 and avg/var < 2.0:
flux.append(avg)
variance.append(var)
flux = np.asarray(flux)
variance = np.asarray(variance)
results = dict(flux=flux, variance=variance)
return results
def plotAutocorrelation(data, output='Autocorrelation.pdf'):
"""
    :param data: dictionary of results as returned by the measureNoise functions
    :return: None
"""
import acor
fig = plt.figure()
plt.subplots_adjust(left=0.15)
plt.title(r'CCD273-84-2-F15, Serial number: 11312-14-01')
ax = fig.add_subplot(111)
for file, values in data.iteritems():
flux = np.mean(values['flux'])
variance = values['variance']
tau, mean, sigma = acor.acor(variance)
ax.plot(flux, mean, 'bo')
ax.set_xlabel(r'$ \left < \mathrm{Signal} \right > \quad [e^{-}]$')
ax.set_ylabel('mean')
#plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
plt.savefig(output)
plt.close()
def plotResults(data, size, pairwise=True, output='FlatfieldFullwellEstimate.pdf'):
"""
    :param data: dictionary of results as returned by the measureNoise functions
    :return: None
"""
size = int(size)
fig = plt.figure()
plt.subplots_adjust(left=0.15)
plt.title(r'CCD273-84-2-F15, Serial number: 11312-14-01')
ax = fig.add_subplot(111)
mf = np.asarray([])
mv = np.asarray([])
for file, values in data.iteritems():
flux = values['flux']
variance = values['variance']
mflux = np.mean(flux)
mvar = np.mean(variance)
mf = np.hstack((mf, flux))
mv = np.hstack((mv, variance))
ax.plot(flux, variance, 'r.', alpha=0.05)
ax.plot(np.median(flux), np.median(variance), 'bo')
ax.errorbar(mflux, mvar, yerr=np.std(variance), marker='s', ecolor='green',
mfc='green', mec='green', ms=5, mew=1)
ax.plot([-1,], [-1,], 'r.', label='data')
ax.plot([-1,], [-1,], 'bo', label='median')
ax.errorbar([-1,], [-1,], marker='s', ecolor='green', mfc='green', mec='green', ms=5, mew=1, label='mean')
msk = mf < 220000
z = np.polyfit(mf[msk], mv[msk], 2)
ev = np.poly1d(z)
x = np.linspace(0, 250000, 100)
#second order but no intercept
fitfunc = lambda p, x: p[0]*x*(1 - p[1]*x)
errfunc = lambda p, x, y: fitfunc(p, x) - y
p1, success = optimize.leastsq(errfunc, [1.0, 1e-6], args=(mf[msk], mv[msk]))
y2 = fitfunc(p1, x)
ax.plot(x, ev(x), 'k-', lw=2, label='2nd order fit')
ax.plot(x, y2, 'y:', lw=2, label='2nd order fit, no intercept')
txt = r'$y = %.3e \times x^{2} + %.4f x + %.3f$' % (z[0], z[1], z[2])
txt2 = r'$y = %.4f x (1 - %.3e \times x)$' % (p1[0], p1[1])
ax.text(0.3, 0.1, txt, horizontalalignment='left', verticalalignment='center', transform=ax.transAxes,
alpha=0.5, size='small')
ax.text(0.3, 0.15, txt2, horizontalalignment='left', verticalalignment='center', transform=ax.transAxes,
alpha=0.5, size='small')
ax.plot([0, 250000], [0, 250000], 'm--', lw=1.5, label='shot noise')
ax.set_xlim(0, 240000)
ax.set_ylim(0, 170000)
ax.set_xlabel(r'$ \left < \mathrm{Signal}_{%i \times %i} \right > \quad [e^{-}]$' % (size, size))
if pairwise:
ax.set_ylabel(r'$\frac{1}{2}\sigma^{2}(\Delta \mathrm{Signal}_{%i \times %i}) \quad [(e^{-})^{2}]$' % (size, size))
else:
ax.set_ylabel(r'$\sigma^{2}(\mathrm{Signal}_{%i \times %i}) \quad [(e^{-})^{2}]$' % (size, size))
plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
plt.savefig(output)
plt.close()
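# The no-intercept model fitted above, var = p0 * S * (1 - p1 * S), can be
# exercised on synthetic noiseless data (the parameter values below are made
# up; with real data p0 and p1 describe the gain-like slope and the
# non-linearity of the photon transfer curve):

```python
import numpy as np
from scipy import optimize

fitfunc = lambda p, x: p[0] * x * (1 - p[1] * x)
errfunc = lambda p, x, y: fitfunc(p, x) - y

x = np.linspace(1e3, 2e5, 50)
true = [0.95, 1.2e-6]
y = fitfunc(true, x)

popt, success = optimize.leastsq(errfunc, [1.0, 1e-6], args=(x, y))
print(np.allclose(popt, true, rtol=1e-2))  # noiseless data: parameters recovered
```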
def plotResultsRowColumn(data, pairwise=True, output='FlatfieldFullwellEstimateRowColumn.pdf'):
"""
    :param data: dictionary of results as returned by the measureNoise functions
    :return: None
"""
#diff plot
fig = plt.figure()
plt.subplots_adjust(left=0.15)
plt.title(r'CCD273-84-2-F15, Serial number: 11312-14-01')
ax = fig.add_subplot(111)
for file, values in data.iteritems():
rowvariance = values['rowvariance']
rowflux = values['rowflux']
rowmflux = np.mean(rowflux)
rowmvar = np.mean(rowvariance)
columnflux = values['columnflux']
columnmflux = np.mean(columnflux)
columnvariance = values['columnvariance']
columnmvar = np.mean(columnvariance)
flux = (rowmflux+columnmflux)/2.
ax.plot(flux, rowmvar / columnmvar, 'bo')
#ax.plot(flux, np.median(rowvariance) / np.median(columnvariance), 'rs')
ax.plot([-1, ], [-1, ], 'bo', label='mean')
#ax.plot([-1, ], [-1, ], 'rs', label='median')
ax.plot([0, 250000], [1, 1], 'k--', lw=1.5)
ax.set_xlim(0, 240000)
#ax.set_ylim(0.9, 1.1)
ax.set_ylim(0.99, 1.01)
ax.set_ylabel(r'$\frac{\sigma^{2}_{row}}{\sigma^{2}_{column}}$')
ax.set_xlabel(r'$ \left < \mathrm{Signal} \right > \quad [e^{-}]$')
plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
plt.savefig('RowColumnDifference.pdf')
plt.close()
#fits
fig = plt.figure()
plt.subplots_adjust(left=0.15)
plt.title(r'CCD273-84-2-F15, Serial number: 11312-14-01')
ax = fig.add_subplot(111)
rowmf = np.asarray([])
rowmv = np.asarray([])
columnmf = np.asarray([])
columnmv = np.asarray([])
for file, values in data.iteritems():
rowflux = values['rowflux']
rowvariance = values['rowvariance']
rowmflux = np.mean(rowflux)
rowmvar = np.mean(rowvariance)
rowmf = np.hstack((rowmf, rowflux))
rowmv = np.hstack((rowmv, rowvariance))
columnflux = values['columnflux']
columnvariance = values['columnvariance']
columnmflux = np.mean(columnflux)
columnmvar = np.mean(columnvariance)
columnmf = np.hstack((columnmf, columnflux))
columnmv = np.hstack((columnmv, columnvariance))
ax.plot(np.median(rowflux), np.median(rowvariance), 'ro')
ax.errorbar(rowmflux, rowmvar, yerr=np.std(rowvariance), marker='s', ecolor='red',
mfc='red', mec='red', ms=5, mew=1)
ax.plot(np.median(columnflux), np.median(columnvariance), 'bo')
ax.errorbar(columnmflux, columnmvar, yerr=np.std(columnvariance), marker='s', ecolor='blue',
mfc='blue', mec='blue', ms=5, mew=1)
ax.plot([-1,], [-1,], 'ro', label='row median')
ax.errorbar([-1,], [-1,], marker='s', ecolor='red', mfc='red', mec='red', ms=5, mew=1, label='row mean')
ax.plot([-1,], [-1,], 'bo', label='column median')
ax.errorbar([-1,], [-1,], marker='s', ecolor='blue', mfc='blue', mec='blue', ms=5, mew=1, label='column mean')
msk = rowmf < 170000
x = np.linspace(0, 250000, 100)
#second order but no intercept
fitfunc = lambda p, x: p[0]*x*(1 - p[1]*x)
errfunc = lambda p, x, y: fitfunc(p, x) - y
p1, success = optimize.leastsq(errfunc, [1.0, 1e-6], args=(rowmf[msk], rowmv[msk]))
y2 = fitfunc(p1, x)
ax.plot(x, y2, 'r-', lw=2, label='2nd order fit (row)')
txt2 = r'$y_{row} = %.4f x (1 - %.3e \times x)$' % (p1[0], p1[1])
ax.text(0.3, 0.15, txt2, horizontalalignment='left', verticalalignment='center', transform=ax.transAxes,
alpha=0.5, size='small')
msk = columnmf < 170000
#second order but no intercept
p2, success = optimize.leastsq(errfunc, [1.0, 1e-6], args=(columnmf[msk], columnmv[msk]))
y3 = fitfunc(p2, x)
ax.plot(x, y3, 'b--', lw=2, label='2nd order fit (column)')
txt2 = r'$y_{column} = %.4f x (1 - %.3e \times x)$' % (p2[0], p2[1])
ax.text(0.3, 0.1, txt2, horizontalalignment='left', verticalalignment='center', transform=ax.transAxes,
alpha=0.5, size='small')
ax.plot([0, 250000], [0, 250000], 'k--', lw=1.5, label='shot noise')
ax.set_xlim(0, 240000)
ax.set_ylim(0, 170000)
ax.set_xlabel(r'$ \left < \mathrm{Signal} \right > \quad [e^{-}]$')
if pairwise:
ax.set_ylabel(r'$\frac{1}{2}\sigma^{2}(\Delta \mathrm{Signal}) \quad [(e^{-})^{2}]$')
else:
ax.set_ylabel(r'$\sigma^{2}(\Delta \mathrm{Signal}) \quad [(e^{-})^{2}]$')
plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
plt.savefig(output)
plt.close()
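The second-order, intercept-free fit above can be exercised on synthetic data; a minimal standalone sketch (the flux range and coefficients are illustrative, not measured values) showing that optimize.leastsq recovers the generating parameters of the same model:

```python
import numpy as np
from scipy import optimize

# same model as above: second order, no intercept
fitfunc = lambda p, x: p[0] * x * (1. - p[1] * x)
errfunc = lambda p, x, y: fitfunc(p, x) - y

# synthetic mean-variance data (illustrative gain-like slope and curvature)
x = np.linspace(1000., 170000., 50)
truth = [0.98, 1.4e-6]
y = fitfunc(truth, x)

# recover the parameters starting from the same initial guess used above
p, success = optimize.leastsq(errfunc, [1.0, 1e-6], args=(x, y))
```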
def findPairs():
"""
Prints the average signal level in a fixed region of each flat-field file so that exposure pairs of similar flux can be identified.

:return: None
"""
files = g.glob('05*Euclid.fits')
for file in files:
data = pf.getdata(file)
c1 = np.average(data[1800:1810, 200:210])
print file, c1
def pairwiseNoise(pairs, gain=3.5, size=100.0, simple=False):
"""
Calculates the mean flux within regions of size x size pixels and the variance from the difference image.
The variance of the difference image is divided by two, because var(x - y) = var(x) + var(y) for independent x and y.
The calculations are performed in electrons, so the variance (= noise**2) should equal the mean signal
when no correlated noise or other effects are present, i.e. in the case of pure shot noise.

:param pairs: list of (file1, file2) name pairs of flat fields taken at the same flux level
:param gain: gain used to convert from ADUs to electrons [e-/ADU]
:param size: side length of the square patches in pixels
:param simple: if True, subtract a single pre/overscan level per quadrant instead of row-by-row

:return: dictionary of flux and variance arrays, keyed by the first file name of each pair
"""
results = {}
for f1, f2 in pairs:
#move from ADUs to electrons
d1 = pf.getdata(f1) * gain
d2 = pf.getdata(f2) * gain
if simple:
#pre/overscans
prescan1 = d1[11:2056, 9:51].mean()
overscan1 = d1[11:2056, 4150:4192].mean()
prescan2 = d2[11:2056, 9:51].mean()
overscan2 = d2[11:2056, 4150:4192].mean()
#define quadrants and subtract the bias levels
Q10 = d1[11:2051, 58:2095].copy() - prescan1
Q20 = d2[11:2051, 58:2095].copy() - prescan2
Q11 = d1[11:2051, 2110:4132].copy() - overscan1
Q21 = d2[11:2051, 2110:4132].copy() - overscan2
else:
y1, x1 = d1.shape
#subtract over/prescan row-by-row to minimise any bias variation in the column direction
for row in range(y1):
prescan1 = np.median(d1[row, 9:48])
prescan2 = np.median(d2[row, 9:48])
d1[row, :2099] -= prescan1
d2[row, :2099] -= prescan2
for row in range(y1):
overscan1 = np.median(d1[row, 4152:4190])
overscan2 = np.median(d2[row, 4152:4190])
d1[row, 2100:] -= overscan1
d2[row, 2100:] -= overscan2
#define quadrants; usable image area Q0
Q10 = d1[11:2051, 58:2095].copy()
Q20 = d2[11:2051, 58:2095].copy()
#Q1
Q11 = d1[11:2051, 2110:4132].copy()
Q21 = d2[11:2051, 2110:4132].copy()
#number of pixels in new areas
Q10y, Q10x = Q10.shape
Q11y, Q11x = Q11.shape
#number of patches
Q0y = int(np.floor(Q10y / size))
Q0x = int(np.floor(Q10x / size))
Q1y = int(np.floor(Q11y / size))
Q1x = int(np.floor(Q11x / size))
flux = []
variance = []
for i in range(Q0y):
for j in range(Q0x):
minidy = i * int(size)
maxidy = minidy + int(size)
minidx = j * int(size)
maxidx = minidx + int(size)
patch1 = np.ravel(Q10[minidy:maxidy, minidx:maxidx])
patch2 = np.ravel(Q20[minidy:maxidy, minidx:maxidx])
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 100:
flux.append((avg1+avg2)/2.)
variance.append(var/2.)
#variance.append(var/np.sqrt(2.))
for i in range(Q1y):
for j in range(Q1x):
minidy = i * int(size)
maxidy = minidy + int(size)
minidx = j * int(size)
maxidx = minidx + int(size)
patch1 = np.ravel(Q11[minidy:maxidy, minidx:maxidx])
patch2 = np.ravel(Q21[minidy:maxidy, minidx:maxidx])
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 100:
flux.append((avg1+avg2)/2.)
variance.append(var/2.)
#variance.append(var/np.sqrt(2.))
flux = np.asarray(flux)
variance = np.asarray(variance)
results[f1] = dict(flux=flux, variance=variance)
print f1, flux.mean(), variance.mean()
return results
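The statistic computed above rests on the identity var(x - y) = var(x) + var(y) for independent frames; a small sketch confirming that half the difference-image variance recovers the mean signal for pure Poisson noise:

```python
import numpy as np

rng = np.random.RandomState(42)
flux = 10000.
# two independent flat fields at the same flux level
d1 = rng.poisson(flux, (200, 200))
d2 = rng.poisson(flux, (200, 200))

mean = (d1.mean() + d2.mean()) / 2.
halfvar = np.var(d1 - d2) / 2.  # var(x - y)/2 = shot-noise variance
```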
def pairwiseNoiseRowColumns(pairs, gain=3.5):
"""
Calculates the mean flux within each row/column and the variance from the difference image.
The variance of the difference image is divided by two, because var(x - y) = var(x) + var(y) for independent x and y.
The calculations are performed in electrons, so the variance (= noise**2) should equal the mean signal
when no correlated noise or other effects are present, i.e. in the case of pure shot noise.

:param pairs: list of (file1, file2) name pairs of flat fields taken at the same flux level
:param gain: gain used to convert from ADUs to electrons [e-/ADU]

:return: dictionary of row/column flux and variance arrays, keyed by the first file name of each pair
"""
print 'file , column flux, row flux, column variance, row variance'
results = {}
for f1, f2 in pairs:
#move from ADUs to electrons
d1 = pf.getdata(f1) * gain
d2 = pf.getdata(f2) * gain
y1, x1 = d1.shape
#subtract over/prescan row-by-row to minimise any bias variation in the column direction
for row in range(y1):
prescan1 = np.median(d1[row, 9:48])
prescan2 = np.median(d2[row, 9:48])
d1[row, :2099] -= prescan1
d2[row, :2099] -= prescan2
for row in range(y1):
#not really an overscan, but a prescan of a different node
overscan1 = np.median(d1[row, 4152:4190])
overscan2 = np.median(d2[row, 4152:4190])
d1[row, 2100:] -= overscan1
d2[row, 2100:] -= overscan2
#define quadrants; usable image area Q0
Q10 = d1[11:2051, 58:2095].copy()
Q20 = d2[11:2051, 58:2095].copy()
#Q1
Q11 = d1[11:2051, 2110:4132].copy()
Q21 = d2[11:2051, 2110:4132].copy()
#number of pixels in newly defined areas
Q10y, Q10x = Q10.shape
Q11y, Q11x = Q11.shape
#data containers
rowflux = []
rowvariance = []
columnflux = []
columnvariance = []
#loop over rows in Q0
for row in range(Q10y):
patch1 = Q10[row, :]
patch2 = Q20[row, :]
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 200:
rowflux.append((avg1+avg2)/2.)
rowvariance.append(var/2.)
#loop over columns in Q0
for column in range(Q10x):
patch1 = Q10[:, column]
patch2 = Q20[:, column]
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 200:
columnflux.append((avg1+avg2)/2.)
columnvariance.append(var/2.)
#loop over rows in Q1
for row in range(Q11y):
patch1 = Q11[row, :]
patch2 = Q21[row, :]
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 200:
rowflux.append((avg1+avg2)/2.)
rowvariance.append(var/2.)
#loop over columns in Q1
for column in range(Q11x):
patch1 = Q11[:, column]
patch2 = Q21[:, column]
avg1 = np.mean(patch1)
avg2 = np.mean(patch2)
diff = patch1.copy() - patch2.copy()
var = np.var(diff)
#filter out stuff if the averages are too far off
if np.abs(avg1 - avg2) < 200:
columnflux.append((avg1+avg2)/2.)
columnvariance.append(var/2.)
rowflux = np.asarray(rowflux)
rowvariance = np.asarray(rowvariance)
columnflux = np.asarray(columnflux)
columnvariance = np.asarray(columnvariance)
results[f1] = dict(rowflux=rowflux, rowvariance=rowvariance,
columnflux=columnflux, columnvariance=columnvariance)
#print f1, np.median(columnflux), np.median(rowflux), np.median(columnvariance), np.median(rowvariance)
print f1, np.mean(columnflux), np.mean(rowflux), np.mean(columnvariance), np.mean(rowvariance)
return results
def plotDetectorCounts():
"""
:return:
"""
#files = g.glob('05*Euclid.fits')
#files = g.glob('05*Euclidflattened.fits')
files = g.glob('05*Euclidbiasremoved.fits')
fig = plt.figure(1)
plt.title(r'CCD273-84-2-F15, Serial number: 11312-14-01')
ax = fig.add_subplot(111)
for file in files:
data = pf.getdata(file)
c1 = np.average(data[1800:1810, 200:210])
c2 = np.average(data[1000:1010, 1800:1810])
c3 = np.average(data[1000:1010, 2300:2310])
c4 = np.average(data[200:210, 3800:3810])
plt.figure(2)
im = plt.imshow(data, origin='lower', extent=[0, 4100, 0, 2070])
plt.plot([205, 1805, 2305, 3805], [1805, 1005, 1005, 205], 'rs')
plt.text(200, 1810, 'C1')
plt.text(1800, 1010, 'C2')
plt.text(2300, 1010, 'C3')
plt.text(3800, 210, 'C4')
plt.xlim(0, 4100)
plt.ylim(0, 2070)
c = plt.colorbar(im)
c.set_label('Image Scale')
plt.xlabel('X [pixels]')
plt.ylabel('Y [pixels]')
plt.savefig(file.replace('.fits', '.png'))
plt.close()
del data
plt.plot(c1, c1 / c2, 'bo')
plt.plot(c4, c4 / c3, 'rs')
plt.plot(c1, c1 / c2, 'bo', label='C1 vs C1 / C2')
plt.plot(c4, c4 / c3, 'rs', label='C4 vs C4 / C3')
ax.set_xlim(0, 65000)
ax.set_xlabel('Counts [C1 or C4] [ADU]')
ax.set_ylabel('Delta Counts [C1/C2 or C4/C3] [ADU]')
plt.legend(shadow=True, fancybox=True, numpoints=1)
plt.savefig('gradient.pdf')
def simulatePoissonProcess(max=200000, size=200):
"""
Simulate a Poisson noise process and the effect of non-linearity and a gain change on the photon transfer curve.

:param max: maximum average flux level [e-]
:param size: side length of the simulated region in pixels

:return: None
"""
#for non-linearity
from support import VISinstrumentModel
size = int(size)
fluxlevels = np.linspace(1000, max, 50)
#readnoise
readnoise = np.random.normal(loc=0, scale=4.5, size=(size, size))
#PRNU
prnu = np.random.normal(loc=1.0, scale=0.02, size=(size, size))
fig = plt.figure(1)
plt.title(r'Simulation: $%i \times %i$ region' % (size, size))
plt.subplots_adjust(left=0.14)
ax = fig.add_subplot(111)
for flux in fluxlevels:
d1 = np.random.poisson(flux, (size, size))*prnu + readnoise
d2 = np.random.poisson(flux, (size, size))*prnu + readnoise
fx = (np.average(d1) + np.average(d2)) / 2.
ax.plot(fx, np.var(d1-d2)/2., 'bo')
d1 = np.random.poisson(flux, (size, size))*prnu + readnoise
d2 = np.random.poisson(flux, (size, size))*prnu + readnoise
#d1nonlin = VISinstrumentModel.CCDnonLinearityModelSinusoidal(d1, 0.1, phase=0.5, multi=1.5)
#d2nonlin = VISinstrumentModel.CCDnonLinearityModelSinusoidal(d2, 0.1, phase=0.5, multi=1.5)
d1nonlin = VISinstrumentModel.CCDnonLinearityModel(d1)
d2nonlin = VISinstrumentModel.CCDnonLinearityModel(d2)
fx = (np.average(d1) + np.average(d2)) / 2.
ax.plot(fx, np.var(d1nonlin-d2nonlin)/2., 'rs')
d1 = np.random.poisson(flux, (size, size))*prnu*1.05 + readnoise #5% gain change
d2 = np.random.poisson(flux, (size, size))*prnu + readnoise
fx = (np.average(d1) + np.average(d2)) / 2.
ax.plot(fx, np.var(d1 - d2) / 2., 'mD')
ax.plot([-1, ], [-1, ], 'bo', label='data (linear)')
ax.plot([-1, ], [-1, ], 'rs', label='data (non-linear)')
ax.plot([-1, ], [-1, ], 'mD', label='data (gain change)')
ax.plot([0, max], [0, max], 'k-', lw=1.5, label='shot noise')
ax.set_xlim(0, max)
ax.set_ylim(0, max)
ax.set_xlabel(r'$ \left < \mathrm{Signal}_{%i \times %i} \right > \quad [e^{-}]$' % (size, size))
ax.set_ylabel(r'$\frac{1}{2}\sigma^{2}(\Delta \mathrm{Signal}) \quad [(e^{-})^{2}]$')
plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
plt.savefig('Simulation.pdf')
plt.close()
def simulatePoissonProcessRowColumn(max=200000, size=200, short=True, Gaussian=False):
"""
Simulates a Poisson noise process with correlations introduced either by a nearest-neighbour
coupling kernel or by Gaussian smoothing, and derives the variance in the row and column directions.

:param max: maximum average flux level [e-]
:param size: side length of the simulated region in pixels
:param short: whether to use a short or a long Gaussian smoothing scale
:param Gaussian: if True, use Gaussian smoothing instead of the coupling kernel

:return: None
"""
fluxlevels = np.linspace(1000, max, 40)
#readnoise
readnoise = np.random.normal(loc=0, scale=4.5, size=(size, size))
#PRNU
prnu = np.random.normal(loc=1.0, scale=0.02, size=(size, size))
fig = plt.figure(1)
plt.title(r'Simulation: $%i \times %i$ region' % (size, size))
plt.subplots_adjust(left=0.14)
ax = fig.add_subplot(111)
#correlation coefficient
val = 1.455e-6 * 1.8
print val
for flux in fluxlevels:
d1 = np.random.poisson(flux, (size, size)) * prnu + readnoise
d2 = np.random.poisson(flux, (size, size)) * prnu + readnoise
#convolution
if not Gaussian:
#kernel = np.array([[0,val*flux,0],[0,(1-val),0],[0,val*flux,0]])
#kernel = np.array([[0,val*flux,0],[val*flux,(1-val),val*flux],[0,val*flux,0]])
kernel = np.array([[0, val*flux/4., 0],
[val*flux/4., (1-val), val*flux/4.],
[0, val*flux/4., 0]])
d1 = ndimage.convolve(d1, kernel)
d2 = ndimage.convolve(d2, kernel)
#gaussian smoothing
if Gaussian:
if short:
d1 = ndimage.filters.gaussian_filter(d1, [2, 0])
d2 = ndimage.filters.gaussian_filter(d2, [2, 0])
else:
d2 = ndimage.filters.gaussian_filter(d2, [15, 0])
d1 = ndimage.filters.gaussian_filter(d1, [15, 0])
#change the correlation in row direction
#for column in range(size):
# for row in range(size-2):
# d1[row+1, column] = (d1[row, column] + d1[row+1, column] + d1[row+2, column]) / 3.
# d2[row+1, column] = (d2[row, column] + d2[row+1, column] + d2[row+2, column]) / 3.
#calculate correlation in row/column direction
rowvar = []
rowfx = []
for row1, row2 in zip(d1, d2):
var = np.var(row1 - row2) / 2.
fx = (np.average(row1) + np.average(row2)) / 2.
rowvar.append(var)
rowfx.append(fx)
ax.plot(rowfx, rowvar, 'b.', alpha=0.1)
ax.plot(np.average(np.asarray(rowfx)), np.average(np.asarray(rowvar)), 'bo', zorder=14)
#ax.plot(np.median(np.asarray(rowfx)), np.median(np.asarray(rowvar)), 'bo')
colvar = []
colfx = []
for column1, column2 in zip(d1.T, d2.T):
var = np.var(column1 - column2) / 2.
fx = (np.average(column1) + np.average(column2)) / 2.
colvar.append(var)
colfx.append(fx)
ax.plot(colfx, colvar, 'r.', alpha=0.1)
ax.plot(np.average(np.asarray(colfx))+2000, np.average(np.asarray(colvar)), 'rs', zorder=14)
#ax.plot(np.average(np.median(colfx)), np.median(np.asarray(colvar)), 'rs')
#save d1 to a FITS file...
if short:
fileIO.writeFITS(d1, 'correlatedNoiseShort.fits', int=False)
else:
fileIO.writeFITS(d1, 'correlatedNoiseLong.fits', int=False)
ax.plot([0, ], [0, ], 'bo', label='row')
ax.plot([0, ], [0, ], 'rs', label='column')
ax.plot([0, max], [0, max], 'k-', lw=1.5, label='shot noise')
#fitted curve
x = np.arange(0, max+1000, 1000)
y = -1.375e-6*x**2 + 0.9857*x + 1084.37
ax.plot(x, y, 'g-', label='2nd order curve')
ax.set_xlim(0, max)
ax.set_ylim(0, max)
ax.set_xlabel(r'$ \left < \mathrm{Signal}_{%i \times %i} \right > \quad [e^{-}]$' % (size, size))
ax.set_ylabel(r'$\frac{1}{2}\sigma^{2}(\Delta \mathrm{Signal}) \quad [(e^{-})^{2}]$')
plt.legend(shadow=True, fancybox=True, loc='upper left', numpoints=1)
if short:
plt.savefig('SimulationRowColShort.pdf')
else:
plt.savefig('SimulationRowColLong.pdf')
plt.close()
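The coupling-kernel route used above can be isolated: convolving both frames with a normalised nearest-neighbour kernel correlates adjacent pixels and pushes var(d1 - d2)/2 below the mean signal. The coupling fraction a below is illustrative, not the fitted value:

```python
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(0)
flux = 50000.
d1 = rng.poisson(flux, (200, 200)).astype(np.float64)
d2 = rng.poisson(flux, (200, 200)).astype(np.float64)

a = 0.05  # illustrative fraction shared with the four nearest neighbours
kernel = np.array([[0., a / 4., 0.],
                   [a / 4., 1. - a, a / 4.],
                   [0., a / 4., 0.]])
# the kernel sums to one, so the mean is preserved while the noise becomes correlated
d1c = ndimage.convolve(d1, kernel)
d2c = ndimage.convolve(d2, kernel)
halfvar = np.var(d1c - d2c) / 2.
```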
def analyseCorrelationFourier(file1='05Sep_14_35_00s_Euclid.fits', file2='05Sep_14_36_31s_Euclid.fits',
gain=3.1, small=False, shift=False, interpolation='none'):
"""
Calculates and visualises the 2D power spectra of two flat fields and of their difference image.

:param file1: name of the first FITS file to be used in the analysis
:param file2: name of the second FITS file to be used in the analysis
:param gain: gain used to convert from ADUs to electrons [e-/ADU]
:param small: whether to limit the analysis to a small (256 x 256) region
:param shift: whether to shift the zero frequency to the centre of the spectrum
:param interpolation: interpolation scheme used when visualising the grids
"""
d1 = pf.getdata(file1) * gain
d2 = pf.getdata(file2) * gain
#pre/overscans
#prescan1 = d1[11:2056, 9:51].mean()
overscan1 = d1[11:2056, 4150:4192].mean()
#prescan2 = d2[11:2056, 9:51].mean()
overscan2 = d2[11:2056, 4150:4192].mean()
#define quadrants and subtract the bias levels
#Q10 = d1[11:2051, 58:2095].copy() - prescan1
#Q20 = d2[11:2051, 58:2095].copy() - prescan2
Q11 = d1[11:2050, 2110:4131].copy() - overscan1
Q21 = d2[11:2050, 2110:4131].copy() - overscan2
#limit to 1024
Q11 = Q11[300:1324, 300:1324]
Q21 = Q21[300:1324, 300:1324]
q1y, q1x = Q11.shape
#small region
if small:
Q11 = Q11[500:756, 500:756].copy()
Q21 = Q21[500:756, 500:756].copy()
print Q11.shape
#Fourier analysis: calculate 2D power spectrum and take a log
if shift:
fourierSpectrum1 = np.log10(np.abs(fftpack.fftshift(fftpack.fft2(Q11))) + 1)
fourierSpectrum2 = np.log10(np.abs(fftpack.fftshift(fftpack.fft2(Q21))) + 1)
else:
fourierSpectrum1 = np.log10(np.abs(fftpack.fft2(Q11)) + 1)
fourierSpectrum2 = np.log10(np.abs(fftpack.fft2(Q21)) + 1)
#difference image
diff = (Q11 - Q21).copy()
if shift:
fourierSpectrumD = np.log10(np.abs(fftpack.fftshift(fftpack.fft2(diff))) + 1)
else:
fourierSpectrumD = np.log10(np.abs(fftpack.fft2(diff)) + 1)
#plot images
fig = plt.figure(figsize=(14.5,6.5))
plt.suptitle('Fourier Analysis of Flat-field Data')
plt.suptitle(r'Original Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.52, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.79, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(Q11, origin='lower', interpolation=interpolation)
if small:
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e', ticks=[3.1*45000, 3.1*47000, 3.1*49000])
i2 = ax2.imshow(fourierSpectrum1[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2.5, vmax=6.5, rasterized=True)
i3 = ax3.imshow(fourierSpectrum1[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2.5, vmax=6.5, rasterized=True)
else:
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e', ticks=[3.1*35000, 3.1*40000, 3.1*45000, 3.1*50000])
i2 = ax2.imshow(fourierSpectrum1[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=4, vmax=7, rasterized=True)
i3 = ax3.imshow(fourierSpectrum1[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=4, vmax=7, rasterized=True)
tmpx = ax3.get_xlim()
tmpy = ax3.get_ylim()
ax3.set_xlim(tmpx[1] - 20, tmpx[1])
ax3.set_ylim(tmpy[1] - 20, tmpy[1])
if small:
plt.colorbar(i2, ax=ax2, orientation='horizontal')
plt.colorbar(i3, ax=ax3, orientation='horizontal')
else:
plt.colorbar(i2, ax=ax2, orientation='horizontal')
plt.colorbar(i3, ax=ax3, orientation='horizontal')#, ticks=[10, 10.5, 11, 11.5, 12, 12.5])
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax3.set_xlabel('$l_{x}$')
ax1.set_ylabel('Y [pixel]')
#ax2.set_ylabel('$l_{y}$')
#ax3.set_ylabel('$l_{y}$')
if small:
plt.savefig('Fourier1.pdf')
else:
plt.savefig('Fourier1Full.pdf')
plt.close()
fig = plt.figure(figsize=(14.5,6.5))
plt.suptitle('Fourier Analysis of Flat-field Data')
plt.suptitle(r'Original Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.52, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.79, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(Q21, origin='lower', interpolation=interpolation)
if small:
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e', ticks=[3.1*45000, 3.1*47000, 3.1*49000])
i2 = ax2.imshow(fourierSpectrum2[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2, vmax=7,
rasterized=True)
i3 = ax3.imshow(fourierSpectrum2[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2, vmax=7,
rasterized=True)
else:
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e', ticks=[3.1*35000, 3.1*40000, 3.1*45000, 3.1*50000])
i2 = ax2.imshow(fourierSpectrum2[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=4, vmax=7,
rasterized=True)
i3 = ax3.imshow(fourierSpectrum2[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=4, vmax=7,
rasterized=True)
tmpx = ax3.get_xlim()
tmpy = ax3.get_ylim()
ax3.set_xlim(tmpx[1] - 20, tmpx[1])
ax3.set_ylim(tmpy[1] - 20, tmpy[1])
if small:
plt.colorbar(i2, ax=ax2, orientation='horizontal')#, ticks=[6, 7, 8, 9, 10, 11, 12])
plt.colorbar(i3, ax=ax3, orientation='horizontal')
else:
plt.colorbar(i2, ax=ax2, orientation='horizontal')#, ticks=[8, 9.5, 11, 13])
plt.colorbar(i3, ax=ax3, orientation='horizontal')#, ticks=[10, 10.5, 11, 11.5, 12, 12.5])
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax3.set_xlabel('$l_{x}$')
ax1.set_ylabel('Y [pixel]')
#ax2.set_ylabel('$l_{y}$')
#ax3.set_ylabel('$l_{y}$')
if small:
plt.savefig('Fourier2.pdf')
else:
plt.savefig('Fourier2Full.pdf')
plt.close()
fig = plt.figure(figsize=(14.5,6.5))
plt.suptitle('Fourier Analysis of Flat-field Data')
plt.suptitle(r'Original Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.52, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.79, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(diff, origin='lower', interpolation=interpolation, vmin=-1200, vmax=1200)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%i', ticks=[-1200, -600, 0, 600, 1200])
if small:
i2 = ax2.imshow(fourierSpectrumD[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2, vmax=6.5,
rasterized=True)
i3 = ax3.imshow(fourierSpectrumD[0:128, 0:128], interpolation=interpolation, origin='lower', vmin=2, vmax=6.5,
rasterized=True)
else:
i2 = ax2.imshow(fourierSpectrumD[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=2.5, vmax=7.5,
rasterized=True)
i3 = ax3.imshow(fourierSpectrumD[0:q1y/2, 0:q1x/2], interpolation=interpolation, origin='lower', vmin=2.5, vmax=7.5,
rasterized=True)
tmpx = ax3.get_xlim()
tmpy = ax3.get_ylim()
ax3.set_xlim(tmpx[1] - 20, tmpx[1])
ax3.set_ylim(tmpy[1] - 20, tmpy[1])
if small:
plt.colorbar(i2, ax=ax2, orientation='horizontal')#, ticks=[7, 8, 9, 10, 11, 12])
plt.colorbar(i3, ax=ax3, orientation='horizontal')#, ticks=[7, 7.5, 8, 8.5, 9, 9.5])
else:
plt.colorbar(i2, ax=ax2, orientation='horizontal')#, ticks=[9.5, 10, 10.5, 11, 11.5])
plt.colorbar(i3, ax=ax3, orientation='horizontal')#, ticks=[9, 9.5, 10, 10.5, 11])
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax3.set_xlabel('$l_{x}$')
ax1.set_ylabel('Y [pixel]')
#ax2.set_ylabel('$l_{y}$')
#ax3.set_ylabel('$l_{y}$')
if small:
plt.savefig('FourierDifference.pdf')
else:
plt.savefig('FourierDifferenceFull.pdf')
plt.close()
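The 2D power-spectrum step above has a simple sanity check: for uncorrelated (Poisson) noise the spectrum is flat apart from the zero-frequency term, which carries the mean level. A standalone sketch:

```python
import numpy as np
from scipy import fftpack

rng = np.random.RandomState(2)
img = rng.poisson(10000., (128, 128)).astype(np.float64)

# same transform as above: log10 of the 2D amplitude spectrum
spec = np.log10(np.abs(fftpack.fft2(img)) + 1.)

dc = spec[0, 0]                    # dominated by the mean signal
rest = np.delete(spec.ravel(), 0)  # roughly flat noise floor
```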
def spatialAutocorrelation(file1='05Sep_14_35_00s_Euclid.fits', file2='05Sep_14_36_31s_Euclid.fits',
gain=3.1, interpolation='none', size=1024):
"""
Calculates a spatial 2D autocorrelation from the difference image and visualises the findings.
Generates also two plots from simulated data for comparisons.
:param file1: name of the first FITS file to be used in the analysis
:param file2: name of the second FITS file to be used in the analysis
:param gain: gain conversion between ADUs and electrons
:param interpolation: whether the visualisation grid should be interpolated or not
:param size: side length in pixels of the area to be used for analysis
:return: None
"""
d1 = pf.getdata(file1) * gain
d2 = pf.getdata(file2) * gain
#pre/overscans
overscan1 = d1[11:2056, 4150:4192].mean()
overscan2 = d2[11:2056, 4150:4192].mean()
#define quadrants and subtract the bias levels
Q11 = d1[11:2050, 2110:4131].copy() - overscan1
Q21 = d2[11:2050, 2110:4131].copy() - overscan2
#limit to 1024
Q11 = Q11[300:1324, 300:1324]
Q21 = Q21[300:1324, 300:1324]
print Q11.mean(), Q21.mean()
level = (np.average(Q11) + np.average(Q21)) / 2.
if size > Q11.shape[0] or size > Q11.shape[1]:
print 'size too large, will abort...'
return None
diff = Q11 - Q21
diff = diff[0:size, 0:size]
#autoc = signal.correlate2d(diff, diff, mode='full') #slow
autoc = signal.fftconvolve(diff, np.flipud(np.fliplr(diff)), mode='full')
autoc /= np.max(autoc)
autoc *= 100.
fileIO.writeFITS(autoc, 'autocorrelationRealdata.fits', int=False)
yc, xc = autoc.shape
xc /= 2.
yc /= 2.
xc -= 0.5
yc -= 0.5
#plot images
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle(r'Autocorrelation of Flat Field Data ($%i \times %i$ grid)' % (size, size))
plt.suptitle(r'Difference Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.51, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.78, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(diff, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.0f', ticks=[-1500, -750, 0, 750, 1500])
i2 = ax2.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
ax3.set_xlim(xc - 5, xc + 5)
ax3.set_ylim(yc - 5, yc + 5)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
plt.colorbar(i3, ax=ax3, orientation='horizontal')#, ticks=[10, 10.5, 11, 11.5, 12, 12.5])
ax1.set_xlabel('X [pixel]')
ax1.set_ylabel('Y [pixel]')
plt.savefig('SpatialAutocorrelation.pdf')
plt.close()
diff = np.random.poisson(level, size=(size, size)) - np.random.poisson(level, size=(size, size))
#autoc = signal.correlate2d(diff, diff, mode='full') #slow
autoc = signal.fftconvolve(diff, np.flipud(np.fliplr(diff)), mode='full')
autoc /= np.max(autoc)
autoc *= 100.
fileIO.writeFITS(autoc, 'autocorrelationSimulated.fits', int=False)
yc, xc = autoc.shape
xc /= 2.
yc /= 2.
xc -= 0.5
yc -= 0.5
#plot images
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle(r'Autocorrelation of Simulated Data ($%i \times %i$ grid)' % (size, size))
plt.suptitle(r'Difference Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.51, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.78, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(diff, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.0f', ticks=[-1500, -750, 0, 750, 1500])
i2 = ax2.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
ax3.set_xlim(xc - 5, xc + 5)
ax3.set_ylim(yc - 5, yc + 5)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
plt.colorbar(i3, ax=ax3, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax1.set_ylabel('Y [pixel]')
plt.savefig('SpatialAutocorrelationSimulation.pdf')
plt.close()
size = 128
diff = np.random.poisson(level, size=(size, size))
autoc = signal.correlate2d(diff, diff, mode='full')
autoc /= np.max(autoc)
autoc *= 100.
yc, xc = autoc.shape
xc /= 2.
yc /= 2.
#plot images
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle(r'Autocorrelation of Simulated Data ($%i \times %i$ grid)' % (size, size))
plt.suptitle(r'Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.51, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.78, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(diff, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e')
i2 = ax2.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(autoc, interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
ax3.set_xlim(xc - 5, xc + 5)
ax3.set_ylim(yc - 5, yc + 5)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
plt.colorbar(i3, ax=ax3, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax1.set_ylabel('Y [pixel]')
plt.savefig('SpatialAutocorrelationSimulationSingle.pdf')
plt.close()
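The autocorrelation above is computed as a convolution of the difference image with its own double flip; a sketch showing this agrees, up to numerical precision, with the Wiener-Khinchin route via the power spectrum:

```python
import numpy as np
from scipy import signal

rng = np.random.RandomState(1)
diff = rng.normal(0., 1., (64, 64))

# direct route, as used above: convolution with the doubly flipped image
auto1 = signal.fftconvolve(diff, np.flipud(np.fliplr(diff)), mode='full')

# power-spectrum route: zero-pad to the full correlation size, then inverse
# transform the power spectrum and shift the zero lag to the centre
n = 2 * diff.shape[0] - 1
f = np.fft.fft2(diff, (n, n))
auto2 = np.fft.fftshift(np.real(np.fft.ifft2(f * np.conj(f))))
```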
def spatialAutocorrelationMovie(pairs, gain=3.1, interpolation='none', calculate=False):
"""
Calculates a spatial 2D autocorrelation from the difference image and visualises the findings.
:param pairs: list of (file1, file2) name pairs of flat fields taken at the same flux level
:param gain: gain conversion between ADUs and electrons
:param interpolation: whether the visualisation grid should be interpolated or not
:param calculate: whether to recalculate the autocorrelation functions or load them from an existing file

:return: None
"""
if calculate:
data = {}
for file1, file2 in pairs:
d1 = pf.getdata(file1) * gain
d2 = pf.getdata(file2) * gain
#pre/overscans
overscan1 = d1[11:2056, 4150:4192].mean()
overscan2 = d2[11:2056, 4150:4192].mean()
#define quadrants and subtract the bias levels
Q11 = d1[11:2050, 2110:4131].copy() - overscan1
Q21 = d2[11:2050, 2110:4131].copy() - overscan2
#limit to 1024
Q11 = Q11[500:1524, 500:1524]
Q21 = Q21[500:1524, 500:1524]
#difference image
diff = Q11 - Q21
#check the average signal levels
Q11a = np.average(Q11)
Q21a = np.average(Q21)
d = np.abs(Q11a - Q21a)
print Q11a, Q21a, d, file1, file2
if d > 50.:
print 'too large difference in the average signal level, will ignore...'
#seems to be rather sensitive to this!
continue
autoc = signal.fftconvolve(diff, np.flipud(np.fliplr(diff)), mode='full')
autoc /= np.max(autoc)
autoc *= 100.
level = (Q11a + Q21a) / 2.
data[level] = dict(data=diff, autoc=autoc)
fileIO.cPickleDumpDictionary(data, 'spatialAutocorrelationMovieData.pk')
else:
data = cPickle.load(open('spatialAutocorrelationMovieData.pk'))
#average signal levels, sorted
keys = data.keys()
keys.sort()
#centre of the autocorrelation
yc, xc = data[keys[0]]['autoc'].shape
xc /= 2.
yc /= 2.
xc -= 0.5
yc -= 0.5
#plot images
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle(r'Autocorrelation of Flat Field Data')
plt.suptitle(r'Difference Image $[e^{-}]$', x=0.24, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.51, y=0.26)
plt.suptitle(r'Autocorrelation Interaction $[\%]$', x=0.78, y=0.26)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
i1 = ax1.imshow(data[keys[0]]['data'], origin='lower', interpolation=interpolation, vmin=-1000, vmax=1000, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.0f', ticks=[-1000, -500, 0, 500, 1000])
i2 = ax2.imshow(data[keys[0]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(data[keys[0]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
ax3.set_xlim(xc - 5, xc + 5)
ax3.set_ylim(yc - 5, yc + 5)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
p3 = plt.colorbar(i3, ax=ax3, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax1.set_ylabel('Y [pixel]')
text = ax1.text(0.02, 0.9, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(data[keys[0]]['data'], origin='lower', interpolation=interpolation, vmin=-1000, vmax=1000, rasterized=True)
i2 = ax2.imshow(data[keys[0]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(data[keys[0]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
return i1, p1, i2, p2, i3, p3, text
def animate(i):
i1 = ax1.imshow(data[keys[i]]['data'], origin='lower', interpolation=interpolation, vmin=-1000, vmax=1000, rasterized=True)
i2 = ax2.imshow(data[keys[i]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
i3 = ax3.imshow(data[keys[i]]['autoc'], interpolation=interpolation, origin='lower', rasterized=True, vmin=0, vmax=5)
text.set_text(r'Mean Signal $\sim$ %.0f $e^{-}$' % keys[i])
return i1, p1, i2, p2, i3, p3, text
#note that frames defines the number of times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(keys), interval=2, blit=True)
anim.save('SpatialCorrelation.mp4', fps=1)
def examples(interpolation='none'):
"""
This function generates 1D and 2D power spectra from simulated data.
:param interpolation:
:return: None
"""
Pois1D = np.random.poisson(100000, 1024)
PowerSpectrum = np.log10(np.abs(fftpack.fft(Pois1D)))
#PowerSpectrum = np.log10(np.abs(fftpack.fftshift(fftpack.fft(Pois1D))))
print '1D Poisson:'
print np.mean(PowerSpectrum), np.median(PowerSpectrum), np.min(PowerSpectrum), np.max(PowerSpectrum), np.std(PowerSpectrum)
fig = plt.figure(figsize=(14, 8))
plt.suptitle('Fourier Analysis of Poisson Noise')
plt.suptitle('Input Data', x=0.32, y=0.93)
plt.suptitle(r'Power Spectrum', x=0.72, y=0.93)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
a = plt.axes([.65, .6, .2, .2], axisbg='y')
ax1.plot(Pois1D, 'bo')
ax2.plot(PowerSpectrum, 'r-')
a.plot(PowerSpectrum, 'r-')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Input Values')
ax2.set_ylabel(r'$\log_{10}$(Power Spectrum)')
ax1.set_xlim(0, 1024)
ax2.set_xlim(0, 1024)
ax2.set_ylim(2, 7)
a.set_xlim(0, 20)
plt.savefig('FourierPoisson1D.pdf')
plt.close()
#remove mean
Pois1D -= 100000 #np.mean(Pois1D)
PowerSpectrum = np.abs(fftpack.fft(Pois1D))
print '1D Poisson (mean removed):'
print np.mean(PowerSpectrum), np.median(PowerSpectrum), np.min(PowerSpectrum), np.max(PowerSpectrum), np.std(PowerSpectrum)
fig = plt.figure(figsize=(14, 8))
plt.suptitle('Fourier Analysis of Poisson Noise (mean removed)')
plt.suptitle('Input Data', x=0.32, y=0.93)
plt.suptitle(r'Power Spectrum', x=0.72, y=0.93)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
a = plt.axes([.65, .6, .2, .2], axisbg='y')
ax1.plot(Pois1D, 'bo')
ax2.plot(PowerSpectrum, 'r-')
a.hist(PowerSpectrum, bins=20)
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Input Values')
#ax2.set_ylabel(r'$\log_{10}$(Power Spectrum)')
ax2.set_ylabel('Power Spectrum')
ax1.set_xlim(0, 1024)
ax2.set_xlim(0, 1024)
#ax2.set_ylim(10**2, 10**7)
#a.set_xlim(0, 20)
plt.savefig('FourierPoissonMeanRemoved1D.pdf')
plt.close()
Sin1D = 20.*np.sin(np.arange(256) / 10.)
PowerSpectrum = np.log10(np.abs(fftpack.fft(Sin1D)))
print '1D Sin:'
print np.mean(PowerSpectrum), np.median(PowerSpectrum), np.min(PowerSpectrum), np.max(PowerSpectrum), np.std(PowerSpectrum)
fig = plt.figure(figsize=(14, 8))
plt.suptitle('Fourier Analysis of Sine Wave')
plt.suptitle('Input Data', x=0.32, y=0.93)
plt.suptitle(r'Power Spectrum', x=0.72, y=0.93)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
a = plt.axes([.65, .6, .2, .2], axisbg='y')
ax1.plot(Sin1D, 'bo')
ax2.plot(PowerSpectrum, 'r-')
a.plot(PowerSpectrum, 'r-')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Input Values')
ax2.set_ylabel(r'$\log_{10}$(Power Spectrum)')
ax1.set_xlim(0, 256)
ax2.set_xlim(0, 256)
a.set_xlim(0, 20)
plt.savefig('FourierSin1D.pdf')
plt.close()
Top1D = np.zeros(256)
Top1D[100:110] = 1.
PowerSpectrum = np.log10(np.abs(fftpack.fft(Top1D)))
print '1D Tophat:'
print np.mean(PowerSpectrum), np.median(PowerSpectrum), np.min(PowerSpectrum), np.max(PowerSpectrum), np.std(PowerSpectrum)
fig = plt.figure(figsize=(14, 8))
plt.suptitle('Fourier Analysis of Tophat')
plt.suptitle('Input Data', x=0.32, y=0.93)
plt.suptitle(r'Power Spectrum', x=0.72, y=0.93)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(Top1D, 'bo')
ax2.plot(PowerSpectrum, 'r-')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Input Values')
ax2.set_ylabel(r'$\log_{10}$(Power Spectrum)')
ax1.set_xlim(0, 256)
ax2.set_xlim(0, 256)
plt.savefig('FourierTophat1D.pdf')
plt.close()
s = 2048
ss = s / 2
Pois = np.random.poisson(100000, size=(s, s))
#fourierSpectrum1 = np.log10(np.abs(fftpack.fftshift(fftpack.fft2(Pois))))
fourierSpectrum1 = np.log10(np.abs(fftpack.fft2(Pois)))
print 'Poisson 2d:', np.var(Pois)
print np.mean(fourierSpectrum1), np.median(fourierSpectrum1), np.std(fourierSpectrum1), np.max(fourierSpectrum1), np.min(fourierSpectrum1)
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle('Fourier Analysis of Poisson Data')
plt.suptitle('Original Image', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(Pois, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99000, 100000, 101000])
i2 = ax2.imshow(fourierSpectrum1[0:ss, 0:ss], interpolation=interpolation, origin='lower',
rasterized=True, vmin=3, vmax=7)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Y [pixel]')
plt.savefig('FourierPoisson.pdf')
ax2.set_xlim(0, 10)
ax2.set_ylim(0, 10)
plt.savefig('FourierPoisson2.pdf')
ax2.set_xlim(ss-10, ss-1)
ax2.set_ylim(ss-10, ss-1)
plt.savefig('FourierPoisson3.pdf')
plt.close()
#Poisson with smoothing...
#val = 1.455e-6 / 2.
#flux = 100000
#kernel = np.array([[0, val * flux, 0], [val * flux, (1 - val), val * flux], [0, val * flux, 0]])
#kernel = np.array([[0.01, 0.02, 0.01], [0.02, 0.88, 0.02], [0.01, 0.02, 0.01]])
kernel = np.array([[0.0025, 0.01, 0.0025], [0.01, 0.95, 0.01], [0.0025, 0.01, 0.0025]])
Pois = ndimage.convolve(Pois.copy(), kernel)
#Pois = ndimage.filters.gaussian_filter(Pois.copy(), sigma=0.4)
fourierSp = np.log10(np.abs(fftpack.fft2(Pois)))
print 'Poisson 2d Smoothed:', np.var(Pois)
print np.mean(fourierSp), np.median(fourierSp), np.std(fourierSp), np.max(fourierSp), np.min(fourierSp)
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle('Fourier Analysis of Smoothed Poisson Data')
plt.suptitle('Original Image', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(Pois, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99000, 100000, 101000])
i2 = ax2.imshow(fourierSp[0:ss, 0:ss], interpolation=interpolation, origin='lower',
rasterized=True, vmin=3, vmax=7)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax1.set_ylabel('Y [pixel]')
plt.savefig('FourierPoissonSmooth.pdf')
ax2.set_xlim(0, 10)
ax2.set_ylim(0, 10)
plt.savefig('FourierPoissonSmooth2.pdf')
ax2.set_xlim(ss-10, ss-1)
ax2.set_ylim(ss-10, ss-1)
plt.savefig('FourierPoissonSmooth3.pdf')
plt.close()
#difference
fig = plt.figure()
plt.suptitle('Power Spectrum of Smoothed Poisson Data / Power Spectrum of Poisson Data')
ax = fig.add_subplot(111)
i = ax.imshow(fourierSp[0:ss, 0:ss] / fourierSpectrum1[0:ss, 0:ss],
origin='lower', interpolation=interpolation, vmin=0.9, vmax=1.1)
plt.colorbar(i, ax=ax, orientation='horizontal')
plt.savefig('FourierPSDiv.pdf')
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
plt.savefig('FourierPSDiv2.pdf')
ax.set_xlim(ss-10, ss-1)
ax.set_ylim(ss-10, ss-1)
plt.savefig('FourierPSDiv3.pdf')
plt.close()
#x = np.arange(1024)
#y = 10 * np.sin(x / 30.) + 20
#img = np.vstack([y, ] * 1024)
x, y = np.mgrid[0:32, 0:32]
#img = 10*np.sin(x/40.) * 10*np.sin(y/40.)
img = 100 * np.cos(x*np.pi/4.) * np.cos(y*np.pi/4.)
kernel = np.array([[0.0025, 0.01, 0.0025], [0.01, 0.95, 0.01], [0.0025, 0.01, 0.0025]])
img = ndimage.convolve(img.copy(), kernel)
fourierSpectrum2 = np.abs(fftpack.fft2(img))
#fourierSpectrum2 = np.log10(np.abs(fftpack.fftshift(fftpack.fft2(img))))
print np.mean(fourierSpectrum2), np.median(fourierSpectrum2), np.std(fourierSpectrum2), np.max(fourierSpectrum2), np.min(fourierSpectrum2)
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle('Fourier Analysis of Flat-field Data')
plt.suptitle('Original Image', x=0.32, y=0.26)
    plt.suptitle(r'2D Power Spectrum', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e')
i2 = ax2.imshow(fourierSpectrum2[0:512, 0:512], interpolation=interpolation, origin='lower',
rasterized=True)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax2.set_ylim(0, 16)
ax2.set_xlim(0, 16)
ax1.set_ylabel('Y [pixel]')
plt.savefig('FourierSin.pdf')
plt.close()
x, y = np.mgrid[0:1024, 0:1024]
img = 10*np.sin(x/40.) * 10*np.sin(y/40.)
fourierSpectrum2 = np.log10(np.abs(fftpack.fft2(img)))
print np.mean(fourierSpectrum2), np.median(fourierSpectrum2), np.std(fourierSpectrum2), np.max(fourierSpectrum2), np.min(fourierSpectrum2)
fig = plt.figure(figsize=(14.5, 6.5))
plt.suptitle('Fourier Analysis of Flat-field Data')
plt.suptitle('Original Image', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation)
plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1e')
i2 = ax2.imshow(fourierSpectrum2[0:512, 0:512], interpolation=interpolation, origin='lower',
rasterized=True, vmin=-1, vmax=7)
plt.colorbar(i2, ax=ax2, orientation='horizontal')
ax1.set_xlabel('X [pixel]')
ax2.set_xlabel('$l_{x}$')
ax2.set_ylim(0, 20)
ax2.set_xlim(0, 20)
ax1.set_ylabel('Y [pixel]')
plt.savefig('FourierSin2.pdf')
plt.close()
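The mean-removal step in `examples` exploits a basic FFT identity: the zero-frequency (DC) term equals the sum of the input samples, so subtracting the mean collapses the large spike at $l_x = 0$ seen in the first panel. A minimal standalone sketch of this (numpy-only for self-containment; the script itself uses `scipy.fftpack`, which behaves the same way here):

```python
import numpy as np

data = np.random.poisson(100000, 1024).astype(float)
spectrum = np.abs(np.fft.fft(data))
# The DC term of the transform is exactly the sum of the samples.
assert np.isclose(spectrum[0], data.sum())

# After mean removal the DC term vanishes (up to float rounding),
# leaving only the flat noise floor of the Poisson fluctuations.
centred = data - data.mean()
centred_spectrum = np.abs(np.fft.fft(centred))
assert centred_spectrum[0] < 1e-3
```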
def sinusoidalExample():
interpolation = 'none'
x, y = np.mgrid[0:32, 0:32]
img = 100 * np.cos(x*np.pi/4.) * np.cos(y*np.pi/4.)
power = np.log10(np.abs(fftpack.fft2(img.copy())))
sigma = np.linspace(0.2, 3.0, 20)
fig = plt.figure(figsize=(14.5, 7))
plt.suptitle('Fourier Analysis of Sinusoidal Data')
plt.suptitle('Gaussian Smoothed', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[-100, -50, 0, 50, 100])
    i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=-1, vmax=7)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
        i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=-1, vmax=7)
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
return i1, p1, i2, p2, sigma_text
def animate(i):
im = ndimage.filters.gaussian_filter(img.copy(), sigma=sigma[i])
power = np.log10(np.abs(fftpack.fft2(im)))
i1 = ax1.imshow(im, origin='lower', interpolation=interpolation)
i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=-1, vmax=7)
sigma_text.set_text('sigma=%f' % sigma[i])
        return i1, p1, i2, p2, sigma_text
    #note that frames defines how many times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=20, interval=1, blit=True)
anim.save('FourierSmoothing.mp4', fps=3)
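The animation above illustrates that Gaussian smoothing is a low-pass filter: as sigma grows, power at high spatial frequencies shrinks. A standalone 1D sketch of the same effect (numpy-only; the kernel width and sigma are illustrative choices, not values taken from the script):

```python
import numpy as np

x = np.arange(256)
signal = np.sin(2 * np.pi * x / 8.)            # pure tone at frequency bin 32

def gaussian_smooth(s, sigma):
    # normalised Gaussian kernel, truncated at +/- 8 samples
    k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(s, k, mode='same')

power = np.abs(np.fft.fft(signal))
smoothed_power = np.abs(np.fft.fft(gaussian_smooth(signal, 2.0)))

peak = int(np.argmax(power[1:128])) + 1        # dominant frequency bin
assert peak == 32
# high-frequency power is strongly suppressed by the smoothing
assert smoothed_power[peak] < 0.6 * power[peak]
```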
def poissonExample():
interpolation = 'none'
img = np.random.poisson(100000, size=(32, 32))
power = np.log10(np.abs(fftpack.fft2(img.copy())))
sigma = np.linspace(0.2, 3.0, 20)
fig = plt.figure(figsize=(14.5, 7))
plt.suptitle('Fourier Analysis of Poisson Data')
plt.suptitle('Gaussian Smoothed', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99500, 100000, 100500])
    i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
        i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
        return i1, p1, i2, p2, sigma_text
def animate(i):
im = ndimage.filters.gaussian_filter(img.copy(), sigma=sigma[i])
power = np.log10(np.abs(fftpack.fft2(im)))
i1 = ax1.imshow(im, origin='lower', interpolation=interpolation)
i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text.set_text('sigma=%f' % sigma[i])
        return i1, p1, i2, p2, sigma_text
    #note that frames defines how many times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=20, interval=1, blit=True)
anim.save('FourierSmoothingPoisson.mp4', fps=3)
def poissonExampleLowpass():
interpolation = 'none'
img = np.random.poisson(100000, size=(32, 32))
power = np.log10(np.abs(fftpack.fft2(img.copy())))
sigma = np.linspace(0.1, 100.0, 20)
fig = plt.figure(figsize=(14.5, 7))
plt.suptitle('Fourier Analysis of Poisson Data (lowpass filtering)')
plt.suptitle('Lowpass Filtered', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99500, 100000, 100500])
    i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
        i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
        return i1, p1, i2, p2, sigma_text
def animate(i):
kernel_low = [[1.0/sigma[i],1.0/sigma[i],1.0/sigma[i]],
[1.0/sigma[i],1.0/sigma[i],1.0/sigma[i]],
[1.0/sigma[i],1.0/sigma[i],1.0/sigma[i]]]
im = ndimage.convolve(img.copy(), kernel_low)
power = np.log10(np.abs(fftpack.fft2(im)))
i1 = ax1.imshow(im, origin='lower', interpolation=interpolation)
i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text.set_text('kernel %f' % (1./sigma[i]))
        return i1, p1, i2, p2, sigma_text
    #note that frames defines how many times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=20, interval=1, blit=True)
anim.save('FourierSmoothingPoissonLowpass.mp4', fps=3)
def poissonExamplePixelSharing():
interpolation = 'none'
img = np.random.poisson(100000, size=(32, 32))
power = np.log10(np.abs(fftpack.fft2(img.copy())))
sigma = np.logspace(-4, 1, 100)
fig = plt.figure(figsize=(14.5, 7))
plt.suptitle('Fourier Analysis of Poisson Data (kernel smoothing)')
plt.suptitle('Kernel Convolved', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99500, 100000, 100500])
    i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
        i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
        return i1, p1, i2, p2, sigma_text
def animate(i):
kernel = [[0.0, sigma[i]/4., 0.0],
[sigma[i]/4., 1.0 - sigma[i], sigma[i]/4.],
[0.0, sigma[i]/4., 0.0]]
im = ndimage.convolve(img.copy(), kernel)
power = np.log10(np.abs(fftpack.fft2(im)))
i1 = ax1.imshow(im, origin='lower', interpolation=interpolation)
i2 = ax2.imshow(power[0:16, 0:16], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=7)
sigma_text.set_text('kernel %f' % sigma[i])
        return i1, p1, i2, p2, sigma_text
    #note that frames defines how many times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=1, blit=True)
anim.save('FourierSmoothingPoissonSharing.mp4', fps=3)
def poissonExamplePixelSharing2():
interpolation = 'none'
flux = 100000
size = 2**6
    ss = size / 2
img = np.random.poisson(flux, size=(size, size))
power = np.log10(np.abs(fftpack.fft2(img.copy())))
sigma = np.logspace(-3, -0.1, 25)
fig = plt.figure(figsize=(14.5, 7))
plt.suptitle('Fourier Analysis of Poisson Data (kernel smoothing)')
plt.suptitle('Kernel Convolved', x=0.32, y=0.26)
plt.suptitle(r'$\log_{10}$(2D Power Spectrum)', x=0.72, y=0.26)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
p1 = plt.colorbar(i1, ax=ax1, orientation='horizontal', format='%.1f', ticks=[99500, 100000, 100500])
    i2 = ax2.imshow(power[0:ss, 0:ss], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=6)
p2 = plt.colorbar(i2, ax=ax2, orientation='horizontal')
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
def init():
i1 = ax1.imshow(img, origin='lower', interpolation=interpolation, rasterized=True)
        i2 = ax2.imshow(power[0:ss, 0:ss], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=6)
sigma_text = ax1.text(0.02, 0.95, '', transform=ax1.transAxes)
        return i1, p1, i2, p2, sigma_text
def animate(i):
kernel = [[0.0, sigma[i]/4., 0.0],
[sigma[i]/4., 1.0 - sigma[i], sigma[i]/4.],
[0.0, sigma[i]/4., 0.0]]
im = ndimage.convolve(img.copy(), kernel)
print 'smoothed', sigma[i], np.var(img), np.var(im)
power = np.log10(np.abs(fftpack.fft2(im)))
i1 = ax1.imshow(im, origin='lower', interpolation=interpolation)
i2 = ax2.imshow(power[0:ss, 0:ss], interpolation=interpolation, origin='lower', rasterized=True, vmin=2, vmax=6)
sigma_text.set_text('kernel %e' % sigma[i])
        return i1, p1, i2, p2, sigma_text
    #note that frames defines how many times the animate function is called
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=25, interval=1, blit=True)
anim.save('FourierSmoothingPoissonSharing2.mp4', fps=3)
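The charge-sharing kernels above conserve flux by construction: the four off-diagonal weights of $\sigma/4$ are exactly compensated by the $1-\sigma$ centre, so the coefficients sum to 1. That is why the animations show the mean level staying fixed while the pixel-to-pixel variance drops. A numpy-only sketch of this (the `np.roll` sum stands in for `ndimage.convolve` with periodic boundaries; values are illustrative):

```python
import numpy as np

sigma = 0.1
coeffs = np.array([1.0 - sigma] + [sigma / 4.] * 4)
assert np.isclose(coeffs.sum(), 1.0)      # flux-conserving kernel

img = np.random.poisson(100000, size=(64, 64)).astype(float)
# periodic 3x3 cross-kernel convolution written out with np.roll
shared = (coeffs[0] * img
          + coeffs[1] * np.roll(img, 1, axis=0) + coeffs[2] * np.roll(img, -1, axis=0)
          + coeffs[3] * np.roll(img, 1, axis=1) + coeffs[4] * np.roll(img, -1, axis=1))

assert np.isclose(img.mean(), shared.mean())   # mean level unchanged
assert shared.var() < img.var()                # pixel-to-pixel noise damped
```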
def comparePower(file1='05Sep_14_35_00s_Euclid.fits', file2='05Sep_14_36_31s_Euclid.fits', gain=3.1):
d1 = pf.getdata(file1) * gain
d2 = pf.getdata(file2) * gain
#pre/overscans
overscan1 = d1[11:2056, 4150:4192].mean()
overscan2 = d2[11:2056, 4150:4192].mean()
#define quadrants and subtract the bias levels
Q11 = d1[11:2050, 2110:4131] - overscan1
Q21 = d2[11:2050, 2110:4131] - overscan2
#limit to 1024
Q11 = Q11[300:1324, 300:1324]
Q21 = Q21[300:1324, 300:1324]
#difference image
diff = Q11 - Q21
fourierSpectrumD = np.abs(fftpack.fft2(diff))[0:512, 0:512]
cornervalues = fourierSpectrumD[510:512, 510:512]
print 'data'
print cornervalues
print np.log10(cornervalues)
print fourierSpectrumD[511:512, 511:512]
print np.log10(fourierSpectrumD[511:512, 511:512])
#simulate
res = []
flux = 145000
size = 1024
ss = size / 2
#for x in xrange(20):
# img1 = np.random.poisson(flux, size=(size, size))
# img2 = np.random.poisson(flux, size=(size, size))
# power = np.abs(fftpack.fft2((img1 - img2)))[0:ss, 0:ss]
# res.append(power)
#res = np.average(res, axis=0)
img1 = np.random.poisson(flux, size=(size, size))
img2 = np.random.poisson(flux, size=(size, size))
res = np.abs(fftpack.fft2((img1 - img2)))[0:ss, 0:ss]
print 'simulated'
cornervalues = res[510:512, 510:512]
print cornervalues
print np.log10(cornervalues)
print res[511:512, 511:512]
print np.log10(res[511:512, 511:512])
fig = plt.figure(figsize=(15, 7))
plt.suptitle('Power Spectrum values')
plt.suptitle('Difference Image', x=0.3, y=0.94)
plt.suptitle('Simulated Poisson Data', x=0.72, y=0.94)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
fig.subplots_adjust(wspace=0.25)
ax1.hist(np.ravel(fourierSpectrumD), bins=40, normed=True, range=[0, 1750000], label='power spectrum values')
ax1.axvline(x=fourierSpectrumD[511, 511], c='r', ls='-', lw=2, zorder=14, label='(512, 512)')
ax1.axvline(x=fourierSpectrumD[510, 510], c='g', ls=':', lw=2, zorder=14, label='(511, 511)')
ax1.axvline(x=fourierSpectrumD[511, 510], c='y', ls='--', lw=2, zorder=14, label='(511, 512)')
ax1.axvline(x=fourierSpectrumD[510, 511], c='m', ls='-.', lw=2, zorder=14, label='(512, 511)')
ax2.hist(np.ravel(res), bins=40, normed=True, range=[0, 1750000], label='power spectrum values')
ax2.axvline(x=res[511, 511], c='r', ls='-', lw=2, zorder=14, label='(512, 512)')
ax2.axvline(x=res[510, 510], c='g', ls=':', lw=2, zorder=14, label='(511, 511)')
ax2.axvline(x=res[511, 510], c='y', ls='--', lw=2, zorder=14, label='(511, 512)')
ax2.axvline(x=res[510, 511], c='m', ls='-.', lw=2, zorder=14, label='(512, 511)')
ax1.locator_params(nbins=6)
ax2.locator_params(nbins=6)
ax1.legend(shadow=True, fancybox=True)
ax2.legend(shadow=True, fancybox=True)
plt.savefig('PowerSpectrumDistributions.pdf')
plt.close()
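`comparePower` contrasts individual corner values of the power spectrum with the distribution of all values. For white noise the non-DC Fourier amplitudes scatter around $\sqrt{N\,\mathrm{Var}}$, which is the scale the histograms trace. A seeded standalone check of that scale (numpy-only; the flux matches the 145000 used above, while the grid size is reduced for speed):

```python
import numpy as np

rng = np.random.default_rng(42)
flux, n = 145000, 256
diff = (rng.poisson(flux, n * n) - rng.poisson(flux, n * n)).reshape(n, n).astype(float)

amps = np.abs(np.fft.fft2(diff))
# Parseval: the mean squared amplitude over all bins equals N * mean(x**2),
# so away from the DC row/column the RMS amplitude is ~ sqrt(N * Var).
expected = np.sqrt(diff.size * diff.var())
typical = np.sqrt(np.mean(amps[1:, 1:] ** 2))
assert abs(typical - expected) / expected < 0.05
```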
if __name__ == '__main__':
size = 200.
#makeFlat(files)
#plotDetectorCounts()
#findPairs()
#
pairs = [('05Sep_14_57_00s_Euclid.fits', '05Sep_14_58_27s_Euclid.fits'),
('05Sep_14_41_10s_Euclid.fits', '05Sep_14_43_30s_Euclid.fits'),
('05Sep_14_25_05s_Euclid.fits', '05Sep_14_26_30s_Euclid.fits'),
('05Sep_15_00_09s_Euclid.fits', '05Sep_15_02_21s_Euclid.fits'),
('05Sep_14_45_07s_Euclid.fits', '05Sep_14_46_28s_Euclid.fits'),
('05Sep_14_27_58s_Euclid.fits', '05Sep_14_30_22s_Euclid.fits'),
('05Sep_14_09_15s_Euclid.fits', '05Sep_14_10_38s_Euclid.fits'),
('05Sep_15_03_51s_Euclid.fits', '05Sep_15_05_18s_Euclid.fits'),
('05Sep_14_47_56s_Euclid.fits', '05Sep_14_49_25s_Euclid.fits'),
('05Sep_14_31_56s_Euclid.fits', '05Sep_14_33_23s_Euclid.fits'),
('05Sep_14_13_32s_Euclid.fits', '05Sep_14_14_57s_Euclid.fits'),
('05Sep_15_06_50s_Euclid.fits', '05Sep_15_08_18s_Euclid.fits'),
('05Sep_14_50_59s_Euclid.fits', '05Sep_14_52_31s_Euclid.fits'),
('05Sep_14_35_00s_Euclid.fits', '05Sep_14_36_31s_Euclid.fits')]
#examples()
#sinusoidalExample()
#poissonExample()
#poissonExampleLowpass()
#poissonExamplePixelSharing()
#poissonExamplePixelSharing2()
#analyseCorrelationFourier()
#analyseCorrelationFourier(small=True)
#analyseCorrelationFourier(shift=True)
#analyseCorrelationFourier(small=True, shift=True)
#comparePower()
#spatialAutocorrelation()
spatialAutocorrelationMovie(pairs)
#simulation
# simulatePoissonProcess(size=size)
# simulatePoissonProcessRowColumn()
# simulatePoissonProcessRowColumn(short=False)
# #pixel region
# output = pairwiseNoise(pairs, size=size)
# fileIO.cPickleDumpDictionary(output, 'data.pk')
#
# output = cPickle.load(open('data.pk'))
# plotAutocorrelation(output)
# plotResults(output, size)
#
# #row-column
# out = pairwiseNoiseRowColumns(pairs)
# fileIO.cPickleDumpDictionary(out, 'dataRowColumn.pk')
#
# out = cPickle.load(open('dataRowColumn.pk'))
# plotResultsRowColumn(out)
#
# out = {}
# for file in g.glob('05*Euclid.fits'):
# data = pf.getdata(file)
# results = measureNoise(data, size, file)
# try:
# print file, np.median(results['flux']), np.median(results['variance'])
# except:
# print 'No useful data in ', file
# continue
# out[file] = results
# fileIO.cPickleDumpDictionary(out, 'dataOLD.pk')
# out = cPickle.load(open('dataOLD.pk'))
# plotResults(out, size, pairwise=False, output='FullwellEstimateOLD.pdf')
| 37.685615 | 142 | 0.606968 | 10,798 | 76,238 | 4.244768 | 0.077144 | 0.017519 | 0.016167 | 0.034711 | 0.752504 | 0.718316 | 0.69532 | 0.678477 | 0.660151 | 0.645577 | 0 | 0.095825 | 0.228941 | 76,238 | 2,022 | 143 | 37.704253 | 0.683894 | 0.097615 | 0 | 0.628444 | 0 | 0.005957 | 0.12643 | 0.028763 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002978 | 0.010424 | null | null | 0.028295 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9ed032d61b371886f55c64a19ab95ddf65ab4e9d | 77 | py | Python | apps/carts/urls.py | makethedayunique/pandama-online-store | 38c02809a89087f5a6c83fd6ee2c39dab8d66f6c | [
"MIT"
] | null | null | null | apps/carts/urls.py | makethedayunique/pandama-online-store | 38c02809a89087f5a6c83fd6ee2c39dab8d66f6c | [
"MIT"
] | null | null | null | apps/carts/urls.py | makethedayunique/pandama-online-store | 38c02809a89087f5a6c83fd6ee2c39dab8d66f6c | [
"MIT"
] | null | null | null | from django.urls import path
from apps.carts import views
urlpatterns = [
]
| 12.833333 | 28 | 0.766234 | 11 | 77 | 5.363636 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168831 | 77 | 5 | 29 | 15.4 | 0.921875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
9eeccc4455ebe53f713a177c395b28ea20afc5ae | 182 | py | Python | polybar-scripts/inbox-imap-python/inbox-imap-python.py | alexshoo/polybar-scripts | ecfbaa45400d6184f73dd447313925f92c74828c | [
"Unlicense"
] | 1 | 2019-02-15T18:42:24.000Z | 2019-02-15T18:42:24.000Z | polybar-scripts/inbox-imap-python/inbox-imap-python.py | alexshoo/polybar-scripts | ecfbaa45400d6184f73dd447313925f92c74828c | [
"Unlicense"
] | null | null | null | polybar-scripts/inbox-imap-python/inbox-imap-python.py | alexshoo/polybar-scripts | ecfbaa45400d6184f73dd447313925f92c74828c | [
"Unlicense"
] | 1 | 2019-03-29T13:17:22.000Z | 2019-03-29T13:17:22.000Z | #!/usr/bin/python
import imaplib
obj = imaplib.IMAP4_SSL('imap.mail.net', 993)
obj.login('userlogin', 'pass123')
obj.select()
print(len(obj.search(None, 'unseen')[1][0].split()))
| 18.2 | 52 | 0.686813 | 28 | 182 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053892 | 0.082418 | 182 | 9 | 53 | 20.222222 | 0.688623 | 0.087912 | 0 | 0 | 0 | 0 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.2 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
9ef50558c0381559a1e50911c64d21819abda570 | 178 | py | Python | project/__init__.py | sveetch/Sveetoy | 02a7fca00dd2602e97ebee845f7a76eadcbcc2d0 | [
"MIT"
] | 1 | 2017-10-24T09:45:59.000Z | 2017-10-24T09:45:59.000Z | project/__init__.py | sveetch/Sveetoy | 02a7fca00dd2602e97ebee845f7a76eadcbcc2d0 | [
"MIT"
] | 55 | 2017-01-22T16:02:53.000Z | 2020-08-04T15:18:44.000Z | project/__init__.py | sveetch/Sveetoy | 02a7fca00dd2602e97ebee845f7a76eadcbcc2d0 | [
"MIT"
] | 1 | 2018-06-13T15:47:29.000Z | 2018-06-13T15:47:29.000Z | # -*- coding: utf-8 -*-
"""
Sveetoy Demo project to build with Optimus
``__version__`` define the Sass library version, not the demonstration project.
"""
__version__ = "0.9.1"
| 22.25 | 79 | 0.696629 | 24 | 178 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0.157303 | 178 | 7 | 80 | 25.428571 | 0.746667 | 0.820225 | 0 | 0 | 0 | 0 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
73539562d73737b9e59771170b650c6b923a3a03 | 346 | py | Python | trinity/rpc/modules/main.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | 1 | 2018-12-09T11:56:53.000Z | 2018-12-09T11:56:53.000Z | trinity/rpc/modules/main.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | null | null | null | trinity/rpc/modules/main.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | 2 | 2018-12-09T15:58:11.000Z | 2020-09-29T07:10:21.000Z | from lahja import (
Endpoint
)
from trinity.chains.base import BaseAsyncChain
class RPCModule:
_chain = None
def __init__(self, chain: BaseAsyncChain, event_bus: Endpoint) -> None:
self._chain = chain
self._event_bus = event_bus
def set_chain(self, chain: BaseAsyncChain) -> None:
self._chain = chain
| 20.352941 | 75 | 0.679191 | 41 | 346 | 5.439024 | 0.439024 | 0.161435 | 0.206278 | 0.161435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.239884 | 346 | 16 | 76 | 21.625 | 0.847909 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
735b36f9763e24197fb1834d6ff9aa6fc5164e76 | 268 | py | Python | project/forms.py | abrusebas1997/Activintine | 1d1a0ce06284bd08cee8c46843583a37ac98dd1c | [
"MIT"
] | null | null | null | project/forms.py | abrusebas1997/Activintine | 1d1a0ce06284bd08cee8c46843583a37ac98dd1c | [
"MIT"
] | null | null | null | project/forms.py | abrusebas1997/Activintine | 1d1a0ce06284bd08cee8c46843583a37ac98dd1c | [
"MIT"
] | null | null | null | from django import forms
from project.models import Activity
class ActivityForm(forms.ModelForm):
class Meta:
        """Render and process a form based on the Activity model."""
model = Activity
fields = ("title", "content", "author", "image")
| 26.8 | 68 | 0.664179 | 32 | 268 | 5.5625 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227612 | 268 | 9 | 69 | 29.777778 | 0.859903 | 0.231343 | 0 | 0 | 0 | 0 | 0.112745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
735fc86cb60a5e51b78737cd180e9731652ebdcf | 219 | py | Python | normflow/__init__.py | pkulwj1994/normalizing-flows | 326321c4aea4a3f6ab703f82e21277a79cd7d9e4 | [
"MIT"
] | 96 | 2020-10-17T12:02:41.000Z | 2022-03-31T23:53:35.000Z | normflow/__init__.py | pkulwj1994/normalizing-flows | 326321c4aea4a3f6ab703f82e21277a79cd7d9e4 | [
"MIT"
] | 4 | 2020-05-05T16:39:58.000Z | 2021-12-17T09:32:26.000Z | normflow/__init__.py | pkulwj1994/normalizing-flows | 326321c4aea4a3f6ab703f82e21277a79cd7d9e4 | [
"MIT"
] | 16 | 2020-05-05T15:41:33.000Z | 2022-03-31T09:40:28.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
from .core import *
from . import flows
from . import distributions
from . import transforms
from . import nets
from . import utils
from . import HAIS
__version__ = '1.0' | 18.25 | 27 | 0.69863 | 31 | 219 | 4.806452 | 0.612903 | 0.402685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01676 | 0.182648 | 219 | 12 | 28 | 18.25 | 0.815642 | 0.196347 | 0 | 0 | 0 | 0 | 0.017143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.875 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
7dfdb5f977665942e820c1c3d6fe148ffeef9ba7 | 126 | py | Python | src/test/python/a.py | xiaoma20082008/pvm | a1c3c312c362ca30c7202645b047664e36a690e7 | [
"Apache-2.0"
] | null | null | null | src/test/python/a.py | xiaoma20082008/pvm | a1c3c312c362ca30c7202645b047664e36a690e7 | [
"Apache-2.0"
] | null | null | null | src/test/python/a.py | xiaoma20082008/pvm | a1c3c312c362ca30c7202645b047664e36a690e7 | [
"Apache-2.0"
] | null | null | null | a = {}
b = 123
c = ''
d = {}
print(a)
e = 123.456
f = [1, "2", "3.14", False, {}, [4, "5", {}, True]]
g = (1, "2", True, None) | 15.75 | 51 | 0.380952 | 24 | 126 | 2 | 0.791667 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191489 | 0.253968 | 126 | 8 | 52 | 15.75 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0.055118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b4056a4bd916e9711695468cb950533a190b4703 | 113 | py | Python | utils/parsing.py | jaobernardi/roboscovid-redacted | 831abbcf42781560c89c5a6782ab7de238b43aca | [
"MIT"
] | null | null | null | utils/parsing.py | jaobernardi/roboscovid-redacted | 831abbcf42781560c89c5a6782ab7de238b43aca | [
"MIT"
] | null | null | null | utils/parsing.py | jaobernardi/roboscovid-redacted | 831abbcf42781560c89c5a6782ab7de238b43aca | [
"MIT"
] | null | null | null | def parse_string(input, vars={}):
for var in vars:
input = input.replace(f"${var}$", vars[var])
return input
| 22.6 | 46 | 0.663717 | 18 | 113 | 4.111111 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150442 | 113 | 4 | 47 | 28.25 | 0.770833 | 0 | 0 | 0 | 0 | 0 | 0.061947 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b40723909d0ed1749efd89e26284d9be69af7be0 | 216 | py | Python | users/factories.py | Arpit8081/Phishtray_Edited_Version | 9f3342e6fd2620b7f01ad91ce5b36fa8ea111bc8 | [
"MIT"
] | 2 | 2020-03-31T12:38:10.000Z | 2022-01-21T22:21:06.000Z | users/factories.py | Arpit8081/Phishtray_Edited_Version | 9f3342e6fd2620b7f01ad91ce5b36fa8ea111bc8 | [
"MIT"
] | 252 | 2018-05-24T14:55:24.000Z | 2022-02-26T13:02:10.000Z | users/factories.py | Arpit8081/Phishtray_Edited_Version | 9f3342e6fd2620b7f01ad91ce5b36fa8ea111bc8 | [
"MIT"
] | 11 | 2018-06-23T14:54:42.000Z | 2021-02-19T11:33:44.000Z | import factory
from users.models import User
class UserFactory(factory.django.DjangoModelFactory):
class Meta:
model = User
username = factory.Sequence(lambda n: "username{0:0=2d}".format(n + 1))
| 19.636364 | 75 | 0.708333 | 28 | 216 | 5.464286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.185185 | 216 | 10 | 76 | 21.6 | 0.846591 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b40c1dd2ce522c2f2497e93d9095e9beefcec550 | 175 | py | Python | PyZoo/validators/utils/listSecure.py | franzinBr/PyZoo | a2deab63f46fbdfa92b0602d8efbfc9f19b9fef9 | [
"MIT"
] | 3 | 2021-09-29T22:23:55.000Z | 2022-02-16T13:52:56.000Z | PyZoo/validators/utils/listSecure.py | franzinBr/PyZoo | a2deab63f46fbdfa92b0602d8efbfc9f19b9fef9 | [
"MIT"
] | null | null | null | PyZoo/validators/utils/listSecure.py | franzinBr/PyZoo | a2deab63f46fbdfa92b0602d8efbfc9f19b9fef9 | [
"MIT"
] | null | null | null |
class ListSecure(list):
def get(self, index, default=None):
try:
return self.__getitem__(index)
except IndexError:
return default
| 21.875 | 42 | 0.588571 | 18 | 175 | 5.5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.331429 | 175 | 7 | 43 | 25 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
b410c5bd9e6cedd6d9fa28dcfb6736cb54c3ac65 | 231 | py | Python | 387-First_Unique_Character_in_a_String.py | QuenLo/leecode | ce861103949510dc54fd5cb336bd992c40748de2 | [
"MIT"
] | 6 | 2018-06-13T06:48:42.000Z | 2020-11-25T10:48:13.000Z | 387-First_Unique_Character_in_a_String.py | QuenLo/leecode | ce861103949510dc54fd5cb336bd992c40748de2 | [
"MIT"
] | null | null | null | 387-First_Unique_Character_in_a_String.py | QuenLo/leecode | ce861103949510dc54fd5cb336bd992c40748de2 | [
"MIT"
] | null | null | null | import collections


class Solution:
def firstUniqChar(self, s: str) -> int:
count = collections.Counter(s)
for indx, ch in enumerate(s):
if count[ch] < 2:
return indx
return -1
| 23.1 | 43 | 0.484848 | 26 | 231 | 4.307692 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015038 | 0.424242 | 231 | 9 | 44 | 25.666667 | 0.827068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
b4283ac24168066c9d1d9a6553d694458b356177 | 137 | py | Python | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/tempfile/tempfile_tempdir.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/tempfile/tempfile_tempdir.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/tempfile/tempfile_tempdir.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | #
"""
"""
# end_pymotw_header
import tempfile
tempfile.tempdir = "/I/changed/this/path"
print("gettempdir():", tempfile.gettempdir())
| 12.454545 | 45 | 0.693431 | 15 | 137 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109489 | 137 | 10 | 46 | 13.7 | 0.762295 | 0.124088 | 0 | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b43d2b9da5ca68495129b75b7d80ef687ebe14ce | 244 | py | Python | blendmotion/core/__init__.py | DeepL2/BlendMotion | 4db804cc47f38b51d255d84c0e4d9f951900bf2b | [
"MIT"
] | null | null | null | blendmotion/core/__init__.py | DeepL2/BlendMotion | 4db804cc47f38b51d255d84c0e4d9f951900bf2b | [
"MIT"
] | 2 | 2019-01-06T09:15:09.000Z | 2019-01-06T09:16:52.000Z | blendmotion/core/__init__.py | DeepL2/BlendMotion | 4db804cc47f38b51d255d84c0e4d9f951900bf2b | [
"MIT"
] | 1 | 2019-01-06T09:12:51.000Z | 2019-01-06T09:12:51.000Z | from blendmotion.core import animation, effector, rigging
def register():
animation.register()
effector.register()
rigging.register()
def unregister():
rigging.unregister()
effector.unregister()
animation.unregister()
| 20.333333 | 57 | 0.717213 | 23 | 244 | 7.608696 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172131 | 244 | 11 | 58 | 22.181818 | 0.866337 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0 | 0.111111 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b43dd1a344a9cba9e511f20c32261cc725c7d568 | 98 | py | Python | GRADE 9/Python/BraydenViana-Python-Video8.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | GRADE 9/Python/BraydenViana-Python-Video8.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | GRADE 9/Python/BraydenViana-Python-Video8.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | foods = ['bacon', 'tuna', 'ham', 'sausages', 'beef']
for f in foods:
print(f)
print(len(f)) | 19.6 | 53 | 0.561224 | 15 | 98 | 3.666667 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193878 | 98 | 5 | 54 | 19.6 | 0.696203 | 0 | 0 | 0 | 0 | 0 | 0.252632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
b4496955204178ea5c22a852367cc1c883835e5a | 182 | py | Python | server/fundraisers/urls.py | Techbikers/techbikers-api | f9c6ea467d1ae730e9cabe0d4785423634c044e5 | [
"MIT"
] | 2 | 2016-08-14T04:21:04.000Z | 2017-05-23T22:04:48.000Z | server/fundraisers/urls.py | Techbikers/techbikers-api | f9c6ea467d1ae730e9cabe0d4785423634c044e5 | [
"MIT"
] | 19 | 2015-08-26T10:05:02.000Z | 2018-06-27T20:08:54.000Z | server/fundraisers/urls.py | Techbikers/api | f9c6ea467d1ae730e9cabe0d4785423634c044e5 | [
"MIT"
] | 6 | 2015-08-19T16:49:13.000Z | 2018-05-25T16:38:24.000Z | from django.conf.urls import url, include
from server.fundraisers.views import FundraisersList
urlpatterns = [
url(r'^$', FundraisersList.as_view(), name='fundraiser-list')
]
| 20.222222 | 65 | 0.747253 | 22 | 182 | 6.136364 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126374 | 182 | 8 | 66 | 22.75 | 0.849057 | 0 | 0 | 0 | 0 | 0 | 0.093407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b45509ec6d18f1e633346c9e3d8f69d71b9e1ea6 | 1,458 | py | Python | tsvis/parser/utils/vis_logging.py | iGame-Lab/TS-VIS | b0cd8d13ac1ebc5d857597b2a373b8e51e606358 | [
"Apache-2.0"
] | 15 | 2021-08-30T09:45:27.000Z | 2022-03-28T04:49:54.000Z | tsvis/parser/utils/vis_logging.py | iGame-Lab/TS-VIS | b0cd8d13ac1ebc5d857597b2a373b8e51e606358 | [
"Apache-2.0"
] | 1 | 2021-08-30T09:55:49.000Z | 2021-08-30T09:55:49.000Z | tsvis/parser/utils/vis_logging.py | iGame-Lab/TS-VIS | b0cd8d13ac1ebc5d857597b2a373b8e51e606358 | [
"Apache-2.0"
] | 1 | 2022-03-28T04:50:16.000Z | 2022-03-28T04:50:16.000Z | # -*- coding: utf-8 -*-
"""
Copyright 2021 Tianshu AI Platform. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
=============================================================
"""
from tsvis.server.command_line import get_cmd_line
from pathlib import Path
class VisLogging:
_instance = None
    # Ensure only a single instance exists (singleton)
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = object.__new__(cls)
return cls._instance
def __init__(self, cmd_line):
if cmd_line.action != "migrate":
self._logging_path = Path(cmd_line.args.logdir).absolute()
self._cache_path = self._logging_path.parent / "__viscache__"
@property
def logdir(self):
return self._logging_path
@property
def cachedir(self):
return self._cache_path
_logging = VisLogging(get_cmd_line())
def get_logger():
return _logging
| 30.375 | 74 | 0.652949 | 186 | 1,458 | 4.897849 | 0.569892 | 0.065862 | 0.049396 | 0.035126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008043 | 0.23251 | 1,458 | 47 | 75 | 31.021277 | 0.806077 | 0.462963 | 0 | 0.095238 | 0 | 0 | 0.026536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.095238 | 0.142857 | 0.619048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
b45795316d6c8041db93ffee842df708f74f2199 | 669 | py | Python | src/dotctl/installers/macappstore.py | wwmoraes/dotctl | 104e7dc3db8ef0389f03108a589a97c6f0923692 | [
"MIT"
] | null | null | null | src/dotctl/installers/macappstore.py | wwmoraes/dotctl | 104e7dc3db8ef0389f03108a589a97c6f0923692 | [
"MIT"
] | null | null | null | src/dotctl/installers/macappstore.py | wwmoraes/dotctl | 104e7dc3db8ef0389f03108a589a97c6f0923692 | [
"MIT"
] | null | null | null | from typing import List
from dotctl.installers.installer import Installer
from functools import cached_property
class MacAppStore(Installer):
@property
def base_cmd(self):
return ["mas"]
@property
def install_cmd(self):
return ["install"]
@property
def uninstall_cmd(self):
return ["uninstall"]
@cached_property
def list(self) -> List[str]:
list_process = self.__cmd__([*self.base_cmd, "list"], capture=True)
entries = list_process.stdout.splitlines()
return sorted([entry.split(" ")[0] for entry in entries])
def is_installed(self, package: str, binary: str = None) -> bool:
return (binary or package) in self.list
| 23.892857 | 71 | 0.705531 | 87 | 669 | 5.275862 | 0.45977 | 0.095861 | 0.084967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001815 | 0.176383 | 669 | 27 | 72 | 24.777778 | 0.831216 | 0 | 0 | 0.15 | 0 | 0 | 0.035874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.15 | 0.2 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
81f208bf97700d94e58e78bea66e5f17caccdf6f | 2,951 | py | Python | read_data19.py | zyxwvu321/Classifer_SSL_Longtail | e6c09414c49e695b0f4221a3c6245ae3929a1788 | [
"MIT"
] | null | null | null | read_data19.py | zyxwvu321/Classifer_SSL_Longtail | e6c09414c49e695b0f4221a3c6245ae3929a1788 | [
"MIT"
] | null | null | null | read_data19.py | zyxwvu321/Classifer_SSL_Longtail | e6c09414c49e695b0f4221a3c6245ae3929a1788 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
import pandas as pd
import numpy as np
import shutil
import os
from pathlib import Path
from tqdm import tqdm
#import cv2
#%%
src_im_fd = 'D:/dataset/ISIC/ISIC_2019_Training_Input/'
tar_im_fd = '../data/train19/'
df = pd.read_csv('D:/dataset/ISIC/ISIC_2019_Training_GroundTruth.csv')
df_v = df.values
img_name = df_v[:,0]
label_np = df_v[:,1:]
#labels = np.zeros_like(img_name)
#
#for idx,v in enumerate(label_np):
# labels[idx] = np.where(label_np[1]==1)[0][0]
#
labels = [ np.where(v==1)[0][0] for v in label_np]
dict_label = dict()
for i in range(8):
dict_label[i] = df.columns[1:][i]
for val,key in dict_label.items():
os.makedirs(tar_im_fd +key, exist_ok =True)
for idx,fn in enumerate(tqdm(img_name)):
src_fn = Path(src_im_fd)/(fn + '.jpg')
tar_fn = Path(tar_im_fd)/ dict_label[labels[idx]]/(fn + '.jpg')
if os.path.exists(str(src_fn)):
shutil.copyfile(src_fn,tar_fn)
else:
print(f'filename {str(src_fn)} not exist')
# #%% write test
# df = pd.read_csv('./data/ISIC/ISIC2018_Task3_Testing_Score_imb.csv')
# tar_im_fd = './data/ISIC/test18/'
# src_im_fd = '/home/minjie/dataset/ISIC/ISIC2018_Task3_Test_Input/'
# for val,key in dict_label.items():
# os.makedirs(tar_im_fd +key, exist_ok =True)
# df_v = df.values
# img_name = df_v[:,0]
# label_np = df_v[:,1:]
# labels = [ np.where(v==v.max())[0][0] for v in label_np]
# for idx,fn in enumerate(tqdm(img_name)):
# src_fn = Path(src_im_fd)/(fn + '.jpg')
# tar_fn = Path(tar_im_fd)/ dict_label[labels[idx]]/(fn + '.jpg')
# if os.path.exists(str(src_fn)):
# shutil.copyfile(src_fn,tar_fn)
# else:
# print(f'filename {str(src_fn)} not exist')
# #%% read ISIC19 data
# src_im_fd = '/home/minjie/dataset/ISIC/ISIC_2019_Training_Input/'
# tar_im_fd = './data/ISIC/train19/'
# df = pd.read_csv('./data/ISIC/ISIC_2019_Training_GroundTruth.csv')
# df_v = df.values
# img_name = df_v[:,0]
# label_np = df_v[:,1:]
# #labels = np.zeros_like(img_name)
# #
# #for idx,v in enumerate(label_np):
# # labels[idx] = np.where(label_np[1]==1)[0][0]
# #
# labels = [ np.where(v==1)[0][0] for v in label_np]
# dict_label = dict()
# n_label = len(df.columns)-2
# for i in range(n_label):
# dict_label[i] = df.columns[1:][i]
# dict_label[3]= 'AKIEC'
# for val,key in dict_label.items():
# os.makedirs(tar_im_fd +key, exist_ok =True)
# for idx,fn in enumerate(tqdm(img_name)):
# src_fn = Path(src_im_fd)/(fn + '.jpg')
# tar_fn = Path(tar_im_fd)/ dict_label[labels[idx]]/(fn + '.jpg')
# if os.path.exists(str(src_fn)):
# #img = cv2.imread(str(src_fn))
# #img_resize = cv2.resize(img,(600,450))
# #cv2.imwrite(str(tar_fn),img_resize)
# shutil.copyfile(src_fn,tar_fn)
# else:
# print(f'filename {str(src_fn)} not exist')
| 26.827273 | 70 | 0.633006 | 506 | 2,951 | 3.456522 | 0.195652 | 0.034305 | 0.036021 | 0.04574 | 0.781018 | 0.75586 | 0.731275 | 0.672956 | 0.672956 | 0.672956 | 0 | 0.029938 | 0.185022 | 2,951 | 110 | 71 | 26.827273 | 0.697297 | 0.653338 | 0 | 0 | 0 | 0 | 0.154412 | 0.095588 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.24 | 0 | 0.24 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c32f8c2cafd56b294844266bbb327e02e8f9736a | 180 | py | Python | todoapp/todo/urls.py | joaofranca13/to-do-django | 08fc0c87ce9c3ab4c71acfe1b4a1fcd06b54427b | [
"MIT"
] | null | null | null | todoapp/todo/urls.py | joaofranca13/to-do-django | 08fc0c87ce9c3ab4c71acfe1b4a1fcd06b54427b | [
"MIT"
] | null | null | null | todoapp/todo/urls.py | joaofranca13/to-do-django | 08fc0c87ce9c3ab4c71acfe1b4a1fcd06b54427b | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.home, name='home'),
path('updatetask/<int:pk>/', views.updatetask, name='updatetask'),
]
| 18 | 70 | 0.661111 | 22 | 180 | 5.409091 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161111 | 180 | 9 | 71 | 20 | 0.788079 | 0 | 0 | 0 | 0 | 0 | 0.188889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
c33811d3dab0b108196f893f658efbdca66a3329 | 718 | py | Python | src/notifier/properties.py | fmudrunek/github-slack-pr-notifier | fe0240e2ddc0d41ab3a7db9d9680ff2a13ef551e | [
"MIT"
] | null | null | null | src/notifier/properties.py | fmudrunek/github-slack-pr-notifier | fe0240e2ddc0d41ab3a7db9d9680ff2a13ef551e | [
"MIT"
] | null | null | null | src/notifier/properties.py | fmudrunek/github-slack-pr-notifier | fe0240e2ddc0d41ab3a7db9d9680ff2a13ef551e | [
"MIT"
] | null | null | null | import os
import json
from typing import Dict, List
def __get_env(variable):
if variable not in os.environ:
raise ValueError(f"Environment variable '{variable}' not found")
return os.environ[variable]
def get_github_token() -> str:
return __get_env("GITHUB_TOKEN")
def get_slack_bearer_token() -> str:
return __get_env("SLACK_BEARER_TOKEN")
def get_github_api_url() -> str:
return __get_env("GITHUB_REST_API_URL")
def read_config(config_path) -> Dict[str, List[str]]:
with open(config_path) as json_data_file:
config = json.load(json_data_file)
return {entry["slack_channel"]: entry["repositories"] for entry in config['notifications']}
| 25.642857 | 96 | 0.696379 | 100 | 718 | 4.66 | 0.43 | 0.051502 | 0.077253 | 0.096567 | 0.143777 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197772 | 718 | 27 | 97 | 26.592593 | 0.809028 | 0 | 0 | 0 | 0 | 0 | 0.188133 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.176471 | 0.176471 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c3489aaefa685e0eaf805688fc6c32d79700bd81 | 756 | py | Python | cloudrail/knowledge/context/aws/resource_based_policy.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/resource_based_policy.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/resource_based_policy.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | from abc import abstractmethod
from typing import Optional, List
from cloudrail.knowledge.context.aws.iam.policy import Policy
from cloudrail.knowledge.context.aws.aws_resource import AwsResource
from cloudrail.knowledge.context.aws.service_name import AwsServiceName, AwsServiceAttributes
class ResourceBasedPolicy(AwsResource):
def __init__(self, account: str, region: str, tf_resource_type: AwsServiceName, aws_service_attributes: AwsServiceAttributes = None):
super().__init__(account, region, tf_resource_type, aws_service_attributes)
self.resource_based_policy: Optional[Policy] = None
@abstractmethod
def get_keys(self) -> List[str]:
pass
@property
def is_tagable(self) -> bool:
return False
| 36 | 137 | 0.77381 | 89 | 756 | 6.325843 | 0.47191 | 0.069272 | 0.117229 | 0.154529 | 0.170515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150794 | 756 | 20 | 138 | 37.8 | 0.876947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.066667 | 0.333333 | 0.066667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
5edad65a3a706deeafe6386cf26e4067580d474e | 825 | py | Python | pptx/opc/shared.py | handwriter/python-pptx | 22351c6f9fe637cadddca3461c4899af7d439711 | [
"MIT"
] | 1 | 2020-03-20T01:47:10.000Z | 2020-03-20T01:47:10.000Z | pptx/opc/shared.py | handwriter/python-pptx | 22351c6f9fe637cadddca3461c4899af7d439711 | [
"MIT"
] | null | null | null | pptx/opc/shared.py | handwriter/python-pptx | 22351c6f9fe637cadddca3461c4899af7d439711 | [
"MIT"
] | null | null | null | # encoding: utf-8
"""
Objects shared by modules in the pptx.opc sub-package
"""
from __future__ import absolute_import, print_function, unicode_literals
class CaseInsensitiveDict(dict):
"""
Mapping type that behaves like dict except that it matches without respect
to the case of the key. E.g. cid['A'] == cid['a']. Note this is not
general-purpose, just complete enough to satisfy opc package needs. It
assumes str keys for example.
"""
def __contains__(self, key):
return super(CaseInsensitiveDict, self).__contains__(key.lower())
def __getitem__(self, key):
return super(CaseInsensitiveDict, self).__getitem__(key.lower())
def __setitem__(self, key, value):
return super(CaseInsensitiveDict, self).__setitem__(key.lower(), value)
| 31.730769 | 80 | 0.688485 | 104 | 825 | 5.163462 | 0.644231 | 0.039106 | 0.167598 | 0.189944 | 0.1527 | 0.1527 | 0 | 0 | 0 | 0 | 0 | 0.001541 | 0.213333 | 825 | 25 | 81 | 33 | 0.825886 | 0.380606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0.375 | 1 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
5edc841872c5752fdbe75bbc4fd42360406f04b5 | 450 | py | Python | lebanese_channels/services/nbn.py | blazeinmedia/Lebanese-Channels | f314868ac3da69ce5a27f6f953145096be1c31eb | [
"MIT"
] | 1 | 2020-04-09T19:39:35.000Z | 2020-04-09T19:39:35.000Z | lebanese_channels/services/nbn.py | blazeinmedia/Lebanese-Channels | f314868ac3da69ce5a27f6f953145096be1c31eb | [
"MIT"
] | null | null | null | lebanese_channels/services/nbn.py | blazeinmedia/Lebanese-Channels | f314868ac3da69ce5a27f6f953145096be1c31eb | [
"MIT"
] | null | null | null | from lebanese_channels.channel import Channel
from lebanese_channels.services.utils import stream
class NBN(Channel):
def get_name(self) -> str:
return 'NBN'
def get_logo(self) -> str:
return 'https://nbntv.me/wp-content/uploads/2018/08/cropped-nbn-logo-512-192x192.jpg'
def get_stream_url(self) -> str:
return stream.fetch_from('http://player.l1vetv.com/nbn')
def get_epg_data(self):
return None
| 26.470588 | 93 | 0.691111 | 65 | 450 | 4.646154 | 0.584615 | 0.07947 | 0.129139 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043836 | 0.188889 | 450 | 16 | 94 | 28.125 | 0.783562 | 0 | 0 | 0 | 0 | 0.090909 | 0.237778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.181818 | 0.363636 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
5ee55b9c983b2f94bd320f586932371216e6a6f4 | 591 | py | Python | inclusive/inclusive.py | numpde/inclusive | 63f637473272f7af66ffe8d1d3fcd4ddf1c22a72 | [
"MIT"
] | null | null | null | inclusive/inclusive.py | numpde/inclusive | 63f637473272f7af66ffe8d1d3fcd4ddf1c22a72 | [
"MIT"
] | null | null | null | inclusive/inclusive.py | numpde/inclusive | 63f637473272f7af66ffe8d1d3fcd4ddf1c22a72 | [
"MIT"
] | null | null | null | import builtins
from collections.abc import Iterable
class Template:
def __init__(self, builtin_function):
self.builtin_function = builtin_function
def __call__(self, *args):
return self.builtin_function(*args)
def __getitem__(self, args):
if isinstance(args, Iterable):
old = self.builtin_function(*args)
new = self.builtin_function(old.start, old.stop + 1, old.step)
else:
old = self.builtin_function(args)
new = self.builtin_function((old.start or 0) + 1, old.stop + 1, old.step)
return new
range = Template(builtins.range)
slice = Template(builtins.slice)
| 24.625 | 76 | 0.737733 | 82 | 591 | 5.073171 | 0.365854 | 0.288462 | 0.319712 | 0.165865 | 0.341346 | 0.269231 | 0.269231 | 0.269231 | 0.269231 | 0.269231 | 0 | 0.007968 | 0.150592 | 591 | 23 | 77 | 25.695652 | 0.820717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.117647 | 0.058824 | 0.470588 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5ef746ab4cc0103069cd4eb908a678e8c53ceca1 | 71 | py | Python | pdbprocessor/appinfo.py | igik20/pdbprocessor | 2d385dd895019e5703508915599d35db2df4bcb2 | [
"MIT"
] | null | null | null | pdbprocessor/appinfo.py | igik20/pdbprocessor | 2d385dd895019e5703508915599d35db2df4bcb2 | [
"MIT"
] | null | null | null | pdbprocessor/appinfo.py | igik20/pdbprocessor | 2d385dd895019e5703508915599d35db2df4bcb2 | [
"MIT"
] | null | null | null | class AppInfo:
VERSION = "Alpha 0.2"
AUTHOR = "Igor Trujnara"
| 17.75 | 28 | 0.619718 | 9 | 71 | 4.888889 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.267606 | 71 | 4 | 29 | 17.75 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6f11356e6c06cab1bad7698ae514a58841aa063b | 5,709 | py | Python | HVAE/Modules.py | omiethescientist/HyperbolicDeepLearning | 33554d3bf4668fdd1945df5be69ab38a1e7db686 | [
"MIT"
] | 1 | 2020-01-10T20:35:05.000Z | 2020-01-10T20:35:05.000Z | HVAE/Modules.py | omiethescientist/HyperbolicDeepLearning | 33554d3bf4668fdd1945df5be69ab38a1e7db686 | [
"MIT"
] | null | null | null | HVAE/Modules.py | omiethescientist/HyperbolicDeepLearning | 33554d3bf4668fdd1945df5be69ab38a1e7db686 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import geoopt
from geoopt.manifolds import PoincareBall
from RiemannLayers import GeodesicLayer, MobiusLayer
# Code Inspired by Emile Mathieu from Microsoft Research and Oxford Stats
# Paper: https://arxiv.org/pdf/1901.06033.pdf
# Define Encoder and Decoder Modules for the VAE
class WrappedEncoder(nn.Module):
def __init__(self, input_dim, latent_dim, n_hlayers, hlayer_size, activation, dropout, manifold):
super(WrappedEncoder, self).__init__()
self.x = input_dim
self.z = latent_dim
self.n = n_hlayers
self.n_size = hlayer_size
self.activation = activation
self.dropout = dropout
self.manifold = manifold
#Create custom encoder architechture
layers = []
layers.extend([nn.Linear(self.x, self.n_size), self.activation, nn.Dropout(p=self.dropout)])
for i in range(self.n - 1):
layers.extend([nn.Linear(self.n_size, self.n_size),
self.activation,
nn.Dropout(p=self.dropout)])
self.enc = nn.Sequential(*layers)
        self.learned_param = nn.Linear(self.n_size, self.z)
        self.sigma_out = nn.Linear(self.n_size, self.z)

    def forward(self, inputs):
        e = self.enc(inputs)
        param = self.learned_param(e)
        mu = self.manifold.expmap0(param)
        # Learn the scale from the encoding (as in MobiusEncoder)
        # instead of deriving it from mu
        log_Sigma = F.softplus(self.sigma_out(e))
        return mu, log_Sigma
class WrappedDecoder(nn.Module):
def __init__(self, latent_dim, output_dim, n_hlayers, hlayer_size, activation, dropout, manifold):
super(WrappedDecoder, self).__init__()
self.z = latent_dim
self.x = output_dim
self.n = n_hlayers
self.n_size = hlayer_size
self.activation = activation
self.dropout = dropout
self.manifold = manifold
#Create Custom Encoder Architechture
layers = []
layers.extend([nn.Linear(self.z, self.n_size), self.activation, nn.Dropout(p=self.dropout)])
for i in range(self.n - 1):
layers.extend([nn.Linear(self.n_size, self.n_size),
self.activation,
nn.Dropout(p=self.dropout)])
self.dec = nn.Sequential(*layers)
self.output_layer = nn.Linear(self.n_size, self.x)
def forward(self, embeddings):
emb = self.manifold.logmap0(embeddings)
emb = self.dec(emb)
recon = self.output_layer(emb)
return recon
class MobiusEncoder(nn.Module):
def __init__(self, input_dim, latent_dim, n_hlayers, hlayer_size, activation, dropout, manifold):
super(MobiusEncoder, self).__init__()
self.x = input_dim
self.z = latent_dim
self.n = n_hlayers
self.n_size = hlayer_size
self.activation = activation
self.dropout = dropout
self.manifold = manifold
layers = []
layers.extend([nn.Linear(self.x, self.n_size), self.activation, nn.Dropout(p=self.dropout)])
for i in range(self.n - 1):
layers.extend([nn.Linear(self.n_size, self.n_size),
self.activation,
nn.Dropout(p=self.dropout)])
self.enc = nn.Sequential(*layers)
self.sigma_out = nn.Linear(self.n_size, self.z)
self.output_layer = MobiusLayer(self.n_size, self.z, self.manifold)
def forward(self, inputs):
e = self.enc(inputs)
mu = self.output_layer(e)
mu = self.manifold.expmap0(mu)
log_Sigma = F.softplus(self.sigma_out(e))
return mu, log_Sigma
class GeodesicDecoder(nn.Module):
def __init__(self, latent_dim, output_dim, n_hlayers, hlayer_size, activation, dropout, manifold):
super(GeodesicDecoder, self).__init__()
self.z = latent_dim
self.x = output_dim
self.n = n_hlayers
self.n_size = hlayer_size
self.activation = activation
self.dropout = dropout
self.manifold = manifold
input_layer = GeodesicLayer(self.z, self.n_size, self.manifold)
layers = [input_layer]
layers.extend([self.activation, nn.Dropout(p=self.dropout)])
for i in range(self.n - 1):
layers.extend([nn.Linear(self.n_size, self.n_size),
self.activation,
nn.Dropout(p=self.dropout)])
self.dec = nn.Sequential(*layers)
self.output_layer = nn.Linear(self.n_size, self.x)
def forward(self, embeddings):
decode = self.dec(embeddings)
recon = self.output_layer(decode)
return recon
class MobiusDecoder(nn.Module):
def __init__(self, latent_dim, output_dim, n_hlayers, hlayer_size, activation, dropout, manifold):
super(MobiusDecoder, self).__init__()
self.z = latent_dim
self.x = output_dim
self.n = n_hlayers
self.n_size = hlayer_size
self.activation = activation
self.dropout = dropout
self.manifold = manifold
layers = []
layers.extend([MobiusLayer(self.z, self.n_size, self.manifold), self.activation, nn.Dropout(p=self.dropout)])
for i in range(self.n - 1):
layers.extend([nn.Linear(self.n_size, self.n_size),
self.activation,
nn.Dropout(p=self.dropout)])
self.dec = nn.Sequential(*layers)
self.output_layer = nn.Linear(self.n_size, self.x)
def forward(self, embeddings):
emb = self.dec(embeddings)
recon = self.output_layer(emb)
return recon
#Debugging Code
#if __name__ == '__main__':
# inputs = torch.randn(64, 6000).double()
# x = inputs.shape[1]
# z = 2
# n = 2
# n_size = 256
# activ = nn.LeakyReLU()
# drop_rate = 0.2
# manifold = PoincareBall(c=1)
# Encoders = [WrappedEncoder(x, z, n, n_size, activ, drop_rate, manifold),
# MobiusEncoder(x, z, n, n_size, activ, drop_rate, manifold)]
# for e in Encoders:
# e = e.double()
# print(e(inputs))
# embeddings = torch.randn(64, 2).double()
# Decoders = [GeodesicDecoder(z, x, n, n_size, activ, drop_rate, manifold),
# MobiusDecoder(z, x, n, n_size, activ, drop_rate, manifold)]
# for d in Decoders:
# d = d.double()
# print(d(embeddings))
| 33.19186 | 111 | 0.678753 | 805 | 5,709 | 4.637267 | 0.141615 | 0.048219 | 0.062684 | 0.073132 | 0.733994 | 0.710956 | 0.704795 | 0.647469 | 0.627645 | 0.596571 | 0 | 0.007642 | 0.197758 | 5,709 | 171 | 112 | 33.385965 | 0.807424 | 0.147662 | 0 | 0.715447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081301 | false | 0 | 0.04878 | 0 | 0.211382 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6f29e4984cd54995362e3f86984d2e51409eef2b | 189 | py | Python | etc/gunicorn.conf.py | NestorMonroy/BlogTemplate | 82dfc7eb26e8a8ff0d51f29176c3b4d537092be7 | [
"MIT"
] | null | null | null | etc/gunicorn.conf.py | NestorMonroy/BlogTemplate | 82dfc7eb26e8a8ff0d51f29176c3b4d537092be7 | [
"MIT"
] | 8 | 2020-07-22T02:06:35.000Z | 2021-09-22T19:22:27.000Z | etc/gunicorn.conf.py | NestorMonroy/BlogTemplate | 82dfc7eb26e8a8ff0d51f29176c3b4d537092be7 | [
"MIT"
] | null | null | null | workers = 2
bind = '127.0.0.1:8000'
workers = 1
timeout = 60
errorlog = '/usr/local/apps/blog-nestor/nblog.gunicorng.error'
accesslog = '/usr/local/apps/blog-nestor/nblog.gunicorng.access'
| 27 | 64 | 0.73545 | 30 | 189 | 4.633333 | 0.666667 | 0.115108 | 0.172662 | 0.230216 | 0.517986 | 0.517986 | 0.517986 | 0 | 0 | 0 | 0 | 0.081871 | 0.095238 | 189 | 6 | 65 | 31.5 | 0.730994 | 0 | 0 | 0 | 0 | 0 | 0.597884 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6f305c939d18f7600fecc97812393e8050ecf030 | 2,616 | py | Python | catalog/model/plain_models.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | catalog/model/plain_models.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | catalog/model/plain_models.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | #-*- coding: utf-8 -*-
""" EOSS catalog system
catalog objects and fixed data structures used for the serialization/deserialization process
"""
__author__ = "Thilo Wehrmann, Steffen Gebhardt"
__copyright__ = "Copyright 2016, EOSS GmbH"
__credits__ = ["Thilo Wehrmann", "Steffen Gebhardt"]
__license__ = "GPL"
__version__ = "1.0.0"
__maintainer__ = "Thilo Wehrmann"
__email__ = "twehrmann@eoss.cloud"
__status__ = "Production"
from datetime import datetime
class ResourcesURLS(object):
def __init__(self):
self.metadata_url = None
self.resource_url = None
self.quicklook_url = None
class Catalog_Dataset(object):
def __init__(self):
self.entity_id = None
self.acq_time = None
self.sensor = None
self.tile_identifier = None
self.clouds = None
self.level = None
self.daynight = None
self.time_registered = datetime.utcnow()
def __hash__(self):
return hash(self.entity_id) ^ hash(self.tile_identifier) ^ hash(self.acq_time)
class S3PrivateContainer(object):
def __init__(self):
self.region = None
self.bucket = None
self.filename = None
def to_dict(self):
return dict(s3privat=self.__dict__)
class S3PublicContainer(object):
def __init__(self):
self.http = None
self.bucket = None
self.prefix = None
def to_dict(self):
return dict(s3public=self.__dict__)
class SentinelS3Container(object):
def __init__(self):
self.zip = None
self.bucket = None
self.tile = None
self.product = None
self.quicklook = None
def to_dict(self):
return dict(s3public=self.__dict__)
class CopernicusSciHubContainer(object):
def __init__(self):
self.http = None
def to_dict(self):
return dict(scihub=self.__dict__)
class USGSOrderContainer(object):
def __init__(self):
self.link = None
def to_dict(self):
return dict(usgs=self.__dict__)
class GoogleLandsatContainer(object):
supported_sensors = {'OLI_TIRS': 'L8', 'LANDSAT_ETM_SLC_OFF': 'L7', 'LANDSAT_ETM': 'L7',
'LANDSAT_TM': 'L5', 'TIRS': 'L8', 'OLI': 'L8'}
base = 'http://storage.googleapis.com/earthengine-public/landsat/%s/%03d/%03d/%s.tar.bz'
def __init__(self):
self.link = None
def to_dict(self):
return dict(google=self.__dict__)
class PlanetContainer(object):
def __init__(self):
self.analytic = None
self.visual = None
def to_dict(self):
return dict(planet=self.__dict__)
| 24.222222 | 92 | 0.647171 | 309 | 2,616 | 5.074434 | 0.365696 | 0.091837 | 0.063138 | 0.086097 | 0.316327 | 0.205995 | 0.205995 | 0.119898 | 0.119898 | 0.119898 | 0 | 0.01214 | 0.244266 | 2,616 | 107 | 93 | 24.448598 | 0.780981 | 0.051223 | 0 | 0.342466 | 0 | 0.013699 | 0.115198 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232877 | false | 0 | 0.013699 | 0.109589 | 0.506849 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
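The container classes in the record above all share one serialization idiom: `to_dict` wraps the instance's `__dict__` under a fixed top-level key (`s3public`, `scihub`, `usgs`, ...) so consumers can tell payload types apart. A minimal standalone sketch of that pattern, re-declaring one container so it runs on its own (the bucket and prefix values are made up):

```python
class S3PublicContainer(object):
    """Re-declared standalone sketch of the container pattern above."""
    def __init__(self):
        self.http = None
        self.bucket = None
        self.prefix = None

    def to_dict(self):
        # __dict__ holds exactly the attributes assigned on the instance,
        # so the nested dict mirrors the container's fields.
        return dict(s3public=self.__dict__)

container = S3PublicContainer()
container.bucket = "example-bucket"   # hypothetical values
container.prefix = "tiles/32/T/NS"
print(container.to_dict())
# → {'s3public': {'http': None, 'bucket': 'example-bucket', 'prefix': 'tiles/32/T/NS'}}
```

Because the wrapper key names the container type, a downstream serializer can dispatch on the outer key without inspecting the fields.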
6f47fee45670779fff046e3ea197a15ae1be3db4 | 38 | py | Python | tests/src/industry/framework/__init__.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | 3 | 2019-09-02T18:21:22.000Z | 2020-04-17T16:34:57.000Z | tests/src/industry/framework/__init__.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | null | null | null | tests/src/industry/framework/__init__.py | rpeach-sag/apama-industry-analytics-kit | a3f6039915501d41251b6f7ec41b0cb8111baf7b | [
"Apache-2.0"
] | null | null | null | __all__ = [ "BaseTest", "Correlator" ] | 38 | 38 | 0.657895 | 3 | 38 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6f54330a15167a0071b3b689ba003037c296ab29 | 98 | py | Python | output/models/ms_data/wildcards/wild_o012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/wildcards/wild_o012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/wildcards/wild_o012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.ms_data.wildcards.wild_o012_xsd.wild_o012 import Foo
__all__ = [
"Foo",
]
| 16.333333 | 71 | 0.744898 | 15 | 98 | 4.333333 | 0.8 | 0.246154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.142857 | 98 | 5 | 72 | 19.6 | 0.702381 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6f56837109e174665ba1d7bb7e0ec728829ee7c2 | 116 | py | Python | seismiqb/__init__.py | gazprom-neft/seismiqb | d4906d41c79407c99cfa6f91d6005c0e453d1138 | [
"Apache-2.0"
] | 73 | 2019-10-08T08:50:12.000Z | 2022-03-23T20:18:02.000Z | seismiqb/__init__.py | gazprom-neft/seismiqb | d4906d41c79407c99cfa6f91d6005c0e453d1138 | [
"Apache-2.0"
] | 69 | 2019-09-06T14:00:57.000Z | 2022-03-30T13:02:54.000Z | seismiqb/__init__.py | gazprom-neft/seismiqb | d4906d41c79407c99cfa6f91d6005c0e453d1138 | [
"Apache-2.0"
] | 28 | 2019-11-04T18:40:07.000Z | 2022-03-23T16:18:54.000Z | """Init file"""
from . import batchflow
from .src import * # pylint: disable=wildcard-import
__version__ = '0.1.0'
| 19.333333 | 52 | 0.698276 | 16 | 116 | 4.8125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.146552 | 116 | 5 | 53 | 23.2 | 0.747475 | 0.362069 | 0 | 0 | 0 | 0 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6f668ae88a15837fe0e68163ed52b0e1004acdf8 | 398 | py | Python | tests/beacon/types/test_fork_data.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | 1 | 2018-12-09T11:56:53.000Z | 2018-12-09T11:56:53.000Z | tests/beacon/types/test_fork_data.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | null | null | null | tests/beacon/types/test_fork_data.py | shreyasnbhat/py-evm | cd31d83185e102a7cb2f11e2f67923b069ee9cef | [
"MIT"
] | 2 | 2018-12-09T15:58:11.000Z | 2020-09-29T07:10:21.000Z | from eth.beacon.types.fork_data import (
ForkData,
)
def test_defaults(sample_fork_data_params):
fork_data = ForkData(**sample_fork_data_params)
assert fork_data.pre_fork_version == sample_fork_data_params['pre_fork_version']
assert fork_data.post_fork_version == sample_fork_data_params['post_fork_version']
assert fork_data.fork_slot == sample_fork_data_params['fork_slot']
| 36.181818 | 86 | 0.798995 | 59 | 398 | 4.864407 | 0.305085 | 0.278746 | 0.243902 | 0.348432 | 0.557491 | 0.216028 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113065 | 398 | 10 | 87 | 39.8 | 0.813031 | 0 | 0 | 0 | 0 | 0 | 0.105528 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6f7942f138265d9b78e71c0374638965bbb6e088 | 184 | py | Python | stats/queries.py | TravelChain/golos-ql | a2acad0b56d349f3811b2bd0fc8ec1ce3257156c | [
"MIT"
] | 5 | 2018-08-28T20:54:54.000Z | 2022-02-09T21:21:53.000Z | stats/queries.py | TravelChain/golos-ql | a2acad0b56d349f3811b2bd0fc8ec1ce3257156c | [
"MIT"
] | null | null | null | stats/queries.py | TravelChain/golos-ql | a2acad0b56d349f3811b2bd0fc8ec1ce3257156c | [
"MIT"
] | 2 | 2018-09-26T06:28:34.000Z | 2018-11-20T20:14:00.000Z | import graphene
from stats.types import Stats
class StatsQuery(graphene.ObjectType):
stats = graphene.Field(Stats)
def resolve_stats(self, context):
return Stats()
| 16.727273 | 38 | 0.722826 | 22 | 184 | 6 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 184 | 10 | 39 | 18.4 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 3 |
4895029808bc2d4e25becca7179cc71558bcb3c0 | 433 | py | Python | AC_TD3_code/utils/commons.py | Jiang-HB/AC_CDQ | 4b4ec2d611c4481ad0b99cf7ea79eb23014a0325 | [
"MIT"
] | 7 | 2021-05-03T05:50:14.000Z | 2022-03-24T15:35:59.000Z | AC_TD3_code/utils/commons.py | Jiang-HB/AC_CDQ | 4b4ec2d611c4481ad0b99cf7ea79eb23014a0325 | [
"MIT"
] | null | null | null | AC_TD3_code/utils/commons.py | Jiang-HB/AC_CDQ | 4b4ec2d611c4481ad0b99cf7ea79eb23014a0325 | [
"MIT"
] | 1 | 2022-03-25T02:24:53.000Z | 2022-03-25T02:24:53.000Z | import pickle
def load_data(path):
file = open(path, "rb")
data = pickle.load(file)
file.close()
return data
def save_data(path, data):
file = open(path, "wb")
pickle.dump(data, file)
file.close()
def chunker_list(seq, size):
return [seq[pos: pos + size] for pos in range(0, len(seq), size)]
def chunker_num(num, size):
return [list(range(num))[pos: pos + size] for pos in range(0, num, size)] | 24.055556 | 77 | 0.635104 | 70 | 433 | 3.871429 | 0.357143 | 0.059041 | 0.088561 | 0.095941 | 0.177122 | 0.177122 | 0.177122 | 0.177122 | 0 | 0 | 0 | 0.005848 | 0.210162 | 433 | 18 | 77 | 24.055556 | 0.78655 | 0 | 0 | 0.142857 | 0 | 0 | 0.009217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.071429 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
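A quick round-trip of the pickle helpers and the list chunker from the record above, re-declared here (using context managers instead of manual `close()`) so the sketch is self-contained; the path is a throwaway temp file:

```python
import os
import pickle
import tempfile

# Re-declared versions of the helpers above; `with` closes the file even on error.
def save_data(path, data):
    with open(path, "wb") as f:
        pickle.dump(data, f)

def load_data(path):
    with open(path, "rb") as f:
        return pickle.load(f)

def chunker_list(seq, size):
    # Split seq into consecutive chunks of at most `size` elements.
    return [seq[pos: pos + size] for pos in range(0, len(seq), size)]

path = os.path.join(tempfile.mkdtemp(), "demo.pkl")
save_data(path, list(range(7)))
data = load_data(path)
print(chunker_list(data, 3))  # → [[0, 1, 2], [3, 4, 5], [6]]
```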
48a22edbd76757826317bab349c3dda7fe2d8fb3 | 13,146 | py | Python | web/transiq/restapi/tests/employee_roles_functionality_mapping_tests.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null | web/transiq/restapi/tests/employee_roles_functionality_mapping_tests.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | 14 | 2020-06-05T23:06:45.000Z | 2022-03-12T00:00:18.000Z | web/transiq/restapi/tests/employee_roles_functionality_mapping_tests.py | manibhushan05/transiq | 763fafb271ce07d13ac8ce575f2fee653cf39343 | [
"Apache-2.0"
] | null | null | null | import json
from django.contrib.auth.models import User
from django.urls import reverse
from model_mommy import mommy
from rest_framework import status
from rest_framework.test import APITestCase
from authentication.models import Profile
from employee.models import Employee
from restapi.models import TaskDashboardFunctionalities, EmployeeRoles, EmployeeRolesFunctionalityMapping
from utils.models import AahoOffice
class ErfmTests(APITestCase):
def setUp(self):
self.login_url = reverse('login')
self.logout_url = reverse('logout')
self.erfmlist_url = reverse('employee_roles_functionalities_mapping_list/')
self.erfmcreate_url = reverse('employee_roles_functionalities_mapping_create/')
self.user = User.objects.create_user(username='john_doe',
email='harshadasawant89@gmail.com',
password='abc12345')
Profile.objects.create(
user=self.user,
name='John_Doe',
phone='9619125174',
)
self.login_data = self.client.post(self.login_url, {'username': 'john_doe', 'password': 'abc12345'}).content
self.login_data = json.loads(self.login_data.decode('utf8'))
self.token = 'Token {}'.format(self.login_data['token'])
self.client.credentials(HTTP_AUTHORIZATION=self.token)
self.aaho_office = mommy.make(AahoOffice)
self.employee = mommy.make(Employee, office=self.aaho_office)
self.employee_id = self.employee.id
self.employee_roles = mommy.make(EmployeeRoles)
self.employeeroles_id = self.employee_roles.id
self.taskdf = mommy.make(TaskDashboardFunctionalities)
self.functionality = self.taskdf.functionality
self.tdid = self.taskdf.id
self.erfm = mommy.make(EmployeeRolesFunctionalityMapping, td_functionality=self.taskdf,
employee_role=self.employee_roles, caption="Bharat")
self.erfm_id = self.erfm.id
self.caption = self.erfm.caption
class ErfmCreateTests(ErfmTests):
"""
Test ID:TS02RQ00006
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
Message:wrong content type
Status code:415
"""
def test_erfm_create_415_header_with_wrong_content_type(self):
        # Negative test case of erfm create with HTTP Header Authorization token but wrong content type
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": self.caption
}), content_type='application/pdf')
self.assertEqual(response.status_code, status.HTTP_415_UNSUPPORTED_MEDIA_TYPE)
"""
Test ID:TS02RQ00007
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
Message:invalid method header
Status code:401
"""
def test_erfm_create_401_no_header(self):
        # Negative test case of erfm create with no HTTP Header Authorization token
self.client.credentials()
response = self.client.post(self.erfmcreate_url, {"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": self.caption
})
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertEqual(response.data['detail'], "Authentication credentials were not provided.")
"""
Test ID:TS02RQ00008
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
Message:expired header
Status code:401
"""
def test_erfm_create_401_expired_header(self):
        # Negative test case of erfm create with expired/logged-out HTTP Header Authorization token
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.delete(self.logout_url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": self.caption
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertEqual(response.data['detail'], "Invalid token.")
"""
Test ID:TS02RQ00008
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
Message:wrong token
Status code:401
"""
def test_erfm_create_401_wrong_token(self):
        # Negative test case of erfm create with wrong HTTP Header Authorization token
token = 'Token 806fa0efd3ce26fe080f65da4ad5a137e1d056ff'
self.client.credentials(HTTP_AUTHORIZATION=token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": self.caption
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertEqual(response.data['detail'], "Invalid token.")
"""
Test ID:TS02RQ00009
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
    Message:empty request body
Status code:400
"""
def test_erfm_create_400_emptybody(self):
        # Negative test case of erfm create with HTTP Header Authorization token and an empty request body
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data['td_functionality_id'][0], "This field is required.")
self.assertEqual(response.data['employee_role_id'][0], "This field is required.")
self.assertEqual(response.data['caption'][0], "This field is required.")
"""
Test ID:TS02RQ00010
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
    Message:empty required fields
Status code:400
"""
def test_erfm_create_400_fields_empty(self):
        # Negative test case of erfm create with HTTP Header Authorization token and empty required fields
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": "",
"employee_role_id": "",
"caption": ""
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data['td_functionality_id'][0], "A valid integer is required.")
self.assertEqual(response.data['employee_role_id'][0], "A valid integer is required.")
self.assertEqual(response.data['caption'][0], "This field may not be blank.")
"""
Test ID:TS02RQ00011
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
    Message:non-integer field values
Status code:400
"""
def test_erfm_create_400_corrupt_fields(self):
        # Negative test case of erfm create with HTTP Header Authorization token and non-integer field values
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": "jhg",
"employee_role_id": "dsfy",
"caption": "jhgfq"
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data['td_functionality_id'][0], "A valid integer is required.")
self.assertEqual(response.data['employee_role_id'][0], "A valid integer is required.")
def test_erfm_create_400_non_existent_tdid(self):
        # Negative test case of erfm create with HTTP Header Authorization token and a non-existent td_functionality_id
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": 324223,
"employee_role_id": self.employee_roles,
"caption": self.caption
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_erfm_create_400_non_existent_employeeid(self):
        # Negative test case of erfm create with HTTP Header Authorization token and a non-existent employee_role_id
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": 7643645,
"caption": "jhgfq"
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
"""
Test ID:TS02RQ00012
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:failure
    Message:duplicate mapping
Status code:400
"""
def test_erfm_create_uniquefield(self):
        # Negative test case of erfm create with HTTP Header Authorization token and a duplicate mapping (unique-together violation)
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url, json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": "Inward Entry"
}), content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
"""
Test ID:TS02RQ00014
Created By:Hari
Created On:11/12/2018
Scenario:req-quotes-create/
Status:success
    Message:employee roles functionality mapping created
Status code:201
"""
def test_erfm_create_201(self):
        # Positive test case of erfm create with HTTP Header Authorization token
self.taskdf = mommy.make(TaskDashboardFunctionalities)
self.functionality = self.taskdf.functionality
self.tdid = self.taskdf.id
self.erfm = mommy.make(EmployeeRolesFunctionalityMapping, td_functionality=self.taskdf,
employee_role=self.employee_roles, caption="Bharat")
self.erfm_id = self.erfm.id
self.caption = self.erfm.caption
self.client.credentials(HTTP_AUTHORIZATION=self.token)
response = self.client.post(self.erfmcreate_url,
json.dumps({"td_functionality_id": self.tdid,
"employee_role_id": self.employeeroles_id,
"caption": self.caption
}),
content_type='application/json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(response.data['status'], "success")
self.assertEqual(response.data['msg'], "Employee Roles Functionalities Mapping Created")
| 50.367816 | 116 | 0.588012 | 1,335 | 13,146 | 5.614981 | 0.130337 | 0.022412 | 0.076708 | 0.046825 | 0.759072 | 0.739061 | 0.727054 | 0.683031 | 0.657818 | 0.641942 | 0 | 0.033269 | 0.330062 | 13,146 | 260 | 117 | 50.561538 | 0.817872 | 0.089077 | 0 | 0.464286 | 0 | 0 | 0.133668 | 0.015366 | 0 | 0 | 0 | 0 | 0.178571 | 1 | 0.085714 | false | 0.014286 | 0.071429 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
48a98d83a1203cf9dcf76c6ba1e1621f7c41c711 | 206 | py | Python | Project_Codev0.1/Class-diagram_Classes/Wallpaper.py | cyberseihis/Wallsource | 4bd981e75c3ebf97c9673ffb80147ef2bdf7d61a | [
"MIT"
] | null | null | null | Project_Codev0.1/Class-diagram_Classes/Wallpaper.py | cyberseihis/Wallsource | 4bd981e75c3ebf97c9673ffb80147ef2bdf7d61a | [
"MIT"
] | null | null | null | Project_Codev0.1/Class-diagram_Classes/Wallpaper.py | cyberseihis/Wallsource | 4bd981e75c3ebf97c9673ffb80147ef2bdf7d61a | [
"MIT"
] | null | null | null | class Wallpaper:
    def setWallpaperName(self, WallpaperName: str):
        self.WallpaperName = WallpaperName
    def remove(self):
        # `del self` only unbinds the local name; it does not destroy the object
        del self
    def pick(self):
        return self.WallpaperName
| 18.727273 | 51 | 0.645631 | 20 | 206 | 6.65 | 0.55 | 0.255639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.291262 | 206 | 11 | 52 | 18.727273 | 0.910959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
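A standalone usage sketch of the `Wallpaper` class from the record above, re-declared here with valid syntax so it runs on its own (the filename is a made-up example):

```python
class Wallpaper:
    def setWallpaperName(self, WallpaperName: str):
        self.WallpaperName = WallpaperName

    def pick(self):
        # Return the stored name via the instance attribute.
        return self.WallpaperName

w = Wallpaper()
w.setWallpaperName("sunset.png")  # hypothetical wallpaper name
print(w.pick())  # → sunset.png
```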
48dc87bfcd88a717eec42697f4eb5eaca3859450 | 81 | py | Python | monosi/scheduler/__init__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | 1 | 2022-02-20T21:42:16.000Z | 2022-02-20T21:42:16.000Z | monosi/scheduler/__init__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | null | null | null | monosi/scheduler/__init__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | null | null | null | from monosi.scheduler.base import MonosiScheduler
scheduler = MonosiScheduler()
| 20.25 | 49 | 0.839506 | 8 | 81 | 8.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098765 | 81 | 3 | 50 | 27 | 0.931507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
48de51ff8e5859d98ea0e54420963113fb408370 | 883 | py | Python | third-party/corenlp/third-party/stanza/test/slow_tests/text/test_senna.py | arunchaganty/odd-nails | d3667ea666c02b7a71af1c26c4b22b9f4ab4c7c0 | [
"Apache-2.0"
] | 5 | 2020-03-19T07:19:49.000Z | 2021-09-29T06:33:47.000Z | third-party/stanza/test/slow_tests/text/test_senna.py | arunchaganty/django-corenlp | 4cda142d375bdac84057cedc3d08b525b1e2d498 | [
"Apache-2.0"
] | 3 | 2015-12-03T00:30:26.000Z | 2016-01-05T22:07:20.000Z | third-party/corenlp/third-party/stanza/test/slow_tests/text/test_senna.py | arunchaganty/hypatia | d3667ea666c02b7a71af1c26c4b22b9f4ab4c7c0 | [
"Apache-2.0"
] | 3 | 2020-03-19T07:19:50.000Z | 2021-03-30T13:42:27.000Z | __author__ = 'victor'
import numpy as np
from unittest import TestCase
from stanza.text.vocab import SennaVocab
class TestSenna(TestCase):
def test_get_embeddings(self):
v = SennaVocab()
v.add("!")
E = v.get_embeddings()
e_exclamation = np.array([float(e) for e in """
-1.03682 1.77856 -0.693547 1.5948 1.5799 0.859243 1.15221 -0.976317 0.745304 -0.494589 0.308086 0.25239
-0.1976 1.26203 0.813864 -0.940734 -0.215163 0.11645 0.525697 1.95766 0.394232 1.27717 0.710788 -0.389351
0.161775 -0.106038 1.14148 0.607948 0.189781 -1.06022 0.280702 0.0251156 -0.198067 2.33027 0.408584
0.350751 -0.351293 1.77318 -0.723457 -0.13806 -1.47247 0.541779 -2.57005 -0.227714 -0.817816 -0.552209
0.360149 -0.10278 -0.36428 -0.64853
""".split()])
self.assertTrue(np.allclose(e_exclamation, E[v["!"]]))
| 40.136364 | 113 | 0.650057 | 150 | 883 | 3.766667 | 0.56 | 0.046018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.466476 | 0.206116 | 883 | 21 | 114 | 42.047619 | 0.339515 | 0 | 0 | 0 | 0 | 0.235294 | 0.573046 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
48ff1c1bb077a0cec584e9e40158a71e6705fed2 | 383 | py | Python | p4gen/__init__.py | siddhantbajaj1/Whispersnapper_p4benchmark_with_p4-version16_Compatibility | 093dea92c6419ddaf9126903b5275646f89fda37 | [
"Apache-2.0"
] | 15 | 2017-03-13T03:09:49.000Z | 2021-11-12T15:31:29.000Z | p4gen/__init__.py | AbhinavJindl/Whippersnapper_P4_benchmark | 9b06aeeebaf763a45c643b5a0901a36b94343759 | [
"Apache-2.0"
] | 1 | 2017-05-06T09:55:57.000Z | 2017-05-06T11:59:50.000Z | p4gen/__init__.py | AbhinavJindl/Whippersnapper_P4_benchmark | 9b06aeeebaf763a45c643b5a0901a36b94343759 | [
"Apache-2.0"
] | 13 | 2016-12-07T01:56:30.000Z | 2021-06-04T08:08:32.000Z | """
A python module that generates P4 benchmarking programs
.. moduleauthor:: Tu Dang <huynh.tu.dang@usi.sh>
"""
from subprocess import call
from pkg_resources import resource_filename
def copy_scripts(output_dir):
call(['cp', resource_filename(__name__, 'template/run_switch.sh'), output_dir])
call(['cp', resource_filename(__name__, 'template/run_test.py'), output_dir])
| 31.916667 | 83 | 0.759791 | 53 | 383 | 5.150943 | 0.641509 | 0.175824 | 0.095238 | 0.10989 | 0.336996 | 0.336996 | 0.336996 | 0.336996 | 0.336996 | 0 | 0 | 0.002941 | 0.112272 | 383 | 11 | 84 | 34.818182 | 0.8 | 0.27154 | 0 | 0 | 1 | 0 | 0.169742 | 0.081181 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
48ffc4550f0db067a00eca3191231180875b9c83 | 87 | py | Python | emplog/theapi/apps.py | tsitsiflora/newapi | 2f1c85b6b529c246fa1c890303f40b7308177d73 | [
"Apache-2.0"
] | null | null | null | emplog/theapi/apps.py | tsitsiflora/newapi | 2f1c85b6b529c246fa1c890303f40b7308177d73 | [
"Apache-2.0"
] | null | null | null | emplog/theapi/apps.py | tsitsiflora/newapi | 2f1c85b6b529c246fa1c890303f40b7308177d73 | [
"Apache-2.0"
] | null | null | null | from django.apps import AppConfig
class TheapiConfig(AppConfig):
name = 'theapi'
| 14.5 | 33 | 0.747126 | 10 | 87 | 6.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 87 | 5 | 34 | 17.4 | 0.902778 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
5b048e0da32688964dcb9b81de946d6a5a2ccf61 | 893 | py | Python | src/azure-firewall/azext_firewall/_client_factory.py | blackchoey/azure-cli-extensions | bbfd80ba164c4605dbdbe5e2b8dc26c3aa0f29e4 | [
"MIT"
] | 1 | 2021-09-16T09:13:38.000Z | 2021-09-16T09:13:38.000Z | src/azure-firewall/azext_firewall/_client_factory.py | blackchoey/azure-cli-extensions | bbfd80ba164c4605dbdbe5e2b8dc26c3aa0f29e4 | [
"MIT"
] | null | null | null | src/azure-firewall/azext_firewall/_client_factory.py | blackchoey/azure-cli-extensions | bbfd80ba164c4605dbdbe5e2b8dc26c3aa0f29e4 | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
def network_client_factory(cli_ctx, aux_subscriptions=None, **_):
from azure.cli.core.commands.client_factory import get_mgmt_service_client
from .profiles import CUSTOM_FIREWALL
return get_mgmt_service_client(cli_ctx, CUSTOM_FIREWALL, aux_subscriptions=aux_subscriptions,
api_version='2019-04-01')
def cf_firewalls(cli_ctx, _):
return network_client_factory(cli_ctx).azure_firewalls
def cf_firewall_fqdn_tags(cli_ctx, _):
return network_client_factory(cli_ctx).azure_firewall_fqdn_tags
| 44.65 | 97 | 0.610302 | 96 | 893 | 5.302083 | 0.510417 | 0.070727 | 0.117878 | 0.13556 | 0.220039 | 0.168959 | 0.168959 | 0.168959 | 0.168959 | 0 | 0 | 0.010283 | 0.128779 | 893 | 19 | 98 | 47 | 0.643959 | 0.37626 | 0 | 0 | 0 | 0 | 0.018116 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.222222 | 0.222222 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
5b0a59a3025b923d22c3bbeda256f8b2d00022d3 | 321 | py | Python | python/tests/test_ie.py | coleve27/simple_sauce | aae43a5402e111a6ca94520110760f3ae4b7b8d6 | [
"MIT"
] | null | null | null | python/tests/test_ie.py | coleve27/simple_sauce | aae43a5402e111a6ca94520110760f3ae4b7b8d6 | [
"MIT"
] | null | null | null | python/tests/test_ie.py | coleve27/simple_sauce | aae43a5402e111a6ca94520110760f3ae4b7b8d6 | [
"MIT"
] | null | null | null | from simplesauce.options import SauceOptions
class TestInternetExplorer(object):
def test_defaults(self):
sauce = SauceOptions('internet explorer')
assert sauce.browser_name == 'internet explorer'
assert sauce.browser_version == 'latest'
assert sauce.platform_name == 'Windows 10'
| 26.75 | 56 | 0.71028 | 33 | 321 | 6.787879 | 0.69697 | 0.147321 | 0.196429 | 0.241071 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007843 | 0.205607 | 321 | 11 | 57 | 29.181818 | 0.870588 | 0 | 0 | 0 | 0 | 0 | 0.155763 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |