hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ede81f243af3398e8f56425a8d4199584d3a742d | 242 | py | Python | test/appengine_api_tests.py | al3x/downforeveryoneorjustme | e1c47f78330b05753027b5d0f46b3d4f49261347 | [
"Apache-2.0"
] | 26 | 2015-01-13T23:41:29.000Z | 2020-04-09T01:24:44.000Z | test/appengine_api_tests.py | ananthrk/downforeveryoneorjustme | e1c47f78330b05753027b5d0f46b3d4f49261347 | [
"Apache-2.0"
] | 2 | 2015-04-15T16:51:52.000Z | 2017-09-04T10:00:49.000Z | test/appengine_api_tests.py | ananthrk/downforeveryoneorjustme | e1c47f78330b05753027b5d0f46b3d4f49261347 | [
"Apache-2.0"
] | 5 | 2015-05-25T11:34:01.000Z | 2021-07-13T19:19:29.000Z | import unittest
from google.appengine.api import urlfetch
class AppEngineAPITest(unittest.TestCase):
    def test_urlfetch(self):
        response = urlfetch.fetch('http://www.google.com')
        self.assertEquals(0, response.content.find('<html>'))
| 30.25 | 57 | 0.760331 | 30 | 242 | 6.1 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00463 | 0.107438 | 242 | 7 | 58 | 34.571429 | 0.842593 | 0 | 0 | 0 | 0 | 0 | 0.11157 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
ede90b07bb914c61617203da0d78fdef9c56ce61 | 1,515 | py | Python | qcloudsdkvpc/CreateFlowLogRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkvpc/CreateFlowLogRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkvpc/CreateFlowLogRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from qcloudsdkcore.request import Request
class CreateFlowLogRequest(Request):
    def __init__(self):
        super(CreateFlowLogRequest, self).__init__(
            'vpc', 'qcloudcliV1', 'CreateFlowLog', 'vpc.api.qcloud.com')

    def get_cloudLogId(self):
        return self.get_params().get('cloudLogId')

    def set_cloudLogId(self, cloudLogId):
        self.add_param('cloudLogId', cloudLogId)

    def get_flowLogDescription(self):
        return self.get_params().get('flowLogDescription')

    def set_flowLogDescription(self, flowLogDescription):
        self.add_param('flowLogDescription', flowLogDescription)

    def get_flowLogName(self):
        return self.get_params().get('flowLogName')

    def set_flowLogName(self, flowLogName):
        self.add_param('flowLogName', flowLogName)

    def get_resourceId(self):
        return self.get_params().get('resourceId')

    def set_resourceId(self, resourceId):
        self.add_param('resourceId', resourceId)

    def get_resourceType(self):
        return self.get_params().get('resourceType')

    def set_resourceType(self, resourceType):
        self.add_param('resourceType', resourceType)

    def get_trafficType(self):
        return self.get_params().get('trafficType')

    def set_trafficType(self, trafficType):
        self.add_param('trafficType', trafficType)

    def get_vpcId(self):
        return self.get_params().get('vpcId')

    def set_vpcId(self, vpcId):
        self.add_param('vpcId', vpcId)
| 29.134615 | 72 | 0.687129 | 165 | 1,515 | 6.090909 | 0.193939 | 0.041791 | 0.097512 | 0.118408 | 0.181095 | 0.181095 | 0 | 0 | 0 | 0 | 0 | 0.001634 | 0.192079 | 1,515 | 51 | 73 | 29.705882 | 0.819444 | 0.013861 | 0 | 0 | 0 | 0 | 0.133378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0 | 0.030303 | 0.212121 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
610c84993beb613ca5c83dcf688086e419e85c4c | 1,219 | py | Python | test/ofx_test_utils.py | myfreecomm/fixofx | 6eb5c286a97c6af8e9c2fd502ee93ca097eaa91f | [
"Apache-2.0"
] | 4 | 2015-09-25T03:45:42.000Z | 2017-12-20T05:16:15.000Z | test/ofx_test_utils.py | myfreecomm/fixofx | 6eb5c286a97c6af8e9c2fd502ee93ca097eaa91f | [
"Apache-2.0"
] | 1 | 2015-09-17T21:52:21.000Z | 2015-09-18T16:03:52.000Z | test/ofx_test_utils.py | myfreecomm/fixofx | 6eb5c286a97c6af8e9c2fd502ee93ca097eaa91f | [
"Apache-2.0"
] | null | null | null | # Copyright 2005-2010 Wesabe, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
fixtures = os.path.join(os.path.dirname(__file__) or '.', "fixtures")
def get_checking_stmt():
    return _read_file("checking.ofx")

def get_savings_stmt():
    return _read_file("savings.ofx")

def get_savings_with_self_closed_empty_tag_stmt():
    return _read_file("savings_with_self_closed_empty_tag.ofx")

def get_creditcard_stmt():
    return _read_file("creditcard.ofx")

def get_blank_memo_stmt():
    return _read_file("blank_memo.ofx")

def get_tag_with_line_break_stmt():
    return _read_file("tag_with_line_break.ofx")

def _read_file(filename):
    return open(os.path.join(fixtures, filename), 'rU').read()
| 29.731707 | 74 | 0.757998 | 188 | 1,219 | 4.659574 | 0.478723 | 0.063927 | 0.09589 | 0.123288 | 0.115297 | 0.06621 | 0 | 0 | 0 | 0 | 0 | 0.011505 | 0.144381 | 1,219 | 40 | 75 | 30.475 | 0.82838 | 0.454471 | 0 | 0 | 0 | 0 | 0.189522 | 0.093991 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0.0625 | 0.4375 | 0.9375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
b6282638b7a5f8803c62e8ae2cb8e0ed33e66c5f | 218 | py | Python | tsadm/slave/urls.py | jctincan/tsadm-webapp | f67ac891c58b240434260692cdf0ed8b6b9dcef6 | [
"BSD-3-Clause"
] | null | null | null | tsadm/slave/urls.py | jctincan/tsadm-webapp | f67ac891c58b240434260692cdf0ed8b6b9dcef6 | [
"BSD-3-Clause"
] | null | null | null | tsadm/slave/urls.py | jctincan/tsadm-webapp | f67ac891c58b240434260692cdf0ed8b6b9dcef6 | [
"BSD-3-Clause"
] | null | null | null |
from django.conf.urls import patterns, include, url
urlpatterns = patterns('',
    url(r'^(\d+)/$', 'tsadm.slave.views.dashboard', name='dashboard'),
    url(r'^admin/$', 'tsadm.slave.views.admin', name='admin'),
)
| 24.222222 | 70 | 0.646789 | 28 | 218 | 5.035714 | 0.607143 | 0.056738 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123853 | 218 | 8 | 71 | 27.25 | 0.73822 | 0 | 0 | 0 | 0 | 0 | 0.368664 | 0.230415 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b62e8682b1aa35938683234481acd17825241cdd | 71 | py | Python | main.py | Grimmaldi/supergoaltracker | e5cc7175816a0477b28117b6840972761c109270 | [
"MIT"
] | null | null | null | main.py | Grimmaldi/supergoaltracker | e5cc7175816a0477b28117b6840972761c109270 | [
"MIT"
] | 3 | 2022-03-20T15:46:18.000Z | 2022-03-27T21:26:39.000Z | main.py | Grimmaldi/supergoaltracker | e5cc7175816a0477b28117b6840972761c109270 | [
"MIT"
] | null | null | null | from app import app
if __name__ == '__main__':
app = app.Session() | 17.75 | 26 | 0.661972 | 10 | 71 | 3.9 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211268 | 71 | 4 | 27 | 17.75 | 0.696429 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b639c388508053fbf6f6f4b27eeac43628c79a6c | 306 | py | Python | session-0/asym/std_asym.py | Ivan-Solovyev/data-analysis-tutorial | 08bbfd1b907187f596da9d9460a2247562c28b88 | [
"MIT"
] | 1 | 2021-12-02T09:00:32.000Z | 2021-12-02T09:00:32.000Z | session-0/asym/std_asym.py | Ivan-Solovyev/data-analysis-tutorial | 08bbfd1b907187f596da9d9460a2247562c28b88 | [
"MIT"
] | null | null | null | session-0/asym/std_asym.py | Ivan-Solovyev/data-analysis-tutorial | 08bbfd1b907187f596da9d9460a2247562c28b88 | [
"MIT"
] | 5 | 2020-07-29T03:54:43.000Z | 2022-03-23T09:54:38.000Z | from math import sqrt, pow
def std_asym_ostap(n1,n2):
    return (VE(n1,n1).asym(VE(n2,n2))).error()

def std_asym_calc(n1,n2):
    return 2.*n1*sqrt(1./n1+1./n2) /( n2*pow(n1/n2+1.,2))
print("n=100")
print(" ostap = " + str(std_asym_ostap(100,100)))
print(" calc. = " + str(std_asym_calc (100,100)))
| 23.538462 | 57 | 0.627451 | 59 | 306 | 3.118644 | 0.355932 | 0.152174 | 0.108696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129278 | 0.140523 | 306 | 12 | 58 | 25.5 | 0.570342 | 0 | 0 | 0 | 0 | 0 | 0.081699 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0.25 | 0.625 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
b643b4d400a7bca3427485fad0a4e08b11fcc707 | 6,376 | py | Python | xdwlib/struct.py | linxsorg/xdwlib | 47a5a568085f40cd101a0661aa3abb56749f9890 | [
"ZPL-2.1"
] | null | null | null | xdwlib/struct.py | linxsorg/xdwlib | 47a5a568085f40cd101a0661aa3abb56749f9890 | [
"ZPL-2.1"
] | null | null | null | xdwlib/struct.py | linxsorg/xdwlib | 47a5a568085f40cd101a0661aa3abb56749f9890 | [
"ZPL-2.1"
] | null | null | null | #!/usr/bin/env python3
# vim: set fileencoding=utf-8 fileformat=unix expandtab :
"""struct.py -- Point and Rect
Copyright (C) 2010 HAYASHI Hideki <hideki@hayasix.com> All rights reserved.
This software is subject to the provisions of the Zope Public License,
Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
FOR A PARTICULAR PURPOSE.
"""
from collections import namedtuple
import math
__all__ = ("Point", "Rect", "EPSILON")
PI = math.pi
EPSILON = 0.01 # mm
_Point = namedtuple("Point", "x y")
class Point(_Point):

    """Point represented by 2D coordinate.

    >>> p = Point(0, 10)
    >>> p
    Point(0.00, 10.00)
    >>> p + Point(5, 10)
    Point(5.00, 20.00)
    >>> p - Point(5, 10)
    Point(-5.00, 0.00)
    >>> -p
    Point(0.00, -10.00)
    >>> p * 2
    Point(0.00, 20.00)
    >>> p / 2
    Point(0.00, 5.00)
    >>> p.shift(Point(20, 30))
    Point(20.00, 40.00)
    >>> p.shift([20, 30])
    Point(20.00, 40.00)
    >>> p.shift(20)
    Point(20.00, 10.00)
    >>> list(p)
    [0, 10]
    >>> p == Point(0, 10)
    True
    >>> p != Point(0, 10)
    False
    >>> p == Point(5, 10)
    False
    >>> p != Point(5, 10)
    True
    >>> bool(p)
    True
    >>> bool(Point(0, 0))
    False
    >>> p.rotate(30)
    Point(-5.00, 8.66)
    >>> p.rotate(30, origin=Point(10, 10))
    Point(1.34, 5.00)
    """

    def __str__(self):
        return f"({self.x:.2f}, {self.y:.2f})"

    def __repr__(self):
        return "Point" + self.__str__()

    def int(self):
        return Point(*map(int, self))

    fix = int

    def floor(self):
        return Point(*map(math.floor, self))

    def ceil(self):
        return Point(*map(math.ceil, self))

    @staticmethod
    def _round(f, places=0):
        # Round a number in accordance with the traditional way,
        # while Python's round() rounds to the nearest even number.
        return math.floor(f * math.pow(10, places) + .5) / math.pow(10, places)

    def round(self, places=0):
        return Point(self._round(self.x, places), self._round(self.y, places))

    def __bool__(self):
        return self != (0, 0)

    def __neg__(self):
        return Point(-self.x, -self.y)

    def __add__(self, pnt):
        return self.shift(pnt)

    def __sub__(self, pnt):
        return self.shift(-pnt)

    def __mul__(self, n):
        if not isinstance(n, (int, float)):
            raise NotImplementedError
        return Point(self.x * n, self.y * n)

    __rmul__ = __mul__

    def __truediv__(self, n):
        if not isinstance(n, (int, float)):
            raise NotImplementedError
        return Point(self.x / n, self.y / n)

    def shift(self, pnt, _y=0):
        if isinstance(pnt, (tuple, list)):
            return Point(self.x + pnt[0], self.y + pnt[1])
        elif isinstance(pnt, (int, float)) and isinstance(_y, (int, float)):
            return Point(self.x + pnt, self.y + _y)
        else:
            raise NotImplementedError

    def rotate(self, degree, origin=None):
        p = Point(*self)
        if origin is not None:
            p -= origin
        rad = PI * degree / 180.0
        sin, cos = math.sin(rad), math.cos(rad)
        p = Point(p.x * cos - p.y * sin, p.x * sin + p.y * cos)
        if origin is not None:
            p += origin
        return p
_Rect = namedtuple("_Rect", "left top right bottom")
class Rect(_Rect):

    """Half-open rectangular region.

    A region is represented by half-open coordinate intervals. Left-top
    coordinate is inclusive but right-bottom one is exclusive.

    >>> r = Rect(0, 10, 20, 30)
    >>> r
    Rect(0.00, 10.00, 20.00, 30.00)
    >>> r.position()
    Point(0.00, 10.00)
    >>> r.size()
    Point(20.00, 20.00)
    >>> r.shift(Point(15, 25))
    Rect(15.00, 35.00, 35.00, 55.00)
    >>> r * 2
    Rect(0.00, 10.00, 40.00, 50.00)
    >>> r / 2
    Rect(0.00, 10.00, 10.00, 20.00)
    >>> list(r)
    [0, 10, 20, 30]
    >>> r == Rect(0, 10, 20, 30)
    True
    >>> r != Rect(0, 10, 20, 30)
    False
    >>> r.position()
    Point(0.00, 10.00)
    >>> r.size()
    Point(20.00, 20.00)
    >>> r.position_and_size()
    (Point(0.00, 10.00), Point(20.00, 20.00))
    """

    def __str__(self):
        return f"({', '.join(f'{x:.2f}' for x in self)})"

    def __repr__(self):
        return "Rect" + self.__str__()

    def half_open(self):
        """Get half-open version i.e. right-bottom is excluded."""
        return Rect(self.left, self.top,
                    self.right + EPSILON, self.bottom + EPSILON)

    def closed(self):
        """Get closed version i.e. right-bottom is included."""
        return Rect(self.left, self.top,
                    self.right - EPSILON, self.bottom - EPSILON)

    def int(self):
        """Special method to adapt to XDW_RECT."""
        return Rect(*map(int, self))

    fix = int

    def position(self):
        return Point(self.left, self.top)

    def size(self):
        return Point(self.right - self.left, self.bottom - self.top)

    def position_and_size(self):
        return (self.position(), self.size())

    def __mul__(self, n):
        if not isinstance(n, (int, float)):
            raise NotImplementedError
        return Rect(self.left, self.top,
                    self.left + (self.right - self.left) * n,
                    self.top + (self.bottom - self.top) * n)

    __rmul__ = __mul__

    def __truediv__(self, n):
        if not isinstance(n, (int, float)):
            raise NotImplementedError
        return Rect(self.left, self.top,
                    self.left + (self.right - self.left) / n,
                    self.top + (self.bottom - self.top) / n)

    def shift(self, pnt, _y=0):
        if isinstance(pnt, (tuple, list)):
            x, y = pnt
        elif isinstance(pnt, (int, float)) and isinstance(_y, (int, float)):
            x, y = pnt, _y
        else:
            raise NotImplementedError
        return Rect(self.left + x, self.top + y,
                    self.right + x, self.bottom + y)

    def rotate(self, degree, origin=None):
        return Rect(p.rotate(degree, origin=origin) for p in self)
if __name__ == "__main__":
    import doctest
    doctest.testmod()
| 26.131148 | 79 | 0.558501 | 916 | 6,376 | 3.774017 | 0.20524 | 0.015042 | 0.017356 | 0.016199 | 0.430431 | 0.391091 | 0.318774 | 0.274805 | 0.255135 | 0.255135 | 0 | 0.064886 | 0.289366 | 6,376 | 243 | 80 | 26.238683 | 0.69808 | 0.334536 | 0 | 0.380952 | 0 | 0 | 0.034002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.257143 | false | 0 | 0.028571 | 0.161905 | 0.609524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
b6465545c0f9db4b5071fac0297f0ee2d6686b31 | 336 | py | Python | menpo3d/base.py | apapaion/menpo3d | 09aaeb37fdd9435827041715dc8248e49e63a2d0 | [
"BSD-3-Clause"
] | 134 | 2015-03-14T22:53:45.000Z | 2022-03-26T05:24:32.000Z | menpo3d/base.py | apapaion/menpo3d | 09aaeb37fdd9435827041715dc8248e49e63a2d0 | [
"BSD-3-Clause"
] | 51 | 2015-02-02T12:48:53.000Z | 2021-05-19T03:20:38.000Z | menpo3d/base.py | apapaion/menpo3d | 09aaeb37fdd9435827041715dc8248e49e63a2d0 | [
"BSD-3-Clause"
] | 48 | 2015-02-02T16:48:52.000Z | 2022-03-17T15:41:49.000Z | import os
from pathlib import Path
def menpo3d_src_dir_path():
r"""The path to the top of the menpo3d Python package.
Useful for locating where the data folder is stored.
Returns
-------
path : str
The full path to the top of the Menpo3d package
"""
return Path(os.path.abspath(__file__)).parent
| 21 | 58 | 0.669643 | 51 | 336 | 4.27451 | 0.607843 | 0.055046 | 0.082569 | 0.110092 | 0.220183 | 0.220183 | 0.220183 | 0 | 0 | 0 | 0 | 0.011952 | 0.252976 | 336 | 15 | 59 | 22.4 | 0.856574 | 0.547619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b69d04502a6f8d71ba5ee9a05c5959dfeda35999 | 1,093 | py | Python | Auths/models.py | cool199966/AccountManager | 7026e605482c59c53796be62125dc298bc3104ef | [
"MIT"
] | 1 | 2022-03-29T13:54:42.000Z | 2022-03-29T13:54:42.000Z | Auths/models.py | cool-develope/AccountManager | 7026e605482c59c53796be62125dc298bc3104ef | [
"MIT"
] | null | null | null | Auths/models.py | cool-develope/AccountManager | 7026e605482c59c53796be62125dc298bc3104ef | [
"MIT"
] | null | null | null | from django.db import models
import datetime
from django.contrib.auth.models import (
    BaseUserManager, AbstractBaseUser, Group, PermissionsMixin
)
class MyUserManager(BaseUserManager):

    def create_user(self, username, password = None):
        user = self.model(
            username = username,
        )
        user.set_password(password)
        user.save(using = self._db)
        return user

    def create_superuser(self, username, password):
        user = self.create_user(username, password)
        user.is_admin = True
        user.is_superuser = True
        user.save(using = self._db)
        return user


class MyUser(AbstractBaseUser, PermissionsMixin):
    username = models.CharField(max_length = 20, unique = True)
    is_virtual = models.BooleanField(default = False)
    is_active = models.BooleanField(default=True)
    is_admin = models.BooleanField(default = False)

    objects = MyUserManager()

    USERNAME_FIELD = 'username'
    REQUIRED_FIELD = []

    def __str__(self):
        return self.username

    def get_full_name(self):
        return self.username

    def get_short_name(self):
        return self.username

    @property
    def is_staff(self):
return self.is_admin | 23.76087 | 60 | 0.756633 | 138 | 1,093 | 5.818841 | 0.369565 | 0.07472 | 0.069738 | 0.082192 | 0.179328 | 0.141968 | 0.072229 | 0 | 0 | 0 | 0 | 0.002151 | 0.149131 | 1,093 | 46 | 61 | 23.76087 | 0.86129 | 0 | 0 | 0.194444 | 0 | 0 | 0.007313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.111111 | 0.083333 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 3 |
fcc2c6d2b9ea0e61eef4b9a727178a5fce8e5eb2 | 117 | py | Python | Turtles.py | Comp-Sci-Principles-2018-19/chapter-2-exercises-lanoflatfaceo | 7c21318193017910b63fd6d9ba47903acc89a153 | [
"MIT"
] | null | null | null | Turtles.py | Comp-Sci-Principles-2018-19/chapter-2-exercises-lanoflatfaceo | 7c21318193017910b63fd6d9ba47903acc89a153 | [
"MIT"
] | null | null | null | Turtles.py | Comp-Sci-Principles-2018-19/chapter-2-exercises-lanoflatfaceo | 7c21318193017910b63fd6d9ba47903acc89a153 | [
"MIT"
] | null | null | null | import turtle
wn=turtle.Screen()
alex=turtle.Turtle()
alex.forward(50)
alex.left(90)
alex.forward(30)
wn.mainloop() | 13 | 20 | 0.752137 | 19 | 117 | 4.631579 | 0.578947 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.076923 | 117 | 9 | 21 | 13 | 0.759259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fcc87ec4fa84a859d50e46156359c2558e44454c | 3,040 | py | Python | scoreboard_ocr/scoreboard_display.py | Nerdlytics/scoreboard_recognition | d423ce341c121f0ca93d734b264e986392134386 | [
"MIT"
] | null | null | null | scoreboard_ocr/scoreboard_display.py | Nerdlytics/scoreboard_recognition | d423ce341c121f0ca93d734b264e986392134386 | [
"MIT"
] | 8 | 2020-01-19T16:04:36.000Z | 2020-04-11T22:35:01.000Z | scoreboard_ocr/scoreboard_display.py | Nerdlytics/scoreboard_recognition | d423ce341c121f0ca93d734b264e986392134386 | [
"MIT"
] | null | null | null | scoreboard_display = [['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25'],
['51', '50', '49', '48', '47', '46', '45', '44', '43', '42', '41', '40', '39', '38', '37', '36', '35', '34', '33', '32', '31', '30', '29', '28', '27', '26'],
['52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77'],
['103', '102', '101', '100', '99', '98', '97', '96', '95', '94', '93', '92', '91', '90', '89', '88', '87', '86', '85', '84', '83', '82', '81', '80', '79', '78'],
['104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129'],
['155', '154', '153', '152', '151', '150', '149', '148', '147', '146', '145', '144', '143', '142', '141', '140', '139', '138', '137', '136', '135', '134', '133', '132', '131', '130'],
['156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181'],
['207', '206', '205', '204', '203', '202', '201', '200', '199', '198', '197', '196', '195', '194', '193', '192', '191', '190', '189', '188', '187', '186', '185', '184', '183', '182'],
['208', '209', '210', '211', '212', '213', '214', '215', '216', '217', '218', '219', '220', '221', '222', '223', '224', '225', '226', '227', '228', '229', '230', '231', '232', '233'],
['259', '258', '257', '256', '255', '254', '253', '252', '251', '250', '249', '248', '247', '246', '245', '244', '243', '242', '241', '240', '239', '238', '237', '236', '235', '234'],
['260', '261', '262', '263', '264', '265', '266', '267', '268', '269', '270', '271', '272', '273', '274', '275', '276', '277', '278', '279', '280', '281', '282', '283', '284', '285'],
['311', '310', '309', '308', '307', '306', '305', '304', '303', '302', '301', '300', '299', '298', '297', '296', '295', '294', '293', '292', '291', '290', '289', '288', '287', '286'],
['312', '313', '314', '315', '316', '317', '318', '319', '320', '321', '322', '323', '324', '325', '326', '327', '328', '329', '330', '331', '332', '333', '334', '335', '336', '337'],
['363', '362', '361', '360', '359', '358', '357', '356', '355', '354', '353', '352', '351', '350', '349', '348', '347', '346', '345', '344', '343', '342', '341', '340', '339', '338'],
['364', '365', '366', '367', '368', '369', '370', '371', '372', '373', '374', '375', '376', '377', '378', '379', '380', '381', '382', '383', '384', '385', '386', '387', '388', '389'],
['415', '414', '413', '412', '411', '410', '409', '408', '407', '406', '405', '404', '403', '402', '401', '400', '399', '398', '397', '396', '395', '394', '393', '392', '391', '390'],
['416', '417', '418', '419', '420', '421', '422', '423', '424', '425', '426', '427', '428', '429', '430', '431', '432', '433', '434', '435', '436', '437', '438', '439', '440', '441']]
| 168.888889 | 183 | 0.405921 | 444 | 3,040 | 2.777027 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.468413 | 0.146053 | 3,040 | 17 | 184 | 178.823529 | 0.006549 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fccd2a530ead0326ad7dd4cc03386520a9df637f | 505 | py | Python | differential/plugins/pterclub.py | funqc/Differential | 738ebf9a2a54ea04498b3394f80d980aad083ea7 | [
"MIT"
] | 52 | 2021-10-12T11:23:45.000Z | 2022-03-18T04:15:03.000Z | differential/plugins/pterclub.py | funqc/Differential | 738ebf9a2a54ea04498b3394f80d980aad083ea7 | [
"MIT"
] | 4 | 2021-10-15T13:58:42.000Z | 2022-03-15T12:42:35.000Z | differential/plugins/pterclub.py | funqc/Differential | 738ebf9a2a54ea04498b3394f80d980aad083ea7 | [
"MIT"
] | 5 | 2021-11-18T05:41:23.000Z | 2022-03-09T03:13:15.000Z | import argparse
from differential.plugins.nexusphp import NexusPHP
class PTerClub(NexusPHP):
    @classmethod
    def get_aliases(cls):
        return 'pter',

    @classmethod
    def get_help(cls):
        return 'PTerClub插件,适用于PTerClub'

    @classmethod
    def add_parser(cls, parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
        return super().add_parser(parser)

    def __init__(self, **kwargs):
        super().__init__(upload_url="https://pterclub.com/upload.php", **kwargs)
| 22.954545 | 84 | 0.691089 | 55 | 505 | 6.109091 | 0.545455 | 0.125 | 0.10119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19604 | 505 | 21 | 85 | 24.047619 | 0.827586 | 0 | 0 | 0.214286 | 0 | 0 | 0.112871 | 0.043564 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.214286 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
fcf5ffc9ef02cc09ed9ac38bacf1f6162d980844 | 114 | py | Python | semanticeditor/utils/general.py | spookylukey/semanticeditor | 82777fe63869c0f38530b9c20696de995d2fa874 | [
"BSD-3-Clause"
] | null | null | null | semanticeditor/utils/general.py | spookylukey/semanticeditor | 82777fe63869c0f38530b9c20696de995d2fa874 | [
"BSD-3-Clause"
] | null | null | null | semanticeditor/utils/general.py | spookylukey/semanticeditor | 82777fe63869c0f38530b9c20696de995d2fa874 | [
"BSD-3-Clause"
] | null | null | null | """
Generic utilities
"""
def any(seq):
for i in seq:
if i:
return True
return False
| 11.4 | 23 | 0.508772 | 15 | 114 | 3.866667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.394737 | 114 | 9 | 24 | 12.666667 | 0.84058 | 0.149123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
fcfc1dfc919229f8945327e93c8aabcb984030c5 | 1,031 | py | Python | tests/mocks/meetup.py | PyColorado/boulderpython.org | fbe6ba581f213523fd7cde0816c6f31edbb4804a | [
"MIT"
] | 5 | 2018-01-18T16:47:53.000Z | 2018-05-19T14:42:18.000Z | tests/mocks/meetup.py | PyColorado/boulderpython.org | fbe6ba581f213523fd7cde0816c6f31edbb4804a | [
"MIT"
] | 28 | 2017-08-11T16:03:17.000Z | 2018-12-02T17:30:19.000Z | tests/mocks/meetup.py | PyColorado/boulderpython.org | fbe6ba581f213523fd7cde0816c6f31edbb4804a | [
"MIT"
] | 7 | 2017-08-11T04:46:05.000Z | 2019-07-11T15:16:45.000Z | # -*- coding: utf-8 -*-
"""
meetup.py
~~~~~~~~~
a mock for the Meetup API client
"""
class MockMeetupGroup:
    def __init__(self, *args, **kwargs):
        self.name = "Mock Meetup Group"
        self.link = "https://www.meetup.com/MeetupGroup/"
        self.next_event = {
            "id": 0,
            "name": "Monthly Meetup",
            "venue": "Galvanize",
            "yes_rsvp_count": 9,
            "time": 1518571800000,  # February 13, 2018 6:30PM
            "utc_offset": -25200000,
        }


class MockMeetupEvents:
    def __init__(self, *args, **kwargs):
        self.results = [MockMeetupGroup().next_event] + [self.events(_) for _ in range(1, 6)]

    def events(self, idx):
        return {k: idx for k in ["id", "venue", "time", "utc_offset"]}


class MockMeetup:
    api_key = ""

    def __init__(self, *args, **kwargs):
        return

    def GetGroup(self, *args, **kwargs):
        return MockMeetupGroup()

    def GetEvents(self, *args, **kwargs):
        return MockMeetupEvents()
| 24.547619 | 93 | 0.553831 | 113 | 1,031 | 4.867257 | 0.522124 | 0.072727 | 0.127273 | 0.081818 | 0.129091 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0.047749 | 0.28904 | 1,031 | 41 | 94 | 25.146341 | 0.702592 | 0.096993 | 0 | 0.12 | 0 | 0 | 0.148352 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0 | 0 | 0.16 | 0.56 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
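Mocks like these are swapped in for the real Meetup client so tests never touch the network; the consumer code only sees the shared interface. A minimal illustration of that substitution pattern (the `latest_event_name` consumer and the `Fake*` classes are hypothetical, not from the test suite):

```python
# Hypothetical consumer that only knows the client interface.
def latest_event_name(client):
    return client.GetGroup().next_event["name"]

# Minimal stand-ins mirroring the shape of the mocks above.
class FakeGroup:
    next_event = {"name": "Monthly Meetup"}

class FakeMeetup:
    def GetGroup(self, *args, **kwargs):
        return FakeGroup()

# The test hands the fake client to the consumer; no network involved.
name = latest_event_name(FakeMeetup())
```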
fcff9cc3e23b570d8492cf00e65943351af677a0 | 1,264 | py | Python | accounts/views.py | manuggz/memes_telegram_bot | 2ed73aac099923d08c89616ec35c965204cac119 | [
"Apache-2.0"
] | null | null | null | accounts/views.py | manuggz/memes_telegram_bot | 2ed73aac099923d08c89616ec35c965204cac119 | [
"Apache-2.0"
] | null | null | null | accounts/views.py | manuggz/memes_telegram_bot | 2ed73aac099923d08c89616ec35c965204cac119 | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from django.shortcuts import render, HttpResponseRedirect
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth import views as auth_views
def crear_cuenta(request):
    if request.user.is_authenticated():
        return HttpResponseRedirect('/chat/')

    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = User.objects.create_user(username=form.cleaned_data['username'],
                                            password=form.cleaned_data['password1'])
            user_autenticado = authenticate(username=user.username, password=form.cleaned_data['password1'])
            login(request, user_autenticado)
            return HttpResponseRedirect('/chat/')
    else:
        form = UserCreationForm()
    return render(request, 'registration/signup.html', {'form': form})


def login_check(request):
    # All authenticated users are allowed to chat,
    # so there is no need to authenticate with another account.
    if request.user.is_authenticated():
        return HttpResponseRedirect('/BotTelegram/')
    return auth_views.login(request)
| 35.111111 | 108 | 0.699367 | 141 | 1,264 | 6.177305 | 0.432624 | 0.057405 | 0.078071 | 0.096441 | 0.277842 | 0.215844 | 0.123995 | 0 | 0 | 0 | 0 | 0.002 | 0.208861 | 1,264 | 35 | 109 | 36.114286 | 0.869 | 0.091772 | 0 | 0.173913 | 0 | 0 | 0.072489 | 0.020961 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.086957 | false | 0.086957 | 0.217391 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
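For completeness, these views would be wired up through a URL configuration roughly like the following — a configuration sketch only: the route patterns and names are illustrative, and the old-style `url()` helper matches the pre-2.0 Django that calling `is_authenticated()` as a method implies.

```python
# hypothetical accounts/urls.py — routes are illustrative, not from the repo
from django.conf.urls import url
from accounts import views

urlpatterns = [
    url(r'^signup/$', views.crear_cuenta, name='signup'),
    url(r'^login-check/$', views.login_check, name='login_check'),
]
```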
1e08bf6c174e80d317e171840bcccc9978e86d5e | 3,798 | py | Python | tests/_event/test_mouse_up_interface.py | ynsnf/apysc | b10ffaf76ec6beb187477d0a744fca00e3efc3fb | [
"MIT"
] | 16 | 2021-04-16T02:01:29.000Z | 2022-01-01T08:53:49.000Z | tests/_event/test_mouse_up_interface.py | ynsnf/apysc | b10ffaf76ec6beb187477d0a744fca00e3efc3fb | [
"MIT"
] | 613 | 2021-03-24T03:37:38.000Z | 2022-03-26T10:58:37.000Z | tests/_event/test_mouse_up_interface.py | simon-ritchie/apyscript | c319f8ab2f1f5f7fad8d2a8b4fc06e7195476279 | [
"MIT"
] | 2 | 2021-06-20T07:32:58.000Z | 2021-12-26T08:22:11.000Z | from random import randint
from typing import Any
from typing import Dict
from retrying import retry
import apysc as ap
from apysc._event.mouse_up_interface import MouseUpInterface
from apysc._expression import expression_data_util
from apysc._type.variable_name_interface import VariableNameInterface
class _TestMouseUp(MouseUpInterface, VariableNameInterface):

    def __init__(self) -> None:
        """Test class for mouse up interface.
        """
        self.variable_name = 'test_mouse_up'


class TestMouseUpInterface:

    def on_mouse_up_1(
            self, e: ap.MouseEvent, options: Dict[str, Any]) -> None:
        """
        Test handler for mouse up event.

        Parameters
        ----------
        e : MouseEvent
            Created event instance.
        options : dict
            Optional arguments dictionary.
        """

    def on_mouse_up_2(
            self, e: ap.MouseEvent, options: Dict[str, Any]) -> None:
        """
        Test handler for mouse up event.

        Parameters
        ----------
        e : MouseEvent
            Created event instance.
        options : dict
            Optional arguments dictionary.
        """

    @retry(stop_max_attempt_number=15, wait_fixed=randint(10, 3000))
    def test__initialize_mouse_up_handlers_if_not_initialized(self) -> None:
        interface_1: MouseUpInterface = MouseUpInterface()
        interface_1._initialize_mouse_up_handlers_if_not_initialized()
        assert interface_1._mouse_up_handlers == {}
        interface_1._initialize_mouse_up_handlers_if_not_initialized()
        assert interface_1._mouse_up_handlers == {}

    @retry(stop_max_attempt_number=15, wait_fixed=randint(10, 3000))
    def test_mouseup(self) -> None:
        expression_data_util.empty_expression()
        interface_1: _TestMouseUp = _TestMouseUp()
        name: str = interface_1.mouseup(
            handler=self.on_mouse_up_1, options={'msg': 'Hello!'})
        assert name in interface_1._mouse_up_handlers
        expression: str = \
            expression_data_util.get_current_event_handler_scope_expression()
        expected: str = f'function {name}('
        assert expected in expression
        expression = expression_data_util.get_current_expression()
        expected = (
            f'{interface_1.variable_name}.mouseup({name});'
        )
        assert expected in expression

    @retry(stop_max_attempt_number=15, wait_fixed=randint(10, 3000))
    def test_unbind_mouseup(self) -> None:
        expression_data_util.empty_expression()
        interface_1: _TestMouseUp = _TestMouseUp()
        name: str = interface_1.mouseup(handler=self.on_mouse_up_1)
        interface_1.unbind_mouseup(handler=self.on_mouse_up_1)
        assert interface_1._mouse_up_handlers == {}
        expression: str = expression_data_util.get_current_expression()
        expected: str = (
            f'{interface_1.variable_name}.off('
            f'"{ap.MouseEventType.MOUSEUP.value}", {name});'
        )
        assert expected in expression

    @retry(stop_max_attempt_number=15, wait_fixed=randint(10, 3000))
    def test_unbind_mouseup_all(self) -> None:
        expression_data_util.empty_expression()
        interface_1: _TestMouseUp = _TestMouseUp()
        interface_1.mouseup(handler=self.on_mouse_up_1)
        interface_1.mouseup(handler=self.on_mouse_up_2)
        interface_1.unbind_mouseup_all()
        # `assert` was missing here, making the comparison a silent no-op.
        assert interface_1._mouse_up_handlers == {}
        expression: str = expression_data_util.get_current_expression()
        expected: str = (
            f'{interface_1.variable_name}.off('
            f'"{ap.MouseEventType.MOUSEUP.value}");'
        )
        assert expected in expression
| 36.519231 | 78 | 0.651659 | 428 | 3,798 | 5.415888 | 0.182243 | 0.060397 | 0.062123 | 0.02157 | 0.72692 | 0.704055 | 0.701467 | 0.651855 | 0.640207 | 0.640207 | 0 | 0.021019 | 0.260927 | 3,798 | 103 | 79 | 36.873786 | 0.804774 | 0.089784 | 0 | 0.409091 | 0 | 0 | 0.071117 | 0.056457 | 0 | 0 | 0 | 0 | 0.121212 | 1 | 0.106061 | false | 0 | 0.121212 | 0 | 0.257576 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
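The `@retry(stop_max_attempt_number=..., wait_fixed=...)` decorator from the `retrying` package re-runs a flaky test until it passes or the attempt budget is exhausted. A stdlib sketch of that behaviour (a simplified stand-in, not the real `retrying` implementation):

```python
import functools
import time

def retry(stop_max_attempt_number=3, wait_fixed=0):
    """Re-run the wrapped function until it stops raising (wait_fixed in ms)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, stop_max_attempt_number + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == stop_max_attempt_number:
                        raise  # budget exhausted: surface the last failure
                    time.sleep(wait_fixed / 1000.0)
        return wrapper
    return decorator

calls = {"count": 0}

@retry(stop_max_attempt_number=3, wait_fixed=0)
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise AssertionError("not ready yet")
    return "ok"

result = flaky()  # succeeds on the third attempt
```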
1e0a84eff400caa1b64f0d51ecb416d92de0482f | 378 | py | Python | margen/segment02/pitch.py | DaviRaubach/la_otra_margen | 5f7f2745d11a4dc6cd824236ad2af9af150289b2 | [
"CC0-1.0"
] | null | null | null | margen/segment02/pitch.py | DaviRaubach/la_otra_margen | 5f7f2745d11a4dc6cd824236ad2af9af150289b2 | [
"CC0-1.0"
] | null | null | null | margen/segment02/pitch.py | DaviRaubach/la_otra_margen | 5f7f2745d11a4dc6cd824236ad2af9af150289b2 | [
"CC0-1.0"
] | null | null | null | import abjad
I_pitches = {
"matA": abjad.PitchSegment([1, 6, 11, -6, -1, 4]),
"matB": abjad.PitchSegment([1, 6, 11]),
}
II_pitches = {
"matA": abjad.PitchSegment([4, -1, -6, -10, -4, 1]),
"matB": abjad.PitchSegment([4, -1, -6])
}
III_pitches = {
"matA": abjad.PitchSegment([1, -4, -10, 11, 6, 1]),
"matB": abjad.PitchSegment([1, -4, -10])
}
| 25.2 | 56 | 0.529101 | 53 | 378 | 3.716981 | 0.264151 | 0.517767 | 0.365482 | 0.426396 | 0.741117 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112245 | 0.222222 | 378 | 14 | 57 | 27 | 0.557823 | 0 | 0 | 0 | 0 | 0 | 0.063492 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1e0bccc81b0c212a8ab41b2597bdffc455ed2eb3 | 105 | py | Python | app/schema/token.py | jburckel/fastapi-login-example | f2e2574dd8f4fce31cae3237bb92d90315350137 | [
"MIT"
] | null | null | null | app/schema/token.py | jburckel/fastapi-login-example | f2e2574dd8f4fce31cae3237bb92d90315350137 | [
"MIT"
] | null | null | null | app/schema/token.py | jburckel/fastapi-login-example | f2e2574dd8f4fce31cae3237bb92d90315350137 | [
"MIT"
] | 1 | 2020-09-21T12:44:43.000Z | 2020-09-21T12:44:43.000Z | from .mixin import AppSchemaBase
class Token(AppSchemaBase):
    access_token: str
    token_type: str
| 15 | 32 | 0.752381 | 13 | 105 | 5.923077 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 105 | 6 | 33 | 17.5 | 0.905882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
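`AppSchemaBase` is defined elsewhere in the package (in `.mixin`) and is not shown here; assuming it behaves like a pydantic `BaseModel` — typed class attributes become constructor fields — the schema can be sketched with a stdlib dataclass stand-in:

```python
from dataclasses import dataclass, asdict

# Stand-in for AppSchemaBase; a real pydantic BaseModel would add
# validation and serialization on top of this field-to-constructor mapping.
@dataclass
class Token:
    access_token: str
    token_type: str

tok = Token(access_token="abc123", token_type="bearer")
payload = asdict(tok)  # dict form, as a JSON response body would need
```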
1e20d14aa5cadf888513af055af81b5931b4e301 | 3,036 | py | Python | ExpSettings/Dataset/SyntheticImages/Dataset.py | gokhangg/Uncertainix | feb86dc9a8152bc133f99c56d8f15bf760754218 | [
"Apache-2.0"
] | null | null | null | ExpSettings/Dataset/SyntheticImages/Dataset.py | gokhangg/Uncertainix | feb86dc9a8152bc133f99c56d8f15bf760754218 | [
"Apache-2.0"
] | null | null | null | ExpSettings/Dataset/SyntheticImages/Dataset.py | gokhangg/Uncertainix | feb86dc9a8152bc133f99c56d8f15bf760754218 | [
"Apache-2.0"
] | null | null | null | # *=========================================================================
# *
# * Copyright Erasmus MC Rotterdam and contributors
# * This software is licensed under the Apache 2 license, quoted below.
# * Copyright 2019 Erasmus MC Rotterdam.
# * Copyright 2019 Gokhan Gunay <g.gunay@erasmsumc.nl>
# * Licensed under the Apache License, Version 2.0 (the "License"); you may not
# * use this file except in compliance with the License. You may obtain a copy of
# * the License at
# * http: //www.apache.org/licenses/LICENSE-2.0
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# * License for the specific language governing permissions and limitations under
# * the License.
# *=========================================================================
from Parameter.Parameter import Parameter as Par
from ExpSettings.DatasetBase import DatasetBase
from ExpSettings.Dataset.SyntheticImages.Environment import Environment
import os
__selfPath = os.path.dirname(os.path.realpath(__file__))
def GetParameters():
    mapFunct = lambda a: pow(2, a)
    """Simulated Dataset"""
    par = []
    """Simulated Dataset"""
    par1 = Par("Metric1Weight", "Gauss", 4.12, 2.65)
    par1.SetMapFunct(mapFunct)
    """Simulated Dataset"""
    par2 = Par("FinalGridSpacingInPhysicalUnits", "Gauss", 4.37, 0.55)
    par2.SetMapFunct(mapFunct)
    par.append(par1)
    par.append(par2)
    return par


"""
@brief: Used to generate weights file from PCE executable for registration sampling locations.
@return: NA.
"""
# Renamed from __DATASET_SIZE: a leading double underscore is name-mangled when
# referenced inside the Dataset class below, which would raise NameError.
_DATASET_SIZE = 30


def GetFixedImage(ind: int):
    return __selfPath + "/Images/ImFlatN.mhd"


def GetFixedImageSegmentation(ind: int):
    return __selfPath + "/Images/ImFlat.mhd"


def GetMovingImage(ind: int):
    return __selfPath + "/Images/Im" + str(ind) + "N.mhd"


def GetMovingImageSegmentation(ind: int):
    return __selfPath + "/Images/Im" + str(ind) + ".mhd"


def GetDataset(ind: int):
    retVal = {}
    retVal.update({"fixedIm": GetFixedImage(ind)})
    retVal.update({"movingIm": GetMovingImage(ind)})
    retVal.update({"fixedSeg": GetFixedImageSegmentation(ind)})
    retVal.update({"movingSeg": GetMovingImageSegmentation(ind)})
    return retVal


def GetPceSettingsFile():
    return __selfPath + "/PceSettings.json"


class Dataset(DatasetBase):

    def __init__(self):
        pass

    def GetDatasetSize(self):
        return _DATASET_SIZE

    def GetDatasetWithIndex(self, ind: int):
        return GetDataset(ind)

    def GetMethodExtensionParams(self, ind: int):
        return {"commandlineParameters": {}}

    def GetModeExtensionParams(self, ind: int):
        return {"sampleSize": 100, "batchSize": 50, "isVector": True}

    def GetParameters(self, datasetIndex):
        return GetParameters()

    def GetEnvironment(self, rootDir):
        return Environment(rootDir)
| 30.666667 | 96 | 0.665349 | 335 | 3,036 | 5.952239 | 0.471642 | 0.024072 | 0.042126 | 0.04012 | 0.060181 | 0.034102 | 0.034102 | 0.034102 | 0 | 0 | 0 | 0.016084 | 0.18083 | 3,036 | 98 | 97 | 30.979592 | 0.785686 | 0.300725 | 0 | 0 | 0 | 0 | 0.113198 | 0.027126 | 0 | 0 | 0 | 0 | 0 | 1 | 0.291667 | false | 0.020833 | 0.083333 | 0.229167 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
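The `SetMapFunct(mapFunct)` calls above attach a `2**a` transform, meaning the Gaussian mean and sigma are specified in log2 space and mapped back to physical units when sampled. A toy `Parameter` stand-in showing the effect (the class shape here is an assumption, not the real `Parameter.Parameter` implementation):

```python
class Parameter:
    """Toy stand-in: a named distribution with a sample-space map function
    (identity until SetMapFunct is called)."""

    def __init__(self, name, dist, mean, sigma):
        self.name, self.dist, self.mean, self.sigma = name, dist, mean, sigma
        self.map_funct = lambda a: a

    def SetMapFunct(self, funct):
        self.map_funct = funct

    def Realize(self, sample):
        # A raw (log2-space) sample is pushed through the map function.
        return self.map_funct(sample)


par1 = Parameter("Metric1Weight", "Gauss", 4.12, 2.65)
par1.SetMapFunct(lambda a: pow(2, a))

# A sample at the mean maps to 2**4.12 in the parameter's real units.
value = par1.Realize(4.12)
```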
1e2c70f4123e227934c612aca4b3f6707598fb24 | 198 | py | Python | accounts/forms/__init__.py | BloodLagbe/blood_lagbe | 597fcabea3523c9932381c591f65c0a91cc5f74c | [
"Apache-2.0"
] | 3 | 2021-04-24T16:30:09.000Z | 2021-06-19T08:02:22.000Z | accounts/forms/__init__.py | BloodLagbe/blood_lagbe | 597fcabea3523c9932381c591f65c0a91cc5f74c | [
"Apache-2.0"
] | 16 | 2021-04-24T07:44:34.000Z | 2021-04-28T17:12:25.000Z | accounts/forms/__init__.py | BloodLagbe/blood_lagbe | 597fcabea3523c9932381c591f65c0a91cc5f74c | [
"Apache-2.0"
] | 4 | 2021-04-24T23:42:51.000Z | 2021-06-20T16:53:00.000Z | from .forms import(
    LoginForm,
    RegistrationForm,
    ProfileForm,
)
from .search_doner import SearchDoner

# __all__ entries must be strings: `from accounts.forms import *` looks each
# name up with getattr(), and non-string entries raise TypeError.
__all__ = [
    "LoginForm",
    "RegistrationForm",
    "ProfileForm",
    "SearchDoner",
]
| 14.142857 | 37 | 0.69697 | 16 | 198 | 8.3125 | 0.625 | 0.37594 | 0.541353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242424 | 198 | 13 | 38 | 15.230769 | 0.886667 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
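Note that a star import resolves `__all__` by name: Python iterates the list and fetches each entry with `getattr`, which is why the entries should be strings rather than the class objects themselves. A small demonstration with a throwaway module:

```python
import types

# Build a module in memory with one exported name.
mod = types.ModuleType("forms_demo")
exec("class LoginForm:\n    pass\n\n__all__ = ['LoginForm']", mod.__dict__)

# This loop is essentially what `from forms_demo import *` does internally.
namespace = {}
for name in mod.__all__:
    namespace[name] = getattr(mod, name)
```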
1e33bfecf12d884b056149d9e05862e880469b84 | 307 | py | Python | bh_modules/erlangcase.py | jfcherng-sublime/ST-BracketHighlighter | 223ffd4ceafd58686503e3328934c039e959a88c | [
"Unlicense",
"MIT"
] | 1,047 | 2015-01-01T16:11:42.000Z | 2022-03-12T08:29:13.000Z | bh_modules/erlangcase.py | jfcherng-sublime/ST-BracketHighlighter | 223ffd4ceafd58686503e3328934c039e959a88c | [
"Unlicense",
"MIT"
] | 374 | 2015-01-07T02:47:55.000Z | 2022-03-24T12:59:09.000Z | bh_modules/erlangcase.py | jfcherng-sublime/ST-BracketHighlighter | 223ffd4ceafd58686503e3328934c039e959a88c | [
"Unlicense",
"MIT"
] | 223 | 2015-01-11T04:21:06.000Z | 2021-10-05T15:00:32.000Z | """
BracketHighlighter.
Copyright (c) 2013 - 2016 Isaac Muse <isaacmuse@gmail.com>
License: MIT
"""
from BracketHighlighter.bh_plugin import import_module
lowercase = import_module("bh_modules.lowercase")
def validate(*args):
"""Check if bracket is lowercase."""
return lowercase.validate(*args)
| 21.928571 | 58 | 0.749186 | 37 | 307 | 6.108108 | 0.72973 | 0.106195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029963 | 0.130293 | 307 | 13 | 59 | 23.615385 | 0.816479 | 0.400651 | 0 | 0 | 0 | 0 | 0.116959 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
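`bh_plugin.import_module` presumably resolves a dotted module name at runtime, much like the stdlib `importlib.import_module`, and this file then delegates `validate()` to the loaded module. A self-contained sketch of the same pattern, using `json` as a stand-in target module:

```python
import importlib

# Load a module by dotted name at runtime, as the plugin loader does.
json_mod = importlib.import_module("json")

def validate(payload):
    """Check if payload parses as JSON (delegates to the loaded module)."""
    try:
        json_mod.loads(payload)
        return True
    except ValueError:
        return False
```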
1e35c32ef63c48b1ca0a41c93b46faa23eaeb61c | 275 | py | Python | tests/projects/flask1/main.py | mblackgeo/lambdarado_py | 9a1e8538b569bbfdf0b0de11b42e97290812c826 | [
"MIT"
] | 4 | 2021-05-11T03:50:57.000Z | 2022-01-20T14:20:44.000Z | tests/projects/flask1/main.py | mblackgeo/lambdarado_py | 9a1e8538b569bbfdf0b0de11b42e97290812c826 | [
"MIT"
] | 1 | 2022-01-20T14:21:56.000Z | 2022-01-20T14:22:54.000Z | tests/projects/flask1/main.py | mblackgeo/lambdarado_py | 9a1e8538b569bbfdf0b0de11b42e97290812c826 | [
"MIT"
] | 1 | 2022-02-28T10:06:35.000Z | 2022-02-28T10:06:35.000Z | from flask import Flask
from lambdarado import start
def get_app():
    app = Flask(__name__)

    @app.route('/a')
    def get_a():
        return 'AAA'

    @app.route('/b')
    def get_b():
        return 'BBB'

    return app
print("RUNNING main.py")
start(get_app)
| 12.5 | 28 | 0.589091 | 39 | 275 | 3.948718 | 0.487179 | 0.116883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276364 | 275 | 21 | 29 | 13.095238 | 0.773869 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.153846 | 0.615385 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
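What `@app.route` does here can be pictured as a decorator that files each view function in a rule table keyed by path. A toy stand-in illustrating that registration pattern (not Flask's actual implementation):

```python
class TinyApp:
    """Minimal route table: path -> view function."""

    def __init__(self):
        self.rules = {}

    def route(self, path):
        def register(func):
            self.rules[path] = func  # decoration time: record the view
            return func
        return register

    def dispatch(self, path):
        return self.rules[path]()   # request time: look up and call


app = TinyApp()

@app.route('/a')
def get_a():
    return 'AAA'

response = app.dispatch('/a')
```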
1e3c1dd9c86f7c8cf4c2d93e5bb0dcb96cc42bb8 | 7,801 | py | Python | src/tests/test_inputCheck.py | retsilagracias/horoscope-cli | fcbaf11a4db21fcf609b480f9800943c8cc37815 | [
"MIT"
] | null | null | null | src/tests/test_inputCheck.py | retsilagracias/horoscope-cli | fcbaf11a4db21fcf609b480f9800943c8cc37815 | [
"MIT"
] | null | null | null | src/tests/test_inputCheck.py | retsilagracias/horoscope-cli | fcbaf11a4db21fcf609b480f9800943c8cc37815 | [
"MIT"
] | null | null | null | from horoscopecli.inputCheck import validInputCategoryOption, validInputSign, validInputDateOption
def test_belier_validInputSign():
    resultWithoutAccent = validInputSign("belier")
    resultWithAccent = validInputSign("bélier")
    assert resultWithAccent[0] == True
    assert resultWithAccent[1] == "aries"
    assert resultWithAccent[2] == "belier"
    assert resultWithoutAccent[0] == True
    assert resultWithoutAccent[1] == "aries"
    assert resultWithoutAccent[2] == "belier"


def test_Taureau_validInputSign():
    result = validInputSign("Taureau")
    assert result[0] == True
    assert result[1] == "taurus"
    assert result[2] == "taureau"


def test_gemeaux_validInputSign():
    resultWithoutAccent = validInputSign("gemeaux")
    resultWithAccent = validInputSign("gémeaux")
    assert resultWithAccent[0] == True
    assert resultWithAccent[1] == "gemini"
    assert resultWithAccent[2] == "gemeaux"
    assert resultWithoutAccent[0] == True
    assert resultWithoutAccent[1] == "gemini"
    assert resultWithoutAccent[2] == "gemeaux"


def test_CaNcer_validInputSign():
    result = validInputSign("CaNcer")
    assert result[0] == True
    assert result[1] == "cancer"
    assert result[2] == "cancer"


def test_lion_validInputSign():
    result = validInputSign("lion")
    assert result[0] == True
    assert result[1] == "leo"
    assert result[2] == "lion"


def test_viergE_validInputSign():
    result = validInputSign("viergE")
    assert result[0] == True
    assert result[1] == "virgo"
    assert result[2] == "vierge"


def test_balance_validInputSign():
    result = validInputSign("balance")
    assert result[0] == True
    assert result[1] == "libra"
    assert result[2] == "balance"


def test_Scorpion_validInputSign():
    result = validInputSign("Scorpion")
    assert result[0] == True
    assert result[1] == "scorpio"
    assert result[2] == "scorpion"


def test_sagiTtaire_validInputSign():
    result = validInputSign("sagiTtaire")
    assert result[0] == True
    assert result[1] == "sagittarius"
    assert result[2] == "sagittaire"


def test_capricorne_validInputSign():
    result = validInputSign("capricorne")
    assert result[0] == True
    assert result[1] == "capricorn"
    assert result[2] == "capricorne"


def test_Verseau_validInputSign():
    result = validInputSign("Verseau")
    assert result[0] == True
    assert result[1] == "aquarius"
    assert result[2] == "verseau"


def test_PoIsSoNs_validInputSign():
    result = validInputSign("PoIsSoNs")
    assert result[0] == True
    assert result[1] == "pisces"
    assert result[2] == "poissons"


def test_Aries_validInputSign():
    result = validInputSign("Aries")
    assert result[0] == True
    assert result[1] == ("aries")
    assert result[2] == ("belier")


def test_taurus_validInputSign():
    result = validInputSign("taurus")
    assert result[0] == True
    assert result[1] == ("taurus")
    assert result[2] == ("taureau")


def test_GeMiNi_validInputSign():
    result = validInputSign("GeMiNi")
    assert result[0] == True
    assert result[1] == ("gemini")
    assert result[2] == ("gemeaux")


def test_cancer_validInputSign():
    result = validInputSign("cancer")
    assert result[0] == True
    assert result[1] == ("cancer")
    assert result[2] == ("cancer")


def test_leo_validInputSign():
    result = validInputSign("leo")
    assert result[0] == True
    assert result[1] == ("leo")
    assert result[2] == ("lion")


def test_viergO_validInputSign():
    result = validInputSign("virgO")
    assert result[0] == True
    assert result[1] == ("virgo")
    assert result[2] == ("vierge")


def test_libra_validInputSign():
    result = validInputSign("libra")
    assert result[0] == True
    assert result[1] == ("libra")
    assert result[2] == ("balance")


def test_Scorpio_validInputSign():
    result = validInputSign("scorpio")
    assert result[0] == True
    assert result[1] == ("scorpio")
    assert result[2] == ("scorpion")


def test_sagittarius_validInputSign():
    result = validInputSign("sagittarius")
    assert result[0] == True
    assert result[1] == ("sagittarius")
    assert result[2] == ("sagittaire")


def test_CapriCorn_validInputSign():
    result = validInputSign("CapriCorn")
    assert result[0] == True
    assert result[1] == ("capricorn")
    assert result[2] == ("capricorne")


def test_aquarius_validInputSign():
    result = validInputSign("aquarius")
    assert result[0] == True
    assert result[1] == ("aquarius")
    assert result[2] == ("verseau")


def test_Pisces_validInputSign():
    result = validInputSign("Pisces")
    assert result[0] == True
    assert result[1] == ("pisces")
    assert result[2] == ("poissons")


def test_InvalidSign_validInputSign():
    result = validInputSign("InvalidSign")
    assert result[0] == False


def test_gemeau_validInputSign():
    result = validInputSign("gemeau")
    assert result[0] == False


def test_noOption_validInputDateOption():
    result = validInputDateOption(False, False, False, False)
    assert result[0] == True
    assert result[1] == "today"


def test_today_validInputDateOption():
    result = validInputDateOption(True, False, False, False)
    assert result[0] == True
    assert result[1] == "today"


def test_week_validInputDateOption():
    result = validInputDateOption(False, True, False, False)
    assert result[0] == True
    assert result[1] == "week"


def test_month_validInputDateOption():
    result = validInputDateOption(False, False, True, False)
    assert result[0] == True
    assert result[1] == "month"


def test_year_validInputDateOption():
    result = validInputDateOption(False, False, False, True)
    assert result[0] == True
    assert result[1] == "year"


def test_twoOptions_validInputDateOption():
    result = validInputDateOption(False, True, True, False)
    assert result[0] == False


def test_threeOptions_validInputDateOption():
    result = validInputDateOption(True, True, True, False)
    assert result[0] == False


def test_fourOptions_validInputDateOption():
    result = validInputDateOption(True, True, True, True)
    assert result[0] == False


def test_noOption_validInputCategoryOption():
    result = validInputCategoryOption(False, False, False, False, False)
    assert result[0] == True
    assert result[1] == "self"


def test_love_validInputCategoryOption():
    result = validInputCategoryOption(True, False, False, False, False)
    assert result[0] == True
    assert result[1] == "love"


def test_work_validInputCategoryOption():
    result = validInputCategoryOption(False, True, False, False, False)
    assert result[0] == True
    assert result[1] == "work"


def test_finance_validInputCategoryOption():
    result = validInputCategoryOption(False, False, True, False, False)
    assert result[0] == True
    assert result[1] == "finance"


def test_self_validInputCategoryOption():
    result = validInputCategoryOption(False, False, False, True, False)
    assert result[0] == True
    assert result[1] == "self"


def test_family_validInputCategoryOption():
    result = validInputCategoryOption(False, False, False, False, True)
    assert result[0] == True
    assert result[1] == "family"


def test_twoOptions_validInputCategoryOption():
    result = validInputCategoryOption(False, True, False, True, False)
    assert result[0] == False


def test_threeOptions_validInputCategoryOption():
    result = validInputCategoryOption(True, True, False, True, False)
    assert result[0] == False


def test_fourOptions_validInputCategoryOption():
    result = validInputCategoryOption(False, True, True, True, True)
    assert result[0] == False


def test_fiveOptions_validInputCategoryOption():
    result = validInputCategoryOption(True, True, True, True, True)
    assert result[0] == False
| 31.840816 | 98 | 0.692475 | 821 | 7,801 | 6.472594 | 0.073082 | 0.219044 | 0.102747 | 0.10557 | 0.660519 | 0.589575 | 0.516184 | 0.435077 | 0.428679 | 0.405721 | 0 | 0.017018 | 0.178951 | 7,801 | 244 | 99 | 31.971311 | 0.812646 | 0 | 0 | 0.25 | 0 | 0 | 0.076272 | 0 | 0 | 0 | 0 | 0 | 0.545 | 1 | 0.22 | false | 0 | 0.005 | 0 | 0.225 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
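The two dozen near-identical sign tests above could be collapsed into a single table-driven loop. A sketch of that pattern against a stub validator (the stub's lookup table covers only two signs for illustration; the real `validInputSign` lives in `horoscopecli.inputCheck`):

```python
# Stub mirroring validInputSign's (ok, english_name, french_name) shape.
SIGN_TABLE = {
    "belier": ("aries", "belier"),
    "taureau": ("taurus", "taureau"),
}

def valid_input_sign_stub(sign):
    key = sign.lower()
    if key in SIGN_TABLE:
        return (True,) + SIGN_TABLE[key]
    return (False, None, None)

# One loop replaces many copy-pasted test functions.
cases = [("Belier", "aries"), ("TAUREAU", "taurus"), ("gemeau", None)]
results = [valid_input_sign_stub(name)[1] for name, _ in cases]
```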
1e4bd3839eab345fc45832ae908f28c4ea53ff3c | 1,052 | py | Python | edmunds/profiler/drivers/basedriver.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 4 | 2017-09-07T13:39:50.000Z | 2018-05-31T16:14:50.000Z | edmunds/profiler/drivers/basedriver.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 103 | 2017-03-19T15:58:21.000Z | 2018-07-11T20:36:17.000Z | edmunds/profiler/drivers/basedriver.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 2 | 2017-10-14T15:20:11.000Z | 2018-04-20T09:55:44.000Z |
from edmunds.globals import abc, ABC
class BaseDriver(ABC):
"""
The base driver for profiler-drivers
"""
def __init__(self, app):
"""
Initiate the instance
:param app: The application
:type app: Edmunds.Application
"""
self._app = app
@abc.abstractmethod
def process(self, profiler, start, end, environment, suggestive_file_name):
"""
Process the results
:param profiler: The profiler
:type profiler: cProfile.Profile
:param start: Start of profiling
:type start: int
:param end: End of profiling
:type end: int
:param environment: The environment
:type environment: Environment
:param suggestive_file_name: A suggestive file name
:type suggestive_file_name: str
"""
pass
| 30.057143 | 79 | 0.495247 | 93 | 1,052 | 5.483871 | 0.419355 | 0.109804 | 0.141176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.442015 | 1,052 | 34 | 80 | 30.941176 | 0.868825 | 0.603612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
1e5c414c9596bddab6137c7c0d788734d49dba78 | 2,653 | py | Python | web/tracker/models.py | webisteme/punkmoney | 79253f8a37c80789e22c5c63eb6c88ccade61286 | [
"MIT"
] | 1 | 2018-10-01T11:41:57.000Z | 2018-10-01T11:41:57.000Z | web/tracker/models.py | webisteme/punkmoney | 79253f8a37c80789e22c5c63eb6c88ccade61286 | [
"MIT"
] | null | null | null | web/tracker/models.py | webisteme/punkmoney | 79253f8a37c80789e22c5c63eb6c88ccade61286 | [
"MIT"
] | null | null | null | from django.db import models
class events(models.Model):
    id = models.AutoField(primary_key=True)
    note_id = models.BigIntegerField(null=True, blank=True)
    tweet_id = models.BigIntegerField()
    type = models.IntegerField(null=True, blank=True)
    timestamp = models.DateTimeField()
    from_user = models.CharField(max_length=90)
    to_user = models.CharField(max_length=90)

    class Meta:
        db_table = u'tracker_events'


class notes(models.Model):
    id = models.BigIntegerField(max_length=30, primary_key=True)
    issuer = models.CharField(max_length=90, blank=True)
    bearer = models.CharField(max_length=90, blank=True)
    promise = models.CharField(max_length=420, blank=True)
    created = models.DateTimeField(null=True, blank=True)
    expiry = models.DateTimeField(null=True, blank=True)
    status = models.IntegerField(null=True, blank=True)
    transferable = models.IntegerField(null=True, blank=True)
    type = models.IntegerField(null=True, blank=True)
    conditional = models.CharField(max_length=420, null=True, blank=True)

    class Meta:
        db_table = u'tracker_notes'


class trustlist(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.CharField(max_length=90, blank=True)
    trusted = models.CharField(max_length=90, blank=True)

    class Meta:
        db_table = u'tracker_trust_list'


class tags(models.Model):
    id = models.AutoField(primary_key=True)
    tag = models.CharField(max_length=30)

    class Meta:
        db_table = u'tracker_tags'


class tweets(models.Model):
    id = models.AutoField(primary_key=True)
    timestamp = models.DateTimeField(null=True, blank=True)
    tweet_id = models.BigIntegerField(null=True, blank=True)
    author = models.CharField(max_length=90, blank=True)
    content = models.CharField(max_length=420, blank=True)
    reply_to_id = models.BigIntegerField(null=True, blank=True)
    parsed = models.CharField(max_length=1, null=True, blank=True)
    url = models.CharField(max_length=420, null=True, blank=True)
    display_url = models.CharField(max_length=420, null=True, blank=True)
    img_url = models.CharField(max_length=420, null=True, blank=True)
    tag_1 = models.IntegerField(null=True, blank=True)
    tag_2 = models.IntegerField(null=True, blank=True)
    tag_3 = models.IntegerField(null=True, blank=True)

    class Meta:
        db_table = u'tracker_tweets'


class users(models.Model):
    id = models.AutoField(primary_key=True)
    username = models.CharField(max_length=90, blank=True)
    karma = models.IntegerField(null=True, blank=True)

    class Meta:
        db_table = u'tracker_users' | 42.111111 | 73 | 0.720694 | 359 | 2,653 | 5.192201 | 0.175487 | 0.130365 | 0.132511 | 0.173283 | 0.774142 | 0.772532 | 0.626073 | 0.354077 | 0.175429 | 0.160944 | 0 | 0.018962 | 0.165096 | 2,653 | 63 | 74 | 42.111111 | 0.822573 | 0 | 0 | 0.22807 | 0 | 0 | 0.03165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017544 | 0 | 0.894737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
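The bare integer `status`/`type` columns in these models invite named constants so that call sites stay readable. A hypothetical `IntEnum` mapping (the names and values here are illustrative, not taken from the app):

```python
from enum import IntEnum

# Hypothetical status codes for the notes.status column; the real app's
# numbering is not shown in models.py.
class NoteStatus(IntEnum):
    OPEN = 1
    TRANSFERRED = 2
    EXPIRED = 3

status = NoteStatus(2)  # decode a raw database value back to a named member
```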
1e7c67e0a9f238cd366a07d35c3544885629f78b | 372 | py | Python | flashtext/flashtextDemo.py | polarbear0330/i_like_demos | e713b9833da4d2126657fe7605537fd4aaee11ef | [
"Apache-2.0"
] | 1 | 2017-05-09T09:28:35.000Z | 2017-05-09T09:28:35.000Z | flashtext/flashtextDemo.py | polarbear0330/i_like_demos | e713b9833da4d2126657fe7605537fd4aaee11ef | [
"Apache-2.0"
] | null | null | null | flashtext/flashtextDemo.py | polarbear0330/i_like_demos | e713b9833da4d2126657fe7605537fd4aaee11ef | [
"Apache-2.0"
] | null | null | null | from flashtext import KeywordProcessor
keywordProcessor = KeywordProcessor()
keywordProcessor.add_keyword_from_file("keywords.txt")
keywordProcessor.add_keyword("orange", "watermelon")
print(" ")
print(keywordProcessor.get_all_keywords())
print(keywordProcessor.extract_keywords("I like apple and Banana!"))
print(keywordProcessor.replace_keywords("I like orange!")) | 26.571429 | 68 | 0.817204 | 40 | 372 | 7.4 | 0.525 | 0.324324 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069892 | 372 | 14 | 69 | 26.571429 | 0.855491 | 0 | 0 | 0 | 0 | 0 | 0.179625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
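Conceptually, `replace_keywords` does whole-word substitution: `add_keyword("orange", "watermelon")` registers "watermelon" as the clean name for "orange", so occurrences of "orange" are rewritten. FlashText does this with a trie in a single O(n) pass; a regex sketch of the same observable behaviour:

```python
import re

def replace_keywords(text, mapping):
    """Whole-word replacement, like the FlashText demo above (toy version)."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(0)], text)

out = replace_keywords("I like orange!", {"orange": "watermelon"})
```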
# ---- scheduling/create_scheduling_data/agent.py (CORE-Robotics-Lab/Personalized_Neural_Trees, MIT) ----
import random
import numpy as np
from scheduling.create_scheduling_data.constants import *
class Agent:
    def __init__(self, v=None, z=None, name=""):
        if v is None:
            self.v = random.randint(0, 10)  # velocity
        else:
            self.v = v
        if z is None:
            self.z = (random.randint(0, grid_size_x - 1), random.randint(0, grid_size_y - 1))
self.orig_location = self.z
else:
self.z = z
self.orig_location = z
self.isBusy = False
self.name = name
self.curr_finish_time = 0
self.curr_task = None
self.task_list = []
self.orientation = np.random.uniform(0, np.pi)
self.task_event_dict = {} # task_num: [start_time, expected_finish_time]
def set_orientation(self, new_orientation):
self.orientation = new_orientation
def getv(self):
return self.v
def getz(self):
return self.z
def getisBusy(self):
return self.isBusy
def changebusy(self,b):
self.isBusy = b
def updateAgentLocation(self, new_location):
self.z = new_location
def getOrientation(self):
return self.orientation
def getName(self):
return self.name
def setFinishTime(self, finish_time):
self.curr_finish_time = finish_time
def getFinishTime(self):
return self.curr_finish_time
def setCurrTask(self, task):
self.curr_task = task
self.task_list.append(task)
def getCurrTask(self):
return self.curr_task
# TODO: add method to print all traits
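The default initialisation logic of `Agent.__init__` above can be exercised standalone; here `grid_size_x` and `grid_size_y` stand in for the values imported from the constants module (their real values are not shown in this file, so 5x5 is an assumption):

```python
import random

# Stand-ins for grid_size_x / grid_size_y, which the module above imports
# from scheduling.create_scheduling_data.constants (hypothetical values)
grid_size_x, grid_size_y = 5, 5

random.seed(0)
# Mirrors Agent.__init__'s defaults when v and z are not given
v = random.randint(0, 10)                 # random velocity in [0, 10]
z = (random.randint(0, grid_size_x - 1),
     random.randint(0, grid_size_y - 1))  # random grid location
print(v, z)
```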
# ---- SVassembly/__init__.py (AV321/SVPackage, MIT) ----
from SVassembly import bedpe2window_f
from bedpe2window_f import bedpe2window
from SVassembly import get_shared_bcs_f
from get_shared_bcs_f import get_shared_bcs
from SVassembly import assign_sv_haps_f
from assign_sv_haps_f import assign_sv_haps
from SVassembly import count_bcs_f
from count_bcs_f import count_bcs #can't have "-"
from SVassembly import map_to_genome_f
from map_to_genome_f import map_to_genome
from SVassembly import extract_reads_2_0_new
from extract_reads_2_0_new import extract_readsv2_0_new #LR v2.0
from SVassembly import extract_reads_2_0_old
from extract_reads_2_0_old import extract_readsv2_0_old #LR v2.0
from SVassembly import extract_reads_by_barcode_2_1_new #uncomment these
from extract_reads_by_barcode_2_1_new import extract_readsv2_1_new #LR v2.1
from SVassembly import extract_reads_by_barcode_2_1_old #uncomment these
from extract_reads_by_barcode_2_1_old import extract_readsv2_1_old #LR v2.1
from SVassembly import InterestingContigs
from InterestingContigs import interestingContigs
from SVassembly import filt_svs
from filt_svs import filter_svs
from SVassembly import align
from align import mappyAlign
from SVassembly import mappy_contig_eval
from mappy_contig_eval import interesting_contigs_mappy
#from SVassembly import phase_svs
#from phase_svs import phase
#from SVassembly import plot_bcs_across_bkpts #this is an R file
from SVassembly import plot_bcs_across_bkpts
from plot_bcs_across_bkpts import plot_bcs_bkpt
# ---- experiments/compile_scripts.py (uiuc-arc/DeepJ, MIT) ----
import os
is_fpsound = False
sound = '-D SOUND' if is_fpsound else ''

# (printed label, source basename, whether the SOUND define applies)
targets = [
    ('ConvBig_Classify', 'convbig_classify', False),
    ('ConvBig', 'convbig', True),
    ('ConvBig_Splitting', 'convbig_splitting', True),
    ('ConvBig_Compose', 'convbig_compose', True),
    ('ConvBig_Compose_Splitting', 'convbig_compose_splitting', True),
    ('ConvMed_Classify', 'convmed_classify', False),
    ('ConvMed', 'convmed', True),
    ('ConvMed_Splitting', 'convmed_splitting', True),
    ('ConvMed_Compose', 'convmed_compose', True),
    ('ConvMed_Compose_Splitting', 'convmed_compose_splitting', True),
    ('FFNN_Classify', 'ffnn_classify', False),
    ('FFNN', 'ffnn', True),
    ('FFNN_Splitting', 'ffnn_splitting', True),
    ('FFNN_Compose', 'ffnn_compose', True),
    ('FFNN_Compose_Splitting', 'ffnn_compose_splitting', True),
    ('Perturb Baseline', 'perturb_baseline', True),
    ('Perturb_Compose Baseline', 'perturb_compose_baseline', True),
]

for label, name, needs_sound in targets:
    print(f'Compiling {label}')
    flags = f'{sound} ' if needs_sound else ''
    os.system(f'g++ {flags}-std=c++17 -O2 -fopenmp {name}.cpp -o {name}')
# ---- smi_analysis/integrate1D.py (NSLS-II-SMI/smi-analysis, BSD-3-Clause) ----
import numpy as np
from pyFAI.multi_geometry import MultiGeometry
from pyFAI.ext import splitBBox
def inpaint_saxs(imgs, ais, masks):
"""
Inpaint the 2D image collected by the pixel detector to remove artifacts in later data reduction
Parameters:
-----------
    :param imgs: List of 2D images, in pixel coordinates
    :type imgs: list of ndarray
    :param ais: List of AzimuthalIntegrator/Transform generated using pyGIX/pyFAI which contain the information about the experiment geometry
    :type ais: list of AzimuthalIntegrator / TransformIntegrator
    :param masks: List of 2D masks (same dimensions as imgs)
    :type masks: list of ndarray
"""
inpaints, mask_inpaints = [], []
for i, (img, ai, mask) in enumerate(zip(imgs, ais, masks)):
inpaints.append(ai.inpainting(img.copy(order='C'),
mask))
mask_inpaints.append(np.logical_not(np.ones_like(mask)))
return inpaints, mask_inpaints
def cake_saxs(inpaints, ais, masks, radial_range=(0, 60), azimuth_range=(-90, 90), npt_rad=250, npt_azim=250):
"""
    Unwrap the stitched image from q-space to 2theta-chi space (radial-azimuthal angle)
Parameters:
-----------
:param inpaints: List of 2D inpainted images
:type inpaints: List of ndarray
:param ais: List of AzimuthalIntegrator/Transform generated using pyGIX/pyFAI which contain the information about the experiment geometry
:type ais: list of AzimuthalIntegrator / TransformIntegrator
:param masks: List of 2D image (same dimension as inpaints)
:type masks: List of ndarray
    :param radial_range: minimum and maximum of the radial (q) range in A-1
    :type radial_range: Tuple
    :param azimuth_range: minimum and maximum of the azimuthal (chi) range in degree
    :type azimuth_range: Tuple
    :param npt_rad: number of points in the radial direction
    :type npt_rad: int
    :param npt_azim: number of points in the azimuthal direction
    :type npt_azim: int
"""
mg = MultiGeometry(ais,
unit='q_A^-1',
radial_range=radial_range,
azimuth_range=azimuth_range,
wavelength=None,
empty=0.0,
chi_disc=180)
cake, q, chi = mg.integrate2d(lst_data=inpaints,
npt_rad=npt_rad,
npt_azim=npt_azim,
correctSolidAngle=True,
lst_mask=masks)
return cake, q, chi[::-1]
def integrate_rad_saxs(inpaints, ais, masks, radial_range=(0, 40), azimuth_range=(0, 90), npt=2000):
"""
Radial integration of transmission data using the pyFAI multigeometry module
Parameters:
-----------
:param inpaints: List of 2D inpainted images
:type inpaints: List of ndarray
:param ais: List of AzimuthalIntegrator/Transform generated using pyGIX/pyFAI which contain the information about the experiment geometry
:type ais: list of AzimuthalIntegrator / TransformIntegrator
:param masks: List of 2D image (same dimension as inpaints)
:type masks: List of ndarray
    :param radial_range: minimum and maximum of the radial (q) range in A-1
    :type radial_range: Tuple
    :param azimuth_range: minimum and maximum of the azimuthal (chi) range in degree
    :type azimuth_range: Tuple
    :param npt: number of points of the final 1D profile
    :type npt: int
"""
mg = MultiGeometry(ais,
unit='q_A^-1',
radial_range=radial_range,
azimuth_range=azimuth_range,
wavelength=None,
empty=-1,
chi_disc=180)
q, i_rad = mg.integrate1d(lst_data=inpaints,
npt=npt,
correctSolidAngle=True,
lst_mask=masks)
return q, i_rad
def integrate_azi_saxs(cake, q_array, chi_array, radial_range=(0, 10), azimuth_range=(-90, 0)):
"""
    Azimuthal integration of transmission data using a masked array on a caked image (image in q-chi space)
    Parameters:
    -----------
    :param cake: 2D array unwrapped in q-chi space
    :type cake: ndarray
    :param q_array: array containing the q value of each column of the cake
    :type q_array: ndarray
    :param chi_array: array containing the chi angle of each row of the cake
    :type chi_array: ndarray
    :param radial_range: minimum and maximum of the radial (q) range in A-1
    :type radial_range: Tuple
    :param azimuth_range: minimum and maximum of the azimuthal (chi) range in degree
    :type azimuth_range: Tuple
"""
q_mesh, chi_mesh = np.meshgrid(q_array, chi_array)
cake_mask = np.ma.masked_array(cake)
cake_mask = np.ma.masked_where(q_mesh < radial_range[0], cake_mask)
cake_mask = np.ma.masked_where(q_mesh > radial_range[1], cake_mask)
cake_mask = np.ma.masked_where(azimuth_range[0] > chi_mesh, cake_mask)
cake_mask = np.ma.masked_where(azimuth_range[1] < chi_mesh, cake_mask)
i_azi = cake_mask.mean(axis=1)
return chi_array, i_azi
def integrate_rad_gisaxs(img, q_par, q_per, bins=1000, radial_range=None, azimuth_range=None):
"""
    Radial integration of grazing-incidence data using the pyFAI splitBBox histogramming
    Parameters:
    -----------
    :param img: 2D array containing the stitched intensity
    :type img: ndarray
    :param q_par: minimum and maximum q_par (in A-1) of the input image
    :type q_par: Tuple
    :param q_per: minimum and maximum of q_per in A-1
    :type q_per: Tuple
    :param bins: number of points of the final 1D profile
    :type bins: int
    :param radial_range: q range (in A-1) over which the integration will be done
    :type radial_range: Tuple
    :param azimuth_range: chi range (in degree) over which the integration will be done
    :type azimuth_range: Tuple
"""
# recalculate the q-range of the input array
q_h = np.linspace(q_par[0], q_par[-1], np.shape(img)[1])
q_v = np.linspace(q_per[0], q_per[-1], np.shape(img)[0])[::-1]
if radial_range is None:
radial_range = (0, q_h.max())
if azimuth_range is None:
azimuth_range = (0, q_v.max())
q_h_te, q_v_te = np.meshgrid(q_h, q_v)
tth_array = np.sqrt(q_h_te ** 2 + q_v_te ** 2)
chi_array = np.rad2deg(np.arctan2(q_h_te, q_v_te))
# Mask the remeshed array
img_mask = np.ma.masked_array(img, mask=img == 0)
img_mask = np.ma.masked_where(img < 1E-5, img_mask)
img_mask = np.ma.masked_where(tth_array < radial_range[0], img_mask)
img_mask = np.ma.masked_where(tth_array > radial_range[1], img_mask)
img_mask = np.ma.masked_where(chi_array < np.min(azimuth_range), img_mask)
img_mask = np.ma.masked_where(chi_array > np.max(azimuth_range), img_mask)
q_rad, i_rad, _, _ = splitBBox.histoBBox1d(img_mask,
pos0=tth_array,
delta_pos0=np.ones_like(img_mask) * (q_par[1] - q_par[0])/np.shape(
img_mask)[1],
pos1=q_v_te,
delta_pos1=np.ones_like(img_mask) * (q_per[1] - q_per[0])/np.shape(
img_mask)[0],
bins=bins,
pos0Range=np.array([np.min(tth_array), np.max(tth_array)]),
pos1Range=q_per,
dummy=None,
delta_dummy=None,
mask=img_mask.mask
)
return q_rad, i_rad
def integrate_qpar(img, q_par, q_per, q_par_range=None, q_per_range=None):
"""
Horizontal integration of a 2D array using masked array
Parameters:
-----------
    :param img: 2D array containing the intensity
    :type img: ndarray
    :param q_par: minimum and maximum q_par (in A-1) of the input image
    :type q_par: Tuple
    :param q_per: minimum and maximum of q_per in A-1
    :type q_per: Tuple
    :param q_par_range: q_par range (in A-1) over which the integration will be done
    :type q_par_range: Tuple
    :param q_per_range: q_per range (in A-1) over which the integration will be done
    :type q_per_range: Tuple
"""
if q_par_range is None:
q_par_range = (np.asarray(q_par).min(), np.asarray(q_par).max())
if q_per_range is None:
q_per_range = (np.asarray(q_per).min(), np.asarray(q_per).max())
q_par = np.linspace(q_par[0], q_par[1], np.shape(img)[1])
q_per = np.linspace(q_per[0], q_per[1], np.shape(img)[0])[::-1]
qpar_mesh, qper_mesh = np.meshgrid(q_par, q_per)
img_mask = np.ma.masked_array(img, mask=img == 0)
img_mask = np.ma.masked_where(qper_mesh < q_per_range[0], img_mask)
img_mask = np.ma.masked_where(qper_mesh > q_per_range[1], img_mask)
img_mask = np.ma.masked_where(q_par_range[0] > qpar_mesh, img_mask)
img_mask = np.ma.masked_where(q_par_range[1] < qpar_mesh, img_mask)
i_par = np.mean(img_mask, axis=0)
return q_par, i_par
def integrate_qper(img, q_par, q_per, q_par_range=None, q_per_range=None):
"""
Vertical integration of a 2D array using masked array
Parameters:
-----------
    :param img: 2D array containing the intensity
    :type img: ndarray
    :param q_par: minimum and maximum q_par (in A-1) of the input image
    :type q_par: Tuple
    :param q_per: minimum and maximum of q_per in A-1
    :type q_per: Tuple
    :param q_par_range: q_par range (in A-1) over which the integration will be done
    :type q_par_range: Tuple
    :param q_per_range: q_per range (in A-1) over which the integration will be done
    :type q_per_range: Tuple
"""
if q_par_range is None:
q_par_range = (np.asarray(q_par).min(), np.asarray(q_par).max())
if q_per_range is None:
q_per_range = (np.asarray(q_per).min(), np.asarray(q_per).max())
q_par = np.linspace(q_par[0], q_par[1], np.shape(img)[1])
q_per = np.linspace(q_per[0], q_per[1], np.shape(img)[0])[::-1]
q_par_mesh, q_per_mesh = np.meshgrid(q_par, q_per)
img_mask = np.ma.masked_array(img, mask=img == 0)
img_mask = np.ma.masked_where(q_per_mesh < q_per_range[0], img_mask)
img_mask = np.ma.masked_where(q_per_mesh > q_per_range[1], img_mask)
img_mask = np.ma.masked_where(q_par_mesh < q_par_range[0], img_mask)
img_mask = np.ma.masked_where(q_par_mesh > q_par_range[1], img_mask)
i_per = np.mean(img_mask, axis=1)
return q_per, i_per
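The masked-array line cuts in `integrate_qpar`/`integrate_qper` above boil down to mask-then-average over a meshgrid. A minimal standalone sketch on a synthetic 4x5 "image" (toy values, not real detector data):

```python
import numpy as np

# Synthetic 4x5 "detector image" and its q axes (toy values)
img = np.arange(20, dtype=float).reshape(4, 5)
q_par = np.linspace(-1.0, 1.0, 5)        # horizontal q axis
q_per = np.linspace(0.0, 1.0, 4)[::-1]   # vertical q axis, top row = max q
qpar_mesh, qper_mesh = np.meshgrid(q_par, q_per)

# Keep only pixels with q_per >= 0.2, then average each column:
# exactly the mask-then-mean pattern of integrate_qpar
masked = np.ma.masked_where(qper_mesh < 0.2, img)
i_par = masked.mean(axis=0)
print(i_par)  # one averaged intensity per q_par bin
```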
# TODO: Implement azimuthal integration for GI
def cake_gisaxs(img, q_par, q_per, bins=None, radial_range=None, azimuth_range=None):
"""
Unwrap the stitched image from q-space to 2theta-Chi space (Radial-Azimuthal angle)
Parameters:
-----------
    :param img: 2D stitched image
    :type img: ndarray
    :param q_par: minimum and maximum q_par (in A-1) of the input image
    :type q_par: Tuple
    :param q_per: minimum and maximum of q_per in A-1
    :type q_per: Tuple
    :param bins: number of points in both x and y direction of the final cake
    :type bins: Tuple
    :param radial_range: minimum and maximum of the radial (q) range in A-1
    :type radial_range: Tuple
    :param azimuth_range: minimum and maximum of the azimuthal (chi) range in degree
    :type azimuth_range: Tuple
"""
if bins is None:
bins = tuple(reversed(img.shape))
if radial_range is None:
radial_range = (0, q_par[-1])
if azimuth_range is None:
azimuth_range = (-180, 180)
azimuth_range = np.deg2rad(azimuth_range)
# recalculate the q-range of the input array
q_h = np.linspace(q_par[0], q_par[-1], bins[0])
q_v = np.linspace(q_per[0], q_per[-1], bins[1])[::-1]
q_h_te, q_v_te = np.meshgrid(q_h, q_v)
tth_array = np.sqrt(q_h_te**2 + q_v_te**2)
chi_array = -np.arctan2(q_h_te, q_v_te)
# Mask the remeshed array
img_mask = np.ma.masked_array(img, mask=img == 0)
img_mask = np.ma.masked_where(tth_array < radial_range[0], img_mask)
img_mask = np.ma.masked_where(tth_array > radial_range[1], img_mask)
img_mask = np.ma.masked_where(chi_array < np.min(azimuth_range), img_mask)
img_mask = np.ma.masked_where(chi_array > np.max(azimuth_range), img_mask)
cake, q, chi, _, _ = splitBBox.histoBBox2d(weights=img_mask,
pos0=tth_array,
delta_pos0=np.ones_like(img_mask) * (q_par[1] - q_par[0])/bins[1],
pos1=chi_array,
delta_pos1=np.ones_like(img_mask) * (q_per[1] - q_per[0])/bins[1],
bins=bins,
pos0Range=np.array([np.min(radial_range), np.max(radial_range)]),
pos1Range=np.array([np.min(azimuth_range), np.max(azimuth_range)]),
dummy=None,
delta_dummy=None,
mask=img_mask.mask)
return cake, q, np.rad2deg(chi)[::-1]
# ---- pyroc/compare.py (noudald/pyroc, MIT) ----
"""Tools for comparing ROC curves with AUC."""
from math import erf
from typing import Optional, Tuple
import numpy as np
from pyroc import bootstrap_roc, ROC
def gaussian_cdf(x: float) -> float:
    """Gaussian cumulative distribution function for N(0, 1).
    Parameters
    ----------
    x
        Quantile for which to compute the cumulative distribution.
    Returns
    -------
    Cumulative distribution for quantile x for the Gaussian distribution N(0, 1).
    """
return (1.0 + erf(x / 2.0**.5)) / 2.0
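A quick sanity check of this erf-based CDF (the helper is reproduced standalone so the snippet runs on its own): Phi(0) = 0.5 and Phi(1.96) is approximately 0.975.

```python
from math import erf

def gaussian_cdf(x: float) -> float:
    # Standalone copy of the helper above
    return (1.0 + erf(x / 2.0 ** .5)) / 2.0

print(gaussian_cdf(0.0))             # 0.5
print(round(gaussian_cdf(1.96), 3))  # 0.975
```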
def compare_bootstrap(
roc1: ROC,
roc2: ROC,
alt_hypothesis: float = 0.05,
seed: Optional[int] = None) -> Tuple[bool, float]:
"""Compute roc1 < roc2 with alternative hypothesis using DeLong
bootstrapping.
    The idea behind this algorithm is to bootstrap roc1 and roc2, and
compute the AUC (Area Under the Curve) for each of the bootstraps for roc1
and roc2. For each bootstraps of roc1 and roc2 we compute the difference of
the AUCs of ROC curves. Let
aucs_diff = [auc11 - auc21, auc12 - auc22, ..., auc1n - auc2n],
where auc1i is the AUC of ith bootstrap of roc1, and auc2i is the AUC of
the ith bootstrap of roc2. We define a new stochast by
Z = mean(aucs_diff) / std(aucs_diff).
    We assume that Z ~ N(0, 1), i.e. Z is drawn from a Gaussian distribution
    centered around 0 with standard deviation 1. Our null hypothesis is that
    roc1 >= roc2, or in other words that P(Z) < 1 - alt_hypothesis, so our
    alternative hypothesis is that roc1 < roc2. We reject the null hypothesis
    if P(Z) > 1 - alt_hypothesis.
Parameters
----------
roc1
The "to be assumed" smaller ROC curve than roc2.
roc2
The "to be assumed" larger ROC curve than roc1.
alt_hypothesis
The density for which we reject the zero hypothesis, and for which we
therefore accept roc1 < roc2.
seed
Seed used for DeLong bootstrapping. If no seed is given a random seed
will be used, resulting in non-deterministic results.
Raises
------
ValueError
If alt_hypothesis is not between 0 and 1.
Returns
-------
Tuple of a boolean and the p-value. I.e. the boolean represents if we can
accept the alternative hypothesis roc1 < roc2, and the p-value represents
the strength with which we accept the alternative hypothesis roc1 < roc2.
"""
if not 0 <= alt_hypothesis <= 1:
raise ValueError('Alternative hypothesis must be between 0 and 1.')
bootstrap_auc1 = np.array(list(roc.auc
for roc in bootstrap_roc(roc1, seed=seed)))
bootstrap_auc2 = np.array(list(roc.auc
for roc in bootstrap_roc(roc2, seed=seed)))
aucs = bootstrap_auc2 - bootstrap_auc1
sample = np.mean(aucs)
if np.std(aucs) > 0:
sample /= np.std(aucs)
p_value = 1 - gaussian_cdf(sample)
return p_value < alt_hypothesis, p_value
def compare_binary(
roc1: ROC,
roc2: ROC,
alt_hypothesis: float = 0.05,
seed: Optional[int] = None) -> Tuple[bool, float]:
"""Compute roc1 < roc2 using binary comparison with bootstrapping.
    The idea behind this algorithm is to bootstrap roc1 and roc2, and
compute the AUC (Area Under the Curve) for each of the bootstraps for roc1
and roc2. For each bootstraps of roc1 and roc2 we compute the difference of
the AUCs of ROC curves. Let
aucs_diff = [auc11 - auc21, auc12 - auc22, ..., auc1n - auc2n],
where auc1i is the AUC of ith bootstrap of roc1, and auc2i is the AUC of
the ith bootstrap of roc2. We define the statistical strength, i.e.
    p-value, for which we can reject the null hypothesis roc1 >= roc2 as
p_value = sum(aucs_diff > 0) / n.
If p_value is smaller than alt_hypothesis we accept the alternative
hypothesis roc1 < roc2.
Parameters
----------
roc1
The "to be assumed" smaller ROC curve than roc2.
roc2
The "to be assumed" larger ROC curve than roc1.
alt_hypothesis
The density for which we reject the zero hypothesis, and for which we
therefore accept roc1 < roc2.
seed
Seed used for DeLong bootstrapping. If no seed is given a random seed
will be used, resulting in non-deterministic results.
Raises
------
ValueError
If alt_hypothesis is not between 0 and 1.
Returns
-------
Tuple of a boolean and the p-value. I.e. the boolean represents if we can
accept the alternative hypothesis roc1 < roc2, and the p-value represents
the strength with which we accept the alternative hypothesis roc1 < roc2.
"""
if not 0 <= alt_hypothesis <= 1:
raise ValueError('Alternative hypothesis must be between 0 and 1.')
bootstrap_auc1 = np.array(list(roc.auc
for roc in bootstrap_roc(roc1, seed=seed)))
bootstrap_auc2 = np.array(list(roc.auc
for roc in bootstrap_roc(roc2, seed=seed)))
aucs = bootstrap_auc2 - bootstrap_auc1
p_value = sum(aucs <= 0) / aucs.size
return p_value < alt_hypothesis, p_value
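To make the two decision rules concrete, here is a numeric sketch with made-up bootstrap AUC arrays standing in for real `bootstrap_roc` output; it reuses only the arithmetic from the two functions above, not the pyroc API:

```python
from math import erf

import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the bootstrapped AUCs of two ROC curves (roc2 clearly better)
bootstrap_auc1 = rng.normal(0.70, 0.02, size=1000)
bootstrap_auc2 = rng.normal(0.78, 0.02, size=1000)
aucs = bootstrap_auc2 - bootstrap_auc1

# DeLong-style rule (compare_bootstrap): z-score of the AUC differences,
# Gaussian upper-tail p-value
z = float(np.mean(aucs) / np.std(aucs))
p_delong = 1 - (1.0 + erf(z / 2.0 ** .5)) / 2.0

# Binary rule (compare_binary): fraction of bootstrap rounds where roc2
# did not beat roc1
p_binary = float(np.sum(aucs <= 0)) / aucs.size

print(p_delong < 0.05, p_binary < 0.05)  # both rules accept roc1 < roc2 here
```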
# ---- PDFSegmenter/util/StorageUtil.py (MBAigner/PDFSegmenter, MIT) ----
import pickle
def save_object(obj, path, name):
    """Serialize obj with pickle to the file path + name + '.pkl'.
    :param obj: object to serialize
    :param path: target directory (including a trailing separator)
    :param name: file name without the .pkl extension
    :return: None
    """
with open(path + name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_object(path, name):
    """Load a pickled object from the file path + name + '.pkl'.
    :param path: source directory (including a trailing separator)
    :param name: file name without the .pkl extension
    :return: the unpickled object
    """
with open(path + name + '.pkl', 'rb') as f:
return pickle.load(f)
def get_file_name(path):
    """Return the last component of a /-separated path.
    :param path: file path
    :return: the file name (last path component)
    """
parts = path.split("/")
return parts[len(parts) - 1]
def replace_file_type(file_name, new_type):
    """Replace the file extension of file_name with new_type.
    :param file_name: file name whose extension should be swapped
    :param new_type: new extension (without a leading dot)
    :return: the file name with the new extension
    """
file_name_parts = file_name.split(".")
return file_name.replace(file_name_parts[len(file_name_parts)-1], new_type)
def cut_file_type(file_name):
    """Strip the file extension (including the dot) from file_name.
    :param file_name: file name with extension
    :return: the file name without its extension
    """
file_name_parts = file_name.split(".")
return file_name.replace("." + file_name_parts[len(file_name_parts)-1], "")
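A standalone check of the path-string helpers above (their bodies are copied verbatim so the snippet runs on its own):

```python
def get_file_name(path):
    # Last /-separated component of the path
    parts = path.split("/")
    return parts[len(parts) - 1]

def replace_file_type(file_name, new_type):
    # Swap the extension (last .-separated part) for new_type
    file_name_parts = file_name.split(".")
    return file_name.replace(file_name_parts[len(file_name_parts) - 1], new_type)

def cut_file_type(file_name):
    # Drop the extension, including the dot
    file_name_parts = file_name.split(".")
    return file_name.replace("." + file_name_parts[len(file_name_parts) - 1], "")

print(get_file_name("data/models/graph.pkl"))   # graph.pkl
print(replace_file_type("graph.pkl", "json"))   # graph.json
print(cut_file_type("graph.pkl"))               # graph
```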
# ---- src/eazyserver/rpc/__init__.py (MacherLabs/eazyserver, MIT) ----
import logging
logger = logging.getLogger(__name__)
logger.debug("Loaded " + __name__)
from jsonrpcserver import methods
from .exceptions import *
from .influxdb_api import *
from .meta import *
# ---- typings/bpy_extras/wm_utils/progress_report.py (Argmaster/PyR3, MIT) ----
import sys
import typing
class ProgressReport:
curr_step = None
''' '''
running = None
''' '''
start_time = None
''' '''
steps = None
''' '''
wm = None
''' '''
def enter_substeps(self, nbr, msg):
'''
'''
pass
def finalize(self):
'''
'''
pass
def initialize(self, wm):
'''
'''
pass
def leave_substeps(self, msg):
'''
'''
pass
def start(self):
'''
'''
pass
def step(self, msg, nbr):
'''
'''
pass
def update(self, msg):
'''
'''
pass
class ProgressReportSubstep:
final_msg = None
''' '''
level = None
''' '''
msg = None
''' '''
nbr = None
''' '''
progress = None
''' '''
def enter_substeps(self, nbr, msg):
'''
'''
pass
def leave_substeps(self, msg):
'''
'''
pass
def step(self, msg, nbr):
'''
'''
pass
9498a8c6ee04d55e9083c6decd5b3cafed1a96b6 | 1,194 | py | Python | impor.py | raotnameh/FAKE_NEWS_LIAR-PLUS-dataset | fbf2a953c16fc111c4afb876eb0857a8d4bb7cb5 | [
"Apache-2.0"
] | 2 | 2019-08-11T21:15:09.000Z | 2019-10-30T16:54:00.000Z | impor.py | raotnameh/FAKE_NEWS_LIAR-PLUS-dataset | fbf2a953c16fc111c4afb876eb0857a8d4bb7cb5 | [
"Apache-2.0"
] | null | null | null | impor.py | raotnameh/FAKE_NEWS_LIAR-PLUS-dataset | fbf2a953c16fc111c4afb876eb0857a8d4bb7cb5 | [
"Apache-2.0"
] | null | null | null |
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import folium
import json
import re
import glob
import os
import string
import random
import requests
import scipy
from matplotlib.colors import *
import seaborn as sn
from dateutil.parser import parse
import datetime as dt
pd.options.mode.chained_assignment = None # default='warn'
# Sklearn imports
from sklearn.model_selection import *
from sklearn.preprocessing import *
from sklearn.linear_model import *
from sklearn.ensemble import *
from sklearn.metrics import *
from sklearn.utils import shuffle
from sklearn.utils.class_weight import *
from sklearn.svm import *
from sklearn.externals import *
from scipy.stats import *
# For sentiment analysis
import vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('vader_lexicon')
from google.cloud import language
from tqdm import tqdm
# Import for WordCloud
import wordcloud
# Text classifier - TextBlob
from textblob.classifiers import NaiveBayesClassifier
# Local imports
from helpers import *
import cleaningtool as ct
# ---- exercises/en/test_02_07.py | hfboyce/MCL-DSCI-571-machine-learning | MIT ----
def test():
    # Here we can either check objects created in the solution code, or the
    # string value of the solution, available as __solution__. A helper for
    # printing formatted messages is available as __msg__. See the testTemplate
    # in the meta.json for details.
    # If an assertion fails, the message will be displayed
    assert 'DecisionTreeClassifier' in __solution__, "Make sure you are specifying a 'DecisionTreeClassifier'."
    assert model.get_params()['random_state'] == 1, "Make sure you are setting the model's 'random_state' to 1."
    assert 'model.fit' in __solution__, "Make sure you are using the '.fit()' function to fit 'X' and 'y'."
    assert 'model.predict(X)' in __solution__, "Make sure you are using the model to predict on 'X'."
    assert list(predicted).count('Canada') == 6, "Your predicted values are incorrect. Are you fitting the model properly?"
    assert list(predicted).count('Both') == 8, "Your predicted values are incorrect. Are you fitting the model properly?"
    assert list(predicted).count('America') == 11, "Your predicted values are incorrect. Are you fitting the model properly?"
    __msg__.good("Nice work, well done!")
# ---- tests/transformers/test_date_parser.py | GoC-Spending/fuzzy-tribble | MIT ----
import datetime
import pandas as pd
from tribble.transformers import date_parser
def test_apply() -> None:
    data = pd.DataFrame([{'id': 1, 'data': '2017-01-01'}])
    output = date_parser.DateParser('data').apply(data)
    assert output.to_dict('records') == [
        {'id': 1, 'data': datetime.date(2017, 1, 1)}
    ]


def test_reversed_month_day() -> None:
    data = pd.DataFrame([{'id': 1, 'data': '2017-13-01'}])
    output = date_parser.DateParser('data').apply(data)
    assert output.to_dict('records') == [
        {'id': 1, 'data': datetime.date(2017, 1, 13)}
    ]


def test_non_date() -> None:
    data = pd.DataFrame([{'id': 1, 'data': 'foo'}])
    output = date_parser.DateParser('data').apply(data)
    assert output.to_dict('records') == [
        {'id': 1, 'data': None}
    ]


def test_bad_date() -> None:
    data = pd.DataFrame([{'id': 1, 'data': '2017-01-32'}])
    output = date_parser.DateParser('data').apply(data)
    assert output.to_dict('records') == [
        {'id': 1, 'data': None}
    ]
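The four tests above pin down `DateParser`'s observable behavior: ISO dates parse normally, an out-of-range month falls back to a day-first reading, and unparseable values become `None`. A standalone sketch consistent with these tests (`parse_date` is a hypothetical stand-in, not the real `DateParser` implementation):

```python
import datetime

def parse_date(value):
    # Try ISO year-month-day first, then a day-first fallback; None on failure.
    # Hypothetical helper matching the test expectations above.
    for fmt in ('%Y-%m-%d', '%Y-%d-%m'):
        try:
            return datetime.datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    return None

print(parse_date('2017-01-01'))  # 2017-01-01
print(parse_date('2017-13-01'))  # 2017-01-13
print(parse_date('foo'))         # None
print(parse_date('2017-01-32'))  # None
```

The month/day swap emerges naturally from the second format string: '2017-13-01' fails `%Y-%m-%d` (no month 13) but parses under `%Y-%d-%m` as day 13, month 1.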
# ---- battleships/migrations/0002_auto_20181202_1829.py | ArturAdamczyk/Battleships | MIT ----
# Generated by Django 2.1.3 on 2018-12-02 17:29
import battleships.models.ship
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('battleships', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='carrier',
            name='experience',
            field=models.CharField(default=battleships.models.ship.Experience('RECRUIT'), max_length=20),
        ),
        migrations.AlterField(
            model_name='destroyer',
            name='experience',
            field=models.CharField(default=battleships.models.ship.Experience('RECRUIT'), max_length=20),
        ),
        migrations.AlterField(
            model_name='frigate',
            name='experience',
            field=models.CharField(default=battleships.models.ship.Experience('RECRUIT'), max_length=20),
        ),
        migrations.AlterField(
            model_name='submarine',
            name='experience',
            field=models.CharField(default=battleships.models.ship.Experience('RECRUIT'), max_length=20),
        ),
    ]
# ---- data_split/prepareSingleTests.py | UKPLab/linspector | Apache-2.0 ----
# -*- coding: utf-8 -*-
import sys
sys.path.append('../')
import argparse
import pickle
import random
from data_split.util import *
from data_util.reader import *
from data_util.schema import *
def ensure_dir(file_path):
directory = os.path.dirname(file_path)
if not os.path.exists(directory):
os.makedirs(directory)
def reverse_dict_list(orig_dict):
rev_dict = dict()
for test in orig_dict:
for lang in orig_dict[test]:
if lang not in rev_dict:
rev_dict[lang] = [test]
else:
rev_dict[lang].append(test)
return rev_dict
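`reverse_dict_list` inverts the pickled test-to-languages mapping into a language-to-tests mapping. A self-contained run with illustrative data (the feature names below are examples, not the actual pickle contents):

```python
def reverse_dict_list(orig_dict):
    # Invert {test: [languages, ...]} into {language: [tests, ...]}
    rev_dict = dict()
    for test in orig_dict:
        for lang in orig_dict[test]:
            if lang not in rev_dict:
                rev_dict[lang] = [test]
            else:
                rev_dict[lang].append(test)
    return rev_dict

test_vs_lang = {'Case': ['turkish', 'russian'], 'Gender': ['russian']}
print(reverse_dict_list(test_vs_lang))
# {'turkish': ['Case'], 'russian': ['Case', 'Gender']}
```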
def get_mixed_surface(feat, lang, vocab, threshold):
"""
Get cnt number of frequent surface and lemma which DOES not contain the 'feat'
None of the surface forms should have the feat
:param feat: morphological feature
:param lang: language
:param vocab: list of frequent words
:param cnt: number of desired nonsense labels
:return: list of word, label tuple
"""
freq_surf = []
rare_surf = []
schema = UnimorphSchema()
data = load_ds("unimorph", lang)
forbid_vocab = dict()
# make a vocabulary of forbidden words
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
x_feats = schema.decode_msd(x["msd"])[0]
if feat in x_feats:
forbid_vocab[x["form"]] = 1
# include each surface form once
surf_cnt = dict()
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
# if any of the surface forms have the feat, exclude them
if x["form"] in forbid_vocab:
continue
# exclude x, if the surface form is already in the data
if x["form"] in surf_cnt:
continue
else:
surf_cnt[x['form']] = 1
x_feats = schema.decode_msd(x["msd"])[0]
## Exceptions: drop V.PTCP from the case and gender tests - russian
if (feat in ['Case', 'Gender']) and (lang == 'russian') and (x_feats['Part of Speech'] == 'Participle'):
continue
## Exceptions: If it is a gender test and the noun does not have a gender feature, ignore
if (feat == 'Gender') and (lang == 'russian') and (x_feats['Part of Speech'] == 'Noun') and (
'Gender' not in x_feats):
continue
if feat not in x_feats:
# flag x
x["flag"] = 1
if x["form"].lower() in vocab:
freq_surf.append(x)
else:
rare_surf.append(x)
"""
else:
if feat == 'Number' and (x_feats['Part of Speech'] != 'Noun') and (feat in x_feats):
x["flag"] = 1
if x["form"].lower() in vocab:
freq_surf.append(x)
else:
rare_surf.append(x)
"""
# Try to sample 80%-20% if possible
if (len(freq_surf) >= int(threshold * 0.8)) and (len(rare_surf) >= int(threshold * 0.2)):
shuffled_frequent = random.sample(freq_surf, int(threshold * 0.8))
shuffled_rare = random.sample(rare_surf, int(threshold * 0.2))
instances = shuffled_frequent + shuffled_rare
# else get all the frequent ones, and sample the rest from the rare ones
elif (len(freq_surf) + len(rare_surf)) >= threshold:
shuffled_frequent = random.sample(freq_surf, len(freq_surf))
shuffled_rare = random.sample(rare_surf, int(threshold - len(freq_surf)))
instances = shuffled_frequent + shuffled_rare
else:
print("Not enough instances are left")
return []
return instances
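The frequent/rare sampling at the end of `get_mixed_surface` recurs in every split function below: aim for an 80/20 mix of frequent and rare surface forms, otherwise take all frequent items and top up from the rare pool. Isolated as a helper (the name `sample_frequent_rare` is illustrative, not from the source):

```python
import random

def sample_frequent_rare(freq, rare, n, freq_ratio=0.8):
    # Preferred case: an 80/20 frequent/rare mix, as in the split_* functions
    n_freq = int(n * freq_ratio)
    n_rare = n - n_freq
    if len(freq) >= n_freq and len(rare) >= n_rare:
        return random.sample(freq, n_freq) + random.sample(rare, n_rare)
    # Fallback: take every frequent word and top up from the rare ones
    if len(freq) + len(rare) >= n:
        return random.sample(freq, len(freq)) + random.sample(rare, n - len(freq))
    return []  # not enough instances

picked = sample_frequent_rare(list(range(100)), list(range(100, 130)), 50)
print(len(picked))  # 50
```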
def split_for_morph_test_mixed(feat, lang, vocab, nonlabelratio, savedir, threshold=10000):
"""
Splits unimorph data into training, dev an test for the given feature and language.
Precheck 1: We check if the feature can have more than one label beforehand
This function eliminates cases where the form has space - e.g., "anlıyor musun"
This function eliminates cases where the feature is very sparse (seen less than 5 times)
This function eliminates ambiguous forms
:param feat: Case, Valency...
:param lang: turkish, russian, english...
:param vocab: frequent word list from wikipedia
:param nonlabelratio: fraction of instances that receive the "None" label
:param savedir: folder to save the splits
:param threshold: fixed to 10K
:return: Default output directory is ./output/feature/lang/train-dev-test.txt
"""
freq_surf = []
rare_surf = []
schema = UnimorphSchema()
data = load_ds("unimorph", lang)
# make a label dictionary for noisy labels
label_cnt = dict()
# make a surface form dictionary for ambiguous
surf_cnt = dict()
nonlabel_cnt = threshold * nonlabelratio
reallabel_cnt = threshold * (1. - nonlabelratio)
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
x_feats = schema.decode_msd(x["msd"])[0]
if feat in x_feats:
# There is a bug with Number: 'Part of Speech'
# Don't include verbs/verb like words to singular/plural test
# if feat=='Number' and (x_feats['Part of Speech']!='Noun'):
# continue
## Exceptions: drop V.PTCP from the case and gender tests - russian
if (feat in ['Case', 'Gender']) and (lang == 'russian') and (x_feats['Part of Speech'] == 'Participle'):
continue
## Exceptions: If it is a gender test and the noun does not have a gender feature, ignore
if (feat == 'Gender') and (lang == 'russian') and (x_feats['Part of Speech'] == 'Noun') and (
'Gender' not in x_feats):
continue
if x["form"].lower() in vocab:
freq_surf.append(x)
# rare surface and frequent lemma
else:
rare_surf.append(x)
# for sparse labels
if x_feats[feat] not in label_cnt:
label_cnt[x_feats[feat]] = 1
else:
label_cnt[x_feats[feat]] += 1
# for amb. forms
if x['form'] not in surf_cnt:
surf_cnt[x['form']] = 1
else:
surf_cnt[x['form']] += 1
# if there is any (very) sparse label, exclude those
forbid_labs = []
for label in label_cnt:
if (label_cnt[label]) < 5:
forbid_labs.append(label)
# if there are any surface forms with multiple values, exclude those
amb_form_dict = dict()
for surf, cnt in surf_cnt.items():
if cnt > 1:
amb_form_dict[surf] = 1
# check here if we don't have enough instances or labels already
if ((len(label_cnt) - len(forbid_labs)) < 2) or len(surf_cnt) < reallabel_cnt:
print("Not enough instances or labels are left")
return False
# Exclude the noisy labels, ambiguous forms and rare words
if ((len(forbid_labs) > 0) or (len(amb_form_dict) > 0)):
freq_surf = []
rare_surf = []
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
# exclude amb. forms
if x["form"] in amb_form_dict:
continue
x_feats = schema.decode_msd(x["msd"])[0]
if 'Part of Speech' not in x_feats:
# probably a mistake in unimorph, just pass
continue
# exclude non nominal forms which has plurality tag
# if feat=='Number' and (x_feats['Part of Speech']!='Noun'):
# continue
## Exceptions: drop V.PTCP from the case and gender tests - russian
if (feat in ['Case', 'Gender']) and (lang == 'russian') and x_feats['Part of Speech'] == 'Participle':
continue
## Exceptions: If it is a gender test and the noun does not have a gender feature, ignore
if (feat == 'Gender') and (lang == 'russian') and (x_feats['Part of Speech'] == 'Noun') and (
'Gender' not in x_feats):
continue
if (feat in x_feats) and (x_feats[feat] not in forbid_labs):
# instances.append(x)
# if frequent surface
if x["form"].lower() in vocab:
freq_surf.append(x)
# rare surface
else:
rare_surf.append(x)
# Try to sample 80%-20% if possible
if (len(freq_surf) >= int(reallabel_cnt * 0.8)) and (len(rare_surf) >= int(reallabel_cnt * 0.2)):
shuffled_frequent = random.sample(freq_surf, int(reallabel_cnt * 0.8))
shuffled_rare = random.sample(rare_surf, int(reallabel_cnt * 0.2))
instances = shuffled_frequent + shuffled_rare
# else get all the frequent ones, and sample the rest from the rare ones
elif (len(freq_surf) + len(rare_surf)) >= reallabel_cnt:
shuffled_frequent = random.sample(freq_surf, len(freq_surf))
shuffled_rare = random.sample(rare_surf, int(reallabel_cnt - len(freq_surf)))
instances = shuffled_frequent + shuffled_rare
else:
print("Not enough instances are left")
return False
# get the nonlabel instances
non_instances = get_mixed_surface(feat, lang, vocab, nonlabel_cnt)
if len(non_instances) == 0:
return False
all_instances = instances + non_instances
shuffled_instances = random.sample(all_instances, threshold)
train_inst = shuffled_instances[:int(threshold * 0.7)]
dev_inst = shuffled_instances[int(threshold * 0.7):int(threshold * 0.9)]
test_inst = shuffled_instances[int(threshold * 0.9):]
train_path = os.path.join(savedir, feat, lang, "train.txt")
ensure_dir(train_path)
dev_path = os.path.join(savedir, feat, lang, "dev.txt")
ensure_dir(dev_path)
test_path = os.path.join(savedir, feat, lang, "test.txt")
ensure_dir(test_path)
# Write file
with open(train_path, 'w') as fout:
for inst in train_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if "flag" not in inst:
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
else:
fout.write("\t".join([inst["form"], "None"]) + "\n")
fout.close()
with open(dev_path, 'w') as fout:
for inst in dev_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if "flag" not in inst:
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
else:
fout.write("\t".join([inst["form"], "None"]) + "\n")
fout.close()
with open(test_path, 'w') as fout:
for inst in test_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if "flag" not in inst:
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
else:
fout.write("\t".join([inst["form"], "None"]) + "\n")
fout.close()
return True
def split_for_morph_test(feat, lang, vocab, savedir, threshold=10000):
"""
Splits unimorph data into training, dev and test for the given feature and language.
Precheck 1: We check if the feature can have more than one label beforehand
This function eliminates cases where the form has space - e.g., "anlıyor musun"
This function eliminates cases where the feature is very sparse (seen less than 5 times)
This function eliminates ambiguous forms
:param feat: Case, Valency...
:param lang: turkish, russian, english...
:param vocab: frequent word list from wikipedia
:param savedir: folder to save the splits
:param threshold: fixed to 10K
:return: Default output directory is ./output/feature/lang/train-dev-test.txt
"""
freq_surf = []
rare_surf = []
schema = UnimorphSchema()
data = load_ds("unimorph", lang)
# make a label dictionary for noisy labels
label_cnt = dict()
# make a surface form dictionary for ambiguous
surf_cnt = dict()
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
x_feats = schema.decode_msd(x["msd"])[0]
if feat in x_feats:
# There is a bug with Number: 'Part of Speech'
# Don't include verbs/verb like words to singular/plural test
# if feat=='Number' and (x_feats['Part of Speech']!='Noun'):
# continue
# instances.append(x)
if x["form"].lower() in vocab:
freq_surf.append(x)
# rare surface
else:
rare_surf.append(x)
# for sparse labels
if x_feats[feat] not in label_cnt:
label_cnt[x_feats[feat]] = 1
else:
label_cnt[x_feats[feat]] += 1
# for amb. forms
if x['form'] not in surf_cnt:
surf_cnt[x['form']] = 1
else:
surf_cnt[x['form']] += 1
# if there is any (very) sparse label, exclude those
forbid_labs = []
for label in label_cnt:
if (label_cnt[label]) < 5:
forbid_labs.append(label)
# if there are any surface forms with multiple values, exclude those
amb_form_dict = dict()
for surf, cnt in surf_cnt.items():
if cnt > 1:
amb_form_dict[surf] = 1
# check here if we don't have enough instances or labels already
if ((len(label_cnt) - len(forbid_labs)) < 2) or len(surf_cnt) < threshold:
print("Not enough instances or labels are left")
return False
# Exclude the noisy labels, ambiguous forms and rare words
if ((len(forbid_labs) > 0) or (len(amb_form_dict) > 0)):
freq_surf = []
rare_surf = []
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
# exclude amb. forms
if x["form"] in amb_form_dict:
continue
x_feats = schema.decode_msd(x["msd"])[0]
if 'Part of Speech' not in x_feats:
# probably a mistake in unimorph, just pass
continue
# exclude non nominal forms which has plurality tag
# if feat=='Number' and (x_feats['Part of Speech']!='Noun'):
# continue
if (feat in x_feats) and (x_feats[feat] not in forbid_labs):
# instances.append(x)
# if frequent surface
if x["form"].lower() in vocab:
freq_surf.append(x)
# rare surface
else:
rare_surf.append(x)
# Try to sample 80%-20% if possible
if (len(freq_surf) >= int(threshold * 0.8)) and (len(rare_surf) >= int(threshold * 0.2)):
shuffled_frequent = random.sample(freq_surf, int(threshold * 0.8))
shuffled_rare = random.sample(rare_surf, int(threshold * 0.2))
instances = shuffled_frequent + shuffled_rare
# else get all the frequent ones, and sample the rest from the rare ones
elif (len(freq_surf) + len(rare_surf)) >= threshold:
shuffled_frequent = random.sample(freq_surf, len(freq_surf))
shuffled_rare = random.sample(rare_surf, int(threshold - len(freq_surf)))
instances = shuffled_frequent + shuffled_rare
else:
print("Not enough instances are left")
return False
shuffled_instances = random.sample(instances, threshold)
train_inst = shuffled_instances[:int(threshold * 0.7)]
dev_inst = shuffled_instances[int(threshold * 0.7):int(threshold * 0.9)]
test_inst = shuffled_instances[int(threshold * 0.9):]
train_path = os.path.join(savedir, feat, lang, "train.txt")
ensure_dir(train_path)
dev_path = os.path.join(savedir, feat, lang, "dev.txt")
ensure_dir(dev_path)
test_path = os.path.join(savedir, feat, lang, "test.txt")
ensure_dir(test_path)
# Write file
with open(train_path, 'w') as fout:
for inst in train_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
fout.close()
with open(dev_path, 'w') as fout:
for inst in dev_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
fout.close()
with open(test_path, 'w') as fout:
for inst in test_inst:
x_feats = schema.decode_msd(inst["msd"])[0]
if feat == 'Person':
x_feats[feat] = x_feats[feat] + " " + x_feats['Number']
fout.write("\t".join([inst["form"], x_feats[feat]]) + "\n")
fout.close()
return True
def split_for_number_test(lang, vocab, savedir, threshold=10000):
"""
Create train, dev, test splits for 'number of characters' and 'number of morphemes' tests
:param lang: turkish, russian, english...
:param vocab: frequent word list from wikipedia
:param savedir: folder to save the splits
:param threshold: fixed to 10K
:return: Default output directory is ./output/CharacterCount/lang/train-dev-test.txt and
./output/TagCount/lang/train-dev-test.txt
"""
# instances = []
freq_surf = []
rare_surf = []
schema = UnimorphSchema()
data = load_ds("unimorph", lang)
surf_dict = dict()
for x in data[lang]:
# exclude lemmas with space
if ' ' in x["form"]:
continue
# exclude duplicates
if x['form'] in surf_dict:
continue
# exclude rare words
# if x[raretype].lower() not in vocab:
# continue
# else:
surf_dict[x['form']] = 1
x["num_chars"] = str(len(x["form"]))
x["num_morph_tags"] = str(len(schema.decode_msd(x["msd"])[0]))
if x["form"].lower() in vocab:
freq_surf.append(x)
# rare surface and frequent lemma
else:
rare_surf.append(x)
# instances.append(x)
# Try to sample 80%-20% if possible
if (len(freq_surf) >= int(threshold * 0.8)) and (len(rare_surf) >= int(threshold * 0.2)):
shuffled_frequent = random.sample(freq_surf, int(threshold * 0.8))
shuffled_rare = random.sample(rare_surf, int(threshold * 0.2))
instances = shuffled_frequent + shuffled_rare
# else get all the frequent ones, and sample the rest from the rare ones
elif (len(freq_surf) + len(rare_surf)) >= threshold:
shuffled_frequent = random.sample(freq_surf, len(freq_surf))
shuffled_rare = random.sample(rare_surf, int(threshold - len(freq_surf)))
instances = shuffled_frequent + shuffled_rare
else:
print("Not enough instances are left")
return False
shuffled_instances = random.sample(instances, threshold)
train_inst = shuffled_instances[:int(threshold * 0.7)]
dev_inst = shuffled_instances[int(threshold * 0.7):int(threshold * 0.9)]
test_inst = shuffled_instances[int(threshold * 0.9):]
feat = "CharacterCount"
train_path = os.path.join(savedir, feat, lang, "train.txt")
ensure_dir(train_path)
dev_path = os.path.join(savedir, feat, lang, "dev.txt")
ensure_dir(dev_path)
test_path = os.path.join(savedir, feat, lang, "test.txt")
ensure_dir(test_path)
# Write file
with open(train_path, 'w') as fout:
for inst in train_inst:
fout.write("\t".join([inst["form"], inst["num_chars"]]) + "\n")
fout.close()
with open(dev_path, 'w') as fout:
for inst in dev_inst:
fout.write("\t".join([inst["form"], inst["num_chars"]]) + "\n")
fout.close()
with open(test_path, 'w') as fout:
for inst in test_inst:
fout.write("\t".join([inst["form"], inst["num_chars"]]) + "\n")
fout.close()
feat = "TagCount"
train_path = os.path.join(savedir, feat, lang, "train.txt")
ensure_dir(train_path)
dev_path = os.path.join(savedir, feat, lang, "dev.txt")
ensure_dir(dev_path)
test_path = os.path.join(savedir, feat, lang, "test.txt")
ensure_dir(test_path)
# Write file
with open(train_path, 'w') as fout:
for inst in train_inst:
fout.write("\t".join([inst["form"], inst["num_morph_tags"]]) + "\n")
fout.close()
with open(dev_path, 'w') as fout:
for inst in dev_inst:
fout.write("\t".join([inst["form"], inst["num_morph_tags"]]) + "\n")
fout.close()
with open(test_path, 'w') as fout:
for inst in test_inst:
fout.write("\t".join([inst["form"], inst["num_morph_tags"]]) + "\n")
fout.close()
return True
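Every split function ends with the same shuffle-then-slice step: 70% train, 20% dev, 10% test of the `threshold` instances. As a standalone helper (`train_dev_test_split` is an illustrative name, not from the source):

```python
import random

def train_dev_test_split(instances, threshold=10000):
    # Shuffle, then slice 70% / 20% / 10%, mirroring the split_* functions above
    shuffled = random.sample(instances, threshold)
    train = shuffled[:int(threshold * 0.7)]
    dev = shuffled[int(threshold * 0.7):int(threshold * 0.9)]
    test = shuffled[int(threshold * 0.9):]
    return train, dev, test

train, dev, test = train_dev_test_split(list(range(10000)))
print(len(train), len(dev), len(test))  # 7000 2000 1000
```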
def split_for_nonsense(lang, pseudodir, savedir, type="ort", threshold=10000):
"""
Create splits in two different formats:
Binary: given the word, guess whether it is a pseudo word or not
Old20: given the pseudo word, guess its level of nonsense - approximately
Probably the binary one makes more sense, but there are more options available
:param lang: any supported/processed wuggy language under the generated folder
:param pseudodir: folder of pseudo files generated by wuggy
:param savedir: folder to save the splits
:param type: ort or phon
:param threshold: fixed to 10K
:return: True on success, False otherwise
"""
instances = []
words = []
fin_path = os.path.join(pseudodir, (type + "_" + lang))
# Read file
i = 0
with open(fin_path) as fin:
for line in fin:
if i == 0:
i += 1
continue
x = {}
all_cols = line.rstrip().split("\t")
x["word"] = all_cols[0]
words.append(x["word"])
x["non_sense"] = all_cols[1]
instances.append(x)
fin.close()
# make a vocab
word_vocab = list(set(words))
if len(instances) < threshold:
print("Not enough instances")
return False
if len(word_vocab) < (threshold / 2):
print("Not enough words")
return False
# shuffle is an in-place operation
random.shuffle(word_vocab)
shuffled_instances = random.sample(instances, threshold)
shuffled_labels = np.random.choice([0, 1], size=(threshold,), p=[1. / 2, 1. / 2])
train_inst = shuffled_instances[:int(threshold * 0.7)]
train_labels = shuffled_labels[:int(threshold * 0.7)]
dev_inst = shuffled_instances[int(threshold * 0.7):int(threshold * 0.9)]
dev_labels = shuffled_labels[int(threshold * 0.7):int(threshold * 0.9)]
test_inst = shuffled_instances[int(threshold * 0.9):]
test_labels = shuffled_labels[int(threshold * 0.9):]
feat = "NonSense_Binary"
train_path = os.path.join(savedir, feat, lang, "train.txt")
ensure_dir(train_path)
dev_path = os.path.join(savedir, feat, lang, "dev.txt")
ensure_dir(dev_path)
test_path = os.path.join(savedir, feat, lang, "test.txt")
ensure_dir(test_path)
# Write file
wi = 0
with open(train_path, 'w') as fout:
for inst, label in zip(train_inst, train_labels):
if label == 0:
fout.write("\t".join([inst["non_sense"], str(label)]) + "\n")
elif label == 1:
fout.write("\t".join([word_vocab[wi], str(label)]) + "\n")
wi += 1
fout.close()
with open(dev_path, 'w') as fout:
for inst, label in zip(dev_inst, dev_labels):
if label == 0:
fout.write("\t".join([inst["non_sense"], str(label)]) + "\n")
elif label == 1:
fout.write("\t".join([word_vocab[wi], str(label)]) + "\n")
wi += 1
fout.close()
with open(test_path, 'w') as fout:
for inst, label in zip(test_inst, test_labels):
if label == 0:
fout.write("\t".join([inst["non_sense"], str(label)]) + "\n")
elif label == 1:
fout.write("\t".join([word_vocab[wi], str(label)]) + "\n")
wi += 1
fout.close()
return True
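The binary labeling above interleaves pseudo and real words by drawing a random 0/1 label per slot: 0 emits a pseudo word, 1 consumes the next real word from the shuffled vocabulary. A compact sketch of that loop (`make_binary_nonsense_pairs` is an illustrative helper, not from the source):

```python
import random

def make_binary_nonsense_pairs(pseudo_words, real_words, n):
    # 0 -> pseudo word, 1 -> real word, mirroring split_for_nonsense's write loop
    random.shuffle(real_words)
    labels = [random.randint(0, 1) for _ in range(n)]
    pairs, wi = [], 0
    for pseudo, label in zip(pseudo_words, labels):
        if label == 0:
            pairs.append((pseudo, 0))
        else:
            pairs.append((real_words[wi], 1))
            wi += 1
    return pairs

pairs = make_binary_nonsense_pairs(['blorp', 'trisk', 'quazz'], ['cat', 'dog', 'sun'], 3)
print(len(pairs))  # 3
```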
def main(args):
langs = {'portuguese': 'pt',
'french': 'fr',
'serbo-croatian': 'sh',
'polish': 'pl',
'czech': 'cs',
'modern-greek': 'el',
'catalan': 'ca',
'bulgarian': 'bg',
'danish': 'da',
'estonian': 'et',
'quechua': 'qu',
'swedish': 'sv',
'armenian': 'hy',
'macedonian': 'mk',
'arabic': 'ar',
'dutch': 'nl',
'hungarian': 'hu',
'italian': 'it',
'romanian': 'ro',
'ukranian': 'uk',
'german': 'de',
'finnish': 'fi',
'russian': 'ru',
'turkish': 'tr',
'spanish': 'es'
}
# Language specific vocabulary sizes
# wiki vocabulary sizes: de: 2275234, es: 985668, fi: 730484, tr: 416052, ru: 1888424,
# pt: 592109, fr: 1152450, sh: 454675, pl: 1032578, cs:627842, el: 306450, ca:490566, bg: 334079, da: 312957
# et: 329988, qu: 23074, sv: 1143274, hy: 332673, mk: 176948, ar: 610978, nl: 871023, hu: 793867, it: 871054,
# ro: 354325, uk: 912459
langs_vocab = {'german': 750000, 'finnish': 500000, 'russian': 750000, 'turkish': 500000, 'spanish': 500000, \
'portuguese': 500000, 'french': 750000, 'serbo-croatian': 500000, 'polish': 750000, 'czech': 500000, \
'modern-greek': 500000, 'catalan': 500000, 'bulgarian': 500000, 'danish': 500000, 'estonian': 500000, \
'quechua': 500000, 'swedish': 750000, 'armenian': 500000, 'macedonian': 500000, 'arabic': 500000, \
'dutch': 600000, 'hungarian': 600000, 'italian': 600000, 'romanian': 500000, 'ukranian': 750000}
with open('test_vs_lang_feat_over_10K.pkl', 'rb') as handle:
test_vs_lang = pickle.load(handle)
lang_vs_test = reverse_dict_list(test_vs_lang)
# Load preprocessed statistics
with open('supported_languages_over_10K.pkl', 'rb') as handle:
supported_lang_list = pickle.load(handle)
if args.feat == 1:
for lang in lang_vs_test:
if lang in langs:
embfile = os.path.join('..', "embeddings", "wiki." + langs[lang] + ".vec")
print("Reading vocabulary for lang " + lang)
vocab = load_dict(embfile, maxvoc=langs_vocab[lang])
for test_name in lang_vs_test[lang]:
print("Preparing " + lang + " - " + test_name)
split_for_morph_test_mixed(test_name, lang, vocab, args.nonlabelratio, args.savedir)
if args.common == 1:
# General tests for all supported languages
for lang in supported_lang_list:
if lang in langs:
# get the vocabulary first
embfile = os.path.join('..', "embeddings", "wiki." + langs[lang] + ".vec")
print("Reading vocabulary for lang " + lang)
vocab = load_dict(embfile, maxvoc=langs_vocab[lang])
print("Preparing Character and Tag Count Tests- " + lang)
split_for_number_test(lang, vocab, args.savedir)
print("Preparing POS Test- " + lang)
split_for_morph_test("Part of Speech", lang, vocab, args.savedir)
if args.pseudo == 1:
# Pseudo word tests only for languages with wuggy support
# Orthographic pseudo
ort_lang_lst = ["turkish", "german", "spanish", "english", 'dutch', 'french', 'serbian_latin', 'basque',
'vietnamese']
for lang in ort_lang_lst:
print("Processing orthographic " + lang)
split_for_nonsense(lang, args.pseudodir, args.savedir, type="ort")
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Prepare feature tests
parser.add_argument('--nonlabelratio', type=float, default=0.3)
parser.add_argument('--savedir', type=str, default='./probing_datasets_2')
parser.add_argument('--feat', type=int, default=0)
parser.add_argument('--common', type=int, default=1)
parser.add_argument('--pseudo', type=int, default=1)
parser.add_argument('--pseudodir', type=str, default='./generated_wuggy_files')
args = parser.parse_args()
main(args)
# ---- locale/pot/api/core/_autosummary/pyvista-UniformGrid-1.py | tkoyama010/pyvista-doc-translations | MIT ----
import pyvista
import vtk
import numpy as np
#
grid = pyvista.UniformGrid()
#
vtkgrid = vtk.vtkImageData()
grid = pyvista.UniformGrid(vtkgrid)
#
dims = (10, 10, 10)
grid = pyvista.UniformGrid(dims)
#
spacing = (2, 1, 5)
grid = pyvista.UniformGrid(dims, spacing)
#
origin = (10, 35, 50)
grid = pyvista.UniformGrid(dims, spacing, origin)
| 18.666667 | 49 | 0.714286 | 45 | 336 | 5.333333 | 0.422222 | 0.229167 | 0.458333 | 0.325 | 0.4625 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0.052265 | 0.145833 | 336 | 17 | 50 | 19.764706 | 0.783972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
a22664719551bd9b619aac10e37bfaca231b7de4 | 1,277 | py | Python | app/barber/forms.py | avb76/barbershop | 975b501b0c53600909910619e248dff627acaa22 | [
"MIT"
] | null | null | null | app/barber/forms.py | avb76/barbershop | 975b501b0c53600909910619e248dff627acaa22 | [
"MIT"
] | null | null | null | app/barber/forms.py | avb76/barbershop | 975b501b0c53600909910619e248dff627acaa22 | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SubmitField, TextAreaField, IntegerField
from wtforms.validators import DataRequired, EqualTo, ValidationError, Length
from app.models import Barber


class NewBarberForm(FlaskForm):
    first_name = StringField('First Name', validators=[DataRequired()])
    last_name = StringField('Last Name', validators=[DataRequired()])
    username = StringField('Username', validators=[DataRequired()])
    password = PasswordField('Password', validators=[DataRequired()])
    password2 = PasswordField('Repeat Password', validators=[DataRequired(), EqualTo('password')])
    submit = SubmitField('Add barber')

    def validate_username(self, username):
        user = Barber.query.filter_by(username=username.data).first()
        if user is not None:
            raise ValidationError('Please use a different username.')


class NewServiceForm(FlaskForm):
    name = StringField('Name', validators=[DataRequired()])
    description = TextAreaField('Description', validators=[DataRequired(), Length(min=0, max=300)])
    duration = IntegerField('Duration (minutes)', validators=[DataRequired()])
    price = IntegerField('Price (RON)', validators=[DataRequired()])
    submit = SubmitField('Add service')
| 47.296296 | 99 | 0.736883 | 125 | 1,277 | 7.488 | 0.464 | 0.211538 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004545 | 0.138606 | 1,277 | 26 | 100 | 49.115385 | 0.846364 | 0 | 0 | 0 | 0 | 0 | 0.121378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.142857 | 0.190476 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
bf87f1499d3b717a9457ce2ddb8d7a572916001a | 41,840 | py | Python | tests/util/answers.py | selectel/python-selvpcclient | 99955064215c2be18b568e5e9b34f17087ec304f | [
"Apache-2.0"
] | 7 | 2017-07-15T12:44:23.000Z | 2020-03-24T09:45:11.000Z | tests/util/answers.py | selectel/python-selvpcclient | 99955064215c2be18b568e5e9b34f17087ec304f | [
"Apache-2.0"
] | 13 | 2017-07-05T09:34:09.000Z | 2021-04-20T08:18:46.000Z | tests/util/answers.py | selectel/python-selvpcclient | 99955064215c2be18b568e5e9b34f17087ec304f | [
"Apache-2.0"
] | 9 | 2017-06-29T13:51:35.000Z | 2021-06-26T21:00:49.000Z | from tests.util.params import LOGO_BASE64
PROJECTS_LIST = {
'projects': [{
"id": "15c578ea47a5466db2aeb57dc8443676",
"name": "pr1",
"url": "http://11111.selvpc.ru",
"enabled": True,
"theme": {
"color": "",
"logo": "",
"brand_color": "",
}
}, {
"id": "2c578ea47a5466db2aeb57dc8443676",
"name": "pr2",
"url": "http://11111.selvpc.ru",
"enabled": True,
"theme": {
"color": "",
"logo": "",
"brand_color": "",
}
}]
}
PROJECTS_CREATE = {
'project': {
"id": "15c578ea47a5466db2aeb57dc8443676",
"name": "project1",
"url": "http://11111.selvpc.ru",
"enabled": True,
"custom_url": "",
"theme": {
"color": "",
"logo": "",
"brand_color": "",
}
}
}
PROJECTS_CREATE_WITH_AUTO_QUOTAS = {
'project': {
"id": "15c578ea47a5466db2aeb57dc8443676",
"name": "project1",
"url": "http://11111.selvpc.ru",
"enabled": True,
"quotas": {
"compute_cores": [
{
"region": "ru-1",
"used": 0,
"zone": "ru-1a",
"value": 10,
},
],
},
}
}
PROJECTS_SET = {
'project': {
"id": "15c578ea47a5466db2aeb57dc8443676",
"name": "project1",
"url": "http://11111.selvpc.ru",
"enabled": True,
"custom_url": "www.customhost.no",
"theme": {
"color": "",
"logo": "",
"brand_color": "",
}
}
}
PROJECTS_SET_WITHOUT_CNAME = PROJECTS_CREATE
PROJECTS_SHOW = {
'project': {
"id": "f5c578ea47a5466db2aeb57dc8443676",
"name": "pr1",
"url": "http://11111.selvpc.ru",
"enabled": True,
"quotas": {
"compute_cores": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 10,
"used": 1
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 10,
"used": 0
}
],
"compute_ram": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 1024,
"used": 512
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 2048,
"used": 0
}
]
},
"theme": {
"color": "",
"logo": "",
"brand_color": "",
}
}
}
PROJECTS_SHOW_ROLES = {
'roles': [{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}, {
"project_id": "2_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}]
}
PROJECT_CUSTOMIZE = {
'theme': {
"color": "00ffee",
"logo": LOGO_BASE64,
"brand_color": "00ffee",
}
}
CUSTOMIZATION_CREATE = PROJECT_CUSTOMIZE
CUSTOMIZATION_SHOW = PROJECT_CUSTOMIZE
CUSTOMIZATION_UPDATE = {
'theme': {
"color": "00eeff",
"logo": LOGO_BASE64,
"brand_color": "00ffee",
}
}
CUSTOMIZATION_NO_THEME = {
"theme": {"color": "", "logo": "", "brand_color": ""}
}
LIMITS_SHOW = {
'quotas': {
"compute_cores": [
{
"region": "ru-2",
"zone": "ru-2a",
"value": 10
},
{
"region": "ru-1",
"zone": "ru-1a",
"value": 10
},
{
"region": "ru-3",
"zone": "ru-3c",
"value": 10
},
{
"region": "ru-3",
"zone": "ru-3a",
"value": 10
},
{
"region": "ru-3",
"zone": "ru-3z",
"value": 10
}
],
"compute_ram": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 1024
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 2048
}
],
"volume_gigabytes_fast": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"volume_gigabytes_universal": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"volume_gigabytes_basic": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"image_gigabytes": [
{
"region": "ru-2",
"zone": None,
"value": 10
},
{
"region": "ru-1",
"zone": None,
"value": 10
},
],
"network_floatingips": [
{
"region": "ru-1",
"zone": None,
"value": 5
}
],
"network_subnets_29": [
{
"region": "ru-1",
"zone": None,
"value": 1
}
],
"license_windows_2012_standard": [
{
"region": "ru-1",
"zone": None,
"value": 1
}
]
}
}
LIMITS_SHOW_FREE = {
'quotas': {
"compute_cores": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 10
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 10
}
],
"compute_ram": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 1024
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 2048
}
],
"volume_gigabytes_fast": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"volume_gigabytes_universal": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"volume_gigabytes_basic": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100
}
],
"image_gigabytes": [
{
"region": "ru-1",
"zone": None,
"value": 10
}
],
"network_floatingips": [
{
"region": "ru-1",
"zone": None,
"value": 5
}
],
"network_subnets_29": [
{
"region": "ru-1",
"zone": None,
"value": 1
}
],
"license_windows_2012_standard": [
{
"region": "ru-1",
"zone": None,
"value": 1
}
]
}
}
QUOTAS_OPTIMIZE_ALL_USING = {
'quotas': {
"compute_cores": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 10,
"used": 10
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 10,
"used": 10
}
],
"compute_ram": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 1024,
"used": 1024
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 2048,
"used": 2048
}
],
"volume_gigabytes_fast": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100,
"used": 100
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100,
"used": 100
}
]
}
}
QUOTAS_LIST = {
"quotas": {
"30bde559615740d28bb63ee626fd0f25": {
"compute_cores": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 36
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 14
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 66
}
],
"volume_gigabytes_basic": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 44
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 25
},
{
"region": "ru1",
"used": 0,
"zone": "ru1c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2b",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 81
}
],
"compute_ram": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 9728
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 4608
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 12800
}
],
"volume_gigabytes_fast": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 47
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 26
},
{
"region": "ru1",
"used": 0,
"zone": "ru1c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2b",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 26
}
],
"image_gigabytes": [
{
"region": "ru1",
"used": 0,
"zone": None,
"value": 46
},
{
"region": "ru2",
"used": 0,
"zone": None,
"value": 44
}
]
},
"efae8856aa67477a97847ad595306628": {
"compute_cores": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 11
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 0
}
],
"volume_gigabytes_basic": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 13
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 0
},
{
"region": "ru1",
"used": 0,
"zone": "ru1c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2b",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 0
}
],
"compute_ram": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 3840
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 0
}
],
"volume_gigabytes_fast": [
{
"region": "ru1",
"used": 0,
"zone": "ru1b",
"value": 12
},
{
"region": "ru1",
"used": 0,
"zone": "ru1a",
"value": 0
},
{
"region": "ru1",
"used": 0,
"zone": "ru1c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2b",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2c",
"value": 0
},
{
"region": "ru2",
"used": 0,
"zone": "ru2a",
"value": 0
}
],
"image_gigabytes": [
{
"region": "ru1",
"used": 0,
"zone": None,
"value": 13
},
{
"region": "ru2",
"used": 0,
"zone": None,
"value": 0
}
]
}
}
}
QUOTAS_SET = {
"quotas": {
"volume_gigabytes_basic": [
{
"region": "ru-1",
"used": 0,
"zone": "ru-1b",
"value": 0
},
{
"region": "ru-1",
"used": 0,
"zone": "ru-1a",
"value": 0
},
{
"region": "ru-2",
"used": 0,
"zone": "ru-2a",
"value": 0
}
],
"compute_cores": [
{
"region": "ru-1",
"used": 0,
"zone": "ru-1b",
"value": 1
},
{
"region": "ru-1",
"used": 2,
"zone": "ru-1a",
"value": 2
},
{
"region": "ru-2",
"used": 0,
"zone": "ru-2a",
"value": 1
}
],
"volume_gigabytes_universal": [
{
"region": "ru-1",
"used": 0,
"zone": "ru-1b",
"value": 0
},
{
"region": "ru-1",
"used": 0,
"zone": "ru-1a",
"value": 0
},
{
"region": "ru-2",
"used": 0,
"zone": "ru-2a",
"value": 0
}
],
"compute_ram": [
{
"region": "ru-1",
"used": 0,
"zone": "ru-1b",
"value": 512
},
{
"region": "ru-1",
"used": 1536,
"zone": "ru-1a",
"value": 1536
},
{
"region": "ru-2",
"used": 0,
"zone": "ru-2a",
"value": 0
}
],
"volume_gigabytes_fast": [
{
"region": "ru-1",
"used": 5,
"zone": "ru-1b",
"value": 5
},
{
"region": "ru-1",
"used": 20,
"zone": "ru-1a",
"value": 20
},
{
"region": "ru-2",
"used": 0,
"zone": "ru-2a",
"value": 0
}
],
"image_gigabytes": [
{
"region": "ru-1",
"used": 0,
"zone": None,
"value": 0
},
{
"region": "ru-2",
"used": 0,
"zone": None,
"value": 0
}
]
}
}
QUOTAS_SHOW = {
'quotas': {
"compute_cores": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 10,
"used": 0
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 10,
"used": 0
}
],
"compute_ram": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 1024,
"used": 0
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 2048,
"used": 0
}
],
"volume_gigabytes_fast": [
{
"region": "ru-1",
"zone": "ru-1a",
"value": 100,
"used": 0
},
{
"region": "ru-1",
"zone": "ru-1b",
"value": 100,
"used": 0
}
],
"network_subnets_29_vrrp": [
{
"region": None,
"used": 0,
"value": 0,
"zone": None
}
],
}
}
USERS_LIST = {
'users': [{
"id": "f9fd1d3167ba4641a3190b4848382216",
"name": "user1",
"enabled": True
}, {
"id": "1d3161d317ba4641a3190b4848382216",
"name": "user2",
"enabled": True
}]
}
USERS_CREATE = {
'user': {
"id": "f9fd1d3167ba4641a3190b4848382216",
"name": "user",
"enabled": True
}
}
USERS_ROLE_SHOW = {
'roles': [{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}, {
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}]
}
USERS_EMPTY = {
"field": "user_id",
"error": "invalid_id"
}
USERS_SET = USERS_CREATE
USERS_SHOW = USERS_CREATE
TOKENS_CREATE = {
'token': {
'id': "a9a81014462d499d9d55d3402991f224"
}
}
LICENSES_LIST = {
'licenses': [{
"id": 0,
"region": "ru-1",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"status": "ACTIVE"
}, {
"id": 1,
"region": "ru-2",
"type": "license_windows_2012_standard",
"project_id": "xxxx1cb46966421fb8b3f3fd9b4db75b",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"status": "ACTIVE"
}]
}
LICENSES_SHOW = {
"license": {
"id": 420,
"region": "ru-1",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"status": "ACTIVE"
}
}
LICENSES_CREATE = {
'licenses': [{
"id": 1,
"region": "ru-1",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"status": "ACTIVE"
}, {
"id": 2,
"region": "ru-2",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"status": "ACTIVE"
}]
}
ROLES_LIST = {
'roles': [{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "1900efc62db34decae9f2dbc04a8ce0f"
}, {
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}]
}
ROLES_ADD = {
'role': {
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}
}
FLOATINGIP_LIST = {
"floatingips": [
{
"status": "ACTIVE",
"tenant_id": "a2e6dd715ca24681b9b335d247b83d16",
"servers": [
{
"status": "ACTIVE",
"updated": "2016-01-01T00:00:00Z",
"id": "dc113178-b573-4459-bdde-272ec18140f3",
"name": "Raya"
}
],
"fixed_ip_address": "192.168.0.4",
"floating_ip_address": "12.34.56.78",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"port_id": "dc801110-94f2-4fdd-b71a-74e2d3d8bfd0",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-1"
},
{
"status": "ACTIVE",
"tenant_id": "xxxxdd715ca24681b9b335d247b83d16",
"servers": [
{
"status": "ACTIVE",
"updated": "2016-01-01T00:00:00Z",
"id": "dc113178-b573-4459-bdde-272ec18140f3",
"name": "Raya"
}
],
"fixed_ip_address": "192.168.0.4",
"floating_ip_address": "12.34.56.78",
"project_id": "xxxxdd715ca24681b9b335d247b83d16",
"port_id": "dc801110-94f2-4fdd-b71a-74e2d3d8bfd0",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-2"
}
]
}
FLOATINGIP_ADD = {
"floatingips": [
{
"status": "ACTIVE",
"tenant_id": "a2e6dd715ca24681b9b335d247b83d16",
"servers": [
{
"status": "ACTIVE",
"updated": "2016-01-01T00:00:00Z",
"id": "dc113178-b573-4459-bdde-272ec18140f3",
"name": "Raya"
}
],
"fixed_ip_address": "192.168.0.4",
"floating_ip_address": "12.34.56.78",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"port_id": "dc801110-94f2-4fdd-b71a-74e2d3d8bfd0",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-1"
}
]
}
FLOATINGIP_SHOW = {
"floatingip": {
"status": "ACTIVE",
"servers": [
{
"status": "ACTIVE",
"updated": "2016-01-01T00:00:00Z",
"id": "dc113178-b573-4459-bdde-272ec18140f3",
"name": "Raya"
}
],
"floating_ip_address": "12.34.56.78",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-1"
}
}
SUBNET_LIST = {
"subnets": [
{
"id": 20,
"region": "ru-1",
"cidr": "192.168.5.32/29",
"enabled": True,
"network_id": "70e73ef1-bade-4377-a52c-4a8cff843170",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"status": "ACTIVE",
"subnet_id": "61053c51-93f4-4d64-9a94-d4f88d1ee88f",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
]
}, {
"id": 21,
"region": "ru-2",
"cidr": "192.168.5.32/29",
"enabled": True,
"network_id": "70e73ef1-bade-4377-a52c-4a8cff843170",
"project_id": "xxxxcb46966421fb8b3f3fd9b4db75b",
"status": "ACTIVE",
"subnet_id": "61053c51-93f4-4d64-9a94-d4f88d1ee88f",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
]
}
]
}
SUBNET_ADD = {
"subnets": [
{
"id": 20,
"region": "ru-1",
"cidr": "192.168.5.32/29",
"enabled": True,
"network_id": "70e73ef1-bade-4377-a52c-4a8cff843170",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"status": "ACTIVE",
"subnet_id": "61053c51-93f4-4d64-9a94-d4f88d1ee88f",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
]
}, {
"id": 21,
"region": "ru-1",
"cidr": "192.168.5.32/29",
"enabled": True,
"network_id": "70e73ef1-bade-4377-a52c-4a8cff843170",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"status": "ACTIVE",
"subnet_id": "61053c51-93f4-4d64-9a94-d4f88d1ee88f",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
]
}
]
}
SUBNET_SHOW = {
"subnet": {
"status": "ACTIVE",
"subnet_id": "6145fba6-dbe2-47af-bad2-6d1dcese5996",
"region": "ru1",
"servers": [
{
"id": "177b0416-2830-4557-898a-581c1147f0ff",
"updated": "2016-01-01T00:00:00Z",
"status": "PAUSED",
"name": "s1"
}
],
"network_id": "47e4a3e8-a2c0-400c-a20c-2b3bf2f8b681",
"cidr": "192.168.5.0/29",
"project_id": "7810f45ae1be4a1f8ab3e95aef2e3ddd",
"id": 420,
}
}
CAPABILITIES_LIST = {
"capabilities": {
"licenses": [
{
"availability": [
"ru-1",
"ru-2"
],
"type": "license_windows_2012_standard"
}
],
"regions": [
{
"description": "Moscow",
"is_default": True,
"name": "ru-2",
"zones": [
{
"description": "Berzarina-1 (ru-2a)",
"enabled": True,
"is_default": True,
"is_private": False,
"name": "ru-2a"
}
]
},
{
"description": "Saint Petersburg",
"is_default": False,
"name": "ru-1",
"zones": [
{
"description": "Dubrovka-1 (ru-1a)",
"enabled": True,
"is_default": False,
"is_private": False,
"name": "ru-1a"
},
{
"description": "Dubrovka-2 (ru-1b)",
"enabled": True,
"is_default": True,
"is_private": False,
"name": "ru-1b"
}
]
}
],
"resources": [
{
"name": "network_floatingips",
"quota_scope": None,
"quotable": False,
"unbillable": False,
},
{
"name": "volume_gigabytes_universal",
"quota_scope": "zone",
"quotable": True,
"unbillable": True,
},
{
"name": "volume_gigabytes_basic",
"quota_scope": "zone",
"quotable": True,
"unbillable": False,
},
{
"name": "compute_ram",
"quota_scope": "zone",
"quotable": True,
"unbillable": False,
},
{
"name": "volume_gigabytes_fast",
"quota_scope": "zone",
"quotable": True,
"unbillable": False,
},
{
"name": "license_windows_2012_standard",
"quota_scope": None,
"quotable": False,
"unbillable": False,
},
{
"name": "image_gigabytes",
"quota_scope": "region",
"quotable": True,
"unbillable": False,
},
{
"name": "network_subnets_29",
"quota_scope": None,
"quotable": False,
"unbillable": False,
},
{
"name": "compute_cores",
"quota_scope": "zone",
"quotable": True,
"unbillable": False,
},
{
"name": "network_subnets_25",
"quota_scope": None,
"quotable": False,
"unbillable": True,
}
],
"subnets": [
{
"availability": [
"ru-1",
"ru-2"
],
"prefix_length": "29",
"type": "ipv4"
}
],
"traffic": {
"granularities": [
{
"granularity": 3600,
"timespan": 96
},
{
"granularity": 1,
"timespan": 32
},
{
"granularity": 86400,
"timespan": 1825
}
]
}
}
}
VRRP_ADD = {
"vrrp_subnets":
[{
"status": "DOWN",
"cidr": "78.155.195.8/29",
"project_id": "b63ab68796e34858befb8fa2a8b1e12a",
"id": 6,
"subnets": [
{
"network_id": "827fe85f-a379-4f28-a426-2ddf7ddab6a2",
"subnet_id": "6595e66c-b14e-4167-9a48-6be6fb407c63",
"region": "ru-1"
},
{
"network_id": "68b6a3e0-d016-4248-b8de-03cb20cacb2c",
"subnet_id": "9e8cf4bb-a385-401d-bda4-395f3985ead1",
"region": "ru-2"
}
],
}]
}
VRRP_SHOW = {
"vrrp_subnet": {
"status": "DOWN",
"subnets": [
{
"network_id": "1eb0e13d-0ce6-4c00-99e8-45e4787766fd",
"subnet_id": "053f7817-6804-4fad-8f6b-0d1edef074ed",
"region": "ru-1"
},
{
"network_id": "74694b81-4203-4599-ae71-029182f9cef9",
"subnet_id": "cc1d50b9-4890-4173-a750-4537c1f747a2",
"region": "ru-2"
}
],
"servers": [],
"cidr": "78.155.195.0/29",
"project_id": "b63ab68796e34858befb8fa2a8b1e12a",
"id": 2,
"master_region": "ru-1",
"slave_region": "ru-2"
}
}
VRRP_LIST = {
"vrrp_subnets": [
{
"status": "DOWN",
"subnets": [],
"servers": [],
"cidr": "78.155.196.0/29",
"project_id": "x63ab68796e34858befb8fa2a8b1e12a",
"id": 3,
"master_region": "ru-1",
"slave_region": "ru-2"
},
{
"status": "DOWN",
"subnets": [],
"servers": [],
"cidr": "78.155.195.0/29",
"project_id": "b63ab68796e34858befb8fa2a8b1e12a",
"id": 2,
"master_region": "ru-1",
"slave_region": "ru-2"
}
]
}
QUOTAS_PARTIAL = {
"quotas": {
"fail": [
{
"region": "ru-1",
"resource": "compute_ram",
"zone": "ru-1b",
"value": 2048
}
],
"ok": [
{
"region": "ru-1",
"resource": "image_gigabytes",
"zone": None,
"used": 0,
"value": 400
}
],
"error": "multi_status"
}
}
QUOTAS_PARTIAL_RESULT = {
"quotas": {
'image_gigabytes': [
{
"zone": None,
"region": "ru-1",
"value": 400,
"used": 0}
]
}
}
FLOATING_IPS_PARTIAL = {
"floatingips": {
"fail": [
{
"region": "ru-2",
"quantity": 1
}
],
"ok": [
{
"status": "DOWN",
"floating_ip_address": "12.34.56.77",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-1"
},
{
"status": "DOWN",
"floating_ip_address": "12.34.56.78",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97b",
"region": "ru-1"
}
],
"error": "multi_status"
}
}
FLOATING_IPS_PARTIAL_RESULT = [
{
"status": "DOWN",
"floating_ip_address": "12.34.56.77",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97a",
"region": "ru-1"
},
{
"status": "DOWN",
"floating_ip_address": "12.34.56.78",
"project_id": "a2e6dd715ca24681b9b335d247b83d16",
"id": "0d987b46-bad5-41b7-97e3-bac9974aa97b",
"region": "ru-1"
}
]
LICENSES_PARTIAL = {
'licenses':
{
"fail": [
{
"quantity": 1,
"region": "ru-2",
"type": "license_windows_2012_standard"
}
],
"ok": [
{
"id": 1,
"region": "ru-1",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"status": "DOWN"
}
],
"error": "multi_status"
}
}
LICENSES_PARTIAL_RESULT = [
{
"id": 1,
"region": "ru-1",
"type": "license_windows_2012_standard",
"project_id": "e7081cb46966421fb8b3f3fd9b4db75b",
"status": "DOWN"
}
]
ROLES_PARTIAL = {
'roles': {
"fail": [{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "1900efc62db34decae9f2dbc04a8ce0f"
}],
"ok": [
{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}
],
"error": "multi_status"
}
}
ROLES_PARTIAL_RESULT = [
{
"project_id": "1_7354286c9ebf464d86efc16fb56d4fa3",
"user_id": "5900efc62db34decae9f2dbc04a8ce0f"
}
]
SUBNETS_PARTIAL = {
"subnets": {
"fail": [
{
"region": "ru-2",
"prefix_length": 29,
"quantity": 1,
"type": "ipv4"
}
],
"ok": [
{
"status": "DOWN",
"subnet_id": "6145fba6-dbe2-47af-bad2-6d1dcese5996",
"region": "ru-1",
"network_id": "47e4a3e8-a2c0-400c-a20c-2b3bf2f8b681",
"cidr": "192.168.5.0/29",
"project_id": "7810f45ae1be4a1f8ab3e95aef2e3ddd",
"id": 420
}
],
"error": "multi_status"
}
}
SUBNETS_PARTIAL_RESULT = [
{
"status": "DOWN",
"subnet_id": "6145fba6-dbe2-47af-bad2-6d1dcese5996",
"region": "ru-1",
"network_id": "47e4a3e8-a2c0-400c-a20c-2b3bf2f8b681",
"cidr": "192.168.5.0/29",
"project_id": "7810f45ae1be4a1f8ab3e95aef2e3ddd",
"id": 420
}
]
KEYPAIR_LIST = {
"keypairs": [
{
"name": "User_1",
"public_key": "ssh-rsa ... user@name",
"regions": [
"ru-1"
],
"user_id": "88ad5569d8c64f828ac3d2efa4e552dd"
},
{
"name": "User_2",
"public_key": "ssh-rsa ... user@name",
"regions": [
"ru-2"
],
"user_id": "88ad5569d8c64f828ac3d2efa4e552dd"
}
]
}
KEYPAIR_ADD = {
"keypair": [
{
"name": "MOSCOW_KEY",
"region": "ru-1",
"user_id": "88ad5569d8c64f828ac3d2efa4e552dd"
},
{
"name": "MOSCOW_KEY",
"region": "ru-2",
"user_id": "88ad5569d8c64f828ac3d2efa4e552dd"
}
]
}
| 26.199123 | 73 | 0.328489 | 2,700 | 41,840 | 4.959259 | 0.115556 | 0.062733 | 0.054444 | 0.041748 | 0.780433 | 0.747349 | 0.72233 | 0.635624 | 0.599701 | 0.560119 | 0 | 0.169986 | 0.526028 | 41,840 | 1,596 | 74 | 26.215539 | 0.505219 | 0 | 0 | 0.589112 | 0 | 0 | 0.304039 | 0.109106 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000648 | 0 | 0.000648 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bfbd4ea24f52b1f2ee0d10cbddf9da3af3c314fa | 1,181 | py | Python | pinolo/tasks.py | piger/pinolo | 9fca121adccb759c7f50c8862813c279c17b6d48 | [
"BSD-3-Clause"
] | 2 | 2016-04-13T07:12:28.000Z | 2018-04-10T15:14:25.000Z | pinolo/tasks.py | piger/pinolo | 9fca121adccb759c7f50c8862813c279c17b6d48 | [
"BSD-3-Clause"
] | null | null | null | pinolo/tasks.py | piger/pinolo | 9fca121adccb759c7f50c8862813c279c17b6d48 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
    pinolo.tasks
    ~~~~~~~~~~~~

    A Task object is a `threading.Thread` instance that will be executed
    without blocking the main thread.

    This is useful to perform potentially blocking actions like fetching
    resources via HTTP.

    :copyright: (c) 2013 Daniel Kertesz
    :license: BSD, see LICENSE for more details.
"""
import threading


class Task(threading.Thread):
    """A task is an execution unit that will be run in a separate thread
    so that it does not block the main thread (handling irc connections).
    """
    def __init__(self, event, *args, **kwargs):
        self.event = event
        super(Task, self).__init__(*args, **kwargs)

    @property
    def queue(self):
        return self.event.client.bot.coda

    @property
    def reply(self):
        return self.event.reply

    def run(self):
        raise RuntimeError("Must be implemented!")

    def put_results(self, *data):
        """Task output will be sent to the main thread via the configured
        queue; data should be a string containing the full output, which will
        later be split on newlines."""
        self.queue.put(tuple(data))
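# The worker-thread-plus-result-queue pattern used by Task above can be
# sketched with only the standard library. EchoTask, out_queue, and payload
# are illustrative names, not part of pinolo; a real Task gets its queue from
# the bot via the `queue` property instead.

```python
import queue
import threading


class EchoTask(threading.Thread):
    """Minimal sketch: do (potentially blocking) work in a thread,
    then hand a result tuple back to the main thread via a queue."""

    def __init__(self, out_queue, payload):
        super(EchoTask, self).__init__()
        self.out_queue = out_queue
        self.payload = payload

    def run(self):
        # Simulate a blocking action, then publish the result.
        result = self.payload.upper()
        self.out_queue.put((result,))


q = queue.Queue()
t = EchoTask(q, "hello")
t.start()
t.join()
print(q.get())  # ('HELLO',)
```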
| 28.804878 | 76 | 0.647756 | 157 | 1,181 | 4.815287 | 0.566879 | 0.047619 | 0.026455 | 0.050265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005669 | 0.253175 | 1,181 | 40 | 77 | 29.525 | 0.851474 | 0.523285 | 0 | 0.133333 | 0 | 0 | 0.040984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.066667 | 0.133333 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
449c69aeb84e825bf01bd5371b20267c504112a5 | 134 | py | Python | example/cat/urls.py | govindsinghr3/s3directmul | a8e28d4af491dfbfaf3bc18c44efb3467e9b1603 | [
"MIT"
] | 1 | 2020-02-28T12:27:01.000Z | 2020-02-28T12:27:01.000Z | example/cat/urls.py | govindsinghr3/s3directmul | a8e28d4af491dfbfaf3bc18c44efb3467e9b1603 | [
"MIT"
] | null | null | null | example/cat/urls.py | govindsinghr3/s3directmul | a8e28d4af491dfbfaf3bc18c44efb3467e9b1603 | [
"MIT"
] | null | null | null | from django.conf.urls import patterns, url
from .views import MyView
urlpatterns = [
    url('', MyView.as_view(), name='form'),
]
| 14.888889 | 43 | 0.679104 | 18 | 134 | 5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171642 | 134 | 8 | 44 | 16.75 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
44c197ce3c0345e83eea3cd5f823cbdd5413b901 | 23 | py | Python | boxuegu/boxuegu/apps/users/constants.py | 1111111111122222222223333333333/Django_boxuegu | cec4451bcc55f04013efd4e4cb7c4098a0fd5056 | [
"MIT"
] | null | null | null | boxuegu/boxuegu/apps/users/constants.py | 1111111111122222222223333333333/Django_boxuegu | cec4451bcc55f04013efd4e4cb7c4098a0fd5056 | [
"MIT"
] | 6 | 2021-02-08T20:30:13.000Z | 2022-03-11T23:50:00.000Z | boxuegu/boxuegu/apps/users/constants.py | 1111111111122222222223333333333/Django_boxuegu | cec4451bcc55f04013efd4e4cb7c4098a0fd5056 | [
"MIT"
] | null | null | null | EMAIL_EXPIRES = 60 * 2
| 11.5 | 22 | 0.695652 | 4 | 23 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0.217391 | 23 | 1 | 23 | 23 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
44cfdb489562c4d561d19a7518f0065fa73d7dde | 9,573 | py | Python | pyaff4/rdfvalue.py | rmddcr/pyaff4 | 398941ac51754900d9619c954d216bf14b9811ea | [
"Apache-2.0"
] | null | null | null | pyaff4/rdfvalue.py | rmddcr/pyaff4 | 398941ac51754900d9619c954d216bf14b9811ea | [
"Apache-2.0"
] | null | null | null | pyaff4/rdfvalue.py | rmddcr/pyaff4 | 398941ac51754900d9619c954d216bf14b9811ea | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
"""RDF Values are responsible for serialization."""
from __future__ import unicode_literals
from future import standard_library
standard_library.install_aliases()
from builtins import str
from builtins import object
import functools
import urllib.parse
import urllib.request, urllib.parse, urllib.error
import binascii
import posixpath
import rdflib
from pyaff4 import registry
from pyaff4 import utils
# pylint: disable=protected-access
class Memoize(object):
def __call__(self, f):
f.memo_pad = {}
@functools.wraps(f)
def Wrapped(self, *args):
key = tuple(args)
if len(f.memo_pad) > 100:
f.memo_pad.clear()
if key not in f.memo_pad:
f.memo_pad[key] = f(self, *args)
return f.memo_pad[key]
return Wrapped
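The Memoize decorator above can be exercised on its own. Below is a self-contained copy for illustration only; the Parser class and its calls counter are invented here and are not part of pyaff4:

```python
import functools

class Memoize(object):
    """Cache a method's results keyed on its positional arguments."""
    def __call__(self, f):
        f.memo_pad = {}

        @functools.wraps(f)
        def Wrapped(self_, *args):
            key = tuple(args)
            # Bound the cache so long-running processes don't grow it forever.
            if len(f.memo_pad) > 100:
                f.memo_pad.clear()
            if key not in f.memo_pad:
                f.memo_pad[key] = f(self_, *args)
            return f.memo_pad[key]
        return Wrapped

class Parser(object):
    calls = 0

    @Memoize()
    def parse(self, value):
        Parser.calls += 1
        return value.upper()

p = Parser()
assert p.parse("abc") == "ABC"
assert p.parse("abc") == "ABC"   # second call is served from memo_pad
assert Parser.calls == 1
```

Note that the cache keys on the positional arguments only, not on the instance, so all instances of a class share one memo pad per decorated method.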
class RDFValue(object):
datatype = ""
def __init__(self, initializer=None):
self.Set(initializer)
def GetRaptorTerm(self):
return rdflib.Literal(self.SerializeToString(),
datatype=self.datatype)
def SerializeToString(self):
"""Serializes to a sequence of bytes."""
return ""
def UnSerializeFromString(self, string):
"""Unserializes from bytes."""
raise NotImplementedError
def Set(self, string):
raise NotImplementedError
def __bytes__(self):
return self.SerializeToString()
def __eq__(self, other):
return utils.SmartStr(self) == utils.SmartStr(other)
def __req__(self, other):
return utils.SmartStr(self) == utils.SmartStr(other)
def __hash__(self):
return hash(self.SerializeToString())
class RDFBytes(RDFValue):
value = b""
datatype = rdflib.XSD.hexBinary
def SerializeToString(self):
return binascii.hexlify(self.value)
def UnSerializeFromString(self, string):
self.Set(binascii.unhexlify(string))
def Set(self, data):
self.value = data
    def __eq__(self, other):
        if isinstance(other, RDFBytes):
            return self.value == other.value
        return NotImplemented
class XSDString(RDFValue):
"""A unicode string."""
datatype = rdflib.XSD.string
def SerializeToString(self):
return utils.SmartStr(self.value)
def UnSerializeFromString(self, string):
self.Set(utils.SmartUnicode(string))
def Set(self, data):
self.value = utils.SmartUnicode(data)
def __str__(self):
return self.value
@functools.total_ordering
class XSDInteger(RDFValue):
datatype = rdflib.XSD.integer
def SerializeToString(self):
return utils.SmartStr(self.value)
def UnSerializeFromString(self, string):
self.Set(int(string))
def Set(self, data):
self.value = int(data)
def __eq__(self, other):
if isinstance(other, XSDInteger):
return self.value == other.value
return self.value == other
def __int__(self):
return self.value
def __long__(self):
return int(self.value)
def __cmp__(self, o):
return self.value - o
def __add__(self, o):
return self.value + o
def __lt__(self, o):
return self.value < o
def __str__(self):
return str(self.value)
class RDFHash(XSDString):
# value is the hex encoded digest.
def __eq__(self, other):
if isinstance(other, RDFHash):
if self.datatype == other.datatype:
return self.value == other.value
return utils.SmartStr(self.value) == utils.SmartStr(other)
def __ne__(self, other):
return not self == other
def digest(self):
return binascii.unhexlify(self.value)
class SHA512Hash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#SHA512")
class SHA256Hash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#SHA256")
class SHA1Hash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#SHA1")
class Blake2bHash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#Blake2b")
class MD5Hash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#MD5")
class SHA512BlockMapHash(RDFHash):
datatype = rdflib.URIRef("http://aff4.org/Schema#blockMapHashSHA512")
class URN(RDFValue):
"""Represent a URN.
According to RFC1738 URLs must be encoded in ASCII. Therefore the
internal representation of a URN is bytes. When creating the URN
    from other forms (e.g. filenames), we assume UTF8 encoding if the
filename is a unicode string.
"""
# The encoded URN as a unicode string.
value = None
original_filename = None
@classmethod
def FromFileName(cls, filename):
"""Parse the URN from filename.
Filename may be a unicode string, in which case it will be
UTF8 encoded into the URN. URNs are always ASCII.
"""
result = cls("file:%s" % urllib.request.pathname2url(filename))
result.original_filename = filename
return result
@classmethod
def NewURNFromFilename(cls, filename):
return cls.FromFileName(filename)
def ToFilename(self):
# For file: urls we exactly reverse the conversion applied in
# FromFileName.
if self.value.startswith("file:"):
return urllib.request.url2pathname(self.value[5:])
components = self.Parse()
if components.scheme == "file":
return components.path
def GetRaptorTerm(self):
return rdflib.URIRef(self.value)
def SerializeToString(self):
components = self.Parse()
return utils.SmartStr(urllib.parse.urlunparse(components))
def UnSerializeFromString(self, string):
utils.AssertStr(string)
self.Set(utils.SmartUnicode(string))
return self
def Set(self, data):
if data is None:
return
elif isinstance(data, URN):
self.value = data.value
else:
utils.AssertUnicode(data)
self.value = data
def Parse(self):
return self._Parse(self.value)
# URL parsing seems to be slow in Python so we cache it as much as possible.
@Memoize()
def _Parse(self, value):
components = urllib.parse.urlparse(value)
        # don't normalise path for http URIs
if components.scheme and not components.scheme == "http":
normalized_path = posixpath.normpath(components.path)
if normalized_path == ".":
normalized_path = ""
components = components._replace(path=normalized_path)
if not components.scheme:
# For file:// URNs, we need to parse them from a filename.
components = components._replace(
netloc="",
path=urllib.request.pathname2url(value),
scheme="file")
self.original_filename = value
return components
def Scheme(self):
components = self.Parse()
return components.scheme
def Append(self, component, quote=True):
components = self.Parse()
if quote:
component = urllib.parse.quote(component)
# Work around usual posixpath.join bug.
component = component.lstrip("/")
new_path = posixpath.normpath(posixpath.join(
"/", components.path, component))
components = components._replace(path=new_path)
return URN(urllib.parse.urlunparse(components))
def RelativePath(self, urn):
urn_value = str(urn)
if urn_value.startswith(self.value):
return urn_value[len(self.value):]
def __str__(self):
return self.value
def __lt__(self, other):
return self.value < utils.SmartUnicode(other)
def __repr__(self):
return "<%s>" % self.value
def AssertURN(urn):
if not isinstance(urn, URN):
raise TypeError("Expecting a URN.")
registry.RDF_TYPE_MAP.update({
rdflib.XSD.hexBinary: RDFBytes,
rdflib.XSD.string: XSDString,
rdflib.XSD.integer: XSDInteger,
rdflib.XSD.int: XSDInteger,
rdflib.XSD.long: XSDInteger,
rdflib.URIRef("http://aff4.org/Schema#SHA512"): SHA512Hash,
rdflib.URIRef("http://aff4.org/Schema#SHA256"): SHA256Hash,
rdflib.URIRef("http://aff4.org/Schema#SHA1"): SHA1Hash,
rdflib.URIRef("http://aff4.org/Schema#MD5"): MD5Hash,
rdflib.URIRef("http://aff4.org/Schema#Blake2b"): Blake2bHash,
rdflib.URIRef("http://aff4.org/Schema#blockMapHashSHA512"): SHA512BlockMapHash,
rdflib.URIRef("http://afflib.org/2009/aff4#SHA512"): SHA512Hash,
rdflib.URIRef("http://afflib.org/2009/aff4#SHA256"): SHA256Hash,
rdflib.URIRef("http://afflib.org/2009/aff4#SHA1"): SHA1Hash,
rdflib.URIRef("http://afflib.org/2009/aff4#MD5"): MD5Hash,
rdflib.URIRef("http://afflib.org/2009/aff4#blockMapHashSHA512"): SHA512BlockMapHash
})
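The path handling in URN._Parse can be sketched without the pyaff4/rdflib plumbing. This is an illustrative stdlib-only reduction (normalize_urn is a name invented here, not part of the library); it shows why http URIs bypass normalisation while other schemes have their paths cleaned:

```python
import posixpath
import urllib.parse

def normalize_urn(value):
    """Mirror of the normalisation step in URN._Parse, minus the pyaff4 plumbing."""
    components = urllib.parse.urlparse(value)
    # http URIs are left untouched; every other scheme gets posixpath.normpath.
    if components.scheme and components.scheme != "http":
        normalized_path = posixpath.normpath(components.path)
        if normalized_path == ".":
            normalized_path = ""
        components = components._replace(path=normalized_path)
    return urllib.parse.urlunparse(components)

# Non-http schemes have "." and ".." segments collapsed...
assert normalize_urn("aff4://volume/a/../b") == "aff4://volume/b"
# ...while http URLs keep their path verbatim.
assert normalize_urn("http://host/a/../b") == "http://host/a/../b"
```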
| 27.587896 | 87 | 0.651311 | 1,131 | 9,573 | 5.412025 | 0.240495 | 0.045581 | 0.044437 | 0.039209 | 0.319392 | 0.258781 | 0.231825 | 0.131024 | 0.06829 | 0.06829 | 0 | 0.017936 | 0.242871 | 9,573 | 346 | 88 | 27.66763 | 0.826573 | 0.152199 | 0 | 0.240566 | 0 | 0 | 0.075331 | 0 | 0 | 0 | 0 | 0 | 0.018868 | 1 | 0.231132 | false | 0 | 0.056604 | 0.108491 | 0.603774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
44d4d050ac098be9e394e56783387233beb62b57 | 190 | py | Python | LittlePerformance/module/_tmp/test.py | TatsuyaOGth/PepperScript | bf2ef38976afbbd679b9d81b36d1035b392521d9 | [
"CC0-1.0"
] | null | null | null | LittlePerformance/module/_tmp/test.py | TatsuyaOGth/PepperScript | bf2ef38976afbbd679b9d81b36d1035b392521d9 | [
"CC0-1.0"
] | null | null | null | LittlePerformance/module/_tmp/test.py | TatsuyaOGth/PepperScript | bf2ef38976afbbd679b9d81b36d1035b392521d9 | [
"CC0-1.0"
] | null | null | null | # -*- encoding: UTF-8 -*-
from naoqi import ALProxy
IP = '127.0.0.1'
PORT = 49340
motion_proxy = ALProxy("ALMotion",IP,PORT)
motion_proxy.openHand('LHand')
motion_proxy.openHand('RHand')
| 17.272727 | 42 | 0.705263 | 28 | 190 | 4.678571 | 0.678571 | 0.251908 | 0.290076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071856 | 0.121053 | 190 | 10 | 43 | 19 | 0.712575 | 0.121053 | 0 | 0 | 0 | 0 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
44e32735f8a27f27f94b6e9b157e83a3efe9a0ed | 1,874 | py | Python | qcloudsdkapigateway/RunApiRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkapigateway/RunApiRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkapigateway/RunApiRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from qcloudsdkcore.request import Request
class RunApiRequest(Request):
def __init__(self):
super(RunApiRequest, self).__init__(
'apigateway', 'qcloudcliV1', 'RunApi', 'apigateway.api.qcloud.com')
def get_apiId(self):
return self.get_params().get('apiId')
def set_apiId(self, apiId):
self.add_param('apiId', apiId)
def get_contentType(self):
return self.get_params().get('contentType')
def set_contentType(self, contentType):
self.add_param('contentType', contentType)
def get_requestBody(self):
return self.get_params().get('requestBody')
def set_requestBody(self, requestBody):
self.add_param('requestBody', requestBody)
def get_requestBodyDict(self):
return self.get_params().get('requestBodyDict')
def set_requestBodyDict(self, requestBodyDict):
self.add_param('requestBodyDict', requestBodyDict)
def get_requestHeader(self):
return self.get_params().get('requestHeader')
def set_requestHeader(self, requestHeader):
self.add_param('requestHeader', requestHeader)
def get_requestMethod(self):
return self.get_params().get('requestMethod')
def set_requestMethod(self, requestMethod):
self.add_param('requestMethod', requestMethod)
def get_requestPath(self):
return self.get_params().get('requestPath')
def set_requestPath(self, requestPath):
self.add_param('requestPath', requestPath)
def get_requestQuery(self):
return self.get_params().get('requestQuery')
def set_requestQuery(self, requestQuery):
self.add_param('requestQuery', requestQuery)
def get_serviceId(self):
return self.get_params().get('serviceId')
def set_serviceId(self, serviceId):
self.add_param('serviceId', serviceId)
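The getter/setter pairs above all funnel through get_params/add_param on the base Request. A minimal stand-in sketch of that pattern (this Request class is a guess at qcloudsdkcore's behaviour, not its actual implementation):

```python
class Request(object):
    """Hypothetical stand-in for qcloudsdkcore's Request: a flat param dict."""
    def __init__(self):
        self._params = {}

    def get_params(self):
        return self._params

    def add_param(self, key, value):
        self._params[key] = value

class RunApiRequest(Request):
    # One representative getter/setter pair; the real class repeats this
    # shape for every API parameter.
    def get_apiId(self):
        return self.get_params().get('apiId')

    def set_apiId(self, apiId):
        self.add_param('apiId', apiId)

req = RunApiRequest()
req.set_apiId('api-123')
assert req.get_apiId() == 'api-123'
assert req.get_params() == {'apiId': 'api-123'}
```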
| 29.28125 | 79 | 0.6873 | 205 | 1,874 | 6.068293 | 0.165854 | 0.043408 | 0.101286 | 0.12299 | 0.188103 | 0.188103 | 0 | 0 | 0 | 0 | 0 | 0.001322 | 0.192636 | 1,874 | 63 | 80 | 29.746032 | 0.820886 | 0.011206 | 0 | 0 | 0 | 0 | 0.136143 | 0.013506 | 0 | 0 | 0 | 0 | 0 | 1 | 0.463415 | false | 0 | 0.02439 | 0.219512 | 0.731707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7818bb22a2a0106d2b35255ef5c676f48223ad0c | 138 | py | Python | precedent/models/bad_query.py | lewisjared/precedent | 6447634c6961f44b058a0379b782c8d46a21560e | [
"MIT"
] | null | null | null | precedent/models/bad_query.py | lewisjared/precedent | 6447634c6961f44b058a0379b782c8d46a21560e | [
"MIT"
] | null | null | null | precedent/models/bad_query.py | lewisjared/precedent | 6447634c6961f44b058a0379b782c8d46a21560e | [
"MIT"
] | null | null | null | from django.db import models
class BadQuery(models.Model):
date = models.DateTimeField(auto_now=True)
query = models.TextField() | 23 | 46 | 0.746377 | 18 | 138 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 138 | 6 | 47 | 23 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7832c8a18523ffb201793709a64fdbf272669118 | 1,738 | py | Python | addons/Sprytile-6b68d00/rx/linq/observable/transduce.py | trisadmeslek/V-Sekai-Blender-tools | 0d8747387c58584b50c69c61ba50a881319114f8 | [
"MIT"
] | 733 | 2017-08-22T09:47:54.000Z | 2022-03-27T23:56:52.000Z | rx/linq/observable/transduce.py | asheraryam/Sprytile | c63be50d14b07192ff134ceab256f0d69b9c4c92 | [
"MIT"
] | 74 | 2017-08-16T09:13:05.000Z | 2022-03-15T02:31:49.000Z | rx/linq/observable/transduce.py | asheraryam/Sprytile | c63be50d14b07192ff134ceab256f0d69b9c4c92 | [
"MIT"
] | 77 | 2017-09-14T16:56:11.000Z | 2022-03-27T13:55:16.000Z | """Transducers for RxPY.
There are several different implementations of transducers in Python.
This implementation is currently targeted for:
- http://code.sixty-north.com/python-transducers
You should also read the excellent article series "Understanding
Transducers through Python" at:
- http://sixty-north.com/blog/series/understanding-transducers-through-python
Other implementations of transducers in Python are:
- https://github.com/cognitect-labs/transducers-python
"""
from rx.core import Observable, AnonymousObservable
from rx.internal import extensionmethod
class Observing(object):
"""An observing transducer."""
def __init__(self, observer):
self.observer = observer
def initial(self):
return self.observer
def step(self, obs, input):
return obs.on_next(input)
def complete(self, obs):
return obs.on_completed()
def __call__(self, result, item):
return self.step(result, item)
@extensionmethod(Observable)
def transduce(self, transducer):
"""Execute a transducer to transform the observable sequence.
Keyword arguments:
:param Transducer transducer: A transducer to execute.
:returns: An Observable sequence containing the results from the
transducer.
:rtype: Observable
"""
source = self
def subscribe(observer):
xform = transducer(Observing(observer))
def on_next(v):
try:
xform.step(observer, v)
except Exception as e:
observer.on_error(e)
def on_completed():
xform.complete(observer)
return source.subscribe(on_next, observer.on_error, on_completed)
return AnonymousObservable(subscribe)
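The step/complete protocol described in the docstring can be demonstrated outside Rx with a plain reduce. The Appending reducer and mapping transducer below are illustrative names invented here, analogous to the Observing class above:

```python
import functools

class Appending(object):
    """A minimal reducing protocol, analogous to Observing above."""
    def initial(self):
        return []

    def step(self, result, item):
        result.append(item)
        return result

    def complete(self, result):
        return result

def mapping(fn):
    """A transducer: wraps a reducer so each item is transformed before stepping."""
    def transducer(reducer):
        class Mapped(object):
            def initial(self):
                return reducer.initial()

            def step(self, result, item):
                return reducer.step(result, fn(item))

            def complete(self, result):
                return reducer.complete(result)
        return Mapped()
    return transducer

xform = mapping(lambda x: x * 2)(Appending())
result = functools.reduce(xform.step, [1, 2, 3], xform.initial())
assert xform.complete(result) == [2, 4, 6]
```

transduce() above does the same composition, except the reducer is an Observing instance that pushes each stepped item to an observer instead of a list.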
| 25.558824 | 78 | 0.695627 | 200 | 1,738 | 5.965 | 0.45 | 0.030176 | 0.04694 | 0.050293 | 0.132439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219217 | 1,738 | 67 | 79 | 25.940299 | 0.879145 | 0.424051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.074074 | 0.148148 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
783c6e3dcbf1023809e12a0e989e924850829a01 | 281 | py | Python | management_utils/management_utils.py | singparvi/Photo-Transform-Application | 9f93aff236794f0f23b0826b65dcf91c7ddbc0b3 | [
"MIT"
] | null | null | null | management_utils/management_utils.py | singparvi/Photo-Transform-Application | 9f93aff236794f0f23b0826b65dcf91c7ddbc0b3 | [
"MIT"
] | null | null | null | management_utils/management_utils.py | singparvi/Photo-Transform-Application | 9f93aff236794f0f23b0826b65dcf91c7ddbc0b3 | [
"MIT"
] | null | null | null | """
management_utils.py - Loads all data in the ../../data/transcribed_stories directory
"""
import pathlib
import os.path as path
class CPUDataLoader():
def __init__(self):
self.data_path = path.join(path.dirname(__file__), "..", "..", "data", "transcribed_stories")
| 25.545455 | 101 | 0.686833 | 36 | 281 | 5 | 0.666667 | 0.166667 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149466 | 281 | 10 | 102 | 28.1 | 0.753138 | 0.295374 | 0 | 0 | 0 | 0 | 0.142105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
78518dbb827250b7b706626c909295a7bb16606b | 125 | py | Python | StatisticsFunctions/standardDeviation.py | mkm99/TeamProject_StatsCalculator | 81085c1af47f38d3e49b43d667e312016c44ad10 | [
"MIT"
] | null | null | null | StatisticsFunctions/standardDeviation.py | mkm99/TeamProject_StatsCalculator | 81085c1af47f38d3e49b43d667e312016c44ad10 | [
"MIT"
] | 7 | 2020-03-03T21:37:57.000Z | 2020-03-06T04:11:42.000Z | StatisticsFunctions/standardDeviation.py | mkm99/TeamProject_StatsCalculator | 81085c1af47f38d3e49b43d667e312016c44ad10 | [
"MIT"
] | null | null | null | import numpy as np
class StandardDeviation():
@staticmethod
def standardDeviation(data):
return np.std(data) | 20.833333 | 32 | 0.704 | 14 | 125 | 6.285714 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216 | 125 | 6 | 33 | 20.833333 | 0.897959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
78614f22d935f3d142bf7ca036e4178dacad3150 | 1,359 | bzl | Python | source/bazel/rules/tree_hcp/string_tree_to_static_tree_parser.bzl | luxe/CodeLang-compiler | 78837d90bdd09c4b5aabbf0586a5d8f8f0c1e76a | [
"MIT"
] | 33 | 2019-05-30T07:43:32.000Z | 2021-12-30T13:12:32.000Z | source/bazel/rules/tree_hcp/string_tree_to_static_tree_parser.bzl | luxe/CodeLang-compiler | 78837d90bdd09c4b5aabbf0586a5d8f8f0c1e76a | [
"MIT"
] | 371 | 2019-05-16T15:23:50.000Z | 2021-09-04T15:45:27.000Z | source/bazel/rules/tree_hcp/string_tree_to_static_tree_parser.bzl | UniLang/compiler | c338ee92994600af801033a37dfb2f1a0c9ca897 | [
"MIT"
] | 6 | 2019-08-22T17:37:36.000Z | 2020-11-07T07:15:32.000Z | load("//bazel/rules/cpp:object.bzl", "cpp_object")
load("//bazel/rules/hcp:hcp.bzl", "hcp")
load("//bazel/rules/hcp:hcp_hdrs_derive.bzl", "hcp_hdrs_derive")
def string_tree_to_static_tree_parser(name):
#the file names to use
target_name = name + "_string_tree_parser_dat"
in_file = name + ".dat"
outfile = name + "_string_tree_parser.hcp"
#converting hcp to hpp/cpp
native.genrule(
name = target_name,
srcs = [in_file],
outs = [outfile],
tools = ["//code/programs/transcompilers/tree_hcp/string_tree_to_static_tree_parser:string_tree_to_static_tree_parser"],
cmd = "$(location //code/programs/transcompilers/tree_hcp/string_tree_to_static_tree_parser:string_tree_to_static_tree_parser) -i $(SRCS) -o $@",
)
#compile hcp file
#unique dep (TODO: dynamically decide)
static_struct_dep = "//code/utilities/code:concept_static_tree_structs"
deps = [
"//code/utilities/data_structures/tree/generic:string_tree",
"//code/utilities/data_structures/tree/generic:string_to_string_tree",
"//code/utilities/types/strings/transformers/appending:lib",
"//code/utilities/data_structures/tree/generic/tokens:tree_token",
"//code/utilities/types/vectors/observers:lib",
static_struct_dep,
]
hcp(name + "_string_tree_parser", deps)
| 39.970588 | 153 | 0.701987 | 179 | 1,359 | 4.988827 | 0.340782 | 0.111982 | 0.067189 | 0.100784 | 0.416573 | 0.371781 | 0.297872 | 0.199328 | 0.199328 | 0.199328 | 0 | 0 | 0.164827 | 1,359 | 33 | 154 | 41.181818 | 0.786784 | 0.072848 | 0 | 0 | 0 | 0.041667 | 0.610669 | 0.547771 | 0 | 0 | 0 | 0.030303 | 0 | 1 | 0.041667 | false | 0 | 0 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
78669a47b4d2056f5795f7757d17b963b58901f7 | 2,926 | py | Python | cogs/utils/db.py | NextChai/FURYBot | 074ef86950167cf26a6b3afc3acc9a0a5ce91c9e | [
"MIT"
] | null | null | null | cogs/utils/db.py | NextChai/FURYBot | 074ef86950167cf26a6b3afc3acc9a0a5ce91c9e | [
"MIT"
] | 3 | 2021-12-31T07:03:24.000Z | 2021-12-31T07:16:21.000Z | cogs/utils/db.py | NextChai/Fury-Bot | 074ef86950167cf26a6b3afc3acc9a0a5ce91c9e | [
"MIT"
] | null | null | null | """
The MIT License (MIT)
Copyright (c) 2020-present NextChai
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
from __future__ import annotations
from typing import ClassVar, List
from functools import cached_property
class Row:
"""Used to represent a Row to a database.
Attributes
----------
name: :class:`str`
The key's name.
value: :class:`str`
The key's value.
"""
__slots__ = ('name', 'value', '_original_args')
def __init__(self, name: str, type: str, *args) -> None:
self.name = name
self.value = type
self._original_args = args # EXAMPLE:: PRIMARY KEY NOT NULL
def __str__(self) -> str:
return f'{self.name} {self.value} {" ".join(self._original_args)}'
def __repr__(self) -> str:
return f'<Row :: name: {self.name}:: value: {self.value}>'
class TableMeta(type):
__table_name__: str
def __new__(cls, *args, **kwargs):
name, bases, attrs = args
attrs['__table_name__'] = kwargs.pop('name', name)
new_cls = super().__new__(cls, name, bases, attrs, **kwargs)
return new_cls
@classmethod
def qualified_name(cls) -> str:
return cls.__table_name__
class Table(metaclass=TableMeta):
__table_name__: ClassVar[str]
def __init__(self, *, keys: List[Row]) -> None:
self.keys: List[Row] = keys
@cached_property
def qualified_name(self) -> str:
return self.__table_name__
def create_string(self) -> str:
""":class:`str`: Returns a string that can be used to create the table."""
return 'CREATE TABLE IF NOT EXISTS {0} ({1});'.format(
self.qualified_name,
', '.join([str(key) for key in self.keys])
)
async def create(self, connection) -> None:
"""Creates the table."""
await connection.execute(self.create_string())
| 32.511111 | 82 | 0.665072 | 395 | 2,926 | 4.744304 | 0.402532 | 0.046958 | 0.020811 | 0.014941 | 0.016009 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002688 | 0.237184 | 2,926 | 89 | 83 | 32.876404 | 0.836918 | 0.452837 | 0 | 0 | 0 | 0 | 0.120656 | 0.018361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216216 | false | 0 | 0.081081 | 0.108108 | 0.621622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
786f50385bc4363183c9ed20db0b507df0526042 | 4,865 | py | Python | raco/myrial/exceptions.py | uwescience/raco | 1f2bedbef71bacf715340289f4973d85a3c1dc97 | [
"BSD-3-Clause"
] | 61 | 2015-02-09T17:27:40.000Z | 2022-03-28T14:37:53.000Z | raco/myrial/exceptions.py | uwescience/raco | 1f2bedbef71bacf715340289f4973d85a3c1dc97 | [
"BSD-3-Clause"
] | 201 | 2015-01-03T02:46:19.000Z | 2017-09-19T02:16:36.000Z | raco/myrial/exceptions.py | uwescience/raco | 1f2bedbef71bacf715340289f4973d85a3c1dc97 | [
"BSD-3-Clause"
] | 17 | 2015-06-03T12:01:30.000Z | 2021-11-27T15:49:21.000Z |
class MyrialCompileException(Exception):
pass
class MyrialUnexpectedEndOfFileException(MyrialCompileException):
def __str__(self):
return "Unexpected end-of-file"
class MyrialParseException(MyrialCompileException):
def __init__(self, token):
self.token = token
def __str__(self):
return 'Parse error at token %s on line %d' % (self.token.value,
self.token.lineno)
class MyrialScanException(MyrialCompileException):
def __init__(self, token):
self.token = token
def __str__(self):
return 'Illegal token string %s on line %d' % (self.token.value,
self.token.lineno)
class DuplicateFunctionDefinitionException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
return 'Duplicate function definition for %s on line %d' % (self.funcname, # noqa
self.lineno) # noqa
class NoSuchFunctionException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
return 'No such function definition for %s on line %d' % (self.funcname, # noqa
self.lineno) # noqa
class ReservedTokenException(MyrialCompileException):
def __init__(self, token, lineno):
self.token = token
self.lineno = lineno
def __str__(self):
return 'The token "%s" on line %d is reserved.' % (self.token,
self.lineno) # noqa
class InvalidArgumentList(MyrialCompileException):
def __init__(self, funcname, expected_args, lineno):
self.funcname = funcname
self.expected_args = expected_args
self.lineno = lineno
def __str__(self):
return "Incorrect number of arguments for %s(%s) on line %d" % (
self.funcname, ','.join(self.expected_args), self.lineno)
class UndefinedVariableException(MyrialCompileException):
def __init__(self, funcname, var, lineno):
self.funcname = funcname
self.var = var
self.lineno = lineno
def __str__(self):
return "Undefined variable %s in function %s at line %d" % (
self.var, self.funcname, self.lineno)
class DuplicateVariableException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
        return "Duplicate variable defined in function %s at line %d" % (
self.funcname, self.lineno)
class BadApplyDefinitionException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
        return "Bad apply definition in function %s at line %d" % (
self.funcname, self.lineno)
class UnnamedStateVariableException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
return "Unnamed state variable in function %s at line %d" % (
self.funcname, self.lineno)
class IllegalWildcardException(MyrialCompileException):
def __init__(self, funcname, lineno):
self.funcname = funcname
self.lineno = lineno
def __str__(self):
return "Illegal use of wildcard in function %s at line %d" % (
self.funcname, self.lineno)
class NestedTupleExpressionException(MyrialCompileException):
def __init__(self, lineno):
self.lineno = lineno
def __str__(self):
return "Illegal use of tuple expression on line %d" % self.lineno
class InvalidEmitList(MyrialCompileException):
def __init__(self, function, lineno):
self.function = function
self.lineno = lineno
def __str__(self):
return "Wrong number of emit arguments in %s at line %d" % (
self.function, self.lineno)
class IllegalColumnNamesException(MyrialCompileException):
def __init__(self, lineno):
self.lineno = lineno
def __str__(self):
return "Invalid column names on line %d" % self.lineno
class ColumnIndexOutOfBounds(Exception):
pass
class SchemaMismatchException(MyrialCompileException):
def __init__(self, op_name):
self.op_name = op_name
def __str__(self):
return "Incompatible input schemas for %s operation" % self.op_name
class NoSuchRelationException(MyrialCompileException):
def __init__(self, relname):
self.relname = relname
def __str__(self):
return "No such relation: %s" % self.relname
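These exception classes only assume a token with value and lineno attributes. A hedged sketch of how one of them renders (the Token class here is a stand-in invented for illustration):

```python
class Token(object):
    """Hypothetical stand-in for the lexer token these exceptions expect."""
    def __init__(self, value, lineno):
        self.value = value
        self.lineno = lineno

class MyrialCompileException(Exception):
    pass

class MyrialParseException(MyrialCompileException):
    def __init__(self, token):
        self.token = token

    def __str__(self):
        return 'Parse error at token %s on line %d' % (self.token.value,
                                                       self.token.lineno)

err = MyrialParseException(Token('EMIT', 7))
assert str(err) == 'Parse error at token EMIT on line 7'
```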
| 29.484848 | 90 | 0.642754 | 499 | 4,865 | 5.985972 | 0.174349 | 0.087044 | 0.056913 | 0.091061 | 0.593907 | 0.51222 | 0.481419 | 0.431202 | 0.431202 | 0.431202 | 0 | 0 | 0.272765 | 4,865 | 164 | 91 | 29.664634 | 0.844262 | 0.004933 | 0 | 0.54955 | 0 | 0 | 0.143566 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.297297 | false | 0.018018 | 0 | 0.153153 | 0.621622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
788f71f9cc159c688647d0021e6480432be1fe82 | 226 | py | Python | tutorial/client.py | kangjunseo/GraphQL | dde0381ddde80c4716d5d0233e95710d93a15a70 | [
"MIT"
] | null | null | null | tutorial/client.py | kangjunseo/GraphQL | dde0381ddde80c4716d5d0233e95710d93a15a70 | [
"MIT"
] | null | null | null | tutorial/client.py | kangjunseo/GraphQL | dde0381ddde80c4716d5d0233e95710d93a15a70 | [
"MIT"
] | null | null | null | from gql import Client
from gql.transport.requests import RequestsHTTPTransport
transport = RequestsHTTPTransport(url='http://localhost:8080/v1/graphql')
client = Client(transport=transport, fetch_schema_from_transport=True) | 37.666667 | 73 | 0.840708 | 27 | 226 | 6.925926 | 0.592593 | 0.074866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.070796 | 226 | 6 | 74 | 37.666667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.140969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
78b1088a5138ebe9e271cee17d9f041b05a1e072 | 122 | py | Python | pyeccodes/defs/mars/grib_oper_ea_def.py | ecmwf/pyeccodes | dce2c72d3adcc0cb801731366be53327ce13a00b | [
"Apache-2.0"
] | 7 | 2020-04-14T09:41:17.000Z | 2021-08-06T09:38:19.000Z | pyeccodes/defs/mars/grib_oper_ea_def.py | ecmwf/pyeccodes | dce2c72d3adcc0cb801731366be53327ce13a00b | [
"Apache-2.0"
] | null | null | null | pyeccodes/defs/mars/grib_oper_ea_def.py | ecmwf/pyeccodes | dce2c72d3adcc0cb801731366be53327ce13a00b | [
"Apache-2.0"
] | 3 | 2020-04-30T12:44:48.000Z | 2020-12-15T08:40:26.000Z | import pyeccodes.accessors as _
def load(h):
if (h.get_l('class') == 8):
h.alias('mars.origin', 'centre')
| 13.555556 | 40 | 0.581967 | 18 | 122 | 3.833333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 0.229508 | 122 | 8 | 41 | 15.25 | 0.723404 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
152faa8abbce27b3e2233aadc2f9c27068ddad3e | 3,073 | py | Python | exploration/similarity.py | webmasterraj/gitrecommender | aa32ced2110fd15191bf0b7fdb46ab16bdcff709 | [
"MIT"
] | 1 | 2017-06-24T15:40:03.000Z | 2017-06-24T15:40:03.000Z | exploration/similarity.py | webmasterraj/gitrecommender | aa32ced2110fd15191bf0b7fdb46ab16bdcff709 | [
"MIT"
] | null | null | null | exploration/similarity.py | webmasterraj/gitrecommender | aa32ced2110fd15191bf0b7fdb46ab16bdcff709 | [
"MIT"
] | null | null | null | def similar_users_query(user):
# Pass user object, return query to get users who starred same things as them
query = """
select subq.user_id
, sum(1/log(subq.stargazers_count+1)) `score`
, count(*) `count`
from
(select others.user_id
, others.starred_repo_id
, repo.repo_name
, repo.description
, repo.last_modified
, repo.language
, repo.stargazers_count
, repo.forks_count
, repo.from_hacker_news
from
(select user_id
, starred_repo_id
from github_user_starred_repos
where user_id != {0}) others
join
(select starred_repo_id
from github_user_starred_repos
where user_id = {0}) usr
on others.starred_repo_id=usr.starred_repo_id
join
(select id
, repo_name
, description
, last_modified
, language
, stargazers_count
, forks_count
, from_hacker_news
from github_repos) repo
on others.starred_repo_id=repo.id) subq
group by subq.user_id
order by 2 desc
""".format(user.id)
return query
def similar_repos_query(user):
# Pass user object, return query to get users who starred same things as them
query = """
select other_repos.user_id
, other_repos.starred_repo_id
, repo.repo_name
, repo.description
, repo.last_modified
, repo.language
, repo.stargazers_count
, repo.forks_count
, repo.from_hacker_news
, hn.added_at
, hn.submission_time
, hn.title
, hn.url
from
(select user_id
, starred_repo_id
from github_user_starred_repos
where user_id != {0}) other_repos
join
(select distinct(others.user_id) `user_id`
from
(select user_id
, starred_repo_id
from github_user_starred_repos
where user_id != {0}) others
join
(select starred_repo_id
from github_user_starred_repos
where user_id = {0}) usr
on others.starred_repo_id = usr.starred_repo_id) others
on other_repos.user_id=others.user_id
join
(select id
, repo_name
, description
, last_modified
, language
, stargazers_count
, forks_count
, from_hacker_news
from github_repos) repo
on other_repos.starred_repo_id=repo.id
join
(select added_at
, submission_time
, title
, url
, github_repo_name
from hacker_news) hn
on repo.repo_name = hn.github_repo_name
""".format(user.id)
return query
| 31.040404 | 81 | 0.52945 | 338 | 3,073 | 4.508876 | 0.171598 | 0.070866 | 0.110892 | 0.055774 | 0.757874 | 0.706037 | 0.681759 | 0.681759 | 0.681759 | 0.681759 | 0 | 0.004442 | 0.413928 | 3,073 | 98 | 82 | 31.357143 | 0.841755 | 0.049138 | 0 | 0.691489 | 0 | 0 | 0.938335 | 0.164782 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021277 | false | 0 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
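Both query builders above follow the same pattern: a `{0}` placeholder everywhere the user's id is needed, filled in once with `str.format`. A minimal sketch of how they would be called, using a hypothetical stub object (`StubUser` is not part of the original module) and a trimmed-down query. Because the id is interpolated directly into the SQL, this assumes a trusted integer id; untrusted input would call for driver-side parameter binding instead.

```python
class StubUser:
    """Hypothetical stand-in: the query builders only need an `id` attribute."""
    def __init__(self, user_id):
        self.id = user_id

def similar_users_query(user):
    # Trimmed-down version of the builder above: every {0} becomes user.id.
    query = """
    select user_id
    from github_user_starred_repos
    where user_id != {0}
    """.format(user.id)
    return query

query = similar_users_query(StubUser(42))
print(query)  # the id appears wherever {0} was used
```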
153bb6e2b74499852615242fd0c43386ac956fe1 | 911 | py | Python | 175 _labs/d_linked_node.py | lzeeorno/Python-practice175 | e2998830114b304f9857b8f7d89dafec7ae02080 | [
"Apache-2.0"
] | null | null | null | 175 _labs/d_linked_node.py | lzeeorno/Python-practice175 | e2998830114b304f9857b8f7d89dafec7ae02080 | [
"Apache-2.0"
] | null | null | null | 175 _labs/d_linked_node.py | lzeeorno/Python-practice175 | e2998830114b304f9857b8f7d89dafec7ae02080 | [
"Apache-2.0"
] | 1 | 2019-03-09T07:41:12.000Z | 2019-03-09T07:41:12.000Z | class d_linked_node:
def __init__(self, initData, initNext, initPrevious):
# constructs a new node and initializes it to contain
# the given object (initData) and links to the given next
# and previous nodes.
self.__data = initData
self.__next = initNext
self.__previous = initPrevious
        if initPrevious is not None:
            initPrevious.__next = self
        if initNext is not None:
            initNext.__previous = self
def getData(self):
return self.__data
def getNext(self):
return self.__next
def getPrevious(self):
return self.__previous
def setData(self, newData):
self.__data = newData
def setNext(self, newNext):
        self.__next = newNext
def setPrevious(self, newPrevious):
self.__previous= newPrevious | 29.387097 | 67 | 0.581778 | 92 | 911 | 5.456522 | 0.413043 | 0.047809 | 0.083665 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.349067 | 911 | 31 | 68 | 29.387097 | 0.846543 | 0.141603 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.142857 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
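A demo of the linking behaviour in `d_linked_node.__init__` above: passing existing nodes as `initNext`/`initPrevious` wires their links back to the new node automatically. The class body is condensed here (getters only) so the example runs standalone; it is not a replacement for the full file.

```python
class d_linked_node:
    def __init__(self, initData, initNext, initPrevious):
        self.__data = initData
        self.__next = initNext
        self.__previous = initPrevious
        # Linking step from the original: existing neighbours get their
        # next/previous pointers redirected to the new node.
        if initPrevious is not None:
            initPrevious.__next = self
        if initNext is not None:
            initNext.__previous = self

    def getData(self):
        return self.__data

    def getNext(self):
        return self.__next

    def getPrevious(self):
        return self.__previous

# Build a three-node list a <-> b <-> c by inserting b between a and c.
a = d_linked_node('a', None, None)
c = d_linked_node('c', None, None)
b = d_linked_node('b', c, a)  # links a.next -> b and c.previous -> b

print(a.getNext().getData())      # b
print(c.getPrevious().getData())  # b
print(b.getNext().getData())      # c
```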
153d7fb61c38c41553a4e473fb3e5c0de21d4d4c | 2,184 | py | Python | app/tests/test_parser_cve_json.py | stanleyshuang/cvedata | b90afb3e4f90a4ed24d93bd044181c836b4d3747 | [
"Apache-2.0"
] | null | null | null | app/tests/test_parser_cve_json.py | stanleyshuang/cvedata | b90afb3e4f90a4ed24d93bd044181c836b4d3747 | [
"Apache-2.0"
] | null | null | null | app/tests/test_parser_cve_json.py | stanleyshuang/cvedata | b90afb3e4f90a4ed24d93bd044181c836b4d3747 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Author: Stanley Huang
# Project: crawler 1.0
# Date: 2021-07-10
#
import unittest
from pkg.util.parser_cve_json import is_cve_json_filename
from pkg.util.parser_cve_json import extract_cveid
from pkg.util.parser_cve_json import splitcveid
class IsCveJsonFilenameTestCase(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_is_cve_json_filename_10(self):
self.assertTrue(is_cve_json_filename('CVE-2021-28809'))
def test_is_cve_json_filename_20(self):
self.assertTrue(is_cve_json_filename('CVE-2021-3660'))
def test_is_cve_json_filename_30(self):
self.assertFalse(is_cve_json_filename('CVE-2021-3660.json'))
def test_is_cve_json_filename_40(self):
self.assertFalse(is_cve_json_filename('openpgp-encrypted-message'))
class ExtractCveidTestCase(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_extract_cveid_10(self):
self.assertTrue('CVE-2021-28491'==extract_cveid('CVE-2021-28491. - SQLite heap overflow'))
def test_extract_cveid_20(self):
self.assertTrue(None==extract_cveid('TYPO3 Form Designer backend module of the Form Framework is vulnerable to cross-site scripting'))
def test_extract_cveid_30(self):
self.assertTrue('CVE-2020-11575'==extract_cveid('Display and loop C codes, CVE-2020-11575, are vulnerable to heap based buffer overflow'))
class SplitCveidTestCase(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_splitcveid_10(self):
        self.assertTrue(('2021', '28491') == splitcveid('CVE-2021-28491'))
def test_splitcveid_20(self):
self.assertTrue((None, None)==splitcveid('TYPO3 Form Designer backend module of the Form Framework is vulnerable to cross-site scripting'))
def test_splitcveid_30(self):
        self.assertTrue(('2020', '11575') == splitcveid('Display and loop C codes, CVE-2020-11575, are vulnerable to heap based buffer overflow'))
| 33.6 | 148 | 0.681777 | 285 | 2,184 | 5.014035 | 0.263158 | 0.058782 | 0.056683 | 0.107068 | 0.655004 | 0.621414 | 0.559132 | 0.435269 | 0.435269 | 0.376487 | 0 | 0.076336 | 0.220238 | 2,184 | 64 | 149 | 34.125 | 0.762772 | 0.040293 | 0 | 0.358974 | 0 | 0 | 0.26087 | 0.012352 | 0 | 0 | 0 | 0 | 0.25641 | 1 | 0.410256 | false | 0.153846 | 0.102564 | 0 | 0.589744 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
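The helpers under test live in `pkg.util.parser_cve_json`, which is not shown. A minimal regex-based sketch of what `extract_cveid`/`splitcveid` could look like, with names and semantics inferred from the assertions rather than from the real module:

```python
import re

# CVE ids look like CVE-YYYY-NNNN, with four or more digits in the number.
_CVE_RE = re.compile(r'CVE-(\d{4})-(\d{4,})')

def extract_cveid(text):
    # Return the full 'CVE-YYYY-NNNN' id found anywhere in text, or None.
    m = _CVE_RE.search(text)
    return m.group(0) if m else None

def splitcveid(text):
    # Return (year, number) for the first CVE id in text, or (None, None).
    m = _CVE_RE.search(text)
    return (m.group(1), m.group(2)) if m else (None, None)

print(extract_cveid('CVE-2021-28491. - SQLite heap overflow'))  # CVE-2021-28491
print(splitcveid('CVE-2020-11575 buffer overflow'))             # ('2020', '11575')
print(splitcveid('no id here'))                                 # (None, None)
```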
156821be7ed14dd901dce4ac81ab7755213aa310 | 165 | py | Python | electricitylci/canadian_imports.py | gschivley/ElectricityLCI | 1c1c1b69705d3ffab1e1e844aaf7379e4f51198e | [
"CC0-1.0"
] | 1 | 2019-04-15T18:11:16.000Z | 2019-04-15T18:11:16.000Z | electricitylci/canadian_imports.py | gschivley/ElectricityLCI | 1c1c1b69705d3ffab1e1e844aaf7379e4f51198e | [
"CC0-1.0"
] | 3 | 2019-05-07T19:04:22.000Z | 2019-09-30T21:29:59.000Z | electricitylci/canadian_imports.py | gschivley/ElectricityLCI | 1c1c1b69705d3ffab1e1e844aaf7379e4f51198e | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Feb 20 15:10:09 2019
@author: cooneyg
"""
import pandas as pd
ca_tech_mix = pd.read_csv('data/canadian_imports.csv')
| 12.692308 | 54 | 0.666667 | 28 | 165 | 3.785714 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094891 | 0.169697 | 165 | 12 | 55 | 13.75 | 0.678832 | 0.460606 | 0 | 0 | 0 | 0 | 0.320513 | 0.320513 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
159a295855895460560feb94935eaba5b7b885ee | 196 | py | Python | main/experiments/init.py | resuly/Traffic-3DResnets | cb48c6b7a4921d8593368c262c0d3c0406a6cc32 | [
"MIT"
] | 2 | 2021-05-13T07:46:04.000Z | 2021-09-11T11:42:15.000Z | main/experiments/init.py | resuly/Traffic-3DResnets | cb48c6b7a4921d8593368c262c0d3c0406a6cc32 | [
"MIT"
] | null | null | null | main/experiments/init.py | resuly/Traffic-3DResnets | cb48c6b7a4921d8593368c262c0d3c0406a6cc32 | [
"MIT"
] | null | null | null | import os,sys,glob,shutil
lists = glob.glob('./*/*.log')
lists += glob.glob('./*/*.tar')
lists += glob.glob('./*/*.pk')
lists += glob.glob('./*/*weights.json')
for f in lists:
os.remove(f)
| 17.818182 | 39 | 0.571429 | 29 | 196 | 3.862069 | 0.517241 | 0.321429 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132653 | 196 | 10 | 40 | 19.6 | 0.658824 | 0 | 0 | 0 | 0 | 0 | 0.220513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
159a92f6efa3a811dff4d2c2b32d80f32fd49fbb | 175 | py | Python | examples/sim_taxi/config/env.py | CPFL/AMS | bb685024b1c061e7144dc2ef93e09d6d6c830af8 | [
"Apache-2.0"
] | 26 | 2018-02-16T10:49:19.000Z | 2022-03-23T16:42:48.000Z | examples/sim_taxi/config/env.py | CPFL/Autoware-Management-System | bb685024b1c061e7144dc2ef93e09d6d6c830af8 | [
"Apache-2.0"
] | 10 | 2018-11-13T08:16:49.000Z | 2019-01-09T04:59:24.000Z | examples/graph/config/env.py | CPFL/AMS | bb685024b1c061e7144dc2ef93e09d6d6c830af8 | [
"Apache-2.0"
] | 19 | 2018-03-28T07:38:45.000Z | 2022-01-27T05:18:21.000Z | #!/usr/bin/env python
# coding: utf-8
import os
from dotenv import load_dotenv
load_dotenv(os.path.abspath(os.path.dirname(__file__))+'/sample.env')
env = dict(os.environ)
| 17.5 | 69 | 0.742857 | 29 | 175 | 4.275862 | 0.655172 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 0.102857 | 175 | 9 | 70 | 19.444444 | 0.783439 | 0.194286 | 0 | 0 | 0 | 0 | 0.079137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
15a9b7abe9ab116968a744dad3510257e70aa315 | 111 | py | Python | src/txtwrpr/__init__.py | hendrikdutoit/TxtWrpr | 7800254ca76d17b97c77702fcf185fe93ea8bcd6 | [
"MIT"
] | null | null | null | src/txtwrpr/__init__.py | hendrikdutoit/TxtWrpr | 7800254ca76d17b97c77702fcf185fe93ea8bcd6 | [
"MIT"
] | null | null | null | src/txtwrpr/__init__.py | hendrikdutoit/TxtWrpr | 7800254ca76d17b97c77702fcf185fe93ea8bcd6 | [
"MIT"
] | null | null | null |
"""Application Utilities for Bright Edge eServices developments"""
from .txtwrpr import *
_version = "2.2.2"
| 18.5 | 66 | 0.738739 | 14 | 111 | 5.785714 | 0.857143 | 0.049383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031579 | 0.144144 | 111 | 5 | 67 | 22.2 | 0.821053 | 0.540541 | 0 | 0 | 0 | 0 | 0.113636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
15bbc0ef07fd0f3c49fdc9253453c03bbf9b0a1d | 977 | py | Python | rf_pymods/smwrand.py | rfinnie/rf-pymods | 14c73ba1a8614c9579a7957de97d7b3c7058891e | [
"Unlicense"
] | null | null | null | rf_pymods/smwrand.py | rfinnie/rf-pymods | 14c73ba1a8614c9579a7957de97d7b3c7058891e | [
"Unlicense"
] | null | null | null | rf_pymods/smwrand.py | rfinnie/rf-pymods | 14c73ba1a8614c9579a7957de97d7b3c7058891e | [
"Unlicense"
] | null | null | null | # SPDX-FileCopyrightText: Copyright (C) 2019-2021 Ryan Finnie
# SPDX-License-Identifier: MIT
class SMWRand:
"""Super Mario World random number generator
Based on deconstruction by Retro Game Mechanics Explained
https://www.youtube.com/watch?v=q15yNrJHOak
"""
# SPDX-SnippetComment: Originally from https://github.com/rfinnie/rf-pymods
# SPDX-SnippetCopyrightText: Copyright (C) 2019-2021 Ryan Finnie
# SPDX-LicenseInfoInSnippet: MIT
seed_1 = 0
seed_2 = 0
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
def _rand(self):
self.seed_1 = (self.seed_1 + (self.seed_1 << 2) + 1) & 0xFF
self.seed_2 = (
(self.seed_2 << 1) + int((self.seed_2 & 0x90) in (0x90, 0))
) & 0xFF
return self.seed_1 ^ self.seed_2
def rand(self):
output_2 = self._rand()
output_1 = self._rand()
return (output_1, output_2)
| 27.138889 | 79 | 0.635619 | 130 | 977 | 4.569231 | 0.484615 | 0.107744 | 0.060606 | 0.065657 | 0.181818 | 0.153199 | 0.107744 | 0 | 0 | 0 | 0 | 0.0631 | 0.253838 | 977 | 35 | 80 | 27.914286 | 0.751715 | 0.411464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028986 | 0 | 0 | 1 | 0.235294 | false | 0.058824 | 0 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
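A standalone sketch of the `SMWRand` recurrence above: two 8-bit seeds are advanced independently and XORed together. It is written as a generator here so it runs without the class; the update rules are copied from `_rand()` (note that `rand()` in the class additionally swaps the order of two consecutive outputs).

```python
def smw_rand_stream(seed_1=0, seed_2=0):
    """Yield the raw SMW-style byte sequence from the two-seed recurrence."""
    while True:
        # seed_1 follows x -> 5x + 1 (mod 256), since x + (x << 2) == 5x.
        seed_1 = (seed_1 + (seed_1 << 2) + 1) & 0xFF
        # seed_2 shifts left and feeds back a bit derived from bits 4 and 7.
        seed_2 = ((seed_2 << 1) + int((seed_2 & 0x90) in (0x90, 0))) & 0xFF
        yield seed_1 ^ seed_2

stream = smw_rand_stream()
values = [next(stream) for _ in range(4)]
print(values)  # [0, 5, 24, 147] -- every value fits in one byte
```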
ec698900a476a259034508e51008d66c5de32af5 | 371 | py | Python | simple_page/urls.py | aprild-c/WiCS_Website_v2 | 7e38a6c650efb07d224ba9914d08c8e4a3667101 | [
"MIT"
] | null | null | null | simple_page/urls.py | aprild-c/WiCS_Website_v2 | 7e38a6c650efb07d224ba9914d08c8e4a3667101 | [
"MIT"
] | null | null | null | simple_page/urls.py | aprild-c/WiCS_Website_v2 | 7e38a6c650efb07d224ba9914d08c8e4a3667101 | [
"MIT"
] | null | null | null | from django.urls import path
from simple_page import views
urlpatterns = [
path('', views.home, name='simple_page'),
path('calendar/', views.calendar, name='simple_page'),
path('contact/', views.contact, name='simple_page'),
path('eboard/', views.eboard, name='simple_page'),
path('alumni-speaker-series/',views.speaker_series, name='simple_page'),
] | 37.1 | 76 | 0.700809 | 48 | 371 | 5.270833 | 0.354167 | 0.237154 | 0.27668 | 0.284585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123989 | 371 | 10 | 77 | 37.1 | 0.778462 | 0 | 0 | 0 | 0 | 0 | 0.271505 | 0.05914 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ec8728c7e7cb1908b2335c86c881a88dcdeb4b26 | 668 | py | Python | backpack/extensions/secondorder/hbp/custom_module.py | pitmonticone/backpack | b5bdb7e1171a1c0ace187915b9e54c2087d61d3a | [
"MIT"
] | null | null | null | backpack/extensions/secondorder/hbp/custom_module.py | pitmonticone/backpack | b5bdb7e1171a1c0ace187915b9e54c2087d61d3a | [
"MIT"
] | null | null | null | backpack/extensions/secondorder/hbp/custom_module.py | pitmonticone/backpack | b5bdb7e1171a1c0ace187915b9e54c2087d61d3a | [
"MIT"
] | null | null | null | """Module extensions for custom properties of HBPBaseModule."""
from backpack.core.derivatives.scale_module import ScaleModuleDerivatives
from backpack.core.derivatives.sum_module import SumModuleDerivatives
from backpack.extensions.secondorder.hbp.hbpbase import HBPBaseModule
class HBPScaleModule(HBPBaseModule):
"""HBP extension for ScaleModule."""
def __init__(self):
"""Initialization."""
super().__init__(derivatives=ScaleModuleDerivatives())
class HBPSumModule(HBPBaseModule):
"""HBP extension for SumModule."""
def __init__(self):
"""Initialization."""
super().__init__(derivatives=SumModuleDerivatives())
| 31.809524 | 73 | 0.751497 | 61 | 668 | 7.934426 | 0.491803 | 0.07438 | 0.066116 | 0.11157 | 0.18595 | 0.18595 | 0.18595 | 0 | 0 | 0 | 0 | 0 | 0.139222 | 668 | 20 | 74 | 33.4 | 0.841739 | 0.223054 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.333333 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
eca6a81e5b7d21c327f69df655f7f4493dd305bd | 1,161 | py | Python | application/utils/__init__.py | fajaragungpramana/backend-warungku-deprecated | e1378781550c61b8946908eefbab6e9ffec61195 | [
"Apache-2.0"
] | 1 | 2021-01-18T07:39:44.000Z | 2021-01-18T07:39:44.000Z | application/utils/__init__.py | fajaragungpramana/backend-warungku-deprecated | e1378781550c61b8946908eefbab6e9ffec61195 | [
"Apache-2.0"
] | null | null | null | application/utils/__init__.py | fajaragungpramana/backend-warungku-deprecated | e1378781550c61b8946908eefbab6e9ffec61195 | [
"Apache-2.0"
] | null | null | null | import os
import uuid
import requests
from datetime import datetime
from dotenv import load_dotenv
from flask import jsonify, make_response, request
# get .env path and set it
load_dotenv('../backend-warungku/.env')
# This function to get .env variable configuration
# @params var - fill with the same variable name in .env configuration
def get_env(var: str):
return str(os.environ.get(var))
# This function to get date time now with custom format
def date_now(pattern: str = '%d %b %Y %H:%M:%S'):
return datetime.now().strftime(pattern)
# This function to get user ip address
def get_ip_address():
return requests.get('https://ipinfo.io/').json()['ip']
# This to generate unique id
def get_unique_id():
return str(uuid.uuid4())
# This to make json response
def json_response(response: dict, http_code: int):
return make_response(jsonify(response)), http_code
# This to verify value is none or not
def is_none(value):
return isinstance(value, type(None))
# This to get body or form data
def get_post(var: str):
return request.form.get(var)
# This to get parameter
def get_param(var: str):
return request.args.get(var) | 26.386364 | 70 | 0.732127 | 188 | 1,161 | 4.43617 | 0.430851 | 0.029976 | 0.05036 | 0.061151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00103 | 0.163652 | 1,161 | 44 | 71 | 26.386364 | 0.857878 | 0.322997 | 0 | 0 | 0 | 0 | 0.078608 | 0.030928 | 0 | 0 | 0 | 0 | 0 | 1 | 0.347826 | false | 0 | 0.26087 | 0.347826 | 0.956522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
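Two of the helpers above (`date_now` and `is_none`) are pure functions with no Flask or dotenv dependency; they are copied here so their behaviour can be checked standalone.

```python
from datetime import datetime

def date_now(pattern: str = '%d %b %Y %H:%M:%S'):
    # Format the current time with a strftime pattern.
    return datetime.now().strftime(pattern)

def is_none(value):
    # True only for the None singleton, not for other falsy values.
    return isinstance(value, type(None))

print(date_now('%Y'))   # e.g. the current year as a 4-digit string
print(is_none(None))    # True
print(is_none(0))       # False -- 0 is falsy but not None
```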
ece03d3e786976781d7eace1e7e8c722f3bd7272 | 350 | py | Python | PythonExercicios/ex025.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | PythonExercicios/ex025.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | PythonExercicios/ex025.py | gabjohann/python_3 | 380cb622669ed82d6b22fdd09d41f02f1ad50a73 | [
"MIT"
] | null | null | null | # Crie um programa que leia o nome de uma pessoa e diga se ela tem 'Silva' no nome
nome = str(input('Digite seu nome completo: ')).strip()
print('Seu nome tem Silva? {}'.format('SILVA' in nome.upper()))
# Solution from the lesson
# nome = str(input('Qual é seu nome completo? ')).strip()
# print('Seu nome tem Silva? {}'.format('silva' in nome.lower()))
| 35 | 82 | 0.671429 | 58 | 350 | 4.051724 | 0.568966 | 0.119149 | 0.102128 | 0.170213 | 0.485106 | 0.485106 | 0.485106 | 0.485106 | 0.485106 | 0.485106 | 0 | 0 | 0.162857 | 350 | 9 | 83 | 38.888889 | 0.802048 | 0.622857 | 0 | 0 | 0 | 0 | 0.417323 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
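The membership test above in isolation: upper-casing the name first makes the substring check case-insensitive. Wrapped in a hypothetical helper (`has_silva` is not part of the original exercise) so it can be tested without `input()`:

```python
def has_silva(nome):
    # Case-insensitive check: normalise to upper case, then use `in`.
    return 'SILVA' in nome.upper()

print(has_silva('Ana Silva Souza'))  # True
print(has_silva('ana da silva'))     # True
print(has_silva('Gabriel Santos'))   # False
```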
ece51d84a80ce49f39d083d9811e55c06c50ed1e | 3,242 | py | Python | tests/test-xml.py | privazio/pyone | a20037e8442ee9b4d1bb71481531613bf50f978b | [
"Apache-2.0"
] | 3 | 2018-01-07T16:56:24.000Z | 2018-02-27T07:52:04.000Z | tests/test-xml.py | privazio/pyone | a20037e8442ee9b4d1bb71481531613bf50f978b | [
"Apache-2.0"
] | 4 | 2018-01-06T18:27:29.000Z | 2018-02-16T13:55:47.000Z | tests/test-xml.py | privazio/pyone | a20037e8442ee9b4d1bb71481531613bf50f978b | [
"Apache-2.0"
] | 1 | 2020-04-26T14:22:09.000Z | 2020-04-26T14:22:09.000Z | # Copyright 2018 www.privaz.io Valletech AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import pyone.bindings as bindings
import xml.dom.minidom as dom
domimp = dom.getDOMImplementation()
nakedXmlSample = '''<MARKETPLACE_POOL><MARKETPLACE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>OpenNebula Public</NAME><MARKET_MAD><![CDATA[one]]></MARKET_MAD><ZONE_ID><![CDATA[0]]></ZONE_ID><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><MARKETPLACEAPPS><ID>0</ID><ID>1</ID><ID>2</ID><ID>3</ID><ID>4</ID><ID>5</ID><ID>6</ID><ID>7</ID><ID>8</ID><ID>9</ID><ID>10</ID><ID>11</ID><ID>12</ID><ID>13</ID><ID>14</ID><ID>15</ID><ID>16</ID><ID>17</ID><ID>18</ID><ID>19</ID><ID>20</ID><ID>21</ID><ID>22</ID><ID>23</ID><ID>24</ID></MARKETPLACEAPPS><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>1</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>1</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><TEMPLATE><DESCRIPTION><![CDATA[OpenNebula Systems MarketPlace]]></DESCRIPTION><MARKET_MAD><![CDATA[one]]></MARKET_MAD></TEMPLATE></MARKETPLACE></MARKETPLACE_POOL>'''
xmlSample = '''<MARKETPLACE_POOL xmlns='http://opennebula.org/XMLSchema'><MARKETPLACE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>OpenNebula Public</NAME><MARKET_MAD><![CDATA[one]]></MARKET_MAD><ZONE_ID><![CDATA[0]]></ZONE_ID><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><MARKETPLACEAPPS><ID>0</ID><ID>1</ID><ID>2</ID><ID>3</ID><ID>4</ID><ID>5</ID><ID>6</ID><ID>7</ID><ID>8</ID><ID>9</ID><ID>10</ID><ID>11</ID><ID>12</ID><ID>13</ID><ID>14</ID><ID>15</ID><ID>16</ID><ID>17</ID><ID>18</ID><ID>19</ID><ID>20</ID><ID>21</ID><ID>22</ID><ID>23</ID><ID>24</ID></MARKETPLACEAPPS><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>1</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>1</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><TEMPLATE><DESCRIPTION><![CDATA[OpenNebula Systems MarketPlace]]></DESCRIPTION><MARKET_MAD><![CDATA[one]]></MARKET_MAD></TEMPLATE></MARKETPLACE></MARKETPLACE_POOL>'''
ns = 'http://opennebula.org/XMLSchema'
class XmlTests(unittest.TestCase):
def test_raw_instanciation(self):
marketpool = bindings.CreateFromDocument(xmlSample)
m0 = marketpool.MARKETPLACE[0]
self.assertEqual(m0.NAME, "OpenNebula Public")
def test_adding_namespace(self):
doc = dom.parseString(nakedXmlSample)
doc.documentElement.setAttribute('xmlns', ns)
marketpool = bindings.CreateFromDocument(doc.toxml())
m0 = marketpool.MARKETPLACE[0]
self.assertEqual(m0.NAME, "OpenNebula Public")
| 81.05 | 1,003 | 0.710056 | 553 | 3,242 | 4.039783 | 0.271248 | 0.085944 | 0.008953 | 0.030439 | 0.59803 | 0.59803 | 0.59803 | 0.59803 | 0.59803 | 0.59803 | 0 | 0.041888 | 0.072178 | 3,242 | 39 | 1,004 | 83.128205 | 0.700798 | 0.172424 | 0 | 0.222222 | 0 | 0.111111 | 0.749625 | 0.653298 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
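The namespace-adding trick used in `test_adding_namespace`, shown in isolation: parse an un-namespaced document with `xml.dom.minidom`, set `xmlns` on the root element, and re-serialize. This runs without the `pyone` bindings (the sample document below is a shortened stand-in).

```python
import xml.dom.minidom as dom

naked = '<MARKETPLACE_POOL><MARKETPLACE><ID>0</ID></MARKETPLACE></MARKETPLACE_POOL>'
ns = 'http://opennebula.org/XMLSchema'

doc = dom.parseString(naked)
# Attach the schema namespace to the document root before re-serializing.
doc.documentElement.setAttribute('xmlns', ns)
xml_text = doc.toxml()
print(xml_text)
```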
eceaedfca5e8f20534c181a7f4d569fce86eec9a | 405 | py | Python | tests/line_markers/test_init.py | jamescooke/flake8-aaa | 9df248e10538946531b67da4564bb229a91baece | [
"MIT"
] | 44 | 2018-04-08T21:25:43.000Z | 2022-01-20T14:28:16.000Z | tests/line_markers/test_init.py | jamescooke/flake8-aaa | 9df248e10538946531b67da4564bb229a91baece | [
"MIT"
] | 72 | 2018-03-30T14:30:48.000Z | 2022-03-31T16:18:16.000Z | tests/line_markers/test_init.py | jamescooke/flake8-aaa | 9df248e10538946531b67da4564bb229a91baece | [
"MIT"
] | 1 | 2018-10-17T18:49:25.000Z | 2018-10-17T18:49:25.000Z | from flake8_aaa.line_markers import LineMarkers
from flake8_aaa.types import LineType
def test():
result = LineMarkers(5 * [''], 7)
assert result.types == [
LineType.unprocessed,
LineType.unprocessed,
LineType.unprocessed,
LineType.unprocessed,
LineType.unprocessed,
]
assert result.lines == ['', '', '', '', '']
assert result.fn_offset == 7
| 23.823529 | 47 | 0.624691 | 40 | 405 | 6.225 | 0.475 | 0.381526 | 0.433735 | 0.610442 | 0.381526 | 0.381526 | 0.381526 | 0.381526 | 0 | 0 | 0 | 0.016447 | 0.249383 | 405 | 16 | 48 | 25.3125 | 0.802632 | 0 | 0 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
019f4f96de4b2c86a926ff6c4501ee68c9ee1382 | 331 | py | Python | src/NotYetSelfAware/layers/activations/tanh.py | ezalos/NotYetSelfAware | aa8374d24259be9c93b9b5fc00c07f03538a79df | [
"MIT"
] | 1 | 2021-10-02T09:17:46.000Z | 2021-10-02T09:17:46.000Z | src/NotYetSelfAware/layers/activations/tanh.py | ezalos/NotYetSelfAware | aa8374d24259be9c93b9b5fc00c07f03538a79df | [
"MIT"
] | null | null | null | src/NotYetSelfAware/layers/activations/tanh.py | ezalos/NotYetSelfAware | aa8374d24259be9c93b9b5fc00c07f03538a79df | [
"MIT"
] | null | null | null | import numpy as np
from .base import BaseActivation
class Tanh(BaseActivation):
def __init__(self) -> None:
pass
def forward(self, Z):
# A = np.tanh(Z)
up = np.exp(Z) - np.exp(-Z)
dn = np.exp(Z) + np.exp(-Z)
A = up / dn
return A
def backward(self, Z):
A = self.forward(Z)
dA = 1 - np.power(A, 2)
return dA
| 16.55 | 32 | 0.607251 | 59 | 331 | 3.338983 | 0.440678 | 0.101523 | 0.121827 | 0.081218 | 0.121827 | 0.121827 | 0 | 0 | 0 | 0 | 0 | 0.007905 | 0.23565 | 331 | 19 | 33 | 17.421053 | 0.770751 | 0.042296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0.071429 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
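A numerical sanity check for the `Tanh` activation above: `forward` is the classic (e^z - e^-z)/(e^z + e^-z) identity and `backward` is the standard derivative 1 - tanh(z)^2. Re-stated here with `math.exp` on scalars so it runs without numpy.

```python
import math

def tanh_forward(z):
    # Same identity as Tanh.forward, on a scalar.
    up = math.exp(z) - math.exp(-z)
    dn = math.exp(z) + math.exp(-z)
    return up / dn

def tanh_backward(z):
    # Same as Tanh.backward: d/dz tanh(z) = 1 - tanh(z)^2.
    a = tanh_forward(z)
    return 1 - a ** 2

for z in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(tanh_forward(z) - math.tanh(z)) < 1e-12
    assert abs(tanh_backward(z) - (1 - math.tanh(z) ** 2)) < 1e-12
print("forward/backward match math.tanh")
```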
01b2686f3033ccd51eff6f344cce645f1a3589f1 | 1,480 | py | Python | site/trips/models.py | CKPBot/site | af8d4f7ebc370accefb61f1977782f127141ca68 | [
"MIT"
] | null | null | null | site/trips/models.py | CKPBot/site | af8d4f7ebc370accefb61f1977782f127141ca68 | [
"MIT"
] | null | null | null | site/trips/models.py | CKPBot/site | af8d4f7ebc370accefb61f1977782f127141ca68 | [
"MIT"
] | null | null | null | from django.db import models
class Post(models.Model):
iden = models.CharField(max_length=100)
content = models.CharField(max_length=100)
domain = models.CharField(max_length=100, null=True, blank=True)
created_at = models.DateTimeField(auto_now_add=True)
class Article(models.Model):
content = models.TextField(u'Content')
frontId = models.CharField(u'frontId', max_length=50, null=True, blank=True)
def __unicode__(self):
return self.frontId
class IDForm(models.Model):
account = models.CharField(max_length=100)
password = models.CharField(max_length=100)
userRule = models.CharField(u'userRule', max_length=50, null=True, blank=True)
    login_stats = models.BooleanField()
def __unicode__(self):
return self.account
class StateForm(models.Model):
account = models.CharField(max_length=100)
password = models.CharField(max_length=100)
userRule = models.CharField(u'userRule', max_length=50, null=True, blank=True)
def __unicode__(self):
return self.account
class QuestionForm(models.Model):
account = models.CharField(max_length=100)
data = models.TextField(u'data')
def __unicode__(self):
return self.account
class LogForm(models.Model):
user = models.CharField(max_length=100)
logData = models.CharField(u'logData', max_length=50, null=True, blank=True)
feature = models.TextField(u'feature')
locate = models.TextField(u'locate')
created_at = models.DateTimeField(auto_now_add=True)
def __unicode__(self):
return self.user | 30.204082 | 79 | 0.766892 | 205 | 1,480 | 5.341463 | 0.234146 | 0.178082 | 0.147945 | 0.19726 | 0.665753 | 0.567123 | 0.545205 | 0.442922 | 0.325114 | 0.325114 | 0 | 0.026616 | 0.111486 | 1,480 | 49 | 80 | 30.204082 | 0.806084 | 0 | 0 | 0.459459 | 0 | 0 | 0.036462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0.054054 | 0.027027 | 0.135135 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 3 |
01b558ab4133478ccddc69bfe8b40cd7a7d6dd5f | 33 | py | Python | Algorithm-Selection/gripsPredictorPkg/__init__.py | GregorCH/algoselection | 4919d6bede01298f269985fdce94d8176d2ad62b | [
"MIT"
] | 2 | 2018-02-23T08:53:22.000Z | 2021-05-12T11:17:46.000Z | Algorithm-Selection/gripsPredictorPkg/__init__.py | GregorCH/algoselection | 4919d6bede01298f269985fdce94d8176d2ad62b | [
"MIT"
] | null | null | null | Algorithm-Selection/gripsPredictorPkg/__init__.py | GregorCH/algoselection | 4919d6bede01298f269985fdce94d8176d2ad62b | [
"MIT"
] | 1 | 2018-04-09T20:28:36.000Z | 2018-04-09T20:28:36.000Z | __all__ = ['predictor', 'tests']
| 16.5 | 32 | 0.636364 | 3 | 33 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.586207 | 0 | 0 | 0 | 0 | 0 | 0.424242 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
01bc408bbf11fba843d24a7f20cf34f3043f24bb | 172 | py | Python | profile_support.py | matikasiyanda/21cmNEST | 949fb798efb5e419804cb692f876a9402e8e8dec | [
"Unlicense"
] | 1 | 2019-08-27T10:11:37.000Z | 2019-08-27T10:11:37.000Z | profile_support.py | matikasiyanda/21cmNEST | 949fb798efb5e419804cb692f876a9402e8e8dec | [
"Unlicense"
] | null | null | null | profile_support.py | matikasiyanda/21cmNEST | 949fb798efb5e419804cb692f876a9402e8e8dec | [
"Unlicense"
] | 1 | 2020-03-02T04:33:45.000Z | 2020-03-02T04:33:45.000Z | import __builtin__
try:
profile = __builtin__.profile
except AttributeError:
# No line profiler, provide a pass-through version
def profile(func): return func
| 21.5 | 54 | 0.75 | 21 | 172 | 5.761905 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19186 | 172 | 7 | 55 | 24.571429 | 0.870504 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
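The fallback above makes `@profile` a no-op whenever the line profiler has not injected its builtin. A standalone demo of the pass-through version: the decorator simply returns the function unchanged.

```python
def profile(func):
    # Pass-through decorator: same shape as the fallback defined above.
    return func

@profile
def add(a, b):
    return a + b

print(add(2, 3))  # 5 -- the decorated function behaves exactly as before
```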
01ce565d6cd9a41a3837274a9e1c33ec3d6a5025 | 102 | py | Python | example/example_tr.py | tolgahanuzun/markovch | dea979e89715f0b882c4abecadfdfedffa04f24b | [
"MIT"
] | 5 | 2018-01-12T09:14:15.000Z | 2019-04-16T12:16:14.000Z | example/example_tr.py | tolgahanuzun/markovch | dea979e89715f0b882c4abecadfdfedffa04f24b | [
"MIT"
] | null | null | null | example/example_tr.py | tolgahanuzun/markovch | dea979e89715f0b882c4abecadfdfedffa04f24b | [
"MIT"
] | null | null | null | from markovch import markov
diagram = markov.Markov('./data_tr.txt')
print(diagram.result_list(50))
| 17 | 40 | 0.764706 | 15 | 102 | 5.066667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.098039 | 102 | 5 | 41 | 20.4 | 0.804348 | 0 | 0 | 0 | 0 | 0 | 0.127451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
01d61b9775800e6fb6efa5373083c3233bc64162 | 471 | py | Python | app/app/tests/utils/utils.py | cs-nerds/lishebora-shipping-service | ef98f13b4e560edc71a987b4ccf46c9144c6ad0f | [
"MIT"
] | null | null | null | app/app/tests/utils/utils.py | cs-nerds/lishebora-shipping-service | ef98f13b4e560edc71a987b4ccf46c9144c6ad0f | [
"MIT"
] | null | null | null | app/app/tests/utils/utils.py | cs-nerds/lishebora-shipping-service | ef98f13b4e560edc71a987b4ccf46c9144c6ad0f | [
"MIT"
] | null | null | null | import random
import string
def random_lower_string() -> str:
return "".join(random.choices(string.ascii_lowercase, k=32))
def random_code() -> str:
return str(random.randint(0, 1000))
def random_currency() -> str:
return "".join(random.choices(string.ascii_uppercase, k=3))
def random_longitude() -> float:
    return random.random() * random.choice([180, -180])
def random_latitude() -> float:
return random.random() * random.choice([90, -90])
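A quick self-contained sanity sketch of helpers in this style; the longitude variant below multiplies by `random.choice([180, -180])` so both hemispheres are reachable, mirroring `random_latitude`'s `[90, -90]`:

```python
import random
import string

# Re-stated helpers so the block is self-contained.
def random_lower_string() -> str:
    return "".join(random.choices(string.ascii_lowercase, k=32))

def random_longitude() -> float:
    # [180, -180] gives both signs; [180, 180] would never yield the west.
    return random.random() * random.choice([180, -180])

s = random_lower_string()
assert len(s) == 32 and s.islower()
for _ in range(200):
    assert -180.0 <= random_longitude() <= 180.0
```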
| 20.478261 | 64 | 0.687898 | 63 | 471 | 5.015873 | 0.412698 | 0.142405 | 0.082278 | 0.120253 | 0.455696 | 0.455696 | 0.234177 | 0 | 0 | 0 | 0 | 0.045226 | 0.154989 | 471 | 22 | 65 | 21.409091 | 0.748744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | true | 0 | 0.166667 | 0.416667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
01fcb851495352b86f6bfa6adea5c5f9d73b1d70 | 139 | py | Python | clubadm/apps.py | clubadm/clubadm | 1e460253cdd30271aa359b53bdb600d2c6ca91b0 | [
"MIT"
] | 25 | 2015-12-21T04:33:11.000Z | 2021-12-13T17:55:00.000Z | clubadm/apps.py | clubadm/clubadm | 1e460253cdd30271aa359b53bdb600d2c6ca91b0 | [
"MIT"
] | 16 | 2015-12-22T08:23:09.000Z | 2020-12-23T20:00:10.000Z | clubadm/apps.py | clubadm/clubadm | 1e460253cdd30271aa359b53bdb600d2c6ca91b0 | [
"MIT"
] | 6 | 2015-12-21T18:37:57.000Z | 2016-02-22T23:45:46.000Z | from django.apps import AppConfig
class ClubADMConfig(AppConfig):
name = "clubadm"
verbose_name = "Клуб анонимных Дедов Морозов"
| 19.857143 | 49 | 0.748201 | 16 | 139 | 6.4375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179856 | 139 | 6 | 50 | 23.166667 | 0.903509 | 0 | 0 | 0 | 0 | 0 | 0.251799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
bf32021a158f8fe0334afdd817689a59b4174e4e | 131 | py | Python | src/ds-visuals/snippets/tree/create.py | Ray784/ds-visualization | 47597e7cdb98f4f1692cbf9eaa88810c3d2b37c9 | [
"MIT"
] | null | null | null | src/ds-visuals/snippets/tree/create.py | Ray784/ds-visualization | 47597e7cdb98f4f1692cbf9eaa88810c3d2b37c9 | [
"MIT"
] | 1 | 2022-03-02T10:57:50.000Z | 2022-03-02T10:57:50.000Z | src/ds-visuals/snippets/tree/create.py | Ray784/ds-visualization | 47597e7cdb98f4f1692cbf9eaa88810c3d2b37c9 | [
"MIT"
] | null | null | null | def createTree(self, root, *elements):
root = None
for element in elements:
root = self.insert(root, element)
return root | 14.555556 | 38 | 0.70229 | 18 | 131 | 5.111111 | 0.611111 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198473 | 131 | 9 | 39 | 14.555556 | 0.87619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1717458e20e9cc4db4d35021225344a7fe7edb8c | 377 | py | Python | Commands/Command.py | densikat/PyRobotSim | 724418c6537fd5428e706fc3b7807c5b128ea677 | [
"MIT"
] | null | null | null | Commands/Command.py | densikat/PyRobotSim | 724418c6537fd5428e706fc3b7807c5b128ea677 | [
"MIT"
] | null | null | null | Commands/Command.py | densikat/PyRobotSim | 724418c6537fd5428e706fc3b7807c5b128ea677 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
class Command(ABC):
def __init__(self,commandname):
self.commandname = commandname
@abstractmethod
def initializecommand(self, command):
pass
@abstractmethod
def validateinstruction(self, robot, table):
pass
@abstractmethod
def executeinstruction(self, robot, table):
pass
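The abstract base above fixes the command interface; a minimal concrete subclass sketch (the `Move` class and its trivial bodies are illustrative, not from the PyRobotSim repo):

```python
from abc import ABC, abstractmethod

class Command(ABC):
    def __init__(self, commandname):
        self.commandname = commandname

    @abstractmethod
    def initializecommand(self, command): ...

    @abstractmethod
    def validateinstruction(self, robot, table): ...

    @abstractmethod
    def executeinstruction(self, robot, table): ...

# A concrete command must override every abstract method before it can be
# instantiated; instantiating Command directly raises TypeError.
class Move(Command):
    def initializecommand(self, command):
        return None
    def validateinstruction(self, robot, table):
        return True
    def executeinstruction(self, robot, table):
        return "moved"

m = Move("MOVE")
assert m.commandname == "MOVE"
assert m.executeinstruction(None, None) == "moved"
```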
| 17.136364 | 48 | 0.671088 | 35 | 377 | 7.114286 | 0.457143 | 0.204819 | 0.168675 | 0.144578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257294 | 377 | 21 | 49 | 17.952381 | 0.889286 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0.230769 | 0.076923 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
17195d7a3c4f297277705fee26d57efe2d026230 | 87 | py | Python | 2020-11-02/meio.py | pufe/programa | 7f79566597446e9e39222e6c15fa636c3dd472bb | [
"MIT"
] | 2 | 2020-12-12T00:02:40.000Z | 2021-04-21T19:49:59.000Z | 2020-11-02/meio.py | pufe/programa | 7f79566597446e9e39222e6c15fa636c3dd472bb | [
"MIT"
] | null | null | null | 2020-11-02/meio.py | pufe/programa | 7f79566597446e9e39222e6c15fa636c3dd472bb | [
"MIT"
] | null | null | null | n = int(input())
lado = 2
for i in range(n):
lado = 2*lado-1
print(lado*lado)
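The recurrence above has a closed form: starting from 2 and applying `lado = 2*lado - 1` n times yields 2^n + 1 (since `lado - 1` doubles each step), so the program prints (2^n + 1)². A sketch checking this derivation (the closed form is mine, not stated in the source):

```python
def lado_iterativo(n: int) -> int:
    # Mirror of the loop in meio.py: lado starts at 2, then lado = 2*lado - 1.
    lado = 2
    for _ in range(n):
        lado = 2 * lado - 1
    return lado

# lado - 1 doubles on each iteration, so lado_n = 2**n + 1.
for n in range(20):
    assert lado_iterativo(n) == 2 ** n + 1
```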
| 14.5 | 20 | 0.563218 | 17 | 87 | 2.882353 | 0.647059 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046875 | 0.264368 | 87 | 5 | 21 | 17.4 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
175bb838cee18336bd8a62ac16a8a963bd31ba34 | 7,122 | py | Python | sampleScan.py | tink3rtanner/opc | b94b40bb176bfedacbaf4a2dafb7cdac619818a2 | [
"MIT"
] | 29 | 2019-09-22T07:02:40.000Z | 2022-03-25T23:06:06.000Z | sampleScan.py | tink3rtanner/opc | b94b40bb176bfedacbaf4a2dafb7cdac619818a2 | [
"MIT"
] | 3 | 2019-10-07T16:11:38.000Z | 2020-05-07T20:44:57.000Z | sampleScan.py | tink3rtanner/opc | b94b40bb176bfedacbaf4a2dafb7cdac619818a2 | [
"MIT"
] | 5 | 2019-10-07T17:03:56.000Z | 2022-03-18T08:51:22.000Z | import os
directory="/home/pi/Desktop/samplepacks/"
sampleList=[["test","test"]]
def main():
for file in os.listdir(directory):
fullPath = directory + file
if os.path.isdir(fullPath):
#print
#print "directory: ",file
#print fullPath
containsAif=0
#each folder in parent directory
for subfile in os.listdir(fullPath):
subfullPath=fullPath+"/"+subfile
#a path within a path
#print "SUBFILE: ",subfile
if os.path.isdir(subfullPath):
    if subfile=="synth" or subfile=="drum":
#print "nested directories, but it's okay cuz you named them"
readAifDir(subfile,subfullPath)
elif subfile.endswith(".aif") or subfile.endswith(".aiff"):
containsAif=1
elif subfile.endswith(".DS_Store"):
continue
else:
print "what's going on here. name your folders or hold it with the nesting"
print "SUBFILE: ",subfile
if containsAif==1:
readAifDir(file,fullPath)
# else:
# sampleList.append([file,fullPath]) #adds file andfullpath to samplelist
# #if file.endswith(".atm") or file.endswith(".py"):
if ['test', 'test'] in sampleList:
sampleList.remove(['test','test'])
#print sampleList
# for sample in sampleList:
# print
# print sample[1] #fullpath
# atts=readAif(sample[1]) #reads aiff and gets attributes!
# print atts['type']
# #print atts
def readAifDir(name,path):
#should return amount of .aif's found in dir
aifsampleList=[["a","a"]]
print
print "readAif directory: ",name
print path
for file in os.listdir(path):
fullPath=path+"/"+file
if file.endswith('.aif')or file.endswith(".aiff"):
#print "aif found at file: ",fullPath
atts=readAif(fullPath)
aifsampleList.append([file,fullPath])
#print atts['type']
elif file.endswith(".DS_Store"):
#ignore .DS_Store mac files
continue
else:
print fullPath, " is not a aif. what gives?"
if ["a","a"] in aifsampleList:
aifsampleList.remove(["a","a"])
for sample in aifsampleList:
print sample[1] #fullpath
atts=readAif(sample[1]) #reads aiff and gets attributes!
print atts['type']
#print atts
def readAif(path):
#print "//READAIFF from file ", path
#print
# SAMPLE DRUM AIFF METADATA
# /home/pi/Desktop/samplepacks/kits1/rz1.aif
# drum_version : 1
# type : drum
# name : user
# octave : 0
# pitch : ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0']
# start : ['0', '24035422', '48070845', '86012969', '123955093', '144951088', '175722759', '206494430', '248851638', '268402991', '312444261', '428603973', '474613364', '601936581', '729259799', '860697810', '992135821', '1018188060', '1044240299', '1759004990', '1783040413', '1820982537', '1845017959', '1882960084']
# end : ['24031364', '48066787', '86008911', '123951035', '144947030', '175718701', '206490372', '248847580', '268398933', '312440203', '428599915', '474609306', '601932523', '729255741', '860693752', '992131763', '1018184002', '1044236241', '1759000932', '1783036355', '1820978479', '1845013902', '1882956026', '1906991448']
# playmode : ['8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192']
# reverse : ['8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192']
# volume : ['8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192', '8192']
# dyna_env : ['0', '8192', '0', '8192', '0', '0', '0', '0']
# fx_active : false
# fx_type : delay
# fx_params : ['8000', '8000', '8000', '8000', '8000', '8000', '8000', '8000']
# lfo_active : false
# lfo_type : tremolo
# lfo_params : ['16000', '16000', '16000', '16000', '0', '0', '0', '0']
# SAMPLE SYNTH METADATA
# /home/pi/Desktop/samplepacks/C-MIX/mtrap.aif
# adsr : ['64', '10746', '32767', '14096', '4000', '64', '4000', '4000']
# base_freq : 440.0
# fx_active : true
# fx_params : ['64', '0', '18063', '16000', '0', '0', '0', '0']
# fx_type : nitro
# knobs : ['0', '2193', '2540', '4311', '12000', '12288', '28672', '8192']
# lfo_active : false
# lfo_params : ['16000', '0', '0', '16000', '0', '0', '0', '0']
# lfo_type : tremolo
# name : mtrap
# octave : 0
# synth_version : 2
# type : sampler
attdata={}
with open(path,'rb') as fp:
line=fp.readline()
#print line
if 'op-1' in line:
#print
#print 'op-1 appl chunk found!'
#print subline=line.split("op-1")
# subline=line.split("op-1")[0]
# print subline[1]
data=line.split('{', 1)[1].split('}')[0] #data is everything in brackets
#print
#print "data!"
#print data
data=switchBrack(data,",","|")
attlist=data.split(",")
#print
#print "attlist"
#print attlist
#print
#print "attname: attvalue"
for i,line in enumerate(attlist):
#print line
linesplit=line.split(":")
attname=linesplit[0]
attname=attname[1:-1]
attvalue=linesplit[1]
valtype=""
#print attvalue
if isInt(attvalue):
valtype='int'
if isfloat(attvalue):
valtype='float'
if attvalue=="false" or attvalue=="true":
valtype='bool'
for j,char in enumerate(list(attvalue)):
#print "j,char"
#print j, char
if valtype=="":
if char=='"':
#print "string: ",char
valtype="string"
elif char=="[":
valtype="list"
if valtype=="":
valtype="no type detected"
elif valtype=="string":
attvalue=attvalue[1:-1]
elif valtype=="list":
attvalue=attvalue[1:-1]
attvalue=attvalue.split("|")
#print "list found"
# for k,item in enumerate(attvalue):
# print k,item
#attvalue[k]=
#print attvalue[1]
#print attname,":",attvalue
#print valtype
#print
attdata.update({attname:attvalue})
#print attdata['type']
if 'type' in attdata:
#print "type exists"
True
else:
#print "type doesn't exist"
attdata.update({'type':'not specified'})
#except:
# attdata.update({'type':'not specified'})
return attdata
# attdata[attname]=value
#print attdata
def isInt(s):
try:
int(s)
return True
except ValueError:
return False
def isfloat(s):
try:
float(s)
return True
except ValueError:
return False
def switchBrack(data,fromdelim,todelim):
datalist=list(data)
inbrack=0
for i,char in enumerate(datalist):
#print i, " ",char
if char=="[":
inbrack=1
#print "in brackets"
if char=="]":
inbrack=0
#print "out of brackets"
if inbrack ==1:
if char==fromdelim:
#print "comma found!"
if data[i-1].isdigit():
#print "num preceding comma found"
datalist[i]=todelim
newdata="".join(datalist)
#print newdata
return newdata
main()
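`switchBrack` swaps a delimiter only while inside square brackets and only when a digit precedes it, so commas inside metadata lists survive the later top-level `split(',')`. A Python 3 sketch of the same logic, with an invented sample input:

```python
def switch_brack(data: str, fromdelim: str, todelim: str) -> str:
    # Replace fromdelim with todelim, but only inside [...] and only when
    # the preceding character is a digit (mirrors switchBrack above).
    out = list(data)
    inbrack = False
    for i, char in enumerate(out):
        if char == "[":
            inbrack = True
        elif char == "]":
            inbrack = False
        elif inbrack and char == fromdelim and i > 0 and data[i - 1].isdigit():
            out[i] = todelim
    return "".join(out)

sample = '"pitch":[1,2,3],"name":"user"'
# The commas inside the list are rewritten; the top-level comma is kept.
assert switch_brack(sample, ",", "|") == '"pitch":[1|2|3],"name":"user"'
```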
| 25.345196 | 326 | 0.596462 | 887 | 7,122 | 4.767756 | 0.277339 | 0.130527 | 0.187278 | 0.238354 | 0.197446 | 0.142823 | 0.142823 | 0.135257 | 0.115867 | 0.115867 | 0 | 0.169067 | 0.210194 | 7,122 | 280 | 327 | 25.435714 | 0.582756 | 0.516428 | 0 | 0.172727 | 0 | 0 | 0.097663 | 0.008688 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.009091 | null | null | 0.072727 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
175faeee3cfd60447e8cdaf79007116370a89f41 | 238 | py | Python | scripts/04_3d_concepts/modeling/polygon_reduction/polygonreduction_create_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 85 | 2019-09-06T22:53:15.000Z | 2022-03-27T01:33:09.000Z | scripts/04_3d_concepts/modeling/polygon_reduction/polygonreduction_create_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 11 | 2019-09-03T22:59:19.000Z | 2022-02-27T03:42:52.000Z | scripts/04_3d_concepts/modeling/polygon_reduction/polygonreduction_create_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 31 | 2019-09-09T09:35:35.000Z | 2022-03-28T09:08:47.000Z | """
Copyright: MAXON Computer GmbH
Author: Yannick Puech
Description:
- Creates a new PolygonReduction object.
Class/method highlighted:
- c4d.utils.PolygonReduction
"""
import c4d
polyReduction = c4d.utils.PolygonReduction()
| 15.866667 | 44 | 0.756303 | 25 | 238 | 7.2 | 0.8 | 0.088889 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.155462 | 238 | 14 | 45 | 17 | 0.880597 | 0.718487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
17665acaef0965ab5ef31388109575fa38dc8d4f | 780 | py | Python | manticore/core/smtlib/__init__.py | ivanpustogarov/manticore | f17410b8427ddbd5d751d8824bdf10ce33c9f3ce | [
"Apache-2.0"
] | null | null | null | manticore/core/smtlib/__init__.py | ivanpustogarov/manticore | f17410b8427ddbd5d751d8824bdf10ce33c9f3ce | [
"Apache-2.0"
] | null | null | null | manticore/core/smtlib/__init__.py | ivanpustogarov/manticore | f17410b8427ddbd5d751d8824bdf10ce33c9f3ce | [
"Apache-2.0"
] | 1 | 2018-08-12T17:29:11.000Z | 2018-08-12T17:29:11.000Z | from __future__ import absolute_import # noqa
from .expression import Expression, Bool, BitVec, Array, BitVecConstant # noqa
from .constraints import ConstraintSet # noqa
from .solver import * # noqa
from . import operators as Operators # noqa
import logging
logger = logging.getLogger(__name__)
'''
class OperationNotPermited(SolverException):
def __init__(self):
super(OperationNotPermited, self).__init__("You cant build this expression") #no childrens
class ConcretizeException(SolverException):
def __init__(self, expression):
super(ConcretizeException, self).__init__("Need to concretize the following and retry\n"+str(expression)) #no childrens
self.expression = expression
'''
class VisitorException(Exception):
pass
| 31.2 | 131 | 0.75 | 83 | 780 | 6.746988 | 0.542169 | 0.057143 | 0.05 | 0.092857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167949 | 780 | 24 | 132 | 32.5 | 0.862866 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.666667 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
176a14d4d9c836b724f04cf845e7dfc03e125205 | 197 | py | Python | sheets/serializers.py | LD31D/django_sheets | fa626012fd00d69fbbac4fe4542150902d2be8cb | [
"MIT"
] | null | null | null | sheets/serializers.py | LD31D/django_sheets | fa626012fd00d69fbbac4fe4542150902d2be8cb | [
"MIT"
] | null | null | null | sheets/serializers.py | LD31D/django_sheets | fa626012fd00d69fbbac4fe4542150902d2be8cb | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import Cell
class CellSerializer(serializers.ModelSerializer):
class Meta:
model = Cell
fields = ('coordinates', 'value') | 19.7 | 50 | 0.71066 | 20 | 197 | 6.95 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213198 | 197 | 10 | 51 | 19.7 | 0.896774 | 0 | 0 | 0 | 0 | 0 | 0.080808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
176c751a40ba19e4a05a51527d9a92d2dae692b0 | 257 | py | Python | lwc/views.py | codingforentrepreneurs/launch-with-code | 45b20b799819d7ab3ea528ae63a2cf2773fc6e13 | [
"MIT"
] | 64 | 2015-01-04T20:01:08.000Z | 2021-09-08T16:40:48.000Z | lwc/views.py | codingforentrepreneurs/launch-with-code | 45b20b799819d7ab3ea528ae63a2cf2773fc6e13 | [
"MIT"
] | 1 | 2016-01-05T16:52:10.000Z | 2016-01-05T17:02:24.000Z | lwc/views.py | codingforentrepreneurs/launch-with-code | 45b20b799819d7ab3ea528ae63a2cf2773fc6e13 | [
"MIT"
] | 85 | 2015-01-03T20:28:17.000Z | 2022-03-02T20:25:44.000Z | from django.shortcuts import render
def testhome(request):
context = {}
template = "donotuse.html"
return render(request, template, context)
# def home2(request):
# context = {}
# template = "home2.html"
# return render(request, template, context) | 19.769231 | 44 | 0.712062 | 29 | 257 | 6.310345 | 0.482759 | 0.153005 | 0.240437 | 0.251366 | 0.415301 | 0.415301 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 0.159533 | 257 | 13 | 44 | 19.769231 | 0.837963 | 0.392996 | 0 | 0 | 0 | 0 | 0.085526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
178d05b1d8bb6f8c30ffa020c7ddd40a9b93f7a2 | 265 | py | Python | main.py | hiyoung123/ProxyPool | 79f3d96e52873637ad6bff89908c7b8218d878d5 | [
"MIT"
] | 2 | 2021-05-10T07:59:22.000Z | 2021-05-10T08:40:27.000Z | main.py | hiyoung123/ProxyPool | 79f3d96e52873637ad6bff89908c7b8218d878d5 | [
"MIT"
] | null | null | null | main.py | hiyoung123/ProxyPool | 79f3d96e52873637ad6bff89908c7b8218d878d5 | [
"MIT"
] | 1 | 2021-05-31T06:28:46.000Z | 2021-05-31T06:28:46.000Z | #!/usr/bin/env python
# -*- encoding: utf-8 -*-
import os
import sys
from scrapy.cmdline import execute
if __name__ == '__main__':
sys.path.append(os.path.abspath(__file__))
# execute(['scrapy', 'crawl', 'XiLa'])
execute(['scrapy', 'crawl', 'Kuai'])
| 20.384615 | 46 | 0.641509 | 34 | 265 | 4.647059 | 0.705882 | 0.164557 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004484 | 0.158491 | 265 | 12 | 47 | 22.083333 | 0.704036 | 0.30566 | 0 | 0 | 0 | 0 | 0.127072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
bd665cbb30872e6e356793046c9b4863e6327d40 | 67,153 | py | Python | Caracteriz_nubes_Anio.py | cmcuervol/Estefania | 13b564261dfc786b93c77fbc442a568018f87cc9 | [
"MIT"
] | 2 | 2020-09-13T07:55:25.000Z | 2020-09-21T13:36:23.000Z | Caracteriz_nubes_Anio.py | cmcuervol/Estefania | 13b564261dfc786b93c77fbc442a568018f87cc9 | [
"MIT"
] | null | null | null | Caracteriz_nubes_Anio.py | cmcuervol/Estefania | 13b564261dfc786b93c77fbc442a568018f87cc9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import pandas as pd
from datetime import datetime, timedelta
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import os
import matplotlib.ticker as tck
import matplotlib.font_manager as fm
import math as m
import matplotlib.dates as mdates
import netCDF4 as nc
from netCDF4 import Dataset
import itertools
import datetime
from scipy.stats import ks_2samp
import matplotlib.colors as colors
#------------------------------------------------------------------------------
# Code motivation -------------------------------------------------------------
"Script that characterizes cloudiness by weighting its hours. It is run over the"
"longest time horizon available for the GOES CH2 data. It is subject to the"
"per-hour thresholds, so a backup is kept in the Backups folder on Drive."
#################################################################################################
## -----------------INCORPORATING THE RADIATION AND EXPERIMENT DATA----------------- ##
#################################################################################################
df_P975 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Piranometro/60012018_2019.txt', parse_dates=[2])
df_P350 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Piranometro/60022018_2019.txt', parse_dates=[2])
df_P348 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Piranometro/60032018_2019.txt', parse_dates=[2])
df_P975 = df_P975.set_index(["fecha_hora"])
df_P975.index = df_P975.index.tz_localize('UTC').tz_convert('America/Bogota')
df_P975.index = df_P975.index.tz_localize(None)
df_P350 = df_P350.set_index(["fecha_hora"])
df_P350.index = df_P350.index.tz_localize('UTC').tz_convert('America/Bogota')
df_P350.index = df_P350.index.tz_localize(None)
df_P348 = df_P348.set_index(["fecha_hora"])
df_P348.index = df_P348.index.tz_localize('UTC').tz_convert('America/Bogota')
df_P348.index = df_P348.index.tz_localize(None)
df_P975.index = pd.to_datetime(df_P975.index, format="%Y-%m-%d %H:%M:%S", errors='coerce')
df_P350.index = pd.to_datetime(df_P350.index, format="%Y-%m-%d %H:%M:%S", errors='coerce')
df_P348.index = pd.to_datetime(df_P348.index, format="%Y-%m-%d %H:%M:%S", errors='coerce')
## ----------------RESTRICTING THE DATA TO VALID VALUES---------------- ##
'Since radiation is what matters in this case, the data are filtered to'
'keep only radiation values greater than 0.'
df_P975 = df_P975[(df_P975['radiacion'] > 0) ]
df_P350 = df_P350[(df_P350['radiacion'] > 0) ]
df_P348 = df_P348[(df_P348['radiacion'] > 0) ]
df_P975_h = df_P975.groupby(pd.Grouper(level='fecha_hora', freq='1H')).mean()
df_P350_h = df_P350.groupby(pd.Grouper(level='fecha_hora', freq='1H')).mean()
df_P348_h = df_P348.groupby(pd.Grouper(level='fecha_hora', freq='1H')).mean()
df_P975_h = df_P975_h.between_time('06:00', '17:59')
df_P350_h = df_P350_h.between_time('06:00', '17:59')
df_P348_h = df_P348_h.between_time('06:00', '17:59')
#-----------------------------------------------------------------------------
# Font file paths ------------------------------------------------------------
prop = fm.FontProperties(fname='/home/nacorreasa/SIATA/Cod_Califi/AvenirLTStd-Heavy.otf' )
prop_1 = fm.FontProperties(fname='/home/nacorreasa/SIATA/Cod_Califi/AvenirLTStd-Book.otf')
prop_2 = fm.FontProperties(fname='/home/nacorreasa/SIATA/Cod_Califi/AvenirLTStd-Black.otf')
################################################################################################
## -------------------------------REFLECTANCE THRESHOLDS------------------------------ ##
################################################################################################
Umbral_up_348 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_348_Nuba.csv', sep=',', header = None)
Umbral_down_348 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_348_Desp.csv', sep=',', header = None)
Umbral_up_348.columns=['Hora', 'Umbral']
Umbral_up_348.index = Umbral_up_348['Hora']
Umbral_up_348 = Umbral_up_348.drop(['Hora'], axis=1)
Umbral_down_348.columns=['Hora', 'Umbral']
Umbral_down_348.index = Umbral_down_348['Hora']
Umbral_down_348 = Umbral_down_348.drop(['Hora'], axis=1)
#Umbrales_348 = [Umbral_down_348, Umbral_up_348]
Umbral_up_350 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_350_Nuba.csv', sep=',', header = None)
Umbral_down_350 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_350_Desp.csv', sep=',', header = None)
Umbral_up_350.columns=['Hora', 'Umbral']
Umbral_up_350.index = Umbral_up_350['Hora']
Umbral_up_350 = Umbral_up_350.drop(['Hora'], axis=1)
Umbral_down_350.columns=['Hora', 'Umbral']
Umbral_down_350.index = Umbral_down_350['Hora']
Umbral_down_350 = Umbral_down_350.drop(['Hora'], axis=1)
#Umbrales_350 = [Umbral_down_350, Umbral_up_350]
Umbral_up_975 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_975_Nuba.csv', sep=',', header = None)
Umbral_down_975 = pd.read_table('/home/nacorreasa/Maestria/Datos_Tesis/Umbrales_Horarios/Umbral_Hourly_975_Desp.csv', sep=',', header = None)
Umbral_up_975.columns=['Hora', 'Umbral']
Umbral_up_975.index = Umbral_up_975['Hora']
Umbral_up_975 = Umbral_up_975.drop(['Hora'], axis=1)
Umbral_down_975.columns=['Hora', 'Umbral']
Umbral_down_975.index = Umbral_down_975['Hora']
Umbral_down_975 = Umbral_down_975.drop(['Hora'], axis=1)
#Umbrales_975 = [Umbral_down_975, Umbral_up_975]
####################################################################################
## ----------------READING THE GOES CH2 DATA FOR THE FULL GRID---------------- ##
####################################################################################
Rad = np.load('/home/nacorreasa/Maestria/Datos_Tesis/Arrays/Array_Rad_2018_2019CH2.npy')
#################################################################################################
##-------------------READING THE GOES CH2 DATA FOR EACH PIXEL--------------------------##
#################################################################################################
Rad_pixel_975 = np.load('/home/nacorreasa/Maestria/Datos_Tesis/Arrays/Array_Rad_pix975_Anio.npy')
Rad_pixel_350 = np.load('/home/nacorreasa/Maestria/Datos_Tesis/Arrays/Array_Rad_pix350_Anio.npy')
Rad_pixel_348 = np.load('/home/nacorreasa/Maestria/Datos_Tesis/Arrays/Array_Rad_pix348_Anio.npy')
fechas_horas = np.load('/home/nacorreasa/Maestria/Datos_Tesis/Arrays/Array_FechasHoras_Anio.npy')
df_fh = pd.DataFrame()
df_fh ['fecha_hora'] = fechas_horas
df_fh['fecha_hora'] = pd.to_datetime(df_fh['fecha_hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_fh.index = df_fh['fecha_hora']
w = pd.date_range(df_fh.index.min(), df_fh.index.max()).difference(df_fh.index)
df_fh = df_fh[df_fh.index.hour != 5]
fechas_horas = df_fh['fecha_hora'].values
## -- Selecting the TS pixel
Rad_df_975 = pd.DataFrame()
Rad_df_975['Fecha_Hora'] = fechas_horas
Rad_df_975['Radiacias'] = Rad_pixel_975
Rad_df_975['Fecha_Hora'] = pd.to_datetime(Rad_df_975['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
Rad_df_975.index = Rad_df_975['Fecha_Hora']
Rad_df_975 = Rad_df_975.drop(['Fecha_Hora'], axis=1)
## -- Selecting the CI pixel
Rad_df_350 = pd.DataFrame()
Rad_df_350['Fecha_Hora'] = fechas_horas
Rad_df_350['Radiacias'] = Rad_pixel_350
Rad_df_350['Fecha_Hora'] = pd.to_datetime(Rad_df_350['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
Rad_df_350.index = Rad_df_350['Fecha_Hora']
Rad_df_350 = Rad_df_350.drop(['Fecha_Hora'], axis=1)
## -- Selecting the JV pixel
Rad_df_348 = pd.DataFrame()
Rad_df_348['Fecha_Hora'] = fechas_horas
Rad_df_348['Radiacias'] = Rad_pixel_348
Rad_df_348['Fecha_Hora'] = pd.to_datetime(Rad_df_348['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
Rad_df_348.index = Rad_df_348['Fecha_Hora']
Rad_df_348 = Rad_df_348.drop(['Fecha_Hora'], axis=1)
'NOTE: FROM HERE ON----------------------------------------------------------------------------------'
'Commented out because smoothing the series to hourly was throwing away the value of the 10-minute data.'
## ------------------------REPLACING THE HOURLY DATA WITH THE ORIGINAL 10-MINUTE DATA---------------------- ##
Rad_df_348_h = Rad_df_348
Rad_df_350_h = Rad_df_350
Rad_df_975_h = Rad_df_975
## ------------------------------------HOURLY REFLECTANCE DATA------------------------- ##
# Rad_df_348_h = Rad_df_348.groupby(pd.Grouper(freq="H")).mean()
# Rad_df_350_h = Rad_df_350.groupby(pd.Grouper(freq="H")).mean()
# Rad_df_975_h = Rad_df_975.groupby(pd.Grouper(freq="H")).mean()
'NOTE: UP TO HERE------------------------------------------------------------------------------------'
Rad_df_348_h = Rad_df_348_h.between_time('06:00', '17:59')
Rad_df_350_h = Rad_df_350_h.between_time('06:00', '17:59')
Rad_df_975_h = Rad_df_975_h.between_time('06:00', '17:59')
## ---------------------------------DISTRIBUTION (PDF) AS A PLOT----------------------------------------- ##
fig = plt.figure(figsize=[10, 6])
plt.rc('axes', edgecolor='gray')
ax1 = fig.add_subplot(1, 3, 1)
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.hist(Rad_df_348_h['Radiacias'].values[~np.isnan(Rad_df_348_h['Radiacias'].values)], bins='auto', alpha = 0.5)
#Umbrales_line1 = [ax1.axvline(x=xc, color='k', linestyle='--') for xc in Umbrales_348]
#ax1.text(Umbrales_348[0], 1000, str(Umbrales_348[0]) , fontsize=10, fontproperties=prop_1)
ax1.set_title(u'Distribución del FR en JV', fontproperties=prop, fontsize = 15)
ax1.set_ylabel(u'Frecuencia', fontproperties=prop_1, fontsize = 15)
ax1.set_xlabel(u'Reflectancia', fontproperties=prop_1, fontsize = 15)
ax2 = fig.add_subplot(1, 3, 2)
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.hist(Rad_df_350_h['Radiacias'].values[~np.isnan(Rad_df_350_h['Radiacias'].values)], bins='auto', alpha = 0.5)
#Umbrales_line2 = [ax2.axvline(x=xc, color='k', linestyle='--') for xc in Umbrales_350]
ax2.set_title(u'Distribución del FR en CI', fontproperties=prop, fontsize = 15)
ax2.set_ylabel(u'Frecuencia', fontproperties=prop_1, fontsize = 15)
ax2.set_xlabel(u'Reflectancia', fontproperties=prop_1, fontsize = 15)
ax3 = fig.add_subplot(1, 3, 3)
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
ax3.hist(Rad_df_975_h['Radiacias'].values[~np.isnan(Rad_df_975_h['Radiacias'].values)], bins='auto', alpha = 0.5)
#Umbrales_line3 = [ax3.axvline(x=xc, color='k', linestyle='--') for xc in Umbrales_975]
ax3.set_title(u'Distribución del FR en TS', fontproperties=prop, fontsize = 15)
ax3.set_ylabel(u'Frecuencia', fontproperties=prop_1, fontsize = 15)
ax3.set_xlabel(u'Reflectancia', fontproperties=prop_1, fontsize = 15)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/HistogramaFrecuenciasCH2_2018.png')
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/HistogramaFrecuenciasCH2_2018.png nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
################################################################################################
## -------------------------BUILDING THE CLEAR-SKY SCENARIO DATAFRAME---------------------------- ##
################################################################################################
Rad_desp_348 = []
FH_Desp_348 = []
for i in range(len(Rad_df_348_h)):
for j in range(len(Umbral_down_348.index)):
if (Rad_df_348_h.index[i].hour == Umbral_down_348.index[j]) & (Rad_df_348_h.Radiacias.values[i] <= Umbral_down_348.values[j]):
Rad_desp_348.append(Rad_df_348_h.Radiacias.values[i])
FH_Desp_348.append(Rad_df_348_h.index[i])
df_348_desp = pd.DataFrame()
df_348_desp['Radiacias'] = Rad_desp_348
df_348_desp['Fecha_Hora'] = FH_Desp_348
df_348_desp['Fecha_Hora'] = pd.to_datetime(df_348_desp['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_348_desp.index = df_348_desp['Fecha_Hora']
df_348_desp = df_348_desp.drop(['Fecha_Hora'], axis=1)
Rad_desp_350 = []
FH_Desp_350 = []
for i in range(len(Rad_df_350_h)):
for j in range(len(Umbral_down_350.index)):
if (Rad_df_350_h.index[i].hour == Umbral_down_350.index[j]) & (Rad_df_350_h.Radiacias.values[i] <= Umbral_down_350.values[j]):
Rad_desp_350.append(Rad_df_350_h.Radiacias.values[i])
FH_Desp_350.append(Rad_df_350_h.index[i])
df_350_desp = pd.DataFrame()
df_350_desp['Radiacias'] = Rad_desp_350
df_350_desp['Fecha_Hora'] = FH_Desp_350
df_350_desp['Fecha_Hora'] = pd.to_datetime(df_350_desp['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_350_desp.index = df_350_desp['Fecha_Hora']
df_350_desp = df_350_desp.drop(['Fecha_Hora'], axis=1)
Rad_desp_975 = []
FH_Desp_975 = []
for i in range(len(Rad_df_975_h)):
    for j in range(len(Umbral_down_975.index)):
        if (Rad_df_975_h.index[i].hour == Umbral_down_975.index[j]) and (Rad_df_975_h.Radiacias.values[i] <= Umbral_down_975.values[j]):
            Rad_desp_975.append(Rad_df_975_h.Radiacias.values[i])
            FH_Desp_975.append(Rad_df_975_h.index[i])
df_975_desp = pd.DataFrame()
df_975_desp['Radiacias'] = Rad_desp_975
df_975_desp['Fecha_Hora'] = FH_Desp_975
df_975_desp['Fecha_Hora'] = pd.to_datetime(df_975_desp['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_975_desp.index = df_975_desp['Fecha_Hora']
df_975_desp = df_975_desp.drop(['Fecha_Hora'], axis=1)
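# The nested-loop filters above scan every (row, hour) pair; the same clear-sky
# selection can be done in one vectorized pass by mapping each timestamp's hour
# to its per-hour threshold. A minimal sketch with toy data -- `rad` and
# `umbral_down` are hypothetical stand-ins for Rad_df_*_h['Radiacias'] and
# Umbral_down_*, not objects from this script:

```python
import pandas as pd

# Toy hourly series and per-hour lower thresholds (names are stand-ins)
idx = pd.date_range('2018-01-01 06:00', periods=6, freq='H')
rad = pd.Series([10.0, 50.0, 5.0, 80.0, 3.0, 90.0], index=idx, name='Radiacias')
umbral_down = pd.Series({6: 12.0, 7: 60.0, 8: 6.0, 9: 70.0, 10: 4.0, 11: 85.0})

# Map every timestamp's hour to its threshold, then filter in one comparison
thresh = pd.Series(rad.index.hour, index=rad.index).map(umbral_down)
despejado = rad[rad <= thresh]
```

# The result keeps the original DatetimeIndex, so no separate Fecha_Hora
# rebuild is needed.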
################################################################################################
## -----------------------------BUILD THE CLOUDY SCENARIO DATAFRAME--------------------------- ##
################################################################################################
Rad_nuba_348 = []
FH_Nuba_348 = []
for i in range(len(Rad_df_348_h)):
    for j in range(len(Umbral_up_348.index)):
        if (Rad_df_348_h.index[i].hour == Umbral_up_348.index[j]) and (Rad_df_348_h.Radiacias.values[i] >= Umbral_up_348.values[j]):
            Rad_nuba_348.append(Rad_df_348_h.Radiacias.values[i])
            FH_Nuba_348.append(Rad_df_348_h.index[i])
df_348_nuba = pd.DataFrame()
df_348_nuba['Radiacias'] = Rad_nuba_348
df_348_nuba['Fecha_Hora'] = FH_Nuba_348
df_348_nuba['Fecha_Hora'] = pd.to_datetime(df_348_nuba['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_348_nuba.index = df_348_nuba['Fecha_Hora']
df_348_nuba = df_348_nuba.drop(['Fecha_Hora'], axis=1)
Rad_nuba_350 = []
FH_Nuba_350 = []
for i in range(len(Rad_df_350_h)):
    for j in range(len(Umbral_up_350.index)):
        if (Rad_df_350_h.index[i].hour == Umbral_up_350.index[j]) and (Rad_df_350_h.Radiacias.values[i] >= Umbral_up_350.values[j]):
            Rad_nuba_350.append(Rad_df_350_h.Radiacias.values[i])
            FH_Nuba_350.append(Rad_df_350_h.index[i])
df_350_nuba = pd.DataFrame()
df_350_nuba['Radiacias'] = Rad_nuba_350
df_350_nuba['Fecha_Hora'] = FH_Nuba_350
df_350_nuba['Fecha_Hora'] = pd.to_datetime(df_350_nuba['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_350_nuba.index = df_350_nuba['Fecha_Hora']
df_350_nuba = df_350_nuba.drop(['Fecha_Hora'], axis=1)
Rad_nuba_975 = []
FH_Nuba_975 = []
for i in range(len(Rad_df_975_h)):
    for j in range(len(Umbral_up_975.index)):
        if (Rad_df_975_h.index[i].hour == Umbral_up_975.index[j]) and (Rad_df_975_h.Radiacias.values[i] >= Umbral_up_975.values[j]):
            Rad_nuba_975.append(Rad_df_975_h.Radiacias.values[i])
            FH_Nuba_975.append(Rad_df_975_h.index[i])
df_975_nuba = pd.DataFrame()
df_975_nuba['Radiacias'] = Rad_nuba_975
df_975_nuba['Fecha_Hora'] = FH_Nuba_975
df_975_nuba['Fecha_Hora'] = pd.to_datetime(df_975_nuba['Fecha_Hora'], format="%Y-%m-%d %H:%M", errors='coerce')
df_975_nuba.index = df_975_nuba['Fecha_Hora']
df_975_nuba = df_975_nuba.drop(['Fecha_Hora'], axis=1)
## ----------------------------EXTRACT THE CLEAR-SKY HOURS AND DATES-------------------------- ##
Hora_desp_348 = df_348_desp.index.hour
Fecha_desp_348 = df_348_desp.index.date
Hora_desp_350 = df_350_desp.index.hour
Fecha_desp_350 = df_350_desp.index.date
Hora_desp_975 = df_975_desp.index.hour
Fecha_desp_975 = df_975_desp.index.date
## -----------------------------EXTRACT THE CLOUDY HOURS AND DATES---------------------------- ##
Hora_nuba_348 = df_348_nuba.index.hour
Fecha_nuba_348 = df_348_nuba.index.date
Hora_nuba_350 = df_350_nuba.index.hour
Fecha_nuba_350 = df_350_nuba.index.date
Hora_nuba_975 = df_975_nuba.index.hour
Fecha_nuba_975 = df_975_nuba.index.date
## -----------------------------PLOT THE HISTOGRAMS OF THE HOURS------------------------------ ##
fig = plt.figure(figsize=[10, 6])
plt.rc('axes', edgecolor='gray')
ax1 = fig.add_subplot(1, 3, 1)
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.hist(Hora_desp_348, bins='auto', alpha = 0.5, color = 'orange', label = 'Desp')
ax1.hist(Hora_nuba_348, bins='auto', alpha = 0.5, label = 'Nub')
ax1.set_title(u'Distribución de nubes por horas en JV', fontproperties=prop, fontsize = 8)
ax1.set_ylabel(u'Frecuencia', fontproperties=prop_1)
ax1.set_xlabel(u'Horas', fontproperties=prop_1)
ax1.set_ylim(0, 1350)
ax1.legend()
ax2 = fig.add_subplot(1, 3, 2)
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.hist(Hora_desp_350, bins='auto', alpha = 0.5, color = 'orange', label = 'Desp')
ax2.hist(Hora_nuba_350, bins='auto', alpha = 0.5, label = 'Nub')
ax2.set_title(u'Distribución de nubes por horas en CI', fontproperties=prop, fontsize = 8)
ax2.set_ylabel(u'Frecuencia', fontproperties=prop_1)
ax2.set_xlabel(u'Horas', fontproperties=prop_1)
ax2.set_ylim(0, 1350)
ax2.legend()
ax3 = fig.add_subplot(1, 3, 3)
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
ax3.hist(Hora_desp_975, bins='auto', alpha = 0.5, color = 'orange', label = 'Desp')
ax3.hist(Hora_nuba_975, bins='auto', alpha = 0.5, label = 'Nub')
ax3.set_title(u'Distribución de nubes por horas en TS', fontproperties=prop, fontsize = 8)
ax3.set_ylabel(u'Frecuencia', fontproperties=prop_1)
ax3.set_xlabel(u'Horas', fontproperties=prop_1)
ax3.set_ylim(0, 1350)
ax3.legend()
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/HistoNubaDespAnio2018.png')
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/HistoNubaDespAnio2018.png nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
##----------FIND THE RADIATION VALUES CORRESPONDING TO THE CLOUDY HOURS----------##
df_FH_nuba_348 = pd.DataFrame()
df_FH_nuba_348['Fechas'] = Fecha_nuba_348
df_FH_nuba_348['Horas'] = Hora_nuba_348
df_FH_nuba_350 = pd.DataFrame()
df_FH_nuba_350['Fechas'] = Fecha_nuba_350
df_FH_nuba_350['Horas'] = Hora_nuba_350
df_FH_nuba_975 = pd.DataFrame()
df_FH_nuba_975['Fechas'] = Fecha_nuba_975
df_FH_nuba_975['Horas'] = Hora_nuba_975
df_FH_nuba_348_groupH = df_FH_nuba_348.groupby('Horas')['Fechas'].unique()
df_nuba_348_groupH = pd.DataFrame(df_FH_nuba_348_groupH[df_FH_nuba_348_groupH.apply(lambda x: len(x) > 1)])  # keep only the hours that were cloudy on more than one distinct date
df_FH_nuba_350_groupH = df_FH_nuba_350.groupby('Horas')['Fechas'].unique()
df_nuba_350_groupH = pd.DataFrame(df_FH_nuba_350_groupH[df_FH_nuba_350_groupH.apply(lambda x: len(x)>1)])
df_FH_nuba_975_groupH = df_FH_nuba_975.groupby('Horas')['Fechas'].unique()
df_nuba_975_groupH = pd.DataFrame(df_FH_nuba_975_groupH[df_FH_nuba_975_groupH.apply(lambda x: len(x)>1)])
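# The groupby('Horas')['Fechas'].unique() step collects, for each hour of day,
# the array of distinct dates on which that hour met the condition; the
# apply(len) filter then keeps only hours seen on more than one date. A toy
# illustration (the data below is made up for the example):

```python
import datetime as dt
import pandas as pd

# One row per (date, hour) flagged as cloudy
df = pd.DataFrame({
    'Horas':  [6, 6, 7, 7, 7, 8],
    'Fechas': [dt.date(2018, 1, 1), dt.date(2018, 1, 1),
               dt.date(2018, 1, 1), dt.date(2018, 1, 2),
               dt.date(2018, 1, 2), dt.date(2018, 1, 3)],
})

# For each hour, the array of distinct dates on which that hour was cloudy
por_hora = df.groupby('Horas')['Fechas'].unique()
# Hours that were cloudy on more than one distinct date
repetidas = por_hora[por_hora.apply(len) > 1]
```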
c = np.arange(6, 18, 1)
Sk_Nuba_stat_975 = {}
Sk_Nuba_pvalue_975 = {}
Composites_Nuba_975 = {}
# 'composite' replaces the original name 'list' (which shadowed the builtin);
# DataFrame.append was removed in pandas 2.0, so rows are stacked with pd.concat.
for i in df_FH_nuba_975_groupH.index:
    H = str(i)
    if len(df_FH_nuba_975_groupH.loc[i]) == 1:
        composite = df_P975_h[df_P975_h.index.date == df_FH_nuba_975_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_nuba_975_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_nuba_975_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P975_h[df_P975_h.index.date == df_FH_nuba_975_groupH.loc[i][j]]['radiacion'])])
        stat_975 = []
        pvalue_975 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P975_h['radiacion'][df_P975_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_975.append(SK[0])
                pvalue_975.append(SK[1])
            except ValueError:
                stat_975.append(np.nan)
                pvalue_975.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_975
        list_sk_pvalue = pvalue_975
    Composites_Nuba_975[H] = composite
    Sk_Nuba_stat_975[H] = list_sk_stat
    Sk_Nuba_pvalue_975[H] = list_sk_pvalue
    del H
Comp_Nuba_975_df = pd.DataFrame(Composites_Nuba_975, index = c)
#Comp_Nuba_975_df = pd.DataFrame.from_dict(Composites_Nuba_975,orient='index').transpose()
Sk_Nuba_stat_975_df = pd.DataFrame(Sk_Nuba_stat_975, index = c)
Sk_Nuba_pvalue_975_df = pd.DataFrame(Sk_Nuba_pvalue_975, index = c)
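# Each composite loop compares the hourly radiation sample of the selected days
# against the full record with the two-sample Kolmogorov-Smirnov test;
# ks_2samp returns (statistic, p-value). A self-contained sketch on synthetic
# data (the distributions here are illustrative only):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 500)   # stands in for the full record at hour k
same = rng.normal(0.0, 1.0, 500)       # sample drawn from the same population
shifted = rng.normal(2.0, 1.0, 500)    # sample drawn from a shifted population

stat_same, p_same = ks_2samp(baseline, same)
stat_diff, p_diff = ks_2samp(baseline, shifted)
```

# A large statistic and tiny p-value (the `shifted` case) reject the hypothesis
# that both samples come from the same distribution.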
Sk_Nuba_stat_350 = {}
Sk_Nuba_pvalue_350 = {}
Composites_Nuba_350 = {}
for i in df_FH_nuba_350_groupH.index:
    H = str(i)
    if len(df_FH_nuba_350_groupH.loc[i]) == 1:
        composite = df_P350_h[df_P350_h.index.date == df_FH_nuba_350_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_nuba_350_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_nuba_350_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P350_h[df_P350_h.index.date == df_FH_nuba_350_groupH.loc[i][j]]['radiacion'])])
        stat_350 = []
        pvalue_350 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P350_h['radiacion'][df_P350_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_350.append(SK[0])
                pvalue_350.append(SK[1])
            except ValueError:
                stat_350.append(np.nan)
                pvalue_350.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_350
        list_sk_pvalue = pvalue_350
    Composites_Nuba_350[H] = composite
    Sk_Nuba_stat_350[H] = list_sk_stat
    Sk_Nuba_pvalue_350[H] = list_sk_pvalue
    del H
Comp_Nuba_350_df = pd.DataFrame(Composites_Nuba_350, index = c)
Sk_Nuba_stat_350_df = pd.DataFrame(Sk_Nuba_stat_350, index = c)
Sk_Nuba_pvalue_350_df = pd.DataFrame(Sk_Nuba_pvalue_350, index = c)
Sk_Nuba_stat_348 = {}
Sk_Nuba_pvalue_348 = {}
Composites_Nuba_348 = {}
for i in df_FH_nuba_348_groupH.index:
    H = str(i)
    if len(df_FH_nuba_348_groupH.loc[i]) == 1:
        composite = df_P348_h[df_P348_h.index.date == df_FH_nuba_348_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_nuba_348_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_nuba_348_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P348_h[df_P348_h.index.date == df_FH_nuba_348_groupH.loc[i][j]]['radiacion'])])
        stat_348 = []
        pvalue_348 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P348_h['radiacion'][df_P348_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_348.append(SK[0])
                pvalue_348.append(SK[1])
            except ValueError:
                stat_348.append(np.nan)
                pvalue_348.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_348
        list_sk_pvalue = pvalue_348
    Composites_Nuba_348[H] = composite
    Sk_Nuba_stat_348[H] = list_sk_stat
    Sk_Nuba_pvalue_348[H] = list_sk_pvalue
    del H
Comp_Nuba_348_df = pd.DataFrame(Composites_Nuba_348, index = c)
Sk_Nuba_stat_348_df = pd.DataFrame(Sk_Nuba_stat_348, index = c)
Sk_Nuba_pvalue_348_df = pd.DataFrame(Sk_Nuba_pvalue_348, index = c)
##----------FIND THE RADIATION VALUES CORRESPONDING TO THE CLEAR-SKY HOURS----------##
df_FH_desp_348 = pd.DataFrame()
df_FH_desp_348['Fechas'] = Fecha_desp_348
df_FH_desp_348['Horas'] = Hora_desp_348
df_FH_desp_350 = pd.DataFrame()
df_FH_desp_350['Fechas'] = Fecha_desp_350
df_FH_desp_350['Horas'] = Hora_desp_350
df_FH_desp_975 = pd.DataFrame()
df_FH_desp_975['Fechas'] = Fecha_desp_975
df_FH_desp_975['Horas'] = Hora_desp_975
df_FH_desp_348_groupH = df_FH_desp_348.groupby('Horas')['Fechas'].unique()
df_desp_348_groupH = pd.DataFrame(df_FH_desp_348_groupH[df_FH_desp_348_groupH.apply(lambda x: len(x) > 1)])  # keep only the hours that were clear on more than one distinct date
df_FH_desp_350_groupH = df_FH_desp_350.groupby('Horas')['Fechas'].unique()
df_desp_350_groupH = pd.DataFrame(df_FH_desp_350_groupH[df_FH_desp_350_groupH.apply(lambda x: len(x)>1)])
df_FH_desp_975_groupH = df_FH_desp_975.groupby('Horas')['Fechas'].unique()
df_desp_975_groupH = pd.DataFrame(df_FH_desp_975_groupH[df_FH_desp_975_groupH.apply(lambda x: len(x)>1)])
Sk_Desp_stat_975 = {}
Sk_Desp_pvalue_975 = {}
Composites_Desp_975 = {}
for i in df_FH_desp_975_groupH.index:
    H = str(i)
    if len(df_FH_desp_975_groupH.loc[i]) == 1:
        composite = df_P975_h[df_P975_h.index.date == df_FH_desp_975_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_desp_975_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_desp_975_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P975_h[df_P975_h.index.date == df_FH_desp_975_groupH.loc[i][j]]['radiacion'])])
        stat_975 = []
        pvalue_975 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P975_h['radiacion'][df_P975_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_975.append(SK[0])
                pvalue_975.append(SK[1])
            except ValueError:
                stat_975.append(np.nan)
                pvalue_975.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_975
        list_sk_pvalue = pvalue_975
    Composites_Desp_975[H] = composite
    Sk_Desp_stat_975[H] = list_sk_stat
    Sk_Desp_pvalue_975[H] = list_sk_pvalue
    del H
Comp_Desp_975_df = pd.DataFrame(Composites_Desp_975, index = c)
Sk_Desp_stat_975_df = pd.DataFrame(Sk_Desp_stat_975, index = c)
Sk_Desp_pvalue_975_df = pd.DataFrame(Sk_Desp_pvalue_975, index = c)
Sk_Desp_stat_350 = {}
Sk_Desp_pvalue_350 = {}
Composites_Desp_350 = {}
for i in df_FH_desp_350_groupH.index:
    H = str(i)
    if len(df_FH_desp_350_groupH.loc[i]) == 1:
        composite = df_P350_h[df_P350_h.index.date == df_FH_desp_350_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_desp_350_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_desp_350_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P350_h[df_P350_h.index.date == df_FH_desp_350_groupH.loc[i][j]]['radiacion'])])
        stat_350 = []
        pvalue_350 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P350_h['radiacion'][df_P350_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_350.append(SK[0])
                pvalue_350.append(SK[1])
            except ValueError:
                stat_350.append(np.nan)
                pvalue_350.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_350
        list_sk_pvalue = pvalue_350
    Composites_Desp_350[H] = composite
    Sk_Desp_stat_350[H] = list_sk_stat
    Sk_Desp_pvalue_350[H] = list_sk_pvalue
    del H
Comp_Desp_350_df = pd.DataFrame(Composites_Desp_350, index = c)
Sk_Desp_stat_350_df = pd.DataFrame(Sk_Desp_stat_350, index = c)
Sk_Desp_pvalue_350_df = pd.DataFrame(Sk_Desp_pvalue_350, index = c)
Sk_Desp_stat_348 = {}
Sk_Desp_pvalue_348 = {}
Composites_Desp_348 = {}
for i in df_FH_desp_348_groupH.index:
    H = str(i)
    if len(df_FH_desp_348_groupH.loc[i]) == 1:
        composite = df_P348_h[df_P348_h.index.date == df_FH_desp_348_groupH.loc[i][0]]['radiacion'].values
        list_sk_stat = np.ones(12) * np.nan
        list_sk_pvalue = np.ones(12) * np.nan
    elif len(df_FH_desp_348_groupH.loc[i]) > 1:
        temporal = pd.DataFrame()
        for j in range(len(df_FH_desp_348_groupH.loc[i])):
            temporal = pd.concat([temporal, pd.DataFrame(df_P348_h[df_P348_h.index.date == df_FH_desp_348_groupH.loc[i][j]]['radiacion'])])
        stat_348 = []
        pvalue_348 = []
        for k in c:
            temporal_sk = temporal[temporal.index.hour == k].radiacion.values
            Rad_sk = df_P348_h['radiacion'][df_P348_h.index.hour == k].values
            try:
                SK = ks_2samp(temporal_sk, Rad_sk)
                stat_348.append(SK[0])
                pvalue_348.append(SK[1])
            except ValueError:
                stat_348.append(np.nan)
                pvalue_348.append(np.nan)
        temporal_CD = temporal.groupby(by=[temporal.index.hour]).mean()
        composite = temporal_CD['radiacion'].values
        list_sk_stat = stat_348
        list_sk_pvalue = pvalue_348
    Composites_Desp_348[H] = composite
    Sk_Desp_stat_348[H] = list_sk_stat
    Sk_Desp_pvalue_348[H] = list_sk_pvalue
    del H
Comp_Desp_348_df = pd.DataFrame(Composites_Desp_348, index = c)
Sk_Desp_stat_348_df = pd.DataFrame(Sk_Desp_stat_348, index = c)
Sk_Desp_pvalue_348_df = pd.DataFrame(Sk_Desp_pvalue_348, index = c)
##-------------------STANDARDIZE THE DATAFRAME SHAPES TO THE HOURS, CLEAR-SKY CASE----------------##
Comp_Desp_348_df = Comp_Desp_348_df[(Comp_Desp_348_df.index >= 6)&(Comp_Desp_348_df.index <18)]
Comp_Desp_350_df = Comp_Desp_350_df[(Comp_Desp_350_df.index >= 6)&(Comp_Desp_350_df.index <18)]
Comp_Desp_975_df = Comp_Desp_975_df[(Comp_Desp_975_df.index >= 6)&(Comp_Desp_975_df.index <18)]
s = [str(i) for i in Comp_Nuba_348_df.index.values]
ListNan = np.empty((1,len(Comp_Desp_348_df)))
ListNan [:] = np.nan
def convert(a_set):  # parameter renamed: 'set' shadowed the builtin
    return [*a_set]
a_Desp_348 = convert(set(s).difference(Comp_Desp_348_df.columns.values))
a_Desp_348.sort(key=int)
if len(a_Desp_348) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Desp_348]
    for i in range(len(a_Desp_348)):
        Comp_Desp_348_df.insert(loc=idx[i], column=a_Desp_348[i], value=ListNan[0])
    del idx
a_Desp_350 = convert(set(s).difference(Comp_Desp_350_df.columns.values))
a_Desp_350.sort(key=int)
if len(a_Desp_350) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Desp_350]
    for i in range(len(a_Desp_350)):
        Comp_Desp_350_df.insert(loc=idx[i], column=a_Desp_350[i], value=ListNan[0])
    del idx
a_Desp_975 = convert(set(s).difference(Comp_Desp_975_df.columns.values))
a_Desp_975.sort(key=int)
if len(a_Desp_975) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Desp_975]
    for i in range(len(a_Desp_975)):
        Comp_Desp_975_df.insert(loc=idx[i], column=a_Desp_975[i], value=ListNan[0])
    del idx
s = [str(i) for i in Comp_Desp_348_df.index.values]
Comp_Desp_348_df = Comp_Desp_348_df[s]
Comp_Desp_350_df = Comp_Desp_350_df[s]
Comp_Desp_975_df = Comp_Desp_975_df[s]
##-------------------STANDARDIZE THE DATAFRAME SHAPES TO THE HOURS, CLOUDY CASE----------------##
Comp_Nuba_348_df = Comp_Nuba_348_df[(Comp_Nuba_348_df.index >= 6)&(Comp_Nuba_348_df.index <18)]
Comp_Nuba_350_df = Comp_Nuba_350_df[(Comp_Nuba_350_df.index >= 6)&(Comp_Nuba_350_df.index <18)]
Comp_Nuba_975_df = Comp_Nuba_975_df[(Comp_Nuba_975_df.index >= 6)&(Comp_Nuba_975_df.index <18)]
s = [str(i) for i in Comp_Nuba_348_df.index.values]
ListNan = np.empty((1,len(Comp_Nuba_348_df)))
ListNan [:] = np.nan
a_Nuba_348 = convert(set(s).difference(Comp_Nuba_348_df.columns.values))
a_Nuba_348.sort(key=int)
if len(a_Nuba_348) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Nuba_348]
    for i in range(len(a_Nuba_348)):
        Comp_Nuba_348_df.insert(loc=idx[i], column=a_Nuba_348[i], value=ListNan[0])
    del idx
a_Nuba_350 = convert(set(s).difference(Comp_Nuba_350_df.columns.values))
a_Nuba_350.sort(key=int)
if len(a_Nuba_350) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Nuba_350]
    for i in range(len(a_Nuba_350)):
        Comp_Nuba_350_df.insert(loc=idx[i], column=a_Nuba_350[i], value=ListNan[0])
    del idx
a_Nuba_975 = convert(set(s).difference(Comp_Nuba_975_df.columns.values))
a_Nuba_975.sort(key=int)
if len(a_Nuba_975) > 0:
    idx = [i for i, x in enumerate(s) if x in a_Nuba_975]
    for i in range(len(a_Nuba_975)):
        Comp_Nuba_975_df.insert(loc=idx[i], column=a_Nuba_975[i], value=ListNan[0])
    del idx
Comp_Nuba_348_df = Comp_Nuba_348_df[s]
Comp_Nuba_350_df = Comp_Nuba_350_df[s]
Comp_Nuba_975_df = Comp_Nuba_975_df[s]
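# The column bookkeeping above (set difference, sort, insert NaN columns one by
# one) can be expressed with DataFrame.reindex, which adds missing columns as
# NaN and enforces the order in a single call. A sketch with hypothetical hour
# columns (not the script's real data):

```python
import pandas as pd

hours = [str(h) for h in range(6, 18)]                   # '6' .. '17'
comp = pd.DataFrame({'6': [1.0, 2.0], '8': [3.0, 4.0]})  # hour '7' never occurred

# Missing hour columns are created filled with NaN, in the requested order
comp = comp.reindex(columns=hours)
```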
##-------------------COUNT OF THE DAYS CLASSIFIED AS CLOUDY AND AS CLEAR----------------##
Cant_Days_Nuba_348 = []
for i in range(len(s)):
    try:
        Cant_Days_Nuba_348.append(len(df_FH_nuba_348_groupH[df_FH_nuba_348_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Nuba_348.append(0)
Cant_Days_Nuba_350 = []
for i in range(len(s)):
    try:
        Cant_Days_Nuba_350.append(len(df_FH_nuba_350_groupH[df_FH_nuba_350_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Nuba_350.append(0)
Cant_Days_Nuba_975 = []
for i in range(len(s)):
    try:
        Cant_Days_Nuba_975.append(len(df_FH_nuba_975_groupH[df_FH_nuba_975_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Nuba_975.append(0)
Cant_Days_Desp_348 = []
for i in range(len(s)):
    try:
        Cant_Days_Desp_348.append(len(df_FH_desp_348_groupH[df_FH_desp_348_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Desp_348.append(0)
Cant_Days_Desp_350 = []
for i in range(len(s)):
    try:
        Cant_Days_Desp_350.append(len(df_FH_desp_350_groupH[df_FH_desp_350_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Desp_350.append(0)
Cant_Days_Desp_975 = []
for i in range(len(s)):
    try:
        Cant_Days_Desp_975.append(len(df_FH_desp_975_groupH[df_FH_desp_975_groupH.index == int(s[i])].values[0]))
    except IndexError:
        Cant_Days_Desp_975.append(0)
##-------------------ADJUSTING THE DATAFRAMES OF THE STATISTICS AND THE P-VALUES----------------##
for i in range(len(c)):
    if str(c[i]) not in Sk_Desp_pvalue_975_df.columns:
        Sk_Desp_pvalue_975_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
    if str(c[i]) not in Sk_Desp_pvalue_350_df.columns:
        Sk_Desp_pvalue_350_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
    if str(c[i]) not in Sk_Desp_pvalue_348_df.columns:
        Sk_Desp_pvalue_348_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
    if str(c[i]) not in Sk_Nuba_pvalue_350_df.columns:
        Sk_Nuba_pvalue_350_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
    if str(c[i]) not in Sk_Nuba_pvalue_348_df.columns:
        Sk_Nuba_pvalue_348_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
    if str(c[i]) not in Sk_Nuba_pvalue_975_df.columns:
        Sk_Nuba_pvalue_975_df.insert(int(c[i] - 6), str(c[i]), np.ones(12) * np.nan)
Significancia = 0.05
for i in c:
    Sk_Desp_pvalue_348_df.loc[Sk_Desp_pvalue_348_df[str(i)] < Significancia, str(i)] = 100
    Sk_Desp_pvalue_350_df.loc[Sk_Desp_pvalue_350_df[str(i)] < Significancia, str(i)] = 100
    Sk_Desp_pvalue_975_df.loc[Sk_Desp_pvalue_975_df[str(i)] < Significancia, str(i)] = 100
    Sk_Nuba_pvalue_348_df.loc[Sk_Nuba_pvalue_348_df[str(i)] < Significancia, str(i)] = 100
    Sk_Nuba_pvalue_350_df.loc[Sk_Nuba_pvalue_350_df[str(i)] < Significancia, str(i)] = 100
    Sk_Nuba_pvalue_975_df.loc[Sk_Nuba_pvalue_975_df[str(i)] < Significancia, str(i)] = 100
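# Overwriting p-values below the significance level with the sentinel 100 can
# also be done without a per-column loop using DataFrame.mask. A toy sketch
# (the frame below is made up for the example):

```python
import pandas as pd

pvals = pd.DataFrame({'6': [0.01, 0.2], '7': [0.5, 0.001]}, index=[6, 7])
Significancia = 0.05

# Every cell where the condition holds is replaced by the sentinel value
flagged = pvals.mask(pvals < Significancia, 100)
```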
# DataFrame.get_value was removed in pandas 1.0; .at is the scalar accessor
row_Desp_348 = []
col_Desp_348 = []
for row in range(Sk_Desp_pvalue_348_df.shape[0]):
    for col in range(Sk_Desp_pvalue_348_df.shape[1]):
        if Sk_Desp_pvalue_348_df.at[row + 6, str(col + 6)] == 100:
            row_Desp_348.append(row)
            col_Desp_348.append(col)
row_Desp_350 = []
col_Desp_350 = []
for row in range(Sk_Desp_pvalue_350_df.shape[0]):
    for col in range(Sk_Desp_pvalue_350_df.shape[1]):
        if Sk_Desp_pvalue_350_df.at[row + 6, str(col + 6)] == 100:
            row_Desp_350.append(row)
            col_Desp_350.append(col)
row_Desp_975 = []
col_Desp_975 = []
for row in range(Sk_Desp_pvalue_975_df.shape[0]):
    for col in range(Sk_Desp_pvalue_975_df.shape[1]):
        if Sk_Desp_pvalue_975_df.at[row + 6, str(col + 6)] == 100:
            row_Desp_975.append(row)
            col_Desp_975.append(col)
row_Nuba_348 = []
col_Nuba_348 = []
for row in range(Sk_Nuba_pvalue_348_df.shape[0]):
    for col in range(Sk_Nuba_pvalue_348_df.shape[1]):
        if Sk_Nuba_pvalue_348_df.at[row + 6, str(col + 6)] == 100:
            row_Nuba_348.append(row)
            col_Nuba_348.append(col)
row_Nuba_350 = []
col_Nuba_350 = []
for row in range(Sk_Nuba_pvalue_350_df.shape[0]):
    for col in range(Sk_Nuba_pvalue_350_df.shape[1]):
        if Sk_Nuba_pvalue_350_df.at[row + 6, str(col + 6)] == 100:
            row_Nuba_350.append(row)
            col_Nuba_350.append(col)
row_Nuba_975 = []
col_Nuba_975 = []
for row in range(Sk_Nuba_pvalue_975_df.shape[0]):
    for col in range(Sk_Nuba_pvalue_975_df.shape[1]):
        if Sk_Nuba_pvalue_975_df.at[row + 6, str(col + 6)] == 100:
            row_Nuba_975.append(row)
            col_Nuba_975.append(col)
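# The six nested scans above collect the positional coordinates of the sentinel
# cells; np.where returns the same row/column index arrays in a single call.
# A sketch on a toy flagged frame (values are illustrative only):

```python
import numpy as np
import pandas as pd

flagged = pd.DataFrame([[100.0, 0.3], [0.7, 100.0]], index=[6, 7], columns=['6', '7'])

# Positional (row, col) coordinates of every flagged cell
rows, cols = np.where(flagged.values == 100)
```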
##-------------------PLOT OF THE CLOUDY RADIATION COMPOSITE FOR EACH SITE AND THE NUMBER OF DAYS----------------##
s_f = [int(s[i]) for i in range(len(s))]
plt.close("all")
fig = plt.figure(figsize=(10., 8.),facecolor='w',edgecolor='w')
ax1=fig.add_subplot(2,3,1)
mapa = ax1.imshow(Comp_Nuba_348_df, interpolation = 'none', cmap = 'Spectral_r')
ax1.set_yticks(range(0,12), minor=False)
ax1.set_yticklabels(s, minor=False)
ax1.set_xticks(range(0,12), minor=False)
ax1.set_xticklabels(s, minor=False, rotation = 20)
ax1.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax1.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax1.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)  # linewidth must be numeric, not the string '1.'
ax1.set_title(' x = Horas nubadas en JV', loc = 'center', fontsize=9)
ax2=fig.add_subplot(2,3,2)
mapa = ax2.imshow(Comp_Nuba_350_df, interpolation = 'none', cmap = 'Spectral_r')
ax2.set_yticks(range(0,12), minor=False)
ax2.set_yticklabels(s, minor=False)
ax2.set_xticks(range(0,12), minor=False)
ax2.set_xticklabels(s, minor=False, rotation = 20)
ax2.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax2.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax2.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)
ax2.set_title(' x = Horas nubadas en CI', loc = 'center', fontsize=9)
ax3 = fig.add_subplot(2,3,3)
mapa = ax3.imshow(Comp_Nuba_975_df, interpolation = 'none', cmap = 'Spectral_r')
ax3.set_yticks(range(0,12), minor=False)
ax3.set_yticklabels(s, minor=False)
ax3.set_xticks(range(0,12), minor=False)
ax3.set_xticklabels(s, minor=False, rotation = 20)
ax3.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax3.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax3.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)
ax3.set_title(' x = Horas nubadas en TS', loc = 'center', fontsize=9)
cbar_ax = fig.add_axes([0.11, 0.93, 0.78, 0.008])
cbar = fig.colorbar(mapa, cax=cbar_ax, orientation='horizontal', format="%.2f")
cbar.set_label(u"Intensidad de la radiación $[W/m^{2}]$", fontsize=8, fontproperties=prop)
ax4 = fig.add_subplot(2,3,4)
ax4.spines['top'].set_visible(False)
ax4.spines['right'].set_visible(False)
ax4.bar(np.arange(len(s)), Cant_Days_Nuba_348, color='orange', align='center', alpha=0.5)
ax4.set_xlabel(u'Hora', fontproperties = prop_1)
ax4.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax4.set_xticks(range(0,12), minor=False)
ax4.set_xticklabels(s, minor=False, rotation = 20)
ax4.set_title(u' Cantidad de días en JV', loc = 'center', fontsize=9)
ax5 = fig.add_subplot(2,3,5)
ax5.spines['top'].set_visible(False)
ax5.spines['right'].set_visible(False)
ax5.bar(np.arange(len(s)), Cant_Days_Nuba_350, color='orange', align='center', alpha=0.5)
ax5.set_xlabel(u'Hora', fontproperties = prop_1)
ax5.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax5.set_xticks(range(0,12), minor=False)
ax5.set_xticklabels(s, minor=False, rotation = 20)
ax5.set_title(u' Cantidad de días en CI', loc = 'center', fontsize=9)
ax6 = fig.add_subplot(2,3,6)
ax6.spines['top'].set_visible(False)
ax6.spines['right'].set_visible(False)
ax6.bar(np.arange(len(s)), Cant_Days_Nuba_975, color='orange', align='center', alpha=0.5)
ax6.set_xlabel(u'Hora', fontproperties = prop_1)
ax6.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax6.set_xticks(range(0,12), minor=False)
ax6.set_xticklabels(s, minor=False, rotation = 20)
ax6.set_title(u' Cantidad de días en TS', loc = 'center', fontsize=9)
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/Composites_Nuba_Cant_Dias2018.png')
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/Composites_Nuba_Cant_Dias2018.png nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
##-------------------PLOT OF THE CLEAR-SKY RADIATION COMPOSITE FOR EACH SITE AND THE NUMBER OF DAYS----------------##
plt.close("all")
fig = plt.figure(figsize=(10., 8.),facecolor='w',edgecolor='w')
ax1=fig.add_subplot(2,3,1)
mapa = ax1.imshow(Comp_Desp_348_df, interpolation = 'none', cmap = 'Spectral_r')
ax1.set_yticks(range(0,12), minor=False)
ax1.set_yticklabels(s, minor=False)
ax1.set_xticks(range(0,12), minor=False)
ax1.set_xticklabels(s, minor=False, rotation = 20)
ax1.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax1.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax1.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)
ax1.set_title(' x = Horas despejadas en JV', loc = 'center', fontsize=9)
ax2=fig.add_subplot(2,3,2)
mapa = ax2.imshow(Comp_Desp_350_df, interpolation = 'none', cmap = 'Spectral_r')
ax2.set_yticks(range(0,12), minor=False)
ax2.set_yticklabels(s, minor=False)
ax2.set_xticks(range(0,12), minor=False)
ax2.set_xticklabels(s, minor=False, rotation = 20)
ax2.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax2.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax2.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)
ax2.set_title(' x = Horas despejadas en CI', loc = 'center', fontsize=9)
ax3 = fig.add_subplot(2,3,3)
mapa = ax3.imshow(Comp_Desp_975_df, interpolation = 'none', cmap = 'Spectral_r')
ax3.set_yticks(range(0,12), minor=False)
ax3.set_yticklabels(s, minor=False)
ax3.set_xticks(range(0,12), minor=False)
ax3.set_xticklabels(s, minor=False, rotation = 20)
ax3.set_xlabel('Hora del caso', fontsize=10, fontproperties = prop_1)
ax3.set_ylabel('Hora en el CD de radiación', fontsize=10, fontproperties = prop_1)
ax3.scatter(range(0, 12), range(0, 12), marker='x', color='k', linewidth=1., s=30)
ax3.set_title(' x = Horas despejadas en TS', loc = 'center', fontsize=9)
cbar_ax = fig.add_axes([0.11, 0.93, 0.78, 0.008])
cbar = fig.colorbar(mapa, cax=cbar_ax, orientation='horizontal', format="%.2f")
cbar.set_label(u"Intensidad de la radiación $[W/m^{2}]$", fontsize=8, fontproperties=prop)
ax4 = fig.add_subplot(2,3,4)
ax4.spines['top'].set_visible(False)
ax4.spines['right'].set_visible(False)
ax4.bar(np.arange(len(s)), Cant_Days_Desp_348, color='orange', align='center', alpha=0.5)
ax4.set_xlabel(u'Hora', fontproperties = prop_1)
ax4.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax4.set_xticks(range(0,12), minor=False)
ax4.set_xticklabels(s, minor=False, rotation = 20)
ax4.set_title(u' Cantidad de días en JV', loc = 'center', fontsize=9)
ax5 = fig.add_subplot(2,3,5)
ax5.spines['top'].set_visible(False)
ax5.spines['right'].set_visible(False)
ax5.bar(np.arange(len(s)), Cant_Days_Desp_350, color='orange', align='center', alpha=0.5)
ax5.set_xlabel(u'Hora', fontproperties = prop_1)
ax5.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax5.set_xticks(range(0,12), minor=False)
ax5.set_xticklabels(s, minor=False, rotation = 20)
ax5.set_title(u' Cantidad de días en CI', loc = 'center', fontsize=9)
ax6 = fig.add_subplot(2,3,6)
ax6.spines['top'].set_visible(False)
ax6.spines['right'].set_visible(False)
ax6.bar(np.arange(len(s)), Cant_Days_Desp_975, color='orange', align='center', alpha=0.5)
ax6.set_xlabel(u'Hora', fontproperties = prop_1)
ax6.set_ylabel(r"Cantidad de días", fontproperties = prop_1)
ax6.set_xticks(range(0,12), minor=False)
ax6.set_xticklabels(s, minor=False, rotation = 20)
ax6.set_title(u' Cantidad de días en TS', loc = 'center', fontsize=9)
#plt.title(u'Composites caso nubado', fontproperties=prop, fontsize = 8)
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/Composites_Desp_Cant_Dias2018.png')
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/Composites_Desp_Cant_Dias2018.png nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
##--------------------------TOTAL DAYS OF RECORD AND FREQUENCY OF EACH CONDITION---------------------##
Total_dias_348 = len(Rad_df_348.groupby(pd.Grouper(freq="D")).mean())
Total_dias_350 = len(Rad_df_350.groupby(pd.Grouper(freq="D")).mean())
Total_dias_975 = len(Rad_df_975.groupby(pd.Grouper(freq="D")).mean())
Porc_Days_Desp_348 = (np.array(Cant_Days_Desp_348)/Total_dias_348)*100
Porc_Days_Desp_350 = (np.array(Cant_Days_Desp_350)/Total_dias_350)*100
Porc_Days_Desp_975 = (np.array(Cant_Days_Desp_975)/Total_dias_975)*100
Porc_Days_Nuba_348 = (np.array(Cant_Days_Nuba_348)/Total_dias_348)*100
Porc_Days_Nuba_350 = (np.array(Cant_Days_Nuba_350)/Total_dias_350)*100
Porc_Days_Nuba_975 = (np.array(Cant_Days_Nuba_975)/Total_dias_975)*100
print('Total de dias JV: ' + str(Total_dias_348))
print('Total de dias CI: ' + str(Total_dias_350))
print('Total de dias TS: ' + str(Total_dias_975))
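The day totals and percentages above follow a standard pandas pattern: group by calendar day to count the days on record, then scale the per-hour day counts to percentages. A minimal sketch with toy data (all names and values hypothetical, not the station records):

```python
import numpy as np
import pandas as pd

# Toy minute-resolution series covering exactly three calendar days.
idx = pd.date_range("2018-01-01", periods=3 * 24 * 60, freq="min")
rad = pd.Series(np.ones(len(idx)), index=idx)

# Same pattern as Total_dias_*: one group per calendar day.
total_days = len(rad.groupby(pd.Grouper(freq="D")).mean())

# Same pattern as Porc_Days_*: day counts per hour, scaled to percentages.
cant_days = np.array([3, 2, 1])  # days meeting the condition at three sample hours
porc_days = (cant_days / total_days) * 100
print(total_days, porc_days[0])  # 3 100.0
```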
##------------------- CD AS HOURLY BOX PLOTS FOR EACH POINT ----------------##
DF_348_horas = {}
for i in range(1, 13):
    A = Rad_df_348_h[Rad_df_348_h.index.hour == (i + 5)]['Radiacias']
    H = A.index.hour[0]
    print(H)
    DF_348_horas[H] = A.values
    del H, A
DF_348_horas = pd.DataFrame.from_dict(DF_348_horas, orient='index').transpose()
#DF_348_horas = pd.DataFrame(DF_348_horas)
DF_350_horas = {}
for i in range(1, 13):
    A = Rad_df_350_h[Rad_df_350_h.index.hour == (i + 5)]['Radiacias']
    H = A.index.hour[0]
    print(H)
    DF_350_horas[H] = A
    del H, A
#DF_350_horas = pd.DataFrame(DF_350_horas)
DF_350_horas = pd.DataFrame.from_dict(DF_350_horas,orient='index').transpose()
DF_975_horas = {}
for i in range(1, 13):
    A = Rad_df_975_h[Rad_df_975_h.index.hour == (i + 5)]['Radiacias']
    H = A.index.hour[0]
    print(H)
    DF_975_horas[H] = A
    del H, A
#DF_975_horas = pd.DataFrame(DF_975_horas)
DF_975_horas = pd.DataFrame.from_dict(DF_975_horas,orient='index').transpose()
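The three loops above all build a dict with one array of values per hour and then pivot it into a DataFrame; the same result can be obtained directly from a groupby. A sketch with toy data (hypothetical, mirroring the `Rad_df_*_h` frames):

```python
import numpy as np
import pandas as pd

# Toy hourly series; the script's Rad_df_*_h frames are analogous.
idx = pd.date_range("2018-01-01 06:00", periods=24, freq="h")
df = pd.DataFrame({"Radiacias": np.arange(24.0)}, index=idx)

# One column of values per hour of day, as in DF_*_horas.
by_hour = {h: g.values for h, g in df["Radiacias"].groupby(df.index.hour)}
df_horas = pd.DataFrame.from_dict(by_hour, orient="index").transpose()
print(df_horas.shape)  # (1, 24)
```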
fig = plt.figure(figsize=(12,5))
plt.rc('axes', edgecolor='gray')
ax1 = fig.add_subplot(1, 3, 1)
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
DF_348_horas.boxplot(grid=False)
#Umbrales_line1y = [ax1.axhline(y=xc, color='k', linestyle='--') for xc in Umbrales_348]
ax1.set_title(u'Distribución de FR por horas en JV', fontproperties=prop, fontsize = 8)
ax1.set_ylabel(u'Factor Reflectancia[%]', fontproperties=prop_1)
ax1.set_xlabel(u'Horas', fontproperties=prop_1)
ax2 = fig.add_subplot(1, 3, 2)
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
DF_350_horas.boxplot(grid=False)
#Umbrales_line2y = [ax2.axhline(y=xc, color='k', linestyle='--') for xc in Umbrales_350]
ax2.set_title(u'Distribución de FR por horas en CI', fontproperties=prop, fontsize = 8)
ax2.set_ylabel(u'Factor Reflectancia[%]', fontproperties=prop_1)
ax2.set_xlabel(u'Horas', fontproperties=prop_1)
ax3 = fig.add_subplot(1, 3, 3)
ax3.spines['top'].set_visible(False)
ax3.spines['right'].set_visible(False)
DF_975_horas.boxplot(grid=False)
#Umbrales_line3y = [ax3.axhline(y=xc, color='k', linestyle='--') for xc in Umbrales_975]
ax3.set_title(u'Distribución de FR por horas en TS', fontproperties=prop, fontsize = 8)
ax3.set_ylabel(u'Factor Reflectancia[%]', fontproperties=prop_1)
ax3.set_xlabel(u'Horas', fontproperties=prop_1)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/FRBoxPlotHora.png')
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/FRBoxPlotHora.png nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
##------------------- ANOMALIES OF THE RADIATION COMPOSITES ON CLOUDY AND CLEAR DAYS ----------------##
new_idx = np.arange(6, 18, 1)
df_CDRad_348 = df_P348_h.radiacion.groupby(by=[df_P348_h.index.hour]).mean()
df_CDRad_348 = df_CDRad_348.reindex(new_idx)
df_STDRad_348 = df_P348_h.radiacion.groupby(by=[df_P348_h.index.hour]).std()
df_STDRad_348 = df_STDRad_348.reindex(new_idx)
df_CDRad_350 = df_P350_h.radiacion.groupby(by=[df_P350_h.index.hour]).mean()
df_CDRad_350 = df_CDRad_350.reindex(new_idx)
df_STDRad_350 = df_P350_h.radiacion.groupby(by=[df_P350_h.index.hour]).std()
df_STDRad_350 = df_STDRad_350.reindex(new_idx)
df_CDRad_975 = df_P975_h.radiacion.groupby(by=[df_P975_h.index.hour]).mean()
df_CDRad_975 = df_CDRad_975.reindex(new_idx)
df_STDRad_975 = df_P975_h.radiacion.groupby(by=[df_P975_h.index.hour]).std()
df_STDRad_975 = df_STDRad_975.reindex(new_idx)
Comp_Desp_348_anomal = (Comp_Desp_348_df.sub(df_CDRad_348, axis='index'))
Comp_Desp_350_anomal = (Comp_Desp_350_df.sub(df_CDRad_350, axis='index'))
Comp_Desp_975_anomal = (Comp_Desp_975_df.sub(df_CDRad_975, axis='index'))
Comp_Nuba_348_anomal = (Comp_Nuba_348_df.sub(df_CDRad_348, axis='index'))
Comp_Nuba_350_anomal = (Comp_Nuba_350_df.sub(df_CDRad_350, axis='index'))
Comp_Nuba_975_anomal = (Comp_Nuba_975_df.sub(df_CDRad_975, axis='index'))
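The anomaly step subtracts the hourly climatology (a Series indexed by hour) from every column of the composite via `DataFrame.sub(..., axis='index')`. A toy check of that index alignment (hypothetical numbers):

```python
import pandas as pd

# Composite: rows indexed by hour of day, one column per condition hour.
comp = pd.DataFrame({"h6": [100.0, 200.0, 300.0]}, index=[6, 7, 8])
# Hourly-mean climatology, as in df_CDRad_*.
clim = pd.Series([90.0, 210.0, 250.0], index=[6, 7, 8])

# Row-wise subtraction keyed on the shared hour index.
anomal = comp.sub(clim, axis="index")
print(anomal["h6"].tolist())  # [10.0, -10.0, 50.0]
```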
##------------------- COLORBAR SETUP FOR THE ANOMALIES ----------------##
class MidpointNormalize(colors.Normalize):
    """
    Normalise the colorbar so that diverging bars work their way either side from a prescribed midpoint value,
    e.g. im = ax1.imshow(array, norm=MidpointNormalize(midpoint=0., vmin=-100, vmax=100))
    """
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        colors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        # Ignore masked values and edge cases to keep the example simple.
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return np.ma.masked_array(np.interp(value, x, y), np.isnan(value))
cmap = matplotlib.cm.RdBu_r
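The normalisation above is just linear interpolation through three anchors, vmin → 0, midpoint → 0.5, vmax → 1, so the chosen midpoint always lands on the neutral colour of the diverging map. The mapping can be checked without matplotlib (newer Matplotlib versions also ship `colors.TwoSlopeNorm` for the same purpose):

```python
import numpy as np

# vmin -> 0, midpoint -> 0.5, vmax -> 1, exactly as in MidpointNormalize.__call__.
vmin, midpoint, vmax = -100.0, 0.0, 300.0
vals = np.array([-100.0, -50.0, 0.0, 150.0, 300.0])
normed = np.interp(vals, [vmin, midpoint, vmax], [0, 0.5, 1])
print(normed.tolist())  # [0.0, 0.25, 0.5, 0.75, 1.0]
```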
##------------------- PLOT OF THE CLEAR-SKY RADIATION COMPOSITE ANOMALIES FOR EACH POINT ----------------##
"""
Se deben verificar los valores de elev_min y elev_max de acuerdo al set de datos de las anomalías
"""
elev_min = min(np.nanmin(Comp_Desp_348_anomal.values) , np.nanmin(Comp_Desp_350_anomal.values), np.nanmin(Comp_Desp_975_anomal.values))-1
elev_max = max(np.nanmax(Comp_Desp_348_anomal.values) , np.nanmax(Comp_Desp_350_anomal.values), np.nanmax(Comp_Desp_975_anomal.values))+1
mid_val = 0
plt.close("all")
fig = plt.figure(figsize=(10., 8.),facecolor='w',edgecolor='w')
ax1=fig.add_subplot(2,3,1)
mapa = ax1.imshow(Comp_Desp_350_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax1.contour(Sk_Desp_pvalue_350_df, levels=[99, 101], linewidths=0.5)
#ax1.clabel(cont, fmt = '%.2f', colors = 'k', fontsize=8)
ax1.set_yticks(range(0,12), minor=False)
ax1.set_yticklabels(s, minor=False)
ax1.set_xticks(range(0,12), minor=False)
ax1.set_xticklabels(s, minor=False, rotation = 20)
ax1.set_xlabel('Horas despejadas', fontsize=10, fontproperties = prop_1)
ax1.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax1.scatter(col_Desp_350, row_Desp_350, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax1.set_title(' x = Horas despejadas en el Oeste', loc = 'center', fontsize=9)
ax2=fig.add_subplot(2,3,2)
mapa = ax2.imshow(Comp_Desp_975_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax2.contour(Sk_Desp_pvalue_975_df, levels=[99, 101], linewidths=0.5)
#ax2.clabel(cont, fmt = '%.2f', colors = 'k', fontsize=8)
ax2.set_yticks(range(0,12), minor=False)
ax2.set_yticklabels(s, minor=False)
ax2.set_xticks(range(0,12), minor=False)
ax2.set_xticklabels(s, minor=False, rotation = 20)
ax2.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax2.set_xlabel('Horas despejadas', fontsize=10, fontproperties = prop_1)
ax2.scatter(col_Desp_975, row_Desp_975, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax2.set_title(' x = Horas despejadas en el Centro-Oeste', loc = 'center', fontsize=9)
ax3 = fig.add_subplot(2,3,3)
mapa = ax3.imshow(Comp_Desp_348_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax3.contour(Sk_Desp_pvalue_348_df, levels=[99, 101], linewidths=0.5)
#ax3.clabel(cont, fmt = '%.2f', colors = 'k', fontsize=8)
ax3.set_yticks(range(0,12), minor=False)
ax3.set_yticklabels(s, minor=False)
ax3.set_xticks(range(0,12), minor=False)
ax3.set_xticklabels(s, minor=False, rotation = 20)
ax3.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax3.set_xlabel('Horas despejadas', fontsize=10, fontproperties = prop_1)
ax3.scatter(col_Desp_348, row_Desp_348, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax3.set_title(' x = Horas despejadas en el Este', loc = 'center', fontsize=9)
#cbar_ax = fig.add_axes([0.11, 0.28, 0.78, 0.008])
cbar_ax = fig.add_axes([0.11, 0.93, 0.78, 0.008])
cbar = fig.colorbar(mapa, cax=cbar_ax, orientation='horizontal', format="%.2f")
cbar.set_label(u"Anomalías de la radiación en el caso despejado $[W/m^{2}]$", fontsize=8, fontproperties=prop)
ax4 = fig.add_subplot(2,3,4)
ax4.spines['top'].set_visible(False)
ax4.spines['right'].set_visible(False)
#ax4.bar(np.arange(len(s)), Cant_Days_Desp_348, color='orange', align='center', alpha=0.5)
ax4.plot(s, Porc_Days_Desp_350, color = '#8ABB73', lw=1.5)
ax4.scatter(s, Porc_Days_Desp_350, marker='.', color = '#8ABB73', s=30)
ax4.set_xlabel(u'Horas despejadas', fontproperties = prop_1)
ax4.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax4.set_xticks(range(0, 12), minor=False)
ax4.set_xticklabels(s, minor=False, rotation = 20)
ax4.set_ylim(0, 100)
ax4.set_title(u'Frecuencia de días en el Oeste', loc = 'center', fontsize=9)
ax5 = fig.add_subplot(2,3,5)
ax5.spines['top'].set_visible(False)
ax5.spines['right'].set_visible(False)
#ax5.bar(np.arange(len(s)), Cant_Days_Desp_350, color='orange', align='center', alpha=0.5)
ax5.set_xlabel(u'Horas despejadas', fontproperties = prop_1)
ax5.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax5.plot(s, Porc_Days_Desp_975, color = '#8ABB73', lw=1.5)
ax5.scatter(s, Porc_Days_Desp_975, marker='.', color = '#8ABB73', s=30)
ax5.set_xticks(range(0, 12), minor=False)
ax5.set_xticklabels(s, minor=False, rotation = 20)
ax5.set_ylim(0, 100)
ax5.set_title(u'Frecuencia de días en el Centro-Oeste', loc = 'center', fontsize=9)
ax6 = fig.add_subplot(2,3,6)
ax6.spines['top'].set_visible(False)
ax6.spines['right'].set_visible(False)
#ax6.bar(np.arange(len(s)), Cant_Days_Desp_975, color='orange', align='center', alpha=0.5)
ax6.plot(s, Porc_Days_Desp_348, color = '#8ABB73', lw=1.5)
ax6.scatter(s, Porc_Days_Desp_348, marker='.', color = '#8ABB73', s=30)
ax6.set_xlabel(u'Horas despejadas', fontproperties = prop_1)
ax6.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax6.set_xticks(range(0, 12), minor=False)
ax6.set_xticklabels(s, minor=False, rotation = 20)
ax6.set_ylim(0, 100)
ax6.set_title(u'Frecuencia de días en el Este', loc = 'center', fontsize=9)
plt.subplots_adjust(wspace=0.3, hspace = 0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/AnomalComposites_Desp_Cant_Dias.pdf', format='pdf', transparent=True)
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/AnomalComposites_Desp_Cant_Dias.pdf nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
##------------------- PLOT OF THE CLOUDY RADIATION COMPOSITE ANOMALIES FOR EACH POINT ----------------##
"""
Se deben verificar los valores de elev_min y elev_max de acuerdo al set de datos de las anomalías
"""
elev_min = min(np.nanmin(Comp_Nuba_348_anomal.values) , np.nanmin(Comp_Nuba_350_anomal.values), np.nanmin(Comp_Nuba_975_anomal.values))-1
elev_max = max(np.nanmax(Comp_Nuba_348_anomal.values) , np.nanmax(Comp_Nuba_350_anomal.values), np.nanmax(Comp_Nuba_975_anomal.values))+1
mid_val = 0
plt.close("all")
fig = plt.figure(figsize=(10., 8.),facecolor='w',edgecolor='w')
ax1=fig.add_subplot(2,3,1)
mapa = ax1.imshow(Comp_Nuba_350_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax1.contour(Sk_Nuba_pvalue_350_df, levels=[99, 101], linewidths=0.5)
ax1.set_yticks(range(0,12), minor=False)
ax1.set_yticklabels(s, minor=False)
ax1.set_xticks(range(0,12), minor=False)
ax1.set_xticklabels(s, minor=False, rotation = 20)
ax1.set_xlabel('Horas nubladas', fontsize=10, fontproperties = prop_1)
ax1.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax1.scatter(col_Nuba_350, row_Nuba_350, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax1.set_title('x = Horas nubladas en el Oeste', loc = 'center', fontsize=9)
ax2=fig.add_subplot(2,3,2)
mapa = ax2.imshow(Comp_Nuba_975_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax2.contour(Sk_Nuba_pvalue_975_df, levels=[99, 101], linewidths=0.5)
#ax2.clabel(cont, fmt = '%.2f', colors = 'k', fontsize=8)
ax2.set_yticks(range(0,12), minor=False)
ax2.set_yticklabels(s, minor=False)
ax2.set_xticks(range(0,12), minor=False)
ax2.set_xticklabels(s, minor=False, rotation = 20)
ax2.set_xlabel('Horas nubladas', fontsize=10, fontproperties = prop_1)
ax2.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax2.scatter(col_Nuba_975, row_Nuba_975, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax2.set_title('x = Horas nubladas en el Centro-Oeste', loc = 'center', fontsize=9)
ax3 = fig.add_subplot(2,3,3)
mapa = ax3.imshow(Comp_Nuba_348_anomal, interpolation = 'none', cmap=cmap, clim=(elev_min, elev_max), norm=MidpointNormalize(midpoint=mid_val,vmin=elev_min, vmax=elev_max))
#cont = ax3.contour(Sk_Nuba_pvalue_348_df, levels=[99, 101], linewidths=0.5)
#ax3.clabel(cont, fmt = '%.2f', colors = 'k', fontsize=8)
ax3.set_yticks(range(0,12), minor=False)
ax3.set_yticklabels(s, minor=False)
ax3.set_xticks(range(0,12), minor=False)
ax3.set_xticklabels(s, minor=False, rotation = 20)
ax3.set_xlabel('Horas nubladas', fontsize=10, fontproperties = prop_1)
ax3.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax3.scatter(col_Nuba_348, row_Nuba_348, marker='o', facecolor='k', edgecolor='k', linewidths=1., s=30)
ax3.set_title('x = Horas nubladas en el Este', loc = 'center', fontsize=9)
#cbar_ax = fig.add_axes([0.11, 0.28, 0.78, 0.008])
cbar_ax = fig.add_axes([0.11, 0.93, 0.78, 0.008])
cbar = fig.colorbar(mapa, cax=cbar_ax, orientation='horizontal', format="%.2f")
cbar.set_label(u"Anomalías de radiación en el caso nublado $[W/m^{2}]$", fontsize=8, fontproperties=prop)
ax4 = fig.add_subplot(2,3,4)
ax4.spines['top'].set_visible(False)
ax4.spines['right'].set_visible(False)
#ax4.bar(np.arange(len(s)), Cant_Days_Nuba_348, color='orange', align='center', alpha=0.5)
ax4.plot(s, Porc_Days_Nuba_350, color = '#8ABB73', lw=1.5)
ax4.scatter(s, Porc_Days_Nuba_350, marker='.', color = '#8ABB73', s=30)
ax4.set_xlabel('Horas nubladas', fontproperties = prop_1)
ax4.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax4.set_xticks(range(0, 12), minor=False)
ax4.set_xticklabels(s, minor=False, rotation = 20)
ax4.set_ylim(0, 100)
ax4.set_title(u'Frecuencia de días en el Oeste', loc = 'center', fontsize=9)
ax5 = fig.add_subplot(2,3,5)
ax5.spines['top'].set_visible(False)
ax5.spines['right'].set_visible(False)
#ax5.bar(np.arange(len(s)), Cant_Days_Nuba_350, color='orange', align='center', alpha=0.5)
ax5.plot(s, Porc_Days_Nuba_975, color = '#8ABB73', lw=1.5)
ax5.scatter(s, Porc_Days_Nuba_975, marker='.', color = '#8ABB73', s=30)
ax5.set_xlabel('Horas nubladas', fontproperties = prop_1)
ax5.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax5.set_xticks(range(0, 12), minor=False)
ax5.set_xticklabels(s, minor=False, rotation = 20)
ax5.set_ylim(0, 100)
ax5.set_title(u'Frecuencia de días en el Centro-Oeste', loc = 'center', fontsize=9)
ax6 = fig.add_subplot(2,3,6)
ax6.spines['top'].set_visible(False)
ax6.spines['right'].set_visible(False)
#ax6.bar(np.arange(len(s)), Cant_Days_Nuba_975, color='orange', align='center', alpha=0.5)
ax6.plot(s, Porc_Days_Nuba_348, color = '#8ABB73', lw=1.5)
ax6.scatter(s, Porc_Days_Nuba_348, marker='.', color = '#8ABB73', s=30)
ax6.set_xlabel('Horas nubladas', fontproperties = prop_1)
ax6.set_ylabel(u"Frecuencia de días [%]", fontproperties = prop_1)
ax6.set_xticks(range(0, 12), minor=False)
ax6.set_xticklabels(s, minor=False, rotation = 20)
ax6.set_ylim(0, 100)
ax6.set_title(u'Frecuencia de días en el Este', loc = 'center', fontsize=9)
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/AnomalComposites_Nuba_Cant_Dias.pdf', format='pdf', transparent=True)
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/AnomalComposites_Nuba_Cant_Dias.pdf nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
## ------------------------- RANGE OF THE RADIATION ANOMALIES, TO SEE THE CLOUD EFFECT ON RADIATION ---------------------------- ##
Rango_348 = Comp_Desp_348_anomal-Comp_Nuba_348_anomal
Rango_350 = Comp_Desp_350_anomal-Comp_Nuba_350_anomal
Rango_975 = Comp_Desp_975_anomal-Comp_Nuba_975_anomal
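Because both anomaly fields subtract the same hourly climatology, the range above is identical to the direct difference of the clear and cloudy composites: the climatology cancels. A quick numerical check with toy values:

```python
import numpy as np

clim = np.array([200.0, 400.0])       # shared hourly climatology (toy)
comp_desp = np.array([260.0, 450.0])  # clear-sky composite (toy)
comp_nuba = np.array([80.0, 150.0])   # cloudy composite (toy)

rango_via_anomal = (comp_desp - clim) - (comp_nuba - clim)
rango_direct = comp_desp - comp_nuba
print(rango_via_anomal.tolist())  # [180.0, 300.0]
```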
range_min = min(np.nanmin(Rango_348.values) , np.nanmin(Rango_350.values), np.nanmin(Rango_975.values))-1
range_max = max(np.nanmax(Rango_348.values) , np.nanmax(Rango_350.values), np.nanmax(Rango_975.values))+1
mid_val = 0
plt.close("all")
fig = plt.figure(figsize=(10., 8.),facecolor='w',edgecolor='w')
ax1=fig.add_subplot(1,3,1)
mapa = ax1.imshow(Rango_350, interpolation = 'none', cmap=cmap, clim=(range_min, range_max), norm=MidpointNormalize(midpoint=mid_val,vmin=range_min, vmax=range_max))
ax1.set_yticks(range(0,12), minor=False)
ax1.set_yticklabels(s, minor=False)
ax1.set_xticks(range(0,12), minor=False)
ax1.set_xticklabels(s, minor=False, rotation = 20)
ax1.set_xlabel('Horas condicionadas', fontsize=10, fontproperties = prop_1)
ax1.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax1.set_title(u'Rango de anomalías en el Oeste', loc = 'center', fontsize=9)
ax2=fig.add_subplot(1,3,2)
mapa = ax2.imshow(Rango_975, interpolation = 'none', cmap=cmap, clim=(range_min, range_max), norm=MidpointNormalize(midpoint=mid_val,vmin=range_min, vmax=range_max))
ax2.set_yticks(range(0,12), minor=False)
ax2.set_yticklabels(s, minor=False)
ax2.set_xticks(range(0,12), minor=False)
ax2.set_xticklabels(s, minor=False, rotation = 20)
ax2.set_xlabel('Horas condicionadas', fontsize=10, fontproperties = prop_1)
ax2.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax2.set_title(u'Rango de anomalías en el Centro-Oeste', loc = 'center', fontsize=9)
ax3 = fig.add_subplot(1,3,3)
mapa = ax3.imshow(Rango_348, interpolation = 'none', cmap=cmap, clim=(range_min, range_max), norm=MidpointNormalize(midpoint=mid_val,vmin=range_min, vmax=range_max))
ax3.set_yticks(range(0,12), minor=False)
ax3.set_yticklabels(s, minor=False)
ax3.set_xticks(range(0,12), minor=False)
ax3.set_xticklabels(s, minor=False, rotation = 20)
ax3.set_xlabel('Horas condicionadas', fontsize=10, fontproperties = prop_1)
ax3.set_ylabel(u'Horas de radiación solar', fontsize=10, fontproperties = prop_1)
ax3.set_title(u'Rango de anomalías en el Este', loc = 'center', fontsize=9)
cbar_ax = fig.add_axes([0.11, 0.28, 0.78, 0.008])
cbar = fig.colorbar(mapa, cax=cbar_ax, orientation='horizontal', format="%.2f")
cbar.set_label(u"Anomaly range $[W/m^{2}]$", fontsize=8, fontproperties=prop)
plt.subplots_adjust(wspace=0.3)
plt.savefig('/home/nacorreasa/Escritorio/Figuras/Rango_Anomal_Composite.pdf', format='pdf', transparent=True)
plt.close('all')
os.system('scp /home/nacorreasa/Escritorio/Figuras/Rango_Anomal_Composite.pdf nacorreasa@192.168.1.74:/var/www/nacorreasa/Graficas_Resultados/Estudio')
# ---- File: frappe-bench/apps/erpnext/erpnext/accounts/doctype/purchase_taxes_and_charges_template/purchase_taxes_and_charges_template.py (repo: Semicheche/foa_frappe_docker, MIT) ----
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from erpnext.accounts.doctype.sales_taxes_and_charges_template.sales_taxes_and_charges_template \
import valdiate_taxes_and_charges_template
class PurchaseTaxesandChargesTemplate(Document):
    def validate(self):
        valdiate_taxes_and_charges_template(self)

    def autoname(self):
        if self.company and self.title:
            abbr = frappe.db.get_value('Company', self.company, 'abbr')
            self.name = '{0} - {1}'.format(self.title, abbr)
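autoname composes the document name as "&lt;title&gt; - &lt;company abbreviation&gt;". A standalone sketch of that rule, with the frappe.db lookup replaced by a plain argument (helper name and values hypothetical):

```python
def make_template_name(title, company_abbr):
    # Same format string as autoname above.
    return '{0} - {1}'.format(title, company_abbr)

print(make_template_name('VAT 5%', 'FT'))  # VAT 5% - FT
```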
# ---- File: task/urls.py (repo: bubaley/air-drf-relation, MIT) ----
from rest_framework import routers
from .views import TaskViewSet
router = routers.SimpleRouter()
router.register('tasks', TaskViewSet)
urlpatterns = router.urls
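With the default trailing-slash settings, a SimpleRouter registration like the one above conventionally yields a list route and a detail route; the basename (assumed here to be 'task') is derived from the viewset's queryset. A sketch of the expected patterns, written out by hand rather than generated by DRF:

```python
# Routes DRF's SimpleRouter conventionally generates for register('tasks', ...),
# assuming the derived basename is 'task'.
routes = {
    "task-list": "tasks/",
    "task-detail": "tasks/{pk}/",
}
print(routes["task-detail"])  # tasks/{pk}/
```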
# ---- File: docker_interface/plugins/__init__.py (repo: tillahoffmann/docker_interface, Apache-2.0) ----
# Copyright 2018 Spotify AB
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .base import Plugin, BasePlugin, HomeDirPlugin, SubstitutionPlugin, WorkspaceMountPlugin, \
ValidationPlugin, ExecutePlugin
from .user import UserPlugin
from .run import RunPlugin, RunConfigurationPlugin
from .build import BuildPlugin, BuildConfigurationPlugin
from .python import JupyterPlugin
from .google import GoogleCloudCredentialsPlugin, GoogleContainerRegistryPlugin
# ---- File: rl/agents/policy/random_policy_agent.py (repo: ManuelMeraz/ReinforcementLearning, MIT) ----
#!/usr/bin/env python3
import numpy
from rl.agents.policy.policy_agent import PolicyAgent
class Random(PolicyAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def act(self, state: numpy.ndarray, available_actions: numpy.ndarray):
        """
        Uses a uniform random distribution to determine its action given a state
        TODO: Act according to different distributions
        :param state: The state of the environment
        :param available_actions: A list of available possible actions (positions on the board to mark)
        :return: a random action
        """
        return numpy.random.choice(available_actions)
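act() simply draws one of the available actions uniformly at random; a dependency-free analogue of the same policy using only the standard library (helper name hypothetical):

```python
import random

def uniform_random_action(available_actions, rng=None):
    """Stdlib analogue of Random.act: pick one action uniformly at random."""
    rng = rng or random.Random()
    return rng.choice(available_actions)

actions = [0, 3, 7]
picks = {uniform_random_action(actions) for _ in range(200)}
print(picks.issubset(set(actions)))  # True
```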
# ---- File: apimux/__init__.py (repo: zeeguu-ecosystem/apimux, MIT) ----
import sys
from apimux.log import logger
logger.debug("==== API Multiplexer imported ====")
major = sys.version_info.major
minor = sys.version_info.minor
micro = sys.version_info.micro
logger.debug("Running Python version %s.%s.%s" % (major, minor, micro))
# ---- File: 1. Python/1. Getting Started with Python/5. print_variables.py (repo: theparitoshkumar/Data-Structures-Algorithms-using-python, Apache-2.0) ----
""" Write a program to display values of variables in Python. """
message = "Keep Smiling!"
print(message)
userNo = 101
print("User No is ", userNo)
gender = 'M'
print("Gender: ",gender)
# ---- File: faker_tax/de_DE/__init__.py (repo: mastacheata/faker_tax, MIT) ----
# -*- coding: utf-8 -*-
from .. import Provider as TaxProvider
class Provider(TaxProvider):
    """
    A Faker provider for the Empto project
    """
    def tax_id(self):
        """
        Returns a randomly generated Tax ID
        """
        return "DE" + str(self.bothify('#########'))

    def tax_number(self):
        """
        Generates a random tax number
        """
        return str(self.random_number(3, True)) + '/' + str(self.random_number(3, True)) + '/' + \
            str(self.random_number(3, True))
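tax_id relies on Faker's bothify, which replaces each '#' in the pattern with a random digit. A minimal stdlib stand-in showing the resulting shape (helper name hypothetical; the real bothify also handles '?' letter placeholders):

```python
import random
import re

def bothify_digits(pattern, rng=None):
    # Minimal stand-in for Faker's bothify: each '#' becomes a random digit.
    rng = rng or random.Random()
    return re.sub(r"#", lambda _: str(rng.randrange(10)), pattern)

tax_id = "DE" + bothify_digits("#########")
print(len(tax_id), tax_id.startswith("DE"))  # 11 True
```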
# ---- File: ringity/readwrite/diagram.py (repo: ClusterDuck123/ringity, MIT) ----
import numpy as np
from ringity.classes.diagram import PersistenceDiagram
def read_pdiagram(fname, **kwargs):
    """
    Wrapper for numpy.genfromtxt.
    """
    return PersistenceDiagram(np.genfromtxt(fname, **kwargs))


def write_pdiagram(dgm, fname, **kwargs):
    """
    Wrapper for numpy.savetxt.
    """
    array = np.array(dgm)
    np.savetxt(fname, array, **kwargs)
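Since both helpers are thin wrappers over numpy's text I/O, a round trip works with any file-like object. A self-contained check using an in-memory buffer (the PersistenceDiagram wrapper is skipped so the snippet has no ringity dependency):

```python
import io
import numpy as np

dgm = [(0.1, 0.5), (0.2, 0.9)]  # toy birth/death pairs
buf = io.StringIO()
np.savetxt(buf, np.array(dgm))  # what write_pdiagram does
buf.seek(0)
back = np.genfromtxt(buf)       # what read_pdiagram wraps
print(back.shape)  # (2, 2)
```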
# ---- File: vsr/common/helpers/validators.py (repo: queirozfcom/vector_space_retrieval, MIT) ----
def validate_positive_integer(param):
    if isinstance(param, int) and (param > 0):
        return None
    else:
        raise ValueError("Invalid value, expected positive integer, got {0}".format(param))
| 31.833333 | 86 | 0.73822 | 26 | 191 | 5.346154 | 0.769231 | 0.215827 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 0.136126 | 191 | 5 | 87 | 38.2 | 0.830303 | 0 | 0 | 0 | 0 | 0 | 0.256545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
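A quick usage sketch of the validator above (restated here with minor formatting cleanups, standard library only). One caveat worth noting: because `bool` is a subclass of `int` in Python, `True` also passes the check.

```python
def validate_positive_integer(param):
    if isinstance(param, int) and (param > 0):
        return None
    else:
        raise ValueError("Invalid value, expected positive integer, got {0}".format(param))

# Valid input: returns None silently.
assert validate_positive_integer(42) is None

# Invalid inputs raise ValueError with the offending value in the message.
for bad in (0, -3, 2.5, "7"):
    try:
        validate_positive_integer(bad)
    except ValueError as err:
        assert str(bad) in str(err)

# Caveat: bool subclasses int, so True slips through the check.
assert validate_positive_integer(True) is None
```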
da0619e0b3f90a7657f61daf019b596715684431 | 733 | py | Python | setup.py | mollinaca/slack_utils | 5e1a4bc0ce6026b2816bf14dc1f328c15415c1d7 | [
"MIT"
] | null | null | null | setup.py | mollinaca/slack_utils | 5e1a4bc0ce6026b2816bf14dc1f328c15415c1d7 | [
"MIT"
] | null | null | null | setup.py | mollinaca/slack_utils | 5e1a4bc0ce6026b2816bf14dc1f328c15415c1d7 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
with open('requirements.txt') as requirements_file:
install_requirements = requirements_file.read().splitlines()
setup(
name="slack_utils",
version="0.0.1",
description="my slack utils",
author="mollinaca",
packages=find_packages(),
install_requires=install_requirements,
entry_points={
"console_scripts": [
"slack_utils=slack_utils.__main__:main",
]
},
classifiers=[
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
]
)
| 30.541667 | 64 | 0.631651 | 76 | 733 | 5.881579 | 0.539474 | 0.212528 | 0.279642 | 0.290828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023173 | 0.234652 | 733 | 23 | 65 | 31.869565 | 0.773619 | 0 | 0 | 0 | 0 | 0 | 0.398363 | 0.050477 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
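The `slack_utils` setup.py above feeds `install_requires` by reading `requirements.txt` and splitting it into lines. A minimal, self-contained sketch of that pattern (the pinned package names below are hypothetical, chosen only to show the shape of the file):

```python
import os
import tempfile

# Hypothetical pinned requirements, mirroring what requirements.txt might hold.
contents = "requests==2.31.0\nnumpy>=1.24\n"

path = os.path.join(tempfile.mkdtemp(), "requirements.txt")
with open(path, "w") as fh:
    fh.write(contents)

# The same read-and-split pattern the setup.py above uses for install_requires.
with open(path) as requirements_file:
    install_requirements = requirements_file.read().splitlines()
```

`str.splitlines()` drops the trailing newline, so no empty final entry ends up in the requirements list.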
da1ad99b4e4020c41c2f1959c24a089b0e9d4360 | 2,097 | py | Python | setup.py | venkatachalamlab/lambda | 838f0acef7da9bd9fb3ab933db5b732e8221e86a | [
"MIT"
] | null | null | null | setup.py | venkatachalamlab/lambda | 838f0acef7da9bd9fb3ab933db5b732e8221e86a | [
"MIT"
] | 26 | 2021-09-22T01:07:32.000Z | 2022-01-07T17:06:32.000Z | setup.py | venkatachalamlab/lambda | 838f0acef7da9bd9fb3ab933db5b732e8221e86a | [
"MIT"
] | null | null | null | import setuptools
requirements = [
'docopt',
'numpy',
'pyzmq'
]
console_scripts = [
'lambda_client=lambda_scope.zmq.client:main',
'lambda_forwarder=lambda_scope.zmq.forwarder:main',
'lambda_hub=lambda_scope.devices.hub_relay:main',
'lambda_publisher=lambda_scope.zmq.publisher:main',
'lambda_server=lambda_scope.zmq.server:main',
'lambda_subscriber=lambda_scope.zmq.subscriber:main',
'lambda_logger=lambda_scope.devices.logger:main',
'lambda_dragonfly=lambda_scope.devices.dragonfly:main',
'lambda_acquisition_board=lambda_scope.devices.acquisition_board:main',
'lambda_displayer=lambda_scope.devices.displayer:main',
'lambda_data_hub=lambda_scope.devices.data_hub:main',
'lambda_stage_data_hub=lambda_scope.devices.stage_data_hub:main',
'lambda_writer=lambda_scope.devices.writer:main',
'lambda_processor=lambda_scope.devices.processor:main',
'lambda_commands=lambda_scope.devices.commands:main',
'lambda_zaber=lambda_scope.devices.zaber:main',
'lambda_stage_tracker=lambda_scope.devices.stage_tracker:main',
'lambda_valve_control=lambda_scope.devices.valve_control:main',
'lambda_microfluidic=lambda_scope.devices.microfluidic:main',
'lambda_app=lambda_scope.devices.app:main',
'lambda_stage=lambda_scope.system.stage:main',
'lambda=lambda_scope.system.lambda:main'
]
setuptools.setup(
name="lambda_scope",
version="0.0.1",
author="Mahdi Torkashvand",
author_email="mmt.mahdi@gmail.com",
description="Software to operate the customized imaging system, lambda.",
url="https://github.com/venkatachalamlab/lambda",
project_urls={
"Bug Tracker": "https://github.com/venkatachalamlab/lambda/issues",
},
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows :: Windows 10",
],
entry_points={
'console_scripts': console_scripts
},
install_requires=requirements,
packages=['lambda_scope'],
python_requires=">=3.6",
)
| 37.446429 | 77 | 0.731521 | 249 | 2,097 | 5.895582 | 0.333333 | 0.179837 | 0.183924 | 0.042916 | 0.083106 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004427 | 0.138293 | 2,097 | 55 | 78 | 38.127273 | 0.807969 | 0 | 0 | 0 | 0 | 0 | 0.708155 | 0.523128 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019231 | 0 | 0.019231 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
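Each `console_scripts` entry in the `lambda_scope` setup.py above follows setuptools' `name=package.module:function` convention: installing the package generates a command `name` that imports `package.module` and calls `function`. A small sketch of how one such spec decomposes, using the first entry from the list:

```python
# One entry-point spec from the console_scripts list above.
spec = "lambda_client=lambda_scope.zmq.client:main"

# Split "command=module:function" into its three parts.
name, _, target = spec.partition("=")
module, _, func = target.partition(":")

assert name == "lambda_client"
assert module == "lambda_scope.zmq.client"
assert func == "main"
```

At install time, setuptools writes a `lambda_client` executable that effectively runs `from lambda_scope.zmq.client import main; main()`.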
da1c62f51825a651ce542eb88803a19261cddbfe | 423 | py | Python | handlers/users/__init__.py | Asadbek07/e-commerce-bot | df6c1bb625becf95bf53f4cece12752dca9f7f67 | [
"Unlicense",
"MIT"
] | null | null | null | handlers/users/__init__.py | Asadbek07/e-commerce-bot | df6c1bb625becf95bf53f4cece12752dca9f7f67 | [
"Unlicense",
"MIT"
] | null | null | null | handlers/users/__init__.py | Asadbek07/e-commerce-bot | df6c1bb625becf95bf53f4cece12752dca9f7f67 | [
"Unlicense",
"MIT"
] | null | null | null | from . import start
from . import edits
from . import payment
from . import pickup
from . import get_location
from . import information_handler
from . import fikr_bildirish_handler
from . import savat_ortga
from . import biz_bilan_aloqa
from . import product_menu_handler
from . import menus_handlers
from . import product_add_for_admins
from . import product_delete_for_admins
from . import ordering
from . import r_phone
| 26.4375 | 39 | 0.822695 | 62 | 423 | 5.33871 | 0.451613 | 0.453172 | 0.154079 | 0.114804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141844 | 423 | 15 | 40 | 28.2 | 0.911846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |