hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fe85d8a95b8c5b2930e247504f6c68cbb22c2717 | 1,038 | py | Python | whisk_python_ch0_randomtrigger/__main__.py | timwaizenegger/e-ink-display-esp8266-mqtt-openwhisk | d964151bfdebb54a0e0edccb7f5cfbbffa49ba0f | [
"MIT"
] | 30 | 2018-02-16T21:10:34.000Z | 2021-11-15T21:06:33.000Z | whisk_python_ch0_randomtrigger/__main__.py | timwaizenegger/e-ink-display-esp8266-mqtt-openwhisk | d964151bfdebb54a0e0edccb7f5cfbbffa49ba0f | [
"MIT"
] | 1 | 2020-10-04T21:50:26.000Z | 2020-10-04T21:50:26.000Z | whisk_python_ch0_randomtrigger/__main__.py | timwaizenegger/e-ink-display-esp8266-mqtt-openwhisk | d964151bfdebb54a0e0edccb7f5cfbbffa49ba0f | [
"MIT"
] | 2 | 2020-10-04T21:51:30.000Z | 2021-05-31T14:53:30.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Tim Waizenegger (c) 2018,2019
Used as an open whisk action on IBM cloud
bx wsk action update displaychannel_ch0_trigger_random --kind python:3 --main main __main__.py
"""
import random
import requests
functions_apikey = "cloud functions CF API key"
endpoints_protocol = "https://"
endpoints = [
    "eu-de.functions.cloud.ibm.com/api/v1/namespaces/timw_dev/triggers/displaychannel_ch0_trigger_time",
    "eu-de.functions.cloud.ibm.com/api/v1/namespaces/timw_dev/triggers/displaychannel_ch0_trigger_bitcoin",
    "eu-de.functions.cloud.ibm.com/api/v1/namespaces/timw_dev/triggers/displaychannel_ch0_trigger_twitter",
    "eu-de.functions.cloud.ibm.com/api/v1/namespaces/timw_dev/triggers/displaychannel_ch0_trigger_christmas"
]


def main(args):
    endpoint = random.choice(endpoints)
    url = endpoints_protocol + functions_apikey + "@" + endpoint
    ret = requests.post(url)
    print("Made API call to ", url, " , RC is: ", ret)
    return {'message': 'main method called'}
#main(None)
| 28.054054 | 106 | 0.751445 | 148 | 1,038 | 5.087838 | 0.493243 | 0.112882 | 0.159363 | 0.095618 | 0.414343 | 0.414343 | 0.414343 | 0.414343 | 0.414343 | 0.414343 | 0 | 0.021858 | 0.118497 | 1,038 | 36 | 107 | 28.833333 | 0.801093 | 0.213873 | 0 | 0 | 0 | 0.25 | 0.602978 | 0.495037 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fe9594e1b983f4aeff7a0d0b1ef141900acd5a69 | 3,664 | py | Python | src/markdown2pango/__init__.py | mkdryden/markdown2pango | 547a76fd352c52b5f718f5166da5f59672383896 | [
"BSD-3-Clause"
] | null | null | null | src/markdown2pango/__init__.py | mkdryden/markdown2pango | 547a76fd352c52b5f718f5166da5f59672383896 | [
"BSD-3-Clause"
] | null | null | null | src/markdown2pango/__init__.py | mkdryden/markdown2pango | 547a76fd352c52b5f718f5166da5f59672383896 | [
"BSD-3-Clause"
] | 1 | 2021-05-28T21:44:27.000Z | 2021-05-28T21:44:27.000Z | from __future__ import (absolute_import, division, print_function,
unicode_literals)
import lxml.html
import re
import mistune
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
class PangoRenderer(mistune.Renderer):
    '''
    Pango Markdown renderer

    See also
    --------
    `markdown2pango()`
    '''
    def __init__(self, **kwargs):
        self.options = kwargs

    def block_code(self, code, lang=None):
        code = code.rstrip('\n')
        return '<tt>%s</tt>\n' % code

    def block_quote(self, text):
        return text

    def header(self, text, level, raw=None):
        if level >= 1 and level < 4:
            size = ('xx-large', 'x-large', 'large')[level - 1]
            return "<span size='%s' font_weight='bold'>%s</span>\n\n" % (size,
                                                                         text)
        return text + '\n\n'

    def hrule(self):
        return '\n%s\n' % (72 * '-')

    def paragraph(self, text):
        return '\n%s\n' % text.strip()

    def double_emphasis(self, text):
        return '<b>%s</b>' % text

    def emphasis(self, text):
        return '<i>%s</i>' % text

    def codespan(self, text):
        text = mistune.escape(text.rstrip(), smart_amp=False)
        return '<tt>%s</tt>' % text

    def linebreak(self):
        return '\n'

    def strikethrough(self, text):
        return '<s>%s</s>' % text

    def newline(self):
        """Rendering newline element."""
        return ''


def markdown2pango(markdown_text):
    '''
    Render Markdown-formatted text as Pango formatted text.

    Note
    ----
    Pango does not fully support _all_ markdown styles (e.g., lists). In most
    cases, some attempt has been made to render something sensible (e.g.,
    render unordered list items with leading ``-``, ordered list items with
    item number, etc.).

    Parameters
    ----------
    markdown_text : str
        Markdown-formatted text.

    Returns
    -------
    str
        `Pango markup <https://developer.gnome.org/pango/stable/PangoMarkupFormat.html>`_.
    '''
    def sub_list(match):
        '''
        Substitute root level HTML lists with Markdown list
        '''
        def extract_list_items(root, level=0):
            content = []
            for list_i in root.xpath('ul|ol'):
                for j, child_ij in enumerate(list_i.xpath('li')):
                    leader_ij = '-' if list_i.tag == 'ul' else '%d.' % (j + 1)
                    subcontent_ij = extract_list_items(child_ij,
                                                       level=level + 1)
                    child_ij.text = ' %s%s %s' % (' ' * level, leader_ij,
                                                  child_ij.text
                                                  if child_ij.text else '')
                    content += [(level, child_ij)]
                    content.extend(subcontent_ij)
                if root.tag != 'body':
                    root.remove(list_i)
                else:
                    list_i.drop_tag()
            return content

        root = lxml.html.fragment_fromstring(match.group())
        items = extract_list_items(root.xpath('/html/body')[0])
        output = ''
        for level, item in items:
            item_str = re.sub(r'^<li>(.*)</li>', r'\1',
                              lxml.html.tostring(item))
            output += item_str
        return output.rstrip('\n')

    m = mistune.Markdown(renderer=PangoRenderer())
    return re.sub(r'^<ul>.*</ul>', sub_list, m.render(markdown_text),
                  flags=re.DOTALL | re.MULTILINE)
| 29.312 | 90 | 0.518013 | 416 | 3,664 | 4.425481 | 0.365385 | 0.030418 | 0.038023 | 0.01195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005031 | 0.349072 | 3,664 | 124 | 91 | 29.548387 | 0.766876 | 0.170306 | 0 | 0 | 0 | 0 | 0.073464 | 0.010985 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217391 | false | 0 | 0.072464 | 0.101449 | 0.521739 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
feac0c8edfceabc4f4d3d2b1442ce4b6442267f5 | 3,780 | py | Python | aliyun-python-sdk-cms/aliyunsdkcms/request/v20190101/PutCustomMetricRuleRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 1,001 | 2015-07-24T01:32:41.000Z | 2022-03-25T01:28:18.000Z | aliyun-python-sdk-cms/aliyunsdkcms/request/v20190101/PutCustomMetricRuleRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 363 | 2015-10-20T03:15:00.000Z | 2022-03-08T12:26:19.000Z | aliyun-python-sdk-cms/aliyunsdkcms/request/v20190101/PutCustomMetricRuleRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 682 | 2015-09-22T07:19:02.000Z | 2022-03-22T09:51:46.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
class PutCustomMetricRuleRequest(RpcRequest):

    def __init__(self):
        RpcRequest.__init__(self, 'Cms', '2019-01-01', 'PutCustomMetricRule', 'cms')
        self.set_method('POST')

    def get_Webhook(self):
        return self.get_query_params().get('Webhook')

    def set_Webhook(self, Webhook):
        self.add_query_param('Webhook', Webhook)

    def get_RuleName(self):
        return self.get_query_params().get('RuleName')

    def set_RuleName(self, RuleName):
        self.add_query_param('RuleName', RuleName)

    def get_Threshold(self):
        return self.get_query_params().get('Threshold')

    def set_Threshold(self, Threshold):
        self.add_query_param('Threshold', Threshold)

    def get_EffectiveInterval(self):
        return self.get_query_params().get('EffectiveInterval')

    def set_EffectiveInterval(self, EffectiveInterval):
        self.add_query_param('EffectiveInterval', EffectiveInterval)

    def get_EmailSubject(self):
        return self.get_query_params().get('EmailSubject')

    def set_EmailSubject(self, EmailSubject):
        self.add_query_param('EmailSubject', EmailSubject)

    def get_EvaluationCount(self):
        return self.get_query_params().get('EvaluationCount')

    def set_EvaluationCount(self, EvaluationCount):
        self.add_query_param('EvaluationCount', EvaluationCount)

    def get_SilenceTime(self):
        return self.get_query_params().get('SilenceTime')

    def set_SilenceTime(self, SilenceTime):
        self.add_query_param('SilenceTime', SilenceTime)

    def get_MetricName(self):
        return self.get_query_params().get('MetricName')

    def set_MetricName(self, MetricName):
        self.add_query_param('MetricName', MetricName)

    def get_Period(self):
        return self.get_query_params().get('Period')

    def set_Period(self, Period):
        self.add_query_param('Period', Period)

    def get_ContactGroups(self):
        return self.get_query_params().get('ContactGroups')

    def set_ContactGroups(self, ContactGroups):
        self.add_query_param('ContactGroups', ContactGroups)

    def get_Level(self):
        return self.get_query_params().get('Level')

    def set_Level(self, Level):
        self.add_query_param('Level', Level)

    def get_GroupId(self):
        return self.get_query_params().get('GroupId')

    def set_GroupId(self, GroupId):
        self.add_query_param('GroupId', GroupId)

    def get_Resources(self):
        return self.get_query_params().get('Resources')

    def set_Resources(self, Resources):
        self.add_query_param('Resources', Resources)

    def get_RuleId(self):
        return self.get_query_params().get('RuleId')

    def set_RuleId(self, RuleId):
        self.add_query_param('RuleId', RuleId)

    def get_ComparisonOperator(self):
        return self.get_query_params().get('ComparisonOperator')

    def set_ComparisonOperator(self, ComparisonOperator):
        self.add_query_param('ComparisonOperator', ComparisonOperator)

    def get_Statistics(self):
        return self.get_query_params().get('Statistics')

    def set_Statistics(self, Statistics):
        self.add_query_param('Statistics', Statistics) | 30.983607 | 78 | 0.760317 | 496 | 3,780 | 5.582661 | 0.22379 | 0.03467 | 0.080896 | 0.09823 | 0.179126 | 0.179126 | 0.179126 | 0 | 0 | 0 | 0 | 0.003653 | 0.130952 | 3,780 | 122 | 79 | 30.983607 | 0.839269 | 0.199471 | 0 | 0 | 0 | 0 | 0.125386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.478261 | false | 0 | 0.014493 | 0.231884 | 0.73913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3
22834b58ceed661ae744755e2574a5136d052354 | 337 | py | Python | jd/api/rest/KplOpenRegularPlanCompletedorderRequest.py | fengjinqi/linjuanbang | 8cdc4e81df73ccd737ac547da7f2c7dca545862a | [
"MIT"
] | 5 | 2019-10-30T01:16:30.000Z | 2020-06-14T03:32:19.000Z | jd/api/rest/KplOpenRegularPlanCompletedorderRequest.py | fengjinqi/linjuanbang | 8cdc4e81df73ccd737ac547da7f2c7dca545862a | [
"MIT"
] | 2 | 2020-10-12T07:12:48.000Z | 2021-06-02T03:15:47.000Z | jd/api/rest/KplOpenRegularPlanCompletedorderRequest.py | fengjinqi/linjuanbang | 8cdc4e81df73ccd737ac547da7f2c7dca545862a | [
"MIT"
] | 3 | 2019-12-06T17:33:49.000Z | 2021-03-01T13:24:22.000Z | from jd.api.base import RestApi
class KplOpenRegularPlanCompletedorderRequest(RestApi):
    def __init__(self, domain='gw.api.360buy.com', port=80):
        RestApi.__init__(self, domain, port)
        self.venderId = None
        self.planId = None
        self.orderId = None

    def getapiname(self):
        return 'jd.kpl.open.regular.plan.completedorder'
| 18.722222 | 56 | 0.738872 | 43 | 337 | 5.604651 | 0.651163 | 0.06639 | 0.116183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017422 | 0.148368 | 337 | 17 | 57 | 19.823529 | 0.8223 | 0 | 0 | 0 | 0 | 0 | 0.169184 | 0.117825 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0.111111 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
229a0bc926578ae10eff4515af08212c5144abe2 | 267 | py | Python | scripts/slave/recipe_modules/bisect_tester/__init__.py | bopopescu/build | 4e95fd33456e552bfaf7d94f7d04b19273d1c534 | [
"BSD-3-Clause"
] | null | null | null | scripts/slave/recipe_modules/bisect_tester/__init__.py | bopopescu/build | 4e95fd33456e552bfaf7d94f7d04b19273d1c534 | [
"BSD-3-Clause"
] | null | null | null | scripts/slave/recipe_modules/bisect_tester/__init__.py | bopopescu/build | 4e95fd33456e552bfaf7d94f7d04b19273d1c534 | [
"BSD-3-Clause"
] | 1 | 2020-07-23T11:05:06.000Z | 2020-07-23T11:05:06.000Z | DEPS = [
'chromium',
'file',
'gsutil',
'recipe_engine/json',
'math_utils',
'recipe_engine/path',
'recipe_engine/platform',
'recipe_engine/properties',
'recipe_engine/python',
'recipe_engine/raw_io',
'recipe_engine/step',
]
| 19.071429 | 31 | 0.617978 | 28 | 267 | 5.571429 | 0.571429 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220974 | 267 | 13 | 32 | 20.538462 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.629213 | 0.172285 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
22d05e6147b038f95ac6c3a6cf8e40ad4c492310 | 56 | py | Python | Backend Web Application Development/config/Settings.py | amilaansari/SP-Assignments | dbf3c1c9be199406aad1f23274380b2aee673089 | [
"CNRI-Python"
] | null | null | null | Backend Web Application Development/config/Settings.py | amilaansari/SP-Assignments | dbf3c1c9be199406aad1f23274380b2aee673089 | [
"CNRI-Python"
] | null | null | null | Backend Web Application Development/config/Settings.py | amilaansari/SP-Assignments | dbf3c1c9be199406aad1f23274380b2aee673089 | [
"CNRI-Python"
] | null | null | null | class Settings:
secretKey="a12nc)238OmPq#cxOlm*a"
| 18.666667 | 38 | 0.714286 | 7 | 56 | 5.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0.160714 | 56 | 2 | 39 | 28 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0.388889 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
22ec4a5ebba3a1cca202b2468c0f4ced5573dba5 | 731 | py | Python | aleph/search/__init__.py | gazeti/aleph | f6714c4be038471cfdc6408bfe88dc9e2ed28452 | [
"MIT"
] | 1 | 2017-07-28T12:54:09.000Z | 2017-07-28T12:54:09.000Z | aleph/search/__init__.py | gazeti/aleph | f6714c4be038471cfdc6408bfe88dc9e2ed28452 | [
"MIT"
] | 7 | 2017-08-16T12:49:23.000Z | 2018-02-16T10:22:11.000Z | aleph/search/__init__.py | gazeti/aleph | f6714c4be038471cfdc6408bfe88dc9e2ed28452 | [
"MIT"
] | 6 | 2017-07-26T12:29:53.000Z | 2017-08-18T09:35:50.000Z | import logging
from aleph.index.mapping import TYPE_DOCUMENT, TYPE_RECORD # noqa
from aleph.search.query import QueryState # noqa
from aleph.search.documents import documents_query, documents_iter # noqa
from aleph.search.documents import entity_documents # noqa
from aleph.search.entities import entities_query # noqa
from aleph.search.entities import suggest_entities, similar_entities # noqa
from aleph.search.entities import load_entity # noqa
from aleph.search.links import links_query # noqa
from aleph.search.leads import leads_query, lead_count # noqa
from aleph.search.records import records_query, execute_records_query # noqa
from aleph.search.util import scan_iter # noqa
log = logging.getLogger(__name__)
| 45.6875 | 77 | 0.820793 | 104 | 731 | 5.576923 | 0.298077 | 0.17069 | 0.224138 | 0.327586 | 0.37931 | 0.287931 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121751 | 731 | 15 | 78 | 48.733333 | 0.903427 | 0.073871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.923077 | 0 | 0.923077 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
22f479d8dbee7eb6ba67509acb08bbf5df764797 | 1,505 | py | Python | BookStore/models/book_model.py | ki-yungkim/Python_1st_Mini | 1dadb9ec51b85dfef22164557b8340d4eab96d65 | [
"MIT"
] | null | null | null | BookStore/models/book_model.py | ki-yungkim/Python_1st_Mini | 1dadb9ec51b85dfef22164557b8340d4eab96d65 | [
"MIT"
] | null | null | null | BookStore/models/book_model.py | ki-yungkim/Python_1st_Mini | 1dadb9ec51b85dfef22164557b8340d4eab96d65 | [
"MIT"
] | null | null | null | import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from flask import session
from datetime import datetime
from models.mem_model import Member, PaperBook
db = SQLAlchemy()
migrate = Migrate()
class PaperBookService:
    # Add a book
    def addPaperBook(self, pb: PaperBook):
        db.session.add(pb)
        db.session.commit()

    # Retrieve all books
    def getPaperBookAll(self):
        return PaperBook.query.order_by(PaperBook.book_no.asc())

    # Book details
    def getPaperBookDetail(self, paper_book_no):
        return PaperBook.query.get(paper_book_no)

    # Search by book name
    def getPaperBookName(self, paper_book_name):
        return PaperBook.query.filter(PaperBook.paper_book_name.like('%' + paper_book_name + '%')).all()

    # Search by book author
    def getPaperBookPublisher(self, paper_book_publisher):
        return PaperBook.query.filter(PaperBook.paper_book_publisher.like('%' + paper_book_publisher + '%')).all()

    # Edit book info
    def editPaperBookInfo(self, paper_book_no, name, publisher, price, amount):
        book = self.getPaperBookDetail(paper_book_no)
        book.paper_book_name = name
        book.paper_book_publisher = publisher
        book.paper_book_price = price
        book.paper_book_amount = amount

    # Delete book info
    def deletePaperBook(self, paper_book_no):
        book = self.getPaperBookDetail(paper_book_no)
        db.session.delete(book)
        db.session.commit() | 32.021277 | 114 | 0.712957 | 197 | 1,505 | 5.238579 | 0.314721 | 0.139535 | 0.063953 | 0.043605 | 0.156977 | 0.156977 | 0.085271 | 0 | 0 | 0 | 0 | 0 | 0.190033 | 1,505 | 47 | 115 | 32.021277 | 0.846596 | 0.036545 | 0 | 0.125 | 0 | 0 | 0.002772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21875 | false | 0 | 0.21875 | 0.125 | 0.59375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3
22f73371f58112d97679bbd9398198ded82c14e8 | 4,353 | py | Python | datapreprocessing/crash_match.py | Andyzr/work_zone_safety | e653740e7a42f06536f64c199388fd40d85aaaae | [
"MIT"
] | null | null | null | datapreprocessing/crash_match.py | Andyzr/work_zone_safety | e653740e7a42f06536f64c199388fd40d85aaaae | [
"MIT"
] | null | null | null | datapreprocessing/crash_match.py | Andyzr/work_zone_safety | e653740e7a42f06536f64c199388fd40d85aaaae | [
"MIT"
] | null | null | null | # %%
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pickle
import gc
from sqlalchemy import create_engine
import sqlite3
from sqlalchemy import Column, Integer, String, ForeignKey, Float
from _datetime import time
# %%
# speed_engine = create_engine('sqlite:////media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/speed_db.db', echo=False)
# speed_engine_old = create_engine('sqlite:////media/andy/zhangzr/speed_db.db', echo=False)
# crash_engine = create_engine('sqlite:////media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/output/output_crash.db', echo=False)
# output location
output_crash_conn = sqlite3.connect(
    '/media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/output/output_crash_1117.db')
output_crash_c = output_crash_conn.cursor()
# source of wzs_output (wzID-loc-time): speed_db.wzsoutput
output_crash_c.execute('ATTACH DATABASE "/media/andy/zhangzr/speed_db.db" AS speed_db')
output_crash_conn.commit()
output_crash_c.execute('create table wzsoutput as select * from speed_db.wzsoutput;')
output_crash_conn.commit()
output_crash_c.execute('create index id_wzsoutput_id_time_loc on wzsoutput(wzID,wzTime_divided_stamp,location);')
output_crash_conn.commit()
# output_crash_c.execute('ATTACH DATABASE "/media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/wzs_output_db.db" AS wzs_output_db')
# output_crash_conn.commit()
# source of crash database crash_db.crash_table1107
output_crash_c.execute(
    'ATTACH DATABASE "../CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/crash_db.db" AS crash_db')
# source of wzloc wz_loc_db.wz_loc_518 or wz_loc_db.wz_loc_61
output_crash_c.execute(
    'ATTACH DATABASE "/media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/wz_loc.db" AS wz_loc_db')
output_crash_conn.commit()
# create crash table
# %%
# %%time
output_crash_c.execute("""
create table if not exists crash_xy_61 AS
SELECT temp_61.wzID,
temp_61.wzTime_divided_stamp,
temp_61.location,
COUNT(crash_db.crash_table1107.FATAL_OR_MAJ_INJ)>0 AS crash_61,
SUM(crash_db.crash_table1107.FATAL_OR_MAJ_INJ)>0 AS crash_severe_61
FROM
(select wzsoutput.wzID,wzsoutput.wzTime_divided_stamp,wzsoutput.location,
wz_loc_db.wz_loc_61.x as x,wz_loc_db.wz_loc_61.y as y ,
CAST(wzsoutput.wzTime_divided_stamp as INT) as wztimeint,CAST(wzsoutput.wzTime_divided_stamp as INT)+1800 as wztimeintend
FROM wzsoutput
LEFT JOIN wz_loc_db.wz_loc_61
ON wzsoutput.wzID == wz_loc_db.wz_loc_61.wzID AND
wzsoutput.location == wz_loc_db.wz_loc_61.location)temp_61
LEFT JOIN crash_db.crash_table1107
ON
temp_61.x = crash_db.crash_table1107.keplist_0x
AND
temp_61.y = crash_db.crash_table1107.keplist_0y
AND
crash_db.crash_table1107.Time_stamp BETWEEN temp_61.wztimeint AND temp_61.wztimeintend
GROUP BY temp_61.wzID,
temp_61.wzTime_divided_stamp,
temp_61.location """)
output_crash_conn.commit()
# %%
# %%time
output_crash_c.execute("""
create table if not exists crash_xy_518 AS
SELECT temp_518.wzID,
temp_518.wzTime_divided_stamp,
temp_518.location,
COUNT(crash_db.crash_table1107.FATAL_OR_MAJ_INJ)>0 AS crash_518,
SUM(crash_db.crash_table1107.FATAL_OR_MAJ_INJ)>0 AS crash_severe_518
FROM
(select wzsoutput.wzID,wzsoutput.wzTime_divided_stamp,wzsoutput.location,
wz_loc_db.wz_loc_518.x as x,wz_loc_db.wz_loc_518.y as y ,
CAST(wzsoutput.wzTime_divided_stamp as INT) as wztimeint,CAST(wzsoutput.wzTime_divided_stamp as INT)+1800 as wztimeintend
FROM wzsoutput
LEFT JOIN wz_loc_db.wz_loc_518
ON wzsoutput.wzID == wz_loc_db.wz_loc_518.wzID AND
wzsoutput.location == wz_loc_db.wz_loc_518.location)temp_518
LEFT JOIN crash_db.crash_table1107
ON
temp_518.x = crash_db.crash_table1107.keplist_0x
AND
temp_518.y = crash_db.crash_table1107.keplist_0y
AND
crash_db.crash_table1107.Time_stamp BETWEEN temp_518.wztimeint AND temp_518.wztimeintend
GROUP BY temp_518.wzID,
temp_518.wzTime_divided_stamp,
temp_518.location """)
output_crash_conn.commit()
| 37.852174 | 204 | 0.806111 | 721 | 4,353 | 4.520111 | 0.151179 | 0.03989 | 0.030071 | 0.083768 | 0.782449 | 0.728751 | 0.69101 | 0.69101 | 0.620743 | 0.5471 | 0 | 0.091467 | 0.10085 | 4,353 | 114 | 205 | 38.184211 | 0.741185 | 0.213646 | 0 | 0.4 | 0 | 0.053333 | 0.786553 | 0.456547 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.146667 | 0 | 0.146667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fe106a238946f463e637b920b7a043d1d7b312cf | 2,457 | py | Python | ccnpy/flic/Pointers.py | mmosko/ccnpy | 20d982e2e3845818fde7f3facdc8cbcdff323dbb | [
"Apache-2.0"
] | 1 | 2020-12-23T14:17:25.000Z | 2020-12-23T14:17:25.000Z | ccnpy/flic/Pointers.py | mmosko/ccnpy | 20d982e2e3845818fde7f3facdc8cbcdff323dbb | [
"Apache-2.0"
] | 1 | 2019-07-01T18:19:05.000Z | 2019-07-02T05:35:52.000Z | ccnpy/flic/Pointers.py | mmosko/ccnpy | 20d982e2e3845818fde7f3facdc8cbcdff323dbb | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Marc Mosko
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ccnpy
class Pointers(ccnpy.TlvType):
    """
    Encloses an array of ccnpy.HashValues.

    Note that len(Pointers) will return the TLV wire encoding length.

    You can access Pointers as an array:
        p = Pointers([hv1, hv2, hv3])
        for i in range(0, p.count()):
            hv = p[i]
            print(hv)

    Or you can iterate it:
        p = Pointers([hv1, hv2, hv3])
        for hv in p:
            print(hv)
    """
    __type = 0x0002

    @classmethod
    def class_type(cls):
        return cls.__type

    def __init__(self, hash_values):
        ccnpy.TlvType.__init__(self)
        if hash_values is None or not isinstance(hash_values, list):
            raise TypeError("hash_values must be a non-empty list of ccnpy.HashValue")
        self._hash_values = hash_values
        self._tlv = ccnpy.Tlv(self.class_type(), self._hash_values)

    def __len__(self):
        return len(self._hash_values)

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __repr__(self):
        return "Ptrs: %r" % self._hash_values

    def __getitem__(self, item):
        return self._hash_values[item]

    def __iter__(self):
        self._offset = 0
        return self

    def __next__(self):
        if self._offset == len(self):
            raise StopIteration
        output = self[self._offset]
        self._offset += 1
        return output

    @classmethod
    def parse(cls, tlv):
        if tlv.type() != cls.class_type():
            raise ValueError("Incorrect TLV type %r" % tlv.type())
        hash_values = []
        offset = 0
        while offset < tlv.length():
            hv = ccnpy.HashValue.deserialize(tlv.value()[offset:])
            offset += len(hv)
            hash_values.append(hv)
        return cls(hash_values)

    def serialize(self):
        return self._tlv.serialize()
| 27.606742 | 86 | 0.625967 | 325 | 2,457 | 4.513846 | 0.415385 | 0.088616 | 0.05726 | 0.034765 | 0.02863 | 0.02863 | 0 | 0 | 0 | 0 | 0 | 0.013016 | 0.28083 | 2,457 | 88 | 87 | 27.920455 | 0.817204 | 0.365893 | 0 | 0.047619 | 0 | 0 | 0.056376 | 0 | 0 | 0 | 0.004027 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.02381 | 0.142857 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
fe152ca660ab97b3c8ccee1a6e74ccc101fe26e2 | 2,373 | py | Python | starter-kits/credential-registry/server/tob-api/api_v2/migrations/0003_auto_20181005_2140.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | ["Apache-2.0"] | 1 | 2019-03-18T13:10:05.000Z | 2019-03-18T13:10:05.000Z | starter-kits/credential-registry/server/tob-api/api_v2/migrations/0003_auto_20181005_2140.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | ["Apache-2.0"] | 8 | 2019-06-15T13:18:39.000Z | 2021-05-01T17:52:02.000Z | starter-kits/credential-registry/server/tob-api/api_v2/migrations/0003_auto_20181005_2140.py | nairobi222/indy-catalyst | dcbd80524ace7747ecfecd716ff932e9b571d69a | ["Apache-2.0"] | 3 | 2019-06-12T21:08:53.000Z | 2021-05-03T17:09:37.000Z |

# -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-05 21:40
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('api_v2', '0002_user_display_name'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='doingbusinessas',
            name='verifiableOrgId',
        ),
        migrations.RemoveField(
            model_name='issuerservice',
            name='jurisdictionId',
        ),
        migrations.RemoveField(
            model_name='location',
            name='doingBusinessAsId',
        ),
        migrations.RemoveField(
            model_name='location',
            name='locationTypeId',
        ),
        migrations.RemoveField(
            model_name='location',
            name='verifiableOrgId',
        ),
        migrations.RemoveField(
            model_name='verifiableclaim',
            name='claimType',
        ),
        migrations.RemoveField(
            model_name='verifiableclaim',
            name='inactiveClaimReasonId',
        ),
        migrations.RemoveField(
            model_name='verifiableclaim',
            name='verifiableOrgId',
        ),
        migrations.RemoveField(
            model_name='verifiableclaimtype',
            name='issuerServiceId',
        ),
        migrations.RemoveField(
            model_name='verifiableorg',
            name='jurisdictionId',
        ),
        migrations.RemoveField(
            model_name='verifiableorg',
            name='orgTypeId',
        ),
        migrations.DeleteModel(
            name='DoingBusinessAs',
        ),
        migrations.DeleteModel(
            name='InactiveClaimReason',
        ),
        migrations.DeleteModel(
            name='IssuerService',
        ),
        migrations.DeleteModel(
            name='Jurisdiction',
        ),
        migrations.DeleteModel(
            name='Location',
        ),
        migrations.DeleteModel(
            name='LocationType',
        ),
        migrations.DeleteModel(
            name='VerifiableClaim',
        ),
        migrations.DeleteModel(
            name='VerifiableClaimType',
        ),
        migrations.DeleteModel(
            name='VerifiableOrg',
        ),
        migrations.DeleteModel(
            name='VerifiableOrgType',
        ),
    ]
fe1c0591c4151d978ce42f0dfcabccd003aa16e6 | 2,770 | py | Python | aewl/helpers.py | SigJig/aewl | cbcf1a635503f53536d2cb32b88415b221e84bf1 | ["MIT"] | null | null | null | aewl/helpers.py | SigJig/aewl | cbcf1a635503f53536d2cb32b88415b221e84bf1 | ["MIT"] | 2 | 2021-04-22T19:00:11.000Z | 2021-05-02T19:06:26.000Z | aewl/helpers.py | SigJig/aewl | cbcf1a635503f53536d2cb32b88415b221e84bf1 | ["MIT"] | null | null | null |
class EmptyFactor:
    """
    Empty factors, such as safeZoneX which do not get multiplied by anything
    """
    def __str__(self):
        return type(self).__name__

    def __repr__(self):
        return str(self)

    def _operation_skip_if(self, skip, other, op):
        if other == skip:
            return self

        return Operation(self, other, op)

    def _skip_zero(self, other, op):
        if isinstance(other, (float, int)) and float(other) == 0:
            return self

        return Operation(self, other, op)

    def __mul__(self, other):
        return self._operation_skip_if(1, other, '*')

    def __truediv__(self, other):
        return self._operation_skip_if(1, other, '/')

    def __mod__(self, other):
        return Operation(self, other, '%')

    def __add__(self, other):
        return self._skip_zero(other, '+')

    def __sub__(self, other):
        return self._skip_zero(other, '-')

    def _filter_redundant_prod(self, return_):
        if float(self) == 1:
            return return_

        return '({}*{})'.format(float(self), return_)


class Factor(EmptyFactor, float):
    def __str__(self):
        return '{}({})'.format(type(self).__name__, float(self))

    def export(self):
        return float(self)


class Operation(EmptyFactor):
    def __init__(self, left, right, op):
        self.left = left
        self.right = right
        self.op = op

    def export(self):
        def _xport(x):
            if hasattr(x, 'export'):
                return x.export()
            return x

        return '({left}{op}{right})'.format(
            left=_xport(self.left),
            op=self.op,
            right=_xport(self.right)
        )

    def __str__(self):
        return '{name}({left}{op}{right})'.format(
            name=type(self).__name__,
            left=self.left,
            op=self.op,
            right=self.right
        )


class SafeZoneX(EmptyFactor):
    def export(self):
        return 'safeZoneX'


class SafeZoneY(EmptyFactor):
    def export(self):
        return 'safeZoneY'


class PixelGrid(EmptyFactor):
    @classmethod
    def pixel_h(cls, fac):
        return Operation(PixelH(fac), cls(), '*')

    @classmethod
    def pixel_w(cls, fac):
        return Operation(PixelW(fac), cls(), '*')

    def export(self):
        return 'pixelGrid'


class SafeZoneW(Factor):
    def export(self):
        return self._filter_redundant_prod('safeZoneW')


class SafeZoneH(Factor):
    def export(self):
        return self._filter_redundant_prod('safeZoneH')


class PixelH(Factor):
    def export(self):
        return self._filter_redundant_prod('pixelH')


class PixelW(Factor):
    def export(self):
        return self._filter_redundant_prod('pixelW')


class Percentage(float):
    pass
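The `_operation_skip_if`/`_skip_zero` methods implement identity-operand elision: multiplying by 1 or adding 0 returns the operand unchanged instead of allocating an `Operation` node. A minimal standalone sketch of the same idea (`Expr` here is hypothetical, not part of aewl):

```python
class Expr:
    """Mini version of the EmptyFactor pattern: an algebraic wrapper that
    skips identity operands (x * 1 -> x, x + 0 -> x) instead of building
    a needless operation node."""
    def __init__(self, name):
        self.name = name

    def __mul__(self, other):
        if other == 1:
            return self          # multiplying by 1 changes nothing
        return Expr('({}*{})'.format(self.name, other))

    def __add__(self, other):
        if isinstance(other, (int, float)) and float(other) == 0:
            return self          # adding 0 changes nothing
        return Expr('({}+{})'.format(self.name, other))


x = Expr('safeZoneW')
print((x * 1).name)    # safeZoneW  (no wrapper node created)
print((x * 0.5).name)  # (safeZoneW*0.5)
print((x + 0).name)    # safeZoneW
```

The payoff is a shorter exported expression string with no redundant `(...*1)` or `(...+0)` groups.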
a3b0c0bd3fa0ef2995b2c304a285c25b5a8a0bbf | 489 | py | Python | 016_Quebrando_um_numero.py | fabioeomedeiros/Python-Base | ef9c1c66b3221f71d1c8dcaf4c2f86503712e9f1 | ["MIT"] | null | null | null | 016_Quebrando_um_numero.py | fabioeomedeiros/Python-Base | ef9c1c66b3221f71d1c8dcaf4c2f86503712e9f1 | ["MIT"] | null | null | null | 016_Quebrando_um_numero.py | fabioeomedeiros/Python-Base | ef9c1c66b3221f71d1c8dcaf4c2f86503712e9f1 | ["MIT"] | null | null | null |

# 016_Quebrando_um_numero.py
# Splits a number and displays its integer and fractional parts
from math import trunc

print()
num = float(input("Enter a number: "))

# using the trunc method from the math library
print(f"The integer part of {num} is: {int(num)}")
print(f"The fractional part of {num} is: {(num - trunc(num)):.4f}")
print()

# using the built-in int function
print(f"The integer part of {num} is: {int(num)}")
print(f"The fractional part of {num} is: {(num - int(num)):.4f}")
print()
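The same integer/fractional split can also be done in a single call with `math.modf`; a standalone sketch with a hard-coded value in place of `input()`:

```python
import math

num = 123.456
# modf returns (fractional part, integer part), both as floats
frac, whole = math.modf(num)
print(f"The integer part of {num} is: {int(whole)}")
print(f"The fractional part of {num} is: {frac:.4f}")
```

Note that `math.trunc`, `int`, and `math.modf` all truncate toward zero, so the three approaches agree for negative numbers as well.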
a3b6d8a55aad781578d713a387e609902c6da3db | 3,045 | py | Python | src/sage/repl/readline_extra_commands.py | bopopescu/sage | 2d495be78e0bdc7a0a635454290b27bb4f5f70f0 | ["BSL-1.0"] | 3 | 2016-06-19T14:48:31.000Z | 2022-01-28T08:46:01.000Z | src/sage/repl/readline_extra_commands.py | bopopescu/sage | 2d495be78e0bdc7a0a635454290b27bb4f5f70f0 | ["BSL-1.0"] | 2 | 2018-10-30T13:40:20.000Z | 2020-07-23T12:13:30.000Z | src/sage/repl/readline_extra_commands.py | bopopescu/sage | 2d495be78e0bdc7a0a635454290b27bb4f5f70f0 | ["BSL-1.0"] | 7 | 2021-11-08T10:01:59.000Z | 2022-03-03T11:25:52.000Z |

r"""
Extra Readline Commands

.. WARNING::

    The feature described here is no longer available in Sage, as IPython, upon
    which Sage's command line interface is based, adopted prompt_toolkit as a
    replacement of readline as of IPython version 5.0.

The following extra readline commands are available in Sage:

- ``operate-and-get-next``
- ``history-search-backward-and-save``
- ``history-search-forward-and-save``

The ``operate-and-get-next`` command accepts the input line and fetches the next line
from the history. This is the same command with the same name in the Bash shell.

The ``history-search-backward-and-save`` command searches backward in the history
for the string of characters from the start of the input line to the current cursor
position, and fetches the first line found. If the cursor is at the start of the line,
the previous line is fetched. The position of the fetched line is saved internally,
and the next search begins at the saved position.

The ``history-search-forward-and-save`` command behaves similarly but forward.

The previous two commands are best used in tandem to fetch a block of lines from the
history, by searching backward for the first line of the block and then issuing the
forward command as many times as needed. They are intended to replace the
``history-search-backward`` command and the ``history-search-forward`` command
provided by the GNU readline library used in Sage.

To bind these commands to keys, insert the relevant lines into the IPython
configuration file ``$DOT_SAGE/ipython-*/profile_default/ipython_config.py``. Note
that ``$DOT_SAGE`` is ``$HOME/.sage`` by default. For example,

::

    c = get_config()

    c.InteractiveShell.readline_parse_and_bind = [
        '"\C-o": operate-and-get-next',
        '"\e[A": history-search-backward-and-save',
        '"\e[B": history-search-forward-and-save'
        ]

binds the three commands to the control-o key, the up arrow key, and the down arrow
key, respectively. *Warning:* Sometimes these keys may be bound to other actions by
the terminal and do not reach readline properly (check this by running ``stty -a``
and reading the ``cchars`` section). Then you may need to turn off these bindings
before the new readline commands work. A prominent case is when control-o is bound
to ``discard`` by the terminal. You can turn this off by running
``stty discard undef``.

AUTHORS:

- Kwankyu Lee (2010-11-23): initial version

- Kwankyu Lee (2013-06-05): updated for the new IPython configuration format.
"""
#*****************************************************************************
#       Copyright (C) 2010 Kwankyu Lee <ekwankyu@gmail.com>
#
#  Distributed under the terms of the GNU General Public License (GPL)
#                  http://www.gnu.org/licenses/
#*****************************************************************************

from sage.misc.superseded import deprecation

deprecation(21342, "This module and the feature it provides is not available anymore in Sage")
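The module body reduces to a single `deprecation()` call at import time. The same effect can be sketched with the standard-library `warnings` module (`deprecated_feature` is a made-up stand-in, not Sage's actual helper):

```python
import warnings


def deprecated_feature(ticket_number, message):
    """Rough standalone stand-in for a deprecation() helper: emit a
    DeprecationWarning tagged with the tracking ticket number."""
    warnings.warn("(ticket #{}) {}".format(ticket_number, message),
                  DeprecationWarning, stacklevel=2)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # DeprecationWarning is hidden by default
    deprecated_feature(21342, "readline commands are gone")

print(len(caught), caught[0].category.__name__)  # 1 DeprecationWarning
```

`stacklevel=2` makes the warning point at the caller of the deprecated entry point rather than at the helper itself.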
a3ba112724eb41303dadbed92c7755b3b7e561ce | 2,023 | py | Python | mysite/polls/models.py | bjtuhfz/pipe_leakage_query_system | eecca3fdbee44fb0f818d5fb195cf9769eda8786 | ["MIT"] | null | null | null | mysite/polls/models.py | bjtuhfz/pipe_leakage_query_system | eecca3fdbee44fb0f818d5fb195cf9769eda8786 | ["MIT"] | null | null | null | mysite/polls/models.py | bjtuhfz/pipe_leakage_query_system | eecca3fdbee44fb0f818d5fb195cf9769eda8786 | ["MIT"] | 2 | 2017-05-26T04:32:07.000Z | 2019-04-07T13:50:17.000Z |

from __future__ import unicode_literals
from django.db import models
import datetime
from django.utils import timezone

# Create your models here.

# class Question(models.Model):
#     question_text = models.CharField(max_length=200)
#     pub_date = models.DateTimeField('date published')
#
#     def __str__(self):
#         return self.question_text
#
#     def was_published_recently(self):
#         # return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
#         now = timezone.now()
#         return now - datetime.timedelta(days=1) <= self.pub_date <= now


class Tweet(models.Model):
    tweet_text = models.CharField(max_length=150)
    # pub_date = models.CharField(max_length=15)
    pub_date = models.DateTimeField()
    location = models.CharField(max_length=10)
    label = models.CharField(max_length=10)

    def __str__(self):
        return self.tweet_text

    def was_published_recently(self):
        # return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
        now = timezone.now()
        return now - datetime.timedelta(days=1) <= self.pub_date <= now


class Choice(models.Model):
    tweet = models.ForeignKey(Tweet, on_delete=models.CASCADE)
    choice_text = models.CharField(max_length=100)
    votes = models.IntegerField(default=0)

    def __str__(self):
        return self.choice_text


# Wei Wang
class User(models.Model):
    username = models.CharField(max_length=100)
    password = models.CharField(max_length=100)

    def __str__(self):
        return self.username


class Message(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    content = models.CharField(max_length=300)
    location = models.CharField(max_length=30, default="")
    pub_date = models.DateTimeField('date published')
    status = models.CharField(max_length=100, default="Unlabelled")

    def __str__(self):
        return self.content

    def was_published_recently(self):
        return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
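Note that `Message.was_published_recently` only bounds `pub_date` from below, so future-dated rows also count as "recent" (the commented-out `Question` version and `Tweet` add the upper bound). A standalone, ORM-free version of the `Message` variant for illustration:

```python
from datetime import datetime, timedelta


def was_published_recently(pub_date, now=None):
    """Standalone version of the Message helper: True if pub_date falls
    within the last 24 hours. Future dates also return True here, matching
    the Message implementation above (but not Tweet's)."""
    now = now or datetime.utcnow()
    return pub_date >= now - timedelta(days=1)


now = datetime(2024, 1, 2, 12, 0)
print(was_published_recently(datetime(2024, 1, 2, 0, 0), now))    # True
print(was_published_recently(datetime(2023, 12, 31, 12, 0), now)) # False
```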
a3ca4bd0882a8dccc51d543cc41a81a4ff9f3acf | 135 | py | Python | guest-talks/20170213-optional-static-types/good_example.py | mgadagin/PythonClass | 70b370362d75720b3fb0e1d6cc8158f9445e9708 | ["MIT"] | 46 | 2017-09-27T20:19:36.000Z | 2020-12-08T10:07:19.000Z | guest-talks/20170213-optional-static-types/good_example.py | mgadagin/PythonClass | 70b370362d75720b3fb0e1d6cc8158f9445e9708 | ["MIT"] | 6 | 2018-01-09T08:07:37.000Z | 2020-09-07T12:25:13.000Z | guest-talks/20170213-optional-static-types/good_example.py | mgadagin/PythonClass | 70b370362d75720b3fb0e1d6cc8158f9445e9708 | ["MIT"] | 18 | 2017-10-10T02:06:51.000Z | 2019-12-01T10:18:13.000Z |

def fibonnacci(nth: int) -> int:
    if nth <= 1:
        return 1
    else:
        return fibonnacci(nth - 1) + fibonnacci(nth - 2)
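The plain recursive definition recomputes the same subproblems exponentially often. Memoizing it with `functools.lru_cache` (keeping the file's `fibonnacci` spelling and type hints) makes it linear in the number of distinct calls:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fibonnacci(nth: int) -> int:
    # Same recurrence as above, but each result is cached,
    # so fibonnacci(n) costs O(n) calls instead of O(2^n).
    if nth <= 1:
        return 1
    return fibonnacci(nth - 1) + fibonnacci(nth - 2)


print([fibonnacci(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```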
a3fef8727b39ef1514d1ab301ca08e01d1277985 | 686 | py | Python | models/dtc.py | harryprabowo/tars-backend | f791df7124a71bc3be6bd305f2197918bf86d74d | ["MIT"] | 1 | 2020-02-14T15:26:20.000Z | 2020-02-14T15:26:20.000Z | models/dtc.py | harryprabowo/tars-backend | f791df7124a71bc3be6bd305f2197918bf86d74d | ["MIT"] | null | null | null | models/dtc.py | harryprabowo/tars-backend | f791df7124a71bc3be6bd305f2197918bf86d74d | ["MIT"] | null | null | null |

from app import db
class DTC(db.Model):
    __tablename__ = 'dtcs'

    id = db.Column(db.Integer, primary_key=True)
    dtc_number = db.Column(db.String(10))
    dtc_name = db.Column(db.String(100))
    desc = db.Column(db.String(1000), nullable=True)
    system = db.Column(db.String(25), nullable=True)
    severity = db.Column(db.Integer)
    urgency = db.Column(db.Integer)

    def serialize(self):
        return {
            'id': self.id,
            'dtc_number': self.dtc_number,
            'dtc_name': self.dtc_name,
            'desc': self.desc,
            'system': self.system,
            'severity': self.severity,
            'urgency': self.urgency,
        }
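The hand-written `serialize()` maps each column to a dict key one by one. The same pattern can be generalized with `getattr` over a field list (standalone sketch; `FakeDTC` and the `'P0301'` value are made up for illustration):

```python
def serialize_row(row, fields):
    """Generic version of the serialize() pattern above: pick the listed
    attributes off an object into a plain dict (e.g. for jsonify)."""
    return {f: getattr(row, f) for f in fields}


class FakeDTC:
    id = 7
    dtc_number = 'P0301'
    severity = 3


print(serialize_row(FakeDTC(), ['id', 'dtc_number', 'severity']))
# {'id': 7, 'dtc_number': 'P0301', 'severity': 3}
```

An explicit field list keeps the wire format stable even if new columns are later added to the model.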
4306e5f0da25fe0d4940a1ba8329ab1c6ac6de16 | 1,087 | py | Python | diofant/matrices/expressions/funcmatrix.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | ["BSD-3-Clause"] | 57 | 2016-09-13T23:16:26.000Z | 2022-03-29T06:45:51.000Z | diofant/matrices/expressions/funcmatrix.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | ["BSD-3-Clause"] | 402 | 2016-05-11T11:11:47.000Z | 2022-03-31T14:27:02.000Z | diofant/matrices/expressions/funcmatrix.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | ["BSD-3-Clause"] | 20 | 2016-05-11T08:17:37.000Z | 2021-09-10T09:15:51.000Z |

from ...core import Expr
from ...core.sympify import sympify
from .matexpr import MatrixExpr


class FunctionMatrix(MatrixExpr):
    """
    Represents a Matrix using a function (Lambda)

    This class is an alternative to SparseMatrix

    >>> i, j = symbols('i j')
    >>> X = FunctionMatrix(3, 3, Lambda((i, j), i + j))
    >>> Matrix(X)
    Matrix([
    [0, 1, 2],
    [1, 2, 3],
    [2, 3, 4]])

    >>> Y = FunctionMatrix(1000, 1000, Lambda((i, j), i + j))

    >>> isinstance(Y*Y, MatMul)  # this is an expression object
    True

    >>> (Y**2)[10, 10]  # So this is evaluated lazily
    342923500
    """
    def __new__(cls, rows, cols, lamda):
        rows, cols = sympify(rows), sympify(cols)
        return Expr.__new__(cls, rows, cols, lamda)

    @property
    def shape(self):
        return self.args[0:2]

    @property
    def lamda(self):
        return self.args[2]

    def _entry(self, i, j):
        return self.lamda(i, j)

    def _eval_trace(self):
        from ...concrete import Sum
        from .trace import Trace
        return Trace(self).rewrite(Sum)
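The class stores only `(rows, cols, lamda)` and computes entries on demand via `_entry`, which is what makes the 1000x1000 example in the docstring cheap. A plain-Python sketch of the same lazy-entry idea, without any diofant machinery (`function_matrix` is illustrative only):

```python
def function_matrix(rows, cols, f):
    """Plain-Python sketch of the FunctionMatrix idea: entries are
    computed lazily from a function instead of being stored."""
    class _M:
        shape = (rows, cols)

        def __getitem__(self, idx):
            i, j = idx
            return f(i, j)      # entry computed on demand

        def trace(self):
            # sum of diagonal entries, mirroring _eval_trace conceptually
            return sum(f(i, i) for i in range(min(rows, cols)))

    return _M()


X = function_matrix(3, 3, lambda i, j: i + j)
print(X[1, 2])    # 3
print(X.trace())  # 0 + 2 + 4 = 6
```

No O(rows*cols) storage is ever allocated, regardless of the declared shape.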
431c90612332da9814879b0719b2cbe5f1419e0f | 108 | py | Python | 03_01_dice.py | ChoppingBroccoli/Raspi_Book_Exercises | 8082ce330817175212a7e5a8baf224e05a63dd3a | ["MIT"] | 26 | 2015-04-28T14:34:14.000Z | 2021-12-03T21:29:29.000Z | 03_01_dice.py | ChoppingBroccoli/Raspi_Book_Exercises | 8082ce330817175212a7e5a8baf224e05a63dd3a | ["MIT"] | null | null | null | 03_01_dice.py | ChoppingBroccoli/Raspi_Book_Exercises | 8082ce330817175212a7e5a8baf224e05a63dd3a | ["MIT"] | 27 | 2015-09-06T16:45:33.000Z | 2021-03-26T15:58:51.000Z |

#03_01_dice
import random

for x in range(1, 11):
    random_number = random.randint(1, 6)
    print(random_number)
4320ac889347de937838e13e00b8ae5f2025c288 | 557 | py | Python | problems/12.py | christofferaakre/project-euler | 4b42802233be10e4a592798205171fb5156dae6b | ["MIT"] | null | null | null | problems/12.py | christofferaakre/project-euler | 4b42802233be10e4a592798205171fb5156dae6b | ["MIT"] | null | null | null | problems/12.py | christofferaakre/project-euler | 4b42802233be10e4a592798205171fb5156dae6b | ["MIT"] | null | null | null |

import math
from decimal import Decimal

from main import Solver, list_divisors

solver = Solver()


def triangle_number(n):
    return int(n * (n + 1) / 2)


def is_triangle_number(x):
    return ((-1 + math.sqrt(1 + 8 * x)) / 2) % 1 == 0


largest_number_of_divisors = 0
x = 0
i = 1
while largest_number_of_divisors <= 500:
    x += i
    number_of_divisors = len(list_divisors(x))
    if number_of_divisors > largest_number_of_divisors:
        print(number_of_divisors)
        largest_number_of_divisors = number_of_divisors
    i += 1

solver.solve(12, x)
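The brute-force loop above factors each full triangle number via `list_divisors` (a helper from the repo's local `main` module). A self-contained sketch that exploits T(n) = n(n+1)/2 with gcd(n, n+1) = 1, so the divisor count factors over the two coprime halves:

```python
import math


def count_divisors(n):
    """Count divisors of n by trial division up to sqrt(n)."""
    count = 0
    root = math.isqrt(n)
    for d in range(1, root + 1):
        if n % d == 0:
            count += 2          # counts both d and n // d
    if root * root == n:
        count -= 1              # perfect square: the root was counted twice
    return count


# T(n) = n*(n+1)/2 and gcd(n, n+1) = 1, so d(T(n)) is the product of the
# divisor counts of the two coprime halves -- no need to factor T(n) itself.
n = 1
while True:
    if n % 2 == 0:
        d = count_divisors(n // 2) * count_divisors(n + 1)
    else:
        d = count_divisors(n) * count_divisors((n + 1) // 2)
    if d > 500:
        break
    n += 1

answer = n * (n + 1) // 2
print(answer)  # 76576500
```

This keeps every trial division on numbers of size ~n rather than ~n^2/2.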
43259e8e7c375fc55442abedd4d41d0a13dbe895 | 451 | py | Python | 1478.py | heltonricardo/URI | 160cca22d94aa667177c9ebf2a1c9864c5e55b41 | ["MIT"] | 6 | 2021-04-13T00:33:43.000Z | 2022-02-10T10:23:59.000Z | 1478.py | heltonricardo/URI | 160cca22d94aa667177c9ebf2a1c9864c5e55b41 | ["MIT"] | null | null | null | 1478.py | heltonricardo/URI | 160cca22d94aa667177c9ebf2a1c9864c5e55b41 | ["MIT"] | 3 | 2021-03-23T18:42:24.000Z | 2022-02-10T10:24:07.000Z |

while True:
    n = int(input())
    if n == 0: break
    l = 1
    o = '+'
    while l <= n:
        v = l
        c = 1
        while c <= n:
            if c != n: print('{:>3}'.format(v), end=' ')
            else:
                print('{:>3}'.format(v))
                o = '-'
            if v == 1: o = '+'
            elif v == n: o = '-'
            if o == '+': v += 1
            else: v -= 1
            c += 1
        l += 1
    print()
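The direction-flipping walk above bounces `v` between 1 and `n` along each row. The pattern it produces also has a direct closed form (assuming the intended output is URI 1478's "Square Matrix II", where entry (i, j) is |i - j| + 1; `square_matrix_row` is illustrative, not part of the submission):

```python
def square_matrix_row(i, n):
    """Closed form for the row printed above: entry (i, j) is |i - j| + 1
    with zero-based row and column indices."""
    return [abs(i - j) + 1 for j in range(n)]


for i in range(3):
    print(' '.join('{:>3}'.format(v) for v in square_matrix_row(i, 3)))
#   1   2   3
#   2   1   2
#   3   2   1
```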
432c39c315ddf844b7d71fd91b7511362986f19f | 56,003 | py | Python | predstorm/plot.py | helioforecast/Predstorm | f3c02c201ba790261c4fbd42264e3eb91e09ceb3 | ["MIT"] | 8 | 2020-01-17T22:04:38.000Z | 2021-11-18T11:02:23.000Z | predstorm/plot.py | helioforecast/Predstorm | f3c02c201ba790261c4fbd42264e3eb91e09ceb3 | ["MIT"] | 3 | 2019-04-11T09:39:28.000Z | 2019-06-19T12:02:14.000Z | predstorm/plot.py | IWF-helio/PREDSTORM | f3c02c201ba790261c4fbd42264e3eb91e09ceb3 | ["MIT"] | 9 | 2019-03-15T13:28:42.000Z | 2019-11-08T09:12:47.000Z |

#!/usr/bin/env python
"""
This is the module for producing predstorm plots.
Author: C. Moestl, R. Bailey, IWF Graz, Austria
started May 2019, last update May 2019
Python 3.7
Issues:
- ...
To-dos:
- ...
Future steps:
- ...
"""
import os
import sys
import copy
import logging
import logging.config
import numpy as np
import pdb
import seaborn as sns
import scipy.signal as signal
import matplotlib.dates as mdates
from matplotlib.dates import date2num, num2date, DateFormatter
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib.patches import Polygon
from datetime import datetime, timedelta
from glob import iglob
import json
import urllib
from .config import plotting as pltcfg
logger = logging.getLogger(__name__)
# =======================================================================================
# --------------------------- PLOTTING FUNCTIONS ----------------------------------------
# =======================================================================================
def plot_solarwind_and_dst_prediction(DSCOVR_data, STEREOA_data, DST_data, DSTPRED_data, newell_coupling=None, dst_label='Dst Temerin & Li 2002', past_days=3.5, future_days=7., verification_mode=False, timestamp=None, times_3DCORE=[], times_nans={}, outfile='predstorm_real.png', **kwargs):
    """
    Plots solar wind variables, past from DSCOVR and future/predicted from STEREO-A.
    Total B-field and Bz (top), solar wind speed (second), particle density (third)
    and Dst (fourth) from Kyoto and model prediction.

    Parameters
    ==========
    DSCOVR_data : list[minute data, hourly data]
        DSCOVR data in different time resolutions.
    STEREOA_data : list[minute data, hourly data]
        STEREO-A data in different time resolutions.
    DST_data : predstorm_module.SatData
        Kyoto Dst
    DSTPRED_data : predstorm_module.SatData
        Dst predicted by PREDSTORM.
    dst_label : str (default='Dst Temerin & Li 2002')
        Descriptor for the Dst method being plotted.
    past_days : float (default=3.5)
        Number of days in the past to plot.
    future_days : float (default=7.)
        Number of days into the future to plot.
    lw : int (default=1)
        Linewidth for plotting functions.
    fs : int (default=11)
        Font size for all text in plot.
    ms : int (default=5)
        Marker size for markers in plot.
    figsize : tuple(float=width, float=height) (default=(14,12))
        Figure size (in inches) for output file.
    verification_mode : bool (default=False)
        If True, verification mode will produce a plot of the predicted Dst
        for model verification purposes.
    timestamp : datetime obj
        Time for 'now' label in plot.

    Returns
    =======
    plt.savefig : .png file
        File saved to XXX
    """
    figsize = kwargs.get('figsize', pltcfg.figsize)
    lw = kwargs.get('lw', pltcfg.lw)
    fs = kwargs.get('fs', pltcfg.fs)
    date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
    c_dst = kwargs.get('c_dst', pltcfg.c_dst)
    c_dis = kwargs.get('c_dis', pltcfg.c_dis)
    c_ec = kwargs.get('c_ec', pltcfg.c_ec)
    c_sta = kwargs.get('c_sta', pltcfg.c_sta)
    c_sta_dst = kwargs.get('c_sta_dst', pltcfg.c_sta_dst)
    c_btot = kwargs.get('c_btot', pltcfg.c_btot)
    c_bx = kwargs.get('c_bx', pltcfg.c_bx)
    c_by = kwargs.get('c_by', pltcfg.c_by)
    c_bz = kwargs.get('c_bz', pltcfg.c_bz)
    ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
    fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend)
    fs_ylabel = kwargs.get('fs_ylabel', pltcfg.fs_ylabel)
    fs_title = kwargs.get('fs_title', pltcfg.fs_title)

    # Set style:
    sns.set_context(pltcfg.sns_context)
    sns.set_style(pltcfg.sns_style)

    # Make figure object:
    fig = plt.figure(1, figsize=figsize)
    axes = []

    # Set data objects:
    stam, sta = STEREOA_data
    dism, dis = DSCOVR_data
    dst = DST_data
    dst_pred = DSTPRED_data

    text_offset = past_days  # days (for 'fast', 'intense', etc.)

    # For the minute data, check which are the intervals to show for STEREO-A until end of plot
    i_fut = np.where(np.logical_and(stam['time'] > dism['time'][-1], \
                     stam['time'] < dism['time'][-1]+future_days))[0]

    if timestamp == None:
        timestamp = datetime.utcnow()
    timeutc = mdates.date2num(timestamp)

    if newell_coupling == None:
        n_plots = 4
    else:
        n_plots = 5

    plotstart = timeutc - past_days
    plotend = timeutc + future_days - 3./24.
    # SUBPLOT 1: Total B-field and Bz
    # -------------------------------
    ax1 = fig.add_subplot(n_plots,1,1)
    axes.append(ax1)

    # Total B-field and Bz (DSCOVR)
    plst = 2
    plt.plot_date(dism['time'][::plst], dism['btot'][::plst],'-', c=c_btot, label='$B_{tot}$', linewidth=lw)
    plt.plot_date(dism['time'][::plst], dism['bx'][::plst],'-', c=c_bx, label='$B_x$', linewidth=lw)
    plt.plot_date(dism['time'][::plst], dism['by'][::plst],'-', c=c_by, label='$B_y$', linewidth=lw)
    plt.plot_date(dism['time'][::plst], dism['bz'][::plst],'-', c=c_bz, label='$B_z$', linewidth=lw)

    # STEREO-A minute resolution data with timeshift
    plt.plot_date(stam['time'][i_fut], stam['btot'][i_fut], '-', c=c_btot, alpha=0.5, linewidth=0.5)
    plt.plot_date(stam['time'][i_fut], stam['br'][i_fut], '-', c=c_bx, alpha=0.5, linewidth=0.5)
    plt.plot_date(stam['time'][i_fut], stam['bt'][i_fut], '-', c=c_by, alpha=0.5, linewidth=0.5)
    plt.plot_date(stam['time'][i_fut], stam['bn'][i_fut], '-', c=c_bz, alpha=0.5, linewidth=0.5)

    # Indicate 0 level for Bz
    plt.plot_date([plotstart,plotend], [0,0],'--k', alpha=0.5, linewidth=1)
    plt.ylabel('Magnetic field [nT]', fontsize=fs_ylabel)

    # For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
    bplotmax = np.nanmax(dism['btot'])+5
    bplotmin = -bplotmax
    plt.ylim(bplotmin, bplotmax)

    if len(times_3DCORE) > 0:
        plt.annotate('flux rope (3DCORE)', xy=(times_3DCORE[0],bplotmax-(bplotmax-bplotmin)*0.25),
                     xytext=(times_3DCORE[0]+0.05,bplotmax-(bplotmax-bplotmin)*0.95), color='gray', fontsize=14)

    if 'stereo' in stam.source.lower():
        pred_source = 'STEREO-Ahead Beacon'
    elif 'dscovr' in stam.source.lower() or 'noaa' in stam.source.lower():
        pred_source = '27-day SW-Recurrence Model (NOAA)'
    plt.title('L1 real time solar wind from NOAA SWPC for '+ datetime.strftime(timestamp, "%Y-%m-%d %H:%M")+
              ' UT & {}'.format(pred_source), fontsize=fs_title)
    # SUBPLOT 2: Solar wind speed
    # ---------------------------
    ax2 = fig.add_subplot(n_plots,1,2)
    axes.append(ax2)

    # Plot solar wind speed (DSCOVR):
    plt.plot_date(dism['time'][::plst], dism['speed'][::plst],'-', c='black', label='speed',linewidth=lw)
    plt.ylabel('Speed $\mathregular{[km \\ s^{-1}]}$', fontsize=fs_ylabel)

    stam_speed_filt = signal.savgol_filter(stam['speed'],11,1)
    if 'speed' in times_nans:
        stam_speed_filt = np.ma.array(stam_speed_filt)
        for times in times_nans['speed']:
            stam_speed_filt = np.ma.masked_where(np.logical_and(stam['time'] > times[0], stam['time'] < times[1]), stam_speed_filt)

    # Plot STEREO-A data with timeshift and savgol filter
    plt.plot_date(stam['time'][i_fut], stam_speed_filt[i_fut],'-',
                  c='black', alpha=0.5, linewidth=lw, label='speed {}'.format(stam.source))

    # Add speed levels:
    pltcfg.plot_speed_lines(xlims=[plotstart, plotend])

    # For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
    vplotmax = np.nanmax(np.concatenate((dism['speed'],stam_speed_filt[i_fut])))+100
    vplotmin = np.nanmin(np.concatenate((dism['speed'],stam_speed_filt[i_fut]))-50)
    plt.ylim(vplotmin, vplotmax)

    plt.annotate('now', xy=(timeutc,vplotmax-(vplotmax-vplotmin)*0.25), xytext=(timeutc+0.05,vplotmax-(vplotmax-vplotmin)*0.25), color='k', fontsize=14)
    # SUBPLOT 3: Solar wind density
    # -----------------------------
    ax3 = fig.add_subplot(n_plots,1,3)
    axes.append(ax3)

    stam_density_filt = signal.savgol_filter(stam['density'],5,1)
    if 'density' in times_nans:
        stam_density_filt = np.ma.array(stam_density_filt)
        for times in times_nans['density']:
            stam_density_filt = np.ma.masked_where(np.logical_and(stam['time'] > times[0], stam['time'] < times[1]), stam_density_filt)

    # Plot solar wind density:
    plt.plot_date(dism['time'], dism['density'],'-k', label='density L1',linewidth=lw)
    plt.ylabel('Density $\mathregular{[ccm^{-3}]}$',fontsize=fs_ylabel)

    # For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
    plt.ylim([0,np.nanmax(np.nanmax(np.concatenate((dism['density'],stam_density_filt[i_fut])))+10)])

    # Plot STEREO-A data with timeshift and savgol filter
    plt.plot_date(stam['time'][i_fut], stam_density_filt[i_fut],
                  '-', c='black', alpha=0.5, linewidth=lw, label='density {}'.format(stam.source))
# SUBPLOT 4: Actual and predicted Dst
# -----------------------------------
ax4 = fig.add_subplot(n_plots,1,4)
axes.append(ax4)
# Observed Dst Kyoto (past):
plt.plot_date(dst['time'], dst['dst'],'o', c=c_dst, label='Dst observed',markersize=ms_dst)
plt.ylabel('Dst [nT]', fontsize=fs_ylabel)
dstplotmax = np.nanmax(np.concatenate((dst['dst'], dst_pred['dst'])))+20
dstplotmin = np.nanmin(np.concatenate((dst['dst'], dst_pred['dst'])))-20
if dstplotmin > -100:   # Low activity (normal)
    plt.ylim([-100, dstplotmax + 30])
else:                   # High activity
    plt.ylim([dstplotmin, dstplotmax])
# Plot predicted Dst
dst_pred_past = dst_pred['time'] < date2num(timestamp)
plt.plot_date(dst_pred['time'][dst_pred_past], dst_pred['dst'][dst_pred_past], '-', c=c_sta_dst, label=dst_label, markersize=3, linewidth=1)
plt.plot_date(dst_pred['time'][~dst_pred_past], dst_pred['dst'][~dst_pred_past], '-', c=c_sta_dst, alpha=0.5, markersize=3, linewidth=1)
# Add prediction intervals (+/- 1 and 2 sigma):
# Errors calculated using https://machinelearningmastery.com/prediction-intervals-for-machine-learning/
error_l1 = 5.038
error_l5 = 12.249
error_pers = 13.416
ih_fut = np.where(dst_pred['time'] > dis['time'][-1])[0]
ih_past = np.arange(0, ih_fut[0]+1)
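The `ih_past`/`ih_fut` split above can be illustrated on a toy time axis (variable names here are illustrative). Note the deliberate one-point overlap so the two shaded error bands join without a gap:

```python
import numpy as np

pred_time = np.arange(0.0, 10.0)   # prediction time axis in days
last_obs_time = 4.2                # time of the last observed L1 datapoint

# Indices strictly after the last observation (future), and indices up to and
# including the first future point (past), overlapping by one sample:
ih_fut = np.where(pred_time > last_obs_time)[0]
ih_past = np.arange(0, ih_fut[0] + 1)
```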
# Error bars for data from L1:
plt.fill_between(dst_pred['time'][ih_past], dst_pred['dst'][ih_past]-error_l1, dst_pred['dst'][ih_past]+error_l1,
alpha=0.1, facecolor=c_sta_dst,
label=r'prediction interval +/- 1 & 2 $\sigma$ (68% and 95% significance)')
plt.fill_between(dst_pred['time'][ih_past], dst_pred['dst'][ih_past]-2*error_l1, dst_pred['dst'][ih_past]+2*error_l1,
alpha=0.1, facecolor=c_sta_dst)
# Error bars for data from L5/STEREO:
plt.fill_between(dst_pred['time'][ih_fut], dst_pred['dst'][ih_fut]-error_l5, dst_pred['dst'][ih_fut]+error_l5,
alpha=0.1, facecolor=c_sta_dst)
plt.fill_between(dst_pred['time'][ih_fut], dst_pred['dst'][ih_fut]-2*error_l5, dst_pred['dst'][ih_fut]+2*error_l5,
alpha=0.1, facecolor=c_sta_dst)
# Label plot with geomagnetic storm levels
pltcfg.plot_dst_activity_lines(xlims=[plotstart, plotend])
# SUBPLOT 5: Newell Coupling
# --------------------------
if newell_coupling is not None:
    ax5 = fig.add_subplot(n_plots,1,5)
    axes.append(ax5)
    # Plot Newell coupling:
    ec_past = newell_coupling['time'] < date2num(timestamp)
    avg_newell_coupling = newell_coupling.get_weighted_average('ec')
    plt.plot_date(newell_coupling['time'][ec_past], avg_newell_coupling[ec_past]/4421., '-', color=c_ec, # past
                  label='Newell coupling 4h weighted mean', linewidth=1.5)
    plt.plot_date(newell_coupling['time'][~ec_past], avg_newell_coupling[~ec_past]/4421., '-', color=c_ec, # future
                  alpha=0.5, linewidth=1.5)
    plt.ylabel('Newell Coupling / 4421\n'+r'$\mathregular{[(km/s)^{4/3} nT^{2/3}]}$', fontsize=fs_ylabel)
    # For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
    plt.ylim([0, np.nanmax(avg_newell_coupling/4421.)*1.2])
    # Indicate level of interest (Ec/4421 = 1.0):
    plt.plot_date([plotstart,plotend], [1,1],'--k', alpha=0.5, linewidth=1)
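For reference, `get_weighted_average('ec')` is predstorm-specific, but the underlying Newell et al. (2007) coupling function, dPhi/dt = v^(4/3) B_T^(2/3) sin^(8/3)(theta_c/2), can be sketched directly. This is a simplified stand-alone version, not the library's implementation:

```python
import numpy as np

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function.
    v in km/s, by/bz (GSM) in nT; result in (km/s)^(4/3) nT^(2/3)."""
    bt = np.sqrt(by**2 + bz**2)      # transverse IMF magnitude
    theta = np.arctan2(by, bz)       # IMF clock angle
    return v**(4./3.) * bt**(2./3.) * np.abs(np.sin(theta/2.))**(8./3.)
```

Southward IMF (negative bz) drives much stronger coupling than northward, which is why dividing by 4421 and flagging values near 1.0 marks geoeffective intervals.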
# GENERAL FORMATTING
# ------------------
for ax in axes:
    ax.set_xlim([plotstart,plotend])
    ax.tick_params(axis="x", labelsize=fs)
    ax.tick_params(axis="y", labelsize=fs)
    ax.legend(loc=2, ncol=4, fontsize=fs_legend)
    # Dates on x-axes:
    myformat = mdates.DateFormatter(date_fmt)
    ax.xaxis.set_major_formatter(myformat)
    # Vertical line for NOW:
    ax.plot_date([timeutc,timeutc],[-2000,100000],'-k', linewidth=2)
    # Indicate where prediction comes from 3DCORE:
    if len(times_3DCORE) > 0:
        ax.plot_date([times_3DCORE[0],times_3DCORE[0]],[-2000,100000], color='gray', linewidth=1, linestyle='--')
        ax.plot_date([times_3DCORE[-1],times_3DCORE[-1]],[-2000,100000], color='gray', linewidth=1, linestyle='--')
# Liability text:
pltcfg.group_info_text()
pltcfg.liability_text()
# Save plot:
if not verification_mode:
    plot_label = 'realtime'
else:
    plot_label = 'verify'
filename = os.path.join('results','predstorm_v1_{}_stereo_a_plot_{}.png'.format(
    plot_label, datetime.strftime(timestamp, "%Y-%m-%d-%H_%M")))
filename_eps = filename.replace('png', 'eps')
if not verification_mode:
    plt.savefig(outfile)
    logger.info('Real-time plot saved as {}!'.format(outfile))
#if not server: # Just plot and exit
#    plt.show()
#    sys.exit()
plt.savefig(filename)
logger.info('Plot saved as png:\n' + filename)
def plot_solarwind_science(DSCOVR_data, STEREOA_data, verification_mode=False, timestamp=None, past_days=7, future_days=7, plot_step=20, outfile='predstorm_science.png', **kwargs):
"""
Plots solar wind variables, past from DSCOVR and future/predicted from STEREO-A:
total B-field and components (top), solar wind speed (second) and particle
density (third).
Parameters
==========
DSCOVR_data : list[minute data, hourly data]
    DSCOVR data in different time resolutions.
STEREOA_data : list[minute data, hourly data]
    STEREO-A data in different time resolutions.
lw : int (default=1)
    Linewidth for plotting functions.
fs : int (default=11)
    Font size for all text in plot.
ms : int (default=5)
    Marker size for markers in plot.
figsize : tuple(float=width, float=height) (default=(14,12))
    Figure size (in inches) for output file.
verification_mode : bool (default=False)
    If True, verification mode will produce a plot of the predicted Dst
    for model verification purposes.
timestamp : datetime obj
    Time for 'now' label in plot.
Returns
=======
plt.savefig : .png file
File saved to XXX
"""
figsize = kwargs.get('figsize', pltcfg.figsize)
lw = kwargs.get('lw', pltcfg.lw)
fs = kwargs.get('fs', pltcfg.fs)
date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
c_dst = kwargs.get('c_dst', pltcfg.c_dst)
c_dis = kwargs.get('c_dis', pltcfg.c_dis)
c_ec = kwargs.get('c_ec', pltcfg.c_ec)
c_sta = kwargs.get('c_sta', pltcfg.c_sta)
c_sta_dst = kwargs.get('c_sta_dst', pltcfg.c_sta_dst)
ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend)
fs_ylabel = kwargs.get('fs_ylabel', pltcfg.fs_ylabel)
fs_title = kwargs.get('fs_title', pltcfg.fs_title)
# Set style:
sns.set_context(pltcfg.sns_context)
sns.set_style(pltcfg.sns_style)
# Make figure object:
fig = plt.figure(1,figsize=(20,8))
axes = []
# Set data objects:
stam, sta = STEREOA_data
dism, dis = DSCOVR_data
# For the minute data, check which are the intervals to show for STEREO-A until end of plot
sta_index_future=np.where(np.logical_and(stam['time'] > dism['time'][-1], \
stam['time'] < dism['time'][-1]+future_days))[0]
if timestamp is None:
    timestamp = datetime.utcnow()
timeutc = mdates.date2num(timestamp)
n_plots = 3
plst = plot_step
plotstart = timeutc - past_days
plotend = timeutc + future_days
# SUBPLOT 1: Total B-field and Bz
# -------------------------------
ax1 = fig.add_subplot(n_plots,1,1)
axes.append(ax1)
# Total B-field and Bz (DSCOVR)
plt.plot_date(dism['time'][::plst], dism['btot'][::plst],'-', c='black', label='B', linewidth=lw)
plt.plot_date(dism['time'][::plst], dism['bx'][::plst],'-', c='teal', label='Bx', linewidth=lw)
plt.plot_date(dism['time'][::plst], dism['by'][::plst],'-', c='orange', label='By', linewidth=lw)
plt.plot_date(dism['time'][::plst], dism['bz'][::plst],'-', c='purple', label='Bz', linewidth=lw)
# STEREO-A minute resolution data with timeshift
plt.plot_date(stam['time'][sta_index_future], stam['btot'][sta_index_future],
'-', c='black', alpha=0.5, linewidth=0.5)
plt.plot_date(stam['time'][sta_index_future], stam['br'][sta_index_future],
'-', c='teal', alpha=0.5, linewidth=0.5)
plt.plot_date(stam['time'][sta_index_future], stam['bt'][sta_index_future],
'-', c='orange', alpha=0.5, linewidth=0.5)
plt.plot_date(stam['time'][sta_index_future], stam['bn'][sta_index_future],
'-', c='purple', alpha=0.5, linewidth=0.5)
# Indicate 0 level for Bz
plt.plot_date([plotstart,plotend], [0,0],'--k', alpha=0.5, linewidth=1)
plt.ylabel('Magnetic field [nT]', fontsize=fs_ylabel)
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
bplotmax=np.nanmax(np.concatenate((dism['btot'],stam['btot'][sta_index_future])))+5
bplotmin=np.nanmin(np.concatenate((dism['bz'],stam['bn'][sta_index_future]))-5)
plt.ylim((-13, 13))
if 'stereo' in stam.source.lower():
    pred_source = 'STEREO-Ahead Beacon'
elif 'dscovr' in stam.source.lower() or 'noaa' in stam.source.lower():
    pred_source = '27-day SW-Recurrence Model (NOAA)'
else:
    pred_source = stam.source   # fall back to the raw source name
plt.title('L1 real time solar wind from NOAA SWPC for '+ datetime.strftime(timestamp, "%Y-%m-%d %H:%M")+ ' UT & {}'.format(pred_source), fontsize=fs_title)
# SUBPLOT 2: Solar wind speed
# ---------------------------
ax2 = fig.add_subplot(n_plots,1,2)
axes.append(ax2)
# Plot solar wind speed (DSCOVR):
plt.plot_date(dism['time'][::plst], dism['speed'][::plst],'-', c='black', label='speed',linewidth=lw)
plt.ylabel(r'Speed $\mathregular{[km \ s^{-1}]}$', fontsize=fs_ylabel)
# Plot STEREO-A data with timeshift and savgol filter (compute the filter once):
sta_speed_filt = signal.savgol_filter(stam['speed'][sta_index_future], 11, 1)
plt.plot_date(stam['time'][sta_index_future], sta_speed_filt, '-',
              c='black', alpha=0.5, linewidth=lw)
# Add speed levels:
pltcfg.plot_speed_lines(xlims=[plotstart, plotend])
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
vplotmax = np.nanmax(np.concatenate((dism['speed'], sta_speed_filt))) + 100
vplotmin = np.nanmin(np.concatenate((dism['speed'], sta_speed_filt))) - 50
plt.ylim(vplotmin, vplotmax)
plt.annotate('now', xy=(timeutc,vplotmax-(vplotmax-vplotmin)*0.25), xytext=(timeutc+0.05,vplotmax-(vplotmax-vplotmin)*0.25), color='k', fontsize=14)
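The Savitzky-Golay smoothing applied to the STEREO-A speed (window 11, polynomial order 1) is a sliding linear fit. A quick sanity check on synthetic data (values here are illustrative): a straight-line signal passes through unchanged, while alternating noise is strongly damped.

```python
import numpy as np
from scipy import signal

t = np.arange(50.0)
speed = 400.0 + 2.0 * t                                 # linear speed ramp in km/s
speed_noisy = speed + np.where(t % 2 == 0, 1.0, -1.0)   # zero-mean alternating noise
speed_smooth = signal.savgol_filter(speed_noisy, 11, 1) # sliding linear fit
```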
# SUBPLOT 3: Solar wind density
# -----------------------------
ax3 = fig.add_subplot(n_plots,1,3)
axes.append(ax3)
# Plot solar wind density:
plt.plot_date(dism['time'][::plst], dism['density'][::plst],'-k', label='density',linewidth=lw)
plt.ylabel(r'Density $\mathregular{[ccm^{-3}]}$', fontsize=fs_ylabel)
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
plt.ylim([0, np.nanmax(np.concatenate((dism['density'], stam['density'][sta_index_future]))) + 10])
#plot STEREO-A data with timeshift and savgol filter
plt.plot_date(stam['time'][sta_index_future], signal.savgol_filter(stam['density'][sta_index_future],5,1),
'-', c='black', alpha=0.5, linewidth=lw)
# GENERAL FORMATTING
# ------------------
for ax in axes:
    ax.set_xlim([plotstart,plotend])
    ax.tick_params(axis="x", labelsize=fs)
    ax.tick_params(axis="y", labelsize=fs)
    ax.legend(loc=2, ncol=4, fontsize=fs_legend)
    # Dates on x-axes:
    myformat = mdates.DateFormatter(date_fmt)
    ax.xaxis.set_major_formatter(myformat)
    # Vertical line for NOW:
    ax.plot_date([timeutc,timeutc],[-2000,100000],'-k', linewidth=2)
# Liability text:
pltcfg.group_info_text()
pltcfg.liability_text()
# Save plot:
if not verification_mode:
    plot_label = 'realtime'
else:
    plot_label = 'verify'
if not verification_mode:
    plt.savefig(outfile)
    logger.info('Real-time plot saved as {}!'.format(outfile))
def plot_solarwind_pretty(sw_past, sw_future, dst, newell_coupling, timestamp):
"""Uses the package mplcyberpunk to make a simpler and more visually appealing plot.
TO-DO:
- Implement weighted average smoothing on Newell Coupling."""
import mplcyberpunk
plt.style.use("cyberpunk")
c_speed = (0.58, 0.404, 0.741)
c_dst = (0.031, 0.969, 0.996)
c_ec = (0.961, 0.827, 0)
alpha_fut = 0.5
fig, (ax1, ax2, ax3) = plt.subplots(3, figsize=(17,9), sharex=True)
time_past = dst['time'] <= date2num(timestamp)
time_future = dst['time'] >= date2num(timestamp)
# Plot data:
ax1.plot_date(sw_past['time'], sw_past['speed'], '-', c=c_speed, label="Solar wind speed [km/s]")
ax1.plot_date(sw_future['time'], sw_future['speed'], '-', c=c_speed, alpha=alpha_fut)
ax2.plot_date(dst['time'][time_past], dst['dst'][time_past], '-', c=c_dst, label="$Dst$ [nT]")
ax2.plot_date(dst['time'][time_future], dst['dst'][time_future], '-', c=c_dst, alpha=alpha_fut)
avg_newell_coupling = newell_coupling.get_weighted_average('ec')
ax3.plot_date(newell_coupling['time'][time_past], avg_newell_coupling[time_past]/4421., '-', c=c_ec, label="Newell Coupling\n[nT]")
ax3.plot_date(newell_coupling['time'][time_future], avg_newell_coupling[time_future]/4421., '-', c=c_ec, alpha=alpha_fut)
mplcyberpunk.add_glow_effects(ax1)
mplcyberpunk.add_glow_effects(ax2)
mplcyberpunk.add_glow_effects(ax3)
# Add labels:
props = dict(boxstyle='round', facecolor='silver', alpha=0.2)
# place a text box in upper left in axes coords
ax1.text(0.01, 0.95, "Solar wind speed [km/s]", transform=ax1.transAxes, fontsize=14,
verticalalignment='top', bbox=props)
ax2.text(0.01, 0.95, "Predicted $Dst$ [nT]", transform=ax2.transAxes, fontsize=14,
verticalalignment='top', bbox=props)
ax3.text(0.01, 0.95, r'Newell Coupling / 4421 $\mathregular{[(km/s)^{4/3} nT^{2/3}]}$', transform=ax3.transAxes, fontsize=14,
verticalalignment='top', bbox=props)
pltcfg.plot_dst_activity_lines(xlims=[dst['time'][0], dst['time'][-1]], ax=ax2, color='silver')
pltcfg.plot_speed_lines(xlims=[dst['time'][0], dst['time'][-1]], ax=ax1, color='silver')
# Add vertical lines for 'now' time:
print_time_lines = True
for ax in [ax1, ax2, ax3]:
    # Add a line denoting "now":
    ax.axvline(x=timestamp, linewidth=2, color='silver')
    # Add buffer to top of plots so that labels don't overlap with data:
    ax_ymin, ax_ymax = ax.get_ylim()
    text_adj = (ax_ymax-ax_ymin)*0.17
    ax.set_ylim((ax_ymin, ax_ymax + text_adj))
    # Add lines for future days:
    ax_ymin, ax_ymax = ax.get_ylim()
    text_adj = (ax_ymax-ax_ymin)*0.15
    for t_day in [1,2,3,4]:
        t_days_timestamp = timestamp+timedelta(days=t_day)
        ax.axvline(x=t_days_timestamp, ls='--', linewidth=0.7, color='silver')
        if print_time_lines:
            ax.annotate('now', xy=(timestamp, ax_ymax-text_adj),
                        xytext=(timestamp+timedelta(hours=2.5), ax_ymax-text_adj*1.03),
                        color='silver', fontsize=14)
            ax.annotate('+{} days'.format(t_day), xy=(t_days_timestamp, ax_ymax-text_adj),
                        xytext=(t_days_timestamp+timedelta(hours=2), ax_ymax-text_adj*1.03),
                        color='silver', fontsize=10)
    print_time_lines = False
# Formatting:
tick_date = num2date(dst['time'][0]).replace(hour=0, minute=0, second=0, microsecond=0)
ax3.set_xticks([tick_date + timedelta(days=n) for n in range(1,15)])
ax3.set_xlim([dst['time'][0], dst['time'][-1]])
myformat = DateFormatter('%a\n%b %d')
ax3.xaxis.set_major_formatter(myformat)
ax1.tick_params(axis='both', which='major', labelsize=14)
ax2.tick_params(axis='both', which='major', labelsize=14)
ax3.tick_params(axis='both', which='major', labelsize=14)
plt.subplots_adjust(hspace=0.)
ax1.set_title("Helio4Cast Geomagnetic Activity Forecast, {} UTC".format(timestamp.strftime("%Y-%m-%d %H:%M")), pad=20)
pltcfg.group_info_text_small()
plt.savefig("predstorm_pretty.png")
# To cut the final version:
# convert predstorm_pretty.png -crop 1420x1000+145+30 predstorm_pretty_cropped.png
def plot_stereo_dscovr_comparison(stam, dism, dst, timestamp=None, look_back=20, outfile=None, **kwargs):
"""Plots the last days of STEREO-A and DSCOVR data for comparison alongside
the predicted and real Dst.
Parameters
==========
stam : predstorm.SatData
    Object containing minute STEREO-A data.
dism : predstorm.SatData
    Object containing minute DSCOVR data.
dst : predstorm.SatData
    Object containing Kyoto Dst data.
timestamp : datetime obj
    Time for last datapoint in plot.
look_back : float (default=20)
    Number of days in the past to plot.
**kwargs : ...
    See config.plotting for variables that can be tweaked.
Returns
=======
plt.savefig : .png file
File saved to XXX
"""
if timestamp is None:
    timestamp = datetime.utcnow()
if outfile is None:
    outfile = 'sta_dsc_comparison_{}.png'.format(datetime.strftime(timestamp, "%Y-%m-%dT%H:%M"))
figsize = kwargs.get('figsize', pltcfg.figsize)
lw = kwargs.get('lw', pltcfg.lw)
fs = kwargs.get('fs', pltcfg.fs)
date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
c_dst = kwargs.get('c_dst', pltcfg.c_dst)
c_dis = kwargs.get('c_dis', pltcfg.c_dis)
c_sta = kwargs.get('c_sta', pltcfg.c_sta)
c_sta_dst = kwargs.get('c_sta_dst', pltcfg.c_sta_dst)
ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend)
fs_ylabel = kwargs.get('fs_ylabel', pltcfg.fs_ylabel)
# READ DATA:
# ----------
# TODO: It would be faster to read archived hourly data rather than interped minute data...
logger.info("plot_stereo_dscovr_comparison: Reading satellite data")
# Get estimate of time diff:
stam.shift_time_to_L1()
sta = stam.make_hourly_data()
sta.interp_nans()
dis = dism.make_hourly_data()
dis.interp_nans()
# CALCULATE PREDICTED DST:
# ------------------------
sta.convert_RTN_to_GSE().convert_GSE_to_GSM()
dst_pred = sta.make_dst_prediction()
# PLOT:
# -----
# Set style:
sns.set_context(pltcfg.sns_context)
sns.set_style(pltcfg.sns_style)
plotstart = timestamp - timedelta(days=look_back)
plotend = timestamp
# Make figure object:
fig = plt.figure(1,figsize=figsize)
axes = []
# SUBPLOT 1: Total B-field and Bz
# -------------------------------
ax1 = fig.add_subplot(411)
axes.append(ax1)
plt.plot_date(dis['time'], dis['bz'], '-', c=c_dis, linewidth=lw, label='DSCOVR')
plt.plot_date(sta['time'], sta['bz'], '-', c=c_sta, linewidth=lw, label='STEREO-A')
# Indicate 0 level for Bz
plt.plot_date([plotstart,plotend], [0,0],'--k', alpha=0.5, linewidth=1)
plt.ylabel('Magnetic field Bz [nT]', fontsize=fs_ylabel)
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
bplotmax=np.nanmax(np.concatenate((dis['bz'], sta['bz'])))+5
bplotmin=np.nanmin(np.concatenate((dis['bz'], sta['bz'])))-5
plt.ylim(bplotmin, bplotmax)
plt.legend(loc=2,ncol=4,fontsize=fs_legend)
plt.title('DSCOVR and STEREO-A solar wind projected to L1 for '+ datetime.strftime(timestamp, "%Y-%m-%d %H:%M")+ ' UT', fontsize=16)
# SUBPLOT 2: Solar wind speed
# ---------------------------
ax2 = fig.add_subplot(412)
axes.append(ax2)
plt.plot_date(dis['time'], dis['speed'], '-', c=c_dis, linewidth=lw)
plt.plot_date(sta['time'], sta['speed'], '-', c=c_sta, linewidth=lw)
plt.ylabel(r'Speed $\mathregular{[km \ s^{-1}]}$', fontsize=fs_ylabel)
# Add speed levels:
pltcfg.plot_speed_lines(xlims=[plotstart, plotend])
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
vplotmax=np.nanmax(np.concatenate((dis['speed'], sta['speed'])))+100
vplotmin=np.nanmin(np.concatenate((dis['speed'], sta['speed'])))-50
plt.ylim(vplotmin, vplotmax)
# SUBPLOT 3: Solar wind density
# -----------------------------
ax3 = fig.add_subplot(413)
axes.append(ax3)
# Plot solar wind density:
plt.plot_date(dis['time'], dis['density'], '-', c=c_dis, linewidth=lw)
plt.plot_date(sta['time'], sta['density'], '-', c=c_sta, linewidth=lw)
plt.ylabel(r'Density $\mathregular{[ccm^{-3}]}$', fontsize=fs_ylabel)
# For y limits check where the maximum and minimum are for DSCOVR and STEREO taken together:
plt.ylim([0, np.nanmax(np.concatenate((dis['density'], sta['density']))) + 10])
# SUBPLOT 4: Actual and predicted Dst
# -----------------------------------
ax4 = fig.add_subplot(414)
axes.append(ax4)
# Observed Dst Kyoto (past):
plt.plot_date(dst['time'], dst['dst'],'o', c=c_dst, label='Observed Dst', ms=ms_dst)
plt.plot_date(sta['time'], dst_pred['dst'],'-', c=c_sta_dst, label='Predicted Dst', lw=lw)
# Add generic error bars of +/-15 nT:
error=15
plt.fill_between(sta['time'], dst_pred['dst']-error, dst_pred['dst']+error, alpha=0.2,
label='Error for high speed streams')
# Label plot with geomagnetic storm levels
pltcfg.plot_dst_activity_lines(xlims=[plotstart, plotend])
dstplotmin = np.nanmin(np.concatenate((dst['dst'], dst_pred['dst']))) - 10
dstplotmax = np.nanmax(np.concatenate((dst['dst'], dst_pred['dst']))) + 10
plt.ylim([dstplotmin, dstplotmax])
plt.legend(loc=2,ncol=4,fontsize=fs_legend)
# GENERAL FORMATTING
# ------------------
for ax in axes:
    ax.set_xlim([plotstart,plotend])
    ax.tick_params(axis="x", labelsize=fs)
    ax.tick_params(axis="y", labelsize=fs)
    # Dates on x-axes:
    myformat = mdates.DateFormatter('%b %d %Hh')
    ax.xaxis.set_major_formatter(myformat)
plt.savefig(outfile)
logger.info("Plot saved as {}".format(outfile))
plt.close()
return
def plot_dst_comparison(stam, dism, dst, timestamp=None, look_back=20, dst_method='temerin_li_2006', outfile=None, **kwargs):
"""Plots the last days of STEREO-A and DSCOVR data for comparison alongside
the predicted and real Dst.
Parameters
==========
stam : predstorm.SatData
    Object containing minute STEREO-A data.
dism : predstorm.SatData
    Object containing minute DSCOVR data.
dst : predstorm.SatData
    Object containing hourly Kyoto Dst data.
timestamp : datetime obj
    Time for last datapoint in plot.
look_back : float (default=20)
    Number of days in the past to plot.
**kwargs : ...
    See config.plotting for variables that can be tweaked.
Returns
=======
plt.savefig : .png file
File saved to XXX
"""
if timestamp is None:
    timestamp = datetime.utcnow()
if outfile is None:
    outfile = 'dst_comparison_{}.png'.format(datetime.strftime(timestamp, "%Y-%m-%dT%H:%M"))
figsize = kwargs.get('figsize', pltcfg.figsize)
lw = kwargs.get('lw', pltcfg.lw)
fs = kwargs.get('fs', pltcfg.fs)
date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
c_dst = kwargs.get('c_dst', pltcfg.c_dst)
c_dis = kwargs.get('c_dis', pltcfg.c_dis)
c_dis_dst = kwargs.get('c_dis_dst', pltcfg.c_dis_dst)
c_sta_dst = kwargs.get('c_sta_dst', pltcfg.c_sta_dst)
c_sta = kwargs.get('c_sta', pltcfg.c_sta)
ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend)
fs_title = kwargs.get('fs_title', pltcfg.fs_title)
# PREPARE DATA:
# -------------
# TODO: It would be faster to read archived hourly data rather than interped minute data...
logger.info("plot_dst_comparison: Preparing satellite data")
# Correct for STEREO-A position:
stam.shift_time_to_L1()
sta = stam.make_hourly_data()
sta = sta.cut(starttime=timestamp-timedelta(days=look_back), endtime=timestamp).interp_nans()
dis = dism.make_hourly_data()
dis.interp_nans()
# CALCULATE PREDICTED DST:
# ------------------------
sta.convert_RTN_to_GSE().convert_GSE_to_GSM()
dst_h = dst.interp_to_time(sta['time'])
dis = dis.interp_to_time(sta['time'])
dst_sta = sta.make_dst_prediction(method=dst_method)
dst_dis = dis.make_dst_prediction(method=dst_method)
# PLOT:
# -----
# Set style:
sns.set_context(pltcfg.sns_context)
sns.set_style(pltcfg.sns_style)
plotstart = timestamp - timedelta(days=look_back)
plotend = timestamp
# Make figure object:
fig = plt.figure(1, figsize=figsize)
axes = []
# SUBPLOT 1: Actual and predicted Dst
# -----------------------------------
ax1 = fig.add_subplot(411)
axes.append(ax1)
# Observed Dst Kyoto (past):
plt.plot_date(dst['time'], dst['dst'], 'o', c=c_dst, label='Observed Dst', ms=ms_dst)
plt.plot_date(sta['time'], dst_sta['dst'],'-', c=c_sta_dst, label='Predicted Dst (STEREO-A)', linewidth=lw)
plt.plot_date(dis['time'], dst_dis['dst'],'-', c=c_dis_dst, label='Predicted Dst (DSCOVR)', linewidth=lw)
# Add generic error bars of +/-15 nT:
error=15
plt.fill_between(sta['time'], dst_sta['dst']-error, dst_sta['dst']+error, facecolor=c_sta_dst, alpha=0.2, label='Error')
plt.fill_between(dis['time'], dst_dis['dst']-error, dst_dis['dst']+error, facecolor=c_dis_dst, alpha=0.2, label='Error')
# Label plot with geomagnetic storm levels
pltcfg.plot_dst_activity_lines(xlims=[plotstart, plotend])
dstplotmin = np.nanmin(np.concatenate((dst_sta['dst'], dst_dis['dst']))) - 10
dstplotmax = np.nanmax(np.concatenate((dst_sta['dst'], dst_dis['dst']))) + 10
plt.ylim([dstplotmin, dstplotmax])
plt.title("Dst(real) vs Dst(predicted)", fontsize=fs_title)
# SUBPLOT 2: Actual vs predicted Dst STEREO
# -----------------------------------------
diff_sta = dst_h['dst'] - dst_sta['dst']
diff_dis = dst_h['dst'] - dst_dis['dst']
if np.nanmax((np.abs(dstplotmin), dstplotmax)) > 50:
    maxval = np.nanmax((np.abs(dstplotmin), dstplotmax))
else:
    maxval = 50.
ax2 = fig.add_subplot(412)
axes.append(ax2)
# Difference between observed and STEREO-A-predicted Dst:
gradient_fill(sta['time'], dst_h['dst']-dst_sta['dst'], maxval=maxval, ls='-', c='k', label='Dst(Kyoto) - Dst(STEREO-A-pred)', ms=0, lw=lw)
# SUBPLOT 3: Actual vs predicted Dst DSCOVR
# -----------------------------------------
ax3 = fig.add_subplot(413)
axes.append(ax3)
# Difference between observed and DSCOVR-predicted Dst:
gradient_fill(dis['time'], dst_h['dst']-dst_dis['dst'], maxval=maxval, ls='-', c='k', label='Dst(Kyoto) - Dst(DSCOVR-pred)', ms=0, lw=lw)
# SUBPLOT 4: Predicted vs predicted Dst
# -------------------------------------
ax4 = fig.add_subplot(414)
axes.append(ax4)
# Difference between the two predictions:
gradient_fill(dis['time'], dst_dis['dst']-dst_sta['dst'], maxval=maxval, ls='-', c='k', label='Dst(DSCOVR-pred) - Dst(STEREO-A-pred)', ms=0, lw=lw)
# GENERAL FORMATTING
# ------------------
for ax in axes:
    ax.set_xlim([plotstart,plotend])
    ax.tick_params(axis="x", labelsize=fs)
    ax.tick_params(axis="y", labelsize=fs)
    ax.legend(loc=2, ncol=5, fontsize=fs_legend)
    # Dates on x-axes:
    myformat = mdates.DateFormatter(date_fmt)
    ax.xaxis.set_major_formatter(myformat)
plt.savefig(outfile)
logger.info("Plot saved as {}".format(outfile))
plt.close()
return
def plot_dst_vs_persistence_model(stam, dism, dpmm, dst, t_syn=27.27, dst_method='temerin_li_2006', timestamp=None, look_back=20, outfile=None, **kwargs):
"""Plots the last days of STEREO-A and DSCOVR data for comparison alongside
the predicted and real Dst.
Parameters
==========
stam : predstorm.SatData
    Object containing minute STEREO-A data.
dism : predstorm.SatData
    Object containing minute DSCOVR data.
dpmm : predstorm.SatData
    Object containing minute DSCOVR data for the persistence model.
dst : predstorm.SatData
    Object containing hourly Kyoto Dst data.
t_syn : float (default=27.27)
    Synodic solar rotation period in days used by the persistence model.
dst_method : str (default='temerin_li_2006')
    Dst prediction method to apply.
timestamp : datetime obj
    Time for last datapoint in plot.
look_back : float (default=20)
    Number of days in the past to plot.
**kwargs : ...
    See config.plotting for variables that can be tweaked.
Returns
=======
plt.savefig : .png file
File saved to XXX
"""
if timestamp is None:
    timestamp = datetime.utcnow()
if outfile is None:
    outfile = 'dst_comparison_{}.png'.format(datetime.strftime(timestamp, "%Y-%m-%dT%H:%M"))
figsize = kwargs.get('figsize', pltcfg.figsize)
lw = kwargs.get('lw', pltcfg.lw)
fs = kwargs.get('fs', pltcfg.fs)
date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
c_dst = kwargs.get('c_dst', pltcfg.c_dst)
c_dis = kwargs.get('c_dis', pltcfg.c_dis)
c_dis_dst = kwargs.get('c_dis_dst', pltcfg.c_dis_dst)
c_sta_dst = kwargs.get('c_sta_dst', pltcfg.c_sta_dst)
c_sta = kwargs.get('c_sta', pltcfg.c_sta)
ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend) + 2
fs_title = kwargs.get('fs_title', pltcfg.fs_title) + 2
# PREPARE DATA:
# -------------
# TODO: It would be faster to read archived hourly data rather than interped minute data...
logger.info("plot_dst_comparison: Preparing satellite data")
# Correct for STEREO-A position:
stam.shift_time_to_L1()
stam['bx'], stam['by'], stam['bz'] = stam['br'], -stam['bt'], stam['bn']
sta = stam.make_hourly_data()
sta = sta.cut(starttime=timestamp-timedelta(days=look_back), endtime=timestamp).interp_nans()
# DSCOVR
#dis = dism.make_hourly_data()
dism.interp_nans()
# Persistence Model
#dpm = dpmm.make_hourly_data()
dpmm.interp_nans()
# CALCULATE PREDICTED DST:
# ------------------------
#sta.convert_RTN_to_GSE().convert_GSE_to_GSM()
dst_h = dst.interp_to_time(sta['time'])
dis = dism.interp_to_time(sta['time'])
dpm = dpmm.interp_to_time(sta['time'])
dst_sta = sta.make_dst_prediction(method=dst_method)
dst_dis = dis.make_dst_prediction(method=dst_method)
dst_dpm = dpm.make_dst_prediction(method=dst_method)
# PLOT:
# -----
# Set style:
sns.set_context(pltcfg.sns_context)
sns.set_style(pltcfg.sns_style)
plotstart = timestamp - timedelta(days=look_back)
plotend = timestamp
# Make figure object:
fig = plt.figure(1, figsize=figsize)
axes = []
score_xpos, score_ypos = 0.80, 0.73
# SUBPLOT 1: Actual and predicted Dst
# -----------------------------------
ax1 = fig.add_subplot(411)
axes.append(ax1)
# Observed Dst Kyoto (past):
plt.plot_date(dst['time'], dst['dst'], 'o', c=c_dst, label='Observed Dst', ms=ms_dst)
plt.plot_date(sta['time'], dst_sta['dst'],'-', c=c_sta_dst, label='Predicted Dst (STEREO-A)', linewidth=lw)
plt.plot_date(dis['time'], dst_dis['dst'],'-', c=c_dis_dst, label='Predicted Dst (DSCOVR)', linewidth=lw)
plt.plot_date(dis['time'], dst_dpm['dst'],'-', c='r', label='Dst (DSCOVR persistence model)', linewidth=lw)
# Add generic error bars of +/-15 nT:
error=15
plt.fill_between(sta['time'], dst_sta['dst']-error, dst_sta['dst']+error, facecolor=c_sta_dst, alpha=0.2, label='Error')
#plt.fill_between(dis['time'], dst_dis['dst']-error, dst_dis['dst']+error, facecolor=c_dis_dst, alpha=0.2, label='Error')
# Label plot with geomagnetic storm levels
pltcfg.plot_dst_activity_lines(xlims=[plotstart, plotend])
dstplotmin = np.nanmin(np.concatenate((dst_sta['dst'], dst_dis['dst'], dst_dpm['dst']))) - 10
dstplotmax = np.nanmax(np.concatenate((dst_sta['dst'], dst_dis['dst'], dst_dpm['dst']))) + 10
plt.ylim([dstplotmin, dstplotmax])
plt.title("Dst(real) vs Dst(predicted) for {} - {} days".format(timestamp.strftime("%Y-%m-%d %H:%M"), look_back), fontsize=fs_title)
# SUBPLOT 2: Actual vs predicted Dst DSCOVR
# -----------------------------------------
ax2 = fig.add_subplot(412)
axes.append(ax2)
if np.nanmax((np.abs(dstplotmin), dstplotmax)) > 50:
    maxval = np.nanmax((np.abs(dstplotmin), dstplotmax))
else:
    maxval = 50.
# Difference between observed and DSCOVR-predicted Dst:
gradient_fill(dis['time'], dst_h['dst']-dst_dis['dst'], maxval=maxval, ls='-', c='k', label='Dst(Kyoto) - Dst(DSCOVR-pred)', ms=0, lw=lw)
r2 = np.corrcoef(dst_h['dst'], dst_dis['dst'])[0][1]**2   # squared Pearson correlation
mae = np.mean(np.abs(dst_h['dst']-dst_dis['dst']))
ax2.annotate(r'$R^2 = {:.2f}$'.format(r2)+'\n'+r'$MAE = {:.1f}$ nT'.format(mae), xy=(score_xpos, score_ypos),
xycoords='axes fraction', size=fs_title-2)
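The skill scores annotated in these panels can be sketched stand-alone (the `dst_scores` helper is hypothetical, not part of predstorm). Note that a true $R^2$ requires squaring the Pearson coefficient from `np.corrcoef`, and MAE is the mean absolute error in nT:

```python
import numpy as np

def dst_scores(obs, pred):
    """Squared Pearson correlation and mean absolute error (nT)."""
    r = np.corrcoef(obs, pred)[0, 1]
    mae = np.mean(np.abs(obs - pred))
    return r**2, mae

obs = np.array([0., -10., -20., -30.])    # observed hourly Dst
pred = np.array([0., -12., -18., -33.])   # predicted hourly Dst
r2, mae = dst_scores(obs, pred)
```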
# SUBPLOT 3: Actual vs predicted Dst STEREO
# -----------------------------------------
ax3 = fig.add_subplot(413)
axes.append(ax3)
# Difference between observed and STEREO-A-predicted Dst:
gradient_fill(sta['time'], dst_h['dst']-dst_sta['dst'], maxval=maxval, ls='-', c='k', label='Dst(Kyoto) - Dst(STEREO-A-pred)', ms=0, lw=lw)
r2 = np.corrcoef(dst_h['dst'], dst_sta['dst'])[0][1]**2   # squared Pearson correlation
mae = np.mean(np.abs(dst_h['dst']-dst_sta['dst']))
ax3.annotate(r'$R^2 = {:.2f}$'.format(r2)+'\n'+r'$MAE = {:.1f}$ nT'.format(mae), xy=(score_xpos, score_ypos),
xycoords='axes fraction', size=fs_title-2)
# SUBPLOT 4: Actual vs persistence model Dst
# ------------------------------------------
ax4 = fig.add_subplot(414)
axes.append(ax4)
# Difference between observed and persistence-model Dst:
gradient_fill(dis['time'], dst_h['dst']-dst_dpm['dst'], maxval=maxval, ls='-', c='k', label='Dst(Kyoto) - Dst(DSCOVR pers. model)', ms=0, lw=lw)
r2 = np.corrcoef(dst_h['dst'], dst_dpm['dst'])[0][1]**2   # squared Pearson correlation
mae = np.mean(np.abs(dst_h['dst']-dst_dpm['dst']))
ax4.annotate(r'$R^2 = {:.2f}$'.format(r2)+'\n'+r'$MAE = {:.1f}$ nT'.format(mae), xy=(score_xpos, score_ypos),
xycoords='axes fraction', size=fs_title-2)
# GENERAL FORMATTING
# ------------------
for ax in axes:
    ax.set_xlim([plotstart,plotend])
    ax.tick_params(axis="x", labelsize=fs)
    ax.tick_params(axis="y", labelsize=fs)
    ax.legend(loc=2, ncol=5, fontsize=fs_legend)
    # Dates on x-axes:
    myformat = mdates.DateFormatter(date_fmt)
    ax.xaxis.set_major_formatter(myformat)
plt.savefig(outfile)
logger.info("Plot saved as {}".format(outfile))
plt.close()
return
def plot_indices(dism, timestamp=None, look_back=20, outfile=None, **kwargs):
"""
Plots L1 solar wind data and the indices derived from it: total B-field
and Bz (top), followed by the predicted Dst, Kp, Newell coupling and
aurora power.
Parameters
==========
dism : predstorm.SatData
    Object containing minute satellite L1 data.
timestamp : datetime obj
    Time for last datapoint in plot.
look_back : float (default=20)
    Number of days in the past to plot.
**kwargs : ...
    See config.plotting for variables that can be tweaked.
Returns
=======
plt.savefig : .png file
File saved to XXX
"""
if timestamp is None:
    timestamp = datetime.utcnow()
if outfile is None:
    outfile = 'indices_{}.png'.format(datetime.strftime(timestamp, "%Y-%m-%dT%H:%M"))
figsize = kwargs.get('figsize', pltcfg.figsize)
lw = kwargs.get('lw', pltcfg.lw)
fs = kwargs.get('fs', pltcfg.fs)
date_fmt = kwargs.get('date_fmt', pltcfg.date_fmt)
c_dst = kwargs.get('c_dst', pltcfg.c_dst)
c_dis = kwargs.get('c_dis', pltcfg.c_dis)
c_dis_dst = kwargs.get('c_dis_dst', pltcfg.c_dis_dst)
c_ec = kwargs.get('c_ec', pltcfg.c_ec)
c_kp = kwargs.get('c_kp', pltcfg.c_kp)
c_aurora = kwargs.get('c_aurora', pltcfg.c_aurora)
ms_dst = kwargs.get('ms_dst', pltcfg.ms_dst)
fs_legend = kwargs.get('fs_legend', pltcfg.fs_legend)
fs_ylabel = kwargs.get('fs_legend', pltcfg.fs_ylabel)
fs_title = kwargs.get('fs_title', pltcfg.fs_title)
# READ DATA:
# ----------
# TODO: It would be faster to read archived hourly data rather than interped minute data...
logger.info("plot_indices: Preparing satellite data")
# Get estimate of time diff:
# Read DSCOVR data:
dis = dism.make_hourly_data()
dis.interp_nans()
dst = ps.get_past_dst(filepath="data/dstarchive/WWW_dstae00016185.dat",
starttime=timestamp-timedelta(days=look_back),
endtime=timestamp)
# Calculate Dst from prediction:
dst_dis = dis.make_dst_prediction()
# Kp:
kp_dis = dis.make_kp_prediction()
# Newell coupling ec:
ec_dis = dis.get_newell_coupling()
# Aurora power:
aurora_dis = dis.make_aurora_power_prediction()
# PLOT:
# -----
# Set style:
sns.set_context(pltcfg.sns_context)
sns.set_style(pltcfg.sns_style)
# Make figure object:
fig = plt.figure(1, figsize=figsize)
axes = []
timeutc = mdates.date2num(timestamp)
plotstart = timestamp - timedelta(days=look_back)
plotend = timestamp
# SUBPLOT 1: Total B-field and Bz
# -------------------------------
ax1 = fig.add_subplot(511)
axes.append(ax1)
# Total B-field and Bz (DSCOVR)
plt.plot_date(dism['time'], dism['btot'],'-', c=c_dis, label='B total L1', linewidth=lw)
plt.plot_date(dism['time'], dism['bz'],'-', c=c_dis, alpha=0.5, label='Bz GSM L1', linewidth=lw)
# Indicate 0 level for Bz
plt.plot_date([plotstart,plotend], [0,0],'--k', alpha=0.5, linewidth=1)
plt.ylabel('Magnetic field [nT]', fontsize=fs_ylabel)
plt.ylim(np.nanmin(dism['bz'])-5, np.nanmax(dism['btot'])+5)
plt.title('DSCOVR data and derived indices for {}'.format(datetime.strftime(timestamp, "%Y-%m-%d %H:%M")), fontsize=fs_title)
# SUBPLOT 2: Actual and predicted Dst
# -----------------------------------
ax3 = fig.add_subplot(512)
axes.append(ax3)
# Observed Dst Kyoto (past):
plt.plot_date(dst['time'], dst['dst'],'o', c=c_dst, label='Observed Dst', markersize=ms_dst)
plt.ylabel('Dst [nT]', fontsize=fs_ylabel)
dstplotmax = np.nanmax(np.concatenate((dst['dst'], dst_dis['dst'])))+20
dstplotmin = np.nanmin(np.concatenate((dst['dst'], dst_dis['dst'])))-20
plt.ylim([dstplotmin, dstplotmax])
plt.plot_date(dst_dis['time'], dst_dis['dst'],'-', c=c_dis_dst, label='Predicted Dst (DSCOVR)', linewidth=lw)
error=15
plt.fill_between(dst_dis['time'], dst_dis['dst']-error, dst_dis['dst']+error, facecolor=c_dis_dst, alpha=0.2, label='Error')
# Label plot with geomagnetic storm levels
pltcfg.plot_dst_activity_lines(xlims=[plotstart, plotend])
    # SUBPLOT 3: Predicted Kp
    # -----------------------
    ax5 = fig.add_subplot(513)
    axes.append(ax5)
    # Plot Kp prediction (DSCOVR):
    plt.plot_date(kp_dis['time'], kp_dis['kp'], '-', c=c_kp, label='Predicted Kp (DSCOVR)', linewidth=lw)
    plt.ylabel(r'$\mathregular{K_p}$', fontsize=fs_ylabel)
plt.ylim([0., 10.])
# SUBPLOT 4: Newell Coupling
# --------------------------
ax2 = fig.add_subplot(514)
axes.append(ax2)
    # Plot Newell coupling (DSCOVR):
    plt.plot_date(ec_dis['time'], ec_dis['ec'], '-', c=c_ec, label='Newell coupling (DSCOVR)', linewidth=lw)
plt.ylabel('Newell coupling $ec$', fontsize=fs_ylabel)
plt.ylim([0., np.nanmax(ec_dis['ec'])*1.1])
# SUBPLOT 5: Aurora power
# -----------------------
ax4 = fig.add_subplot(515)
axes.append(ax4)
    # Plot aurora power prediction (DSCOVR):
    plt.plot_date(aurora_dis['time'], aurora_dis['aurora'], '-', c=c_aurora, label='Predicted aurora power (DSCOVR)', linewidth=lw)
plt.ylabel('Aurora power [?]', fontsize=fs_ylabel)
plt.ylim([0., np.nanmax(aurora_dis['aurora'])*1.1])
# GENERAL FORMATTING
# ------------------
for ax in axes:
ax.set_xlim([plotstart,plotend])
ax.tick_params(axis="x", labelsize=fs)
ax.tick_params(axis="y", labelsize=fs)
ax.legend(loc=2,ncol=4,fontsize=fs_legend)
# Dates on x-axes:
myformat = mdates.DateFormatter(date_fmt)
ax.xaxis.set_major_formatter(myformat)
plt.savefig(outfile)
    logger.info("Plot saved as {}".format(outfile))
plt.close()
return
# =======================================================================================
# --------------------------- EXTRA FUNCTIONS -------------------------------------------
# =======================================================================================
def gradient_fill(x, y, ax=None, maxval=None, **kwargs):
"""
Plot a line with a linear alpha gradient filled beneath it.
Adapted from https://stackoverflow.com/a/29331211.
Parameters
----------
x, y : array-like
The data values of the line.
ax : a matplotlib Axes instance
The axes to plot on. If None, the current pyplot axes will be used.
    maxval : float
        Maximum absolute value (+/-) in plots for gradient scaling.
Additional arguments are passed on to matplotlib's ``plot`` function.
Returns
-------
line : a Line2D instance
The line plotted.
im : an AxesImage instance
The transparent gradient clipped to just the area beneath the curve.
"""
if ax is None:
ax = plt.gca()
line, = ax.plot_date(x, y, **kwargs)
zorder = line.get_zorder()
alpha = line.get_alpha()
alpha = 1.0 if alpha is None else alpha
    # Default the gradient scaling to the data range if no maximum was given:
    maxval = np.nanmax(np.abs(y)) if maxval is None else maxval
z_up, z_down = np.empty((100, 1, 4), dtype=float), np.empty((100, 1, 4), dtype=float)
rgb_b = mcolors.colorConverter.to_rgb('b')
rgb_r = mcolors.colorConverter.to_rgb('r')
z_down[:,:,:3] = rgb_r
z_down[:,:,-1] = np.linspace(0, alpha, 100)[:,None]
z_up[:,:,:3] = rgb_b
z_up[:,:,-1] = np.linspace(0, alpha, 100)[:,None]
# Fill above zero:
xmin, xmax, ymin, ymax = x.min(), x.max(), 0., maxval
im = ax.imshow(z_up, aspect='auto', extent=[xmin, xmax, ymin, ymax],
origin='lower', zorder=zorder)
xy = np.column_stack([x, y])
xy = np.vstack([[xmin, ymin], xy, [xmax, ymin], [xmin, ymin]])
clip_path = Polygon(xy, facecolor='none', edgecolor='none', closed=True)
ax.add_patch(clip_path)
im.set_clip_path(clip_path)
# Fill below zero:
xmin, xmax, ymin, ymax = x.min(), x.max(), -maxval, 0.
im = ax.imshow(z_down, aspect='auto', extent=[xmin, xmax, ymin, ymax],
origin='upper', zorder=zorder)
xy = np.column_stack([x, y])
#xy = np.vstack([[xmin, ymin], xy, [xmax, ymin], [xmin, ymin]])
xy = np.vstack([[xmin, 0.], xy, [xmax, 0.], [xmin, 0.]])
clip_path = Polygon(xy, facecolor='none', edgecolor='none', closed=True)
ax.add_patch(clip_path)
im.set_clip_path(clip_path)
ax.autoscale(True)
return line, im
def plot_all(timestamp=None, plotdir="plots", download=True):
    """Makes plots of a time range ending with timestamp using all functions."""
    from datetime import datetime, timedelta
    import os

    import heliosat
    import predstorm as ps

    logger = ps.init_logging(verbose=True)
    if not os.path.isdir(plotdir):
        os.mkdir(plotdir)
    if timestamp is None:
        timestamp = datetime.utcnow()
    # timestamp = datetime(2019, 8, 8) - timedelta(days=26*2)  # previous hard-coded test date
    look_back = 26
lag_L1, lag_r = ps.get_time_lag_wrt_earth(timestamp=timestamp, satname='STEREO-A')
est_timelag = lag_L1 + lag_r
logger.info("Plotting all plots...")
# STEREO DATA
stam = ps.get_stereo_beacon_data(starttime=timestamp-timedelta(days=look_back+est_timelag+0.5),
endtime=timestamp)
stam = stam.interp_nans(keys=['time'])
stam.load_positions()
# DSCOVR DATA
if timestamp < datetime(2019,6,23):
dism = ps.get_dscovr_data(starttime=timestamp-timedelta(days=look_back),
endtime=timestamp)
else:
dism = ps.get_omni_data(starttime=timestamp-timedelta(days=look_back),
endtime=timestamp)
dism.h['HeliosatObject'] = heliosat.DSCOVR()
dism.load_positions(l1_corr=True)
# KYOTO DST
dst = ps.get_omni_data(starttime=timestamp-timedelta(days=look_back),
endtime=timestamp, download=False)
# dst = ps.get_past_dst(filepath="data/dstarchive/WWW_dstae00019594.dat",
# starttime=timestamp-timedelta(days=look_back),
# endtime=timestamp)
# PERSISTENCE MODEL
t_syn = 26.27
if timestamp < datetime(2019,6,10):
dpmm = ps.get_dscovr_data(starttime=timestamp-timedelta(days=t_syn)-timedelta(days=look_back),
endtime=timestamp-timedelta(days=t_syn))
else:
dpmm = ps.get_omni_data(starttime=timestamp-timedelta(days=t_syn)-timedelta(days=look_back),
endtime=timestamp-timedelta(days=t_syn))
dpmm.h['HeliosatObject'] = heliosat.DSCOVR()
    # Shift the persistence data forward by one synodic rotation (times are matplotlib day numbers):
    dpmm['time'] = dpmm['time'] + t_syn
outfile = os.path.join(plotdir, "all_dst_{}day_plot.png".format(look_back))
ps.plot.plot_dst_vs_persistence_model(stam, dism, dpmm, dst, look_back=look_back,
timestamp=timestamp, outfile=outfile)
logger.info("\n-------------------------\nDst comparison\n-------------------------")
outfile = os.path.join(plotdir, "dst_prediction_{}day_plot.png".format(look_back))
plot_dst_comparison(stam, dism, dst, timestamp=timestamp, look_back=look_back, outfile=outfile)
logger.info("\n-------------------------\nSTEREO-A vs DSCOVR\n-------------------------")
outfile = os.path.join(plotdir, "stereoa_vs_dscovr_{}day_plot.png".format(look_back))
plot_stereo_dscovr_comparison(stam, dism, dst, timestamp=timestamp, look_back=look_back, outfile=outfile)
logger.info("\n-------------------------\nPredicted indices\n-------------------------")
outfile = os.path.join(plotdir, "indices_{}day_plot.png".format(look_back))
plot_indices(dism, timestamp=timestamp, look_back=look_back, outfile=outfile)
if __name__ == '__main__':
plot_all()


# =======================================================================================
# File: nhl/raw/stats/common.py (from devinsba/py-nhl)
# =======================================================================================
from dataclasses import dataclass
SERVER_ADDRESS = "https://statsapi.web.nhl.com"
@dataclass(frozen=True)
class BaseResponse:
copyright: str
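As a quick, self-contained sketch of what `frozen=True` buys (the `Resp` class below is hypothetical, not part of this module): instances are immutable and hashable.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Resp:
    copyright: str

r = Resp("NHL")
try:
    r.copyright = "other"            # assignment is blocked on frozen dataclasses
except FrozenInstanceError:
    print("immutable")               # prints: immutable
print(hash(Resp("NHL")) == hash(r))  # frozen dataclasses are hashable -> True
```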


# =======================================================================================
# File: checkov/serverless/checks/layer/registry.py (from antonblr/checkov)
# =======================================================================================
from checkov.serverless.base_registry import ServerlessRegistry
layer_registry = ServerlessRegistry()


# =======================================================================================
# File: script_eater/__init__.py (from FoxtrotCore/script-eater)
# =======================================================================================
branch = "master"
version = "2.0.0"
from .script_eater import *


# =======================================================================================
# File: epsonprojector/devices/generic.py (from eieste/epsonProjector)
# =======================================================================================
from epsonprojector.devices.configurations.load import LoadConfiguration
# from epsonprojector.interfaces.generic import GenericInterface
from collections import namedtuple
ParsedResponse = namedtuple("ParsedResponse", ("command", "parameter", "status"))
class GenericDevice:
config_file = ""
_conf = None
def __init__(self, conn):
        # TODO: check if conn inherits from GenericInterface
        # if not isinstance(conn, epsonprojector.interfaces.GenericInterface):
        #     raise AttributeError("Invalid Interface")
self._conn = conn
# Load Config if not done yet
if not self._conf:
self.initialize_config()
def __getattr__(self, item):
"""
Intercept all method calls to implement them on projector commands
:param item: Name of called method
:return method: Return a wrapper method
"""
# Try to find command in config
cmd = self._conf.find_command(item)
# Wrapper Command
def set_command(*args, **kwargs):
# Try to find Parameters from wrapper called args in Command
parameter = self._conf.find_parameter(cmd["request_parameters"], args[0])
# Create a projector command based on previously collected information
command = self.build_command(cmd, parameter)
if command is False:
raise ValueError("Cant build command")
# Transmit Projector Command
answer = self.send(command)
return answer
return set_command
def build_command(self, command, parameter):
"""
Creates a command from Command and Parameter object
:param command: dict of command information from config
        :param parameter: dict of parameter information from config
:return any: Command to send via Interface
"""
raise NotImplementedError("Please implement this method")
def parse_command(self, command, parameter, *args, **kwargs):
"""
Try to parse responses from Projector
        :param command: Raw data returned from the projector
        :return ParsedResponse: Tuple with the parsed information
"""
raise NotImplementedError("Please implement this method")
def get_config_file(self):
"""
Get path to configfile
:return path: String path to configfile
"""
return self.config_file
def initialize_config(self):
"""
Initialize Config Loading
"""
self._conf = LoadConfiguration(self)
def connect(self):
# ToDo I dont know
pass
def send(self, command):
"""
Send command via Initialized Interface
:param command: Projector Command
:return: Parsed Projector answer
"""
answer = self._conn.send_command(command)
response = self.parse_command(answer)
return response
def read(self):
# ToDo I dont know
pass
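The `__getattr__`/`__call__` machinery above can be hard to follow in the abstract. Below is a self-contained sketch of the same dispatch pattern; the names (`Dispatcher`, `"PWR"`) are illustrative assumptions, not part of the epsonprojector API:

```python
class Dispatcher:
    """Turns unknown attribute lookups into command wrappers, like GenericDevice.__getattr__."""
    def __init__(self, commands):
        self._commands = commands

    def __getattr__(self, name):
        # Only called for attributes not found normally.
        if name not in self._commands:
            raise AttributeError(name)
        def wrapper(arg):
            # Build the wire command from the looked-up name and the argument.
            return "{} {}".format(self._commands[name], arg)
        return wrapper

d = Dispatcher({"power": "PWR"})
print(d.power("ON"))  # PWR ON
```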


# =======================================================================================
# File: tests/utils/compat.py (from mvas/apm-agent-python)
# =======================================================================================
def middleware_setting(django_version, middleware_list):
if django_version < (1, 10):
return {'MIDDLEWARE_CLASSES': middleware_list}
else:
return {'MIDDLEWARE': middleware_list, 'MIDDLEWARE_CLASSES': None}
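A minimal usage sketch of the version switch above. The function is restated so the snippet runs standalone, and the middleware path is a made-up example:

```python
def middleware_setting(django_version, middleware_list):
    if django_version < (1, 10):
        return {'MIDDLEWARE_CLASSES': middleware_list}
    else:
        return {'MIDDLEWARE': middleware_list, 'MIDDLEWARE_CLASSES': None}

# Tuple comparison is element-wise, so (1, 9) < (1, 10) while (2, 2) is not.
old = middleware_setting((1, 9), ['example.Middleware'])
new = middleware_setting((2, 2), ['example.Middleware'])
print(old)  # {'MIDDLEWARE_CLASSES': ['example.Middleware']}
print(new)  # {'MIDDLEWARE': ['example.Middleware'], 'MIDDLEWARE_CLASSES': None}
```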


# =======================================================================================
# File: python/linked_list/0023_merge_k_sorted_lists.py (from linshaoyong/leetcode)
# =======================================================================================
import heapq
class ListNode(object):
def __init__(self, val=0, next=None):
self.val = val
self.next = next
class CmpNode:
def __init__(self, node):
self.node = node
self.val = node.val
def __gt__(self, another):
return self.val > another.val
class Solution(object):
def mergeKLists(self, lists):
"""
:type lists: List[ListNode]
:rtype: ListNode
"""
h = []
for node in lists:
if node:
heapq.heappush(h, CmpNode(node))
if not h:
return None
head, prev = None, None
while h:
cmpn = heapq.heappop(h)
if not head:
head = cmpn.node
prev = head
else:
prev.next = cmpn.node
prev = cmpn.node
if cmpn.node.next:
heapq.heappush(h, CmpNode(cmpn.node.next))
return head
def test_merge_k_lists_1():
s = Solution()
a = ListNode(1, ListNode(4, ListNode(5)))
b = ListNode(1, ListNode(3, ListNode(4)))
c = ListNode(2, ListNode(6))
r = s.mergeKLists([a, b, c])
assert 1 == r.val
assert 1 == r.next.val
assert 2 == r.next.next.val
assert 3 == r.next.next.next.val
assert 4 == r.next.next.next.next.val
assert 4 == r.next.next.next.next.next.val
assert 5 == r.next.next.next.next.next.next.val
assert 6 == r.next.next.next.next.next.next.next.val
assert r.next.next.next.next.next.next.next.next is None
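As a sanity check of the heap-based k-way merge above, the same ordering can be reproduced on plain Python lists with the stdlib `heapq.merge` (shown only for comparison; the solution itself works on linked lists):

```python
import heapq

lists = [[1, 4, 5], [1, 3, 4], [2, 6]]
merged = list(heapq.merge(*lists))
print(merged)  # [1, 1, 2, 3, 4, 4, 5, 6]
```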


# =======================================================================================
# File: enki/modeltokenverify.py (from charlf/enkiWS)
# =======================================================================================
from google.appengine.ext.ndb import model
class EnkiModelTokenVerify( model.Model ):
token = model.StringProperty()
email = model.StringProperty()
user_id = model.IntegerProperty() # ndb user ID
time_created = model.DateTimeProperty( auto_now_add = True )
type = model.StringProperty( choices = [ 'register',
'passwordchange',
'emailchange',
'accountdelete',
'accountandpostsdelete',
'preventmultipost',
] )
auth_ids_provider = model.StringProperty() # store auth Id info for registration


# =======================================================================================
# File: MorseMain.py (from JEFMX/Codigo-Morse)
# =======================================================================================
# encoding: utf-8
from ObenedorDeEntrada import ObtenedorDeEntrada
from ProcesadorDeEntrada import ProcesadorDeEntrada
if __name__== "__main__":
entrada = ObtenedorDeEntrada()
procesador = ProcesadorDeEntrada()
    procesador.procesarEntrada(entrada.getEntrada())


# =======================================================================================
# File: test.py (from aqeelahmad/python-units-of-measure)
# =======================================================================================
import unittest
# (The lines below are "imported but unused", but that's ok
# unittest main runs all the imported modules.
# import all the test objects here, to run them
from tests.PhysicalQuantityTest import PhysicalQuantityTest
#from tests.UnitOfMeasureTest import UnitOfMeasureTest
unittest.main()


# =======================================================================================
# File: Advanced Topics/Decorators.py (from srp98/Python-Stuff)
# =======================================================================================
class Current:
def __init__(self):
self._voltage = 100000
@property
def voltage(self):
"""" Get the current voltage"""
return self._voltage
class Pizza(object):
def __init__(self):
self.toppings = []
def __call__(self, topping):
# when using '@instance_of_pizza' before a function def, the function gets passed onto 'topping'
self.toppings.append(topping())
def __repr__(self):
return str(self.toppings)
c1 = Current()
print(c1.voltage)
pizza = Pizza()
@pizza
def cheese():
return 'cheese'
@pizza
def sauce():
return 'sauce'
print(pizza)
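The same instance-as-decorator registry pattern, restated as a compact, self-contained sketch (the `Registry` class is hypothetical; unlike `Pizza.__call__` it also returns the function, so the decorated name stays usable):

```python
class Registry:
    """Collects the results of decorated functions, like the Pizza example above."""
    def __init__(self):
        self.items = []

    def __call__(self, func):
        # '@registry' passes the decorated function here; we store its result.
        self.items.append(func())
        return func

registry = Registry()

@registry
def topping():
    return 'cheese'

print(registry.items)  # ['cheese']
```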


# =======================================================================================
# File: transform/gdc/defaults.py (from ohsu-comp-bio/gen3-etl)
# =======================================================================================
DEFAULT_OUTPUT_DIR = 'output/gdc'
DEFAULT_EXPERIMENT_CODE = 'gdc'
DEFAULT_PROJECT_ID = 'smmart-gdc'


# =======================================================================================
# File: cowsay/lib/cows/mona_lisa.py (from Ovlic/cowsay_py)
# =======================================================================================
def Mona_lisa(thoughts, eyes, eye, tongue):
return f"""
{thoughts}
{thoughts}
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!>''''''<!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!'''''\` \`\`'!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!''\` ..... \`'!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!'\` . :::::' \`'!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!' . ' .::::' \`!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!' : \`\`\`\`\` \`!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!! .,cchcccccc,,. \`!!!!!!!!!!!!
!!!!!!!!!!!!!!! .-"?\$\$\$\$\$\$\$\$\$\$\$\$\$\$c, \`!!!!!!!!!!!
!!!!!!!!!!!!!! ,ccc\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$, \`!!!!!!!!!!
!!!!!!!!!!!!! z\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$;. \`!!!!!!!!!
!!!!!!!!!!!! <\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$:. \`!!!!!!!!
!!!!!!!!!!! \$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$h;:. !!!!!!!!
!!!!!!!!!!' \$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$h;. !!!!!!!
!!!!!!!!!' <\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$ !!!!!!!
!!!!!!!!' \`\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$F \`!!!!!!
!!!!!!!! c\$\$\$\$???\$\$\$\$\$\$\$P"" \"\"\"??????" !!!!!!
!!!!!!! \`"" .,.. "\$\$\$\$F .,zcr !!!!!!
!!!!!!! . dL .?\$\$\$ .,cc, .,z\$h. !!!!!!
!!!!!!!! <. \$\$c= <\$d\$\$\$ <\$\$\$\$=-=+"\$\$\$\$\$\$\$ !!!!!!
!!!!!!! d\$\$\$hcccd\$\$\$\$\$ d\$\$\$hcccd\$\$\$\$\$\$\$F \`!!!!!
!!!!!! ,\$\$\$\$\$\$\$\$\$\$\$\$\$\$h d\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$ \`!!!!!
!!!!! \`\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$<\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$' !!!!!
!!!!! \`\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$"\$\$\$\$\$\$\$\$\$\$\$\$\$P> !!!!!
!!!!! ?\$\$\$\$\$\$\$\$\$\$\$\$??\$c\`\$\$\$\$\$\$\$\$\$\$\$?>' \`!!!!
!!!!! \`?\$\$\$\$\$\$I7?"" ,\$\$\$\$\$\$\$\$\$?>>' !!!!
!!!!!. <<?\$\$\$\$\$\$c. ,d\$\$?\$\$\$\$\$F>>'' \`!!!
!!!!!! <i?\$P"??\$\$r--"?"" ,\$\$\$\$h;>'' \`!!!
!!!!!! \$\$\$hccccccccc= cc\$\$\$\$\$\$\$>>' !!!
!!!!! \`?\$\$\$\$\$\$F"\"\"\" \`"\$\$\$\$\$>>>'' \`!!
!!!!! "?\$\$\$\$\$cccccc\$\$\$\$??>>>>' !!
!!!!> "\$\$\$\$\$\$\$\$\$\$\$\$\$F>>>>'' \`!
!!!!! "\$\$\$\$\$\$\$\$???>''' !
!!!!!> \`"\"\"\"\" \`
!!!!!!; . \`
!!!!!!! ?h.
!!!!!!!! \$\$c,
!!!!!!!!> ?\$\$\$h. .,c
!!!!!!!!! \$\$\$\$\$\$\$\$\$hc,.,,cc\$\$\$\$\$
!!!!!!!!! .,zcc\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$
!!!!!!!!! .z\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$
!!!!!!!!! ,d\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$ .
!!!!!!!!! ,d\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$ !!
!!!!!!!!! ,d\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$ ,!'
!!!!!!!!> c\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$. !'
!!!!!!'' ,d\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$> '
!!!'' z\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$>
!' ,\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$> ..
z\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$' ;!!!!''\`
\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$F ,;;!'\`' .''
<\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$> ,;'\`' ,;
\`\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$F -' ,;!!'
"?\$\$\$\$\$\$\$\$\$\$?\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$F .<!!!''' <!
!> ""??\$\$\$?C3\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$"" ;!''' !!!
;!!!!;, \`"''""????\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$\$"" ,;-'' ',!
;!!!!<!!!; . \`"\"\"\"\"\"\"\"\"\"\" \`' ' '
!!!! ;!!! ;!!!!>;,;, .. ' . ' '
!!' ,;!!! ;'\`!!!!!!!!;!!!!!; . >' .'' ;
!!' ;!!'!';! !! !!!!!!!!!!!!! ' -'
<!! !! \`!;! \`!' !!!!!!!!!!<! .
\`! ;! ;!!! <' <!!!! \`!!! < /
\""" !> <!! ;' !!!!' !!';! ;'
! ! !!! ! \`!!! ;!! ! ' '
; \`! \`!! ,' !' ;!'
' /\`! ! < !! < '
/ ;! >;! ;>
!' ; !! '
' ;! > ! '
""" | 75.381579 | 116 | 0.027404 | 69 | 5,729 | 2.26087 | 0.405797 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000531 | 0.341944 | 5,729 | 76 | 117 | 75.381579 | 0.040849 | 0 | 0 | 0.054054 | 0 | 0 | 0.961431 | 0.440489 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013514 | false | 0 | 0 | 0.013514 | 0.027027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |


# =======================================================================================
# File: ticktick/managers/settings.py (from prnake/ticktick-py)
# =======================================================================================
class SettingsManager:
def __init__(self, client_class):
self._client = client_class
self.access_token = ''
def get_templates(self):
# https://api.dida365.com/api/v2/templates
pass
def get_user_settings(self):
# https://api.dida365.com/api/v2/user/preferences/settings?includeWeb=true
pass
| 25.214286 | 82 | 0.651558 | 43 | 353 | 5.093023 | 0.488372 | 0.091324 | 0.136986 | 0.173516 | 0.246575 | 0.246575 | 0.246575 | 0 | 0 | 0 | 0 | 0.02974 | 0.23796 | 353 | 13 | 83 | 27.153846 | 0.784387 | 0.320113 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.25 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
# monitoringProxy/utils.py (juliozinga/FIWARELab-monitoringAPI, Apache-2.0)

import time
from datetime import timedelta, tzinfo


def get_timestamp(origin_datetime=None):
    '''
    Return a UNIX timestamp from a datetime object

    :param origin_datetime: Origin date to convert to a timestamp
    :type origin_datetime: datetime
    :return: The corresponding UNIX timestamp for the date passed as a
        parameter, or the current timestamp if no parameter is given
    :rtype: int
    '''
    if not origin_datetime:
        return int(time.time())
    else:
        # datetime.timestamp() is portable; the original strftime("%s") is a
        # platform-specific extension that fails on Windows.
        return int(origin_datetime.timestamp())


class UTC(tzinfo):
    """
    Class representing a UTC tzinfo
    """
    ZERO = timedelta(0)

    def utcoffset(self, dt):
        return self.ZERO

    def tzname(self, dt):
        return "UTC"

    def dst(self, dt):
        return self.ZERO
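As a quick sanity check on the timestamp helper above, a minimal sketch using the stdlib `timezone.utc` (which plays the same role as the `UTC` class; the sample dates are invented):

```python
from datetime import datetime, timezone

# get_timestamp(dt) reduces to int(dt.timestamp()); for an aware datetime
# at the UNIX epoch the result is exactly 0.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
assert int(epoch.timestamp()) == 0

# One hour past the epoch is 3600 seconds.
one_hour = datetime(1970, 1, 1, 1, 0, 0, tzinfo=timezone.utc)
print(int(one_hour.timestamp()))  # 3600
```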
# sourcemon/model/servermodel.py (michaelimfeld/sourcemon, MIT)

"""
Database model for a sourcemod server
"""
from peewee import IntegerField, ForeignKeyField

from sourcemon.model.basemodel import BaseModel
from sourcemon.model.ipmodel import IPModel


class ServerModel(BaseModel):
    """
    Database model for a sourcemod server
    """
    ip = ForeignKeyField(IPModel)
    port = IntegerField()
# config/base.py (macic/pyboi, MIT)

import os

api_key = os.getenv('binance_key')
api_secret = os.getenv('binance_secret')
db_host = os.getenv('pqsl_db_host')
db_user = os.getenv('pqsl_db_user')
db_pw = os.getenv('pqsl_db_pw')
db_name = os.getenv('pqsl_db_name')
db_url = f'postgresql://{db_user}:{db_pw}@{db_host}/{db_name}'
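The `os.getenv` pattern above can be exercised in isolation; the environment values below are invented purely for demonstration and are not the real deployment settings:

```python
import os

# Hypothetical values, injected only for this demonstration.
os.environ['pqsl_db_host'] = 'localhost'
os.environ['pqsl_db_user'] = 'bot'
os.environ['pqsl_db_pw'] = 'secret'
os.environ['pqsl_db_name'] = 'trades'

# Same URL-building pattern as config/base.py above.
db_url = 'postgresql://{}:{}@{}/{}'.format(
    os.getenv('pqsl_db_user'),
    os.getenv('pqsl_db_pw'),
    os.getenv('pqsl_db_host'),
    os.getenv('pqsl_db_name'),
)
print(db_url)  # postgresql://bot:secret@localhost/trades
```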
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# __main__.py (mynameismon/kindle_clippings_manager, MIT)
"""This module provides RP Contacts entry point script."""
from src.__init__ import main

if __name__ == "__main__":
    main()
# Creat AGI-PRIME/creat_single_rule_match.py (agi-hub/AGI-PRIME-dataset, MIT)

import os
import json
import random
import shutil

import util

random.seed(2021)


def main():
    IST = 'Single-rule Learning'
    json_data_list = {}
    path = './data/'
    if not os.path.exists(path):
        os.mkdir(path)
    path = path + 'single_rule_match_train/'
    if not os.path.exists(path):
        os.mkdir(path)
    else:
        shutil.rmtree(path)
        os.mkdir(path)
    rules = ['number-progression', 'type-progression', 'size-progression', 'color-progression',
             'type-xor', 'size-xor', 'color-xor', 'position-xor',
             'type-or', 'size-or', 'color-or', 'position-or',
             'type-and', 'size-and', 'color-and', 'position-and']
    rules = set(rules)
    rules = sorted(rules)
    util.text_save(os.path.join(path, 'rule.txt'), rules)
    n = 8000 // 12 + 1
    for i in range(n):
        json_data_list = util.q0(path, json_data_list, back=False)
        json_data_list = util.q1(path, json_data_list, back=False)
        json_data_list = util.q2(path, json_data_list, back=False)
        json_data_list = util.q5(path, json_data_list, back=False)
        json_data_list = util.q6(path, json_data_list, back=False)
        json_data_list = util.q7(path, json_data_list, back=False)
        json_data_list = util.q8(path, json_data_list, back=False)
        json_data_list = util.q10(path, json_data_list, back=False)
        json_data_list = util.q11(path, json_data_list, back=False)
        json_data_list = util.q12(path, json_data_list, back=False)
        json_data_list = util.q13(path, json_data_list, back=False)
        json_data_list = util.q15(path, json_data_list, back=False)
        # json_data_list = util.q3(path, json_data_list, back=False)
        # json_data_list = util.q4(path, json_data_list, back=False)
        # json_data_list = util.q10(path, json_data_list, back=False)
        # json_data_list = util.q15(path, json_data_list, back=False)
    print(len(json_data_list))
    # Keep only the first 8000 samples.
    json_data_list = {k: v for (k, v) in json_data_list.items() if int(k) < 8000}
    print(len(json_data_list))
    json_data_dict = {IST: json_data_list}
    json_path = path + 'label.json'
    with open(json_path, "w") as f:
        json.dump(json_data_dict, f)
    print("Finished writing the file...")  # translated from Chinese: "加载入文件完成..."


if __name__ == '__main__':
    main()
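The truncation step near the end of `main()` is just a dict comprehension over numeric string keys; a small standalone illustration (the sample keys here are invented, standing in for the output of the `util.q*` helpers):

```python
# Keys are numeric strings, as produced by the helpers in the script above.
json_data_list = {str(k): 'sample-%d' % k for k in range(0, 12000, 4000)}
# keys: '0', '4000', '8000'

# Keep only entries whose numeric key is below 8000, as in the script.
kept = {k: v for (k, v) in json_data_list.items() if int(k) < 8000}
print(sorted(kept))  # ['0', '4000']
```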
# Aspect based Senti analysis/aspect_categorization/word_grouping/seq_matcher.py (MIT)

from difflib import SequenceMatcher


def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()


choices = ['', 'con i', 'battery untill', 'looks', 'touch', 'speed', 'dont', 'update i', 'situations', 'window', 'winner', 'ram management', 'feel', 'charge', 'usage', 'camera quality', 'damages', 'case', 'advantage', 'views', 'game', 'bilion day', 'fingerprint', 'front', 'bit', 'day', 'color', 'disply', 'similer', 'signal reception', 'quality game', 'quality mobile', 'signal', 'honor', 'emui', 'mode', 'output', 'flipkart', 'mode picture', 'deal', 'people', 'branding', 'noise', 'design', 'honor mobile', 'build quality', 'review', 'power savng', 'aperture mode', 'everything', 'finger', 'sensor', 'satisfy', 'camera clarity', 'power', 'use granuels', 'confusion', 'screen', 'use', 'update', 'efficiency', 'packing', 'sensors', 'bilion', 'front camera', 'heating', 'amount', 'backup', 'processor', 'software', 'load', 'features', 'criterias', 'battery', 'image', 'batry', 'dont use', 'everyone', 'delevery', 'shoots', 'quality', 'story', 'management', 'service', 'usage i', 'look cons', 'battery backup', 'camera', 'criteria', 'aperture', 'legs', 'batrry', 'mobiles', 'function', 'cricket', 'buy', 'delivery', 'phone', 'part', 'sound', 'look', 'camera cons', 'advantage charging', 'n', 'mobile', 'ui', 'today', 'problem', 'piece', 'display', 'compare', 'earphone', 'ram', 'life', 'doesn', 'camera awesome', 'guarantee', 'sound quality', 'make', 'get', 'nd', 'feature', 'note', 'amazing', 'speaker', 'build', 'speed network', 'android', 'charging', 'sim', 'okay', 'beauty', 'though', 'price', 'effect', 'jst', 'thankyou', 'device', 'mah', 'data', 'response', 'purchase', 'calls', 'i', 'light', 'nd sel', 'options', 'phone value', 'con', 'reception', 'time', 'rupees', 'notch']

i = 0
for choice in choices[:-1]:
    for check in choices[i + 1:]:
        if similar(choice, check) > 0.55:
            # Python 3 print calls (the original used Python 2 print statements)
            print("comparing similarity of " + choice + " and " + check)
            print(similar(choice, check))
    i += 1
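`SequenceMatcher.ratio()` returns a similarity score in [0, 1], which the loop above compares against 0.55; a tiny self-contained check (the word pair is chosen for illustration):

```python
from difflib import SequenceMatcher


def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()


# Identical strings score 1.0; disjoint strings score 0.0.
assert similar('battery', 'battery') == 1.0
assert similar('abc', 'xyz') == 0.0

# 'battery' vs 'batry': matching blocks 'bat' + 'ry' cover 5 characters,
# so ratio = 2 * 5 / (7 + 5) ≈ 0.833, above the 0.55 threshold.
print(round(similar('battery', 'batry'), 3))  # 0.833
```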
# macroPi/__init__.py (stefankablowski/macroPi, MIT)

from keys import record, replay
from store import store_key_events, load_key_events, __init__

arr = record()
print(arr)
store_key_events(arr)
for elem in arr:
    print(elem[0])
# libs/applibs/dialogs/__init__.py (flytrue/testapp, MIT)

# -*- coding: utf-8 -*-
'''
VKGroups
Copyright © 2010-2018 HeaTTheatR

For suggestions and questions:
<kivydevelopment@gmail.com>

This file is distributed under the terms of the same license
as the Kivy framework.
'''

from .selection import Selection
from .dialogs import card, dialog, dialog_progress, input_dialog
# AtCoder/ABC/000-159/ABC142_A.py (sireline/PyCode, MIT)

A = int(input())
# Fraction of odd numbers among 1..A
print(len([n for n in range(1, A + 1) if n % 2 != 0]) / A)
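The one-liner above counts the odd numbers in 1..A and divides by A; for example, with A = 5 the odds are 1, 3 and 5:

```python
A = 5
ratio = len([n for n in range(1, A + 1) if n % 2 != 0]) / A
print(ratio)  # 0.6
```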
# fitnessFolks/blog/urls.py (Programming-Club-Ahmedabad-University/wellness, MIT)

from django.urls import path

from .views import post_list_view, post_detail_view

urlpatterns = [
    path('blog/', post_list_view, name='blog'),
    path('blog/<slug:slug>/', post_detail_view, name='post_detail'),
]
# api/goods/migrations/0018_merge_20201127_1102.py (django-doctor/lite-api, MIT)

# Generated by Django 2.2.16 on 2020-11-27 11:02

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("goods", "0016_auto_20201123_0332"),
        ("goods", "0017_auto_20201124_1613"),
    ]

    operations = []
# mmtbx/scaling/__init__.py (hbrunie/cctbx_project, BSD-3-Clause-LBNL)

"""
Base module for Xtriage and related scaling functionality; this imports the
Boost.Python extensions into the local namespace, and provides core functions
for displaying the results of Xtriage.
"""

from __future__ import absolute_import, division, print_function
import cctbx.array_family.flex  # import dependency
from libtbx.str_utils import make_sub_header, make_header, make_big_header
from libtbx import slots_getstate_setstate
from six.moves import cStringIO as StringIO
import sys
import boost.python
from six.moves import range

ext = boost.python.import_ext("mmtbx_scaling_ext")
from mmtbx_scaling_ext import *


class data_analysis(slots_getstate_setstate):
    def show(self, out=sys.stdout, prefix=""):
        raise NotImplementedError()


class xtriage_output(slots_getstate_setstate):
    """
    Base class for generic output wrappers.
    """
    # this is used to toggle behavior in some output methods
    gui_output = False

    def show_big_header(self, title):
        """
        Print a big header with the specified title.
        """
        raise NotImplementedError()

    def show_header(self, title):
        """
        Start a new section with the specified title.
        """
        raise NotImplementedError()

    def show_sub_header(self, title):
        """
        Start a sub-section with the specified title.
        """
        raise NotImplementedError()

    def show_text(self, text):
        """
        Show unformatted text.
        """
        raise NotImplementedError()

    def show(self, text):
        return self.show_text(text)

    def show_preformatted_text(self, text):
        """
        Show text with spaces and line breaks preserved; in some contexts this
        will be done using a monospaced font.
        """
        raise NotImplementedError()

    def show_lines(self, text):
        """
        Show partially formatted text, preserving paragraph breaks.
        """
        raise NotImplementedError()

    def show_paragraph_header(self, text):
        """
        Show a header/title for a paragraph or small block of text.
        """
        raise NotImplementedError()

    def show_table(self, table, indent=0, plot_button=None,
                   equal_widths=True):
        """
        Display a formatted table.
        """
        raise NotImplementedError()

    def show_plot(self, table):
        """
        Display a plot, if supported by the given output class.
        """
        raise NotImplementedError()

    def show_plots_row(self, tables):
        """
        Display a series of plots in a single row. Only used for the Phenix GUI.
        """
        raise NotImplementedError()

    def show_text_columns(self, rows, indent=0):
        """
        Display a set of left-justified text columns. The number of columns is
        arbitrary but this will usually be key:value pairs.
        """
        raise NotImplementedError()

    def newline(self):
        """
        Print a newline and nothing else.
        """
        raise NotImplementedError()

    def write(self, text):
        """
        Support for generic filehandle methods.
        """
        self.show(text)

    def flush(self):
        """
        Support for generic filehandle methods.
        """
        pass

    def warn(self, text):
        """
        Display a warning message.
        """
        raise NotImplementedError()


class printed_output(xtriage_output):
    """
    Output class for displaying raw text with minimal formatting.
    """
    # "_warnings" is slotted too, since __init__ assigns it below
    __slots__ = ["out", "_warnings"]

    def __init__(self, out):
        assert hasattr(out, "write") and hasattr(out, "flush")
        self.out = out
        self._warnings = []

    def show_big_header(self, text):
        make_big_header(text, out=self.out)

    def show_header(self, text):
        make_header(text, out=self.out)

    def show_sub_header(self, title):
        out_tmp = StringIO()
        make_sub_header(title, out=out_tmp)
        for line in out_tmp.getvalue().splitlines():
            self.out.write("%s\n" % line.rstrip())

    def show_text(self, text):
        print(text, file=self.out)

    def show_paragraph_header(self, text):
        print(text, file=self.out)  # + ":"

    def show_preformatted_text(self, text):
        print(text, file=self.out)

    def show_lines(self, text):
        print(text, file=self.out)

    def show_table(self, table, indent=2, plot_button=None, equal_widths=True):
        print(table.format(indent=indent, equal_widths=equal_widths), file=self.out)

    def show_plot(self, table):
        pass

    def show_plots_row(self, tables):
        pass

    def show_text_columns(self, rows, indent=0):
        prefix = " " * indent
        n_cols = len(rows[0])
        col_sizes = [max([len(row[i]) for row in rows]) for i in range(n_cols)]
        for row in rows:
            assert len(row) == n_cols
            formats = prefix + " ".join(["%%%ds" % x for x in col_sizes])
            print(formats % tuple(row), file=self.out)

    def newline(self):
        print("", file=self.out)

    def write(self, text):
        self.out.write(text)

    def warn(self, text):
        self._warnings.append(text)
        out_tmp = StringIO()
        make_sub_header("WARNING", out=out_tmp, sep='*')
        for line in out_tmp.getvalue().splitlines():
            self.out.write("%s\n" % line.rstrip())
        self.out.write(text)


class loggraph_output(xtriage_output):
    """
    Output class for displaying 'loggraph' format (from ccp4i) as plain text.
    """
    gui_output = True

    def __init__(self, out):
        assert hasattr(out, "write") and hasattr(out, "flush")
        self.out = out

    def show_big_header(self, text): pass
    def show_header(self, text): pass
    def show_sub_header(self, title): pass
    def show_text(self, text): pass
    def show_paragraph_header(self, text): pass
    def show_preformatted_text(self, text): pass
    def show_lines(self, text): pass
    def show_table(self, *args, **kwds): pass
    def show_text_columns(self, *args, **kwds): pass
    def newline(self): pass
    def write(self, text): pass
    def warn(self, text): pass

    def show_plot(self, table):
        print("", file=self.out)
        print(table.format_loggraph(), file=self.out)

    def show_plots_row(self, tables):
        for table in tables:
            self.show_plot(table)


class xtriage_analysis(object):
    """
    Base class for analyses performed by Xtriage. This does not impose any
    restrictions on content or functionality, but simply provides a show()
    method suitable for either filehandle-like objects or objects derived from
    the xtriage_output class. Child classes should implement _show_impl.
    """

    def show(self, out=None):
        if out is None:
            out = sys.stdout
        if not isinstance(out, xtriage_output):
            out = printed_output(out)
        self._show_impl(out=out)
        return self

    def _show_impl(self, out):
        raise NotImplementedError()

    def summarize_issues(self):
        return []
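The pattern in this module (an analysis object that renders itself through an output wrapper, wrapping raw file handles on the fly) can be sketched without any cctbx dependencies. The class names below mirror the file, but the bodies are simplified stand-ins, not the real Xtriage implementation:

```python
import sys
from io import StringIO


class output_base(object):
    """Simplified stand-in for xtriage_output: writes plain text."""

    def __init__(self, out):
        self.out = out

    def show_header(self, title):
        self.out.write("=== %s ===\n" % title)

    def show_text(self, text):
        self.out.write(text + "\n")


class demo_analysis(object):
    """Simplified stand-in for xtriage_analysis: show() wraps raw streams."""

    def show(self, out=None):
        if out is None:
            out = sys.stdout
        # Wrap bare filehandle-like objects in the output class, exactly
        # as xtriage_analysis.show() wraps them in printed_output.
        if not isinstance(out, output_base):
            out = output_base(out)
        self._show_impl(out=out)
        return self

    def _show_impl(self, out):
        out.show_header("Demo")
        out.show_text("analysis body goes here")


buf = StringIO()
demo_analysis().show(out=buf)
print(buf.getvalue(), end="")
# === Demo ===
# analysis body goes here
```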
# scripts/external_libs/scapy-2.4.3/scapy/layers/tls/__init__.py (trex-core vendored copy, Apache-2.0)

# This file is part of Scapy
# Copyright (C) 2007, 2008, 2009 Arnaud Ebalard <arno@natisbad.com>
#               2015, 2016, 2017 Maxence Tury <maxence.tury@ssi.gouv.fr>
# This program is published under a GPLv2 license

"""
Tools for handling TLS sessions and digital certificates.
Use load_layer('tls') to load them to the main namespace.

Prerequisites:

    - You may need to 'pip install cryptography' for the module to be loaded.

Main features:

    - X.509 certificates parsing/building.

    - RSA & ECDSA keys sign/verify methods.

    - TLS records and sublayers (handshake...) parsing/building. Works with
      versions SSLv2 to TLS 1.2. This may be enhanced by a TLS context. For
      instance, if Scapy reads a ServerHello with version TLS 1.2 and a cipher
      suite using AES, it will assume the presence of IVs prepending the data.
      See test/tls.uts for real examples.

    - TLS encryption/decryption capabilities with many ciphersuites, including
      some which may be deemed dangerous. Once again, the TLS context enables
      Scapy to transparently send/receive protected data if it learnt the
      session secrets. Note that if Scapy acts as one side of the handshake
      (e.g. reads all server-related packets and builds all client-related
      packets), it will indeed compute the session secrets.

    - TLS client & server basic automatons, provided for testing and tweaking
      purposes. These make for a very primitive TLS stack.

    - Additionally, a basic test PKI (key + certificate for a CA, a client and
      a server) is provided in tls/examples/pki_test.

Unit tests:

    - Various cryptography checks.

    - Reading a TLS handshake between a Firefox client and a GitHub server.

    - Reading TLS 1.3 handshakes from test vectors of a draft RFC.

    - Reading a SSLv2 handshake between s_client and s_server, without PFS.

    - Test our TLS server against s_client with different cipher suites.

    - Test our TLS client against our TLS server (s_server is unscriptable).

TODO list (may it be carved away by good souls):

    - Features to add (or wait for) in the cryptography library:

      - X448 from RFC 7748 (no support in openssl yet);

      - the compressed EC point format.

    - About the automatons:

      - Add resumption support, through session IDs or session tickets.

      - Add various checks for discrepancies between client and server.
        Is the ServerHello ciphersuite ok? What about the SKE params? Etc.

      - Add some examples which illustrate how the automatons could be used.
        Typically, we could showcase this with Heartbleed.

      - Allow the server to store both one RSA key and one ECDSA key, and
        select the right one to use according to the ClientHello suites.

      - Find a way to shutdown the automatons sockets properly without
        simultaneously breaking the unit tests.

    - Miscellaneous:

      - Enhance PSK and session ticket support.

      - Define several Certificate Transparency objects.

      - Add the extended master secret and encrypt-then-mac logic.

      - Mostly unused features : DSS, fixed DH, SRP, char2 curves...
"""

from scapy.config import conf

if not conf.crypto_valid:
    import logging
    log_loading = logging.getLogger("scapy.loading")
    log_loading.info("Can't import python-cryptography v1.7+. "
                     "Disabled PKI & TLS crypto-related features.")
# quickreplies/models.py (praekeltfoundation/reminder-scheduler, BSD-3-Clause)

from django.db import models


class QuickReplyDestination(models.Model):
    url = models.URLField()
    hmac_secret = models.CharField(max_length=255, blank=True)
# neuroml/nml/config.py (NeuralEnsemble/libNeuroML, BSD-3-Clause)

variables = {"schema_name": "NeuroML_v2.2.xsd"}
# discovery-provider/src/api/v1/models/challenges.py (audius-protocol, Apache-2.0)

from flask_restx import fields

from .common import ns

attestation = ns.model(
    "attestation",
    {
        "owner_wallet": fields.String(required=True),
        "attestation": fields.String(required=True),
    },
)

undisbursed_challenge = ns.model(
    "undisbursed_challenge",
    {
        "challenge_id": fields.String(required=True),
        "user_id": fields.String(required=True),
        "specifier": fields.String(required=True),
        "amount": fields.String(required=True),
        "completed_blocknumber": fields.Integer(required=True),
        "handle": fields.String(required=True),
        "wallet": fields.String(required=True),
    },
)

create_sender_attestation = ns.model(
    "create_sender_attestation",
    {
        "owner_wallet": fields.String(required=True),
        "attestation": fields.String(required=True),
    },
)
60ce67b7e2b24cc2a97190b453defc65ccef1e84 | 1,350 | py | Python | tests/test_enter.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | null | null | null | tests/test_enter.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | 2 | 2017-09-05T11:12:05.000Z | 2017-09-07T19:23:15.000Z | tests/test_enter.py | vpv11110000/pyss | bc2226e2e66e0b551a09ae6ab6835b0bb6c7f32b | [
"MIT"
] | null | null | null | # #!/usr/bin/python
# -*- coding: utf-8 -*-
# pylint: disable=line-too-long,missing-docstring,bad-whitespace
import sys
import os
import unittest
DIRNAME_MODULE = os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(sys.argv[0])))) + os.sep
sys.path.append(DIRNAME_MODULE)
sys.path.append(DIRNAME_MODULE + "pyss" + os.sep)
from pyss import pyssobject
from pyss.pyss_model import PyssModel
from pyss.segment import Segment
from pyss import generate
from pyss import terminate
from pyss import logger
from pyss import pyss_model
from pyss import segment
from pyss import table
from pyss import handle
from pyss.enter import Enter
from pyss.leave import Leave
from pyss import storage
from pyss import advance
from pyss import options
from pyss.pyss_const import *
class TestEnter(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_init_001(self):
#
with self.assertRaises(pyssobject.ErrorIsNone) as context:
Enter(None, storageName="S1", funcBusySize=1)
def test_init_002(self):
m = PyssModel(optionz=None)
m[OPTIONS].setAllFalse()
sgm = Segment(m)
#
Enter(sgm, storageName="S1", funcBusySize=1)
if __name__ == '__main__':
unittest.main(module="test_enter")
| 24.107143 | 106 | 0.713333 | 187 | 1,350 | 5.048128 | 0.390374 | 0.144068 | 0.177966 | 0.047669 | 0.227754 | 0.115466 | 0.047669 | 0.047669 | 0 | 0 | 0 | 0.011009 | 0.192593 | 1,350 | 55 | 107 | 24.545455 | 0.855046 | 0.074815 | 0 | 0.105263 | 0 | 0 | 0.020934 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.105263 | false | 0.052632 | 0.526316 | 0 | 0.657895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
60ef24190e5796fe94aa6d0c84e7cc01d9ab2f7e | 174 | py | Python | CURSO UDEMY/TEORICAS/1.py | CamilliCerutti/Exercicios-de-Python-curso-em-video | 6571a5c5cb7b4398352a7778c55588c0c16f13c2 | [
"MIT"
] | null | null | null | CURSO UDEMY/TEORICAS/1.py | CamilliCerutti/Exercicios-de-Python-curso-em-video | 6571a5c5cb7b4398352a7778c55588c0c16f13c2 | [
"MIT"
] | null | null | null | CURSO UDEMY/TEORICAS/1.py | CamilliCerutti/Exercicios-de-Python-curso-em-video | 6571a5c5cb7b4398352a7778c55588c0c16f13c2 | [
"MIT"
] | null | null | null | # STRING: nome
print('Camilli', type('Camilli'))
# INT: idade
print(17, type(17))
# ALTURA: float
print(1.58, type(1.58))
# É MAIOR DE IDADE: bool
print(bool(17 > 18)) | 15.818182 | 33 | 0.632184 | 29 | 174 | 3.793103 | 0.586207 | 0.054545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096552 | 0.166667 | 174 | 11 | 34 | 15.818182 | 0.662069 | 0.362069 | 0 | 0 | 0 | 0 | 0.132075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
88031728853823161b5a425023b70cdeff85ee1f | 190 | py | Python | angelo/problem_9.py | giovannilambertucci/github-learning | 9d99ae0b4d77bf4464ce974a903c1a225aeb1d0a | [
"MIT"
] | null | null | null | angelo/problem_9.py | giovannilambertucci/github-learning | 9d99ae0b4d77bf4464ce974a903c1a225aeb1d0a | [
"MIT"
] | 4 | 2018-10-09T20:55:26.000Z | 2020-10-16T18:33:01.000Z | angelo/problem_9.py | giovannilambertucci/github-learning | 9d99ae0b4d77bf4464ce974a903c1a225aeb1d0a | [
"MIT"
] | 8 | 2018-10-06T16:39:22.000Z | 2021-10-20T19:41:53.000Z | #!/usr/bin/python3
for a in range(1, 400):
for b in range(1, 400):
c = (1000 - a) - b
if a < b < c:
if a**2 + b**2 == c**2:
print(a * b * c)
| 21.111111 | 35 | 0.373684 | 34 | 190 | 2.088235 | 0.441176 | 0.084507 | 0.225352 | 0.309859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 0.442105 | 190 | 8 | 36 | 23.75 | 0.518868 | 0.089474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
880be01962e805216657469d5d33f6977bef9b89 | 155 | py | Python | emergency/ui.py | dmonroy/911 | 217bb336903495370ff1374606823c5473a0cf70 | [
"MIT"
] | null | null | null | emergency/ui.py | dmonroy/911 | 217bb336903495370ff1374606823c5473a0cf70 | [
"MIT"
] | null | null | null | emergency/ui.py | dmonroy/911 | 217bb336903495370ff1374606823c5473a0cf70 | [
"MIT"
] | null | null | null | from chilero import web
class HomeView(web.View):
def get(self):
return web.Response('This is the home!')
routes = [
['', HomeView]
]
| 11.923077 | 48 | 0.606452 | 20 | 155 | 4.7 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 155 | 12 | 49 | 12.916667 | 0.817391 | 0 | 0 | 0 | 0 | 0 | 0.109677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7150600aaa110110306719c5b7be299f067d0fba | 265 | py | Python | djangoProject3/giris/urls.py | abdullahturkak/Django-pc-login-and-show | 8de1c33f30ff3e501ee4fa57f45e0752f7092d1f | [
"Apache-2.0"
] | null | null | null | djangoProject3/giris/urls.py | abdullahturkak/Django-pc-login-and-show | 8de1c33f30ff3e501ee4fa57f45e0752f7092d1f | [
"Apache-2.0"
] | null | null | null | djangoProject3/giris/urls.py | abdullahturkak/Django-pc-login-and-show | 8de1c33f30ff3e501ee4fa57f45e0752f7092d1f | [
"Apache-2.0"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.ilksayfa,name='ilksayfa'),
path('ekle',views.ekle,name='ekle'),
path('getir',views.index,name='getir'),
]
| 13.947368 | 45 | 0.679245 | 36 | 265 | 5 | 0.333333 | 0.166667 | 0.233333 | 0.333333 | 0.422222 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169811 | 265 | 18 | 46 | 14.722222 | 0.818182 | 0 | 0 | 0.333333 | 0 | 0 | 0.098113 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7160d489a58b25d7b87e2d37c20b94c78c28e43c | 57 | py | Python | example_snippets/multimenus_snippets/Snippets/SciPy/Physical and mathematical constants/CODATA physical constants/P/proton Compton wavelength.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | null | null | null | example_snippets/multimenus_snippets/Snippets/SciPy/Physical and mathematical constants/CODATA physical constants/P/proton Compton wavelength.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | null | null | null | example_snippets/multimenus_snippets/Snippets/SciPy/Physical and mathematical constants/CODATA physical constants/P/proton Compton wavelength.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | 1 | 2021-02-04T04:51:48.000Z | 2021-02-04T04:51:48.000Z | constants.physical_constants["proton Compton wavelength"] | 57 | 57 | 0.877193 | 6 | 57 | 8.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 57 | 1 | 57 | 57 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0.431034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
717d6b8beae77248154db43634235c684600256e | 293 | py | Python | helpers/collect.py | dan-candeira/api_backend | b2670ede792634cb1982f0a809ef70fc4e2e21d1 | [
"MIT"
] | 2 | 2020-09-11T01:41:06.000Z | 2022-01-26T11:09:00.000Z | helpers/collect.py | dan-candeira/api_backend | b2670ede792634cb1982f0a809ef70fc4e2e21d1 | [
"MIT"
] | null | null | null | helpers/collect.py | dan-candeira/api_backend | b2670ede792634cb1982f0a809ef70fc4e2e21d1 | [
"MIT"
] | 2 | 2020-10-10T21:15:33.000Z | 2021-12-06T18:02:03.000Z | from models.collect import Collect
from helpers.patient import validate_patient
from helpers.equipment import validate_equipment
async def validate_collect(collect: Collect):
await validate_patient(collect.patient)
await validate_equipment(collect.equipment)
return collect | 26.636364 | 48 | 0.8157 | 35 | 293 | 6.685714 | 0.342857 | 0.094017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139932 | 293 | 11 | 49 | 26.636364 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.428571 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
717f1f70265be6bc5f941c4217513b3ab2e45ef1 | 407 | py | Python | lib/python3.7/site-packages/pydantic/__init__.py | guilhermeginezsilva/python-products-api | b27327f64359801edcd858263e39fe8bc8c0b0f7 | [
"BSD-3-Clause"
] | null | null | null | lib/python3.7/site-packages/pydantic/__init__.py | guilhermeginezsilva/python-products-api | b27327f64359801edcd858263e39fe8bc8c0b0f7 | [
"BSD-3-Clause"
] | null | null | null | lib/python3.7/site-packages/pydantic/__init__.py | guilhermeginezsilva/python-products-api | b27327f64359801edcd858263e39fe8bc8c0b0f7 | [
"BSD-3-Clause"
] | null | null | null | # flake8: noqa
from . import dataclasses
from .class_validators import validator
from .env_settings import BaseSettings
from .error_wrappers import ValidationError
from .errors import *
from .fields import Required
from .main import BaseConfig, BaseModel, Extra, compiled, create_model, validate_model
from .parse import Protocol
from .schema import Schema
from .types import *
from .version import VERSION
| 31.307692 | 86 | 0.823096 | 53 | 407 | 6.226415 | 0.566038 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002817 | 0.127764 | 407 | 12 | 87 | 33.916667 | 0.926761 | 0.029484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
71acf177ad960023e1438bf52e631fc9a6602f6a | 619 | py | Python | product_information/product_data.py | BVT-Engineering/Product_Information | 7b9c6d9bbb3d68ac9c1e4b606e96196f4ee1cd4b | [
"Apache-2.0"
] | null | null | null | product_information/product_data.py | BVT-Engineering/Product_Information | 7b9c6d9bbb3d68ac9c1e4b606e96196f4ee1cd4b | [
"Apache-2.0"
] | null | null | null | product_information/product_data.py | BVT-Engineering/Product_Information | 7b9c6d9bbb3d68ac9c1e4b606e96196f4ee1cd4b | [
"Apache-2.0"
] | null | null | null | import pandas as pd
from . import data
import importlib.resources
def autex_frontier_acoustic_fins():
"""Return a dataframe containing Autex frontier acoustic fin data"""
path = importlib.resources.open_text(data, "Autex Frontier Acoustic Fins.csv")
return pd.read_csv(path)
def tracklok():
"""Return a dataframe containing tracklok data"""
path = importlib.resources.open_text(data, "tracklok.csv")
return pd.read_csv(path)
def gripple():
"""Return a dataframe containing tracklok data"""
path = importlib.resources.open_text(data, "Gripple.csv")
return pd.read_csv(path)
| 23.807692 | 82 | 0.726979 | 82 | 619 | 5.378049 | 0.304878 | 0.163265 | 0.142857 | 0.176871 | 0.575964 | 0.575964 | 0.526077 | 0.326531 | 0.326531 | 0.326531 | 0 | 0 | 0.168013 | 619 | 25 | 83 | 24.76 | 0.856311 | 0.242326 | 0 | 0.25 | 0 | 0 | 0.121413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
71afa04ef8d0f57669ca0f129f2c50b754e901ac | 365 | py | Python | 0-notes/job-search/Cracking the Coding Interview/C04TreesGraphs/questions/4.12-questions.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 0-notes/job-search/Cracking the Coding Interview/C04TreesGraphs/questions/4.12-questions.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 0-notes/job-search/Cracking the Coding Interview/C04TreesGraphs/questions/4.12-questions.py | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | # 4.12 Paths with Sum
# You are given a binary tree in which each node contains an integer value
# which might be positive or negative.
# Design an algorithm to count the number of paths that sum to a given value.
# The path does not need to start or end at the root or a leaf, but it must go
# downwards, traveling only from parent nodes to child nodes.
| 45.625 | 78 | 0.739726 | 68 | 365 | 3.970588 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 0.227397 | 365 | 7 | 79 | 52.142857 | 0.946809 | 0.939726 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
71e5fc3a02f133ded42284d60f5db450de4aea77 | 204 | py | Python | setup.py | KyungMinJin/Pointnet | ac18b074d1ab9067ad923bbea2dab524f76b8b09 | [
"MIT"
] | null | null | null | setup.py | KyungMinJin/Pointnet | ac18b074d1ab9067ad923bbea2dab524f76b8b09 | [
"MIT"
] | null | null | null | setup.py | KyungMinJin/Pointnet | ac18b074d1ab9067ad923bbea2dab524f76b8b09 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(name='pointnet',
packages=['pointnet'],
package_dir={'pointnet': 'pointnet'},
install_requires=['torch', 'tqdm', 'plyfile'],
version='0.0.1')
| 20.4 | 52 | 0.622549 | 22 | 204 | 5.681818 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.191176 | 204 | 9 | 53 | 22.666667 | 0.739394 | 0 | 0 | 0 | 0 | 0 | 0.262376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e08224d680f298b0c62c7c66f4ee253b21125106 | 578 | py | Python | tests/meeshkan/nlp/test_gib_detector.py | meeshkan/meeshkan-nlp | 63ef1e0ef31fd9c2031c89e9fd6ca3fc46eef13e | [
"MIT"
] | 1 | 2020-04-02T08:02:33.000Z | 2020-04-02T08:02:33.000Z | tests/meeshkan/nlp/test_gib_detector.py | meeshkan/meeshkan-nlp | 63ef1e0ef31fd9c2031c89e9fd6ca3fc46eef13e | [
"MIT"
] | 9 | 2020-03-24T21:09:16.000Z | 2020-07-24T09:58:11.000Z | tests/meeshkan/nlp/test_gib_detector.py | meeshkan/meeshkan-nlp | 63ef1e0ef31fd9c2031c89e9fd6ca3fc46eef13e | [
"MIT"
] | null | null | null | from meeshkan.nlp.ids.gib_detect import GibberishDetector
def test_gib_detector():
detector = GibberishDetector()
assert not detector.is_gibberish("gibberish")
assert not detector.is_gibberish("gibberish text")
assert not detector.is_gibberish("gibberish_text_with_underscores")
assert not detector.is_gibberish("gibberish.text.with.dots")
assert not detector.is_gibberish("gibberish-text-with-minus")
assert detector.is_gibberish("WhYHJKb")
assert detector.is_gibberish("cdkf=9m0fm3")
assert not detector.is_gibberish("g5ibdf35ber6ish")
| 36.125 | 71 | 0.778547 | 71 | 578 | 6.140845 | 0.338028 | 0.183486 | 0.348624 | 0.261468 | 0.552752 | 0.488532 | 0.40367 | 0.309633 | 0 | 0 | 0 | 0.013834 | 0.124567 | 578 | 15 | 72 | 38.533333 | 0.847826 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 0.138408 | 0 | 0 | 0 | 0 | 0.727273 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e093c9bb33cc079afd8c352a399d3a4910427a85 | 463 | py | Python | GA/baseline/mtsp/routemanager.py | Tricker-z/CSE5001-GA-mTSP | 108916cafecbe325302dbce4ddd07c477a0c5f79 | [
"Apache-2.0"
] | 3 | 2021-12-14T00:46:55.000Z | 2021-12-19T08:41:21.000Z | GA/baseline/mtsp/routemanager.py | Tricker-z/CSE5001-GA-mTSP | 108916cafecbe325302dbce4ddd07c477a0c5f79 | [
"Apache-2.0"
] | null | null | null | GA/baseline/mtsp/routemanager.py | Tricker-z/CSE5001-GA-mTSP | 108916cafecbe325302dbce4ddd07c477a0c5f79 | [
"Apache-2.0"
] | null | null | null | '''
Holds all the dustbin objects and is used for
creation of chromosomes by jumbling their sequence
'''
from mtsp.dustbin import *
class RouteManager:
destinationDustbins = []
@classmethod
def addDustbin (cls, db):
cls.destinationDustbins.append(db)
@classmethod
def getDustbin (cls, index):
return cls.destinationDustbins[index]
@classmethod
def numberOfDustbins(cls):
return len(cls.destinationDustbins)
| 22.047619 | 50 | 0.704104 | 49 | 463 | 6.653061 | 0.673469 | 0.128834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218143 | 463 | 20 | 51 | 23.15 | 0.900552 | 0.207343 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
e0be97a1e97f364aa46dcdeed19c0eced545d040 | 105 | py | Python | series_tiempo_ar_api/apps/metadata/indexer/strings.py | datosgobar/series-tiempo-ar-api | 6b553c573f6e8104f8f3919efe79089b7884280c | [
"MIT"
] | 28 | 2017-12-16T20:30:52.000Z | 2021-08-11T17:35:04.000Z | series_tiempo_ar_api/apps/metadata/indexer/strings.py | datosgobar/series-tiempo-ar-api | 6b553c573f6e8104f8f3919efe79089b7884280c | [
"MIT"
] | 446 | 2017-11-16T15:21:40.000Z | 2021-06-10T20:14:21.000Z | series_tiempo_ar_api/apps/metadata/indexer/strings.py | datosgobar/series-tiempo-ar-api | 6b553c573f6e8104f8f3919efe79089b7884280c | [
"MIT"
] | 12 | 2018-08-23T16:13:32.000Z | 2022-03-01T23:12:28.000Z | #! coding: utf-8
from __future__ import unicode_literals
INDEXING_ERROR = 'Error en la indexación: %s'
| 17.5 | 45 | 0.761905 | 15 | 105 | 4.933333 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.152381 | 105 | 5 | 46 | 21 | 0.820225 | 0.142857 | 0 | 0 | 0 | 0 | 0.292135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
e0c0630ef7275753a27f4636b02798e72127feab | 117 | py | Python | Python/PythonExercicios/ex024.py | felizardo27/Python | 965d56f4956eb7b6a68c0b1cbd74d363dd2a223c | [
"MIT"
] | null | null | null | Python/PythonExercicios/ex024.py | felizardo27/Python | 965d56f4956eb7b6a68c0b1cbd74d363dd2a223c | [
"MIT"
] | null | null | null | Python/PythonExercicios/ex024.py | felizardo27/Python | 965d56f4956eb7b6a68c0b1cbd74d363dd2a223c | [
"MIT"
] | null | null | null | print('====== EX 024 ======')
cid = str(input('Em que cidade você nasceu? ')).lower().split()
print('santo' in cid)
| 23.4 | 63 | 0.57265 | 17 | 117 | 3.941176 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.136752 | 117 | 4 | 64 | 29.25 | 0.633663 | 0 | 0 | 0 | 0 | 0 | 0.448276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
e0c7399fc2d4022ed149f2a68a5e24931da09237 | 174 | py | Python | web/apps/social/urls.py | vitaliyharchenko/django_template | 41fa00cb0b8be6c5cf67b7a334d4340163255160 | [
"MIT"
] | null | null | null | web/apps/social/urls.py | vitaliyharchenko/django_template | 41fa00cb0b8be6c5cf67b7a334d4340163255160 | [
"MIT"
] | 1 | 2018-02-02T20:25:41.000Z | 2018-02-02T20:25:41.000Z | web/apps/social/urls.py | vitaliyharchenko/django_template | 41fa00cb0b8be6c5cf67b7a334d4340163255160 | [
"MIT"
] | null | null | null | # URLconf
from django.urls import path
from apps.social import views
app_name = 'social'
urlpatterns = [
path('vk/complete', views.vk_complete, name='vk_complete'),
]
| 15.818182 | 63 | 0.724138 | 24 | 174 | 5.125 | 0.583333 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155172 | 174 | 10 | 64 | 17.4 | 0.836735 | 0.04023 | 0 | 0 | 0 | 0 | 0.169697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
e0ca2023f2d70fc898d1c818e74ce9cd59b43330 | 2,481 | py | Python | fieldtypes/Field.py | pulpocoders/pulpoforms-django | 60d268faa492ba8256cc32b3108d6a27dabcd40f | [
"Apache-2.0"
] | 45 | 2015-07-30T21:52:00.000Z | 2020-03-25T16:53:34.000Z | fieldtypes/Field.py | pulpocoders/pulpo-forms-django | 60d268faa492ba8256cc32b3108d6a27dabcd40f | [
"Apache-2.0"
] | 5 | 2016-10-18T12:17:54.000Z | 2017-11-09T10:39:34.000Z | fieldtypes/Field.py | pulpocoders/pulpo-forms-django | 60d268faa492ba8256cc32b3108d6a27dabcd40f | [
"Apache-2.0"
] | 13 | 2015-08-01T01:57:35.000Z | 2022-03-28T21:14:02.000Z | from django.core.exceptions import ValidationError
from pulpo_forms.models import Version, FieldEntry
class Field(object):
"""
Default abstract field type class
"""
folder = "fields/"
template_name = "field_template_base.html"
edit_template_name = "fiel_template_edit_base.html"
prp_template_name = "field_properties_base.html"
def validate(self, value, **kwargs):
# Default validation or pass
checks = self.get_methods(**kwargs)
for method in checks:
method(value, **kwargs)
def get_methods(self, **kwargs):
return [self.null_check]
def null_check(self, value, **kwargs):
if not value:
raise ValidationError("Problem with the answer.")
def get_validations(self, json, f_id):
for page in json['pages']:
for field in page['fields']:
if (field['field_id'] == f_id):
return field['validations']
def get_options(self, json, f_id):
return None
def check_consistency(self, field):
# When a field is created check if the restrictions are consistent
pass
def count_responses_pct(self, form_pk, version_num, field_id):
v = Version.objects.get(number=version_num, form_id=form_pk)
queryset = FieldEntry.objects.filter(
field_id=field_id, entry__version_id=v.pk)
total = queryset.count()
responses = total - queryset.filter(answer="").count()
return (responses, total)
def get_statistics(self, data_list, field):
"""
Returns a the statistics related to the data list.
"""
statistics = {
"field_type": field["field_type"],
"field_text": field["text"]
}
if field["required"]:
statistics["required"] = "Yes"
else:
statistics["required"] = "No"
return statistics
def get_assets():
return []
def get_non_static():
return []
def get_styles():
return []
"""
Default Render methods for field templates
"""
def render(self):
return self.folder + self.template_name
def render_properties(self):
return self.folder + self.prp_template_name
def render_edit(self):
return self.folder + self.edit_template_name
def render_statistic(self):
return self.folder + self.sts_template_name
class Meta:
abstract = True
| 27.876404 | 74 | 0.615075 | 291 | 2,481 | 5.051546 | 0.347079 | 0.057143 | 0.038095 | 0.054422 | 0.065306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285369 | 2,481 | 88 | 75 | 28.193182 | 0.829103 | 0.071342 | 0 | 0.051724 | 0 | 0 | 0.091568 | 0.035358 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258621 | false | 0.017241 | 0.034483 | 0.155172 | 0.603448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
e0d2cb5079d6c051d2825b4e95644d0a69a10092 | 441 | py | Python | fluiddb/schema/scripts/lowercase_usernames.py | fluidinfo/fluiddb | b5a8c8349f3eaf3364cc4efba4736c3e33b30d96 | [
"Apache-2.0"
] | 3 | 2021-05-10T14:41:30.000Z | 2021-12-16T05:53:30.000Z | fluiddb/schema/scripts/lowercase_usernames.py | fluidinfo/fluiddb | b5a8c8349f3eaf3364cc4efba4736c3e33b30d96 | [
"Apache-2.0"
] | null | null | null | fluiddb/schema/scripts/lowercase_usernames.py | fluidinfo/fluiddb | b5a8c8349f3eaf3364cc4efba4736c3e33b30d96 | [
"Apache-2.0"
] | 2 | 2018-01-24T09:03:21.000Z | 2021-06-25T08:34:54.000Z | """Lowercase all the users in the database."""
from fluiddb.data.user import getUsers
from fluiddb.scripts.commands import setupStore
if __name__ == '__main__':
store = setupStore('postgres:///fluidinfo', 'main')
print __doc__
for user in getUsers():
if user.username != user.username.lower():
print 'Fixing user', user.username
user.username = user.username.lower()
store.commit()
| 23.210526 | 55 | 0.655329 | 51 | 441 | 5.431373 | 0.54902 | 0.216607 | 0.173285 | 0.259928 | 0.209386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226757 | 441 | 18 | 56 | 24.5 | 0.812317 | 0 | 0 | 0 | 0 | 0 | 0.111392 | 0.053165 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e0de11c2dcc49d61800875bc4b3d8fe77073a5e9 | 452 | py | Python | 100-Exercicios/ex004.py | thedennerdev/ExerciciosPython-Iniciante | de36c4a09700353a9a1daa7f1320e416c6201a5c | [
"MIT"
] | null | null | null | 100-Exercicios/ex004.py | thedennerdev/ExerciciosPython-Iniciante | de36c4a09700353a9a1daa7f1320e416c6201a5c | [
"MIT"
] | null | null | null | 100-Exercicios/ex004.py | thedennerdev/ExerciciosPython-Iniciante | de36c4a09700353a9a1daa7f1320e416c6201a5c | [
"MIT"
] | null | null | null | qq = (input('Digite algo qualquer: '))
print('O tipo da palavra digitado é', type(qq))
print('O valor dela são digitos?', qq.isdigit())
print('O valor dela são númericos?', qq.isnumeric())
print('O valor dela são letras?', qq.isalpha())
print('O valor dela são alfanúmericos?', qq.isalnum())
print('O valor dela são apenas espaços?', qq.isspace())
print('O valor dela são maiúsculas?', qq.isupper())
print('O valor dela são minusclas?', qq.islower())
| 41.090909 | 55 | 0.701327 | 71 | 452 | 4.464789 | 0.43662 | 0.15142 | 0.242902 | 0.33123 | 0.397476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126106 | 452 | 10 | 56 | 45.2 | 0.802532 | 0 | 0 | 0 | 0 | 0 | 0.54102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.888889 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
1c9f59beeef09a1d3746d78c309103b22a67b401 | 285 | py | Python | tests/test_required_interfaces.py | evoh-nft/evoh-erc721 | 573de4da7047066ab187c2a31aee95ed00355e7d | [
"MIT"
] | null | null | null | tests/test_required_interfaces.py | evoh-nft/evoh-erc721 | 573de4da7047066ab187c2a31aee95ed00355e7d | [
"MIT"
] | null | null | null | tests/test_required_interfaces.py | evoh-nft/evoh-erc721 | 573de4da7047066ab187c2a31aee95ed00355e7d | [
"MIT"
] | null | null | null | #!/usr/bin/python3
def test_erc165_support(nft):
erc165_interface_id = "0x01ffc9a7"
assert nft.supportsInterface(erc165_interface_id) is True
def test_erc721_support(nft):
erc721_interface_id = "0x80ac58cd"
assert nft.supportsInterface(erc721_interface_id) is True
| 23.75 | 61 | 0.782456 | 37 | 285 | 5.702703 | 0.459459 | 0.208531 | 0.161137 | 0.161137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117886 | 0.136842 | 285 | 11 | 62 | 25.909091 | 0.739837 | 0.059649 | 0 | 0 | 0 | 0 | 0.074906 | 0 | 0 | 0 | 0.074906 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1cabdaf9988c673eef43545493f061d3a959c66d | 3,788 | py | Python | backend/src/gloader/xml/dom/html/HTMLAnchorElement.py | anrl/gini4 | d26649c8c02a1737159e48732cf1ee15ba2a604d | [
"MIT"
] | 11 | 2019-03-02T20:39:34.000Z | 2021-09-02T19:47:38.000Z | backend/src/gloader/xml/dom/html/HTMLAnchorElement.py | anrl/gini4 | d26649c8c02a1737159e48732cf1ee15ba2a604d | [
"MIT"
] | 29 | 2019-01-17T15:44:48.000Z | 2021-06-02T00:19:40.000Z | backend/src/gloader/xml/dom/html/HTMLAnchorElement.py | anrl/gini4 | d26649c8c02a1737159e48732cf1ee15ba2a604d | [
"MIT"
] | 11 | 2019-01-28T05:00:55.000Z | 2021-11-12T03:08:32.000Z | ########################################################################
#
# File Name: HTMLAnchorElement
#
#
### This file is automatically generated by GenerateHtml.py.
### DO NOT EDIT!
"""
WWW: http://4suite.com/4DOM e-mail: support@4suite.com
Copyright (c) 2000 Fourthought Inc, USA. All Rights Reserved.
See http://4suite.com/COPYRIGHT for license and copyright information
"""
import string
from xml.dom import Node
from xml.dom.html.HTMLElement import HTMLElement
class HTMLAnchorElement(HTMLElement):
def __init__(self, ownerDocument, nodeName="A"):
HTMLElement.__init__(self, ownerDocument, nodeName)
### Attribute Methods ###
def _get_accessKey(self):
return self.getAttribute("ACCESSKEY")
def _set_accessKey(self, value):
self.setAttribute("ACCESSKEY", value)
def _get_charset(self):
return self.getAttribute("CHARSET")
def _set_charset(self, value):
self.setAttribute("CHARSET", value)
def _get_coords(self):
return self.getAttribute("COORDS")
def _set_coords(self, value):
self.setAttribute("COORDS", value)
def _get_href(self):
return self.getAttribute("HREF")
def _set_href(self, value):
self.setAttribute("HREF", value)
def _get_hreflang(self):
return self.getAttribute("HREFLANG")
def _set_hreflang(self, value):
self.setAttribute("HREFLANG", value)
def _get_name(self):
return self.getAttribute("NAME")
def _set_name(self, value):
self.setAttribute("NAME", value)
def _get_rel(self):
return self.getAttribute("REL")
def _set_rel(self, value):
self.setAttribute("REL", value)
def _get_rev(self):
return self.getAttribute("REV")
def _set_rev(self, value):
self.setAttribute("REV", value)
def _get_shape(self):
return string.capitalize(self.getAttribute("SHAPE"))
def _set_shape(self, value):
self.setAttribute("SHAPE", value)
def _get_tabIndex(self):
value = self.getAttribute("TABINDEX")
if value:
return int(value)
return 0
def _set_tabIndex(self, value):
self.setAttribute("TABINDEX", str(value))
def _get_target(self):
return self.getAttribute("TARGET")
def _set_target(self, value):
self.setAttribute("TARGET", value)
def _get_type(self):
return self.getAttribute("TYPE")
def _set_type(self, value):
self.setAttribute("TYPE", value)
### Methods ###
def blur(self):
pass
def focus(self):
pass
### Attribute Access Mappings ###
_readComputedAttrs = HTMLElement._readComputedAttrs.copy()
_readComputedAttrs.update({
"accessKey" : _get_accessKey,
"charset" : _get_charset,
"coords" : _get_coords,
"href" : _get_href,
"hreflang" : _get_hreflang,
"name" : _get_name,
"rel" : _get_rel,
"rev" : _get_rev,
"shape" : _get_shape,
"tabIndex" : _get_tabIndex,
"target" : _get_target,
"type" : _get_type
})
_writeComputedAttrs = HTMLElement._writeComputedAttrs.copy()
_writeComputedAttrs.update({
"accessKey" : _set_accessKey,
"charset" : _set_charset,
"coords" : _set_coords,
"href" : _set_href,
"hreflang" : _set_hreflang,
"name" : _set_name,
"rel" : _set_rel,
"rev" : _set_rev,
"shape" : _set_shape,
"tabIndex" : _set_tabIndex,
"target" : _set_target,
"type" : _set_type
})
_readOnlyAttrs = filter(lambda k,m=_writeComputedAttrs: not m.has_key(k),
HTMLElement._readOnlyAttrs + _readComputedAttrs.keys())
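The fragment above uses a long-standing PyXML/4Suite idiom: class-level dictionaries (`_readComputedAttrs`, `_writeComputedAttrs`) mapping DOM attribute names to plain getter/setter functions, with attribute access dispatched through those tables. A minimal, self-contained sketch of the same dispatch technique in modern Python (all names here are illustrative, not from the original library):

```python
class Element:
    """Toy element with a dict-based attribute store and table-driven dispatch."""

    def __init__(self):
        self._attrs = {}

    def getAttribute(self, name):
        return self._attrs.get(name, "")

    def setAttribute(self, name, value):
        self._attrs[name] = value

    # Plain functions used as dispatch targets, mirroring _get_*/_set_*.
    def _get_href(self):
        return self.getAttribute("HREF")

    def _set_href(self, value):
        self.setAttribute("HREF", value)

    # Class-level dispatch tables, like _readComputedAttrs/_writeComputedAttrs.
    _read = {"href": _get_href}
    _write = {"href": _set_href}

    def __getattr__(self, name):
        # Only called when normal lookup fails, so _attrs access is unaffected.
        getter = type(self)._read.get(name)
        if getter is None:
            raise AttributeError(name)
        return getter(self)

    def __setattr__(self, name, value):
        setter = type(self)._write.get(name)
        if setter is not None:
            setter(self, value)
        else:
            super().__setattr__(name, value)


elem = Element()
elem.href = "https://example.org"  # routed through _set_href
print(elem.href)                   # prints "https://example.org", via _get_href
```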
| 25.768707 | 77 | 0.612196 | 398 | 3,788 | 5.537688 | 0.228643 | 0.053085 | 0.076679 | 0.136116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003181 | 0.253168 | 3,788 | 146 | 78 | 25.945205 | 0.775893 | 0.096093 | 0 | 0.043011 | 1 | 0 | 0.081122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.290323 | false | 0.021505 | 0.032258 | 0.11828 | 0.505376 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
1cbf7e7b48c17a4738f3a41ebb956e0efb15333c | 638 | py | Python | tests/api/utils/schema/auth.py | BenjamenMeyer/deuce | fbca31cb5248a808a85bfc24af10119453359276 | [
"Apache-2.0"
] | null | null | null | tests/api/utils/schema/auth.py | BenjamenMeyer/deuce | fbca31cb5248a808a85bfc24af10119453359276 | [
"Apache-2.0"
] | null | null | null | tests/api/utils/schema/auth.py | BenjamenMeyer/deuce | fbca31cb5248a808a85bfc24af10119453359276 | [
"Apache-2.0"
] | null | null | null | authentication = {
"type": "object",
"properties": {
"access": {
"type": "object",
"properties": {
"token": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
},
"required": ["id"],
},
"serviceCatalog": {
"type": "array",
},
},
"required": ["token", "serviceCatalog", ],
},
},
"required": ["access", ],
}
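What the schema enforces can be made concrete with a hand-rolled stdlib check; the payload below is a made-up minimal document satisfying the `required` and `type` constraints (field values are illustrative):

```python
# Minimal payload satisfying the schema's "required" constraints.
payload = {
    "access": {
        "token": {"id": "abc123"},
        "serviceCatalog": [],
    }
}


def satisfies_auth_schema(doc):
    """Hand-rolled check mirroring the required/type constraints above."""
    access = doc.get("access")
    if not isinstance(access, dict):
        return False
    token = access.get("token")
    if not isinstance(token, dict) or "id" not in token:
        return False
    return isinstance(access.get("serviceCatalog"), list)


print(satisfies_auth_schema(payload))  # True
```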
| 25.52 | 54 | 0.268025 | 25 | 638 | 6.84 | 0.44 | 0.175439 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.570533 | 638 | 24 | 55 | 26.583333 | 0.624088 | 0 | 0 | 0.25 | 0 | 0 | 0.246082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1cc6c7e8b732b32110858561f10fdd739f1cb8b1 | 207 | py | Python | src/roboverse/__init__.py | Mindstem/Roboverse | 3d34e6afc2c43f57e647c10411f013a317108886 | [
"MIT"
] | null | null | null | src/roboverse/__init__.py | Mindstem/Roboverse | 3d34e6afc2c43f57e647c10411f013a317108886 | [
"MIT"
] | null | null | null | src/roboverse/__init__.py | Mindstem/Roboverse | 3d34e6afc2c43f57e647c10411f013a317108886 | [
"MIT"
] | null | null | null | """Roboverse package."""
from collections.abc import Sequence
from importlib.metadata import version
__all__: Sequence[str] = ('__version__',)
__version__: str = version(distribution_name='Roboverse')
| 17.25 | 57 | 0.763285 | 22 | 207 | 6.590909 | 0.636364 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 207 | 11 | 58 | 18.818182 | 0.79235 | 0.086957 | 0 | 0 | 0 | 0 | 0.10929 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
1cd036636063d98c815e91d7a353efb5060a2fb5 | 809 | py | Python | metadrive/base_class/nameable.py | liuzuxin/metadrive | 850c207536531bc85179084acd7c30ab14a66111 | [
"Apache-2.0"
] | 125 | 2021-08-30T06:33:57.000Z | 2022-03-31T09:02:44.000Z | metadrive/base_class/nameable.py | liuzuxin/metadrive | 850c207536531bc85179084acd7c30ab14a66111 | [
"Apache-2.0"
] | 72 | 2021-08-30T16:23:41.000Z | 2022-03-31T19:17:16.000Z | metadrive/base_class/nameable.py | liuzuxin/metadrive | 850c207536531bc85179084acd7c30ab14a66111 | [
"Apache-2.0"
] | 20 | 2021-09-09T08:20:25.000Z | 2022-03-24T13:24:07.000Z | import logging
from metadrive.utils import random_string
class Nameable:
"""
    Instances of this class will have a unique name
"""
def __init__(self, name=None):
# ID for object
self.name = random_string() if name is None else name
self.id = self.name # name = id
@property
def class_name(self):
return self.__class__.__name__
def __del__(self):
try:
str(self)
except AttributeError:
pass
else:
logging.debug("{} is destroyed".format(str(self)))
def __repr__(self):
return "{}".format(str(self))
def __str__(self):
return "{}, ID:{}".format(self.class_name, self.name)
def rename(self, new_name):
self.name = new_name
self.id = self.name
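Usage is straightforward; the sketch below substitutes a local stand-in for `metadrive.utils.random_string` (the real helper's format may differ) and keeps only the naming behavior:

```python
import random
import string


def random_string(length=8):
    # Stand-in for metadrive.utils.random_string; format is an assumption.
    return "".join(random.choices(string.ascii_lowercase, k=length))


class Nameable:
    def __init__(self, name=None):
        # ID for object: random unless explicitly named
        self.name = random_string() if name is None else name
        self.id = self.name

    @property
    def class_name(self):
        return self.__class__.__name__

    def rename(self, new_name):
        self.name = new_name
        self.id = self.name


obj = Nameable(name="vehicle_0")
obj.rename("vehicle_1")
print(obj.class_name, obj.id)  # prints "Nameable vehicle_1"
```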
| 22.472222 | 62 | 0.583436 | 100 | 809 | 4.42 | 0.41 | 0.108597 | 0.045249 | 0.063348 | 0.081448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.309023 | 809 | 35 | 63 | 23.114286 | 0.790698 | 0.088999 | 0 | 0.086957 | 0 | 0 | 0.036111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0.043478 | 0.086957 | 0.130435 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
1cd0b9969ec980eadb9502a245242751f992c588 | 320 | py | Python | EPro-PnP-6DoF/lib/utils/tictoc.py | Lakonik/EPro-PnP | 931df847190ce10eddd1dc3e3168ce1a2f295ffa | [
"Apache-2.0"
] | 19 | 2022-03-21T10:22:24.000Z | 2022-03-30T15:43:46.000Z | EPro-PnP-6DoF/lib/utils/tictoc.py | Lakonik/EPro-PnP | 931df847190ce10eddd1dc3e3168ce1a2f295ffa | [
"Apache-2.0"
] | null | null | null | EPro-PnP-6DoF/lib/utils/tictoc.py | Lakonik/EPro-PnP | 931df847190ce10eddd1dc3e3168ce1a2f295ffa | [
"Apache-2.0"
] | 3 | 2022-03-26T08:08:24.000Z | 2022-03-30T11:17:11.000Z | """
This file is from
https://github.com/LZGMatrix/CDPN_ICCV2019_ZhigangLi
"""
import time
def tic():
global start_time
start_time = time.time()
return start_time
def toc():
if 'start_time' in globals():
end_time = time.time()
return end_time - start_time
else:
return None | 17.777778 | 52 | 0.646875 | 44 | 320 | 4.5 | 0.568182 | 0.227273 | 0.131313 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.25 | 320 | 18 | 53 | 17.777778 | 0.808333 | 0.21875 | 0 | 0 | 0 | 0 | 0.041152 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
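Typical usage brackets a slow section with `tic()`/`toc()`; a minimal sketch (the sleep duration is arbitrary):

```python
import time


def tic():
    global start_time
    start_time = time.time()
    return start_time


def toc():
    # start_time only exists after tic() has been called at least once
    if 'start_time' in globals():
        return time.time() - start_time
    return None


tic()
time.sleep(0.05)
elapsed = toc()
print(f"slept for {elapsed:.3f}s")
```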
1cdc4f9eed5127ce6f230d3c5f394af453babf03 | 385 | py | Python | q2_comp/__init__.py | dianahaider/q2-CHAOS | 89571a8bffbebeeed2e2f5ec989a978169afaa88 | [
"BSD-3-Clause"
] | null | null | null | q2_comp/__init__.py | dianahaider/q2-CHAOS | 89571a8bffbebeeed2e2f5ec989a978169afaa88 | [
"BSD-3-Clause"
] | null | null | null | q2_comp/__init__.py | dianahaider/q2-CHAOS | 89571a8bffbebeeed2e2f5ec989a978169afaa88 | [
"BSD-3-Clause"
] | null | null | null | from . import _alpha
from . import _denoise
from . import _taxonomy
from ._alpha import (alpha_frequency, alpha_diversity)
from ._denoise import (denoise_stats)
from ._taxonomy import (taxo_variability)
__all__ = ['alpha_frequency', 'alpha_diversity', 'denoise_stats', 'taxo_variability']
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
| 25.666667 | 85 | 0.794805 | 47 | 385 | 5.957447 | 0.319149 | 0.107143 | 0.135714 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 385 | 14 | 86 | 27.5 | 0.821114 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.7 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
1ce06d34f7fe6b93f423edbe5ef505a1a0608531 | 570 | py | Python | nlu/components/classifiers/sentiment_detector/sentiment_detector.py | sumanthratna/nlu | acde6879d776116051d4cbe909268ab8946989b5 | [
"Apache-2.0"
] | 1 | 2020-09-25T22:55:13.000Z | 2020-09-25T22:55:13.000Z | nlu/components/classifiers/sentiment_detector/sentiment_detector.py | sumanthratna/nlu | acde6879d776116051d4cbe909268ab8946989b5 | [
"Apache-2.0"
] | null | null | null | nlu/components/classifiers/sentiment_detector/sentiment_detector.py | sumanthratna/nlu | acde6879d776116051d4cbe909268ab8946989b5 | [
"Apache-2.0"
] | null | null | null | import nlu.pipe_components
import sparknlp
from sparknlp.annotator import *
class SentimentDl:
@staticmethod
    def get_default_model():  # TODO: cannot run without a dictionary!
return SentimentDetectorModel() \
.setInputCols("lemma", "sentence_embeddings") \
            .setOutputCol("sentiment")

    @staticmethod
def get_default_trainable_model():
return SentimentDetector() \
.setInputCols("lemma", "sentence_embeddings") \
.setOutputCol("sentiment") \
.setDictionary("dcit_TODO???")
| 30 | 70 | 0.65614 | 48 | 570 | 7.604167 | 0.666667 | 0.082192 | 0.09863 | 0.136986 | 0.306849 | 0.306849 | 0 | 0 | 0 | 0 | 0 | 0 | 0.240351 | 570 | 18 | 71 | 31.666667 | 0.842956 | 0.066667 | 0 | 0.375 | 0 | 0 | 0.14717 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 1 | 0.125 | true | 0 | 0.1875 | 0.125 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
1ce104758de9c21aee1176a944bdbb004968ca9e | 232 | py | Python | atcoder/abc054/a.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 506 | 2018-08-22T10:30:38.000Z | 2022-03-31T10:01:49.000Z | atcoder/abc054/a.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 13 | 2019-08-07T18:31:18.000Z | 2020-12-15T21:54:41.000Z | atcoder/abc054/a.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 234 | 2018-08-06T17:11:41.000Z | 2022-03-26T10:56:42.000Z | #!/usr/bin/env python3
# https://abc054.contest.atcoder.jp/tasks/abc054_a
a, b = map(int, input().split())
if a == b: print('Draw')
elif a == 1: print('Alice')
elif b == 1: print('Bob')
elif a > b: print('Alice')
else: print('Bob')
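The comparison logic (1 is the strongest card, otherwise the higher card wins) is easier to test when wrapped in a function; a sketch:

```python
def one_card_poker(a: int, b: int) -> str:
    """Decide the winner: 1 beats everything, otherwise the higher card wins."""
    if a == b:
        return 'Draw'
    if a == 1:
        return 'Alice'
    if b == 1:
        return 'Bob'
    return 'Alice' if a > b else 'Bob'


print(one_card_poker(8, 6))   # Alice
print(one_card_poker(1, 13))  # Alice
print(one_card_poker(5, 5))   # Draw
```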
| 25.777778 | 50 | 0.625 | 41 | 232 | 3.512195 | 0.585366 | 0.041667 | 0.097222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044776 | 0.133621 | 232 | 8 | 51 | 29 | 0.671642 | 0.301724 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.833333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
1cf1834effd35d24cf8eec21ed50f484254b6230 | 766 | py | Python | app/http/controllers/dashboard/TechniciansController.py | jrquiles18/Kennedy-Pools | 628375c814c4b4a59fa194739ddab4ab5838d2f2 | [
"MIT"
] | null | null | null | app/http/controllers/dashboard/TechniciansController.py | jrquiles18/Kennedy-Pools | 628375c814c4b4a59fa194739ddab4ab5838d2f2 | [
"MIT"
] | null | null | null | app/http/controllers/dashboard/TechniciansController.py | jrquiles18/Kennedy-Pools | 628375c814c4b4a59fa194739ddab4ab5838d2f2 | [
"MIT"
] | null | null | null | """A TechniciansController Module."""
from masonite.request import Request
from masonite.view import View
from masonite.controllers import Controller
from app.Technician import Technician
class TechniciansController(Controller):
"""TechniciansController Controller Class."""
def __init__(self, request: Request):
"""TechniciansController Initializer
Arguments:
request {masonite.request.Request} -- The Masonite Request class.
"""
self.request = request
def show(self, view: View):
techs = Technician.all()
return view.render('dashboard/technicians', {'techs': techs})
def logout(self, request: Request):
request.session.reset()
return request.redirect('/dashboard')
| 28.37037 | 77 | 0.690601 | 75 | 766 | 7 | 0.4 | 0.133333 | 0.102857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.207572 | 766 | 26 | 78 | 29.461538 | 0.864909 | 0.244125 | 0 | 0 | 0 | 0 | 0.066915 | 0.039033 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.307692 | 0 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
e8027d4e7a933ea51e200c6d5e5b6bf722082f31 | 327 | py | Python | __init__.py | antoniamarie03/siren-alarm-skill | d4c994ef8f72725f78ff199e93ffd30f6a66b47d | [
"Apache-2.0"
] | null | null | null | __init__.py | antoniamarie03/siren-alarm-skill | d4c994ef8f72725f78ff199e93ffd30f6a66b47d | [
"Apache-2.0"
] | null | null | null | __init__.py | antoniamarie03/siren-alarm-skill | d4c994ef8f72725f78ff199e93ffd30f6a66b47d | [
"Apache-2.0"
] | null | null | null | from mycroft import MycroftSkill, intent_file_handler
class SirenAlarm(MycroftSkill):
def __init__(self):
MycroftSkill.__init__(self)
@intent_file_handler('alarm.siren.intent')
def handle_alarm_siren(self, message):
self.speak_dialog('alarm.siren')
def create_skill():
return SirenAlarm()
| 20.4375 | 53 | 0.727829 | 38 | 327 | 5.842105 | 0.552632 | 0.135135 | 0.153153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174312 | 327 | 15 | 54 | 21.8 | 0.822222 | 0 | 0 | 0 | 0 | 0 | 0.088957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
08fd3f72ab2400ef9e1e9f6c50e300b0c22e9a9f | 1,573 | py | Python | design_patterns__examples/Proxy/example.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 117 | 2015-12-18T07:18:27.000Z | 2022-03-28T00:25:54.000Z | design_patterns__examples/Proxy/example.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 8 | 2018-10-03T09:38:46.000Z | 2021-12-13T19:51:09.000Z | design_patterns__examples/Proxy/example.py | DazEB2/SimplePyScripts | 1dde0a42ba93fe89609855d6db8af1c63b1ab7cc | [
"CC-BY-4.0"
] | 28 | 2016-08-02T17:43:47.000Z | 2022-03-21T08:31:12.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = 'ipetrash'
# SOURCE: Design Patterns: Proxy (Russian: Заместитель)
# SOURCE: https://ru.wikipedia.org/wiki/Заместитель_(шаблон_проектирования)
class IMath:
    """Interface for the proxy and the real subject"""
def add(self, x, y):
raise NotImplementedError()
def sub(self, x, y):
raise NotImplementedError()
def mul(self, x, y):
raise NotImplementedError()
def div(self, x, y):
raise NotImplementedError()
class Math(IMath):
    """The real subject"""
def add(self, x, y):
return x + y
def sub(self, x, y):
return x - y
def mul(self, x, y):
return x * y
def div(self, x, y):
return x / y
class MathProxy(IMath):
    """The proxy"""
def __init__(self):
self.math = None
    # Fast operations: these do not require the real subject
def add(self, x, y):
return x + y
def sub(self, x, y):
return x - y
    # Slow operation: requires creating the real subject
def mul(self, x, y):
if not self.math:
self.math = Math()
return self.math.mul(x, y)
def div(self, x, y):
if y == 0:
            return float('inf')  # Return positive infinity
if not self.math:
self.math = Math()
return self.math.div(x, y)
if __name__ == '__main__':
p = MathProxy()
x, y = 4, 2
print('4 + 2 =', p.add(x, y))
print('4 - 2 =', p.sub(x, y))
print('4 * 2 =', p.mul(x, y))
print('4 / 2 =', p.div(x, y))
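The point of the proxy becomes visible when the lazy initialization is instrumented: `add`/`sub` never touch the real subject, while the first `mul` creates it exactly once. A self-contained check, with the classes re-stated in compressed form:

```python
class Math:
    instances = 0  # counts how many real subjects were created

    def __init__(self):
        Math.instances += 1

    def mul(self, x, y):
        return x * y


class MathProxy:
    def __init__(self):
        self.math = None

    def add(self, x, y):      # cheap: handled by the proxy itself
        return x + y

    def mul(self, x, y):      # expensive: forwarded to a lazily built Math
        if not self.math:
            self.math = Math()
        return self.math.mul(x, y)


p = MathProxy()
p.add(4, 2)
assert Math.instances == 0    # no real subject created yet
p.mul(4, 2)
p.mul(3, 3)
assert Math.instances == 1    # created once, then reused
```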
| 19.6625 | 75 | 0.546726 | 216 | 1,573 | 3.898148 | 0.314815 | 0.059382 | 0.085511 | 0.085511 | 0.513064 | 0.448931 | 0.293349 | 0.187648 | 0.187648 | 0.187648 | 0 | 0.011982 | 0.310235 | 1,573 | 79 | 76 | 19.911392 | 0.764055 | 0.228862 | 0 | 0.55814 | 0 | 0 | 0.039463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.302326 | false | 0 | 0 | 0.139535 | 0.581395 | 0.093023 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
1c0cb41326a46bf497e4e202b368457518deb26c | 595 | py | Python | _Projects/flask_projects/todo_app/test.py | vincepogz/CUNY2X-TTP-Notes | 8579c7457ef315c4465520b3f2ddb04b0a6ddaf7 | [
"MIT"
] | null | null | null | _Projects/flask_projects/todo_app/test.py | vincepogz/CUNY2X-TTP-Notes | 8579c7457ef315c4465520b3f2ddb04b0a6ddaf7 | [
"MIT"
] | null | null | null | _Projects/flask_projects/todo_app/test.py | vincepogz/CUNY2X-TTP-Notes | 8579c7457ef315c4465520b3f2ddb04b0a6ddaf7 | [
"MIT"
] | 1 | 2022-01-29T21:39:03.000Z | 2022-01-29T21:39:03.000Z | from app import app
def test1():
"""
    This function tests that the Flask application returns a
    correct response code when the application goes live
"""
response = app.test_client().get("/")
assert response.status_code == 200
def test2():
"""A dummy docstring"""
response = app.test_client().get("/edit")
assert response.status_code == 200
def test3():
"""A dummy docstring"""
response = app.test_client().get("/edit")
assert b"To Do App" in response.data
assert b"To Do Title" in response.data
assert b"Add" in response.data
| 23.8 | 57 | 0.636975 | 82 | 595 | 4.560976 | 0.45122 | 0.088235 | 0.120321 | 0.168449 | 0.582888 | 0.406417 | 0.262032 | 0.262032 | 0.262032 | 0.262032 | 0 | 0.019956 | 0.242017 | 595 | 25 | 58 | 23.8 | 0.809313 | 0.238655 | 0 | 0.333333 | 0 | 0 | 0.080189 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1c12eb72b957a807ce5688e024cf15ee45589047 | 1,256 | py | Python | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/GLES2/OES/fbo_render_mipmap.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/GLES2/OES/fbo_render_mipmap.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/GLES2/OES/fbo_render_mipmap.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | '''OpenGL extension OES.fbo_render_mipmap
This module customises the behaviour of the
OpenGL.raw.GLES2.OES.fbo_render_mipmap to provide a more
Python-friendly API
Overview (from the spec)
OES_framebuffer_object allows rendering to the base level of a
texture only. This extension removes this limitation by
allowing implementations to support rendering to any mip-level
of a texture(s) that is attached to a framebuffer object(s).
If this extension is supported, FramebufferTexture2DOES, and
FramebufferTexture3DOES can be used to render directly into
any mip level of a texture image
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/OES/fbo_render_mipmap.txt
'''
from OpenGL import platform, constant, arrays
from OpenGL import extensions, wrapper
import ctypes
from OpenGL.raw.GLES2 import _types, _glgets
from OpenGL.raw.GLES2.OES.fbo_render_mipmap import *
from OpenGL.raw.GLES2.OES.fbo_render_mipmap import _EXTENSION_NAME
def glInitFboRenderMipmapOES():
'''Return boolean indicating whether this extension is available'''
from OpenGL import extensions
return extensions.hasGLExtension( _EXTENSION_NAME )
### END AUTOGENERATED SECTION | 36.941176 | 72 | 0.786624 | 175 | 1,256 | 5.542857 | 0.485714 | 0.061856 | 0.061856 | 0.092784 | 0.162887 | 0.162887 | 0.119588 | 0.086598 | 0.086598 | 0 | 0 | 0.005703 | 0.16242 | 1,256 | 34 | 73 | 36.941176 | 0.91635 | 0.698248 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0.777778 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
1c27fcdd05db513ebf2533fb78d8a688c277b9dc | 593 | py | Python | guided_diffusion/grad_reverse.py | ZGCTroy/guided-diffusion | af987bb2b65db2875148a5466df79736ea5ae6a1 | [
"MIT"
] | null | null | null | guided_diffusion/grad_reverse.py | ZGCTroy/guided-diffusion | af987bb2b65db2875148a5466df79736ea5ae6a1 | [
"MIT"
] | null | null | null | guided_diffusion/grad_reverse.py | ZGCTroy/guided-diffusion | af987bb2b65db2875148a5466df79736ea5ae6a1 | [
"MIT"
] | null | null | null | from torch.autograd import Function
# class GradReverse(Function):
# def __init__(self, lambd):
# self.lambd = lambd
#
# def forward(self, x):
# return x.view_as(x)
#
# def backward(self, grad_output):
# return (grad_output * -self.lambd)
#
#
# def grad_reverse(x, lambd=1.0):
# return GradReverse(lambd)(x)
class GradReverse(Function):
@staticmethod
def forward(ctx, x):
return x.view_as(x)
@staticmethod
def backward(ctx, grad_output):
return grad_output.neg()
def grad_reverse(x):
return GradReverse.apply(x) | 20.448276 | 44 | 0.637437 | 76 | 593 | 4.815789 | 0.342105 | 0.10929 | 0.131148 | 0.065574 | 0.224044 | 0.081967 | 0 | 0 | 0 | 0 | 0 | 0.004425 | 0.237774 | 593 | 29 | 45 | 20.448276 | 0.80531 | 0.480607 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0.3 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
1c3c59272f0b216c9e1a63e8019a1742ca1d1695 | 418 | py | Python | jobs/models.py | bharatchanddandamudi/Bharatchand_Portfolio_V1.1-deploy- | 21205ec9d3263463b43422b4679b9736142f7308 | [
"MIT"
] | null | null | null | jobs/models.py | bharatchanddandamudi/Bharatchand_Portfolio_V1.1-deploy- | 21205ec9d3263463b43422b4679b9736142f7308 | [
"MIT"
] | null | null | null | jobs/models.py | bharatchanddandamudi/Bharatchand_Portfolio_V1.1-deploy- | 21205ec9d3263463b43422b4679b9736142f7308 | [
"MIT"
] | null | null | null | from django.db import models
from django.urls import reverse
# Create your models here.
class Job(models.Model):
image = models.ImageField(upload_to='images/')
summary = models.CharField(max_length=5000)
# video = models.FileField(upload_to='images/',null=True)
def __str__(self):
return self.summary
def get_absolute_url(self):
return reverse('links', kwargs={"pk": self.pk}) | 24.588235 | 61 | 0.696172 | 56 | 418 | 5.035714 | 0.660714 | 0.070922 | 0.099291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011696 | 0.181818 | 418 | 17 | 62 | 24.588235 | 0.812866 | 0.191388 | 0 | 0 | 0 | 0 | 0.041791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0.222222 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
1c572b006232931c609ddc348fb2c6139f59f91a | 4,325 | py | Python | pyStorageBackend/__init__.py | tantalum7/pyStorageBackend | 948785a9431c4013a476341d3e3fc773ba0612eb | [
"WTFPL"
] | null | null | null | pyStorageBackend/__init__.py | tantalum7/pyStorageBackend | 948785a9431c4013a476341d3e3fc773ba0612eb | [
"WTFPL"
] | null | null | null | pyStorageBackend/__init__.py | tantalum7/pyStorageBackend | 948785a9431c4013a476341d3e3fc773ba0612eb | [
"WTFPL"
] | null | null | null |
# Project imports
from pyStorageBackend.uid import UID
from pyStorageBackend.generic_backend import GenericBackend
# Exceptions
class InvalidKeyException(Exception): pass
class InvalidUIDException(Exception): pass
class InvalidDataException(Exception): pass
class DocumentNotFoundException(Exception): pass
class Storage:
MAX_KEY_LENGTH = 32
MAX_DATA_LENGTH = 65536
def __init__(self, backend, settings: dict):
        """
        Thin wrapper around the specific GenericBackend implementation in use.

        :param backend: GenericBackend implementation (class) to instantiate
        :param settings: Dict of settings passed to the backend implementation
        """
self._backend = backend(settings=settings)
def open(self):
"""
Opens the storage medium
:return:
"""
self._backend.open()
def close(self, options: dict=None):
"""
Safe-closes the storage medium
:param options: Optional dict passed with options specific to the backend implementation
"""
self._backend.close(options=options)
def get(self, uid: UID, key: str) -> bytes:
"""
Retrieves a value for the key given, within the document specified by the uid
:param uid: UID of the document to access
:param key: Key string to retrieve value of
:return: Data bytes stored at key, or None
"""
self._validate(key=key, uid=uid)
return self._backend.get(uid=uid, key=key)
def get_document(self, uid: UID) -> dict:
"""
Retrieves the entire document with the given uid (dict of key:value pairs)
:param uid: UID of the document to retrieve
:return: Dict of key:value pairs for the document
"""
self._validate(uid=uid)
return self._backend.get_document(uid=uid)
def put(self, uid: UID, key, data):
"""
Stores data bytes for the key given, in the document with the uid specified.
:param uid: UID of document to store in
:param key: Key string to store data against
:param data: Data bytes to store
"""
self._validate(uid=uid, key=key, data=data)
self._backend.put(uid=uid, key=key, data=data)
def delete(self, uid: UID, key):
"""
Deletes a key:value pair in the document with the uid specified
Fails silently if the key doesn't exist
:param uid: UID of the document to operate on
:param key: Key string of the key:value pair to delete
"""
self._validate(uid=uid, key=key)
self._backend.delete(uid=uid, key=key)
def delete_document(self, uid):
"""
Deletes the entire document with the uid given
:param uid: UID of the document to delete
"""
self._validate(uid=uid)
self._backend.delete_document(uid=uid)
def sync(self, options=None):
"""
Triggers a synchronisation of the storage medium. Actual operation depends on the backend, but typically
storage writes should be considered volatile until sync() is called.
:param options: Optional dict of options related to sync(), dependant on backend implementation
:return:
"""
self._backend.sync(options=options)
def count(self, uid: UID) -> int:
"""
Returns the number of keys stored in a given document
:param uid: UID of document to count keys for
:return: Number of keys (int)
"""
self._validate(uid=uid)
return self._backend.count(uid=uid)
@staticmethod
def generate_uid():
"""
Generates a new, random uid. Each new uid is considered globally unique, using the uuid library.
        The UID class is just a wrapper around a 32-character uuid string
:return:
"""
return UID.new()
def _validate(self, key: str=None, uid: UID=None, data: bytes=None):
# Validate key
if key is not None:
if not isinstance(key, str) or len(key) == 0 or len(key) > self.MAX_KEY_LENGTH:
raise InvalidKeyException
# Validate uid
if uid is not None:
if not isinstance(uid, UID):
raise InvalidUIDException
# Validate data
if data is not None:
            if not isinstance(data, bytes) or len(data) > self.MAX_DATA_LENGTH:
raise InvalidDataException
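The wrapper expects the backend to expose `open`, `close`, `get`, `put`, `delete`, `get_document`, `count`, and `sync`. A minimal dict-backed stand-in (illustrative only, not the real `GenericBackend`) makes the call flow concrete:

```python
import uuid


class InMemoryBackend:
    """Dict-backed stand-in for GenericBackend (illustrative, not the real class)."""

    def __init__(self, settings=None):
        self._docs = {}

    def open(self):
        pass

    def close(self, options=None):
        pass

    def put(self, uid, key, data):
        self._docs.setdefault(uid, {})[key] = data

    def get(self, uid, key):
        return self._docs.get(uid, {}).get(key)

    def get_document(self, uid):
        return dict(self._docs.get(uid, {}))

    def delete(self, uid, key):
        self._docs.get(uid, {}).pop(key, None)

    def count(self, uid):
        return len(self._docs.get(uid, {}))

    def sync(self, options=None):
        pass


backend = InMemoryBackend(settings={})
backend.open()
uid = uuid.uuid4().hex            # stands in for UID.new()
backend.put(uid, "title", b"hello")
backend.put(uid, "body", b"world")
assert backend.get(uid, "title") == b"hello"
assert backend.count(uid) == 2
backend.delete(uid, "body")
assert backend.get_document(uid) == {"title": b"hello"}
backend.sync()
backend.close()
```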
| 33.527132 | 112 | 0.626821 | 557 | 4,325 | 4.804309 | 0.231598 | 0.056054 | 0.026906 | 0.029148 | 0.241779 | 0.18423 | 0.088939 | 0 | 0 | 0 | 0 | 0.00327 | 0.292948 | 4,325 | 128 | 113 | 33.789063 | 0.871812 | 0.405087 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0.041667 | 0 | 0.520833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
1c5c36b5c90c28b4a1c5c49c81d242e3804093dc | 146,324 | py | Python | abcvoting/abcrules.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | [
"MIT"
] | 2 | 2019-07-10T08:54:43.000Z | 2019-09-09T16:17:04.000Z | abcvoting/abcrules.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | [
"MIT"
] | 1 | 2019-09-11T21:29:47.000Z | 2019-09-11T21:29:47.000Z | abcvoting/abcrules.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | [
"MIT"
] | 1 | 2019-09-11T16:56:23.000Z | 2019-09-11T16:56:23.000Z | """Approval-based committee (ABC) voting rules."""
import functools
import itertools
import random
from fractions import Fraction
from abcvoting.output import output, DETAILS
from abcvoting import abcrules_gurobi, abcrules_ortools, abcrules_mip, misc, scores
from abcvoting.misc import str_committees_with_header, header, str_set_of_candidates
from abcvoting.misc import sorted_committees, CandidateSet
try:
from gmpy2 import mpq
except ImportError:
mpq = None
########################################################################
MAIN_RULE_IDS = [
"av",
"sav",
"pav",
"slav",
"cc",
"lexcc",
"geom2",
"seqpav",
"revseqpav",
"seqslav",
"seqcc",
"seqphragmen",
"minimaxphragmen",
"leximaxphragmen",
"monroe",
"greedy-monroe",
"minimaxav",
"lexminimaxav",
"rule-x",
"phragmen-enestroem",
"consensus-rule",
"trivial",
"rsd",
"eph",
]
"""
List of rule identifiers (`rule_id`) for the main ABC rules included in abcvoting.
This selection is somewhat arbitrary, but it contains all of the most important
rules that are implemented.
"""
ALGORITHM_NAMES = {
"gurobi": "Gurobi ILP solver",
"branch-and-bound": "branch-and-bound",
"brute-force": "brute-force",
"mip-cbc": "CBC ILP solver via Python MIP library",
"mip-gurobi": "Gurobi ILP solver via Python MIP library",
# "cvxpy_gurobi": "Gurobi ILP solver via CVXPY library",
# "cvxpy_scip": "SCIP ILP solver via CVXPY library",
# "cvxpy_glpk_mi": "GLPK ILP solver via CVXPY library",
# "cvxpy_cbc": "CBC ILP solver via CVXPY library",
"standard": "Standard algorithm",
"standard-fractions": "Standard algorithm (using standard Python fractions)",
"gmpy2-fractions": "Standard algorithm (using gmpy2 fractions)",
"float-fractions": "Standard algorithm (using floats instead of fractions)",
"ortools-cp": "OR-Tools CP-SAT solver",
}
"""
A dictionary containing mapping all valid algorithm identifiers to full names (i.e., descriptions).
"""
MAX_NUM_OF_COMMITTEES_DEFAULT = None
"""
The maximum number of committees that is returned by an ABC voting rule.
If `MAX_NUM_OF_COMMITTEES_DEFAULT` ist set to `None`, then there is no constraint
on the maximum number of committees.
Can be overridden with the parameter `max_num_of_committees` in any `compute` function.
"""
class Rule:
"""
A class that contains the main information about an ABC rule.
Parameters
----------
rule_id : str
The rule identifier.
"""
_THIELE_ALGORITHMS = (
# algorithms sorted by speed
"gurobi",
"mip-gurobi",
"mip-cbc",
"branch-and-bound",
"brute-force",
)
_RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES = (False, True)
_RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES = (True, False)
def __init__(
self,
rule_id,
):
self.rule_id = rule_id
if rule_id == "av":
self.shortname = "AV"
self.longname = "Approval Voting (AV)"
self.compute_fct = compute_av
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "sav":
self.shortname = "SAV"
self.longname = "Satisfaction Approval Voting (SAV)"
self.compute_fct = compute_sav
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "pav":
self.shortname = "PAV"
self.longname = "Proportional Approval Voting (PAV)"
self.compute_fct = compute_pav
self.algorithms = self._THIELE_ALGORITHMS
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "slav":
self.shortname = "SLAV"
self.longname = "Sainte-Laguë Approval Voting (SLAV)"
self.compute_fct = compute_slav
self.algorithms = self._THIELE_ALGORITHMS
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "cc":
self.shortname = "CC"
self.longname = "Approval Chamberlin-Courant (CC)"
self.compute_fct = compute_cc
self.algorithms = (
# algorithms sorted by speed
"gurobi",
"mip-gurobi",
"ortools-cp",
"branch-and-bound",
"brute-force",
"mip-cbc",
)
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "lexcc":
self.shortname = "lex-CC"
self.longname = "Lexicographic Chamberlin-Courant (lex-CC)"
self.compute_fct = compute_lexcc
# algorithms sorted by speed
self.algorithms = ("gurobi", "mip-gurobi", "brute-force", "mip-cbc")
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "seqpav":
self.shortname = "seq-PAV"
self.longname = "Sequential Proportional Approval Voting (seq-PAV)"
self.compute_fct = compute_seqpav
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "revseqpav":
self.shortname = "revseq-PAV"
self.longname = "Reverse Sequential Proportional Approval Voting (revseq-PAV)"
self.compute_fct = compute_revseqpav
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "seqslav":
self.shortname = "seq-SLAV"
self.longname = "Sequential Sainte-Laguë Approval Voting (seq-SLAV)"
self.compute_fct = compute_seqslav
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "seqcc":
self.shortname = "seq-CC"
self.longname = "Sequential Approval Chamberlin-Courant (seq-CC)"
self.compute_fct = compute_seqcc
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "seqphragmen":
self.shortname = "seq-Phragmén"
self.longname = "Phragmén's Sequential Rule (seq-Phragmén)"
self.compute_fct = compute_seqphragmen
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "minimaxphragmen":
self.shortname = "minimax-Phragmén"
self.longname = "Phragmén's Minimax Rule (minimax-Phragmén)"
self.compute_fct = compute_minimaxphragmen
self.algorithms = ("gurobi", "mip-gurobi", "mip-cbc")
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "leximaxphragmen":
self.shortname = "leximax-Phragmén"
self.longname = "Phragmén's Leximax Rule (leximax-Phragmén)"
self.compute_fct = compute_leximaxphragmen
self.algorithms = ("gurobi",)  # TODO: add "mip-gurobi", "mip-cbc"
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "monroe":
self.shortname = "Monroe"
self.longname = "Monroe's Approval Rule (Monroe)"
self.compute_fct = compute_monroe
self.algorithms = (
# algorithms sorted by speed
"gurobi",
"mip-gurobi",
"mip-cbc",
"ortools-cp",
"brute-force",
)
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "greedy-monroe":
self.shortname = "Greedy Monroe"
self.longname = "Greedy Monroe"
self.compute_fct = compute_greedy_monroe
self.algorithms = ("standard",)
self.resolute_values = (True,)
elif rule_id == "minimaxav":
self.shortname = "minimaxav"
self.longname = "Minimax Approval Voting (MAV)"
self.compute_fct = compute_minimaxav
self.algorithms = ("gurobi", "mip-gurobi", "ortools-cp", "mip-cbc", "brute-force")
# algorithms sorted by speed. however, for small profiles with a small committee size,
# brute-force is often the fastest
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "lexminimaxav":
self.shortname = "lex-MAV"
self.longname = "Lexicographic Minimax Approval Voting (lex-MAV)"
self.compute_fct = compute_lexminimaxav
self.algorithms = ("gurobi", "brute-force")
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "rule-x":
self.shortname = "Rule X"
self.longname = "Rule X (aka Method of Equal Shares)"
self.compute_fct = compute_rule_x
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "rule-x-without-phragmen-phase":
self.shortname = "Rule X without Phragmén phase"
self.longname = "Rule X without the Phragmén phase (second phase)"
self.compute_fct = functools.partial(compute_rule_x, skip_phragmen_phase=True)
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "phragmen-enestroem":
self.shortname = "Phragmén-Eneström"
self.longname = "Method of Phragmén-Eneström"
self.compute_fct = compute_phragmen_enestroem
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "consensus-rule":
self.shortname = "Consensus Rule"
self.longname = "Consensus Rule"
self.compute_fct = compute_consensus_rule
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
elif rule_id == "trivial":
self.shortname = "Trivial Rule"
self.longname = "Trivial Rule"
self.compute_fct = compute_trivial_rule
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id == "rsd":
self.shortname = "Random Serial Dictator"
self.longname = "Random Serial Dictator"
self.compute_fct = compute_rsd
self.algorithms = ("standard",)
self.resolute_values = (True,)
elif rule_id == "eph":
self.shortname = "E Pluribus Hugo"
self.longname = "E Pluribus Hugo (EPH)"
self.compute_fct = compute_eph
self.algorithms = ("float-fractions", "gmpy2-fractions", "standard-fractions")
self.resolute_values = (False, True)
elif rule_id.startswith("geom"):
parameter = rule_id[4:]
self.shortname = f"{parameter}-Geometric"
self.longname = f"{parameter}-Geometric Rule"
self.compute_fct = functools.partial(compute_thiele_method, rule_id)
self.algorithms = self._THIELE_ALGORITHMS
self.resolute_values = self._RESOLUTE_VALUES_FOR_OPTIMIZATION_BASED_RULES
elif rule_id.startswith("seq") or rule_id.startswith("revseq"):
# handle sequential and reverse sequential Thiele methods
# that are not explicitly included in the list above
if rule_id.startswith("seq"):
scorefct_id = rule_id[3:] # score function id of Thiele method
else:
scorefct_id = rule_id[6:] # score function id of Thiele method
try:
scores.get_marginal_scorefct(scorefct_id)
except scores.UnknownScoreFunctionError as error:
raise UnknownRuleIDError(rule_id) from error
if scorefct_id == "av":
raise UnknownRuleIDError(rule_id)  # seq-AV and revseq-AV are equivalent to AV
# sequential Thiele methods
optrule = Rule(scorefct_id)
if rule_id.startswith("seq"):
self.shortname = f"seq-{optrule.shortname}"
self.longname = f"Sequential {optrule.longname}"
self.compute_fct = functools.partial(compute_seq_thiele_method, scorefct_id)
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
# reverse sequential Thiele methods
elif rule_id.startswith("revseq"):
self.shortname = f"revseq-{optrule.shortname}"
self.longname = f"Reverse Sequential {optrule.longname}"
self.compute_fct = functools.partial(compute_revseq_thiele_method, scorefct_id)
self.algorithms = ("standard",)
self.resolute_values = self._RESOLUTE_VALUES_FOR_SEQUENTIAL_RULES
else:
raise UnknownRuleIDError(rule_id)
# find all *available* algorithms for this ABC rule
self.available_algorithms = []
for algorithm in self.algorithms:
if algorithm in available_algorithms:
self.available_algorithms.append(algorithm)
def fastest_available_algorithm(self):
"""
Return the fastest algorithm for this rule that is available on this system.
An algorithm may not be available because its requirements are not satisfied. For example,
some algorithms require Gurobi, others require gmpy2; neither is a hard requirement
of abcvoting.
Returns
-------
str
"""
if self.available_algorithms:
# This rests on the assumption that ``self.algorithms`` are sorted by speed.
return self.available_algorithms[0]
raise NoAvailableAlgorithm(self.rule_id, self.algorithms)
def compute(self, profile, committeesize, **kwargs):
"""
Compute the rule using `self.compute_fct`.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
**kwargs : dict
Optional arguments for computing the rule (e.g., `resolute`).
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return self.compute_fct(profile, committeesize, **kwargs)
def verify_compute_parameters(
self,
profile,
committeesize,
algorithm,
resolute,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Basic checks for parameter values when computing an ABC rule.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
resolute : bool
Return only one winning committee.
If `resolute=False`, all winning committees are computed
(subject to `max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not
restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
bool
"""
if committeesize < 1:
raise ValueError("Parameter `committeesize` must be a positive integer.")
if committeesize > profile.num_cand:
raise ValueError(
"Parameter `committeesize` must be less than or equal to"
" the total number of candidates."
)
if len(profile) == 0:
raise ValueError("The given profile contains no voters (len(profile) == 0).")
if algorithm not in self.algorithms:
raise UnknownAlgorithm(self.rule_id, algorithm)
if resolute not in self.resolute_values:
raise NotImplementedError(
f'ABC rule with rule_id "{self.rule_id}" does not support resolute={resolute}.'
)
if (max_num_of_committees is not None and not isinstance(max_num_of_committees, int)) or (
max_num_of_committees is not None and max_num_of_committees < 1
):
raise ValueError(
"Parameter `max_num_of_committees` must be None or a positive integer."
)
if max_num_of_committees is not None and resolute:
raise ValueError(
"Parameter `max_num_of_committees` cannot be used when `resolute` is set to True."
)
class UnknownRuleIDError(ValueError):
"""
Error: unknown rule id.
Parameters
----------
rule_id : str
The unknown rule identifier.
"""
def __init__(self, rule_id):
message = f'Rule ID "{rule_id}" is not known.'
super().__init__(message)
class UnknownAlgorithm(ValueError):
"""
Error: unknown algorithm for a given ABC rule.
Parameters
----------
rule_id : str
The ABC rule for which the algorithm is not known.
algorithm : str
The unknown algorithm.
"""
def __init__(self, rule_id, algorithm):
message = f'Algorithm "{algorithm}" is not known for ABC rule "{rule_id}".'
super().__init__(message)
class NoAvailableAlgorithm(ValueError):
"""
Exception: none of the implemented algorithms are available.
This error occurs when the solvers or modules required by the rule's algorithms
are not installed.
Parameters
----------
rule_id : str
The ABC rule for which no algorithms are available.
algorithms : tuple of str
List of algorithms for this rule (none of which are available).
"""
def __init__(self, rule_id, algorithms):
message = (
f"None of the implemented algorithms are available for ABC rule {rule_id}\n"
f"(the solvers or modules required by the following algorithms are not "
f"installed: {algorithms})"
)
super().__init__(message)
def _available_algorithms():
"""Verify which algorithms are supported on the current machine.
This is done by verifying that the required modules and solvers are available.
"""
available = []
for algorithm in ALGORITHM_NAMES:
if "gurobi" in algorithm and not abcrules_gurobi.gb:
continue
if algorithm == "gmpy2-fractions" and not mpq:
continue
available.append(algorithm)
return available
available_algorithms = _available_algorithms()
def get_rule(rule_id):
"""
Get instance of `Rule` for the ABC rule specified by `rule_id`.
.. deprecated:: 2.3.0
Function `get_rule(rule_id)` is deprecated, use `Rule(rule_id)` instead.
Parameters
----------
rule_id : str
The rule identifier.
Returns
-------
Rule
A corresponding `Rule` object.
"""
return Rule(rule_id)
########################################################################
def compute(rule_id, profile, committeesize, result=None, **kwargs):
"""
Compute winning committees with an ABC rule given by `rule_id`.
Parameters
----------
rule_id : str
The rule identifier.
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
result : list of CandidateSet, optional
Expected winning committees.
This is used in unit tests to verify correctness. Raises `ValueError` if
`result` is different from actual winning committees.
**kwargs : dict
Optional arguments for computing the rule (e.g., `resolute`).
Returns
-------
list of CandidateSet
A list of the winning committees.
If `resolute=True`, the list contains only one winning committee.
"""
rule = Rule(rule_id)
committees = rule.compute(profile=profile, committeesize=committeesize, **kwargs)
if result is not None:
# verify that the parameter `result` is indeed the result of computing the ABC rule
resolute = kwargs.get("resolute", rule.resolute_values[0])
misc.verify_expected_committees_equals_actual_committees(
actual_committees=committees,
expected_committees=result,
resolute=resolute,
shortname=rule.shortname,
)
return committees
def compute_thiele_method(
scorefct_id,
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Thiele methods.
Compute winning committees according to a Thiele method specified
by a score function (scorefct_id).
Examples of Thiele methods are PAV, CC, and SLAV.
An exception is Approval Voting (AV), which should be computed using
compute_av(). AV is separable and therefore polynomial-time computable,
so it can be computed much faster.
For a mathematical description of Thiele methods, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
scorefct_id : str
A string identifying the score function that defines the Thiele method.
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
If `resolute=True`, the list contains only one winning committee.
"""
rule = Rule(scorefct_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "gurobi":
committees = abcrules_gurobi._gurobi_thiele_methods(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "branch-and-bound":
committees, detailed_info = _thiele_methods_branchandbound(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "brute-force":
committees, detailed_info = _thiele_methods_bruteforce(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm.startswith("mip-"):
committees = abcrules_mip._mip_thiele_methods(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
solver_id=algorithm[4:],
)
elif algorithm == "ortools-cp" and scorefct_id == "cc":
committees = abcrules_ortools._ortools_cc(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(scorefct_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.details(
f"Optimal {scorefct_id.upper()}-score: "
f"{scores.thiele_score(scorefct_id, profile, committees[0])}\n"
)
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def _thiele_methods_bruteforce(
scorefct_id,
profile,
committeesize,
resolute,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Brute-force algorithm for Thiele methods (PAV, CC, etc.).
Only intended for comparison; much slower than _thiele_methods_branchandbound().
"""
opt_committees = []
opt_thiele_score = -1
for committee in itertools.combinations(profile.candidates, committeesize):
score = scores.thiele_score(scorefct_id, profile, committee)
if score > opt_thiele_score:
opt_committees = [committee]
opt_thiele_score = score
elif score == opt_thiele_score:
if not resolute:
opt_committees.append(committee)
committees = sorted_committees(opt_committees)
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
detailed_info = {}
if resolute:
committees = [committees[0]]
return committees, detailed_info
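# The brute-force idea above can be illustrated with a standalone, stdlib-only
# sketch that does not depend on abcvoting's Profile class: a profile is simply
# a list of approval sets, and `pav_score`/`bruteforce_pav` are illustrative
# names for this sketch only.

```python
from fractions import Fraction
from itertools import combinations


def pav_score(profile, committee):
    # PAV gives each voter 1 + 1/2 + ... + 1/t, where t is the number of
    # approved candidates in the committee
    total = Fraction(0)
    for approved in profile:
        t = len(approved & set(committee))
        total += sum(Fraction(1, i) for i in range(1, t + 1))
    return total


def bruteforce_pav(profile, num_cand, committeesize):
    # enumerate all committees of the given size and keep the score-maximal ones
    best_score, best_committees = Fraction(-1), []
    for committee in combinations(range(num_cand), committeesize):
        score = pav_score(profile, committee)
        if score > best_score:
            best_score, best_committees = score, [committee]
        elif score == best_score:
            best_committees.append(committee)
    return best_committees


# three voters with approval sets {0,1}, {0,1}, {0,2}
print(bruteforce_pav([{0, 1}, {0, 1}, {0, 2}], num_cand=3, committeesize=2))
# -> [(0, 1)]  (score 4, versus 7/2 for {0,2} and 3 for {1,2})
```

# Exact Fractions are used so that tied scores compare exactly, mirroring the
# library's use of exact arithmetic for score comparisons.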
def _thiele_methods_branchandbound(
scorefct_id,
profile,
committeesize,
resolute,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Branch-and-bound algorithm for Thiele methods.
"""
marginal_scorefct = scores.get_marginal_scorefct(scorefct_id, committeesize)
best_committees = []
init_com, _ = _seq_thiele_resolute(scorefct_id, profile, committeesize)
init_com = init_com[0]
best_score = scores.thiele_score(scorefct_id, profile, init_com)
part_coms = [[]]
while part_coms:
part_com = part_coms.pop(0)
# potential committee, check if at least as good
# as previous best committee
if len(part_com) == committeesize:
score = scores.thiele_score(scorefct_id, profile, part_com)
if score == best_score:
best_committees.append(part_com)
elif score > best_score:
best_committees = [part_com]
best_score = score
else:
if len(part_com) > 0:
largest_cand = part_com[-1]
else:
largest_cand = -1
missing = committeesize - len(part_com)
marg_util_cand = scores.marginal_thiele_scores_add(
marginal_scorefct, profile, part_com
)
upper_bound = sum(
sorted(marg_util_cand[largest_cand + 1 :])[-missing:]
) + scores.thiele_score(scorefct_id, profile, part_com)
if upper_bound >= best_score:
for cand in range(largest_cand + 1, profile.num_cand - missing + 1):
part_coms.insert(0, part_com + [cand])
committees = sorted_committees(best_committees)
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
if resolute:
committees = [committees[0]]
detailed_info = {}
return committees, detailed_info
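# The same pruning idea in a self-contained sketch (illustrative names, PAV as
# the Thiele method): a greedy committee provides the initial lower bound, and a
# partial committee is pruned when its score plus the `missing` largest marginal
# gains cannot reach that bound.

```python
from fractions import Fraction


def pav_score(profile, committee):
    total = Fraction(0)
    for approved in profile:
        t = len(approved & set(committee))
        total += sum(Fraction(1, i) for i in range(1, t + 1))
    return total


def marginal_gain(profile, cand, committee):
    # increase of the PAV score when `cand` is added to `committee`
    gain = Fraction(0)
    for approved in profile:
        if cand in approved:
            gain += Fraction(1, len(approved & set(committee)) + 1)
    return gain


def branch_and_bound_pav(profile, num_cand, committeesize):
    # a greedy (seq-PAV-style) committee provides the initial lower bound
    greedy = []
    for _ in range(committeesize):
        greedy.append(
            max(
                (c for c in range(num_cand) if c not in greedy),
                key=lambda c: (marginal_gain(profile, c, greedy), -c),
            )
        )
    best_score, best_committees = pav_score(profile, greedy), []

    part_coms = [[]]  # depth-first search over partial committees
    while part_coms:
        part_com = part_coms.pop(0)
        if len(part_com) == committeesize:
            score = pav_score(profile, part_com)
            if score > best_score:
                best_score, best_committees = score, [part_com]
            elif score == best_score:
                best_committees.append(part_com)
            continue
        largest_cand = part_com[-1] if part_com else -1
        missing = committeesize - len(part_com)
        # upper bound: current score plus the `missing` largest marginal gains
        gains = sorted(
            marginal_gain(profile, c, part_com)
            for c in range(largest_cand + 1, num_cand)
        )
        upper_bound = pav_score(profile, part_com) + sum(gains[-missing:])
        if upper_bound >= best_score:  # otherwise prune this branch
            for cand in range(largest_cand + 1, num_cand - missing + 1):
                part_coms.insert(0, part_com + [cand])
    return sorted(tuple(com) for com in best_committees)
```

# Committees are extended only with larger candidate indices, so each set is
# enumerated exactly once, just as in the function above.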
def compute_pav(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Proportional Approval Voting (PAV).
This ABC rule belongs to the class of Thiele methods.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for PAV:
.. doctest::
>>> Rule("pav").algorithms
('gurobi', 'mip-gurobi', 'mip-cbc', 'branch-and-bound', 'brute-force')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_thiele_method(
scorefct_id="pav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_slav(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Sainte-Laguë Approval Voting (SLAV).
This ABC rule belongs to the class of Thiele methods.
For a mathematical description of this rule, see e.g.
Martin Lackner and Piotr Skowron
Utilitarian Welfare and Representation Guarantees of Approval-Based Multiwinner Rules
In Artificial Intelligence, 288: 103366, 2020.
<https://arxiv.org/abs/1801.01527>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for SLAV:
.. doctest::
>>> Rule("slav").algorithms
('gurobi', 'mip-gurobi', 'mip-cbc', 'branch-and-bound', 'brute-force')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_thiele_method(
scorefct_id="slav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_cc(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Approval Chamberlin-Courant (CC).
This ABC rule belongs to the class of Thiele methods.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Approval Chamberlin-Courant (CC):
.. doctest::
>>> Rule("cc").algorithms # doctest: +NORMALIZE_WHITESPACE
('gurobi', 'mip-gurobi', 'ortools-cp', 'branch-and-bound', 'brute-force',
'mip-cbc')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_thiele_method(
scorefct_id="cc",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_lexcc(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Lexicographic Chamberlin-Courant (lex-CC).
This ABC rule is a lexicographic variant of Approval Chamberlin-Courant (CC). It maximizes the
CC score, i.e., the number of voters with at least one approved
candidate in the winning committee. If there is more than one such committee, it chooses the
committee in which the most voters have at least two approved candidates. This
tie-breaking continues with values of 3, 4, ..., k if necessary.
This rule can be seen as an analogue of the leximin social welfare ordering for utility
functions.
.. important::
Very slow due to lexicographic optimization.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Lexicographic Chamberlin-Courant (lex-CC):
.. doctest::
>>> Rule("lexcc").algorithms
('gurobi', 'mip-gurobi', 'brute-force', 'mip-cbc')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "lexcc"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "brute-force":
committees, detailed_info = _lexcc_bruteforce(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "gurobi":
committees, detailed_info = abcrules_gurobi._gurobi_lexcc(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm.startswith("mip-"):
committees, detailed_info = abcrules_mip._mip_lexcc(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
solver_id=algorithm[4:],
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.details("At-least-ell scores:")
for ell, score in enumerate(detailed_info["opt_score_vector"]):
output.details(f"at-least-{ell+1}: {score}", indent=" ")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def _lexcc_bruteforce(profile, committeesize, resolute, max_num_of_committees):
opt_committees = []
opt_score_vector = [0] * committeesize
for committee in itertools.combinations(profile.candidates, committeesize):
score_vector = [
scores.thiele_score(f"atleast{ell}", profile, committee)
for ell in range(1, committeesize + 1)
]
for i in range(committeesize):
if opt_score_vector[i] > score_vector[i]:
break
if opt_score_vector[i] < score_vector[i]:
opt_score_vector = score_vector
opt_committees = [committee]
break
else:
opt_committees.append(committee)
committees = sorted_committees(opt_committees)
detailed_info = {"opt_score_vector": opt_score_vector}
if resolute:
committees = [committees[0]]
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
return committees, detailed_info
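# The score-vector comparison performed above can be seen in isolation with a
# small stdlib-only sketch (function names are illustrative): Python's tuple
# comparison is exactly the lexicographic order that lex-CC uses.

```python
from itertools import combinations


def atleast_vector(profile, committee, committeesize):
    # entry ell-1 counts voters with at least ell approved committee members
    return tuple(
        sum(1 for approved in profile if len(approved & set(committee)) >= ell)
        for ell in range(1, committeesize + 1)
    )


def lexcc_bruteforce(profile, num_cand, committeesize):
    best_vector, best_committees = None, []
    for committee in combinations(range(num_cand), committeesize):
        vector = atleast_vector(profile, committee, committeesize)
        # tuple comparison in Python is lexicographic
        if best_vector is None or vector > best_vector:
            best_vector, best_committees = vector, [committee]
        elif vector == best_vector:
            best_committees.append(committee)
    return best_committees, best_vector


# all three committees cover 3 voters (a CC tie), but {0,1} additionally gives
# one voter two approved members, so lex-CC breaks the tie in its favor
profile = [{0, 1}, {0}, {1}, {2}]
print(lexcc_bruteforce(profile, num_cand=3, committeesize=2))
# -> ([(0, 1)], (3, 1))
```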
def compute_seq_thiele_method(
scorefct_id,
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Sequential Thiele methods.
For a mathematical description of these rules, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
scorefct_id : str
A string identifying the score function that defines the Thiele method.
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
scores.get_marginal_scorefct(scorefct_id, committeesize) # check that `scorefct_id` is valid
rule_id = "seq" + scorefct_id
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "standard":
if resolute:
committees, detailed_info = _seq_thiele_resolute(scorefct_id, profile, committeesize)
else:
committees, detailed_info = _seq_thiele_irresolute(
scorefct_id, profile, committeesize, max_num_of_committees
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if not resolute:
output.info("Computing all possible winning committees for any tiebreaking order")
output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
if output.verbosity <= DETAILS: # skip thiele_score() calculations if not necessary
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
if resolute:
output.details(
f"starting with the empty committee (score = "
f"{scores.thiele_score(scorefct_id, profile, [])})\n"
)
committee = []
for i, next_cand in enumerate(detailed_info["next_cand"]):
tied_cands = detailed_info["tied_cands"][i]
delta_score = detailed_info["delta_score"][i]
committee.append(next_cand)
output.details(f"adding candidate number {i+1}: {profile.cand_names[next_cand]}")
output.details(
f"score increases by {delta_score} to"
f" a total of {scores.thiele_score(scorefct_id, profile, committee)}",
indent=" ",
)
if len(tied_cands) > 1:
output.details(f"tie broken in favor of {profile.cand_names[next_cand]},\n", indent=" ")
output.details(
f"candidates "
f"{str_set_of_candidates(tied_cands, cand_names=profile.cand_names)} "
"are tied"
)
output.details(
f"(all would increase the score by the same amount {delta_score})",
indent=" ",
)
output.details("")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
if output.verbosity <= DETAILS: # skip thiele_score() calculations if not necessary
output.details(scorefct_id.upper() + "-score of winning committee(s):")
for committee in committees:
output.details(
f"{str_set_of_candidates(committee, cand_names=profile.cand_names)}: "
f"{scores.thiele_score(scorefct_id, profile, committee)}",
indent=" ",
)
output.details("\n")
# end of optional output
return sorted_committees(committees)
def _seq_thiele_resolute(scorefct_id, profile, committeesize):
"""Compute one winning committee (=resolute) for sequential Thiele methods.
Ties between candidates are broken in favor of the candidate with the
smaller index (i.e., lower-numbered candidates are added first).
"""
committee = []
marginal_scorefct = scores.get_marginal_scorefct(scorefct_id, committeesize)
detailed_info = {"next_cand": [], "tied_cands": [], "delta_score": []}
# build a committee starting with the empty set
for _ in range(committeesize):
additional_score_cand = scores.marginal_thiele_scores_add(
marginal_scorefct, profile, committee
)
tied_cands = [
cand
for cand in range(len(additional_score_cand))
if additional_score_cand[cand] == max(additional_score_cand)
]
next_cand = tied_cands[0] # tiebreaking in favor of candidate with smallest index
committee.append(next_cand)
detailed_info["next_cand"].append(next_cand)
detailed_info["tied_cands"].append(tied_cands)
detailed_info["delta_score"].append(max(additional_score_cand))
return sorted_committees([committee]), detailed_info
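# The greedy loop above, specialized to PAV, can be sketched as a standalone
# function (illustrative names, profiles as lists of approval sets):

```python
from fractions import Fraction


def seq_pav(profile, num_cand, committeesize):
    committee = []
    for _ in range(committeesize):
        gains = []
        for cand in range(num_cand):
            if cand in committee:
                gains.append(Fraction(-1))  # never re-add a committee member
                continue
            # marginal PAV gain of adding `cand` to the current committee
            gain = Fraction(0)
            for approved in profile:
                if cand in approved:
                    gain += Fraction(1, len(approved & set(committee)) + 1)
            gains.append(gain)
        # max() returns the first maximum, i.e., the tied candidate with the
        # smallest index -- the same tiebreaking as in _seq_thiele_resolute()
        committee.append(max(range(num_cand), key=lambda c: gains[c]))
    return sorted(committee)


print(seq_pav([{0, 1}, {0, 1}, {0, 2}], num_cand=3, committeesize=2))
# -> [0, 1]
```

# Note that seq-PAV is a heuristic for PAV: its committee need not maximize the
# PAV score in general, which is why the branch-and-bound algorithm uses it only
# as an initial bound.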
def _seq_thiele_irresolute(scorefct_id, profile, committeesize, max_num_of_committees):
"""Compute all winning committees (=irresolute) for sequential Thiele methods.
Consider all possible ways to break ties between candidates
(aka parallel universe tiebreaking)
"""
marginal_scorefct = scores.get_marginal_scorefct(scorefct_id, committeesize)
# build committees starting with the empty set
partial_committees = [()]
winning_committees = set()
while partial_committees:
new_partial_committees = []
committee = partial_committees.pop()
# marginal utility gained by adding candidate to the committee
additional_score_cand = scores.marginal_thiele_scores_add(
marginal_scorefct, profile, committee
)
for cand in profile.candidates:
if additional_score_cand[cand] >= max(additional_score_cand):
new_committee = committee + (cand,)
if len(new_committee) == committeesize:
new_committee = tuple(sorted(new_committee))
winning_committees.add(new_committee) # remove duplicate committees
if (
max_num_of_committees is not None
and len(winning_committees) == max_num_of_committees
):
# sufficiently many winning committees found
detailed_info = {}
return sorted_committees(winning_committees), detailed_info
else:
# partial committee
new_partial_committees.append(new_committee)
# add new partial committees in reversed order, so that tiebreaking is correct
partial_committees += reversed(new_partial_committees)
detailed_info = {}
return sorted_committees(winning_committees), detailed_info
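The parallel-universe search above can be sketched with a stripped-down stand-alone version (names illustrative, not the abcvoting API), using the AV score, whose marginal gain is simply a candidate's approval count:

```python
def all_seq_av_committees(approval_sets, num_cand, committeesize):
    # depth-first search over all tie-breaking orders; complete committees
    # are stored as sorted tuples in a set, which weeds out duplicates
    partial = [()]
    winning = set()
    while partial:
        committee = partial.pop()
        gains = [
            -1 if c in committee else sum(1 for a in approval_sets if c in a)
            for c in range(num_cand)
        ]
        best = max(gains)
        for cand in range(num_cand):
            if gains[cand] == best:
                extended = committee + (cand,)
                if len(extended) == committeesize:
                    winning.add(tuple(sorted(extended)))
                else:
                    partial.append(extended)
    return sorted(winning)

# a perfect tie between candidates 0 and 1 yields two winning committees
print(all_seq_av_committees([{0}, {1}], num_cand=3, committeesize=1))  # → [(0,), (1,)]
```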
# Sequential PAV
def compute_seqpav(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Sequential PAV (seq-PAV).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Sequential PAV:
.. doctest::
>>> Rule("seqpav").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_seq_thiele_method(
scorefct_id="pav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_seqslav(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
    Sequential Sainte-Laguë Approval Voting (seq-SLAV).
For a mathematical description of SLAV, see e.g.
Martin Lackner and Piotr Skowron
Utilitarian Welfare and Representation Guarantees of Approval-Based Multiwinner Rules
In Artificial Intelligence, 288: 103366, 2020.
<https://arxiv.org/abs/1801.01527>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Sequential SLAV:
.. doctest::
>>> Rule("seqslav").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_seq_thiele_method(
scorefct_id="slav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_seqcc(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Sequential Chamberlin-Courant (seq-CC).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Sequential CC:
.. doctest::
>>> Rule("seqcc").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_seq_thiele_method(
scorefct_id="cc",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_revseq_thiele_method(
scorefct_id,
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Reverse sequential Thiele methods.
For a mathematical description of these rules, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
scorefct_id : str
A string identifying the score function that defines the Thiele method.
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
scores.get_marginal_scorefct(scorefct_id, committeesize) # check that scorefct_id is valid
rule_id = "revseq" + scorefct_id
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "standard":
if resolute:
committees, detailed_info = _revseq_thiele_resolute(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
)
else:
committees, detailed_info = _revseq_thiele_irresolute(
scorefct_id=scorefct_id,
profile=profile,
committeesize=committeesize,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if not resolute:
output.info("Computing all possible winning committees for any tiebreaking order")
output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
if resolute:
committee = set(profile.candidates)
output.details(
f"full committee ({len(committee)} candidates) has a total score of "
f"{scores.thiele_score(scorefct_id, profile, committee)}\n"
)
for i, next_cand in enumerate(detailed_info["next_cand"]):
committee.remove(next_cand)
tied_cands = detailed_info["tied_cands"][i]
delta_score = detailed_info["delta_score"][i]
output.details(
f"removing candidate number {profile.num_cand - len(committee)}: "
f"{profile.cand_names[next_cand]}"
)
output.details(
f"score decreases by {delta_score} to a total of "
f"{scores.thiele_score(scorefct_id, profile, committee)}",
indent=" ",
)
            if tied_cands:
                output.details(
                    f"tie broken to the disadvantage of {profile.cand_names[next_cand]},",
                    indent=" ",
                )
                output.details(
                    f"candidates "
                    f"{str_set_of_candidates(tied_cands, cand_names=profile.cand_names)}"
                    " are tied with it",
                    indent=" ",
                )
                output.details(
                    f"(all would decrease the score by the same amount {delta_score})", indent=" "
                )
output.details("")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
msg = "PAV-score of winning committee:"
if not resolute and len(committees) != 1:
msg += "\n"
for committee in committees:
msg += " " + str(scores.thiele_score(scorefct_id, profile, committee))
msg += "\n"
output.details(msg)
# end of optional output
return committees
def _revseq_thiele_resolute(scorefct_id, profile, committeesize):
"""Compute one winning committee (=resolute) for reverse sequential Thiele methods.
Tiebreaking between candidates in favor of candidate with smaller
number/index (candidates with smaller numbers are added first).
"""
marginal_scorefct = scores.get_marginal_scorefct(scorefct_id, committeesize)
committee = set(profile.candidates)
detailed_info = {"next_cand": [], "tied_cands": [], "delta_score": []}
for _ in range(profile.num_cand - committeesize):
marg_util_cand = scores.marginal_thiele_scores_remove(
marginal_scorefct, profile, committee
)
# find smallest elements in `marg_util_cand` and return indices
cands_to_remove = [
cand for cand in profile.candidates if marg_util_cand[cand] == min(marg_util_cand)
]
next_cand = cands_to_remove[-1]
tied_cands = cands_to_remove[:-1]
committee.remove(next_cand)
detailed_info["next_cand"].append(next_cand)
detailed_info["tied_cands"].append(tied_cands)
detailed_info["delta_score"].append(min(marg_util_cand))
return sorted_committees([committee]), detailed_info
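The removal loop above can be sketched with a toy reverse-sequential rule (illustrative, not the abcvoting API) using AV scores, where removing a candidate costs exactly its approval count; ties are broken by deleting the largest index first, matching `cands_to_remove[-1]` above.

```python
def revseq_av(approval_sets, num_cand, committeesize):
    committee = set(range(num_cand))
    while len(committee) > committeesize:
        # score lost by removing each remaining candidate (its AV score)
        loss = {c: sum(1 for a in approval_sets if c in a) for c in committee}
        min_loss = min(loss.values())
        # tiebreaking: the tied candidate with the largest index is deleted
        committee.remove(max(c for c in committee if loss[c] == min_loss))
    return sorted(committee)

voters = [{0, 1}, {0, 2}, {0}]
print(revseq_av(voters, num_cand=3, committeesize=1))  # → [0]
```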
def _revseq_thiele_irresolute(scorefct_id, profile, committeesize, max_num_of_committees):
"""
    Compute all winning committees (=irresolute) for reverse sequential Thiele methods.
    Consider all possible ways to break ties between candidates
    (aka parallel universe tiebreaking).
"""
marginal_scorefct = scores.get_marginal_scorefct(scorefct_id, committeesize)
full_committee = tuple(profile.candidates)
comm_scores = {full_committee: scores.thiele_score(scorefct_id, profile, full_committee)}
for _ in range(profile.num_cand - committeesize):
comm_scores_next = {}
for committee, score in comm_scores.items():
marg_util_cand = scores.marginal_thiele_scores_remove(
marginal_scorefct, profile, committee
)
score_reduction = min(marg_util_cand)
# find smallest elements in `marg_util_cand` and return indices
cands_to_remove = [
cand for cand in profile.candidates if marg_util_cand[cand] == min(marg_util_cand)
]
for cand in cands_to_remove:
next_committee = tuple(set(committee) - {cand})
comm_scores_next[next_committee] = score - score_reduction
comm_scores = comm_scores_next
committees = sorted_committees(list(comm_scores.keys()))
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
detailed_info = {}
return committees, detailed_info
# Reverse Sequential PAV
def compute_revseqpav(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Reverse Sequential PAV (revseq-PAV).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Reverse Sequential PAV:
.. doctest::
>>> Rule("revseqpav").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_revseq_thiele_method(
scorefct_id="pav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_separable_rule(
rule_id,
profile,
committeesize,
algorithm,
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Separable rules (such as AV and SAV).
For a mathematical description of separable rules (for ranking-based rules), see
E. Elkind, P. Faliszewski, P. Skowron, and A. Slinko.
Properties of multiwinner voting rules.
Social Choice and Welfare, 48(3):599–632, 2017.
<https://link.springer.com/article/10.1007/s00355-017-1026-z>
Parameters
----------
rule_id : str
The rule identifier for a separable rule.
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for AV:
.. doctest::
>>> Rule("av").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "standard":
committees, detailed_info = _separable_rule_algorithm(
rule_id=rule_id,
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
score = detailed_info["score"]
msg = "Scores of candidates:\n"
for cand in profile.candidates:
msg += (profile.cand_names[cand] + ": " + str(score[cand])) + "\n"
cutoff = detailed_info["cutoff"]
msg += "\nCandidates are contained in winning committees\n"
msg += "if their score is >= " + str(cutoff) + "."
output.details(msg)
certain_cands = detailed_info["certain_cands"]
if len(certain_cands) > 0:
msg = "\nThe following candidates are contained in\n"
msg += "every winning committee:\n"
namedset = [profile.cand_names[cand] for cand in certain_cands]
msg += (" " + ", ".join(map(str, namedset))) + "\n"
output.details(msg)
possible_cands = detailed_info["possible_cands"]
missing = detailed_info["missing"]
if len(possible_cands) > 0:
msg = "The following candidates are contained in\n"
msg += "some of the winning committees:\n"
namedset = [profile.cand_names[cand] for cand in possible_cands]
msg += (" " + ", ".join(map(str, namedset))) + "\n"
msg += f"({missing} of those candidates are contained\n in every winning committee.)\n"
output.details(msg)
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def _separable_rule_algorithm(rule_id, profile, committeesize, resolute, max_num_of_committees):
"""
Algorithm for separable rules (such as AV and SAV).
"""
score = [0] * profile.num_cand
for voter in profile:
for cand in voter.approved:
if rule_id == "sav":
# Satisfaction Approval Voting
score[cand] += voter.weight / len(voter.approved)
elif rule_id == "av":
# (Classic) Approval Voting
score[cand] += voter.weight
else:
raise UnknownRuleIDError(rule_id)
# smallest score to be in the committee
cutoff = sorted(score)[-committeesize]
certain_cands = [cand for cand in profile.candidates if score[cand] > cutoff]
possible_cands = [cand for cand in profile.candidates if score[cand] == cutoff]
missing = committeesize - len(certain_cands)
if len(possible_cands) == missing:
# candidates with score[cand] == cutoff
# are also certain candidates because all these candidates
# are required to fill the committee
certain_cands = sorted(certain_cands + possible_cands)
possible_cands = []
missing = 0
if resolute:
committees = sorted_committees([(certain_cands + possible_cands[:missing])])
else:
if max_num_of_committees is None:
committees = sorted_committees(
[
(certain_cands + list(selection))
for selection in itertools.combinations(possible_cands, missing)
]
)
else:
committees = []
for selection in itertools.combinations(possible_cands, missing):
committees.append(certain_cands + list(selection))
if len(committees) >= max_num_of_committees:
break
committees = sorted_committees(committees)
detailed_info = {
"certain_cands": certain_cands,
"possible_cands": possible_cands,
"missing": missing,
"cutoff": cutoff,
"score": score,
}
return committees, detailed_info
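The cutoff logic above can be distilled into a toy stand-alone form (names illustrative): once the per-candidate scores are fixed, the k-th largest score is the cutoff, candidates strictly above it belong to every winning committee, and candidates exactly at the cutoff fill the remaining seats in all possible ways.

```python
import itertools

def separable_committees(score, committeesize):
    # cutoff: the k-th largest score, where k is the committee size
    cutoff = sorted(score)[-committeesize]
    certain = [c for c, s in enumerate(score) if s > cutoff]
    possible = [c for c, s in enumerate(score) if s == cutoff]
    missing = committeesize - len(certain)
    # every choice of `missing` cutoff-score candidates completes a committee
    return sorted(
        tuple(sorted(certain + list(chosen)))
        for chosen in itertools.combinations(possible, missing)
    )

# candidate 0 is certain; candidates 1 and 2 are tied at the cutoff
print(separable_committees([5, 3, 3, 1], committeesize=2))  # → [(0, 1), (0, 2)]
```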
def compute_sav(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Satisfaction Approval Voting (SAV).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for SAV:
.. doctest::
>>> Rule("sav").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_separable_rule(
rule_id="sav",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_av(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Approval Voting (AV).
    AV is both a Thiele method and a separable rule. Separable rules can, in general,
    be computed much faster than Thiele methods; thus `compute_separable_rule` is used
    to compute AV.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for AV:
.. doctest::
>>> Rule("av").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
return compute_separable_rule(
rule_id="av",
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
def compute_minimaxav(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Minimax Approval Voting (MAV).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Minimax AV:
.. doctest::
>>> Rule("minimaxav").algorithms
('gurobi', 'mip-gurobi', 'ortools-cp', 'mip-cbc', 'brute-force')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "minimaxav"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "gurobi":
committees = abcrules_gurobi._gurobi_minimaxav(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "ortools-cp":
committees = abcrules_ortools._ortools_minimaxav(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm.startswith("mip-"):
solver_id = algorithm[4:]
committees = abcrules_mip._mip_minimaxav(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
solver_id=solver_id,
)
elif algorithm == "brute-force":
committees, detailed_info = _minimaxav_bruteforce(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
opt_minimaxav_score = scores.minimaxav_score(profile, committees[0])
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
output.details("Minimum maximal distance: " + str(opt_minimaxav_score))
msg = "Corresponding distances to voters:\n"
for committee in committees:
msg += str([misc.hamming(voter.approved, committee) for voter in profile]) + "\n"
output.details(msg)
# end of optional output
return committees
def _minimaxav_bruteforce(profile, committeesize, resolute, max_num_of_committees):
"""Brute-force algorithm for Minimax AV (MAV)."""
opt_committees = []
opt_minimaxav_score = profile.num_cand + 1
for committee in itertools.combinations(profile.candidates, committeesize):
score = scores.minimaxav_score(profile, committee)
if score < opt_minimaxav_score:
opt_committees = [committee]
opt_minimaxav_score = score
elif score == opt_minimaxav_score:
opt_committees.append(committee)
committees = sorted_committees(opt_committees)
detailed_info = {}
if resolute:
committees = [committees[0]]
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
return committees, detailed_info
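A compact stand-alone version of this brute force (illustrative names, not the abcvoting API): minimize, over all size-k committees, the maximum Hamming distance between the committee and any ballot.

```python
import itertools

def minimaxav_bruteforce(approval_sets, num_cand, committeesize):
    best_committees, best_score = [], num_cand + 1  # distances never exceed num_cand
    for committee in itertools.combinations(range(num_cand), committeesize):
        # maximum Hamming distance of this committee to any ballot
        score = max(len(voter ^ set(committee)) for voter in approval_sets)
        if score < best_score:
            best_committees, best_score = [committee], score
        elif score == best_score:
            best_committees.append(committee)
    return best_committees, best_score

voters = [{0, 1}, {0, 1}, {2}]
print(minimaxav_bruteforce(voters, num_cand=3, committeesize=2))
# → ([(0, 2), (1, 2)], 2)
```

Note how the lone `{2}` voter pulls the rule away from `(0, 1)` even though that committee would maximize total approval: minimax AV optimizes the worst-off voter.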
def compute_lexminimaxav(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Lexicographic Minimax AV (lex-MAV).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
(Remark 2)
    If `lexicographic_tiebreaking` is True, all winning committees are computed and the
    lexicographically smallest is chosen. This is a deterministic form of tiebreaking; with
    `resolute=True` alone, it is not guaranteed how ties are broken.
.. important::
Very slow due to lexicographic optimization.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Lexicographic Minimax AV:
.. doctest::
>>> Rule("lexminimaxav").algorithms
('gurobi', 'brute-force')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "lexminimaxav"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if not profile.has_unit_weights():
raise ValueError(f"{rule.shortname} is only defined for unit weights (weight=1)")
if algorithm == "brute-force":
committees, detailed_info = _lexminimaxav_bruteforce(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "gurobi":
committees, detailed_info = abcrules_gurobi._gurobi_lexminimaxav(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
opt_distances = detailed_info["opt_distances"]
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
output.details("Minimum maximal distance: " + str(max(opt_distances)))
msg = "Corresponding distances to voters:\n"
for committee in committees:
msg += str([misc.hamming(voter.approved, committee) for voter in profile])
output.details(msg + "\n")
# end of optional output
return committees
def _lexminimaxav_bruteforce(profile, committeesize, resolute, max_num_of_committees):
opt_committees = []
opt_distances = [profile.num_cand + 1] * len(profile)
for committee in itertools.combinations(profile.candidates, committeesize):
distances = sorted(
(misc.hamming(voter.approved, set(committee)) for voter in profile), reverse=True
)
for i, dist in enumerate(distances):
if opt_distances[i] < dist:
break
if opt_distances[i] > dist:
opt_distances = distances
opt_committees = [committee]
break
else:
opt_committees.append(committee)
committees = sorted_committees(opt_committees)
detailed_info = {"opt_distances": opt_distances}
if resolute:
committees = [committees[0]]
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
return committees, detailed_info
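The for/else construct above implements a lexicographic comparison of distance vectors sorted in decreasing order. A small sketch of that comparison in isolation (helper name illustrative):

```python
def lex_smaller(new_distances, old_distances):
    # both vectors are sorted in decreasing order; the first position where
    # they differ decides (fewer voters at large distances is better)
    for new, old in zip(new_distances, old_distances):
        if new != old:
            return new < old
    return False  # equal vectors: the new committee merely ties

# one voter at distance 3 is worse than three voters at distance 2
print(lex_smaller([2, 2, 2], [3, 0, 0]))  # → True
print(lex_smaller([3, 1, 0], [3, 0, 0]))  # → False
```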
def compute_monroe(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Monroe's rule.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Monroe:
.. doctest::
>>> Rule("monroe").algorithms
('gurobi', 'mip-gurobi', 'mip-cbc', 'ortools-cp', 'brute-force')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "monroe"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if not profile.has_unit_weights():
raise ValueError(f"{rule.shortname} is only defined for unit weights (weight=1)")
if algorithm == "gurobi":
committees = abcrules_gurobi._gurobi_monroe(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm == "ortools-cp":
committees = abcrules_ortools._ortools_monroe(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm.startswith("mip-"):
committees = abcrules_mip._mip_monroe(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
solver_id=algorithm[4:],
)
elif algorithm == "brute-force":
committees, detailed_info = _monroe_bruteforce(
profile=profile,
committeesize=committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.details(
"Optimal Monroe score: " + str(scores.monroescore(profile, committees[0])) + "\n"
)
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def _monroe_bruteforce(profile, committeesize, resolute, max_num_of_committees):
"""
Brute-force algorithm for Monroe's rule.
"""
opt_committees = []
opt_monroescore = -1
for committee in itertools.combinations(profile.candidates, committeesize):
score = scores.monroescore(profile, committee)
if score > opt_monroescore:
opt_committees = [committee]
opt_monroescore = score
        elif score == opt_monroescore:
opt_committees.append(committee)
committees = sorted_committees(opt_committees)
if max_num_of_committees is not None:
committees = committees[:max_num_of_committees]
if resolute:
committees = [committees[0]]
detailed_info = {}
return committees, detailed_info
def compute_greedy_monroe(
profile, committeesize, algorithm="fastest", resolute=True, max_num_of_committees=None
):
"""
Compute winning committees with Greedy Monroe.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Greedy Monroe:
.. doctest::
>>> Rule("greedy-monroe").algorithms
('standard',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "greedy-monroe"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile, committeesize, algorithm, resolute, max_num_of_committees
)
if not profile.has_unit_weights():
raise ValueError(f"{rule.shortname} is only defined for unit weights (weight=1)")
if algorithm == "standard":
committees, detailed_info = _greedy_monroe_algorithm(profile, committeesize)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
remaining_voters = detailed_info["remaining_voters"]
assignment = detailed_info["assignment"]
score1 = scores.monroescore(profile, committees[0])
score2 = len(profile) - len(remaining_voters)
output.details("The Monroe assignment computed by Greedy Monroe")
output.details("has a Monroe score of " + str(score2) + ".")
if score1 > score2:
        output.details(
            "The Monroe assignment found by Greedy Monroe is not "
            + "optimal for the winning committee,"
        )
        output.details(
            "i.e., by redistributing voters to candidates a higher "
            + "satisfaction is possible "
            + "(without changing the committee)."
        )
output.details("Optimal Monroe score of the winning committee is " + str(score1) + ".")
# build actual Monroe assignment for winning committee
num_voters = len(profile)
for t, district in enumerate(assignment):
cand, voters = district
if t < num_voters - committeesize * (num_voters // committeesize):
missing = num_voters // committeesize + 1 - len(voters)
else:
missing = num_voters // committeesize - len(voters)
for _ in range(missing):
v = remaining_voters.pop()
voters.append(v)
    msg = "Assignment (unsatisfied voters marked with *):\n\n"
    for cand, voters in assignment:
        msg += " candidate " + profile.cand_names[cand] + " assigned to: "
        assign_msg = ""
        for v in sorted(voters):
            assign_msg += str(v)
            if cand not in profile[v].approved:
                assign_msg += "*"
            assign_msg += ", "
        msg += assign_msg[:-2] + "\n"
output.details(msg)
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return sorted_committees(committees)
def _greedy_monroe_algorithm(profile, committeesize):
"""
Algorithm for Greedy Monroe.
"""
num_voters = len(profile)
committee = []
remaining_voters = list(range(num_voters))
remaining_cands = set(profile.candidates)
assignment = []
for t in range(committeesize):
maxapprovals = -1
selected = None
for cand in remaining_cands:
approvals = len([i for i in remaining_voters if cand in profile[i].approved])
if approvals > maxapprovals:
maxapprovals = approvals
selected = cand
# determine how many voters are removed (at most)
if t < num_voters - committeesize * (num_voters // committeesize):
num_remove = num_voters // committeesize + 1
else:
num_remove = num_voters // committeesize
# only voters that approve the chosen candidate
# are removed
to_remove = [i for i in remaining_voters if selected in profile[i].approved]
if len(to_remove) > num_remove:
to_remove = to_remove[:num_remove]
assignment.append((selected, to_remove))
remaining_voters = [i for i in remaining_voters if i not in to_remove]
committee.append(selected)
remaining_cands.remove(selected)
detailed_info = {"remaining_voters": remaining_voters, "assignment": assignment}
return sorted_committees([committee]), detailed_info
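The round-based selection implemented above can be sketched in a few lines of standalone code (a toy version with a hypothetical name; it assumes unit weights and breaks ties in favor of the lowest candidate index, whereas the function above takes the first maximum found):

```python
def toy_greedy_monroe(approval_sets, num_cands, committeesize):
    # Greedy Monroe sketch: repeatedly pick the candidate approved by the
    # most remaining voters and remove (roughly) n/k of its approvers.
    num_voters = len(approval_sets)
    remaining_voters = list(range(num_voters))
    remaining_cands = list(range(num_cands))
    committee = []
    for t in range(committeesize):
        # candidate approved by most remaining voters (lowest index on ties)
        selected = max(
            remaining_cands,
            key=lambda c: sum(1 for v in remaining_voters if c in approval_sets[v]),
        )
        # the first few rounds remove one voter more if n is not divisible by k
        num_remove = num_voters // committeesize
        if t < num_voters - committeesize * num_remove:
            num_remove += 1
        to_remove = [v for v in remaining_voters if selected in approval_sets[v]]
        to_remove = to_remove[:num_remove]
        remaining_voters = [v for v in remaining_voters if v not in to_remove]
        remaining_cands.remove(selected)
        committee.append(selected)
    return sorted(committee)

# 6 voters: three approve {0}, two approve {1}, one approves {2}
committee = toy_greedy_monroe([{0}, {0}, {0}, {1}, {1}, {2}], 3, 2)
```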
def compute_seqphragmen(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Phragmen's sequential rule (seq-Phragmen).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Phragmen's sequential rule (seq-Phragmen):
.. doctest::
>>> Rule("seqphragmen").algorithms
('float-fractions', 'gmpy2-fractions', 'standard-fractions')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "seqphragmen"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if resolute:
committees, detailed_info = _seqphragmen_resolute(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
)
else:
committees, detailed_info = _seqphragmen_irresolute(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
max_num_of_committees=max_num_of_committees,
)
# optional output
output.info(header(rule.longname), wrap=False)
if not resolute:
output.info("Computing all possible winning committees for any tiebreaking order")
output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
if resolute:
committee = []
for i, next_cand in enumerate(detailed_info["next_cand"]):
tied_cands = detailed_info["tied_cands"][i]
max_load = detailed_info["max_load"][i]
load = detailed_info["load"][i]
committee.append(next_cand)
output.details(f"adding candidate number {i+1}: {profile.cand_names[next_cand]}")
output.details(
f"maximum load increased to {max_load}",
indent=" "
# f"\n (continuous model: time t_{i+1} = {max_load})"
)
output.details(" load distribution:")
msg = "("
for v, _ in enumerate(profile):
msg += str(load[v]) + ", "
output.details(msg[:-2] + ")", indent=" ")
if len(tied_cands) > 1:
output.details(
f"tie broken in favor of {profile.cand_names[next_cand]},", indent=" "
)
output.details(
"candidates "
f"{str_set_of_candidates(tied_cands, cand_names=profile.cand_names)}"
f" are tied",
indent=" ",
)
                output.details(
                    f"(for any of those, the new maximum load would be {max_load}).",
                    indent=" ",
                )
output.details("")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
if resolute or len(committees) == 1:
output.details("corresponding load distribution:")
else:
output.details("corresponding load distributions:")
for committee, load in detailed_info["committee_load_pairs"].items():
msg = f"{str_set_of_candidates(committee, cand_names=profile.cand_names)}: ("
for v, _ in enumerate(profile):
msg += str(load[v]) + ", "
output.details(msg[:-2] + ")\n")
# end of optional output
return sorted_committees(committees)
def _seqphragmen_resolute(
profile, committeesize, algorithm, start_load=None, partial_committee=None
):
"""
Algorithm for computing resolute seq-Phragmen (1 winning committee).
"""
if algorithm == "float-fractions":
division = lambda x, y: x / y # standard float division
elif algorithm == "standard-fractions":
division = Fraction # using Python built-in fractions
elif algorithm == "gmpy2-fractions":
if not mpq:
raise ImportError(
'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
)
division = mpq # using gmpy2 fractions
else:
raise UnknownAlgorithm("seqphragmen", algorithm)
approvers_weight = {}
for cand in profile.candidates:
approvers_weight[cand] = sum(voter.weight for voter in profile if cand in voter.approved)
load = start_load
if load is None:
load = [0 for _ in range(len(profile))]
committee = partial_committee
if partial_committee is None:
committee = [] # build committees starting with the empty set
detailed_info = {
"next_cand": [],
"tied_cands": [],
"load": [],
"max_load": [],
}
for _ in range(len(committee), committeesize):
approvers_load = {}
for cand in profile.candidates:
approvers_load[cand] = sum(
voter.weight * load[v] for v, voter in enumerate(profile) if cand in voter.approved
)
new_maxload = [
division(approvers_load[cand] + 1, approvers_weight[cand])
if approvers_weight[cand] > 0
else committeesize + 1
for cand in profile.candidates
]
        # exclude candidates already in the committee
for cand in profile.candidates:
if cand in committee:
new_maxload[cand] = committeesize + 2 # that's larger than any possible value
opt = min(new_maxload)
if algorithm == "float-fractions":
tied_cands = [
cand for cand in profile.candidates if misc.isclose(new_maxload[cand], opt)
]
else:
tied_cands = [cand for cand in profile.candidates if new_maxload[cand] == opt]
next_cand = tied_cands[0]
# compute new loads and add new candidate
for v, voter in enumerate(profile):
if next_cand in voter.approved:
load[v] = new_maxload[next_cand]
committee = sorted(committee + [next_cand])
detailed_info["next_cand"].append(next_cand)
detailed_info["tied_cands"].append(tied_cands)
detailed_info["load"].append(list(load)) # create copy of `load`
detailed_info["max_load"].append(opt)
detailed_info["committee_load_pairs"] = {tuple(committee): load}
return [committee], detailed_info
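The load-balancing step above can be condensed into a small standalone sketch (illustrative only: `toy_seqphragmen` is a hypothetical name, unit weights are assumed, exact arithmetic uses `Fraction` as in the "standard-fractions" algorithm, and ties go to the lowest candidate index):

```python
from fractions import Fraction

def toy_seqphragmen(approval_sets, num_cands, committeesize):
    # each round: add the candidate whose election yields the smallest
    # possible new maximum voter load; all its approvers then carry that load
    num_voters = len(approval_sets)
    load = [Fraction(0)] * num_voters
    committee = []
    for _ in range(committeesize):
        best_cand, best_maxload = None, None
        for cand in range(num_cands):
            if cand in committee:
                continue
            approvers = [v for v in range(num_voters) if cand in approval_sets[v]]
            if not approvers:
                continue  # a candidate without approvers is never selected here
            new_maxload = (sum(load[v] for v in approvers) + 1) / len(approvers)
            if best_maxload is None or new_maxload < best_maxload:
                best_cand, best_maxload = cand, new_maxload
        for v in range(num_voters):
            if best_cand in approval_sets[v]:
                load[v] = best_maxload
        committee.append(best_cand)
    return sorted(committee), load

# 4 voters: two approve {0, 1}, one approves {0}, one approves {2}
committee, load = toy_seqphragmen([{0, 1}, {0, 1}, {0}, {2}], 3, 2)
```

Candidate 0 is added first (new maximum load 1/3 over three approvers), then candidate 1 (5/6 beats the load of 1 that candidate 2 would impose on its single approver).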
def _seqphragmen_irresolute(
profile,
committeesize,
algorithm,
max_num_of_committees,
start_load=None,
partial_committee=None,
):
"""Algorithm for computing irresolute seq-Phragmen (all winning committees)."""
if algorithm == "float-fractions":
division = lambda x, y: x / y # standard float division
elif algorithm == "standard-fractions":
division = Fraction # using Python built-in fractions
elif algorithm == "gmpy2-fractions":
if not mpq:
raise ImportError(
'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
)
division = mpq # using gmpy2 fractions
else:
raise UnknownAlgorithm("seqphragmen", algorithm)
approvers_weight = {}
for cand in profile.candidates:
approvers_weight[cand] = sum(voter.weight for voter in profile if cand in voter.approved)
load = start_load
if load is None:
load = {v: 0 for v, _ in enumerate(profile)}
if partial_committee is None:
partial_committee = () # build committees starting with the empty set
else:
partial_committee = tuple(partial_committee)
committee_load_pairs = [(partial_committee, load)]
committees = set()
detailed_info = {"committee_load_pairs": {}}
while committee_load_pairs:
committee, load = committee_load_pairs.pop()
approvers_load = {}
for cand in profile.candidates:
approvers_load[cand] = sum(
voter.weight * load[v] for v, voter in enumerate(profile) if cand in voter.approved
)
new_maxload = [
division(approvers_load[cand] + 1, approvers_weight[cand])
if approvers_weight[cand] > 0
else committeesize + 1
for cand in profile.candidates
]
        # exclude candidates already in the committee
for cand in profile.candidates:
if cand in committee:
new_maxload[cand] = committeesize + 2 # that's larger than any possible value
# compute new loads
new_committee_load_pairs = []
for cand in profile.candidates:
if algorithm == "float-fractions":
select_cand = misc.isclose(new_maxload[cand], min(new_maxload))
else:
select_cand = new_maxload[cand] <= min(new_maxload)
if select_cand:
new_load = [0] * len(profile)
for v, voter in enumerate(profile):
if cand in voter.approved:
new_load[v] = new_maxload[cand]
else:
new_load[v] = load[v]
new_committee = committee + (cand,)
if len(new_committee) == committeesize:
new_committee = tuple(sorted(new_committee))
committees.add(new_committee) # remove duplicate committees
detailed_info["committee_load_pairs"][new_committee] = new_load
if (
max_num_of_committees is not None
and len(committees) == max_num_of_committees
):
# sufficiently many winning committees found
return sorted_committees(committees), detailed_info
else:
# partial committee
new_committee_load_pairs.append((new_committee, new_load))
# add new committee/load pairs in reversed order, so that tiebreaking is correct
committee_load_pairs += reversed(new_committee_load_pairs)
return sorted_committees(committees), detailed_info
def compute_rule_x(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
skip_phragmen_phase=False,
):
"""
Compute winning committees with Rule X (aka Method of Equal Shares).
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
See also <https://arxiv.org/pdf/1911.11747.pdf>, page 7
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Rule X (aka Method of Equal Shares):
.. doctest::
>>> Rule("rule-x").algorithms
('float-fractions', 'gmpy2-fractions', 'standard-fractions')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
skip_phragmen_phase : bool, default=False
Omit the second phase (that uses seq-Phragmen).
May result in a committee that is too small (length smaller than `committeesize`).
Returns
-------
list of CandidateSet
A list of winning committees.
"""
if skip_phragmen_phase:
rule_id = "rule-x-without-phragmen-phase"
else:
rule_id = "rule-x"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if not profile.has_unit_weights():
raise ValueError(f"{rule.shortname} is only defined for unit weights (weight=1)")
committees, detailed_info = _rule_x_algorithm(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
skip_phragmen_phase=skip_phragmen_phase,
)
# optional output
output.info(header(rule.longname), wrap=False)
if not resolute:
output.info("Computing all possible winning committees for any tiebreaking order")
output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
if resolute:
start_budget = detailed_info["start_budget"]
output.details("Phase 1:\n")
output.details("starting budget:")
msg = " ("
for v, _ in enumerate(profile):
msg += str(start_budget[v]) + ", "
output.details(msg[:-2] + ")\n")
committee = []
for i, next_cand in enumerate(detailed_info["next_cand"]):
committee.append(next_cand)
budget = detailed_info["budget"][i]
cost = detailed_info["cost"][i]
tied_cands = detailed_info["tied_cands"][i]
output.details(f"adding candidate number {i+1}: {profile.cand_names[next_cand]}")
            output.details(f"with maximum cost per voter q = {cost}", indent=" ")
output.details(" remaining budget:")
msg = "("
for v, _ in enumerate(profile):
msg += str(budget[v]) + ", "
output.details(msg[:-2] + ")", indent=" ")
if len(tied_cands) > 1:
output.details(
f"tie broken in favor of {profile.cand_names[next_cand]},", indent=" "
)
output.details(
"candidates "
f"{str_set_of_candidates(tied_cands, cand_names=profile.cand_names)}"
f" are tied",
indent=" ",
)
output.details(f"(all would impose a maximum cost of {cost}).", indent=" ")
output.details("")
if detailed_info["phragmen_start_load"]: # the second phase (seq-Phragmen) was used
phragmen_start_load = detailed_info["phragmen_start_load"]
output.details("Phase 2 (seq-Phragmén):\n")
output.details("starting loads (= budget spent):")
msg = "("
for v, _ in enumerate(profile):
msg += str(phragmen_start_load[v]) + ", "
output.details(msg[:-2] + ")\n", indent=" ")
detailed_info_phragmen = detailed_info["phragmen_phase"]
for i, next_cand in enumerate(detailed_info_phragmen["next_cand"]):
tied_cands = detailed_info_phragmen["tied_cands"][i]
max_load = detailed_info_phragmen["max_load"][i]
load = detailed_info_phragmen["load"][i]
committee.append(next_cand)
output.details(
f"adding candidate number {len(committee)}: {profile.cand_names[next_cand]}"
)
output.details(
f"maximum load increased to {max_load}",
indent=" "
# f"\n (continuous model: time t_{len(committee)} = {max_load})"
)
output.details(" load distribution:")
msg = "("
for v, _ in enumerate(profile):
msg += str(load[v]) + ", "
output.details(msg[:-2] + ")", indent=" ")
if len(tied_cands) > 1:
output.details(
f"tie broken in favor of {profile.cand_names[next_cand]},", indent=" "
)
output.details(
"candidates "
f"{str_set_of_candidates(tied_cands, cand_names=profile.cand_names)}"
" are tied",
indent=" ",
)
output.details(
f"(for any of those, the new maximum load would be {max_load}).",
indent=" ",
)
output.details("")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return sorted_committees(committees)
def _rule_x_algorithm(
profile, committeesize, algorithm, resolute, max_num_of_committees, skip_phragmen_phase=False
):
"""Algorithm for Rule X."""
def _rule_x_get_min_q(profile, budget, cand, division):
rich = {v for v, voter in enumerate(profile) if cand in voter.approved}
poor = set()
while len(rich) > 0:
poor_budget = sum(budget[v] for v in poor)
_q = division(1 - poor_budget, len(rich))
if algorithm == "float-fractions":
# due to float imprecision, values very close to `q` count as `q`
new_poor = {v for v in rich if budget[v] < _q and not misc.isclose(budget[v], _q)}
else:
new_poor = {v for v in rich if budget[v] < _q}
if len(new_poor) == 0:
return _q
rich -= new_poor
poor.update(new_poor)
        return None  # insufficient budget available
def find_minimum_dict_entries(dictx):
if algorithm == "float-fractions":
min_entries = [
cand for cand in dictx.keys() if misc.isclose(dictx[cand], min(dictx.values()))
]
else:
min_entries = [cand for cand in dictx.keys() if dictx[cand] == min(dictx.values())]
return min_entries
def phragmen_phase(_committee, _budget):
# translate budget to loads
start_load = [-_budget[v] for v in range(len(profile))]
detailed_info["phragmen_start_load"] = list(start_load) # make a copy
if resolute:
committees, detailed_info_phragmen = _seqphragmen_resolute(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
partial_committee=list(_committee),
start_load=start_load,
)
else:
committees, detailed_info_phragmen = _seqphragmen_irresolute(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
max_num_of_committees=None,
# TODO: would be nice to have max_num_of_committees=max_num_of_committees
# but there is the issue that some of these committees might be
# already contained in `winning_committees` - so we need more
partial_committee=list(_committee),
start_load=start_load,
)
winning_committees.update([tuple(sorted(committee)) for committee in committees])
detailed_info["phragmen_phase"] = detailed_info_phragmen
# after filling the remaining spots these committees have size `committeesize`
if algorithm == "float-fractions":
division = lambda x, y: x / y # standard float division
elif algorithm == "standard-fractions":
division = Fraction # using Python built-in fractions
elif algorithm == "gmpy2-fractions":
if not mpq:
raise ImportError(
'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
)
division = mpq # using gmpy2 fractions
else:
raise UnknownAlgorithm("rule-x", algorithm)
if resolute:
max_num_of_committees = 1 # same algorithm for resolute==True and resolute==False
start_budget = {v: division(committeesize, len(profile)) for v, _ in enumerate(profile)}
committee_bugdet_pairs = [(tuple(), start_budget)]
winning_committees = set()
detailed_info = {
"next_cand": [],
"cost": [],
"tied_cands": [],
"budget": [],
"start_budget": start_budget,
"phragmen_start_load": None,
}
while committee_bugdet_pairs:
committee, budget = committee_bugdet_pairs.pop()
available_candidates = [cand for cand in profile.candidates if cand not in committee]
min_q = {}
for cand in available_candidates:
q = _rule_x_get_min_q(profile, budget, cand, division)
if q is not None:
min_q[cand] = q
if len(min_q) > 0: # one or more candidates are affordable
# choose those candidates that require the smallest budget
tied_cands = find_minimum_dict_entries(min_q)
new_committee_budget_pairs = []
for next_cand in sorted(tied_cands):
new_budget = dict(budget)
for v, voter in enumerate(profile):
if next_cand in voter.approved:
new_budget[v] -= min(budget[v], min_q[next_cand])
new_committee = committee + (next_cand,)
if resolute:
detailed_info["next_cand"].append(next_cand)
detailed_info["tied_cands"].append(tied_cands)
detailed_info["cost"].append(min(min_q.values()))
detailed_info["budget"].append(new_budget)
if len(new_committee) == committeesize:
new_committee = tuple(sorted(new_committee))
winning_committees.add(new_committee) # remove duplicate committees
if (
max_num_of_committees is not None
and len(winning_committees) == max_num_of_committees
):
# sufficiently many winning committees found
return sorted_committees(winning_committees), detailed_info
else:
# partial committee
new_committee_budget_pairs.append((new_committee, new_budget))
if resolute:
break
# add new committee/budget pairs in reversed order, so that tiebreaking is correct
committee_bugdet_pairs += reversed(new_committee_budget_pairs)
else: # no affordable candidates remain
if skip_phragmen_phase:
winning_committees.add(tuple(sorted(committee)))
else:
# fill committee via seq-Phragmen
phragmen_phase(committee, budget)
if max_num_of_committees is not None and len(winning_committees) >= max_num_of_committees:
winning_committees = sorted_committees(winning_committees)[:max_num_of_committees]
break
return sorted_committees(winning_committees), detailed_info
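The budget mechanics of phase 1 can be sketched in standalone form (a toy version under assumptions: hypothetical names, unit weights, ties broken toward the lowest candidate index, and no seq-Phragmen completion phase; `toy_min_payment` mirrors the idea of `_rule_x_get_min_q` above):

```python
from fractions import Fraction

def toy_min_payment(approval_sets, budget, cand):
    # smallest per-voter price q such that cand's approvers can jointly
    # pay a total of 1, each contributing min(own budget, q)
    rich = {v for v in range(len(budget)) if cand in approval_sets[v]}
    poor = set()
    while rich:
        q = (Fraction(1) - sum(budget[v] for v in poor)) / len(rich)
        new_poor = {v for v in rich if budget[v] < q}
        if not new_poor:
            return q
        rich -= new_poor
        poor |= new_poor
    return None  # the approvers cannot afford this candidate

def toy_mes_phase1(approval_sets, num_cands, committeesize):
    # phase 1 of the Method of Equal Shares: every voter starts with budget
    # k/n, each added candidate costs 1, paid as equally as possible
    num_voters = len(approval_sets)
    budget = [Fraction(committeesize, num_voters)] * num_voters
    committee = []
    while len(committee) < committeesize:
        affordable = {}
        for cand in range(num_cands):
            if cand in committee:
                continue
            q = toy_min_payment(approval_sets, budget, cand)
            if q is not None:
                affordable[cand] = q
        if not affordable:
            break  # no candidate affordable; the full rule then uses seq-Phragmen
        cand = min(affordable, key=affordable.get)
        for v in range(num_voters):
            if cand in approval_sets[v]:
                budget[v] -= min(budget[v], affordable[cand])
        committee.append(cand)
    return sorted(committee), budget

# 4 voters, k = 2: everyone starts with budget 1/2; three voters approve {0}
committee, budget = toy_mes_phase1([{0}, {0}, {0}, {1}], 2, 2)
```

Here candidate 0 is bought for q = 1/3 per approver; afterwards no remaining candidate is affordable, which is exactly the situation in which the implementation above hands over to the seq-Phragmen phase.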
def compute_minimaxphragmen(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Phragmen's minimax rule (minimax-Phragmen).
Minimizes the maximum load.
For a mathematical description of this rule, see e.g.
"Multi-Winner Voting with Approval Preferences".
Martin Lackner and Piotr Skowron.
<https://arxiv.org/abs/2007.01795>
Does not include the lexicographic optimization as specified
in Markus Brill, Rupert Freeman, Svante Janson and Martin Lackner.
Phragmen's Voting Methods and Justified Representation.
<https://arxiv.org/abs/2102.12305>
    Instead: minimizes the maximum load (without consideration of the second-,
    third-, ...-largest load).
The lexicographic method is this one: :func:`compute_leximaxphragmen`.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Phragmen's minimax rule (minimax-Phragmen):
.. doctest::
>>> Rule("minimaxphragmen").algorithms
('gurobi', 'mip-gurobi', 'mip-cbc')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "minimaxphragmen"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if algorithm == "gurobi":
committees = abcrules_gurobi._gurobi_minimaxphragmen(
profile,
committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
elif algorithm.startswith("mip-"):
committees = abcrules_mip._mip_minimaxphragmen(
profile,
committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
solver_id=algorithm[4:],
)
else:
raise UnknownAlgorithm(rule_id, algorithm)
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def compute_leximaxphragmen(
profile,
committeesize,
algorithm="fastest",
resolute=False,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
lexicographic_tiebreaking=False,
):
"""
Compute winning committees with Phragmen's leximax rule (leximax-Phragmen).
Lexicographically minimize the maximum loads.
Details in
Markus Brill, Rupert Freeman, Svante Janson and Martin Lackner.
Phragmen's Voting Methods and Justified Representation.
<https://arxiv.org/abs/2102.12305>
.. important::
Very slow due to lexicographic optimization.
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Phragmen's leximax rule (leximax-Phragmen):
.. doctest::
>>> Rule("leximaxphragmen").algorithms
('gurobi',)
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
lexicographic_tiebreaking : bool
Require lexicographic tiebreaking among tied committees.
This requires the computation of *all* winning committees and is therefore very slow.
.. important::
            `lexicographic_tiebreaking=True` is only valid in combination with `resolute=True`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "leximaxphragmen"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if lexicographic_tiebreaking:
if not resolute:
raise ValueError(
"lexicographic_tiebreaking=True is only valid in "
"combination with resolute=True."
)
resolute = False # compute all committees to break ties correctly
if algorithm == "gurobi":
committees = abcrules_gurobi._gurobi_leximaxphragmen(
profile,
committeesize,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
# elif algorithm.startswith("mip-"):
# committees = abcrules_mip._mip_leximaxphragmen(
# profile,
# committeesize,
# resolute=resolute,
# max_num_of_committees=max_num_of_committees,
# solver_id=algorithm[4:],
# )
else:
raise UnknownAlgorithm(rule_id, algorithm)
if lexicographic_tiebreaking:
committees = sorted_committees(committees)[:1]
# optional output
output.info(header(rule.longname), wrap=False)
if resolute:
output.info("Computing only one winning committee (resolute=True)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def compute_phragmen_enestroem(
profile,
committeesize,
algorithm="fastest",
resolute=True,
max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
"""
Compute winning committees with Phragmen-Enestroem.
This ABC rule is also known as Phragmen's first method and Enestroem's method.
In every round the candidate with the highest combined budget of
their supporters is put in the committee.
Method described in:
Svante Janson
Phragmén's and Thiele's election methods
<https://arxiv.org/pdf/1611.08826.pdf> (Section 18.5, Page 59)
Parameters
----------
profile : abcvoting.preferences.Profile
A profile.
committeesize : int
The desired committee size.
algorithm : str, optional
The algorithm to be used.
The following algorithms are available for Phragmen-Enestroem:
.. doctest::
>>> Rule("phragmen-enestroem").algorithms
('float-fractions', 'gmpy2-fractions', 'standard-fractions')
resolute : bool, optional
Return only one winning committee.
If `resolute=False`, all winning committees are computed (subject to
`max_num_of_committees`).
max_num_of_committees : int, optional
At most `max_num_of_committees` winning committees are computed.
If `max_num_of_committees=None`, the number of winning committees is not restricted.
The default value of `max_num_of_committees` can be modified via the constant
`MAX_NUM_OF_COMMITTEES_DEFAULT`.
Returns
-------
list of CandidateSet
A list of winning committees.
"""
rule_id = "phragmen-enestroem"
rule = Rule(rule_id)
if algorithm == "fastest":
algorithm = rule.fastest_available_algorithm()
rule.verify_compute_parameters(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
if not profile.has_unit_weights():
raise ValueError(f"{rule.shortname} is only defined for unit weights (weight=1)")
committees, detailed_info = _phragmen_enestroem_algorithm(
profile=profile,
committeesize=committeesize,
algorithm=algorithm,
resolute=resolute,
max_num_of_committees=max_num_of_committees,
)
# optional output
output.info(header(rule.longname), wrap=False)
if not resolute:
output.info("Computing all possible winning committees for any tiebreaking order")
output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
output.info(
str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
)
# end of optional output
return committees
def _phragmen_enestroem_algorithm(
    profile, committeesize, algorithm, resolute, max_num_of_committees
):
    """
    Algorithm computing Phragmen-Enestroem.
    """
    if algorithm == "float-fractions":
        division = lambda x, y: x / y  # standard float division
    elif algorithm == "standard-fractions":
        division = Fraction  # using Python built-in fractions
    elif algorithm == "gmpy2-fractions":
        if not mpq:
            raise ImportError(
                'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
            )
        division = mpq  # using gmpy2 fractions
    else:
        raise UnknownAlgorithm("phragmen-enestroem", algorithm)

    if resolute:
        max_num_of_committees = 1  # same algorithm for resolute==True and resolute==False
    initial_voter_budget = [voter.weight for voter in profile]
    # price for adding a candidate to the committee
    price = division(sum(initial_voter_budget), committeesize)

    committee_budget_pairs = [(tuple(), initial_voter_budget)]
    committees = set()

    while committee_budget_pairs:
        committee, budget = committee_budget_pairs.pop()

        available_candidates = [cand for cand in profile.candidates if cand not in committee]
        support = {cand: 0 for cand in available_candidates}
        for i, voter in enumerate(profile):
            voting_power = budget[i]
            if voting_power <= 0:
                continue
            for cand in voter.approved:
                if cand in available_candidates:
                    support[cand] += voting_power
        max_support = max(support.values())

        if algorithm == "float-fractions":
            tied_cands = [
                cand for cand, supp in support.items() if misc.isclose(supp, max_support)
            ]
        else:
            tied_cands = sorted(cand for cand, supp in support.items() if supp == max_support)
        assert tied_cands, "_phragmen_enestroem_algorithm: no candidate with max support (??)"

        new_committee_budget_pairs = []
        for cand in tied_cands:
            new_budget = list(budget)  # copy of budget
            if max_support > price:  # supporters can afford it
                multiplier = division(max_support - price, max_support)
            else:  # supporters can't afford it, set budget to 0
                multiplier = 0
            for i, voter in enumerate(profile):
                if cand in voter.approved:
                    new_budget[i] *= multiplier
            new_committee = committee + (cand,)

            if len(new_committee) == committeesize:
                new_committee = tuple(sorted(new_committee))
                committees.add(new_committee)  # remove duplicate committees
                if max_num_of_committees is not None and len(committees) == max_num_of_committees:
                    # sufficiently many winning committees found
                    detailed_info = {}
                    return sorted_committees(committees), detailed_info
            else:
                # partial committee
                new_committee_budget_pairs.append((new_committee, new_budget))
        # add new committee/budget pairs in reversed order, so that tiebreaking is correct
        committee_budget_pairs += reversed(new_committee_budget_pairs)

    detailed_info = {}
    return sorted_committees(committees), detailed_info

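The budget update is the heart of Phragmén-Eneström: if a candidate's supporters jointly hold at least the price, each supporter's budget is scaled by `(max_support - price) / max_support`; otherwise their budgets are zeroed. A minimal standalone sketch of that single step (the toy approval profile below is hypothetical, not part of abcvoting):

```python
from fractions import Fraction

# Hypothetical toy profile: each voter approves a set of candidates; unit weights.
approvals = [{0, 1}, {0, 1}, {2}]  # three voters
committeesize = 2
budget = [Fraction(1)] * len(approvals)
price = Fraction(sum(budget), committeesize)  # total budget / committee size = 3/2

# Support for candidate 0 comes from voters 0 and 1.
cand = 0
support = sum(budget[i] for i, appr in enumerate(approvals) if cand in appr)

# Scale the supporters' budgets, as in _phragmen_enestroem_algorithm.
multiplier = (support - price) / support if support > price else 0
for i, appr in enumerate(approvals):
    if cand in appr:
        budget[i] *= multiplier

print(budget)  # supporters keep (2 - 3/2)/2 = 1/4 each; voter 2 keeps 1
```

With exact fractions the remaining budgets are `[1/4, 1/4, 1]`, which is why the `standard-fractions` and `gmpy2-fractions` algorithms avoid the rounding issues of float division.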
def compute_consensus_rule(
    profile,
    committeesize,
    algorithm="fastest",
    resolute=True,
    max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
    """
    Compute winning committees with the Consensus rule.

    Based on Perpetual Consensus from
    Martin Lackner. Perpetual Voting: Fairness in Long-Term Decision Making.
    In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020).

    Parameters
    ----------
    profile : abcvoting.preferences.Profile
        A profile.
    committeesize : int
        The desired committee size.
    algorithm : str, optional
        The algorithm to be used.

        The following algorithms are available for the Consensus rule:

        .. doctest::

            >>> Rule("consensus-rule").algorithms
            ('float-fractions', 'gmpy2-fractions', 'standard-fractions')
    resolute : bool, optional
        Return only one winning committee.

        If `resolute=False`, all winning committees are computed (subject to
        `max_num_of_committees`).
    max_num_of_committees : int, optional
        At most `max_num_of_committees` winning committees are computed.

        If `max_num_of_committees=None`, the number of winning committees is not restricted.
        The default value of `max_num_of_committees` can be modified via the constant
        `MAX_NUM_OF_COMMITTEES_DEFAULT`.

    Returns
    -------
    list of CandidateSet
        A list of winning committees.
    """
    rule_id = "consensus-rule"
    rule = Rule(rule_id)
    if algorithm == "fastest":
        algorithm = rule.fastest_available_algorithm()
    rule.verify_compute_parameters(
        profile=profile,
        committeesize=committeesize,
        algorithm=algorithm,
        resolute=resolute,
        max_num_of_committees=max_num_of_committees,
    )

    committees, detailed_info = _consensus_rule_algorithm(
        profile=profile,
        committeesize=committeesize,
        algorithm=algorithm,
        resolute=resolute,
        max_num_of_committees=max_num_of_committees,
    )

    # optional output
    output.info(header(rule.longname), wrap=False)
    if not resolute:
        output.info("Computing all possible winning committees for any tiebreaking order")
        output.info(" (aka parallel universes tiebreaking) (resolute=False)\n")
    output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
    output.info(
        str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
    )
    # end of optional output

    return committees

def _consensus_rule_algorithm(profile, committeesize, algorithm, resolute, max_num_of_committees):
    """
    Algorithm for computing the consensus rule.
    """
    if algorithm == "float-fractions":
        division = lambda x, y: x / y  # standard float division
    elif algorithm == "standard-fractions":
        division = Fraction  # using Python built-in fractions
    elif algorithm == "gmpy2-fractions":
        if not mpq:
            raise ImportError(
                'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
            )
        division = mpq  # using gmpy2 fractions
    else:
        raise UnknownAlgorithm("consensus-rule", algorithm)

    if resolute:
        max_num_of_committees = 1  # same algorithm for resolute==True and resolute==False
    initial_voter_budget = [0] * len(profile)

    committee_budget_pairs = [(tuple(), initial_voter_budget)]
    committees = set()

    while committee_budget_pairs:
        committee, budget = committee_budget_pairs.pop()

        for i, _ in enumerate(profile):
            budget[i] += profile[i].weight  # weight is 1 by default

        available_candidates = [cand for cand in profile.candidates if cand not in committee]
        support = {cand: 0 for cand in available_candidates}
        supporters = {cand: [] for cand in available_candidates}
        for i, voter in enumerate(profile):
            if (budget[i] <= 0) or (
                algorithm == "float-fractions" and misc.isclose(budget[i], 0)
            ):
                continue
            for cand in voter.approved:
                if cand in available_candidates:
                    support[cand] += budget[i]
                    supporters[cand].append(i)
        max_support = max(support.values())

        if algorithm == "float-fractions":
            tied_cands = [
                cand for cand, supp in support.items() if misc.isclose(supp, max_support)
            ]
        else:
            tied_cands = sorted(cand for cand, supp in support.items() if supp == max_support)
        assert tied_cands, "_consensus_rule_algorithm: no candidate with max support (??)"

        new_committee_budget_pairs = []
        for cand in tied_cands:
            new_budget = list(budget)  # copy of budget
            for i in supporters[cand]:
                new_budget[i] -= division(len(profile), len(supporters[cand]))
            new_committee = committee + (cand,)

            if len(new_committee) == committeesize:
                new_committee = tuple(sorted(new_committee))
                committees.add(new_committee)  # remove duplicate committees
                if max_num_of_committees is not None and len(committees) == max_num_of_committees:
                    # sufficiently many winning committees found
                    detailed_info = {}
                    return sorted_committees(committees), detailed_info
            else:
                # partial committee
                new_committee_budget_pairs.append((new_committee, new_budget))
        # add new committee/budget pairs in reversed order, so that tiebreaking is correct
        committee_budget_pairs += reversed(new_committee_budget_pairs)

    detailed_info = {}
    return sorted_committees(committees), detailed_info

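Unlike Phragmén-Eneström, the consensus rule charges a fixed total of `n` (the number of voters) per committee seat, split evenly among the chosen candidate's supporters, so supporters of popular candidates can go into (temporary) debt. A standalone sketch of one charging step (the voter indices below are made up for illustration):

```python
from fractions import Fraction

num_voters = 4
budget = [1, 1, 1, 1]             # after one round of topping up, weight 1 each
supporters_of_winner = [0, 1, 2]  # hypothetical: voters 0-2 approve the winner

# Each supporter pays num_voters / len(supporters), as in _consensus_rule_algorithm.
charge = Fraction(num_voters, len(supporters_of_winner))
for i in supporters_of_winner:
    budget[i] -= charge

print(budget)  # [-1/3, -1/3, -1/3, 1]; total paid is exactly num_voters = 4
```

The negative balances are repaid over subsequent rounds, because every voter's budget is topped up by their weight at the start of each iteration of the main loop.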
def compute_trivial_rule(
    profile,
    committeesize,
    algorithm="standard",
    resolute=False,
    max_num_of_committees=MAX_NUM_OF_COMMITTEES_DEFAULT,
):
    """
    Compute winning committees with the trivial rule (all committees are winning).

    Parameters
    ----------
    profile : abcvoting.preferences.Profile
        A profile.
    committeesize : int
        The desired committee size.
    algorithm : str, optional
        The algorithm to be used.

        The following algorithms are available for the trivial rule:

        .. doctest::

            >>> Rule("trivial").algorithms
            ('standard',)
    resolute : bool, optional
        Return only one winning committee.

        If `resolute=False`, all winning committees are computed (subject to
        `max_num_of_committees`).
    max_num_of_committees : int, optional
        At most `max_num_of_committees` winning committees are computed.

        If `max_num_of_committees=None`, the number of winning committees is not restricted.
        The default value of `max_num_of_committees` can be modified via the constant
        `MAX_NUM_OF_COMMITTEES_DEFAULT`.

    Returns
    -------
    list of CandidateSet
        A list of winning committees.
    """
    rule_id = "trivial"
    rule = Rule(rule_id)
    if algorithm == "fastest":
        algorithm = rule.fastest_available_algorithm()
    rule.verify_compute_parameters(
        profile=profile,
        committeesize=committeesize,
        algorithm=algorithm,
        resolute=resolute,
        max_num_of_committees=max_num_of_committees,
    )

    if algorithm == "standard":
        if resolute:
            committees = [range(committeesize)]
        else:
            all_committees = itertools.combinations(profile.candidates, committeesize)
            if max_num_of_committees is None:
                committees = list(all_committees)
            else:
                committees = itertools.islice(all_committees, max_num_of_committees)
            committees = [CandidateSet(comm) for comm in committees]
    else:
        raise UnknownAlgorithm(rule_id, algorithm)

    # optional output
    output.info(header(rule.longname), wrap=False)
    if resolute:
        output.info("Computing only one winning committee (resolute=True)\n")
    output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
    output.info(
        str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
    )
    # end of optional output

    return sorted_committees(committees)

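The non-resolute branch of the trivial rule is simply `itertools.combinations` capped with `itertools.islice`, which enumerates committees lazily instead of materializing all binomial-many of them. A minimal sketch of that pattern (the candidate indices here are made up):

```python
import itertools

candidates = range(4)        # hypothetical candidates 0..3
committeesize = 2
max_num_of_committees = 3

# Lazily enumerate all size-k subsets, but stop after the first few.
all_committees = itertools.combinations(candidates, committeesize)
committees = list(itertools.islice(all_committees, max_num_of_committees))
print(committees)  # [(0, 1), (0, 2), (0, 3)] — the lexicographically first three
```

Because `combinations` is a generator, `islice` guarantees that only `max_num_of_committees` committees are ever constructed, even for very large candidate sets.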
def compute_rsd(
    profile, committeesize, algorithm="standard", resolute=True, max_num_of_committees=None
):
    """
    Compute winning committees with the Random Serial Dictator rule.

    This rule randomly selects a permutation of voters. The first voter in this permutation
    adds all approved candidates to the winning committee, then the second voter,
    then the third, etc. At some point, a voter has more approved candidates than
    can be added to the winning committee. In this case, as many as possible are added
    (candidates with smaller index first). In this way, the winning committee is constructed.

    .. important::

        This algorithm is not deterministic and relies on the Python module `random`.

    Parameters
    ----------
    profile : abcvoting.preferences.Profile
        A profile.
    committeesize : int
        The desired committee size.
    algorithm : str, optional
        The algorithm to be used.

        The following algorithms are available for Random Serial Dictator:

        .. doctest::

            >>> Rule("rsd").algorithms
            ('standard',)
    resolute : bool, optional
        Return only one winning committee.

        If `resolute=False`, all winning committees are computed (subject to
        `max_num_of_committees`).
    max_num_of_committees : int, optional
        At most `max_num_of_committees` winning committees are computed.

        If `max_num_of_committees=None`, the number of winning committees is not restricted.
        The default value of `max_num_of_committees` can be modified via the constant
        `MAX_NUM_OF_COMMITTEES_DEFAULT`.

    Returns
    -------
    list of CandidateSet
        A list of winning committees.
    """
    rule_id = "rsd"
    rule = Rule(rule_id)
    if algorithm == "fastest":
        algorithm = rule.fastest_available_algorithm()
    rule.verify_compute_parameters(
        profile, committeesize, algorithm, resolute, max_num_of_committees
    )
    if not profile.has_unit_weights():
        raise ValueError(f"{rule.shortname} is only implemented for unit weights (weight=1).")
    # Todo: fix

    if algorithm == "standard":
        approval_sets = [sorted(voter.approved) for voter in profile]
        # random order of dictators
        random.shuffle(approval_sets)
        committee = set()
        for approved in approval_sets:
            if len(committee) + len(approved) <= committeesize:
                committee.update(approved)
            else:
                for cand in approved:
                    committee.add(cand)
                    if len(committee) == committeesize:
                        break
            if len(committee) == committeesize:
                break
        else:
            remaining_candidates = [cand for cand in profile.candidates if cand not in committee]
            num_missing_candidates = committeesize - len(committee)
            committee.update(random.sample(remaining_candidates, num_missing_candidates))
    else:
        raise UnknownAlgorithm(rule_id, algorithm)
    committees = [committee]

    # optional output
    output.info(header(rule.longname), wrap=False)
    output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
    output.info(
        str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
    )
    # end of optional output

    return sorted_committees(committees)

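The dictator loop above can be sketched in isolation: each voter in a shuffled order dumps their approved candidates into the committee until it is full, with a partial fill (smaller index first) for the voter who overshoots. This sketch uses a hypothetical profile and seeds the RNG only so the shuffle is reproducible here; `compute_rsd` itself is intentionally non-deterministic.

```python
import random

random.seed(0)  # reproducibility for this sketch only
approval_sets = [[0, 1], [2, 3, 4], [5]]  # made-up approval ballots
committeesize = 3
random.shuffle(approval_sets)             # random order of dictators

committee = set()
for approved in approval_sets:
    if len(committee) + len(approved) <= committeesize:
        committee.update(approved)        # dictator adds everything they approve
    else:
        for cand in approved:             # partial fill: smaller index first
            committee.add(cand)
            if len(committee) == committeesize:
                break
    if len(committee) == committeesize:
        break

print(sorted(committee))
```

For this toy profile every dictator order yields a full committee; in `compute_rsd` the `for ... else` branch additionally fills any remaining seats with randomly sampled candidates.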
def compute_eph(
    profile, committeesize, algorithm="float-fractions", resolute=False, max_num_of_committees=None
):
    """
    Compute winning committees with the "E Pluribus Hugo" (EPH) voting rule.

    This rule is used by the Hugo Awards as a shortlisting voting rule. It is described in the
    following paper under the name "Single Divisible Vote with Least-Popular Elimination
    (SDV-LPE)":

    "A proportional voting system for awards nominations resistant to voting blocs."
    Jameson Quinn and Bruce Schneier.
    <https://www.schneier.com/wp-content/uploads/2016/05/Proportional_Voting_System.pdf>

    Parameters
    ----------
    profile : abcvoting.preferences.Profile
        A profile.
    committeesize : int
        The desired committee size.
    algorithm : str, optional
        The algorithm to be used.

        The following algorithms are available for "E Pluribus Hugo":

        .. doctest::

            >>> Rule("eph").algorithms
            ('float-fractions', 'gmpy2-fractions', 'standard-fractions')
    resolute : bool, optional
        Return only one winning committee.

        If `resolute=False`, all winning committees are computed (subject to
        `max_num_of_committees`).
    max_num_of_committees : int, optional
        At most `max_num_of_committees` winning committees are computed.

        If `max_num_of_committees=None`, the number of winning committees is not restricted.
        The default value of `max_num_of_committees` can be modified via the constant
        `MAX_NUM_OF_COMMITTEES_DEFAULT`.

    Returns
    -------
    list of CandidateSet
        A list of winning committees.
    """
    rule_id = "eph"
    rule = Rule(rule_id)
    if algorithm == "fastest":
        algorithm = rule.fastest_available_algorithm()
    rule.verify_compute_parameters(
        profile, committeesize, algorithm, resolute, max_num_of_committees
    )

    committees, detailed_info = _eph_algorithm(
        rule_id=rule_id,
        profile=profile,
        algorithm=algorithm,
        committeesize=committeesize,
        resolute=resolute,
        max_num_of_committees=max_num_of_committees,
    )

    # optional output
    output.info(header(rule.longname), wrap=False)
    if resolute:
        output.info("Computing only one winning committee (resolute=True)\n")
    output.details(f"Algorithm: {ALGORITHM_NAMES[algorithm]}\n")
    output.info(
        str_committees_with_header(committees, cand_names=profile.cand_names, winning=True)
    )
    # end of optional output

    return sorted_committees(committees)

def _eph_algorithm(rule_id, profile, algorithm, committeesize, resolute, max_num_of_committees):
    """Algorithm for computing the "E Pluribus Hugo" (EPH) voting rule."""
    if algorithm == "float-fractions":
        division = lambda x, y: x / y  # standard float division
    elif algorithm == "standard-fractions":
        division = Fraction  # using Python built-in fractions
    elif algorithm == "gmpy2-fractions":
        if not mpq:
            raise ImportError(
                'Module gmpy2 not available, required for algorithm "gmpy2-fractions"'
            )
        division = mpq  # using gmpy2 fractions
    else:
        raise UnknownAlgorithm(rule_id, algorithm)

    if resolute:
        max_num_of_committees = 1  # same algorithm for resolute==True and resolute==False

    remaining_candidates = set(profile.candidates)
    while True:
        sdv_score = {cand: 0 for cand in remaining_candidates}
        av_score = {cand: 0 for cand in remaining_candidates}
        for voter in profile:
            remaining_approved = [cand for cand in remaining_candidates if cand in voter.approved]
            for cand in remaining_approved:
                sdv_score[cand] += division(voter.weight, len(remaining_approved))
                av_score[cand] += voter.weight
        cutoff_sdv = sorted(sdv_score.values())[1]  # 2nd smallest value
        elimination_cands = [
            cand
            for cand in remaining_candidates
            if (sdv_score[cand] <= cutoff_sdv)
            or (algorithm == "float-fractions" and misc.isclose(sdv_score[cand], cutoff_sdv))
        ]
        cutoff_av = min(av_score[cand] for cand in elimination_cands)
        elimination_cands = [cand for cand in elimination_cands if av_score[cand] <= cutoff_av]
        if len(remaining_candidates) - len(elimination_cands) <= committeesize:
            num_cands_to_be_eliminated = len(remaining_candidates) - committeesize
            committees = sorted_committees(
                [
                    (remaining_candidates - set(selection))
                    for selection in itertools.combinations(
                        elimination_cands, num_cands_to_be_eliminated
                    )
                ]
            )
            detailed_info = {}
            return committees[:max_num_of_committees], detailed_info
        remaining_candidates -= set(elimination_cands)
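The two scores that drive EPH's elimination rounds are computed per voter: the single divisible vote (SDV) splits a voter's weight evenly across their still-remaining approved candidates, while the AV score counts the full weight. A self-contained sketch with a made-up unit-weight profile:

```python
from fractions import Fraction

# Hypothetical profile: the set of remaining candidates each voter approves.
approvals = [{0, 1}, {0}, {1, 2}]
remaining = {0, 1, 2}

sdv_score = {c: Fraction(0) for c in remaining}
av_score = {c: 0 for c in remaining}
for approved in approvals:
    remaining_approved = [c for c in remaining if c in approved]
    for c in remaining_approved:
        sdv_score[c] += Fraction(1, len(remaining_approved))  # split the vote
        av_score[c] += 1                                      # full vote

print(sdv_score)  # candidate 0: 3/2, candidate 1: 1, candidate 2: 1/2
print(av_score)   # candidate 0: 2,   candidate 1: 2, candidate 2: 1
```

Candidates with the lowest SDV scores (AV score as tiebreaker) are eliminated each round, which is what makes the rule resistant to voting blocs: a bloc voting for many candidates dilutes its own SDV support.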

# ---- file: cavedb/docgen_mxf.py (repo: masneyb/cavedbmanager, license: Apache-2.0) ----

# SPDX-License-Identifier: Apache-2.0
import cavedb.docgen_common
import cavedb.utils


class Mxf(cavedb.docgen_common.Common):
    def __init__(self, filename, download_url):
        cavedb.docgen_common.Common.__init__(self)
        self.filename = filename
        self.download_url = download_url
        self.number = 1
        self.mxffile = None

    def open(self, all_regions_gis_hash):
        cavedb.docgen_common.create_base_directory(self.filename)
        self.mxffile = open(self.filename, 'w')

    def close(self):
        self.mxffile.close()

    def feature_entrance(self, feature, entrance, coordinates):
        wgs84_lon_lat = coordinates.get_lon_lat_wgs84()

        self.mxffile.write('%s, %s, \"%s\", \"%s%s\", \"Number: %s Height: %s\", ff0000, 3\n' %
                           (wgs84_lon_lat[1], wgs84_lon_lat[0],
                            cavedb.docgen_common.get_entrance_name(feature, entrance),
                            feature.survey_county.survey_short_name, feature.survey_id,
                            self.number, entrance.elevation_ft))

        self.number = self.number + 1

    def create_html_download_urls(self):
        return self.create_url(self.download_url, 'Maptech (MXF)', self.filename)


def create_for_bulletin(bulletin):
    return Mxf(get_bulletin_mxf_filename(bulletin.id), 'bulletin/%s/mxf' % (bulletin.id))


def create_for_global():
    return Mxf(get_global_mxf_filename(), None)


def get_bulletin_mxf_filename(bulletin_id):
    return '%s/mxf/bulletin_%s.mxf' % (cavedb.utils.get_output_base_dir(bulletin_id), bulletin_id)


def get_global_mxf_filename():
    return '%s/mxf/all.mxf' % (cavedb.utils.get_global_output_base_dir())

# ---- file: rrmng/rrmngmnt/user.py (repo: avihaie/bug-hunter, license: Apache-2.0) ----

from rrmng.rrmngmnt.resource import Resource
class User(Resource):
    def __init__(self, name, password):
        """
        Args:
            password (str): Password
            name (str): User name
        """
        super(User, self).__init__()
        self.name = name
        self.password = password

    @property
    def full_name(self):
        return self.get_full_name()

    def get_full_name(self):
        return self.name


class RootUser(User):
    NAME = 'root'

    def __init__(self, password):
        super(RootUser, self).__init__(self.NAME, password)


class Domain(Resource):
    def __init__(self, name, provider=None, server=None):
        """
        Args:
            server (str): Server address
            name (str): Name of domain
            provider (str): Name of provider / type of domain
        """
        super(Domain, self).__init__()
        self.name = name
        self.provider = provider
        self.server = server


class InternalDomain(Domain):
    NAME = 'internal'

    def __init__(self):
        super(InternalDomain, self).__init__(self.NAME)


class ADUser(User):
    def __init__(self, name, password, domain):
        """
        Args:
            domain (instance of Domain): User domain
            password (str): Password
            name (str): User name
        """
        super(ADUser, self).__init__(name, password)
        self.domain = domain

    def get_full_name(self):
        return "%s@%s" % (self.name, self.domain.name)

# ---- file: politicians/tasks.py (repo: zinaukarenku/zkr-platform, license: Apache-2.0) ----

from celery import shared_task
from politicians.models import Promises, PromiseAction


@shared_task(soft_time_limit=30)
def get_promise_action_scores(promiseaction_id=None):
    promiseAction = PromiseAction.objects.filter(pk=promiseaction_id)
1c86202c5c9d6cf6069f1ca55edd96f5a6259a61 | 443 | py | Python | proxypool/utils/parse.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | proxypool/utils/parse.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | proxypool/utils/parse.py | zronghui/ProxyPool | 7c0dde213c56942807d6421fa1e3604d3f25514f | [
"MIT"
] | null | null | null | import re
def parse_redis_connection_string(connection_string):
    """
    parse a redis connection string, for example:

    redis://[password]@host:port
    rediss://[password]@host:port

    :param connection_string:
    :return: tuple of (host, port, password)
    """
    result = re.match(r'rediss?://(.*?)@(.*?):(\d+)', connection_string)
    # Parentheses around the tuple are required: without them, the conditional
    # expression would apply only to the last element and result.group(2) would
    # raise AttributeError when the match fails.
    return (result.group(2), int(result.group(3)), result.group(1) or None) if result \
        else ('localhost', 6379, None)

# ---- file: src/data/dataframes.py (repo: TihonkovSergey/pd-model-logreg, license: Apache-2.0) ----

from pathlib import Path
import pandas as pd

from definitions import ROOT_DIR


def get_train():
    data_path = Path(ROOT_DIR).joinpath("data/raw/")
    return pd.read_csv(data_path.joinpath('PD-data-train.csv'), sep=';')


def get_test():
    data_path = Path(ROOT_DIR).joinpath("data/raw/")
    return pd.read_csv(data_path.joinpath('PD-data-test.csv'), sep=';')


def get_data_description():
    data_path = Path(ROOT_DIR).joinpath("data/raw/")
    return pd.read_csv(data_path.joinpath('PD-data-desc.csv'), sep=';')

# ---- file: app/config.py (repo: joelkyu/SantaComingToTown, license: MIT) ----

from flask import Flask, render_template
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

# ---- file: src/long_read_pipeline/migrations/0001_initial.py (repo: NTU-CGM/MiDSystem, license: MIT) ----

# -*- coding: utf-8 -*-
# Generated by Django 1.11.3 on 2021-06-10 20:18
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='long_ip_log',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('ip', models.CharField(max_length=25)),
                ('country', models.CharField(default='NA', max_length=50)),
                ('functions', models.CharField(default='NA', max_length=25)),
                ('submission_time', models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name='long_User_Job',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('user_id', models.CharField(max_length=50)),
                ('upload_id', models.CharField(max_length=64)),
                ('ip', models.CharField(max_length=25)),
                ('mail', models.EmailField(max_length=254)),
                ('submission_time', models.DateTimeField(auto_now_add=True)),
                ('start_time', models.DateTimeField(auto_now_add=True)),
                ('end_time', models.DateTimeField(auto_now_add=True)),
                ('total_status', models.CharField(default='WAITING', max_length=10)),
                ('data_preparation_status', models.CharField(default='WAITING', max_length=10)),
                ('quality_check', models.CharField(default='WAITING', max_length=10)),
                ('assembly_status', models.CharField(default='WAITING', max_length=10)),
                ('remap_status', models.CharField(default='SKIPPED', max_length=10)),
                ('gene_prediction_status', models.CharField(default='WAITING', max_length=10)),
                ('go_status', models.CharField(default='WAITING', max_length=10)),
                ('tree_status', models.CharField(default='SKIPPED', max_length=10)),
                ('parsing_status', models.CharField(default='WAITING', max_length=10)),
                ('error_log', models.CharField(default='NA', max_length=50)),
            ],
        ),
    ]

# ---- file: python/aead/aead_key_manager.py (repo: tsingson/tink, license: Apache-2.0) ----

# Copyright 2019 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Python wrapper of the CLIF-wrapped C++ AEAD key manager."""

from __future__ import absolute_import
from __future__ import division
from __future__ import google_type_annotations
from __future__ import print_function

from typing import Text

from tink.cc.python import aead as cc_aead
from tink.python.aead import aead
from tink.python.cc.clif import cc_key_manager
from tink.python.core import key_manager
from tink.python.core import tink_error


class _AeadCcToPyWrapper(aead.Aead):
  """Transforms CLIF-wrapped C++ Aead primitive into a Python primitive."""

  def __init__(self, cc_primitive: cc_aead.Aead):
    self._aead = cc_primitive

  @tink_error.use_tink_errors
  def encrypt(self, plaintext: bytes, associated_data: bytes) -> bytes:
    return self._aead.encrypt(plaintext, associated_data)

  @tink_error.use_tink_errors
  def decrypt(self, plaintext: bytes, associated_data: bytes) -> bytes:
    return self._aead.decrypt(plaintext, associated_data)


def from_cc_registry(type_url: Text) -> key_manager.KeyManager[aead.Aead]:
  return key_manager.KeyManagerCcToPyWrapper(
      cc_key_manager.AeadKeyManager.from_cc_registry(type_url), aead.Aead,
      _AeadCcToPyWrapper)

# ---- file: papy/__init__.py (repo: timothyyu/au_utils, license: BSD-3-Clause) ----

name = 'papy'
from . import freq, img, num, plot, time, misc
# @file dsc_processor_plugin
# Plugin for parsing DSCs
##
# Copyright (c) Microsoft Corporation
#
# SPDX-License-Identifier: BSD-2-Clause-Patent
##


class IDscProcessorPlugin(object):

    ##
    # does the transform on the DSC
    #
    # @param dsc - the in-memory model of the DSC
    # @param thebuilder - UefiBuild object to get env information
    #
    # @return 0 for success, NonZero for error.
    ##
    def do_transform(self, dsc, thebuilder):
        return 0

    ##
    # gets the level that this transform operates at
    #
    # @param thebuilder - UefiBuild object to get env information
    #
    # @return 0 for the most generic level
    ##
    def get_level(self, thebuilder):
        return 0
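A hypothetical implementation of this plugin interface, to show the intended subclassing pattern (the `CountingDscProcessor` name and its counter are illustrative, not part of edk2-pytool-extensions):

```python
class IDscProcessorPlugin(object):
    """Base interface: do_transform returns 0 for success, NonZero for error."""

    def do_transform(self, dsc, thebuilder):
        return 0

    def get_level(self, thebuilder):
        return 0


class CountingDscProcessor(IDscProcessorPlugin):
    """Hypothetical plugin that counts how many DSCs it was asked to process."""

    def __init__(self):
        self.transformed = 0

    def do_transform(self, dsc, thebuilder):
        self.transformed += 1
        return 0  # 0 => success, per the interface contract above


plugin = CountingDscProcessor()
assert plugin.do_transform(dsc=None, thebuilder=None) == 0
assert plugin.transformed == 1
```

In real use the build system supplies the in-memory DSC model and the `UefiBuild` object; the `None` arguments here are placeholders for the sketch.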
# -*- coding:utf-8 -*-
import math
import numpy as np
def cosine_similarity(v1, v2):
    # compute cosine similarity of v1 to v2: (v1 dot v2) / (||v1|| * ||v2||)
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]
        y = v2[i]
        sumxx += x * x
        sumyy += y * y
        sumxy += x * y
    return sumxy / math.sqrt(sumxx * sumyy)


def cosine_distance(v1, v2):
    return 1 - cosine_similarity(v1, v2)


def L2_distance(v1, v2):
    return np.sqrt(np.sum(np.square(v1 - v2)))


def SSD_distance(v1, v2):
    return np.sum(np.square(v1 - v2))


def get_distance(dist_type):
    loss_map = {
        'cosine': cosine_distance,
        'L2': L2_distance,
        'SSD': SSD_distance}
    return loss_map[dist_type]
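A quick sanity check of these distances on a pair of orthogonal unit vectors, computed here directly with numpy so the snippet stands alone. It also illustrates why the three measures are interchangeable for ranking normalized embeddings: for unit vectors, ||u - v||^2 = 2 * (1 - cos(u, v)).

```python
import math

import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# cosine similarity of orthogonal vectors is 0, so cosine distance is 1
cos_sim = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
cos_dist = 1 - cos_sim

l2 = float(np.sqrt(np.sum(np.square(v1 - v2))))   # Euclidean distance
ssd = float(np.sum(np.square(v1 - v2)))           # sum of squared differences

assert cos_dist == 1.0
assert abs(l2 - math.sqrt(2)) < 1e-12
assert ssd == 2.0
# For unit vectors: squared L2 distance == 2 * cosine distance.
assert abs(ssd - 2 * cos_dist) < 1e-12
```

This identity is why face-recognition pipelines that L2-normalize embeddings can threshold on any of the three distances and get equivalent orderings.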
import random
m = 1
while m != 0:
    n1 = int(random.random() * 9) + 1
    n2 = int(random.random() * 9) + 1
    m = int(raw_input("{} * {} = ".format(n1, n2)))
    print("correto!" if m == n1 * n2 else "errado!!")
#https://pt.stackoverflow.com/q/259931/101
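The loop above reads from stdin and from the global `random` module directly, which makes it hard to test. A sketch of the same check with both dependencies injected; `play_round`, `ask`, and `FixedRng` are hypothetical names introduced for illustration:

```python
import random


class FixedRng:
    """Hypothetical stand-in for the random module with scripted outputs."""

    def __init__(self, values):
        self._values = iter(values)

    def random(self):
        return next(self._values)


def play_round(ask, rng=random):
    # Same logic as one pass of the loop above, with I/O and RNG injected.
    n1 = int(rng.random() * 9) + 1
    n2 = int(rng.random() * 9) + 1
    answer = int(ask("{} * {} = ".format(n1, n2)))
    return answer == n1 * n2


# 0.3 -> n1 = int(2.7) + 1 = 3; 0.4 -> n2 = int(3.6) + 1 = 4; product is 12.
assert play_round(lambda prompt: "12", FixedRng([0.3, 0.4]))
assert not play_round(lambda prompt: "11", FixedRng([0.3, 0.4]))
```

Note that `random.random() * 9` yields digits 1 through 9 after the `int(...) + 1`; `random.randint(1, 9)` would be the more idiomatic way to express the same range.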