hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9ac2d215665fe8d23ba2ab12a3e27bf4e67d3f6f | 264 | py | Python | app/genres/urls.py | marcosvbras/inimex-api | 1d19289d23bc2cd08672aa6d052803149df7af3e | [
"WTFPL"
] | null | null | null | app/genres/urls.py | marcosvbras/inimex-api | 1d19289d23bc2cd08672aa6d052803149df7af3e | [
"WTFPL"
] | null | null | null | app/genres/urls.py | marcosvbras/inimex-api | 1d19289d23bc2cd08672aa6d052803149df7af3e | [
"WTFPL"
] | 3 | 2019-09-06T17:02:30.000Z | 2021-10-25T23:57:52.000Z | from django.conf.urls import url
from .views import GenreListCreateView, GenreRetrieveUpdateView
urlpatterns = [
url(r'^genres/$', GenreListCreateView.as_view(), name='genre_list_create'),
url(r'^genres/(?P<genre_pk>\d+)$', GenreRetrieveUpdateView.as_view()),
] | 37.714286 | 76 | 0.765152 | 32 | 264 | 6.15625 | 0.65625 | 0.040609 | 0.101523 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075758 | 264 | 7 | 77 | 37.714286 | 0.807377 | 0 | 0 | 0 | 0 | 0 | 0.196226 | 0.098113 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
9aceeca95bdefec2b4cc8e397689736b997e4cae | 2,164 | py | Python | kernel_density/kernel_density_config.py | marius-blomli/telefold | be14c41483b6b81a9bc63153a36c7bb175e1e5ac | [
"MIT"
] | null | null | null | kernel_density/kernel_density_config.py | marius-blomli/telefold | be14c41483b6b81a9bc63153a36c7bb175e1e5ac | [
"MIT"
] | null | null | null | kernel_density/kernel_density_config.py | marius-blomli/telefold | be14c41483b6b81a9bc63153a36c7bb175e1e5ac | [
"MIT"
] | null | null | null | config = {
'matrikkel_zip_files': [{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0709_Larvik/UTM32_Euref89/Shape/32_Matrikkeldata_0709.zip',
'target_shape_prefix': '32_0709adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0706_Sandefjord/UTM32_Euref89/Shape/32_Matrikkeldata_0706.zip',
'target_shape_prefix': '32_0706adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0719_Andebu/UTM32_Euref89/Shape/32_Matrikkeldata_0719.zip',
'target_shape_prefix': '32_0719adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/08_Telemark/0814_Bamble/UTM32_Euref89/Shape/32_Matrikkeldata_0814.zip',
'target_shape_prefix': '32_0814adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0722_Notteroy/UTM32_Euref89/Shape/32_Matrikkeldata_0722.zip',
'target_shape_prefix': '32_0722adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/08_Telemark/0805_Porsgrunn/UTM32_Euref89/Shape/32_Matrikkeldata_0805.zip',
'target_shape_prefix': '32_0805adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/08_Telemark/0806_Skien/UTM32_Euref89/Shape/32_Matrikkeldata_0806.zip',
'target_shape_prefix': '32_0806adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0720_Stokke/UTM32_Euref89/Shape/32_Matrikkeldata_0720.zip',
'target_shape_prefix': '32_0720adresse_punkt'
},{
'zip_name': 'Y:/kartdata/Matrikkeldata/SOSIuttrekk/07_Vestfold/0723_Tjome/UTM32_Euref89/Shape/32_Matrikkeldata_0723.zip',
'target_shape_prefix': '32_0723adresse_punkt'
}],
'temp_directory': 'C:/temp/ntnu_temp/',
'file_name_raw_merged_temp': 'merged_matrikkel.shp',
'file_name_raw_merged_transformed_temp': 'merged_matrikkel_transformed.shp',
'file_name_raw_kernel_density_temp': 'kernel_density_raw.img',
'file_name_raster_to_fit': 'Y:/prosesserte_data/slope_arcm_img.img',
'area_rectangle': '532687,5 6533912,5 589987,5 6561812,5',
'file_name_kernel_density': 'Y:/prosesserte_data/kernel_density_25_500.img',
'kernel_density_cell_size': 25,
'kernel_density_search_radius': 500
}
| 54.1 | 128 | 0.806377 | 289 | 2,164 | 5.574394 | 0.273356 | 0.039106 | 0.044693 | 0.089385 | 0.605835 | 0.304159 | 0.304159 | 0.304159 | 0.273122 | 0 | 0 | 0.116838 | 0.058688 | 2,164 | 39 | 129 | 55.487179 | 0.67403 | 0 | 0 | 0.205128 | 0 | 0 | 0.85305 | 0.601201 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9ad3ba478acf06877dd826b4425a33af7a16bbb8 | 1,322 | py | Python | tests/download/download_tests.py | jarve/FAIRDATAtest | 7dd32a37a88b240996fcb67210c6e2f71a1091e0 | [
"MIT"
] | null | null | null | tests/download/download_tests.py | jarve/FAIRDATAtest | 7dd32a37a88b240996fcb67210c6e2f71a1091e0 | [
"MIT"
] | null | null | null | tests/download/download_tests.py | jarve/FAIRDATAtest | 7dd32a37a88b240996fcb67210c6e2f71a1091e0 | [
"MIT"
] | null | null | null | import time
import unittest
from tests.download import download
class TestDownload(unittest.TestCase):
@classmethod
def setUpClass(cls):
print('Executing %s...' % cls.__name__)
super().setUpClass()
def setUp(self):
self.OK = [200, 201, 202, 203, 204]
self.FAIL = [401, 403, 404, 500]
def test_file(self):
# downloading a file from the example dataset
urn = 'urn:nbn:fi:att:d2cf4977-36fa-4762-adb3-126ea06108ed?file=5b7d486c951df671401089f134223'
download_status, download_data = download.download_dataset(urn)
self.assertIn(download_status, self.OK, "Download could not download the file")
def test_dir(self):
# downloading a dir from the example dataset
urn = 'urn:nbn:fi:att:d2cf4977-36fa-4762-adb3-126ea06108ed?dir=c66c97e387933b82a734d904cc9572a3'
download_status, download_data = download.download_dataset(urn)
self.assertIn(download_status, self.OK, "Download could not download the file")
def test_dataset(self):
# downloading the example dataset
urn = 'urn:nbn:fi:att:d2cf4977-36fa-4762-adb3-126ea06108ed'
download_status, download_data = download.download_dataset(urn)
self.assertIn(download_status, self.OK, "Download could not download the dataset")
| 36.722222 | 104 | 0.700454 | 164 | 1,322 | 5.530488 | 0.335366 | 0.066152 | 0.056229 | 0.066152 | 0.6086 | 0.6086 | 0.6086 | 0.6086 | 0.6086 | 0.6086 | 0 | 0.125947 | 0.20121 | 1,322 | 35 | 105 | 37.771429 | 0.732955 | 0.089259 | 0 | 0.217391 | 0 | 0.086957 | 0.292744 | 0.187656 | 0 | 0 | 0 | 0 | 0.130435 | 1 | 0.217391 | false | 0 | 0.130435 | 0 | 0.391304 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9ae02703dc473878cd60b65bcd6439a89d549462 | 1,158 | py | Python | csvkit/exceptions.py | tthibo/csvkit | fb12c7df32504b51b9def6e3cff41c36147616cf | [
"MIT"
] | 2 | 2015-03-06T15:22:02.000Z | 2016-03-11T13:35:48.000Z | csvkit/exceptions.py | tthibo/csvkit | fb12c7df32504b51b9def6e3cff41c36147616cf | [
"MIT"
] | null | null | null | csvkit/exceptions.py | tthibo/csvkit | fb12c7df32504b51b9def6e3cff41c36147616cf | [
"MIT"
] | null | null | null | #!/usr/bin/env python
class ColumnIdentifierError(Exception):
"""
Exception raised when the user supplies an invalid column identifier.
"""
def __init__(self, msg):
self.msg = msg
class XLSDataError(Exception):
"""
Exception raised when there is a problem converting XLS data.
"""
def __init__(self, msg):
self.msg = msg
class CSVTestException(Exception):
"""
Superclass for all row-test-failed exceptions.
All must have a line number, the problematic row, and a text explanation.
"""
def __init__(self, line_number, row, msg):
super(CSVTestException, self).__init__()
self.msg = msg
self.line_number = line_number
self.row = row
class LengthMismatchError(CSVTestException):
"""
Encapsulate information about a row which has the wrong length.
"""
def __init__(self, line_number, row, expected_length):
msg = "Expected %i columns, found %i columns" % (expected_length, len(row))
super(LengthMismatchError, self).__init__(line_number, row, msg)
@property
def length(self):
return len(self.row)
| 28.95 | 83 | 0.65544 | 137 | 1,158 | 5.313869 | 0.437956 | 0.082418 | 0.06044 | 0.076923 | 0.145604 | 0.145604 | 0.07967 | 0.07967 | 0 | 0 | 0 | 0 | 0.246978 | 1,158 | 39 | 84 | 29.692308 | 0.834862 | 0.291019 | 0 | 0.263158 | 0 | 0 | 0.049007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0 | 0.052632 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b100e0561aad30cdeececf8275ef55cddc5bc621 | 1,580 | py | Python | examples/processors/other.py | calibre12/zipreport | e6b48c999c530e16600842847a17f650002a0e13 | [
"MIT"
] | null | null | null | examples/processors/other.py | calibre12/zipreport | e6b48c999c530e16600842847a17f650002a0e13 | [
"MIT"
] | null | null | null | examples/processors/other.py | calibre12/zipreport | e6b48c999c530e16600842847a17f650002a0e13 | [
"MIT"
] | null | null | null | from zipreport import ZipReportCli
from zipreport.processors.zipreport import ZipReportClient, ZipReportProcessor
from zipreport.report import ReportFileLoader, ReportJob
from zipreport.template import JinjaRender
def generate_pdf_server(report:str, data: dict, output_file:str) -> bool:
zpt = ReportFileLoader.load(report) # load zpt file
JinjaRender(zpt).render(data) # render jinja template with parameters
job = ReportJob(zpt) # create new rendering job
job.set_jsevent(True) # report uses event hook
job.set_jsevent_timeout(500)
job.set_process_timeout(600)
job.set_render_timeout(600)
client = ZipReportClient('http://127.0.0.1:6543', "") # zipreport-server client
result = ZipReportProcessor(client).process(job) # render
if result.success:
with open(output_file, 'wb') as rpt:
rpt.write(result.report.read())
return True
return False
def generate_pdf_cli(zipreport_cli:str, report: str, data: dict, output_file: str) -> bool:
zpt = ReportFileLoader.load(report) # load zpt file
JinjaRender(zpt).render(data) # render jinja template with parameters
job = ReportJob(zpt) # create new rendering job
job.set_jsevent(True) # report uses event hook
job.set_jsevent_timeout(500)
job.set_process_timeout(600)
job.set_render_timeout(600)
result = ZipReportCli(zipreport_cli).render(job, data)
if result.success:
with open(output_file, 'wb') as rpt:
rpt.write(result.report.read())
return True
return False
| 35.111111 | 91 | 0.710759 | 203 | 1,580 | 5.413793 | 0.295567 | 0.043676 | 0.047316 | 0.030937 | 0.647862 | 0.647862 | 0.647862 | 0.647862 | 0.647862 | 0.647862 | 0 | 0.022082 | 0.197468 | 1,580 | 44 | 92 | 35.909091 | 0.844637 | 0.14557 | 0 | 0.727273 | 0 | 0 | 0.018671 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.121212 | 0 | 0.30303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b12814d054ed75c22f60eaf6b6c69d028a62c9df | 754 | py | Python | Lib/site-packages/pyexcel_io/writers/tsvz.py | percevalm/aumyproject | b24b38005188ce9dd41ed663cf54dad5464afef3 | [
"bzip2-1.0.6"
] | null | null | null | Lib/site-packages/pyexcel_io/writers/tsvz.py | percevalm/aumyproject | b24b38005188ce9dd41ed663cf54dad5464afef3 | [
"bzip2-1.0.6"
] | 16 | 2020-03-24T17:30:37.000Z | 2022-03-11T23:57:41.000Z | Lib/site-packages/pyexcel_io/writers/tsvz.py | percevalm/aumyproject | b24b38005188ce9dd41ed663cf54dad5464afef3 | [
"bzip2-1.0.6"
] | null | null | null | """
pyexcel_io.fileformat.tsvz
~~~~~~~~~~~~~~~~~~~~~~~~~~
The lower level tsvz file format handler.
:copyright: (c) 2014-2017 by Onni Software Ltd.
:license: New BSD License, see LICENSE for more details
"""
from pyexcel_io.constants import FILE_FORMAT_TSVZ, KEYWORD_TSV_DIALECT
from .csvz import CSVZipBookWriter
class TSVZipBookWriter(CSVZipBookWriter):
""" write zipped tsv file
It is similar to CSVZipBookWriter, but supports tab-separated values
"""
def __init__(self):
CSVZipBookWriter.__init__(self)
self._file_type = FILE_FORMAT_TSVZ
def open(self, file_name, **keywords):
keywords["dialect"] = KEYWORD_TSV_DIALECT
CSVZipBookWriter.open(self, file_name, **keywords)
| 26.928571 | 72 | 0.689655 | 89 | 754 | 5.595506 | 0.595506 | 0.060241 | 0.056225 | 0.064257 | 0.096386 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013245 | 0.198939 | 754 | 27 | 73 | 27.925926 | 0.811258 | 0.388594 | 0 | 0 | 0 | 0 | 0.016746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b135494785a70fb73063ec4b52b691407e172ccc | 967 | py | Python | libs/report/__init__.py | Cookie-YY/cooshow | fe487ff27a4d5fa0a2f832c45694fb4526d9771b | [
"MIT"
] | null | null | null | libs/report/__init__.py | Cookie-YY/cooshow | fe487ff27a4d5fa0a2f832c45694fb4526d9771b | [
"MIT"
] | null | null | null | libs/report/__init__.py | Cookie-YY/cooshow | fe487ff27a4d5fa0a2f832c45694fb4526d9771b | [
"MIT"
] | null | null | null | class Graph:
"""
After the convert layer subclasses Graph and puts its data into __init__, calling the parent class's run method yields the populated data.
"""
def __init__(self):
pass
def get_graph(self):
"""
Find the appropriate graph based on the parameters passed in
"""
pass
def fill_graph(self):
"""
Take the formatted data and fill it into the graph
"""
pass
def _get_data(self):
"""
Use gd_id0 to fetch the real request and run the full pipeline to obtain the data
"""
pass
def _format_data(self):
"""
Convert into the format required by the echarts chart
"""
pass
def run(self):
self.get_graph()
self.fill_graph()
return self
class Report:
"""
Report mode for the plugin workflow: after settings/gdxf/report/xxx.py subclasses Report, it sets
self.text_title_1 = placeholder {gd_id1} inside the text
self.text_bg_2 =
self.gd_id1 = the URL to send the request to
self.wrapper_data =
self.wrapper_text =
self.wrapper_all =
self.wrapper_text1
self.wrapper_text2
.....
then calls the parent class's run method, which:
1. simulates sending all the requests and collects the data
2. fills the data into the text
3. assembles the text pieces together
"""
pass
| 16.964912 | 57 | 0.54395 | 99 | 967 | 5.030303 | 0.515152 | 0.070281 | 0.040161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016051 | 0.355739 | 967 | 56 | 58 | 17.267857 | 0.783307 | 0.434333 | 0 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.352941 | false | 0.352941 | 0 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
b1356ca6ddabc52431e8b28f5c29a332530cfdab | 190 | py | Python | wrappers/scripts/get_last_tx_hash.py | button-tech/ton-grams-testnet | e4d83fef72e24f8ba977bc0dfab9d4f57c1f27ec | [
"MIT"
] | null | null | null | wrappers/scripts/get_last_tx_hash.py | button-tech/ton-grams-testnet | e4d83fef72e24f8ba977bc0dfab9d4f57c1f27ec | [
"MIT"
] | null | null | null | wrappers/scripts/get_last_tx_hash.py | button-tech/ton-grams-testnet | e4d83fef72e24f8ba977bc0dfab9d4f57c1f27ec | [
"MIT"
] | null | null | null | #!/usr/bin/env python3.7
import sys
from ton import get_last_tx_hash
tx_hash = get_last_tx_hash(sys.argv[1], sys.argv[2])
if tx_hash == False:
print("error")
else:
print(tx_hash)
| 15.833333 | 52 | 0.705263 | 36 | 190 | 3.472222 | 0.583333 | 0.24 | 0.144 | 0.208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.157895 | 190 | 11 | 53 | 17.272727 | 0.75625 | 0.121053 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.285714 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b139ad758af1e15899ea2c47d2b089c2a1cd60cc | 513 | py | Python | d1/part2-d1.py | guillesiesta/advent_of_code_2021 | 0e6a2fa4ada22794c7947cabd049866b43a47e1d | [
"Apache-2.0"
] | null | null | null | d1/part2-d1.py | guillesiesta/advent_of_code_2021 | 0e6a2fa4ada22794c7947cabd049866b43a47e1d | [
"Apache-2.0"
] | null | null | null | d1/part2-d1.py | guillesiesta/advent_of_code_2021 | 0e6a2fa4ada22794c7947cabd049866b43a47e1d | [
"Apache-2.0"
] | null | null | null | # open and read the file
f = open ('input.txt','r')
mensaje = f.read()
f.close()
# store the numbers in a list
list_numbers = mensaje.split('\n')
list_numbers = list(map(int,list_numbers))
#print(list_numbers)
increased = 0
for x in range(len(list_numbers)-3):
previous = list_numbers[x] + list_numbers[x+1] + list_numbers[x+2]
posterior = list_numbers[x+1] + list_numbers[x+2] + list_numbers[x+3]
if(previous < posterior):
increased = increased + 1
print(increased) | 27 | 74 | 0.668616 | 78 | 513 | 4.25641 | 0.487179 | 0.364458 | 0.216867 | 0.078313 | 0.156627 | 0.156627 | 0.156627 | 0.156627 | 0 | 0 | 0 | 0.019277 | 0.191033 | 513 | 19 | 75 | 27 | 0.780723 | 0.14425 | 0 | 0 | 0 | 0 | 0.028708 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b149a6b6d77f47e475bb5e10fe2362b5a66f04fe | 112 | py | Python | exercises/ex005.py | mouraa0/python-exercises | 78ecf1cb0d1dfd7dfbdd05574cce5cd6a5cba0f1 | [
"MIT"
] | null | null | null | exercises/ex005.py | mouraa0/python-exercises | 78ecf1cb0d1dfd7dfbdd05574cce5cd6a5cba0f1 | [
"MIT"
] | null | null | null | exercises/ex005.py | mouraa0/python-exercises | 78ecf1cb0d1dfd7dfbdd05574cce5cd6a5cba0f1 | [
"MIT"
] | null | null | null | n = int(input('Enter a number: '))
ns = n+1
na = n-1
print(f'The successor of {n} is {ns} and the predecessor is {na}.')
b14f5760b3aaebfd8f0c0dc267d7969296a29dad | 22,031 | py | Python | lttnganalyses/core/period.py | azharivs/lamiminer | 168b79074e08837076c832d942d1db290d139930 | [
"MIT"
] | 93 | 2015-02-04T16:55:27.000Z | 2022-03-05T13:25:02.000Z | lttnganalyses/core/period.py | azharivs/lamiminer | 168b79074e08837076c832d942d1db290d139930 | [
"MIT"
] | 49 | 2015-02-03T19:57:44.000Z | 2022-02-27T13:05:55.000Z | lttnganalyses/core/period.py | azharivs/lamiminer | 168b79074e08837076c832d942d1db290d139930 | [
"MIT"
] | 32 | 2015-02-03T19:33:07.000Z | 2020-12-23T13:25:16.000Z | # The MIT License (MIT)
#
# Copyright (C) 2016 - Philippe Proulx <pproulx@efficios.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from . import event as core_event
from functools import partial
import babeltrace as bt
import enum
class InvalidPeriodDefinition(Exception):
pass
# period definition registry, owner of the whole tree of periods
class PeriodDefinitionRegistry:
def __init__(self):
self._root_period_defs = set()
self._named_period_defs = {}
# name to hierarchy
self._full_period_path = {}
def period_full_path(self, name):
return self._full_period_path[name]
def has_period_def(self, name):
return name in self._named_period_defs
def add_full_period_path(self, period_name, parent_name):
period_path = [period_name]
period_path_str = ""
if parent_name is None:
self._full_period_path[period_name] = period_name
return
parent = self.get_period_def(parent_name)
while parent is not None:
period_path.append(parent.name)
parent = parent.parent
period_path.reverse()
for i in period_path:
if len(period_path_str) == 0:
period_path_str = i
else:
period_path_str = "%s/%s" % (period_path_str, i)
self._full_period_path[period_name] = period_path_str
def add_period_def(self, parent_name, period_name, begin_expr, end_expr,
begin_captures_exprs, end_captures_exprs):
# validate unique period name (if named)
if self.has_period_def(period_name):
raise InvalidPeriodDefinition('Cannot redefine period "{}"'.format(
period_name))
# validate that parent exists if it's set
if parent_name is not None and not self.has_period_def(parent_name):
fmt = 'Cannot find parent period named "{}" (as parent of ' \
'period "{}")'
msg = fmt.format(parent_name, period_name)
raise InvalidPeriodDefinition(msg)
# create period, and associate parent and children
parent = None
if parent_name is not None:
parent = self.get_period_def(parent_name)
period_def = PeriodDefinition(parent, period_name, begin_expr,
end_expr, begin_captures_exprs,
end_captures_exprs)
if parent is not None:
parent.children.add(period_def)
# validate new period definition
PeriodDefinitionValidator(period_def)
if period_def.parent is None:
self._root_period_defs.add(period_def)
if period_def.name is not None:
self._named_period_defs[period_def.name] = period_def
self.add_full_period_path(period_name, parent_name)
def get_period_def(self, name):
return self._named_period_defs.get(name)
@property
def root_period_defs(self):
for period_def in self._root_period_defs:
yield period_def
@property
def named_period_defs(self):
return self._named_period_defs
@property
def is_empty(self):
return len(self._root_period_defs) == 0 and \
len(self._named_period_defs) == 0
# definition of a period
class PeriodDefinition:
def __init__(self, parent, name, begin_expr, end_expr,
begin_captures_exprs, end_captures_exprs):
self._parent = parent
self._children = set()
self._name = name
self._begin_expr = begin_expr
self._end_expr = end_expr
self._begin_captures_exprs = begin_captures_exprs
self._end_captures_exprs = end_captures_exprs
@property
def name(self):
return self._name
@property
def parent(self):
return self._parent
@property
def begin_expr(self):
return self._begin_expr
@property
def end_expr(self):
return self._end_expr
@property
def begin_captures_exprs(self):
return self._begin_captures_exprs
@property
def end_captures_exprs(self):
return self._end_captures_exprs
@property
def children(self):
return self._children
class _Expression:
pass
class _BinaryExpression(_Expression):
def __init__(self, lh_expr, rh_expr):
self._lh_expr = lh_expr
self._rh_expr = rh_expr
@property
def lh_expr(self):
return self._lh_expr
@property
def rh_expr(self):
return self._rh_expr
class _UnaryExpression(_Expression):
def __init__(self, expr):
self._expr = expr
@property
def expr(self):
return self._expr
class LogicalNot(_UnaryExpression):
def __repr__(self):
return '!({})'.format(self.expr)
class LogicalAnd(_BinaryExpression):
def __repr__(self):
return '({} && {})'.format(self.lh_expr, self.rh_expr)
class LogicalOr(_BinaryExpression):
def __repr__(self):
return '({} || {})'.format(self.lh_expr, self.rh_expr)
class GlobEq(_BinaryExpression):
def __init__(self, lh_expr, rh_expr):
super().__init__(lh_expr, rh_expr)
self._compile()
def _compile(self):
import fnmatch
import re
pattern = self.rh_expr.value
regex = fnmatch.translate(pattern)
self._regex = re.compile(regex)
@property
def regex(self):
return self._regex
def __repr__(self):
return '({} =* {})'.format(self.lh_expr, self.rh_expr)
class Eq(_BinaryExpression):
def __repr__(self):
return '({} == {})'.format(self.lh_expr, self.rh_expr)
class Lt(_BinaryExpression):
def __repr__(self):
return '({} < {})'.format(self.lh_expr, self.rh_expr)
class LtEq(_BinaryExpression):
def __repr__(self):
return '({} <= {})'.format(self.lh_expr, self.rh_expr)
class Gt(_BinaryExpression):
def __repr__(self):
return '({} > {})'.format(self.lh_expr, self.rh_expr)
class GtEq(_BinaryExpression):
def __repr__(self):
return '({} >= {})'.format(self.lh_expr, self.rh_expr)
class Number(_Expression):
def __init__(self, value):
self._value = value
@property
def value(self):
return self._value
def __repr__(self):
return '{}'.format(self.value)
class String(_Expression):
def __init__(self, value):
self._value = value
@property
def value(self):
return self._value
def __repr__(self):
return '"{}"'.format(self.value)
@enum.unique
class DynScope(enum.Enum):
AUTO = 'auto'
TPH = '$pkt_header'
SPC = '$pkt_ctx'
SEH = '$header'
SEC = '$stream_ctx'
EC = '$ctx'
EP = '$payload'
class _SingleChildNode(_Expression):
def __init__(self, child):
self._child = child
@property
def child(self):
return self._child
class ParentScope(_SingleChildNode):
def __repr__(self):
return '$parent.{}'.format(self.child)
class BeginScope(_SingleChildNode):
def __repr__(self):
return '$begin.{}'.format(self.child)
class EventScope(_SingleChildNode):
def __repr__(self):
return '$evt.{}'.format(self.child)
class DynamicScope(_SingleChildNode):
def __init__(self, dyn_scope, child):
super().__init__(child)
self._dyn_scope = dyn_scope
@property
def dyn_scope(self):
return self._dyn_scope
def __repr__(self):
if self._dyn_scope == DynScope.AUTO:
return repr(self.child)
return '{}.{}'.format(self.dyn_scope.value, self.child)
class EventFieldName(_Expression):
def __init__(self, name):
self._name = name
@property
def name(self):
return self._name
def __repr__(self):
return self._name
class EventName(_Expression):
def __repr__(self):
return '$name'
class IllegalExpression(Exception):
pass
class PeriodDefinitionValidator:
def __init__(self, period_def):
self._period_def = period_def
self._validate_expr_cbs = {
LogicalNot: self._validate_unary_expr,
LogicalAnd: self._validate_binary_expr,
LogicalOr: self._validate_binary_expr,
GlobEq: self._validate_comp,
Eq: self._validate_comp,
Lt: self._validate_comp,
LtEq: self._validate_comp,
Gt: self._validate_comp,
GtEq: self._validate_comp,
ParentScope: self._validate_parent_scope,
}
self._validate_expr(period_def.begin_expr)
self._validate_expr(period_def.end_expr)
def _validate_unary_expr(self, not_expr):
self._validate_expr(not_expr.expr)
def _validate_binary_expr(self, and_expr):
self._validate_expr(and_expr.lh_expr)
self._validate_expr(and_expr.rh_expr)
def _validate_parent_scope(self, scope):
if self._period_def.parent is None:
raise IllegalExpression('Cannot refer to parent context without '
'a named parent period')
if type(scope.child) is not BeginScope:
raise IllegalExpression('Must refer to the begin context in a '
'parent context')
self._validate_expr(scope.child)
def _validate_comp(self, comp_expr):
self._validate_expr(comp_expr.lh_expr)
self._validate_expr(comp_expr.rh_expr)
def _validate_expr(self, expr):
if type(expr) in self._validate_expr_cbs:
self._validate_expr_cbs[type(expr)](expr)
class _MatchContext:
def __init__(self, evt, begin_evt, parent_begin_evt):
self._evt = evt
self._begin_evt = begin_evt
self._parent_begin_evt = parent_begin_evt
@property
def evt(self):
return self._evt
@property
def begin_evt(self):
return self._begin_evt
@property
def parent_begin_evt(self):
return self._parent_begin_evt
_DYN_SCOPE_TO_BT_CTF_SCOPE = {
DynScope.TPH: bt.CTFScope.TRACE_PACKET_HEADER,
DynScope.SPC: bt.CTFScope.STREAM_PACKET_CONTEXT,
DynScope.SEH: bt.CTFScope.STREAM_EVENT_HEADER,
DynScope.SEC: bt.CTFScope.STREAM_EVENT_CONTEXT,
DynScope.EC: bt.CTFScope.EVENT_CONTEXT,
DynScope.EP: bt.CTFScope.EVENT_FIELDS,
}
def _resolve_event_expr(event, expr):
# event not found
if event is None:
return
# event name
if type(expr.child) is EventName:
return event.name
# default, automatic dynamic scope
dyn_scope = DynScope.AUTO
if type(expr.child) is DynamicScope:
# select specific dynamic scope
expr = expr.child
dyn_scope = expr.dyn_scope
if type(expr.child) is EventFieldName:
expr = expr.child
if dyn_scope == DynScope.AUTO:
# automatic dynamic scope
if expr.name in event:
return event[expr.name]
# event field not found
return
# specific dynamic scope
bt_ctf_scope = _DYN_SCOPE_TO_BT_CTF_SCOPE[dyn_scope]
return event.field_with_scope(expr.name, bt_ctf_scope)
    # unreachable: unhandled expression child type
    assert False
# This exquisite function takes an expression and resolves it to
# an actual value (a Python number or string), considering the
# current matching context.
def _resolve_expr(expr, match_context):
if type(expr) is ParentScope:
begin_scope = expr.child
event_scope = begin_scope.child
return _resolve_event_expr(match_context.parent_begin_evt, event_scope)
if type(expr) is BeginScope:
# event in the begin context
event_scope = expr.child
return _resolve_event_expr(match_context.begin_evt, event_scope)
if type(expr) is EventScope:
# current event
return _resolve_event_expr(match_context.evt, expr)
if type(expr) is Number:
return expr.value
if type(expr) is String:
return expr.value
    # unreachable: unhandled expression type
    assert False
class _Matcher:
def __init__(self, expr, match_context):
self._match_context = match_context
self._expr_matchers = {
LogicalAnd: self._and_expr_matches,
LogicalOr: self._or_expr_matches,
LogicalNot: self._not_expr_matches,
GlobEq: self._glob_eq_expr_matches,
Eq: partial(self._comp_expr_matches, lambda lh, rh: lh == rh),
Lt: partial(self._comp_expr_matches, lambda lh, rh: lh < rh),
LtEq: partial(self._comp_expr_matches, lambda lh, rh: lh <= rh),
Gt: partial(self._comp_expr_matches, lambda lh, rh: lh > rh),
GtEq: partial(self._comp_expr_matches, lambda lh, rh: lh >= rh),
}
self._matches = self._expr_matches(expr)
def _and_expr_matches(self, expr):
return (self._expr_matches(expr.lh_expr) and
self._expr_matches(expr.rh_expr))
def _or_expr_matches(self, expr):
return (self._expr_matches(expr.lh_expr) or
self._expr_matches(expr.rh_expr))
def _not_expr_matches(self, expr):
return not self._expr_matches(expr.expr)
def _glob_eq_expr_matches(self, expr):
def compfn(lh, rh):
return bool(expr.regex.match(lh))
return self._comp_expr_matches(compfn, expr)
def _comp_expr_matches(self, compfn, expr):
lh_value = _resolve_expr(expr.lh_expr, self._match_context)
rh_value = _resolve_expr(expr.rh_expr, self._match_context)
# make sure both sides are found
if lh_value is None or rh_value is None:
return False
# cast RHS to int if LHS is an int
if type(lh_value) is int and type(rh_value) is float:
rh_value = int(rh_value)
# compare types first
if type(lh_value) is not type(rh_value):
return False
# compare field to a literal value
return compfn(lh_value, rh_value)
def _expr_matches(self, expr):
return self._expr_matchers[type(expr)](expr)
@property
def matches(self):
return self._matches
def _expr_matches(expr, match_context):
return _Matcher(expr, match_context).matches
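The comparison logic in `_comp_expr_matches` has a subtle coercion rule worth isolating: when the left-hand side resolves to an `int` and the right-hand side to a `float`, the RHS is truncated before the type check, so a literal like `23.0` can match an integer event field. A minimal stand-alone sketch of that rule (the `comp_matches` helper is hypothetical, written only to show the behavior outside the matcher):

```python
def comp_matches(compfn, lh_value, rh_value):
    # both sides must have resolved to an actual value
    if lh_value is None or rh_value is None:
        return False
    # truncate a float RHS when the LHS is an int, mirroring
    # the cast in _comp_expr_matches
    if type(lh_value) is int and type(rh_value) is float:
        rh_value = int(rh_value)
    # differing types never match
    if type(lh_value) is not type(rh_value):
        return False
    return compfn(lh_value, rh_value)

eq = lambda lh, rh: lh == rh
print(comp_matches(eq, 23, 23.0))   # True: 23.0 truncated to 23
print(comp_matches(eq, 23, 23.9))   # True as well: truncation, not rounding
print(comp_matches(eq, '23', 23))   # False: type mismatch
```

Note that the truncation means `23.9` also matches an integer field holding `23`, which is a consequence of the cast rather than an explicit design goal stated here.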
def create_conjunction_from_exprs(exprs):
if len(exprs) == 0:
return
cur_expr = exprs[0]
for expr in exprs[1:]:
cur_expr = LogicalAnd(cur_expr, expr)
return cur_expr
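`create_conjunction_from_exprs` left-folds the expression list into a nested `LogicalAnd` tree. The same fold can be written with `functools.reduce`; the sketch below uses a stand-in `LogicalAnd` with a `__repr__` that makes the nesting visible (the real node class lives earlier in this module):

```python
from functools import reduce

class LogicalAnd:
    # stand-in for the module's LogicalAnd expression node
    def __init__(self, lh_expr, rh_expr):
        self.lh_expr = lh_expr
        self.rh_expr = rh_expr

    def __repr__(self):
        return '({} && {})'.format(self.lh_expr, self.rh_expr)

def create_conjunction_from_exprs(exprs):
    if not exprs:
        return None
    # left fold: ((e0 && e1) && e2) && ...
    return reduce(LogicalAnd, exprs)

print(create_conjunction_from_exprs(['a', 'b', 'c']))  # ((a && b) && c)
```

`create_disjunction_from_exprs` is the same fold with `LogicalOr` as the combining node.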
def create_disjunction_from_exprs(exprs):
if len(exprs) == 0:
return
cur_expr = exprs[0]
for expr in exprs[1:]:
cur_expr = LogicalOr(cur_expr, expr)
return cur_expr
@enum.unique
class PeriodEngineCallbackType(enum.Enum):
PERIOD_BEGIN = 1
PERIOD_END = 2
class Period:
def __init__(self, definition, parent, begin_evt, begin_captures):
begin_evt_copy = core_event.Event(begin_evt)
self._begin_evt = begin_evt_copy
self._end_evt = None
self._completed = False
self._definition = definition
self._parent = parent
self._children = set()
self._begin_captures = begin_captures
self._end_captures = {}
@property
def begin_evt(self):
return self._begin_evt
@property
def end_evt(self):
return self._end_evt
@end_evt.setter
def end_evt(self, evt):
self._end_evt = evt
@property
def definition(self):
return self._definition
@property
def parent(self):
return self._parent
@property
def children(self):
return self._children
@property
def completed(self):
return self._completed
@completed.setter
def completed(self, value):
self._completed = value
@property
def begin_captures(self):
return self._begin_captures
@property
def end_captures(self):
return self._end_captures
class PeriodEngine:
def __init__(self, registry, cbs):
self._registry = registry
self._cbs = cbs
self._root_periods = set()
def _cb_period_end(self, period):
self._cbs[PeriodEngineCallbackType.PERIOD_END](period)
def _cb_period_begin(self, period):
self._cbs[PeriodEngineCallbackType.PERIOD_BEGIN](period)
def _create_period(self, definition, parent, begin_evt, begin_captures):
return Period(definition, parent, begin_evt, begin_captures)
def _get_captures(self, captures_exprs, match_context):
captures = {}
for name, capture_expr in captures_exprs.items():
captures[name] = _resolve_expr(capture_expr, match_context)
return captures
def _process_event_add_periods(self, parent_period,
child_periods, child_period_defs, evt):
periods_to_add = set()
for child_period_def in child_period_defs:
match_context = self._create_begin_match_context(parent_period,
evt)
if _expr_matches(child_period_def.begin_expr, match_context):
# match! add period
captures = self._get_captures(
child_period_def.begin_captures_exprs,
match_context)
period = self._create_period(child_period_def,
parent_period, evt, captures)
periods_to_add.add(period)
# safe to add child periods now, outside the iteration
for period_to_add in periods_to_add:
self._cb_period_begin(period_to_add)
child_periods.add(period_to_add)
for child_period in child_periods:
self._process_event_add_periods(child_period,
child_period.children,
child_period.definition.children,
evt)
def _process_event_begin(self, evt):
defs = self._registry.root_period_defs
self._process_event_add_periods(None, self._root_periods, defs, evt)
def _create_begin_match_context(self, parent_period, evt):
parent_begin_evt = None
if parent_period is not None:
parent_begin_evt = parent_period.begin_evt
return _MatchContext(evt, evt, parent_begin_evt)
def _create_end_match_context(self, period, evt):
parent_begin_evt = None
if period.parent is not None:
parent_begin_evt = period.parent.begin_evt
return _MatchContext(evt, period.begin_evt, parent_begin_evt)
def _process_event_remove_period(self, child_periods, evt):
for child_period in child_periods:
self._process_event_remove_period(child_period.children, evt)
child_periods_to_remove = set()
for child_period in child_periods:
match_context = self._create_end_match_context(child_period, evt)
if _expr_matches(child_period.definition.end_expr, match_context):
# set period's end captures
end_captures_exprs = \
child_period.definition.end_captures_exprs
captures = self._get_captures(end_captures_exprs,
match_context)
child_period._end_captures = captures
# mark as to be removed
child_periods_to_remove.add(child_period)
# safe to remove child periods now, outside the iteration
for child_period_to_remove in child_periods_to_remove:
# set period's ending event and completed property
child_period_to_remove.end_evt = evt
child_period_to_remove.completed = True
# also remove its own remaining child periods
self._remove_periods(child_period_to_remove.children, evt)
# call end of period user callback (this period matched)
self._cb_period_end(child_period_to_remove)
# remove period from set
child_periods.remove(child_period_to_remove)
def _process_event_end(self, evt):
self._process_event_remove_period(self._root_periods, evt)
    def process_event(self, evt):
        # end matching periods first, then begin new ones, so a single
        # event may close one period and open another
        self._process_event_end(evt)
        self._process_event_begin(evt)
def _remove_periods(self, child_periods, evt):
for child_period in child_periods:
self._remove_periods(child_period.children, evt)
# safe to remove child periods now, outside the iteration
for child_period in child_periods:
# set period's ending event and completed property
child_period.end_evt = evt
child_period.completed = False
# call end of period user callback
self._cb_period_end(child_period)
child_periods.clear()
def remove_all_periods(self):
self._remove_periods(self._root_periods, None)
@property
def root_periods(self):
return self._root_periods
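`_remove_periods` recurses into every child set before ending the periods at the current level, so end callbacks always fire bottom-up (children before their parents). A stand-alone sketch of that traversal, using plain dicts in place of `Period` objects:

```python
def remove_periods(periods, on_end):
    # first recurse into every child set
    for period in periods:
        remove_periods(period['children'], on_end)
    # then end the periods at this level, outside the iteration
    for period in periods:
        on_end(period)
    periods.clear()

ended = []
tree = [{'name': 'parent', 'children': [{'name': 'child', 'children': []}]}]
remove_periods(tree, lambda p: ended.append(p['name']))
print(ended)  # ['child', 'parent']
```

The two-loop shape (recurse everything, then end everything) matches the real method's "safe to remove outside the iteration" pattern.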
# Generated by Django 2.0.7 on 2018-08-13 03:16
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('portal', '0018_auto_20180810_1801'),
]
operations = [
migrations.AddField(
model_name='templateinstance',
name='name1',
field=models.CharField(default='test', help_text='Enter template instance name', max_length=200),
),
migrations.AddField(
model_name='templateinstance',
name='name2',
field=models.CharField(default='test', help_text='Enter template instance name', max_length=200),
),
migrations.AlterField(
model_name='templateinstance',
name='name',
field=models.CharField(default='test', help_text='Enter template instance name', max_length=200),
),
]
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from ._enums import *
__all__ = [
'AliasRoutingStrategyArgs',
'BuildS3LocationArgs',
'FleetCertificateConfigurationArgs',
'FleetIpPermissionArgs',
'FleetLocationCapacityArgs',
'FleetLocationConfigurationArgs',
'FleetResourceCreationLimitPolicyArgs',
'FleetRuntimeConfigurationArgs',
'FleetServerProcessArgs',
'GameServerGroupAutoScalingPolicyArgs',
'GameServerGroupInstanceDefinitionArgs',
'GameServerGroupLaunchTemplateArgs',
'GameServerGroupTagArgs',
'GameServerGroupTargetTrackingConfigurationArgs',
'GameSessionQueueDestinationArgs',
'GameSessionQueueFilterConfigurationArgs',
'GameSessionQueuePlayerLatencyPolicyArgs',
'GameSessionQueuePriorityConfigurationArgs',
'GameSessionQueueTagArgs',
'MatchmakingConfigurationGamePropertyArgs',
'MatchmakingConfigurationTagArgs',
'MatchmakingRuleSetTagArgs',
'ScriptS3LocationArgs',
'ScriptTagArgs',
]
@pulumi.input_type
class AliasRoutingStrategyArgs:
def __init__(__self__, *,
type: pulumi.Input['AliasRoutingStrategyType'],
fleet_id: Optional[pulumi.Input[str]] = None,
message: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['AliasRoutingStrategyType'] type: Simple routing strategy. The alias resolves to one specific fleet. Use this type when routing to active fleets.
:param pulumi.Input[str] fleet_id: A unique identifier for a fleet that the alias points to. If you specify SIMPLE for the Type property, you must specify this property.
:param pulumi.Input[str] message: The message text to be used with a terminal routing strategy. If you specify TERMINAL for the Type property, you must specify this property.
"""
pulumi.set(__self__, "type", type)
if fleet_id is not None:
pulumi.set(__self__, "fleet_id", fleet_id)
if message is not None:
pulumi.set(__self__, "message", message)
@property
@pulumi.getter
def type(self) -> pulumi.Input['AliasRoutingStrategyType']:
"""
Simple routing strategy. The alias resolves to one specific fleet. Use this type when routing to active fleets.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input['AliasRoutingStrategyType']):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="fleetId")
def fleet_id(self) -> Optional[pulumi.Input[str]]:
"""
A unique identifier for a fleet that the alias points to. If you specify SIMPLE for the Type property, you must specify this property.
"""
return pulumi.get(self, "fleet_id")
@fleet_id.setter
def fleet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "fleet_id", value)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
"""
The message text to be used with a terminal routing strategy. If you specify TERMINAL for the Type property, you must specify this property.
"""
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@pulumi.input_type
class BuildS3LocationArgs:
def __init__(__self__, *,
bucket: pulumi.Input[str],
key: pulumi.Input[str],
role_arn: pulumi.Input[str],
object_version: Optional[pulumi.Input[str]] = None):
pulumi.set(__self__, "bucket", bucket)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "role_arn", role_arn)
if object_version is not None:
pulumi.set(__self__, "object_version", object_version)
@property
@pulumi.getter
def bucket(self) -> pulumi.Input[str]:
return pulumi.get(self, "bucket")
@bucket.setter
def bucket(self, value: pulumi.Input[str]):
pulumi.set(self, "bucket", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter(name="roleArn")
def role_arn(self) -> pulumi.Input[str]:
return pulumi.get(self, "role_arn")
@role_arn.setter
def role_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "role_arn", value)
@property
@pulumi.getter(name="objectVersion")
def object_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "object_version")
@object_version.setter
def object_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "object_version", value)
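Every generated input type above follows the same shape: required values are set unconditionally in `__init__`, optional ones only when not `None`, and each value is exposed through a getter/setter pair. A stdlib-only sketch of that shape (plain attributes stand in for Pulumi's input storage; the bucket/key/ARN values are placeholders, and this is not Pulumi's actual implementation):

```python
class S3Location:
    # simplified stand-in for BuildS3LocationArgs
    def __init__(self, bucket, key, role_arn, object_version=None):
        self._bucket = bucket
        self._key = key
        self._role_arn = role_arn
        # optional values stay None unless provided
        self._object_version = object_version

    @property
    def bucket(self):
        return self._bucket

    @bucket.setter
    def bucket(self, value):
        self._bucket = value

loc = S3Location('my-build-bucket', 'builds/server.zip', 'arn:aws:iam::111111111111:role/example')
loc.bucket = 'other-bucket'
print(loc.bucket)  # other-bucket
```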
@pulumi.input_type
class FleetCertificateConfigurationArgs:
def __init__(__self__, *,
certificate_type: pulumi.Input['FleetCertificateConfigurationCertificateType']):
"""
Information about the use of a TLS/SSL certificate for a fleet. TLS certificate generation is enabled at the fleet level, with one certificate generated for the fleet. When this feature is enabled, the certificate can be retrieved using the GameLift Server SDK call GetInstanceCertificate. All instances in a fleet share the same certificate.
"""
pulumi.set(__self__, "certificate_type", certificate_type)
@property
@pulumi.getter(name="certificateType")
def certificate_type(self) -> pulumi.Input['FleetCertificateConfigurationCertificateType']:
return pulumi.get(self, "certificate_type")
@certificate_type.setter
def certificate_type(self, value: pulumi.Input['FleetCertificateConfigurationCertificateType']):
pulumi.set(self, "certificate_type", value)
@pulumi.input_type
class FleetIpPermissionArgs:
def __init__(__self__, *,
from_port: pulumi.Input[int],
ip_range: pulumi.Input[str],
protocol: pulumi.Input['FleetIpPermissionProtocol'],
to_port: pulumi.Input[int]):
"""
A range of IP addresses and port settings that allow inbound traffic to connect to server processes on an Amazon GameLift hosting resource. New game sessions that are started on the fleet are assigned an IP address/port number combination, which must fall into the fleet's allowed ranges. For fleets created with a custom game server, the ranges reflect the server's game session assignments. For Realtime Servers fleets, Amazon GameLift automatically opens two port ranges, one for TCP messaging and one for UDP, for use by the Realtime servers.
:param pulumi.Input[int] from_port: A starting value for a range of allowed port numbers.
:param pulumi.Input[str] ip_range: A range of allowed IP addresses. This value must be expressed in CIDR notation. Example: "000.000.000.000/[subnet mask]" or optionally the shortened version "0.0.0.0/[subnet mask]".
:param pulumi.Input['FleetIpPermissionProtocol'] protocol: The network communication protocol used by the fleet.
:param pulumi.Input[int] to_port: An ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value must be higher than FromPort.
"""
pulumi.set(__self__, "from_port", from_port)
pulumi.set(__self__, "ip_range", ip_range)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "to_port", to_port)
@property
@pulumi.getter(name="fromPort")
def from_port(self) -> pulumi.Input[int]:
"""
A starting value for a range of allowed port numbers.
"""
return pulumi.get(self, "from_port")
@from_port.setter
def from_port(self, value: pulumi.Input[int]):
pulumi.set(self, "from_port", value)
@property
@pulumi.getter(name="ipRange")
def ip_range(self) -> pulumi.Input[str]:
"""
A range of allowed IP addresses. This value must be expressed in CIDR notation. Example: "000.000.000.000/[subnet mask]" or optionally the shortened version "0.0.0.0/[subnet mask]".
"""
return pulumi.get(self, "ip_range")
@ip_range.setter
def ip_range(self, value: pulumi.Input[str]):
pulumi.set(self, "ip_range", value)
@property
@pulumi.getter
def protocol(self) -> pulumi.Input['FleetIpPermissionProtocol']:
"""
The network communication protocol used by the fleet.
"""
return pulumi.get(self, "protocol")
@protocol.setter
def protocol(self, value: pulumi.Input['FleetIpPermissionProtocol']):
pulumi.set(self, "protocol", value)
@property
@pulumi.getter(name="toPort")
def to_port(self) -> pulumi.Input[int]:
"""
An ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value must be higher than FromPort.
"""
return pulumi.get(self, "to_port")
@to_port.setter
def to_port(self, value: pulumi.Input[int]):
pulumi.set(self, "to_port", value)
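The docstrings above impose two invariants on an IP permission: `IpRange` must be expressed in CIDR notation, and the port range is end-inclusive with `ToPort` no lower than `FromPort`. A hypothetical validation helper using the stdlib `ipaddress` module (not part of the GameLift API):

```python
import ipaddress

def validate_ip_permission(from_port, to_port, ip_range):
    # ports are end-inclusive; ToPort must not be lower than FromPort
    if not (1 <= from_port <= to_port <= 65535):
        return False
    # IpRange must be valid CIDR notation, e.g. "0.0.0.0/0"
    try:
        ipaddress.ip_network(ip_range)
    except ValueError:
        return False
    return True

print(validate_ip_permission(33430, 33440, '10.0.0.0/24'))  # True
print(validate_ip_permission(33440, 33430, '10.0.0.0/24'))  # False: inverted range
```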
@pulumi.input_type
class FleetLocationCapacityArgs:
def __init__(__self__, *,
desired_ec2_instances: pulumi.Input[int],
max_size: pulumi.Input[int],
min_size: pulumi.Input[int]):
"""
Current resource capacity settings in a specified fleet or location. The location value might refer to a fleet's remote location or its home Region.
:param pulumi.Input[int] desired_ec2_instances: The number of EC2 instances you want to maintain in the specified fleet location. This value must fall between the minimum and maximum size limits.
:param pulumi.Input[int] max_size: The maximum value that is allowed for the fleet's instance count for a location. When creating a new fleet, GameLift automatically sets this value to "1". Once the fleet is active, you can change this value.
:param pulumi.Input[int] min_size: The minimum value allowed for the fleet's instance count for a location. When creating a new fleet, GameLift automatically sets this value to "0". After the fleet is active, you can change this value.
"""
pulumi.set(__self__, "desired_ec2_instances", desired_ec2_instances)
pulumi.set(__self__, "max_size", max_size)
pulumi.set(__self__, "min_size", min_size)
@property
@pulumi.getter(name="desiredEC2Instances")
def desired_ec2_instances(self) -> pulumi.Input[int]:
"""
The number of EC2 instances you want to maintain in the specified fleet location. This value must fall between the minimum and maximum size limits.
"""
return pulumi.get(self, "desired_ec2_instances")
@desired_ec2_instances.setter
def desired_ec2_instances(self, value: pulumi.Input[int]):
pulumi.set(self, "desired_ec2_instances", value)
@property
@pulumi.getter(name="maxSize")
def max_size(self) -> pulumi.Input[int]:
"""
The maximum value that is allowed for the fleet's instance count for a location. When creating a new fleet, GameLift automatically sets this value to "1". Once the fleet is active, you can change this value.
"""
return pulumi.get(self, "max_size")
@max_size.setter
def max_size(self, value: pulumi.Input[int]):
pulumi.set(self, "max_size", value)
@property
@pulumi.getter(name="minSize")
def min_size(self) -> pulumi.Input[int]:
"""
The minimum value allowed for the fleet's instance count for a location. When creating a new fleet, GameLift automatically sets this value to "0". After the fleet is active, you can change this value.
"""
return pulumi.get(self, "min_size")
@min_size.setter
def min_size(self, value: pulumi.Input[int]):
pulumi.set(self, "min_size", value)
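The three capacity values are interdependent: per the docstrings, the desired instance count must fall between the minimum and maximum limits. A trivial check capturing that invariant (hypothetical helper, not part of the SDK):

```python
def capacity_is_valid(desired_ec2_instances, min_size, max_size):
    # the desired count must fall between the min and max size limits
    return min_size <= desired_ec2_instances <= max_size

# GameLift's defaults for a new fleet location: min 0, max 1
print(capacity_is_valid(1, 0, 1))  # True
print(capacity_is_valid(2, 0, 1))  # False
```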
@pulumi.input_type
class FleetLocationConfigurationArgs:
def __init__(__self__, *,
location: pulumi.Input[str],
location_capacity: Optional[pulumi.Input['FleetLocationCapacityArgs']] = None):
"""
A remote location where a multi-location fleet can deploy EC2 instances for game hosting.
"""
pulumi.set(__self__, "location", location)
if location_capacity is not None:
pulumi.set(__self__, "location_capacity", location_capacity)
@property
@pulumi.getter
def location(self) -> pulumi.Input[str]:
return pulumi.get(self, "location")
@location.setter
def location(self, value: pulumi.Input[str]):
pulumi.set(self, "location", value)
@property
@pulumi.getter(name="locationCapacity")
def location_capacity(self) -> Optional[pulumi.Input['FleetLocationCapacityArgs']]:
return pulumi.get(self, "location_capacity")
@location_capacity.setter
def location_capacity(self, value: Optional[pulumi.Input['FleetLocationCapacityArgs']]):
pulumi.set(self, "location_capacity", value)
@pulumi.input_type
class FleetResourceCreationLimitPolicyArgs:
def __init__(__self__, *,
new_game_sessions_per_creator: Optional[pulumi.Input[int]] = None,
policy_period_in_minutes: Optional[pulumi.Input[int]] = None):
"""
A policy that limits the number of game sessions a player can create on the same fleet. This optional policy gives game owners control over how players can consume available game server resources. A resource creation policy makes the following statement: "An individual player can create a maximum number of new game sessions within a specified time period".
The policy is evaluated when a player tries to create a new game session. For example, assume you have a policy of 10 new game sessions and a time period of 60 minutes. On receiving a CreateGameSession request, Amazon GameLift checks that the player (identified by CreatorId) has created fewer than 10 game sessions in the past 60 minutes.
:param pulumi.Input[int] new_game_sessions_per_creator: The maximum number of game sessions that an individual can create during the policy period.
:param pulumi.Input[int] policy_period_in_minutes: The time span used in evaluating the resource creation limit policy.
"""
if new_game_sessions_per_creator is not None:
pulumi.set(__self__, "new_game_sessions_per_creator", new_game_sessions_per_creator)
if policy_period_in_minutes is not None:
pulumi.set(__self__, "policy_period_in_minutes", policy_period_in_minutes)
@property
@pulumi.getter(name="newGameSessionsPerCreator")
def new_game_sessions_per_creator(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of game sessions that an individual can create during the policy period.
"""
return pulumi.get(self, "new_game_sessions_per_creator")
@new_game_sessions_per_creator.setter
def new_game_sessions_per_creator(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "new_game_sessions_per_creator", value)
@property
@pulumi.getter(name="policyPeriodInMinutes")
def policy_period_in_minutes(self) -> Optional[pulumi.Input[int]]:
"""
The time span used in evaluating the resource creation limit policy.
"""
return pulumi.get(self, "policy_period_in_minutes")
@policy_period_in_minutes.setter
def policy_period_in_minutes(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "policy_period_in_minutes", value)
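The docstring's example (at most 10 new sessions per creator within 60 minutes) describes a sliding-window rate limit. A stand-alone sketch of how such a policy could be evaluated (hypothetical helper, not part of the GameLift API):

```python
from collections import defaultdict, deque
import time

class CreationLimitPolicy:
    """Sliding-window sketch of the policy described above: at most
    `limit` new game sessions per creator within `period_minutes`."""

    def __init__(self, limit, period_minutes):
        self._limit = limit
        self._period = period_minutes * 60
        self._history = defaultdict(deque)

    def allow(self, creator_id, now=None):
        now = time.time() if now is None else now
        creations = self._history[creator_id]
        # drop creations that fell out of the evaluation window
        while creations and now - creations[0] >= self._period:
            creations.popleft()
        if len(creations) >= self._limit:
            return False
        creations.append(now)
        return True

policy = CreationLimitPolicy(limit=2, period_minutes=60)
print(policy.allow('player-1', now=0))     # True
print(policy.allow('player-1', now=10))    # True
print(policy.allow('player-1', now=20))    # False: limit reached
print(policy.allow('player-1', now=3700))  # True: window has moved on
```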
@pulumi.input_type
class FleetRuntimeConfigurationArgs:
def __init__(__self__, *,
game_session_activation_timeout_seconds: Optional[pulumi.Input[int]] = None,
max_concurrent_game_session_activations: Optional[pulumi.Input[int]] = None,
server_processes: Optional[pulumi.Input[Sequence[pulumi.Input['FleetServerProcessArgs']]]] = None):
"""
        A collection of server process configurations that describe the processes to run on each instance in a fleet. All fleets must have a runtime configuration. Each instance in the fleet maintains server processes as specified in the runtime configuration, launching new ones as existing processes end. Each instance regularly checks for an updated runtime configuration and makes adjustments as called for.
The runtime configuration enables the instances in a fleet to run multiple processes simultaneously. Potential scenarios are as follows: (1) Run multiple processes of a single game server executable to maximize usage of your hosting resources. (2) Run one or more processes of different executables, such as your game server and a metrics tracking program. (3) Run multiple processes of a single game server but with different launch parameters, for example to run one process on each instance in debug mode.
An Amazon GameLift instance is limited to 50 processes running simultaneously. A runtime configuration must specify fewer than this limit. To calculate the total number of processes specified in a runtime configuration, add the values of the ConcurrentExecutions parameter for each ServerProcess object in the runtime configuration.
:param pulumi.Input[int] game_session_activation_timeout_seconds: The maximum amount of time (in seconds) that a game session can remain in status ACTIVATING. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED.
:param pulumi.Input[int] max_concurrent_game_session_activations: The maximum number of game sessions with status ACTIVATING to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.
:param pulumi.Input[Sequence[pulumi.Input['FleetServerProcessArgs']]] server_processes: A collection of server process configurations that describe which server processes to run on each instance in a fleet.
"""
if game_session_activation_timeout_seconds is not None:
pulumi.set(__self__, "game_session_activation_timeout_seconds", game_session_activation_timeout_seconds)
if max_concurrent_game_session_activations is not None:
pulumi.set(__self__, "max_concurrent_game_session_activations", max_concurrent_game_session_activations)
if server_processes is not None:
pulumi.set(__self__, "server_processes", server_processes)
@property
@pulumi.getter(name="gameSessionActivationTimeoutSeconds")
def game_session_activation_timeout_seconds(self) -> Optional[pulumi.Input[int]]:
"""
The maximum amount of time (in seconds) that a game session can remain in status ACTIVATING. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED.
"""
return pulumi.get(self, "game_session_activation_timeout_seconds")
@game_session_activation_timeout_seconds.setter
def game_session_activation_timeout_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "game_session_activation_timeout_seconds", value)
@property
@pulumi.getter(name="maxConcurrentGameSessionActivations")
def max_concurrent_game_session_activations(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of game sessions with status ACTIVATING to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.
"""
return pulumi.get(self, "max_concurrent_game_session_activations")
@max_concurrent_game_session_activations.setter
def max_concurrent_game_session_activations(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_concurrent_game_session_activations", value)
@property
@pulumi.getter(name="serverProcesses")
def server_processes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['FleetServerProcessArgs']]]]:
"""
A collection of server process configurations that describe which server processes to run on each instance in a fleet.
"""
return pulumi.get(self, "server_processes")
@server_processes.setter
def server_processes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['FleetServerProcessArgs']]]]):
pulumi.set(self, "server_processes", value)
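The per-instance limit described in the docstring (50 concurrently running processes) is checked by summing `ConcurrentExecutions` across the server processes. A small sketch of that calculation, with plain dicts standing in for `FleetServerProcessArgs`:

```python
MAX_PROCESSES_PER_INSTANCE = 50  # GameLift's per-instance limit

def total_concurrent_processes(server_processes):
    # add up ConcurrentExecutions across every ServerProcess entry
    return sum(sp['concurrent_executions'] for sp in server_processes)

server_processes = [
    {'launch_path': '/local/game/MyGame/server.exe', 'concurrent_executions': 10},
    {'launch_path': '/local/game/MyRealtimeScript.js', 'concurrent_executions': 1},
]
total = total_concurrent_processes(server_processes)
print(total, total < MAX_PROCESSES_PER_INSTANCE)  # 11 True
```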
@pulumi.input_type
class FleetServerProcessArgs:
def __init__(__self__, *,
concurrent_executions: pulumi.Input[int],
launch_path: pulumi.Input[str],
parameters: Optional[pulumi.Input[str]] = None):
"""
A set of instructions for launching server processes on each instance in a fleet. Each instruction set identifies the location of the server executable, optional launch parameters, and the number of server processes with this configuration to maintain concurrently on the instance. Server process configurations make up a fleet's RuntimeConfiguration.
:param pulumi.Input[int] concurrent_executions: The number of server processes that use this configuration to run concurrently on an instance.
:param pulumi.Input[str] launch_path: The location of the server executable in a custom game build or the name of the Realtime script file that contains the Init() function. Game builds and Realtime scripts are installed on instances at the root:
Windows (for custom game builds only): C:\game. Example: "C:\game\MyGame\server.exe"
Linux: /local/game. Examples: "/local/game/MyGame/server.exe" or "/local/game/MyRealtimeScript.js"
:param pulumi.Input[str] parameters: An optional list of parameters to pass to the server executable or Realtime script on launch.
"""
pulumi.set(__self__, "concurrent_executions", concurrent_executions)
pulumi.set(__self__, "launch_path", launch_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="concurrentExecutions")
def concurrent_executions(self) -> pulumi.Input[int]:
"""
The number of server processes that use this configuration to run concurrently on an instance.
"""
return pulumi.get(self, "concurrent_executions")
@concurrent_executions.setter
def concurrent_executions(self, value: pulumi.Input[int]):
pulumi.set(self, "concurrent_executions", value)
@property
@pulumi.getter(name="launchPath")
def launch_path(self) -> pulumi.Input[str]:
"""
The location of the server executable in a custom game build or the name of the Realtime script file that contains the Init() function. Game builds and Realtime scripts are installed on instances at the root:
Windows (for custom game builds only): C:\game. Example: "C:\game\MyGame\server.exe"
Linux: /local/game. Examples: "/local/game/MyGame/server.exe" or "/local/game/MyRealtimeScript.js"
"""
return pulumi.get(self, "launch_path")
@launch_path.setter
def launch_path(self, value: pulumi.Input[str]):
pulumi.set(self, "launch_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[str]]:
"""
An optional list of parameters to pass to the server executable or Realtime script on launch.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "parameters", value)
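Every property in these generated classes funnels through `pulumi.get`/`pulumi.set`. A minimal Pulumi-free stand-in (not the real SDK, which stores values for lazy output resolution) sketching how that getter/setter indirection behaves:

```python
# Hypothetical stand-in for pulumi.get/pulumi.set; the real SDK keys
# values by name on the object so outputs can be resolved later.
_store = {}

def fake_set(obj, name, value):
    _store[(id(obj), name)] = value

def fake_get(obj, name):
    # Absent keys simply return None, which is why optional fields
    # above are only set when they are not None.
    return _store.get((id(obj), name))

class ServerProcessArgsSketch:
    def __init__(self, concurrent_executions, launch_path, parameters=None):
        fake_set(self, "concurrent_executions", concurrent_executions)
        fake_set(self, "launch_path", launch_path)
        if parameters is not None:
            fake_set(self, "parameters", parameters)

    @property
    def launch_path(self):
        return fake_get(self, "launch_path")

sp = ServerProcessArgsSketch(1, "/local/game/MyGame/server.exe")
```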
@pulumi.input_type
class GameServerGroupAutoScalingPolicyArgs:
def __init__(__self__, *,
target_tracking_configuration: pulumi.Input['GameServerGroupTargetTrackingConfigurationArgs'],
estimated_instance_warmup: Optional[pulumi.Input[float]] = None):
"""
Configuration settings to define a scaling policy for the Auto Scaling group that is optimized for game hosting
"""
pulumi.set(__self__, "target_tracking_configuration", target_tracking_configuration)
if estimated_instance_warmup is not None:
pulumi.set(__self__, "estimated_instance_warmup", estimated_instance_warmup)
@property
@pulumi.getter(name="targetTrackingConfiguration")
def target_tracking_configuration(self) -> pulumi.Input['GameServerGroupTargetTrackingConfigurationArgs']:
return pulumi.get(self, "target_tracking_configuration")
@target_tracking_configuration.setter
def target_tracking_configuration(self, value: pulumi.Input['GameServerGroupTargetTrackingConfigurationArgs']):
pulumi.set(self, "target_tracking_configuration", value)
@property
@pulumi.getter(name="estimatedInstanceWarmup")
def estimated_instance_warmup(self) -> Optional[pulumi.Input[float]]:
return pulumi.get(self, "estimated_instance_warmup")
@estimated_instance_warmup.setter
def estimated_instance_warmup(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "estimated_instance_warmup", value)
@pulumi.input_type
class GameServerGroupInstanceDefinitionArgs:
def __init__(__self__, *,
instance_type: pulumi.Input[str],
weighted_capacity: Optional[pulumi.Input[str]] = None):
"""
An allowed instance type for your game server group.
"""
pulumi.set(__self__, "instance_type", instance_type)
if weighted_capacity is not None:
pulumi.set(__self__, "weighted_capacity", weighted_capacity)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> pulumi.Input[str]:
return pulumi.get(self, "instance_type")
@instance_type.setter
def instance_type(self, value: pulumi.Input[str]):
pulumi.set(self, "instance_type", value)
@property
@pulumi.getter(name="weightedCapacity")
def weighted_capacity(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "weighted_capacity")
@weighted_capacity.setter
def weighted_capacity(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "weighted_capacity", value)
@pulumi.input_type
class GameServerGroupLaunchTemplateArgs:
def __init__(__self__, *,
launch_template_id: Optional[pulumi.Input[str]] = None,
launch_template_name: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
The EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group.
"""
if launch_template_id is not None:
pulumi.set(__self__, "launch_template_id", launch_template_id)
if launch_template_name is not None:
pulumi.set(__self__, "launch_template_name", launch_template_name)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="launchTemplateId")
def launch_template_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "launch_template_id")
@launch_template_id.setter
def launch_template_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "launch_template_id", value)
@property
@pulumi.getter(name="launchTemplateName")
def launch_template_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "launch_template_name")
@launch_template_name.setter
def launch_template_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "launch_template_name", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class GameServerGroupTagArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] key: The key for a developer-defined key:value pair for tagging an AWS resource.
:param pulumi.Input[str] value: The value for a developer-defined key:value pair for tagging an AWS resource.
"""
if key is not None:
pulumi.set(__self__, "key", key)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
The key for a developer-defined key:value pair for tagging an AWS resource.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
The value for a developer-defined key:value pair for tagging an AWS resource.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@pulumi.input_type
class GameServerGroupTargetTrackingConfigurationArgs:
def __init__(__self__, *,
target_value: pulumi.Input[float]):
"""
Settings for a target-based scaling policy applied to Auto Scaling group.
"""
pulumi.set(__self__, "target_value", target_value)
@property
@pulumi.getter(name="targetValue")
def target_value(self) -> pulumi.Input[float]:
return pulumi.get(self, "target_value")
@target_value.setter
def target_value(self, value: pulumi.Input[float]):
pulumi.set(self, "target_value", value)
@pulumi.input_type
class GameSessionQueueDestinationArgs:
def __init__(__self__, *,
destination_arn: Optional[pulumi.Input[str]] = None):
if destination_arn is not None:
pulumi.set(__self__, "destination_arn", destination_arn)
@property
@pulumi.getter(name="destinationArn")
def destination_arn(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "destination_arn")
@destination_arn.setter
def destination_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_arn", value)
@pulumi.input_type
class GameSessionQueueFilterConfigurationArgs:
def __init__(__self__, *,
allowed_locations: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
if allowed_locations is not None:
pulumi.set(__self__, "allowed_locations", allowed_locations)
@property
@pulumi.getter(name="allowedLocations")
def allowed_locations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "allowed_locations")
@allowed_locations.setter
def allowed_locations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "allowed_locations", value)
@pulumi.input_type
class GameSessionQueuePlayerLatencyPolicyArgs:
def __init__(__self__, *,
maximum_individual_player_latency_milliseconds: Optional[pulumi.Input[int]] = None,
policy_duration_seconds: Optional[pulumi.Input[int]] = None):
if maximum_individual_player_latency_milliseconds is not None:
pulumi.set(__self__, "maximum_individual_player_latency_milliseconds", maximum_individual_player_latency_milliseconds)
if policy_duration_seconds is not None:
pulumi.set(__self__, "policy_duration_seconds", policy_duration_seconds)
@property
@pulumi.getter(name="maximumIndividualPlayerLatencyMilliseconds")
def maximum_individual_player_latency_milliseconds(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "maximum_individual_player_latency_milliseconds")
@maximum_individual_player_latency_milliseconds.setter
def maximum_individual_player_latency_milliseconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "maximum_individual_player_latency_milliseconds", value)
@property
@pulumi.getter(name="policyDurationSeconds")
def policy_duration_seconds(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "policy_duration_seconds")
@policy_duration_seconds.setter
def policy_duration_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "policy_duration_seconds", value)
@pulumi.input_type
class GameSessionQueuePriorityConfigurationArgs:
def __init__(__self__, *,
location_order: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
priority_order: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
if location_order is not None:
pulumi.set(__self__, "location_order", location_order)
if priority_order is not None:
pulumi.set(__self__, "priority_order", priority_order)
@property
@pulumi.getter(name="locationOrder")
def location_order(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "location_order")
@location_order.setter
def location_order(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "location_order", value)
@property
@pulumi.getter(name="priorityOrder")
def priority_order(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "priority_order")
@priority_order.setter
def priority_order(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "priority_order", value)
@pulumi.input_type
class GameSessionQueueTagArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class MatchmakingConfigurationGamePropertyArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class MatchmakingConfigurationTagArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class MatchmakingRuleSetTagArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class ScriptS3LocationArgs:
def __init__(__self__, *,
bucket: pulumi.Input[str],
key: pulumi.Input[str],
role_arn: pulumi.Input[str],
object_version: Optional[pulumi.Input[str]] = None):
pulumi.set(__self__, "bucket", bucket)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "role_arn", role_arn)
if object_version is not None:
pulumi.set(__self__, "object_version", object_version)
@property
@pulumi.getter
def bucket(self) -> pulumi.Input[str]:
return pulumi.get(self, "bucket")
@bucket.setter
def bucket(self, value: pulumi.Input[str]):
pulumi.set(self, "bucket", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter(name="roleArn")
def role_arn(self) -> pulumi.Input[str]:
return pulumi.get(self, "role_arn")
@role_arn.setter
def role_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "role_arn", value)
@property
@pulumi.getter(name="objectVersion")
def object_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "object_version")
@object_version.setter
def object_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "object_version", value)
@pulumi.input_type
class ScriptTagArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
# --- ptp/tools/robots/parser.py | owtf/ptp | BSD-3-Clause ---
"""
:synopsis: Specialized :class:`ptp.libptp.parser.AbstractParser` classes for the robots.txt files.
.. moduleauthor:: Tao Sauvage
"""
from ptp.libptp.constants import UNKNOWN
from ptp.libptp.parser import LineParser
from ptp.tools.robots.signatures import SIGNATURES
class RobotsParser(LineParser):
"""Robots specialized parser."""
__tool__ = 'robots'
__format__ = 'txt'
def __init__(self, pathname, filename='*robots.txt', light=False, first=True):
LineParser.__init__(self, pathname, filename, light=light, first=first)
@classmethod
def is_mine(cls, pathname, filename='*robots.txt', light=False, first=True):
"""Check if it can handle the report file.
:param str pathname: Path to the report directory.
:param str filename: Regex matching the report file.
:param bool light: `True` to only parse the ranking of the findings from the report.
:param bool first: Only process first file (``True``) or each file that matched (``False``).
:raises IOError: when the report file cannot be found.
:raises OSError: when the report file cannot be found.
:return: `True` if it supports the report, `False` otherwise.
:rtype: :class:`bool`
"""
stream = cls.handle_file(pathname, filename, first=first)
if stream and stream[0].startswith(('User-agent:', 'Disallow:', 'Allow:')): # FIXME: Weak check here...
return True
return False
def parse_metadata(self):
"""Parse the metadata of the report.
:return: The metadata of the report.
:rtype: dict
"""
return {} # No metadata to retrieve in robots.txt
def parse_report(self):
"""Parse the results of a robots.txt report.
:return: List of dicts where each one represents a vuln.
:rtype: :class:`list`
"""
disallowed_entries = [line[len('Disallow:'):].strip() for line in self.stream if line.startswith('Disallow')]  # str.lstrip('Disallow: ') would strip a character set, not the prefix
if not disallowed_entries:
return []
self.vulns = [
{'ranking': SIGNATURES.get(entry, UNKNOWN)}
for entry in disallowed_entries]
return self.vulns
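A subtle pitfall when stripping the `Disallow:` prefix: `str.lstrip` treats its argument as a character set, not a literal prefix, so it can eat leading characters of the path itself. A short demonstration:

```python
line = 'Disallow: awstats'

# lstrip removes any of the characters {'D','i','s','a','l','o','w',':',' '}
# from the left, so it also consumes the leading 'aws' of the path.
mangled = line.lstrip('Disallow: ')

# Slicing off the literal prefix (or str.removeprefix on Python 3.9+)
# keeps the entry intact.
clean = line[len('Disallow:'):].strip()
```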
# --- fast_transformers/feature_maps/base.py | SamuelCahyawijaya/fast-transformers | MIT ---
#
# Copyright (c) 2020 Idiap Research Institute, http://www.idiap.ch/
# Written by Angelos Katharopoulos <angelos.katharopoulos@idiap.ch>
#
"""Create the feature map interface and some commonly used feature maps.
All attention implementations that expect a feature map shall receive a factory
function that returns a feature map instance when called with the query
dimensions.
"""
from functools import partial
import torch
from torch.nn import Module
class FeatureMap(Module):
"""Define the FeatureMap interface."""
def __init__(self, query_dims):
super().__init__()
self.query_dims = query_dims
def new_feature_map(self, device):
"""Create a new instance of this feature map. In particular, if it is a
random feature map, sample new parameters."""
raise NotImplementedError()
def forward_queries(self, x):
"""Encode the queries `x` using this feature map."""
return self(x)
def forward_keys(self, x):
"""Encode the keys `x` using this feature map."""
return self(x)
def forward(self, x):
"""Encode x using this feature map. For symmetric feature maps it
suffices to define this function, but for asymmetric feature maps one
needs to define the `forward_queries` and `forward_keys` functions."""
raise NotImplementedError()
@classmethod
def factory(cls, *args, **kwargs):
"""Return a function that when called with the query dimensions returns
an instance of this feature map.
It is inherited by the subclasses so it is available in all feature
maps.
"""
def inner(query_dims):
return cls(query_dims, *args, **kwargs)
return inner
class ActivationFunctionFeatureMap(FeatureMap):
"""Define a feature map that is simply an element-wise activation
function."""
def __init__(self, query_dims, activation_function):
super().__init__(query_dims)
self.activation_function = activation_function
def new_feature_map(self, device):
return
def forward(self, x):
return self.activation_function(x)
elu_feature_map = ActivationFunctionFeatureMap.factory(
lambda x: torch.nn.functional.elu(x) + 1
)
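The `factory` classmethod exists so an attention layer can be handed a pre-configured recipe and instantiate the feature map only once the query dimensions are known. A torch-free sketch of that indirection (the class names here are illustrative, not part of the library):

```python
class MapBase:
    def __init__(self, query_dims):
        self.query_dims = query_dims

    @classmethod
    def factory(cls, *args, **kwargs):
        # Returns a callable: query_dims -> configured instance.
        def inner(query_dims):
            return cls(query_dims, *args, **kwargs)
        return inner

class ScaledMap(MapBase):
    def __init__(self, query_dims, scale):
        super().__init__(query_dims)
        self.scale = scale

    def __call__(self, x):
        return self.scale * x

make_map = ScaledMap.factory(scale=2.0)   # configured, dims still unknown
fm = make_map(64)                         # the consumer supplies the dims
```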
# --- PropertyEditor/CustomProperties.py | wizath/pyqt-propertyeditor | MIT ---
from PyQt4 import QtCore, QtGui
from Property import Property
# TODO add metaclass with auto-property registering for easier use
# TODO format with PEP8
class ListProperty(Property):
def __init__(self, name, propertyList, parent = None):
super(ListProperty, self).__init__(name, None, parent)
self._list = []
for i, item in enumerate(propertyList):
self._list.append(Property(str(i), item, self))
class ColorProperty(Property):
def __init__(self, name, property, parent = None):
super(ColorProperty, self).__init__(name, property, parent)
def createEditor(self, parent, option, index):
editor = QtGui.QComboBox(parent)
editor.currentIndexChanged.connect(editor.clearFocus)
colorNames = QtGui.QColor().colorNames()
for i in range(len(colorNames)):
color = QtGui.QColor(colorNames[i])
editor.insertItem(i, colorNames[i])
editor.setItemData(i, color, QtCore.Qt.DecorationRole)
return editor
def setEditorData(self, editor, data):
color = data.internalPointer().property()
editor.setCurrentIndex(editor.findData(color, QtCore.Qt.DecorationRole))
if editor.currentIndex() == -1:
editor.addItem(color.name())
editor.setItemData(editor.count() - 1, color, QtCore.Qt.DecorationRole)
editor.setCurrentIndex(editor.count() - 1)
def setModelData(self, editor, model, index):
item = index.internalPointer()
color = editor.itemData(editor.currentIndex(), QtCore.Qt.DecorationRole).toPyObject()
item.setProperty(color)
class DictProperty(Property):
def __init__(self, name, propertyDict, parent = None):
super(DictProperty, self).__init__(name, None, parent)
for i, j in propertyDict.items():
parent = Property(str(i), j, self) | 33.732143 | 94 | 0.665431 | 207 | 1,889 | 5.951691 | 0.352657 | 0.025974 | 0.071429 | 0.046266 | 0.091721 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003401 | 0.22181 | 1,889 | 56 | 95 | 33.732143 | 0.834014 | 0.045527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
49336560063ddc88f7d0dead10de2084bc5866e8 | 1,041 | py | Python | pyshrt_tiny.py | elasyaf/pyshort-cli | 0cb7e8f10d441c303d0d36b05f8d0e86ec4b5774 | [
"Unlicense"
] | null | null | null | pyshrt_tiny.py | elasyaf/pyshort-cli | 0cb7e8f10d441c303d0d36b05f8d0e86ec4b5774 | [
"Unlicense"
] | null | null | null | pyshrt_tiny.py | elasyaf/pyshort-cli | 0cb7e8f10d441c303d0d36b05f8d0e86ec4b5774 | [
"Unlicense"
] | null | null | null | #!/usr/bin/python
# Licence
# DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
# Version 2, December 2004
# Everyone is permitted to copy and distribute verbatim or modified
# copies of this license document, and changing it is allowed as long
# as the name is changed.
#
# DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
# 0. You just DO WHAT THE FUCK YOU WANT TO.
import contextlib
import sys
try:
from urllib.parse import urlencode
except ImportError:
from urllib import urlencode
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
def main():
for pendekin in map(pendek, sys.argv[1:]):
print("Short URL =", pendekin)
def pendek(url):
short_engine = ('http://tinyurl.com/api-create.php?' +
urlencode({'url':url}))
with contextlib.closing(urlopen(short_engine)) as response:
return response.read().decode('utf-8')
if __name__ == '__main__':
main() | 26.025 | 70 | 0.692603 | 145 | 1,041 | 4.903448 | 0.586207 | 0.025316 | 0.037975 | 0.054852 | 0.129395 | 0.129395 | 0.129395 | 0.098453 | 0.098453 | 0 | 0 | 0.011097 | 0.220941 | 1,041 | 40 | 71 | 26.025 | 0.865598 | 0.428434 | 0 | 0.2 | 0 | 0 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.4 | null | null | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
494523c59f1754d55e567211d1d1a74785c21bde | 223 | py | Python | leda/urls.py | banbanchs/leda | 26bf6394aff813bc3f4da30f46bcc7bf597cca3a | [
"MIT"
] | null | null | null | leda/urls.py | banbanchs/leda | 26bf6394aff813bc3f4da30f46bcc7bf597cca3a | [
"MIT"
] | null | null | null | leda/urls.py | banbanchs/leda | 26bf6394aff813bc3f4da30f46bcc7bf597cca3a | [
"MIT"
] | null | null | null | # coding=utf-8
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^$', include('home.urls', namespace='home')),
url(r'^api/', include('api.urls', namespace='api')),
]
# --- Python Calc basic 2.py | Prithwis-2023/PyCalc--The-Python-Calculator | MIT ---
def ADD(A,B):
return A+B
def SUBSTRACT(A,B):
return A-B
def MULTIPLY(A,B):
return A*B
def QUOTIENT_OF_DIVISION(A,B):
return A//B
def REMAINDER_OF_DIVISION(A,B):
return A%B
def AVERAGE(A,B):
return (A+B)/2
def N_ROOT(A,B):
return (A)**(1/B)
def POWER(A,B):
return (A)**(B)
def COMPOSE(f,g):
return lambda x : f(g(x))
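COMPOSE returns a new function that applies g first and then f; it is defined above but never wired into the menu, so a quick standalone check of its behaviour:

```python
def COMPOSE(f, g):
    return lambda x: f(g(x))

# f(g(3)) = (2*3) + 1
double_then_inc = COMPOSE(lambda x: x + 1, lambda x: 2 * x)
```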
print("WELCOME TO PyCalc")
print("HERE YOU CAN DO BASIC ARITHMETIC OPERATIONS AND MANY MORE!")
print("REMEMBER THE FOLLOWING KEYWORDS:")
print("ADD(A,B): FOR ADDING A AND B.")
print("SUBSTRACT(A,B): FOR SUBSTRACTING B FROM A.")
print("MULTIPLY(A,B): FOR MULTIPLYING A AND B.")
print("SQRT(A): FOR SQUARE ROOT.")
print("POWER(A,B): FOR A^B.")
print("AVERAGE(A,B): FOR AVERAGE OF A AND B.")
print("QUOTIENT_OF_DIVISION(A,B): QUOTIENT WHEN A IS DIVIDED BY B.")
print("REMAINDER_OF_DIVISION(A,B): REMAINDER WHEN A IS DIVIDED BY B.")
print("exp() : EULER'S NUMBER.")
print("A KIND REQUEST: PLEASE DO NOT USE THE COMPOSITION OPTION FOR GRAPHS. IT IS STILL A WORK IN PROGRESS!")
print("PLEASE SELECT FROM FOLLOWING OPERATIONS:")
print("ADD\nSUBSTRACT\nMULTIPLY\nQUOTIENT_OF_DIVISION\nREMAINDER_OF_DIVISION\nAVERAGE\nN_ROOT\nPOWER\nPARTITIONS\nFINDING_ROOTS\nLOGARITHMIC_OPERATIONS\nGRAPHS\nCALCULUS\nPRIME_IDENTIFICATION\n2_BY_2_PARTITIONS\nFACTORIAL_CALCULATIONS\nFUN_ZONE")
E = input("ENTER THE OPERATION YOU NEED:")
if E == "ADD":
w = int(input("HOW MANY TIMES YOU WANT TO ADD 2 NUMBERS:"))
for i in range (w):
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(ADD(A,B))
i=i+1
elif E == "SUBSTRACT":
w = int(input("HOW MANY TIMES YOU WANT TO SUBSTRACT 2 NUMBERS:"))
for i in range (w):
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(SUBSTRACT(A,B))
i=i+1
elif E == "MULTIPLY":
w = int(input("HOW MANY TIMES YOU WANT TO MULTIPLY 2 NUMBERS:"))
for i in range (w):
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(MULTIPLY(A,B))
i=i+1
elif E == "QUOTIENT_OF_DIVISION":
w = int(input("HOW MANY TIMES YOU WANT TO FIND THE QUOTIENT OF A DIVISION OF 2 NUMBERS:"))
for i in range (w):
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(QUOTIENT_OF_DIVISION(A,B))
i=i+1
elif E == "REMAINDER_OF_DIVISION":
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(REMAINDER_OF_DIVISION(A,B))
elif E == "AVERAGE":
A=int(input("ENTER THE FIRST NUMBER:"))
B=int(input("ENTER THE SECOND NUMBER:"))
print(AVERAGE(A,B))
elif E == "PARTITIONS":
w=int(input("ENTER THE NUMBER:"))
import math
# Hardy-Ramanujan asymptotic: p(n) ~ e**(pi*sqrt(2*n/3)) / (4*n*sqrt(3))
X=(math.e**(math.pi*math.sqrt(2*w/3)))/(4*w*math.sqrt(3))
print(X)
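The PARTITIONS branch is based on the Hardy-Ramanujan asymptotic p(n) ≈ e^(π√(2n/3)) / (4n√3); for n = 10 the true value is p(10) = 42 and the estimate lands near 48, so treat it as an order-of-magnitude guide rather than an exact count:

```python
import math

def partition_estimate(n):
    # Hardy-Ramanujan asymptotic for the number of integer partitions of n
    return math.e ** (math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

approx = partition_estimate(10)   # true p(10) = 42
```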
elif E == "FINDING_ROOTS":
V=int(input("ENTER THE DEGREE OF THE POLYNOMIAL:"))
if V == 2:
Y=int(input("ENTER THE COEFFICIENT OF X^2:"))
Z=int(input("ENTER THE COEFFICIENT OF X:"))
W=int(input("ENTER THE CONSTANT:"))
T=(-Z+((Z**2)-4*Y*W)**(1/2))/(2*Y)
U=(-Z-((Z**2)-4*Y*W)**(1/2))/(2*Y)
print("THE POLYNOMIAL IS ({}X^2) + ({}X) + ({})".format(Y,Z,W))
print("THE FIRST ROOT IS",T)
print("THE SECOND ROOT IS",U)
else:
print("WE CURRENTLY DON'T SUPPORT POLYNOMIALS OF DEGREE GREATER THAN 2")
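The two roots printed above come from the quadratic formula; substituting them back into the polynomial should give (approximately) zero, which makes a handy sanity check:

```python
# x^2 - 5x + 6, whose roots are 3 and 2
Y, Z, W = 1, -5, 6
disc = (Z ** 2 - 4 * Y * W) ** 0.5
T = (-Z + disc) / (2 * Y)
U = (-Z - disc) / (2 * Y)
```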
elif E == "LOGARITHMIC_OPERATIONS":
a=int(input("ENTER THE NUMBER:"))
import math
print(math.log(a))
elif E == "POWER":
A=int(input("ENTER THE BASE:"))
B=int(input("ENTER THE EXPONENT:"))
print(POWER(A,B))
elif E == "N_ROOT":
A=int(input("ENTER THE NUMBER:"))
B=int(input("ENTER THE NUMBER N:"))
print(N_ROOT(A,B))
elif E == "GRAPHS":
print("TRIGONOMETRIC\nEXPONENTIAL\nLOGARITHMIC\nPOLYNOMIAL\nCOMPOSITION")
d=input("PLEASE ENTER WHICH TYPE OF GRAPH YOU HAVE TO MAKE:")
if d == "TRIGONOMETRIC":
b=input("ENTER THE TRIGONOMETRIC FUNCTION(sin/cos/tan/cosec/sec/cot):")
c=float(input("ENTER THE NUMBER WHICH WILL BE THE COEFFICIENT OF theta:"))
if b == "sin":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=np.sin(x)
plt.plot(x,y)
plt.show()
elif b == "cos":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=np.cos(x)
plt.plot(x,y)
plt.show()
elif b == "tan":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=np.tan(x)
plt.plot(x,y)
plt.show()
elif b == "cosec":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=1/np.sin(x)  # NumPy has no csc(); use the reciprocal of sin
plt.plot(x,y)
plt.show()
elif b == "sec":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=1/np.cos(x)  # NumPy has no sec(); use the reciprocal of cos
plt.plot(x,y)
plt.show()
elif b == "cot":
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(0, c*(np.pi), 0.1)
y=1/np.tan(x)  # NumPy has no cot(); use the reciprocal of tan
plt.plot(x,y)
plt.show()
else:
print("INVALID INPUT")
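NumPy exposes sin/cos/tan but no csc, sec, or cot, so the reciprocal branches rely on the identities csc = 1/sin, sec = 1/cos, cot = 1/tan, shown here with the standard-library math module:

```python
import math

csc = lambda t: 1 / math.sin(t)
sec = lambda t: 1 / math.cos(t)
cot = lambda t: 1 / math.tan(t)
```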
elif d == "EXPONENTIAL":
f=int(input("ENTER THE FIRST VALUE OF RANGE:"))
g=int(input("ENTER THE LAST VALUE OF RANGE:" ))
import matplotlib.pyplot as plt
import numpy as np
x=np.arange(f,g,0.1)
y=np.e**x
plt.plot(x,y)
plt.show()
elif d == "LOGARITHMIC":
import matplotlib.pyplot as plt
import numpy as np
h=int(input("ENTER THE LOWER BOUND OF RANGE:"))
i=int(input("ENTER THE UPPER BOUND OF RANGE:"))
x=np.arange(h,i,0.1)
y=np.log(x)
plt.plot(x,y)
plt.show()
elif d == "POLYNOMIAL":
a_y = input("ENTER WHAT TYPE OF POLYNOMIAL GRAPH(QUADRATIC/CUBIC/BIQUADRATIC):")
if a_y == "QUADRATIC":
import numpy as np
import matplotlib.pyplot as plt
a_s=int(input("ENTER THE COEFFICIENT OF X^2:"))
a_d=int(input("ENTER THE COEFFICIENT OF X:"))
a_f=int(input("ENTER THE CONSTANT:"))
a_g=int(input("ENTER THE FIRST NUMBER IN THE RANGE:"))
a_h=int(input("ENTER THE LAST NUMBER IN THE RANGE:"))
x=np.linspace(a_g,a_h,256, endpoint = True)
y=(a_s*(x*x))+(a_d*x)+a_f
plt.plot(x,y)
plt.show()
elif a_y == "CUBIC":
import numpy as np
import matplotlib.pyplot as plt
a_q=int(input("ENTER THE COEFFICIENT OF X^3:"))
a_s=int(input("ENTER THE COEFFICIENT OF X^2:"))
a_d=int(input("ENTER THE COEFFICIENT OF X:"))
a_f=int(input("ENTER THE CONSTANT:"))
a_g=int(input("ENTER THE FIRST NUMBER IN THE RANGE:"))
a_h=int(input("ENTER THE LAST NUMBER IN THE RANGE:"))
x=np.linspace(a_g,a_h,256, endpoint = True)
y=(a_q*(x*x*x))+(a_s*(x*x))+(a_d*x)+a_f
plt.plot(x,y)
plt.show()
elif a_y == "BIQUADRATIC":
import numpy as np
import matplotlib.pyplot as plt
a_r=int(input("ENTER THE COEFFICIENT OF X^4:"))
a_q=int(input("ENTER THE COEFFICIENT OF X^3:"))
a_s=int(input("ENTER THE COEFFICIENT OF X^2:"))
a_d=int(input("ENTER THE COEFFICIENT OF X:"))
a_f=int(input("ENTER THE CONSTANT:"))
a_g=int(input("ENTER THE FIRST NUMBER IN THE RANGE:"))
a_h=int(input("ENTER THE LAST NUMBER IN THE RANGE:"))
x=np.linspace(a_g,a_h,256, endpoint = True)
y=(a_r*(x*x*x*x))+(a_q*(x*x*x))+(a_s*(x*x))+(a_d*x)+a_f
plt.plot(x,y)
plt.show()
elif d == "COMPOSITION":
import numpy as np
import matplotlib.pyplot as plt
q_s =int(input("ENTER THE LOWER BOUND IN THE RANGE:"))
t_i =int(input("ENTER THE UPPER BOUND IN THE RANGE:"))
e_w = input("ENTER THE INNER FUNCTION:")
w_e = input("ENTER THE OUTER FUNCTION:")
x = np.arange(q_s, t_i, 0.1) # np.linspace takes a sample count, not a 0.1 step
y = COMPOSE(w_e, e_w)
plt.plot(x,y)
plt.show()
else:
print("INVALID INPUT")
elif E == "CALCULUS":
k=input("DIFFERENTIATION/INTEGRATION:")
if k == "DIFFERENTIATION":
print("REMEMBER! IN THE OUTPUT, THE FIRST EXPRESSION IS THE ACTUAL FUNCTION, WHILE THE SECOND, THIRD AND FOURTH ARE THE FIRST, SECOND AND THIRD DERIVATIVES")
print("COMPOSITION OF FUNCTIONS IS INBUILT")
l=input("WHICH TYPE OF FUNCTION DO YOU WANT TO USE (TRIGONOMETRIC/ALGEBRIC/LOGARITHMIC/EXPONENTIAL):")
if l == "TRIGONOMETRIC":
m = input("ENTER THE TRIGONOMETRIC FUNCTION(sin/cos/tan/cosec/sec/cot):")
if m == "sin":
n=input("ENTER THE FUNCTION WHICH WILL BE WITHIN SIN():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
k_l = diff(sin(n), x, a_b) # the symbol and derivative order belong inside diff(); the original built a 3-tuple
print(k_l)
a_b=a_b+1
elif m == "cos":
o=input("ENTER THE FUNCTION WHICH WILL BE WITHIN COS():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
j_k = diff(cos(o), x, a_b)
print(j_k)
a_b=a_b+1
elif m == "tan":
p=input("ENTER THE FUNCTION WHICH WILL BE WITHIN TAN():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
i_j = diff(tan(p), x, a_b)
print(i_j)
a_b=a_b+1
elif m == "cosec":
q=input("ENTER THE FUNCTION WHICH IS WITHIN COSEC():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
h_i = diff(1/sin(q), x, a_b)
print(h_i)
a_b=a_b+1
elif m == "sec":
r=input("ENTER THE FUNCTION WHICH WILL BE WITHIN SEC():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
g_h = diff(1/cos(r), x, a_b)
print(g_h)
a_b=a_b+1
elif m == "cot":
s=input("ENTER THE FUNCTION WHICH WILL BE WITHIN COT():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
f_g = diff(1/tan(s), x, a_b)
print(f_g)
a_b=a_b+1
else:
print("INVALID INPUT")
elif l == "ALGEBRIC":
t=input("ENTER THE ALGEBRIC EXPRESSION:")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
e_f = diff(t, x, a_b)
print(e_f)
a_b=a_b+1
elif l == "LOGARITHMIC":
u=input("ENTER THE FUNCTION WHICH WILL BE IN LOG():")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
c_d=(diff(log(u),x,a_b))
print(c_d)
a_b=a_b+1
elif l == "EXPONENTIAL":
v=input("ENTER THE FUNCTION WHICH WILL BE IN THE POWER OF e:")
from sympy import *
x,y,z = symbols('x y z')
init_printing(use_unicode=True)
a_b=1
for a_b in range(4):
d_e=(diff(exp(v),x,a_b))
print(d_e)
a_b=a_b+1
else:
print("NOT SUPPORTED CURRENTLY")
elif k == "INTEGRATION":
w=input("WHICH TYPE OF FUNCTION DO YOU WANT TO USE (TRIGONOMETRIC/ALGEBRIC/LOGARITHMIC/EXPONENTIAL):")
if w == "ALGEBRIC":
p_q = input("WHICH TYPE OF INTEGRAL DO YOU NEED(LINE/SURFACE/VOLUME):")
if p_q == "LINE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x= Symbol('x')
m_n=input("ENTER THE FUNCTION:")
print(integrate(m_n, x))
elif p_q == "SURFACE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y=symbols('x y')
m_n=input("ENTER THE FUNCTION:")
print(integrate(m_n, x, y))
elif p_q == "VOLUME":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y,z=symbols('x y z')
m_n = input("ENTER THE FUNCTION:")
print (integrate(m_n, x, y, z))
else:
print("INVALID INPUT")
elif w == "TRIGONOMETRIC":
a_y=input("WHICH TYPE OF INTEGRAL DO YOU WANT(SINGLE/DOUBLE/TRIPLE):")
if a_y == "SINGLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x= Symbol('x')
m_n=input("ENTER THE FUNCTION:")
print(integrate(m_n, x))
elif a_y == "DOUBLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y=symbols('x y')
m_n=input("ENTER THE FUNCTION:")
print(integrate(m_n, x, y))
elif a_y == "TRIPLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y,z=symbols('x y z')
m_n = input("ENTER THE FUNCTION:")
print (integrate(m_n, x, y, z))
else:
print("INPUT INVALID")
elif w == "LOGARITHMIC":
a_y=input("WHICH TYPE OF INTEGRAL DO YOU WANT(SINGLE/DOUBLE/TRIPLE):")
if a_y == "SINGLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x= Symbol('x')
m_n=input("ENTER THE FUNCTION:")
print(integrate(log(m_n), x))
elif a_y == "DOUBLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y=symbols('x y')
m_n=input("ENTER THE FUNCTION:")
print(integrate(log(m_n), x, y))
elif a_y == "TRIPLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y,z=symbols('x y z')
m_n = input("ENTER THE FUNCTION:")
print (integrate(log(m_n), x, y, z))
else:
print("INVALID INPUT")
elif w == "EXPONENTIAL":
print("USE exp() FOR EULER'S NUMBER")
a_y=input("WHICH TYPE OF INTEGRAL DO YOU WANT(SINGLE/DOUBLE/TRIPLE):")
if a_y == "SINGLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x= Symbol('x')
m_n=input("ENTER THE FUNCTION:")
print(integrate(exp(m_n), x))
elif a_y == "DOUBLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y=symbols('x y')
m_n=input("ENTER THE FUNCTION:")
print(integrate(exp(m_n), x, y))
elif a_y == "TRIPLE":
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x,y,z=symbols('x y z')
m_n = input("ENTER THE FUNCTION:")
print (integrate(exp(m_n), x, y, z))
else:
print("INVALID INPUT")
else:
print("PLEASE ENTER VALID INPUT")
else:
print("INVALID INPUT")
elif E == "PRIME_IDENTIFICATION":
D_R = int(input("HOW MANY TIMES DO YOU WANT TO IDENTIFY PRIMES:"))
for i in range (D_R):
a_z = int(input("ENTER THE NUMBER:"))
if a_z>1:
for i in range(2,a_z):
if a_z%i==0:
print("THE NUMBER IS COMPOSITE")
break
else:
print("THE NUMBER IS PRIME")
else:
print("INVALID INPUT")
i=i+1
elif E == "2_BY_2_PARTITIONS":
S_D = int(input("ENTER THE NUMBER:"))
for i in range(S_D): # the original looped over an undefined n instead of the entered number S_D
print(i, "+", S_D - i)
elif E == "FACTORIAL_CALCULATIONS":
W_E = int(input("ENTER THE NUMBER:"))
fact = 1
for a in range(1,W_E+1):
fact = fact*a
print(fact)
elif E == "FUN_ZONE":
print("WELCOME TO THE FUN ZONE OF PyCalc. HERE YOU CAN DRAW DIFFERENT 3-D AND 2-D SHAPES OF DIFFERENT DIMENSIONS.")
a_t = input("WHAT DO YOU WANT TO DRAW?(2-D/3-D):")
if a_t == "2-D":
a_yo=input("WHICH SHAPE?(SQUARE/RECTANGLE/POLYGONS):")
if a_yo == "SQUARE":
a_oy=int(input("ENTER THE SIDE LENGTH:"))
import turtle
pd=turtle.Screen()
pd=turtle.Turtle()
for i in range(4): # a square has four sides; range(5) redrew the first side
pd.forward(a_oy)
pd.left(90)
elif a_yo == "RECTANGLE":
a_oy=int(input("ENTER THE LENGTH:"))
a_ot=int(input("ENTER THE BREADTH:"))
import turtle
pd=turtle.Screen()
pd=turtle.Turtle()
pd.forward(a_oy)
pd.left(90)
pd.forward(a_ot)
pd.left(90)
pd.forward(a_oy)
pd.left(90)
pd.forward(a_ot)
elif a_yo == "POLYGONS":
tess = int(input("ENTER THE NUMBER OF SIDES OF THE POLYGON:"))
l_en = int(input("ENTER THE LENGTH OF EACH SIDE:"))
cd = ((tess-2)*180)/tess
sa = 180-cd
import turtle
fg = turtle.Screen()
fg = turtle.Turtle()
for i in range(tess): # one iteration per side; tess+1 redrew the first side
fg.forward(l_en)
fg.left(sa)
else:
print("INVALID")
elif a_t == "3-D":
a_op = input("WHICH SHAPE?(CUBE/CUBOID)")
if a_op == "CUBE":
pid = int(input("ENTER THE SIDE LENGTH:"))
import turtle
pd=turtle.Screen()
pd=turtle.Turtle()
for i in range (5):
pd.forward(pid)
pd.left(90)
i=i+1
pd.right(45)
pd.forward(pid)
pd.left(45)
pd.left(90)
pd.forward(pid)
pd.left(45)
pd.forward(pid)
pd.right(45)
pd.right(90)
pd.forward(pid)
pd.right(45)
pd.forward(pid)
pd.right(45)
pd.forward(pid)
pd.right(90)
pd.right(45)
pd.forward(pid)
pd.right(180)
pd.forward(pid)
pd.right(135)
pd.forward(pid)
pd.right(90)
pd.forward(pid)
pd.right(90)
pd.forward(pid)
else:
print("INVALID")
else:
print("PLEASE ENTER CORRECTLY!")
L=["ADD-ADD","ADD-SUBTRACT","ADD-MULTIPLY","ADD-DIVIDE","SUBTRACT-ADD","SUBTRACT-SUBTRACT","SUBTRACT-MULTIPLY","SUBTRACT-DIVIDE","MULTIPLY-ADD","MULTIPLY-SUBTRACT","MULTIPLY-MULTIPLY","MULTIPLY-DIVIDE","DIVIDE-ADD","DIVIDE-SUBTRACT","DIVIDE-MULTIPLY","DIVIDE-DIVIDE"]
print("DO YOU WANT TO WORK WITH MORE NUMBERS?")
G=input("TYPE YES OR NO:")
if G == "YES":
H=input("HOW MANY NUMBERS?:")
if H == "3":
I=int(input("ENTER THE FIRST NUMBER:"))
J=int(input("ENTER THE SECOND NUMBER:"))
K=int(input("ENTER THE THIRD NUMBER:"))
print("YOU CAN SELECT FROM THE FOLLOWING OPERATIONS AND TYPE THEIR RESPECTIVE CODES FOR NUMBERS a,b,c:")
print(" L(0) : a+b+c ")
print(" L(1) : a+b-c ")
print(" L(2) : a+b*c ")
print(" L(3) : a+b/c ")
print(" L(4) : a-b+c ")
print(" L(5) : a-b-c ")
print(" L(6) : a-b*c ")
print(" L(7) : a-b/c ")
print(" L(8) : a*b+c ")
print(" L(9) : a*b-c ")
print(" L(10) : a*b*c ")
print(" L(11) : a*b/c ")
print(" L(12) : a/b+c ")
print(" L(13) : a/b-c ")
print(" L(14) : a/b*c ")
print(" L(15) : a/b/c ")
M=input("ENTER THE CODE OF THE OPERATION OF YOUR CHOICE:")
if M == "L(0)":
print(I+J+K)
print(L[0])
print("THE LIST:")
print(L)
elif M == "L(1)":
print(I+J-K)
print(L[1])
print("THE LIST:")
print(L)
elif M == "L(2)":
print(I+J*K)
print(L[2])
print("THE LIST:")
print(L)
elif M == "L(3)":
print(I+J/K)
print(L[3])
print("THE LIST:")
print(L)
elif M == "L(4)":
print(I-J+K)
print(L[4])
print("THE LIST:")
print(L)
elif M == "L(5)":
print(I-J-K)
print(L[5])
print("THE LIST:")
print(L)
elif M == "L(6)":
print(I-J*K)
print(L[6])
print("THE LIST:")
print(L)
elif M == "L(7)":
print(I-J/K)
print(L[7])
print("THE LIST:")
print(L)
elif M == "L(8)":
print(I*J+K)
print(L[8])
print("THE LIST:")
print(L)
elif M == "L(9)":
print(I*J-K)
print(L[9])
print("THE LIST:")
print(L)
elif M == "L(10)":
print(I*J*K)
print(L[10])
print("THE LIST:")
print(L)
elif M == "L(11)":
print(I*J/K)
print(L[11])
print("THE LIST:")
print(L)
elif M == "L(12)":
print(I/J+K)
print(L[12])
print("THE LIST:")
print(L)
elif M == "L(13)":
print(I/J-K)
print(L[13])
print("THE LIST:")
print(L)
elif M == "L(14)":
print(I/J*K)
print(L[14])
print("THE LIST:")
print(L)
elif M == "L(15)":
print(I/J/K)
print(L[15])
print("THE LIST:")
print(L)
else:
print("CODE IS INCORRECT")
elif H == "4":
N=("ADD-ADD-ADD","ADD-ADD-SUBTRACT","ADD-ADD-MULTIPLY","ADD-ADD-DIVIDE","ADD-SUBTRACT-ADD","ADD-SUBTRACT-SUBTRACT","ADD-SUBTRACT-MULTIPLY","ADD-SUBTRACT-DIVIDE","ADD-MULTIPLY-ADD","ADD-MULTIPLY-SUBTRACT","ADD-MULTIPLY-MULTIPLY","ADD-MULTIPLY-DIVIDE","ADD-DIVIDE-ADD","ADD-DIVIDE-SUBTRACT","ADD-DIVIDE-MULTIPLY","ADD-DIVIDE-DIVIDE","SUBTRACT-ADD-ADD","SUBTRACT-ADD-SUBTRACT","SUBTRACT-ADD-MULTIPLY","SUBTRACT-ADD-DIVIDE","SUBTRACT-SUBTRACT-ADD","SUBTRACT-SUBTRACT-SUBTRACT","SUBTRACT-SUBTRACT-MULTIPLY","SUBTRACT-SUBTRACT-DIVIDE","SUBTRACT-MULTIPLY-ADD","SUBTRACT-MULTIPLY-SUBTRACT","SUBTRACT-MULTIPLY-MULTIPLY","SUBTRACT-MULTIPLY-DIVIDE","SUBTRACT-DIVIDE-ADD","SUBTRACT-DIVIDE-SUBTRACT","SUBTRACT-DIVIDE-MULTIPLY","SUBTRACT-DIVIDE-DIVIDE","MULTIPLY-ADD-ADD","MULTIPLY-ADD-SUBTRACT","MULTIPLY-ADD-MULTIPLY","MULTIPLY-ADD-DIVIDE","MULTIPLY-SUBTRACT-ADD","MULTIPLY-SUBTRACT-SUBTRACT","MULTIPLY-SUBTRACT-MULTIPLY","MULTIPLY-SUBTRACT-DIVIDE","MULTIPLY-MULTIPLY-ADD","MULTIPLY-MULTIPLY-SUBTRACT","MULTIPLY-MULTIPLY-MULTIPLY","MULTIPLY-MULTIPLY-DIVIDE","MULTIPLY-DIVIDE-ADD","MULTIPLY-DIVIDE-SUBTRACT","MULTIPLY-DIVIDE-MULTIPLY","MULTIPLY-DIVIDE-DIVIDE","DIVIDE-ADD-ADD","DIVIDE-ADD-SUBTRACT","DIVIDE-ADD-MULTIPLY","DIVIDE-ADD-DIVIDE","DIVIDE-SUBTRACT-ADD","DIVIDE-SUBTRACT-SUBTRACT","DIVIDE-SUBTRACT-MULTIPLY","DIVIDE-SUBTRACT-DIVIDE","DIVIDE-MULTIPLY-ADD","DIVIDE-MULTIPLY-SUBTRACT","DIVIDE-MULTIPLY-MULTIPLY","DIVIDE-MULTIPLY-DIVIDE","DIVIDE-DIVIDE-ADD","DIVIDE-DIVIDE-SUBTRACT","DIVIDE-DIVIDE-MULTIPLY","DIVIDE-DIVIDE-DIVIDE")
O=int(input("ENTER THE FIRST NUMBER:"))
P=int(input("ENTER THE SECOND NUMBER:"))
Q=int(input("ENTER THE THIRD NUMBER:"))
R=int(input("ENTER THE FOURTH NUMBER:"))
print("YOU CAN SELECT THE OPERATION OF YOUR CHOICE FROM BELOW CONSIDERING ANY FOUR INTEGERS a,b,c,d :")
print(" N(0) : a+b+c+d ")
print(" N(1) : a+b+c-d ")
print(" N(2) : a+b+c*d ")
print(" N(3) : a+b+c/d ")
print(" N(4) : a+b-c+d ")
print(" N(5) : a+b-c-d ")
print(" N(6) : a+b-c*d ")
print(" N(7) : a+b-c/d ")
print(" N(8) : a+b*c+d ")
print(" N(9) : a+b*c-d ")
print(" N(10) : a+b*c*d ")
print(" N(11) : a+b*c/d ")
print(" N(12) : a+b/c+d ")
print(" N(13) : a+b/c-d ")
print(" N(14) : a+b/c*d ")
print(" N(15) : a+b/c/d ")
print(" N(16) : a-b+c+d ")
print(" N(17) : a-b+c-d ")
print(" N(18) : a-b+c*d ")
print(" N(19) : a-b+c/d ")
print(" N(20) : a-b-c+d ")
print(" N(21) : a-b-c-d ")
print(" N(22) : a-b-c*d ")
print(" N(23) : a-b-c/d ")
print(" N(24) : a-b*c+d ")
print(" N(25) : a-b*c-d ")
print(" N(26) : a-b*c*d ")
print(" N(27) : a-b*c/d ")
print(" N(28) : a-b/c+d ")
print(" N(29) : a-b/c-d ")
print(" N(30) : a-b/c*d ")
print(" N(31) : a-b/c/d ")
print(" N(32) : a*b+c+d ")
print(" N(33) : a*b+c-d ")
print(" N(34) : a*b+c*d ")
print(" N(35) : a*b+c/d ")
print(" N(36) : a*b-c+d ")
print(" N(37) : a*b-c-d ")
print(" N(38) : a*b-c*d ")
print(" N(39) : a*b-c/d ")
print(" N(40) : a*b*c+d ")
print(" N(41) : a*b*c-d ")
print(" N(42) : a*b*c*d ")
print(" N(43) : a*b*c/d ")
print(" N(44) : a*b/c+d ")
print(" N(45) : a*b/c-d ")
print(" N(46) : a*b/c*d ")
print(" N(47) : a*b/c/d ")
print(" N(48) : a/b+c+d ")
print(" N(49) : a/b+c-d ")
print(" N(50) : a/b+c*d ")
print(" N(51) : a/b+c/d ")
print(" N(52) : a/b-c+d ")
print(" N(53) : a/b-c-d ")
print(" N(54) : a/b-c*d ")
print(" N(55) : a/b-c/d ")
print(" N(56) : a/b*c+d ")
print(" N(57) : a/b*c-d ")
print(" N(58) : a/b*c*d ")
print(" N(59) : a/b*c/d ")
print(" N(60) : a/b/c+d ")
print(" N(61) : a/b/c-d ")
print(" N(62) : a/b/c*d ")
print(" N(63) : a/b/c/d ")
S=input("ENTER THE CODE OF YOUR CHOICE:")
if S == "N(0)":
print(O+P+Q+R)
print("THE OPERATION:")
print(N[0])
elif S == "N(1)":
print(O+P+Q-R)
print("THE OPERATION:")
print(N[1])
elif S == "N(2)":
print(O+P+Q*R)
print("THE OPERATION:")
print(N[2])
elif S == "N(3)":
print(O+P+Q/R)
print("THE OPERATION:")
print(N[3])
elif S == "N(4)":
print(O+P-Q+R)
print("THE OPERATION:")
print(N[4])
elif S == "N(5)":
print(O+P-Q-R)
print("THE OPERATION:")
print(N[5])
elif S == "N(6)":
print(O+P-Q*R)
print("THE OPERATION:")
print(N[6])
elif S == "N(7)":
print(O+P-Q/R)
print("THE OPERATION:")
print(N[7])
elif S == "N(8)":
print(O+P*Q+R)
print("THE OPERATOR:")
print(N[8])
elif S == "N(9)":
print(O+P*Q-R)
print("THE OPERATOR:")
print(N[9])
elif S == "N(10)":
print(O+P*Q*R)
print("THE OPERATOR:")
print(N[10])
elif S == "N(11)":
print(O+P*Q/R)
print("THE OPERATOR:")
print(N[11])
elif S == "N(12)":
print(O+P/Q+R)
print("THE OPERATOR:")
print(N[12])
elif S == "N(13)":
print(O+P/Q-R)
print("THE OPERATOR:")
print(N[13])
elif S == "N(14)":
print(O+P/Q*R)
print("THE OPERATOR:")
print(N[14])
elif S == "N(15)":
print(O+P/Q/R)
print("THE OPERATOR:")
print(N[15])
elif S == "N(16)":
print(O-P+Q+R)
print("THE OPERATOR:")
print(N[16])
elif S == "N(17)":
print(O-P+Q-R)
print("THE OPERATOR:")
print(N[17])
elif S == "N(18)":
print(O-P+Q*R)
print("THE OPERATOR:")
print(N[18])
elif S == "N(19)":
print(O-P+Q/R)
print("THE OPERATOR:")
print(N[19])
elif S == "N(20)":
print(O-P-Q+R)
print("THE OPERATOR:")
print(N[20])
elif S == "N(21)":
print(O-P-Q-R)
print("THE OPERATOR:")
print(N[21])
elif S == "N(22)":
print(O-P-Q*R)
print("THE OPERATOR:")
print(N[22])
elif S == "N(23)":
print(O-P-Q/R)
print("THE OPERATOR:")
print(N[23])
elif S == "N(24)":
print(O-P*Q+R)
print("THE OPERATOR:")
print(N[24])
elif S == "N(25)":
print(O-P*Q-R)
print("THE OPERATOR:")
print(N[25])
elif S == "N(26)":
print(O-P*Q*R)
print("THE OPERATOR:")
print(N[26])
elif S == "N(27)":
print(O-P*Q/R)
print("THE OPERATOR:")
print(N[27])
elif S == "N(28)":
print(O-P/Q+R)
print("THE OPERATOR:")
print(N[28])
elif S == "N(29)":
print(O-P/Q-R)
print("THE OPERATOR:")
print(N[29])
elif S == "N(30)":
print(O-P/Q*R)
print("THE OPERATOR:")
print(N[30])
elif S == "N(31)":
print(O-P/Q/R)
print("THE OPERATOR:")
print(N[31])
elif S == "N(32)":
print(O*P+Q+R)
print("THE OPERATOR:")
print(N[32])
elif S == "N(33)":
print(O*P+Q-R)
print("THE OPERATOR:")
print(N[33])
elif S == "N(34)":
print(O*P+Q*R)
print("THE OPERATOR:")
print(N[34])
elif S == "N(35)":
print(O*P+Q/R)
print("THE OPERATOR:")
print(N[35])
elif S == "N(36)":
print(O*P-Q+R)
print("THE OPERATOR:")
print(N[36])
elif S == "N(37)":
print(O*P-Q-R)
print("THE OPERATOR:")
print(N[37])
elif S == "N(38)":
print(O*P-Q*R)
print("THE OPERATOR:")
print(N[38])
elif S == "N(39)":
print(O*P-Q/R)
print("THE OPERATOR:")
print(N[39])
elif S == "N(40)":
print(O*P*Q+R)
print("THE OPERATOR:")
print(N[40])
elif S == "N(41)":
print(O*P*Q-R)
print("THE OPERATOR:")
print(N[41])
elif S == "N(42)":
print(O*P*Q*R)
print("THE OPERATOR:")
print(N[42])
elif S == "N(43)":
print(O*P*Q/R)
print("THE OPERATOR:")
print(N[43])
elif S == "N(44)":
print(O*P/Q+R)
print("THE OPERATOR:")
print(N[44])
elif S == "N(45)":
print(O*P/Q-R)
print("THE OPERATOR:")
print(N[45])
elif S == "N(46)":
print(O*P/Q*R)
print("THE OPERATOR:")
print(N[46])
elif S == "N(47)":
print(O*P/Q/R)
print("THE OPERATOR:")
print(N[47])
elif S == "N(48)":
print(O/P+Q+R)
print("THE OPERATOR:")
print(N[48])
elif S == "N(49)":
print(O/P+Q-R)
print("THE OPERATOR:")
print(N[49])
elif S == "N(50)":
print(O/P+Q*R)
print("THE OPERATOR:")
print(N[50])
elif S == "N(51)":
print(O/P+Q/R)
print("THE OPERATOR:")
print(N[51])
elif S == "N(52)":
print(O/P-Q+R)
print("THE OPERATOR:")
print(N[52])
elif S == "N(53)":
print(O/P-Q-R)
print("THE OPERATOR:")
print(N[53])
elif S == "N(54)":
print(O/P-Q*R)
print("THE OPERATOR:")
print(N[54])
elif S == "N(55)":
print(O/P-Q/R)
print("THE OPERATOR:")
print(N[55])
elif S == "N(56)":
print(O/P*Q+R)
print("THE OPERATOR:")
print(N[56])
elif S == "N(57)":
print(O/P*Q-R)
print("THE OPERATOR:")
print(N[57])
elif S == "N(58)":
print(O/P*Q*R)
print("THE OPERATOR:")
print(N[58])
elif S == "N(59)":
print(O/P*Q/R)
print("THE OPERATOR:")
print(N[59])
elif S == "N(60)":
print(O/P/Q+R)
print("THE OPERATOR:")
print(N[60])
elif S == "N(61)":
print(O/P/Q-R)
print("THE OPERATOR:")
print(N[61])
elif S == "N(62)":
print(O/P/Q*R)
print("THE OPERATOR:")
print(N[62])
elif S == "N(63)":
print(O/P/Q/R)
print("THE OPERATOR:")
print(N[63])
else:
print("CODE ENTERED IS INCORRECT")
else:
print("WE DO NOT SUPPORT MORE THAN 4 NUMBERS OR LESS THAN 2 NUMBERS")
import smtplib
sender_email = "pythoncalc096@gmail.com"
receiver_email = input("PLEASE ENTER YOUR EMAIL ADDRESS:")
password = "pycalc2023" # NOTE: hard-coding credentials in source is unsafe; read them from the environment instead
message = """
THANK YOU FOR USING PyCalc
REGARDS,
PyCalc
"""
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(sender_email, password)
print("SUCCESSFUL LOGIN")
server.sendmail(sender_email, receiver_email, message)
print("CHECK YOUR INBOX")
else:
import smtplib
sender_email = "pythoncalc096@gmail.com"
receiver_email = input("PLEASE ENTER YOUR EMAIL ADDRESS:")
password = "pycalc2023"
message = """
THANK YOU FOR USING PyCalc
REGARDS,
PyCalc
"""
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(sender_email, password)
print("SUCCESSFUL LOGIN")
server.sendmail(sender_email, receiver_email, message)
print("CHECK YOUR INBOX")
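The quadratic branch near the top of this section computes roots with `(...)**(1/2)`, which silently relies on Python 3 returning a complex number when a negative discriminant is raised to a fractional power. A more explicit sketch of the same computation (an illustrative helper, not part of PyCalc's menu flow) uses `cmath.sqrt`:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c; complex when the discriminant is negative."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -3, 2))  # ((2+0j), (1+0j))
print(quadratic_roots(1, 0, 1))   # complex roots of x^2 + 1
```

Using `cmath.sqrt` makes the complex-root case deliberate rather than an accident of operator semantics.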
# source_examiner.py (repo: rsoaresp/pydep, rev: bcd391d7fa538431e364f611e6cf652d5baa8556, license: MIT)
from typing import Dict, Set
from code_checker import find_functions, find_imports, clean
from load_files import get_source_code
class SourceParser:

    def __init__(self, path: str):
        self.source_code = get_source_code(path)

    def scan(self) -> Dict[str, Dict[str, Set[str]]]:
        relations_dict = dict()
        for file_name, source in self.source_code.items():
            imports = find_imports(source)
            response = dict()
            for key, value in find_functions(source).items():
                response[key] = {i if (i in imports or i in find_functions(source).keys()) else None for i in value}
            relations_dict[file_name] = response
        return clean(relations_dict)

    def diagram(self):
        pass
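`scan()` keeps a called name only when it resolves to an import or a locally defined function, mapping everything else to `None` for `clean()` to drop. That filtering step can be sketched in isolation (a hypothetical standalone helper, independent of `code_checker`):

```python
def filter_known_calls(calls, imports, functions):
    # Same comprehension shape as scan(): unknown names collapse to None...
    kept = {c if (c in imports or c in functions) else None for c in calls}
    kept.discard(None)  # ...which a clean() step would discard, as done here
    return kept

print(filter_known_calls({"os", "helper", "mystery"}, {"os"}, {"helper"}))
```

Only `os` and `helper` survive; the unknown name `mystery` is filtered away.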
# python/dmopc18c5p0.py (repo: ThePeeps191/dmoj-solutions, rev: 7137e945f3f595c481ad4d29e1dc3a77d8b26e55, license: MIT)
# not yet finished
mode = input()
a = [float(a) for a in input().split()]
b = [float(a) for a in input().split()]
if mode == "Multiply":
    print(f"{a[0] * b[0]} {a[1] * b[1]} {a[2] * b[2]}")
elif mode == "Screen":
    pass  # unfinished branch; the file is marked "not yet finished"
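The `Multiply` branch above hardcodes indices 0 through 2; `zip` expresses the same element-wise product for vectors of any length:

```python
def elementwise_product(a, b):
    # Pair up corresponding entries and multiply them.
    return [x * y for x, y in zip(a, b)]

print(elementwise_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [4.0, 10.0, 18.0]
```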
49783ee4a1e802d5f5076c85bec069585ff1fcb2 | 1,266 | py | Python | src/commands/context.py | an-dyy/lefi-bot | 07f1b97d009dcc3fb8ec52eb14d4503861ac8a74 | [
"MIT"
] | 3 | 2021-07-26T03:28:40.000Z | 2021-07-29T08:16:02.000Z | src/commands/context.py | an-dyy/lefi-bot | 07f1b97d009dcc3fb8ec52eb14d4503861ac8a74 | [
"MIT"
] | null | null | null | src/commands/context.py | an-dyy/lefi-bot | 07f1b97d009dcc3fb8ec52eb14d4503861ac8a74 | [
"MIT"
] | 1 | 2021-07-29T07:50:12.000Z | 2021-07-29T07:50:12.000Z | from __future__ import annotations
import snekcord as s
import typing as t
import inspect
from .command import Command
from ..utils import CheckFailed, maybe_coro
__all__ = ("Context",)
class Context:
    def __init__(self, *args, **kwargs):
        self.message = kwargs["message"]
        self._parser = kwargs["parser"]
        self.bot = kwargs["bot"]

        self.command: t.Optional[Command] = None
        self.prefix: t.Optional[str] = None
        self.args: t.Optional[list] = []
        self.kwargs: t.Optional[dict] = {}

    async def send(self, content: str) -> s.Message:
        return await self.channel.messages.create(content=content)

    async def invoke(self, *args, **kwargs):
        if self.command is not None:
            return await self.command(self, *args, **kwargs)

    @property
    def me(self) -> s.GuildMember:
        return self.message.guild.members.get(self.bot.user.id)

    @property
    def channel(self) -> s.TextChannel:
        return self.message.channel

    @property
    def guild(self) -> t.Optional[s.Guild]:
        return self.message.guild

    @property
    def author(self) -> s.User:
        return self.message.author

    @property
    def valid(self) -> bool:
        return self.command is not None
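`Context` is a thin wrapper whose properties simply delegate to the wrapped message, so the pattern can be shown without Discord at all. A self-contained mimic (the `FakeMessage` stand-in below is hypothetical, not part of lefi-bot or snekcord):

```python
class FakeMessage:
    def __init__(self, author, channel):
        self.author = author
        self.channel = channel

class MiniContext:
    def __init__(self, message):
        self.message = message
        self.command = None  # set once a command is matched

    @property
    def author(self):
        return self.message.author  # delegate to the wrapped message

    @property
    def valid(self):
        return self.command is not None

ctx = MiniContext(FakeMessage("dyy", "#general"))
print(ctx.author, ctx.valid)  # dyy False
```

Delegating through read-only properties keeps the context cheap to build and impossible to desynchronize from its message.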
#!/usr/bin/env python
# steps/make/haxe.py (repo: flipcoder/siege-tools, rev: df8a9fa5a4a6f2bcce4789cebe17b0c4dd3f7e60, license: MIT)
import os
import sgmake
from common import Status
from common import Support
from common import Settings
from common.Plugin import Plugin
import subprocess
from common import call
def make(project):
cmd = [ 'haxe', 'compile.hxml' ]
try:
call(cmd)
except subprocess.CalledProcessError:
return Status.FAILURE
return Status.SUCCESS
def update(project):
pass
def compatible(project):
support = Support.MASK & (~Support.PROJECT)
if os.path.isfile("compile.hxml"):
support |= Support.PROJECT
return support
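`make()` treats a non-zero exit status from the `haxe` invocation as failure by catching `CalledProcessError`. The same success/failure pattern with the standard library's `subprocess.run` directly (a generic helper, demonstrated here with the Python interpreter rather than `haxe`):

```python
import subprocess
import sys

def run_build(cmd):
    # check=True raises CalledProcessError on a non-zero exit status.
    try:
        subprocess.run(cmd, check=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False
    return True

print(run_build([sys.executable, "-c", "pass"]))  # True
```

Catching `FileNotFoundError` as well covers the case where the build tool is not installed at all.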
# services/test/services/test_matchmaking.py (repo: tkblackbelt/Asteroids-Multiplayer-Backend, rev: e09b110ec8698657d8f22f2600e95acc663b1ba0, license: Apache-2.0)
# import pytest
# import os
# import boto3
# import inject
# from moto import mock_sqs
# from matchmaking.matchmaking import MatchMakerApp, MatchMakerConfig, get_configuration
# from common.domain.game_queue_interface import GameQueueInterface
# from common.domain.player import Player
# @pytest.fixture(scope='function')
# def aws_credentials():
# """Mocked AWS Credentials for moto."""
# os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
# os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
# os.environ['AWS_SECURITY_TOKEN'] = 'testing'
# os.environ['AWS_SESSION_TOKEN'] = 'testing'
# @pytest.fixture(scope='function')
# def game_queue(aws_credentials):
# with mock_sqs():
# from common.adapters.sqs_game_queue import SQSGameQueueAdapter
# sqs = boto3.resource('sqs', 'ca-central-1')
# sqs.create_queue(QueueName='MyTestQueue')
# yield SQSGameQueueAdapter("MyTestQueue")
# @pytest.fixture
# def configuration(game_queue) -> None:
# return MatchMakerConfig(
# debug_mode=False,
# sleep_time_seconds=60,
# game_queue=game_queue,
# max_games=50,
# players_per_game=2
# )
# class TestMatchMaking:
# def test_can_setup_match(self, configuration: MatchMakerConfig, game_queue: GameQueueInterface):
# player = Player("test1")
# player2 = Player("test2")
# game_queue.push(player)
# game_queue.push(player2)
# app = MatchMakerApp(configuration)
# assert app.can_setup_match()
# def test_can_not_setup_match(self, configuration: MatchMakerConfig):
# app = MatchMakerApp(configuration)
# assert not app.can_setup_match()
# def test_find_match_with_people_in_queue(self, configuration: MatchMakerConfig, game_queue: GameQueueInterface):
# player = Player("test1")
# player2 = Player("test2")
# game_queue.push(player)
# game_queue.push(player2)
# app = MatchMakerApp(configuration)
# players = app.find_players_for_match()
# assert len(players) == 2
# def test_find_match_with_no_people_in_queue(self, configuration: MatchMakerConfig):
# app = MatchMakerApp(configuration)
# players = app.find_players_for_match()
# assert len(players) == 0
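The commented-out tests above assert one core rule: a match starts only when the queue holds at least `players_per_game` entries. That rule can be exercised without AWS or moto by using a plain deque as a stand-in for the SQS-backed queue (a hypothetical sketch, not the project's MatchMakerApp):

```python
from collections import deque

class SimpleMatchMaker:
    def __init__(self, queue, players_per_game=2):
        self.queue = queue
        self.players_per_game = players_per_game

    def can_setup_match(self):
        # Are there enough queued players to fill one game?
        return len(self.queue) >= self.players_per_game

    def find_players_for_match(self):
        if not self.can_setup_match():
            return []
        return [self.queue.popleft() for _ in range(self.players_per_game)]

matcher = SimpleMatchMaker(deque(["test1", "test2"]))
print(matcher.can_setup_match())         # True
print(matcher.find_players_for_match())  # ['test1', 'test2']
print(matcher.can_setup_match())         # False, since the queue is now drained
```

Keeping the queue behind a small interface, as the real code does with `GameQueueInterface`, is exactly what makes this kind of substitution possible in tests.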
# authorstyle/preprocessing/util.py (repo: mullerpeter/authorstyle, rev: 9cb869e6428f5115478c6be516451b28572189bf, license: MIT)
from nltk.corpus import stopwords as nltk_stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem import PorterStemmer
def make_tokens_alphabetic(text):
    """
    Remove all non alphabetic tokens
    :type text: str
    :param text: The text
    :rtype: list of str
    :returns: List of alphabetic tokens
    """
    tokenizer = RegexpTokenizer(r'\w+')
    # Lower-casing first prevents hyphenated words from being filtered out.
    tokens = tokenizer.tokenize(text.lower())
    return [t for t in tokens if t.isalpha()]


def remove_stopwords(tokens, stopwords=nltk_stopwords.words('english')):
    """
    Removes all stopwords from the document tokens
    :type tokens: list of str
    :param tokens: List of tokens
    :type stopwords: list of str
    :param stopwords: List of stopwords to be removed from the document tokens. (Default: Stopword list from nltk)
    :rtype: list of str
    :returns: List of tokens without stopwords
    """
    return [t for t in tokens if t not in stopwords]


def stem_tokens(tokens):
    """
    Stem all tokens in List
    :type tokens: list of str
    :param tokens: List of tokens
    :rtype: list of str
    :returns: List of stemmed tokens
    """
    porter_stemmer = PorterStemmer()
    return [porter_stemmer.stem(t) for t in tokens]
| 28.888889 | 114 | 0.698462 | 183 | 1,300 | 4.918033 | 0.322404 | 0.08 | 0.06 | 0.046667 | 0.246667 | 0.232222 | 0.232222 | 0.142222 | 0.093333 | 0.093333 | 0 | 0 | 0.229231 | 1,300 | 44 | 115 | 29.545455 | 0.898204 | 0.485385 | 0 | 0 | 0 | 0 | 0.017668 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
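The three helpers in `util.py` above chain naturally: tokenize, drop stopwords, stem. A stdlib-only sketch of the first two steps (using `re` in place of nltk's `RegexpTokenizer` and a toy stopword list, so it runs without nltk or its corpora installed):

```python
import re

TOY_STOPWORDS = {"the", "is", "on"}  # stand-in for nltk_stopwords.words('english')

def make_tokens_alphabetic(text):
    # lowercase, split on word characters, keep purely alphabetic tokens
    tokens = re.findall(r"\w+", text.lower())
    return [t for t in tokens if t.isalpha()]

def remove_stopwords(tokens, stopwords=TOY_STOPWORDS):
    return [t for t in tokens if t not in stopwords]

tokens = remove_stopwords(make_tokens_alphabetic("The cat sat on 2 mats."))
# tokens == ['cat', 'sat', 'mats']  -- '2' fails isalpha(), 'the'/'on' are stopwords
```

Tokenizing on `\w+` before the `isalpha()` filter is what keeps the halves of hyphenated words in the output, which is the point of the comment in the original function.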
b8cdcbe6652c846189ff84a7c167532b9b6fb956 | 555 | py | Python | src/pss_achievement.py | PieInTheSky-Inc/pss-statistics | be57974f0f0a36a9a0ea1e2117348f6f481b26b0 | [
"Apache-2.0"
] | 12 | 2019-11-06T12:02:37.000Z | 2022-03-17T17:24:47.000Z | src/pss_achievement.py | PieInTheSky-Inc/pss-statistics | be57974f0f0a36a9a0ea1e2117348f6f481b26b0 | [
"Apache-2.0"
] | 243 | 2019-11-06T12:05:01.000Z | 2022-03-28T13:37:17.000Z | src/pss_achievement.py | PieInTheSky-Inc/pss-statistics | be57974f0f0a36a9a0ea1e2117348f6f481b26b0 | [
"Apache-2.0"
] | 19 | 2019-11-06T02:17:48.000Z | 2022-01-27T02:40:07.000Z | import pss_entity as entity
# ---------- Constants ----------
ACHIEVEMENT_DESIGN_BASE_PATH: str = 'AchievementService/ListAchievementDesigns2?languageKey=en'
ACHIEVEMENT_DESIGN_KEY_NAME: str = 'AchievementDesignId'
ACHIEVEMENT_DESIGN_DESCRIPTION_PROPERTY_NAME: str = 'AchievementTitle'
# ---------- Initialization ----------
achievements_designs_retriever: entity.EntityRetriever = entity.EntityRetriever(
ACHIEVEMENT_DESIGN_BASE_PATH,
ACHIEVEMENT_DESIGN_KEY_NAME,
ACHIEVEMENT_DESIGN_DESCRIPTION_PROPERTY_NAME,
'AchievementDesigns'
) | 30.833333 | 95 | 0.78018 | 50 | 555 | 8.2 | 0.52 | 0.24878 | 0.102439 | 0.121951 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001996 | 0.097297 | 555 | 18 | 96 | 30.833333 | 0.816367 | 0.122523 | 0 | 0 | 0 | 0 | 0.226804 | 0.117526 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8d69ce1eb58c86311657fe6011b6313c8150db1 | 6,841 | py | Python | dacScanV3.py | lpetre-ulb/vfatqc-python-scripts | 1f11c736de630deba24c7da927eeb9746206db93 | [
"MIT"
] | 1 | 2017-07-10T12:48:27.000Z | 2017-07-10T12:48:27.000Z | dacScanV3.py | lpetre-ulb/vfatqc-python-scripts | 1f11c736de630deba24c7da927eeb9746206db93 | [
"MIT"
] | 200 | 2016-11-11T16:13:22.000Z | 2020-05-27T13:41:39.000Z | dacScanV3.py | lpetre-ulb/vfatqc-python-scripts | 1f11c736de630deba24c7da927eeb9746206db93 | [
"MIT"
] | 20 | 2016-09-14T12:41:46.000Z | 2020-02-10T19:46:04.000Z | #!/bin/env python
r"""
Dac Scan
========
Performs a VFAT3 DAC scan on all unmasked optohybrids. Measures the correspondence between DAC units
and physical values (in fC, uA, or V depending on the register of interest).
``dacScanV3.py``
================
Synopsis
--------
**run_scans.py** **dacScanV3** [-**h**] [--**dacSelect** *DACSELECT*] [-**e**] shelf slot ohMask
Mandatory arguments
-------------------
.. program:: run_scans.py dacScanV3
Positional arguments
--------------------
.. option:: shelf
uTCA crate shelf number
.. option:: slot
AMC slot number in the uTCA crate
.. option:: ohMask
optohybrid mask to apply, a 1 in the n^{th} bit indicates the n^{th} OH should be considered
Optional arguments
------------------
.. option:: -h, --help
show the help message and exit
.. option:: --dacSelect <DACSELECT>
DAC Selection, see `The VFAT3 Manual <https://espace.cern.ch/cms-project-GEMElectronics/VFAT3/Forms/AllItems.aspx>`_
.. option:: -e, --extRefADC
Use the externally referenced ADC on the VFAT3
Environment
-----------
The following `$SHELL` variables should be defined beforehand:
.. glossary::
:envvar: `BUILD_HOME`
the location of your ``vfatqc-python-scripts`` directory
:envvar: `DATA_PATH`
the location of input data
Then execute:
`source $BUILD_HOME/vfatqc-python-scripts/setup/paths.sh`
"""
from gempython.tools.hw_constants import maxVfat3DACSize
import os
if __name__ == '__main__':
"""
Script to perform DAC scans with VFAT3
By: Brian Dorney (brian.l.dorney@cern.ch)
"""
# create the parser
import argparse
parser = argparse.ArgumentParser(description="Scans a given DAC on a VFAT3 against the chip's ADC. Either the internally or externally referenced ADC can be used. Scans all VFATs on a given link simultaneously")
# Positional arguments
from reg_utils.reg_interface.common.reg_xml_parser import parseInt
parser.add_argument("shelf", type=int, help="uTCA shelf to access")
parser.add_argument("slot", type=int, help="slot in the uTCA of the AMC you are connecting to")
parser.add_argument("ohMask", type=parseInt, help="ohMask to apply, a 1 in the n^th bit indicates the n^th OH should be considered", metavar="ohMask")
# Optional arguments
from gempython.tools.hw_constants import gemVariants
parser.add_argument("-d","--debug", action="store_true", dest="debug",
help = "Print additional debugging information")
parser.add_argument("--dacSelect", type=int, dest="dacSelect",
help = "DAC Selection", default=None)
parser.add_argument("-e","--extRefADC", action="store_true", dest="extRefADC",
help = "Use the externally referenced ADC on the VFAT3.")
parser.add_argument("-f","--filename",type=str,dest="filename",default="dacScanV3.root",
help = "Specify output filename to store data in.")
parser.add_argument("--series", action="store_true", dest="series",
help = "Scan nonzero links in ohMask in series (successive RPC calls) instead of in parallel (one RPC call)")
parser.add_argument("--stepSize", type=int, dest="stepSize",default=1,
help="Supply a step size for the scan")
parser.add_argument("-v","--vfatmask",type=parseInt,dest="vfatmask",default=None,
help="VFATs to be masked in scan & analysis applications (e.g. 0xFFFFF masks all VFATs)")
parser.add_argument("--gemType",type=str,help="String that defines the GEM variant, available from the list: {0}".format(gemVariants.keys()),default="ge11")
parser.add_argument("--detType",type=str,help="Detector type within gemType. If gemType is 'ge11' then this should be from list {0}; if gemType is 'ge21' then this should be from list {1}; and if type is 'me0' then this should be from the list {2}".format(gemVariants['ge11'],gemVariants['ge21'],gemVariants['me0']),default=None)
args = parser.parse_args()
from gempython.utils.gemlogger import printRed
if ((args.dacSelect not in maxVfat3DACSize.keys()) and (args.dacSelect is not None)):
printRed("Input DAC selection {0} not understood".format(args.dacSelect))
printRed("possible options include:")
from gempython.vfatqc.utils.qcutilities import printDACOptions
printDACOptions()
exit(os.EX_USAGE)
# Open rpc connection to hw
from gempython.vfatqc.utils.qcutilities import getCardName, inputOptionsValid
cardName = getCardName(args.shelf,args.slot)
from gempython.tools.vfat_user_functions_xhal import *
vfatBoard = HwVFAT(cardName, 0, args.debug, args.gemType, args.detType) # Assign link 0; we will update later
print('opened connection')
amcBoard = vfatBoard.parentOH.parentAMC
if amcBoard.fwVersion < 3:
printRed("DAC Scan of v2b electronics is not supported, exiting!!!")
exit(os.EX_USAGE)
# Check options
if not inputOptionsValid(args, amcBoard.fwVersion):
exit(os.EX_USAGE)
pass
# Make output files
import ROOT as r
outF = r.TFile(args.filename,"RECREATE")
from gempython.vfatqc.utils.scanUtils import dacScanAllLinks, dacScanSingleLink
from gempython.vfatqc.utils.treeStructure import gemDacCalTreeStructure
calTree = gemDacCalTreeStructure(
name="dacScanTree",
nameX="dummy", # temporary name, will be over-ridden
nameY=("ADC1" if args.extRefADC else "ADC0"),
dacSelect=-1, #temporary value, will be over-ridden
description="GEM DAC Calibration of VFAT3 DAC"
)
if args.dacSelect is None: # No DAC selected; scan them all
for dacSelect in maxVfat3DACSize.keys():
args.dacSelect = dacSelect
if args.series:
for ohN in range(0, amcBoard.nOHs):
if( not ((args.ohMask >> ohN) & 0x1)):
continue
# update the OH in question
vfatBoard.parentOH.link = ohN
dacScanSingleLink(args, calTree, vfatBoard)
pass
pass
else:
dacScanAllLinks(args, calTree, vfatBoard)
else: # Specific DAC Requested; scan only this DAC
if args.series:
for ohN in range(0, amcBoard.nOHs):
if( not ((args.ohMask >> ohN) & 0x1)):
continue
# update the OH in question
vfatBoard.parentOH.link = ohN
dacScanSingleLink(args, calTree, vfatBoard)
pass
pass
else:
dacScanAllLinks(args, calTree, vfatBoard)
outF.cd()
calTree.autoSave("SaveSelf")
calTree.write()
outF.Close()
print("All DAC Scans Completed. Goodbye")
| 37.587912 | 333 | 0.652536 | 845 | 6,841 | 5.233136 | 0.360947 | 0.024423 | 0.046133 | 0.02171 | 0.180914 | 0.176391 | 0.131162 | 0.131162 | 0.113523 | 0.113523 | 0 | 0.009303 | 0.230083 | 6,841 | 181 | 334 | 37.79558 | 0.830264 | 0.053501 | 0 | 0.287356 | 0 | 0.045977 | 0.280376 | 0 | 0 | 0 | 0.002598 | 0 | 0 | 0 | null | null | 0.057471 | 0.137931 | null | null | 0.091954 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
b8f60a2ee265799459028ba389432e56f7f85ac3 | 437 | py | Python | running_modes/validation/logging/local_validation_logger.py | lilleswing/Reinvent-1 | ac4e3e6fa6379c6f4af883478dfd1b3407933ada | [
"Apache-2.0"
] | 183 | 2020-04-04T02:01:15.000Z | 2022-03-30T21:56:56.000Z | running_modes/validation/logging/local_validation_logger.py | prasannavd/Reinvent | ca02ebee8d8ed83223c55f4a1dd1b3fbc2359616 | [
"MIT"
] | 39 | 2020-04-05T15:19:56.000Z | 2022-03-09T12:58:21.000Z | running_modes/validation/logging/local_validation_logger.py | prasannavd/Reinvent | ca02ebee8d8ed83223c55f4a1dd1b3fbc2359616 | [
"MIT"
] | 70 | 2020-04-05T19:25:43.000Z | 2022-02-22T12:04:39.000Z | from running_modes.configurations.general_configuration_envelope import GeneralConfigurationEnvelope
from running_modes.validation.logging.base_validation_logger import BaseValidationLogger
class LocalValidationLogger(BaseValidationLogger):
def __init__(self, configuration: GeneralConfigurationEnvelope):
super().__init__(configuration)
def log_message(self, message: str):
self._common_logger.info(message)
| 36.416667 | 100 | 0.828375 | 41 | 437 | 8.414634 | 0.609756 | 0.063768 | 0.092754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10984 | 437 | 11 | 101 | 39.727273 | 0.886889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b8fe5d38825946ccc2ee8d9ee6a81d3d2f1b86e1 | 314 | py | Python | desafio_007_media_aritmetica.py | VagnerGit/PythonCursoEmVideo | 3e80e12fbf21f5be08c554d77fa9073dc0a3145f | [
"MIT"
] | null | null | null | desafio_007_media_aritmetica.py | VagnerGit/PythonCursoEmVideo | 3e80e12fbf21f5be08c554d77fa9073dc0a3145f | [
"MIT"
] | null | null | null | desafio_007_media_aritmetica.py | VagnerGit/PythonCursoEmVideo | 3e80e12fbf21f5be08c554d77fa9073dc0a3145f | [
"MIT"
] | null | null | null | """
Python Exercise 7:
Write a program that reads a student's grades,
then calculates and shows their average.
"""
n1 = float(input('Enter grade AV: '))
n2 = float(input('Enter grade AVS: '))
n3 = float(input('Enter grade VR: '))
# media = (n1 + n2 + n3) / 3
print("The student's average is {:.1f}".format((n1 + n2 + n3) / 3))
| 28.545455 | 55 | 0.652866 | 55 | 314 | 3.727273 | 0.654545 | 0.146341 | 0.234146 | 0.292683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049808 | 0.16879 | 314 | 10 | 56 | 31.4 | 0.735632 | 0.423567 | 0 | 0 | 0 | 0 | 0.427746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7702c6429016f81529601cedcd461d2e7b9ff30a | 1,248 | py | Python | wwpdb/utils/tests-nmr-tox/ImportTests.py | wwPDB/py-wwpdb_utils_nmr | 531f1cf5dba0c1fb4e5d710d740c3ee05d34dafb | [
"Apache-2.0"
] | null | null | null | wwpdb/utils/tests-nmr-tox/ImportTests.py | wwPDB/py-wwpdb_utils_nmr | 531f1cf5dba0c1fb4e5d710d740c3ee05d34dafb | [
"Apache-2.0"
] | null | null | null | wwpdb/utils/tests-nmr-tox/ImportTests.py | wwPDB/py-wwpdb_utils_nmr | 531f1cf5dba0c1fb4e5d710d740c3ee05d34dafb | [
"Apache-2.0"
] | 1 | 2021-06-21T10:46:22.000Z | 2021-06-21T10:46:22.000Z | ##
# File: NEFImportTests.py
# Date: 06-Oct-2018 E. Peisach
#
# Updates:
##
"""Test cases for NEFTranslator - simply import everything to ensure imports work"""
import unittest
import sys
if __package__ is None or __package__ == "":
from os import path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
from commonsetup import TESTOUTPUT # noqa: F401 pylint: disable=import-error,unused-import
else:
from .commonsetup import TESTOUTPUT # noqa: F401 pylint: disable=relative-beyond-top-level
from wwpdb.utils.nmr.NEFTranslator.NEFTranslator import NEFTranslator
from wwpdb.utils.nmr.NmrDpUtility import NmrDpUtility
from wwpdb.utils.nmr.NmrDpReport import NmrDpReport
from wwpdb.utils.nmr.NmrStarToCif import NmrStarToCif
from wwpdb.utils.nmr.rci.RCI import RCI
from wwpdb.utils.nmr.BMRBChemShiftStat import BMRBChemShiftStat
class ImportTests(unittest.TestCase):
def testInstantiate(self):
_c = NEFTranslator() # noqa: F841
_npu = NmrDpUtility() # noqa: F841
_ndp = NmrDpReport() # noqa: F841
_nstc = NmrStarToCif() # noqa: F841
_rci = RCI() # noqa: F841
_bmrb = BMRBChemShiftStat() # noqa: F841
if __name__ == "__main__":
unittest.main()
| 31.2 | 96 | 0.725962 | 150 | 1,248 | 5.866667 | 0.46 | 0.061364 | 0.095455 | 0.115909 | 0.118182 | 0.118182 | 0.118182 | 0.118182 | 0 | 0 | 0 | 0.029211 | 0.177083 | 1,248 | 39 | 97 | 32 | 0.827653 | 0.254006 | 0 | 0 | 0 | 0 | 0.008791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.5 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
771f182baf79a3a8864e33668bb6f90ad253cc46 | 2,175 | py | Python | emLam/corpus/multi_file_writer.py | DavidNemeskey/emLam | 89359e7eee5b7b9c596dec8ab6654591d4039e3e | [
"MIT"
] | 2 | 2018-03-31T10:00:11.000Z | 2018-09-15T19:38:19.000Z | emLam/corpus/multi_file_writer.py | DavidNemeskey/emLam | 89359e7eee5b7b9c596dec8ab6654591d4039e3e | [
"MIT"
] | 16 | 2017-02-28T13:58:28.000Z | 2018-03-14T11:42:01.000Z | emLam/corpus/multi_file_writer.py | dlt-rilmta/emLam | 2b7274dcda4080445698e10b34a3db2e2eed5112 | [
"MIT"
] | 1 | 2017-01-30T15:06:37.000Z | 2017-01-30T15:06:37.000Z | #!/usr/bin/env python3
"""Defines ways to "convert" a file name to an input/output stream."""
from __future__ import absolute_import, division, print_function
from builtins import range
from io import TextIOBase
import math
import os
from emLam.utils import allname, openall
class MultiFileWriter(TextIOBase):
def __init__(self, file_name, max_lines, wait_for_empty=True):
self.file_name = file_name
self.max_lines = max_lines
self.wait_for_empty = wait_for_empty
self.index = 1
self.lines = 0
self.f = openall(self.__get_file_name(), 'wt')
def __get_file_name(self, index=None, digits=None):
basename, extension = allname(self.file_name)
ext = extension if extension else ''
num_format = '{{:0{}d}}'.format(digits) if digits else '{}'
index_str = num_format.format(self.index if index is None else index)
return '{}-{}{}'.format(basename, index_str, ext)
def close(self):
self.f.close()
def fileno(self):
return self.f.fileno()
def flush(self):
return self.f.flush()
def write(self, s):
for line in s.splitlines():
self.f.write(line)
self.f.write(u'\n')
self.lines += 1
if self.lines >= self.max_lines and (
not self.wait_for_empty or line == ''):
self.__new_file()
def __new_file(self):
"""
Opens the next file, resets the line counter and renames all previous
files if we need a new digit.
"""
self.f.close()
digits = int(math.log10(self.index)) + 1
self.index += 1
new_digits = int(math.log10(self.index)) + 1
if new_digits > digits:
for i in range(1, self.index):
os.rename(self.__get_file_name(i, digits),
self.__get_file_name(i, new_digits))
self.f = openall(self.__get_file_name(), 'wt')
self.lines = 0
def isatty(self):
return False
def readable(self):
return False
def seekable(self):
return False
def writable(self):
return True
| 29.794521 | 77 | 0.597241 | 294 | 2,175 | 4.217687 | 0.329932 | 0.064516 | 0.044355 | 0.048387 | 0.117742 | 0.091935 | 0.091935 | 0.046774 | 0 | 0 | 0 | 0.009126 | 0.294713 | 2,175 | 72 | 78 | 30.208333 | 0.799218 | 0.085517 | 0 | 0.169811 | 0 | 0 | 0.012295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.207547 | false | 0 | 0.113208 | 0.113208 | 0.471698 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
7727342447537ee5cc16bae72c12b32f1db5dc54 | 2,064 | py | Python | pyaz/backup/vault/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/backup/vault/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/backup/vault/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | '''
Online storage entity in Azure used to hold data such as backup copies, recovery points and backup policies.
'''
from ... pyaz_utils import _call_az
from . import backup_properties, encryption, identity
def create(location, name, resource_group, tags=None):
'''
Create a new Recovery Services vault.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Name of the Recovery services vault.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- tags -- space-separated tags: key[=value] [key[=value] ...]. Use '' to clear existing tags.
'''
return _call_az("az backup vault create", locals())
def show(name, resource_group):
'''
Show details of a particular Recovery service vault.
Required Parameters:
- name -- Name of the Recovery services vault.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
'''
return _call_az("az backup vault show", locals())
def list(resource_group=None):
'''
List Recovery service vaults within a subscription.
Optional Parameters:
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
'''
return _call_az("az backup vault list", locals())
def delete(name, resource_group, force=None, yes=None):
'''
Delete an existing Recovery services vault.
Required Parameters:
- name -- Name of the Recovery services vault.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- force -- Force completion of the requested action.
- yes -- Do not prompt for confirmation.
'''
return _call_az("az backup vault delete", locals())
| 35.586207 | 161 | 0.702519 | 267 | 2,064 | 5.355805 | 0.303371 | 0.109091 | 0.073427 | 0.062937 | 0.52028 | 0.464336 | 0.429371 | 0.429371 | 0.429371 | 0.429371 | 0 | 0 | 0.200581 | 2,064 | 57 | 162 | 36.210526 | 0.866667 | 0.685078 | 0 | 0 | 0 | 0 | 0.163743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7748a409ddb122ff7dda205c1356cad3425fc3ef | 226 | py | Python | variants/urls.py | raonyguimaraes/mendelmd | 7f8f98a55b12c1fe9dfc2d95905a6128ca800091 | [
"BSD-3-Clause"
] | 33 | 2016-07-22T21:39:09.000Z | 2021-06-24T02:57:02.000Z | variants/urls.py | raonyguimaraes/mendelmd | 7f8f98a55b12c1fe9dfc2d95905a6128ca800091 | [
"BSD-3-Clause"
] | 41 | 2017-06-20T03:10:33.000Z | 2021-12-24T23:54:41.000Z | variants/urls.py | raonyguimaraes/mendelmd | 7f8f98a55b12c1fe9dfc2d95905a6128ca800091 | [
"BSD-3-Clause"
] | 8 | 2017-06-14T21:07:47.000Z | 2021-01-12T17:59:49.000Z | from django.conf.urls import *
from . import views
from django.urls import include, path
urlpatterns = [
path('', views.index, name='variant_index'),
path('view/<int:variant_id>/', views.view, name='variant_view'),
] | 25.111111 | 68 | 0.699115 | 31 | 226 | 5 | 0.483871 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 226 | 9 | 69 | 25.111111 | 0.798969 | 0 | 0 | 0 | 0 | 0 | 0.207048 | 0.096916 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
77542820ab0a7920b93751412fd4a15d7d8291c8 | 159 | py | Python | root/models.py | radoslawdabrowski/radoslawdabrowski.pl | b3d4f92ea51b40b104449259a376134aeb11766b | [
"Apache-2.0"
] | 1 | 2019-05-17T10:57:25.000Z | 2019-05-17T10:57:25.000Z | root/models.py | radoslawdabrowski/radoslawdabrowski.pl | b3d4f92ea51b40b104449259a376134aeb11766b | [
"Apache-2.0"
] | 1 | 2019-08-06T01:55:54.000Z | 2019-08-06T01:55:54.000Z | root/models.py | radoslawdabrowski/personal-website | b3d4f92ea51b40b104449259a376134aeb11766b | [
"MIT"
] | 1 | 2019-05-07T21:23:57.000Z | 2019-05-07T21:23:57.000Z | from django.db import models
# Const
DEFAULT_ID = 1
class BaseModel(models.Model):
objects = models.Manager()
class Meta:
abstract = True
| 12.230769 | 30 | 0.666667 | 20 | 159 | 5.25 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008403 | 0.251572 | 159 | 12 | 31 | 13.25 | 0.87395 | 0.031447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
775d4714b6856f633e120322cfcb970d141f530a | 621 | py | Python | api/views/asset.py | lndba/apasa_backend | e0bb96e22a22f6e2a5a2826f225388113473e7e2 | [
"Apache-2.0"
] | 1 | 2019-08-06T07:31:40.000Z | 2019-08-06T07:31:40.000Z | api/views/asset.py | lndba/apasa_backend | e0bb96e22a22f6e2a5a2826f225388113473e7e2 | [
"Apache-2.0"
] | null | null | null | api/views/asset.py | lndba/apasa_backend | e0bb96e22a22f6e2a5a2826f225388113473e7e2 | [
"Apache-2.0"
] | null | null | null | from rest_framework.viewsets import ModelViewSet
from api.models import *
from api.serializers.asset import *
from api.pagination.page import MyPageNumberPagination
class AssetListViewSet(ModelViewSet):
queryset = Asset.objects.all().order_by('id')
pagination_class = MyPageNumberPagination
serializer_class = AssetListSerializers
class AssetInfoViewSet(ModelViewSet):
queryset = Asset.objects.all().order_by('id')
serializer_class = AssetInfoSerializers
class AssetUpdataViewSet(ModelViewSet):
queryset = Asset.objects.all().order_by('id')
serializer_class = AssetUpdataSerializers
| 24.84 | 54 | 0.78744 | 63 | 621 | 7.634921 | 0.428571 | 0.043659 | 0.155925 | 0.199584 | 0.336798 | 0.336798 | 0.336798 | 0.336798 | 0.245322 | 0.245322 | 0 | 0 | 0.130435 | 621 | 24 | 55 | 25.875 | 0.890741 | 0 | 0 | 0.214286 | 0 | 0 | 0.009709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
77606ddbd8837bece3247917199682103e7b6859 | 844 | py | Python | oneibl/globus.py | nbonacchi/ibllib | 9066c00a8e9a65a1d209144a2ac54d0b87bec0b3 | [
"MIT"
] | 1 | 2020-11-21T07:02:21.000Z | 2020-11-21T07:02:21.000Z | oneibl/globus.py | nbonacchi/ibllib | 9066c00a8e9a65a1d209144a2ac54d0b87bec0b3 | [
"MIT"
] | null | null | null | oneibl/globus.py | nbonacchi/ibllib | 9066c00a8e9a65a1d209144a2ac54d0b87bec0b3 | [
"MIT"
] | null | null | null | from ibllib.io import globus
from oneibl.one import ONE
import oneibl.params
par = oneibl.params.get()
class OneGlobus(ONE):
def __init__(self, **kwargs):
# Init connection to the database
super(OneGlobus, self).__init__(**kwargs)
# Init connection to Globus if needed
self._tc = globus.login_auto(par.GLOBUS['CLIENT_ID'], str_app='globus_one')
def setup(self):
super(OneGlobus, self).setup()
globus.login_auto(par.GLOBUS['CLIENT_ID'], str_app='globus_one')
def download(self, eids):
pass
# transfer_object = globus.TransferData(
# self._globus_transfer_client,
# source_endpoint=par.GLOBUS['SERVER_ENDPOINT'],
# destination_endpoint=par.GLOBUS['LOCAL_ENDPOINT'],
# verify_checksum=True,
# sync_level='checksum',
# label='ONE_transfer_python')
| 27.225806 | 83 | 0.686019 | 106 | 844 | 5.188679 | 0.462264 | 0.065455 | 0.072727 | 0.08 | 0.181818 | 0.181818 | 0.181818 | 0.181818 | 0.181818 | 0.181818 | 0 | 0 | 0.193128 | 844 | 30 | 84 | 28.133333 | 0.807636 | 0.393365 | 0 | 0 | 0 | 0 | 0.075697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0.076923 | 0.230769 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
776074899b373caca325ce2db69a4c1bee3a951f | 6,348 | py | Python | web/webapp/models.py | chickenshifu/InstagramBot | f7f042ee919a02e41de74bf0f91ce8ccc9ade798 | [
"Unlicense"
] | 2 | 2021-02-13T20:05:39.000Z | 2021-10-11T18:56:43.000Z | web/webapp/models.py | chickenshifu/InstagramBot | f7f042ee919a02e41de74bf0f91ce8ccc9ade798 | [
"Unlicense"
] | null | null | null | web/webapp/models.py | chickenshifu/InstagramBot | f7f042ee919a02e41de74bf0f91ce8ccc9ade798 | [
"Unlicense"
] | 1 | 2021-03-19T19:11:35.000Z | 2021-03-19T19:11:35.000Z | from webapp import db, login_manager
from datetime import datetime
from werkzeug.security import generate_password_hash, check_password_hash
from flask_login import UserMixin # is_authenticated is_loggedin usw...
@login_manager.user_loader # if user is authenticated, then....
def load_user(user_id):
return Users.query.get(user_id)
class Users(db.Model, UserMixin):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64), unique=True, index=True)
password_hash = db.Column(db.String(500))
def __init__(self, username, password):
self.username = username
self.password_hash = generate_password_hash(password)
def check_password(self, password):
return check_password_hash(self.password_hash, password)
def __repr__(self):
return f"Username: {self.username}"
class Abonnenten(db.Model):
__tablename__ = 'abonnenten'
abonnenten_url = db.Column(db.String(100), primary_key=True)
datum = db.Column(db.DateTime, default=datetime.utcnow)
def __init__(self, abonnenten_url):
self.abonnenten_url = abonnenten_url
def __repr__(self):
return f"Abonnent: {self.abonnenten_url}"
class Abonniert(db.Model):
__tablename__ = 'abonniert'
abonniet_url = db.Column(db.String(100), primary_key=True)
datum = db.Column(db.DateTime, default=datetime.utcnow)
def __init__(self, abonniet_url):
self.abonniet_url = abonniet_url
def __repr__(self):
return f"Abonniert: {self.abonniet_url}"
class Source(db.Model):
__tablename__ = 'source'
id = db.Column(db.Integer, primary_key=True)
source_url = db.Column(db.String(100), index=True)
targets_total = db.Column(db.Integer)
datum = db.Column(db.DateTime, default=datetime.utcnow)
targets_raw = db.relationship('Targets_raw', backref='targets_raw_quelle')
targets_done = db.relationship('Targets_done', backref='targets_done_quelle')
def __init__(self, source_url):
self.source_url = source_url
def __repr__(self):
return f"Target-Source: {self.source_url} vom: {self.datum}"
class Targets_raw(db.Model):
__tablename__ = 'targets_raw'
id = db.Column(db.Integer, primary_key=True)
target_url = db.Column(db.String(100), index=True)
source_id = db.Column(db.Integer, db.ForeignKey('source.id'))
def __init__(self, target_url, source_id):
self.target_url = target_url
self.source_id = source_id
def __repr__(self):
return f"Target-Account: {self.target_url} und Source-ID: {self.source_id}"
class Targets_done(db.Model):
__tablename__ = 'targets_done'
id = db.Column(db.Integer, primary_key=True)
source_id = db.Column(db.Integer, db.ForeignKey('source.id'))
target_url = db.Column(db.String(100), index=True)
target_abonnenten = db.Column(db.Integer)
target_abonniert = db.Column(db.Integer)
match = db.Column(db.String(10))
datum_bearbeitet = db.Column(db.DateTime, default=datetime.utcnow)
pics_liked = db.Column(db.Integer)
followed = db.Column(db.DateTime)
unfollowed = db.Column(db.DateTime)
followed_back = db.Column(db.DateTime)
t5_indicator = db.Column(db.String(3))
t1_indicator = db.Column(db.String(3))
t5_timestamp = db.Column(db.DateTime)
t1_timestamp = db.Column(db.DateTime)
def __init__(self, target_url, target_abonnenten, target_abonniert, source_id):
self.target_url = target_url
self.target_abonnenten = target_abonnenten
self.target_abonniert = target_abonniert
self.source_id = source_id
def __repr__(self):
return f"Target-URL: {self.target_url} bearbeitet am {self.datum_bearbeitet}, Anzahl Abonnenten: {self.target_abonnenten}, Anzahl Abonniert: {self.target_abonniert}"
class Statistiken(db.Model):
__tablename__ = "statistik"
id = db.Column(db.Integer, primary_key=True)
source_id = db.Column(db.Integer)
targets_total = db.Column(db.Integer)
pics_liked = db.Column(db.Integer)
followed = db.Column(db.Integer)
unfollowed = db.Column(db.Integer)
followed_back = db.Column(db.Integer)
def __init__(self, source_id, targets_total):
self.source_id = source_id
self.targets_total = targets_total
class Counter(db.Model):
__tablename__ = "counter"
datum = db.Column(db.DateTime, default=lambda: datetime.now().date(), primary_key=True)
like_counter = db.Column(db.Integer)
follow_counter = db.Column(db.Integer)
class Blacklist(db.Model):
__tablename__ = "blacklist"
id = db.Column(db.Integer, primary_key=True)
url = db.Column(db.String(100))
datum = db.Column(db.DateTime, default=lambda: datetime.now().date())
def __init__(self, url):
self.url = url
class Historical_follower(db.Model):
__tablename__ = "historical_follower"
id = db.Column(db.Integer, primary_key=True)
target_url = db.Column(db.String(100))
datum = db.Column(db.DateTime, default=lambda: datetime.now().date())
def __init__(self, target_url):
self.target_url = target_url
class Tasks(db.Model):
__tablename__ = "tasks"
task_id = db.Column(db.String(72), primary_key=True)
task_type = db.Column(db.String(21))
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
taskid = db.relationship('Taskstatus', backref="status")
def __init__(self, task_id, task_type):
self.task_id = task_id
self.task_type = task_type
class Taskstatus(db.Model):
__tablename__ = "taskstatus"
id = db.Column(db.Integer, primary_key=True)
taskid = db.Column(db.String(72), db.ForeignKey('tasks.task_id'))
target_url = db.Column(db.String(100))
check0 = db.Column(db.String(100))
check1 = db.Column(db.String(100))
check2 = db.Column(db.String(100))
check3 = db.Column(db.String(100))
check4 = db.Column(db.String(100))
check5 = db.Column(db.String(100))
check6 = db.Column(db.String(100))
match = db.Column(db.String(4))
followed = db.Column(db.DateTime)
unfollowed = db.Column(db.DateTime)
pics_liked = db.Column(db.Integer)
t5_timestamp = db.Column(db.DateTime)
t1_timestamp = db.Column(db.DateTime)
def __init__(self, target_url):
self.target_url = target_url
# === tests/test_nest_MPI_threading/check_result.py (mfahdaz/TVB-NEST, Apache-2.0) ===
# Copyright 2020 Forschungszentrum Jülich GmbH and Aix-Marseille Université
# "Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0. "
import numpy as np
import sys
import os
import copy
def get_data(path,nb_mpi,labels=['senders','times']):
'''
    Collect the data recorded during the tests.
    :param path: path of the data files
    :param nb_mpi: number of MPI processes for Nest
    :param labels: labels of the recording device
    '''
    # gather the data from all the ranks
datas =[]
for i in range(nb_mpi):
datas.append(np.load(path + '_rank_' + str(i) + '.npy',allow_pickle=True))
unite = {}
for i in labels:
unite[i]=np.array([])
data_concatenate = [copy.deepcopy(unite) for i in range(len(datas[0]))]
for i in range(len(datas[0])):
for data in datas:
for label in labels:
data_concatenate[i][label] = np.concatenate([data_concatenate[i][label],data[i][label]])
return data_concatenate
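`get_data` above merges the per-rank NumPy recordings into one dict per device. The same merge, sketched with plain lists instead of NumPy arrays (the helper name `merge_ranks` is hypothetical):

```python
def merge_ranks(rank_datas, labels=('senders', 'times')):
    # rank_datas[rank][device][label] -> merged[device][label]
    merged = [{label: [] for label in labels} for _ in rank_datas[0]]
    for data in rank_datas:
        for device, recording in enumerate(data):
            for label in labels:
                merged[device][label].extend(recording[label])
    return merged

rank0 = [{'senders': [1], 'times': [0.1]}]
rank1 = [{'senders': [2], 'times': [0.2]}]
merged = merge_ranks([rank0, rank1])
assert merged == [{'senders': [1, 2], 'times': [0.1, 0.2]}]
```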
def check_recorder(path,nb_mpi,nb_mpi_recorder,devices,separate,index_device,nb_element):
"""
    Test the data generated by the recorders.
    :param path: path of the files
    :param nb_mpi: number of MPI processes
    :param nb_mpi_recorder: number of MPI recorders
    :param devices: number of devices
    :param separate: whether the reference recorders are separate
    :param index_device: the index of the device (used to determine the type of test)
    :param nb_element: the number of elements (used to determine the type of test)
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path + '/recorders_memory' , nb_mpi)
for index,index_recorder in enumerate(range(1,nb_mpi_recorder+1)):
data_mpi = np.concatenate(np.load(path + 'recording_mpi_' + str(index_recorder) + '.npy',allow_pickle=True))
        if len(data_mpi) != len(data_memory[index]['times']): # case where the sizes of the data differ
print('test Failed')
else:
if len(data_mpi)!=0: # case where there are data
for index_data,d in enumerate(data_mpi):
                    if d[2] - data_memory[index]['times'][index_data] != 0: # case where the times differ
print('test Fail')
break
if separate and ((index_device != -1 and index % nb_element == index_device) or index_device == -2): # only for the spike generator will break
if d[1]+devices - data_memory[index]['senders'][index_data] != 0: # case if the spike_generator has different id
print('test Fail')
break
else:
if d[1] - data_memory[index]['senders'][index_data] != 0: # check if the id of the generator of spike is the same
print('test Fail')
break
if index_data == len(data_mpi)-1: # test if all the data are tested
print('test Succeed')
else:
print('test Succeed')
def check_GS_parrot(path,nb_mpi,parrot,separate):
"""
    Test the spike generator with parrot nodes.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param parrot: number of parrot neurons
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GS_parrot',nb_mpi)
if len(data_memory) != 0: # if not data don't test
index = np.unique(data_memory[0]['senders'])
if separate:
data_memory_2 = get_data(path+'GS_parrot_bis',nb_mpi)
times = np.unique(data_memory[0]['times'])
count =0
for t in times:
if separate: # difference case for the reference
if len(np.where(data_memory_2[0]['times'] == t)[0]) != parrot: # test if the result is on number of parrot neurons
count += 1 # case when there are two spikes
if len(np.where(data_memory_2[0]['times'] == t)[0]) % parrot != 0:
print('test Fail') # other case failed
else: # same as previously
for i in index:
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) != 2:
count +=1
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) % 2 !=0:
print('test Fail')
if count > len(data_memory[0]['times'])/parrot/2/10: # 10% can be repeated spikes
print('test Fail')
print('test succeed')
else:
print('skip test')
def check_GS_spike_device(path,nb_mpi,spike_generator,separate):
"""
    Test the spike generator with devices.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param spike_generator: number of spike generators
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GS_spike_device',nb_mpi)
if len(data_memory) != 0: # if not data don't test
index = np.unique(data_memory[0]['senders'])
if separate:
data_memory_2 = get_data(path+'GS_spike_device_bis',nb_mpi)
times = np.unique(data_memory[0]['times'])
count =0
for t in times:
if separate: # difference case for the reference
if len(np.where(data_memory_2[0]['times'] == t)[0]) != 1: # test if the result is present
count += 1
if len(np.where(data_memory_2[0]['times'] == t)[0]) != 2: # case when there are two spikes
print('test Fail ') # other case failed
else: # same as previously
for i in index:
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) != 1:
count +=1
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) != 2:
print('test Fail')
if count > len(data_memory[0]['times'])/spike_generator/2/10: # 10% can be repeated element
print('test Fail')
print('test succeed')
else:
print('skip test')
def check_GS_spike_neuron(path,nb_mpi,neurons,separate):
"""
    Test the spike generator with neuron models.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param neurons: number of neurons
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GS_spike_neuron',nb_mpi)
if len(data_memory) != 0: # if not data don't test
index = np.unique(data_memory[0]['senders'])
if separate:
data_memory_2 = get_data(path+'GS_spike_neuron_bis',nb_mpi)
times = np.unique(data_memory[0]['times'])
count =0
for t in times:
if separate: # difference case for the reference
if len(np.where(data_memory_2[0]['times'] == t)[0]) != 1: # test if the result is present WARNING : need to add the case of 2 spikes
print('test Fail ')
else: # same as previously
for i in index:
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) != 1: # test if the result is present
count +=1
if len(np.where(np.logical_and(data_memory[0]['times']==t,data_memory[0]['senders']==i))[0]) != 2: # case when there are two spikes
print('test Fail ') # other case failed
if count > len(data_memory[0]['times'])/2/neurons/10: # 10% can be repeated element
print('test Fail')
print('test succeed')
else:
print('skip test')
def check_GC_parrot(path,nb_mpi,parrot,separate):
"""
    Test the current generator with parrot nodes.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param parrot: number of parrot neurons
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GC_parrot',nb_mpi,labels=['V_m','senders','times'])
if len(data_memory) != 0: # if not data don't test
if separate:
data_memory_2 = get_data(path+'GC_parrot_bis',nb_mpi,labels=['V_m','senders','times'])
times = np.unique(data_memory[0]['times'])
if separate: # difference case for the reference
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0] # check if the time are the same
if len(time_index) != parrot:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0] # check if the value are the same
for i in data_memory_2[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
else: # same as previously
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0]
if len(time_index) != parrot:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0]
for i in data_memory[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
print('test succeed')
else:
print('skip test')
def check_GC_spike_device(path,nb_mpi,spike_generator,separate):
"""
    Test the current generator with devices.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param spike_generator: number of spike generators
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GC_device',nb_mpi,labels=['V_m','senders','times'])
if len(data_memory) != 0: # if not data don't test
if separate:
data_memory_2 = get_data(path+'GC_device_bis',nb_mpi,labels=['V_m','senders','times'])
times = np.unique(data_memory[0]['times'])
if separate: # difference case for the reference
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0] # check if the time are the same
if len(time_index) != spike_generator:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0] # check if the value are the same
for i in data_memory_2[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
else: # same as previously
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0]
if len(time_index) != spike_generator:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0]
for i in data_memory[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
print('test succeed')
else:
print('skip test')
def check_GC_spike_neuron(path,nb_mpi,neurons,separate):
"""
    Test the current generator with neuron models.
    :param path: path of the files
    :param nb_mpi: number of MPI processes for Nest
    :param neurons: number of neurons
    :param separate: whether the reference generators are separate
    :return: #TODO return whether the test succeeded and, if it failed, which part failed
"""
data_memory = get_data(path+'GC_neuron',nb_mpi,labels=['V_m','senders','times'])
if len(data_memory) != 0: # if not data don't test
if separate:
data_memory_2 = get_data(path+'GC_neuron_bis',nb_mpi,labels=['V_m','senders','times'])
times = np.unique(data_memory[0]['times'])
if separate: # difference case for the reference
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0] # check if the time are the same
if len(time_index) != neurons:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0] # check if the value are the same
for i in data_memory_2[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
else: # same as previously
for t in times:
time_index = np.where(data_memory[0]['times']==t)[0]
if len(time_index) != neurons:
print('test Fail')
init_val = data_memory[0]['V_m'][time_index][0]
for i in data_memory[0]['V_m'][time_index]:
if i != init_val:
print('test Fail')
print('test succeed')
else:
print('skip test')
def check(path,nb_VP,nb_mpi,nb_run, time_sim,
spike_generator=0,parrot=0,iaf=0,
nb_mpi_recorder=0,separate=False,
nb_mpi_generator_spike=0,nb_mpi_generator_current=0,shared_mpi_input=False,
mix_mpi=0,
):
"""
    Run all the tests for one simulation case.
:param path: path of the folder
:param nb_VP: number of virtual process
:param nb_mpi: number of mpi rank
:param nb_run: number of run
:param time_sim: time of 1 run
:param spike_generator: number of device
:param parrot: number of parrot neurons
:param iaf: number of neurons
:param separate: separation or not of the reference case
:param nb_mpi_recorder: number of mpi recorder
:param nb_mpi_generator_spike: number of spike_generator
:param nb_mpi_generator_current: number of current generator
:param shared_mpi_input: if the node share input or not
:param mix_mpi: different case #TODO not yet implemented
:return:
"""
print(nb_VP,nb_mpi,nb_run, time_sim,
spike_generator,parrot,iaf,
nb_mpi_recorder,separate,
nb_mpi_generator_spike,nb_mpi_generator_current,shared_mpi_input,
mix_mpi); sys.stdout.flush()
# name of the file
name = "nb_VP_"+str(nb_VP)+"nb_mpi"+str(nb_mpi)\
+'_D_'+str(int(spike_generator))+'_P_'+str(int(parrot))+'_N_'+str(int(iaf))\
+'_R_'+str(nb_mpi_recorder)+'_Separate_'+str(int(separate))\
+'_GS_'+str(nb_mpi_generator_spike) +'_GC_'+str(nb_mpi_generator_current)\
+'_SH_'+str(int(shared_mpi_input))+'_SM_'+str(mix_mpi)+'/'
#TODO check the name is corresponding
path = path+'/'
# compute index and nb element for distinguish the different tests
nb_element = int(parrot>0) + int(spike_generator>0) + int(iaf>0)
if shared_mpi_input:
if nb_mpi_recorder != 0 and nb_mpi_recorder % nb_element !=0:
raise Exception('Miss nb recorder')
if nb_mpi_generator_spike != 0 and nb_mpi_generator_spike % nb_element !=0:
raise Exception('Miss nb spike generator')
if nb_mpi_generator_current != 0 and nb_mpi_generator_current % nb_element !=0:
raise Exception('Miss nb current generator')
index_parrot =-2
index_device = -2
index_aif = -2
else:
if parrot > 0 :
index_parrot = 0
if spike_generator > 0:
index_device = 1
if iaf > 0:
index_aif = 2
else:
index_aif = -1
else:
index_device = -1
if iaf > 0:
index_aif = 1
else:
index_aif = -1
else:
index_parrot = -1
if spike_generator > 0:
index_device = 0
if iaf > 0:
index_aif = 1
else:
index_aif = -1
else:
index_device = -1
if iaf > 0:
index_aif = 0
else:
index_aif = -1
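The branching above maps element counts to test indices: `-2` means the nodes share MPI input, `-1` means the element is absent, and otherwise the index is the element's position among the present elements. A pure-function sketch of the same logic (the helper name is hypothetical):

```python
def compute_indices(parrot, spike_generator, iaf, shared_mpi_input):
    # Mirrors the if/else chain in check(): -2 = shared input, -1 = absent,
    # otherwise the element's rank among the present elements.
    if shared_mpi_input:
        return -2, -2, -2
    index_parrot = 0 if parrot > 0 else -1
    present = [parrot > 0, spike_generator > 0, iaf > 0]
    index_device = present[:1].count(True) if spike_generator > 0 else -1
    index_aif = present[:2].count(True) if iaf > 0 else -1
    return index_parrot, index_device, index_aif

assert compute_indices(2, 2, 2, False) == (0, 1, 2)
assert compute_indices(0, 2, 2, False) == (-1, 0, 1)
assert compute_indices(2, 0, 2, False) == (0, -1, 1)
assert compute_indices(0, 0, 2, False) == (-1, -1, 0)
assert compute_indices(2, 2, 2, True) == (-2, -2, -2)
```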
# run all the tests
check_recorder(path,nb_mpi,nb_mpi_recorder,spike_generator,separate,index_device,nb_element)
check_GS_parrot(path,nb_mpi,parrot,separate)
check_GS_spike_device(path,nb_mpi,spike_generator,separate)
check_GS_spike_neuron(path,nb_mpi,iaf,separate)
check_GC_parrot(path,nb_mpi,parrot,separate)
check_GC_spike_device(path,nb_mpi,spike_generator,separate)
check_GC_spike_neuron(path,nb_mpi,iaf,separate)
if __name__ == "__main__":
# print('test 1')
# check(1 ,1 ,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(16,1 ,3,200.0,2,2,2,False,2,2,2,False,0)
# print('test 1')
# check(16,1 ,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(16,2 ,3,200.0,2,2,2,False,2,2,2,False,0)
# print('test 1')
# check(16,2 ,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(16,4 ,3,200.0,2,2,2,False,2,2,2,False,0)
# print('test 1')
# check(16,4 ,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(16,8 ,3,200.0,2,2,2,False,2,2,2,False,0)
# print('test 1')
# check(16,16,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(16,16,3,200.0,2,2,2,False,2,2,2,False,0)
# print('test 1')
# check(16,8 ,3,200.0,2,2,2,True ,2,2,2,False,0)
# print('test 1')
# check(1 ,1 ,3,200.0,2,2,2,True ,6,6,6,False,0)
# print('test 1')
# check(16,1 ,3,200.0,2,2,2,False,6,6,6,False,0)
# print('test 1')
# check(16,1 ,3,200.0,2,2,2,True ,6,6,6,False,0)
# print('test 1')
# check(16,2 ,3,200.0,2,2,2,False,6,6,6,False,0)
# print('test 1')
# check(16,2 ,3,200.0,2,2,2,True ,6,6,6,False,0)
# print('test 1')
# check(16,4 ,3,200.0,2,2,2,False,6,6,6,False,0)
# print('test 1')
# check(16,4 ,3,200.0,2,2,2,True ,6,6,6,False,0)
# print('test 1')
# check(16,8 ,3,200.0,2,2,2,False,6,6,6,False,0)
# print('test 1')
# check(16,16,3,200.0,2,2,2,True ,6,6,6,False,0)
# print('test 1')
# check(16,16,3,200.0,2,2,2,False,6,6,6,False,0)
# print('test 1')
# check(16,8 ,3,200.0,2,2,2,True ,6,6,6,False,0)
if len(sys.argv) == 15:
check(sys.argv[1],int(sys.argv[2]),int(sys.argv[3]),int(sys.argv[4]),float(sys.argv[5]),
int(sys.argv[6]),int(sys.argv[7]),int(sys.argv[8]),
int(sys.argv[9]),bool(int(sys.argv[10])),
int(sys.argv[11]),int(sys.argv[12]),bool(int(sys.argv[13])),
int(sys.argv[14])
)
else:
        print('bad number of arguments')
# === ABetterProposal/db/experiments_db.py (ebord/ABetterProposal, MIT) ===
from flask import (
Blueprint, flash, g, redirect, render_template, request, session, url_for
)
# Blueprint Configuration
db_bp = Blueprint('db_bp', __name__,
template_folder='templates',
static_folder='static')
# this will be the experiment db dashboard page, only accessible to logged-in users
@db_bp.route('/experiment_db_dashboard')
def experiment_db_dashboard():
    # Check if the user is logged in
    if 'loggedin' in session:
        # User is logged in; show them the experiment db dashboard page
        return render_template('experiment_db_dashboard.html', username=session['username'])
    # User is not logged in; redirect to the login page
    return redirect(url_for('auth_bp.login'))
# this will be the experiment db proposals page, only accessible to logged-in users
@db_bp.route('/experiment_db_proposals')
def experiment_db_proposals():
    # Check if the user is logged in
    if 'loggedin' in session:
        # User is logged in; show them the experiment db proposals page
        return render_template('experiment_db_proposals.html', username=session['username'])
    # User is not logged in; redirect to the login page
    return redirect(url_for('auth_bp.login'))
# this will be the experiment db plans page, only accessible to logged-in users
@db_bp.route('/experiment_db_plans')
def experiment_db_plans():
    # Check if the user is logged in
    if 'loggedin' in session:
        # User is logged in; show them the experiment db plans page
        return render_template('experiment_db_plans.html', username=session['username'])
    # User is not logged in; redirect to the login page
    return redirect(url_for('auth_bp.login'))
# this will be the experiment db teams page, only accessible to logged-in users
@db_bp.route('/experiment_db_teams')
def experiment_db_teams():
    # Check if the user is logged in
    if 'loggedin' in session:
        # User is logged in; show them the experiment db teams page
        return render_template('experiment_db_teams.html', username=session['username'])
    # User is not logged in; redirect to the login page
    return redirect(url_for('auth_bp.login'))
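The four view functions above repeat the same `'loggedin' in session` guard. A hedged sketch of factoring that check into a decorator — the dicts and return strings below are illustrative stand-ins for Flask's `session`, `render_template`, and `redirect`, not the app's real API:

```python
from functools import wraps

def login_required(view):
    @wraps(view)
    def wrapped(session, *args, **kwargs):
        if 'loggedin' in session:
            return view(session, *args, **kwargs)
        return 'redirect:auth_bp.login'   # stands in for redirect(url_for(...))
    return wrapped

@login_required
def dashboard(session):
    return 'dashboard for ' + session['username']

assert dashboard({'loggedin': True, 'username': 'ada'}) == 'dashboard for ada'
assert dashboard({}) == 'redirect:auth_bp.login'
```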
# === bnn_mcmc_examples/examples/mlp/noisy_xor/setting1/optim/sgd/constants.py (papamarkou/bnn_mcmc_examples, MIT) ===
# %% Import packages
from bnn_mcmc_examples.examples.mlp.noisy_xor.setting1.constants import output_path
# %% Define optimizer-specific output directories
optimizer_output_path = output_path.joinpath('sgd')
optimizer_output_pilot_path = optimizer_output_path.joinpath('pilot_run')
optimizer_output_benchmark_path = optimizer_output_path.joinpath('benchmark_run')
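`joinpath` above builds the run directories under the optimizer directory; for reference, it behaves like the `/` operator on `pathlib.Path` objects (the `output` base below is a placeholder for the imported constant):

```python
from pathlib import Path

output_path = Path('output')                        # placeholder for the imported constant
optimizer_output_path = output_path.joinpath('sgd')

assert optimizer_output_path == output_path / 'sgd'
assert optimizer_output_path.joinpath('pilot_run') == Path('output/sgd/pilot_run')
```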
# === easyxsd/utils.py (gnrfan/python-easyxsd, BSD-3-Clause) ===
from lxml import etree
def xml_from_string(xmlstr):
"""
Returns an lxml.etree._ElementTree object from a string
containing a valid XML document.
"""
try:
return etree.XML(str(xmlstr).strip())
except etree.XMLSyntaxError:
return None
def xml_from_file(filepath):
"""
Returns an lxml.etree._ElementTree object from a file
containing a valid XML document.
"""
try:
return etree.parse(filepath)
except etree.XMLSyntaxError:
return None
def xsd_from_string(xsdstr):
"""
Returns an lxml.etree.XMLSchema object from a string
    containing a valid XSD document.
"""
try:
xml = etree.XML(str(xsdstr).strip())
return etree.XMLSchema(xml)
except etree.XMLSyntaxError:
return None
def xsd_from_file(filepath):
"""
Returns an lxml.etree.XMLSchema object from a file
    containing a valid XSD document.
"""
try:
xml = etree.parse(filepath)
return etree.XMLSchema(xml)
except etree.XMLSyntaxError:
return None
def validate(xml, xsd):
"""
Receives an lxml.etree._ElementTree object as first parameter
and an lxml.etree.XMLSchema object as second parameter and
returns True or False respectively as the XSD validation of the
XML succeeds or fails.
"""
return xsd.validate(xml)
def validate_from_strings(xmlstr, xsdstr):
"""
Receives a string containing a valid XML document as first parameter
and another string containing a valid XSD document as second parameter
and validates the first according to the latter returning True or False
respectively as the validation succeeds or fails.
"""
xml = xml_from_string(xmlstr)
xsd = xsd_from_string(xsdstr)
return validate(xml, xsd)
def validate_from_files(xmlfilepath, xsdfilepath):
"""
Receives a string with a file path to a valid XML document
as first parameter and another string with a file path to a valid
XSD document as second parameter and validates the first according
to the latter returning True or False respectively as the validation
succeeds or fails.
"""
xml = xml_from_file(xmlfilepath)
xsd = xsd_from_file(xsdfilepath)
return validate(xml, xsd)
def validate_xml_string_from_xsd_file(xmlstr, xsdfilepath):
"""
Validates a string containing an XML document as the first parameter
with an XSD document contained in the file path passed as the
second parameter.
"""
xml = xml_from_string(xmlstr)
xsd = xsd_from_file(xsdfilepath)
return validate(xml, xsd)
def validate_with_errors(xml, xsd):
"""
Returns a tuple with a boolean product of the XSD validation as
the first element and the error log object as the second element.
"""
validation = xsd.validate(xml)
return (validation, xsd.error_log, )
def xsd_error_as_simple_string(error):
"""
Returns a string based on an XSD error object with the format
LINE:COLUMN:LEVEL_NAME:DOMAIN_NAME:TYPE_NAME:MESSAGE.
"""
parts = [
error.line,
error.column,
error.level_name,
error.domain_name,
error.type_name,
error.message
]
return ':'.join([str(item) for item in parts])
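`xsd_error_as_simple_string` reads six attributes off an lxml error-log entry; the formatting itself can be exercised without a real schema by mocking those attributes (the values below are illustrative):

```python
from types import SimpleNamespace

# Mock of an lxml error-log entry exposing the attributes the formatter uses.
error = SimpleNamespace(line=3, column=17, level_name='ERROR',
                        domain_name='SCHEMASV', type_name='SCHEMAV_CVC_ELT_1',
                        message='No matching global declaration available')

parts = [error.line, error.column, error.level_name,
         error.domain_name, error.type_name, error.message]
formatted = ':'.join(str(item) for item in parts)
assert formatted == '3:17:ERROR:SCHEMASV:SCHEMAV_CVC_ELT_1:No matching global declaration available'
```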
def xsd_error_log_as_simple_strings(error_log):
"""
Returns a list of strings representing all the errors of an XSD
error log object.
"""
return [xsd_error_as_simple_string(e) for e in error_log]
# === oauth/models.py (zhxiaohe/starwars_api, MIT) ===
from app import db
from passlib.apps import custom_app_context as pwd_context
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64))
password = db.Column(db.String(120))
role = db.relationship('Role', backref='users', lazy='dynamic')
def hash_password(self, password):
self.password = pwd_context.encrypt(password)
def verify_password(self, password):
return pwd_context.verify(password, self.password)
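`pwd_context` above comes from passlib; the same hash-then-verify pattern can be sketched with only the standard library. The PBKDF2 parameters below are illustrative, not passlib's defaults:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # Derive a salted digest; a fresh random salt is generated when none is given.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password('s3cret')
assert verify_password('s3cret', salt, digest)
assert not verify_password('wrong', salt, digest)
```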
Roles_Perms = db.Table('roles_perms',
db.Column('role_id', db.Integer, db.ForeignKey('roles.id')),
db.Column('perm_id', db.Integer, db.ForeignKey('perms.id'))
)
class Role(db.Model):
__tablename__ = 'roles'
id = db.Column(db.Integer, primary_key=True)
rolename = db.Column(db.String(64))
userid = db.Column(db.Integer, db.ForeignKey('users.id'))
perms = db.relationship('Perm', secondary=Roles_Perms, backref='roles')
class Perm(db.Model):
__tablename__ = 'perms'
id = db.Column(db.Integer, primary_key=True)
menu = db.Column(db.String(120))
type = db.Column(db.Integer)
uri = db.Column(db.String(120))
method = db.Column(db.String(20))
icon = db.Column(db.String(120))
pid = db.Column(db.Integer)
# === app/utils.py (VeraButler/resources_api, MIT) ===
class Paginator:
def __init__(self, configuration, request):
self.configuration = configuration
self.page = request.args.get('page', 1, type=int)
self.page_size = request.args.get('page_size', configuration.per_page, type=int)
if self.page_size > configuration.max_page_size:
self.page_size = configuration.max_page_size
def items(self, query):
return query.paginate(self.page, self.page_size, False).items
| 36.153846 | 88 | 0.689362 | 61 | 470 | 5.081967 | 0.360656 | 0.180645 | 0.154839 | 0.116129 | 0.232258 | 0.232258 | 0.232258 | 0 | 0 | 0 | 0 | 0.002674 | 0.204255 | 470 | 12 | 89 | 39.166667 | 0.826203 | 0 | 0 | 0 | 0 | 0 | 0.02766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0.111111 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
# === apphelpers/socialauth/fb.py (MilindSuryawanshi/apphelpers, MIT) ===
import os
from requests_oauthlib import OAuth2Session
try:
from converge import settings
except ImportError:
import settings
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'True'
def fetch_info(access_token):
session = OAuth2Session(token={'access_token': access_token})
info_url = 'https://graph.facebook.com/me?fields=' + settings.FB_USER_FIELDS
return session.get(info_url).json()
| 26.133333 | 80 | 0.760204 | 51 | 392 | 5.627451 | 0.627451 | 0.114983 | 0.111498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0059 | 0.135204 | 392 | 14 | 81 | 28 | 0.840708 | 0 | 0 | 0 | 0 | 0 | 0.204082 | 0.068878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
# === utils/validate.py (umd-mith/twarc, CC0-1.0) ===
#!/usr/bin/env python
import sys
import json
import fileinput
import dateutil.parser
line_number = 0
for line in fileinput.input():
line_number += 1
try:
tweet = json.loads(line)
except Exception as e:
sys.stderr.write("invalid JSON (%s) line %s: %s" % (e, line_number, line))
| 30.8125 | 199 | 0.411765 | 46 | 493 | 4.347826 | 0.608696 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008368 | 0.515213 | 493 | 15 | 200 | 32.866667 | 0.828452 | 0.040568 | 0 | 0 | 0 | 0 | 0.061441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
62596739203982959c523efef3b677c9e44b47e7 | 1,050 | py | Python | python/basis/16-function.py | weizhenwei/tech-docs-2016 | 253564a1633e9ec75ac94efede57f52c02b29280 | [
"BSD-2-Clause"
] | 3 | 2017-06-09T08:48:07.000Z | 2020-12-13T10:37:44.000Z | python/basis/16-function.py | weizhenwei/tech-docs-sharetome | 253564a1633e9ec75ac94efede57f52c02b29280 | [
"BSD-2-Clause"
] | null | null | null | python/basis/16-function.py | weizhenwei/tech-docs-sharetome | 253564a1633e9ec75ac94efede57f52c02b29280 | [
"BSD-2-Clause"
] | 4 | 2020-04-29T07:03:44.000Z | 2021-07-25T15:12:15.000Z | #!/usr/bin/env python
def printme(text):
    "print something input"
    print(text)

printme("Hello")

def changeme(items):
    "lists are passed by reference, so the caller sees the mutation"
    items.append([1, 2, 3, 4])
    print("In the function, list =", items)

items = [23, 34, 32, 12]
print("Out of the function, before call, list =", items)
changeme(items)
print("Out of the function, after call, list =", items)

# keyword arguments
def printinfo(name, age):
    "print out the name and age"
    print("Name:", name)
    print("Age:", age)

printinfo(age=50, name="Mike")
printinfo(name="Kay", age=45)

# default argument
def printinfo(name, age=28):
    "print out the name and age"
    print("Name:", name)
    print("Age:", age)

printinfo(name="Kay")

# variable-length arguments
def printinfo(name, *rest_parameters):
    "print the name followed by any extra positional arguments"
    print("Name:", name)
    for var in rest_parameters:
        print(var)

printinfo("Kay", 12, "female")

# lambda expression
add = lambda arg1, arg2: arg1 + arg2
print("1 + 1 =", add(1, 1))
| 15.671642 | 51 | 0.624762 | 148 | 1,050 | 4.418919 | 0.358108 | 0.061162 | 0.084098 | 0.114679 | 0.321101 | 0.235474 | 0.235474 | 0.235474 | 0.235474 | 0.183486 | 0 | 0.035533 | 0.249524 | 1,050 | 66 | 52 | 15.909091 | 0.794416 | 0.088571 | 0 | 0.371429 | 0 | 0 | 0.264211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.685714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
625a9c5018450b2235010c7b79f7df2a76a0f1e2 | 1,752 | py | Python | euler/problem090.py | brunorijsman/euler-problems-python | 8f5681a54795fc5859cee1c4ad38a995a530dc66 | [
"BSD-2-Clause"
] | 2 | 2018-06-25T17:54:55.000Z | 2020-05-13T06:10:15.000Z | euler/problem090.py | brunorijsman/euler-problems-python | 8f5681a54795fc5859cee1c4ad38a995a530dc66 | [
"BSD-2-Clause"
] | null | null | null | euler/problem090.py | brunorijsman/euler-problems-python | 8f5681a54795fc5859cee1c4ad38a995a530dc66 | [
"BSD-2-Clause"
] | null | null | null | # Euler problem 90: Cube digit pairs
def solve():
    # Find all possible dice with 6 digit faces chosen from the 10 digits
    options = []
    find_options(options, [])
    count = 0
    # Figure out which combinations of two dice can display every square
    for die1 in options:
        for die2 in options:
            if is_solution(die1, die2):
                count += 1
    # Divide by 2 because the two dice can be interchanged (symmetry)
    print(count // 2)

def find_options(options, die):
    if len(die) == 6:
        options.append(die)
        return
    if len(die) == 0:
        start = 0
    else:
        start = die[-1] + 1
    for d in range(start, 10):
        find_options(options, die + [d])

def is_solution(die1, die2):
    if not digits_match(die1, die2, 0, 1):
        return False
    if not digits_match(die1, die2, 0, 4):
        return False
    if not digits_match(die1, die2, 0, 9) and not digits_match(die1, die2, 0, 6):
        return False
    if not digits_match(die1, die2, 1, 6) and not digits_match(die1, die2, 1, 9):
        return False
    if not digits_match(die1, die2, 2, 5):
        return False
    if not digits_match(die1, die2, 3, 6) and not digits_match(die1, die2, 3, 9):
        return False
    if not digits_match(die1, die2, 4, 9) and not digits_match(die1, die2, 4, 6):
        return False
    if not digits_match(die1, die2, 6, 4) and not digits_match(die1, die2, 9, 4):
        return False
    if not digits_match(die1, die2, 8, 1):
        return False
    return True

def digits_match(die1, die2, digit1, digit2):
    if (digit1 in die1) and (digit2 in die2):
        return True
    if (digit2 in die1) and (digit1 in die2):
        return True
    return False

solve()
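The digit pairs hard-coded in is_solution are the nine two-digit squares 01, 04, 09, 16, 25, 36, 49, 64, 81, with 6 and 9 treated as interchangeable. A short sketch deriving those pairs rather than hand-listing them:

```python
# (tens, units) digit pairs of the squares n*n for n = 1..9.
squares = [(n * n // 10, n * n % 10) for n in range(1, 10)]
```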
| 26.149254 | 81 | 0.602169 | 266 | 1,752 | 3.890977 | 0.255639 | 0.131401 | 0.217391 | 0.275362 | 0.433816 | 0.433816 | 0.402899 | 0.278261 | 0.244444 | 0 | 0 | 0.07371 | 0.303082 | 1,752 | 66 | 82 | 26.545455 | 0.773956 | 0.123288 | 0 | 0.282609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0 | 0 | 0.391304 | 0.021739 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
627bb3ace21ba42e86189fe8ff453b9f2c909850 | 3,362 | py | Python | src/abaqus/Optimization/FixedRegion.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | 7 | 2022-01-21T09:15:45.000Z | 2022-02-15T09:31:58.000Z | src/abaqus/Optimization/FixedRegion.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | src/abaqus/Optimization/FixedRegion.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | from abaqusConstants import *
from .GeometricRestriction import GeometricRestriction
from ..Region.Region import Region
class FixedRegion(GeometricRestriction):
    """The FixedRegion object defines a fixed region geometric restriction.
    The FixedRegion object is derived from the GeometricRestriction object.

    Notes
    -----
    This object can be accessed by:

    .. code-block:: python

        import optimization
        mdb.models[name].optimizationTasks[name].geometricRestrictions[name]
    """

    def __init__(self, name: str, region: Region, csys: int = None,
                 presumeFeasibleRegionAtStart: Boolean = ON,
                 u1: Boolean = OFF, u2: Boolean = OFF, u3: Boolean = OFF):
        """This method creates a FixedRegion object.

        Notes
        -----
        This function can be accessed by:

        .. code-block:: python

            mdb.models[name].optimizationTasks[name].FixedRegion

        Parameters
        ----------
        name
            A String specifying the geometric restriction repository key.
        region
            A Region object specifying the region to which the geometric restriction is applied.
            When used with a TopologyTask, there is no default value. When used with a ShapeTask,
            the default value is MODEL.
        csys
            None or a DatumCsys object specifying the local coordinate system. If *csys*=None, the
            global coordinate system is used. When this member is queried, it returns an Int. The
            default value is None.
        presumeFeasibleRegionAtStart
            A Boolean specifying whether to ignore the geometric restriction in the first design
            cycle. The default value is ON.
        u1
            A Boolean specifying whether to fix the region in the 1-direction. The default value is
            OFF.
        u2
            A Boolean specifying whether to fix the region in the 2-direction. The default value is
            OFF.
        u3
            A Boolean specifying whether to fix the region in the 3-direction. The default value is
            OFF.

        Returns
        -------
        A FixedRegion object.
        """
        super().__init__()
        pass

    def setValues(self, csys: int = None, presumeFeasibleRegionAtStart: Boolean = ON,
                  u1: Boolean = OFF, u2: Boolean = OFF, u3: Boolean = OFF):
        """This method modifies the FixedRegion object.

        Parameters
        ----------
        csys
            None or a DatumCsys object specifying the local coordinate system. If *csys*=None, the
            global coordinate system is used. When this member is queried, it returns an Int. The
            default value is None.
        presumeFeasibleRegionAtStart
            A Boolean specifying whether to ignore the geometric restriction in the first design
            cycle. The default value is ON.
        u1
            A Boolean specifying whether to fix the region in the 1-direction. The default value is
            OFF.
        u2
            A Boolean specifying whether to fix the region in the 2-direction. The default value is
            OFF.
        u3
            A Boolean specifying whether to fix the region in the 3-direction. The default value is
            OFF.
        """
        pass
| 37.775281 | 111 | 0.617787 | 389 | 3,362 | 5.318766 | 0.239075 | 0.069599 | 0.079749 | 0.090382 | 0.646689 | 0.613823 | 0.613823 | 0.584824 | 0.584824 | 0.584824 | 0 | 0.007951 | 0.326591 | 3,362 | 88 | 112 | 38.204545 | 0.905919 | 0.68025 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.181818 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
62806e40976995ebdbe3640cbdccbd0e62311f6d | 2,989 | py | Python | problems/OF/auto/problem19_OF.py | sunandita/ICAPS_Summer_School_RAE_2020 | a496b62185bcfdd2c76eb7986ae99cfa85708d28 | [
"BSD-3-Clause"
] | 5 | 2020-10-15T14:40:03.000Z | 2021-08-20T17:45:41.000Z | problems/OF/auto/problem19_OF.py | sunandita/ICAPS_Summer_School_RAE_2020 | a496b62185bcfdd2c76eb7986ae99cfa85708d28 | [
"BSD-3-Clause"
] | null | null | null | problems/OF/auto/problem19_OF.py | sunandita/ICAPS_Summer_School_RAE_2020 | a496b62185bcfdd2c76eb7986ae99cfa85708d28 | [
"BSD-3-Clause"
] | 2 | 2020-10-15T07:06:14.000Z | 2020-10-15T17:33:01.000Z | __author__ = 'mason'
from domain_orderFulfillment import *
from timer import DURATION
from state import state
import numpy as np
'''
This is a randomly generated problem
'''
def GetCostOfMove(id, r, loc1, loc2, dist):
    return 1 + dist

def GetCostOfLookup(id, item):
    return max(1, np.random.beta(2, 2))

def GetCostOfWrap(id, orderName, m, item):
    return max(1, np.random.normal(5, .5))

def GetCostOfPickup(id, r, item):
    return max(1, np.random.normal(4, 1))

def GetCostOfPutdown(id, r, item):
    return max(1, np.random.normal(4, 1))

def GetCostOfLoad(id, orderName, r, m, item):
    return max(1, np.random.normal(3, .5))
DURATION.TIME = {
    'lookupDB': GetCostOfLookup,
    'wrap': GetCostOfWrap,
    'pickup': GetCostOfPickup,
    'putdown': GetCostOfPutdown,
    'loadMachine': GetCostOfLoad,
    'moveRobot': GetCostOfMove,
    'acquireRobot': 1,
    'freeRobot': 1,
    'wait': 5,
}
DURATION.COUNTER = {
    'lookupDB': GetCostOfLookup,
    'wrap': GetCostOfWrap,
    'pickup': GetCostOfPickup,
    'putdown': GetCostOfPutdown,
    'loadMachine': GetCostOfLoad,
    'moveRobot': GetCostOfMove,
    'acquireRobot': 1,
    'freeRobot': 1,
    'wait': 5,
}
rv.LOCATIONS = [0, 1, 2, 3, 4, 5, 200]
rv.FACTORY1 = frozenset({0, 1, 2, 3, 4, 5, 200})
rv.FACTORY_UNION = rv.FACTORY1
rv.SHIPPING_DOC = {rv.FACTORY1: 0}
rv.GROUND_EDGES = {0: [1, 5, 200, 2, 4], 1: [2, 4, 5, 0, 3, 200], 2: [0, 1, 4, 5], 3: [1], 4: [0, 1, 2, 5], 5: [1, 2, 0, 4], 200: [1, 0]}
rv.GROUND_WEIGHTS = {(0, 1): 6.499229647088665, (0, 5): 2.692119274311481, (0, 200): 6.2181264795712705, (0, 2): 7.121374187150064, (0, 4): 7.908766557240531, (1, 2): 10.484258297196071, (1, 4): 2.8722782433551934, (1, 5): 4.117098308924607, (1, 3): 7.129538742746605, (1, 200): 8.245597546318098, (2, 4): 9.026732394538875, (2, 5): 5.704832854262499, (4, 5): 11.599770968738499}
rv.ROBOTS = { 'r0': rv.FACTORY1, }
rv.ROBOT_CAPACITY = {'r0': 4.599413029987371}
rv.MACHINES = { 'm0': rv.FACTORY1, 'm1': rv.FACTORY1, 'm2': rv.FACTORY1, }
rv.PALLETS = { 'p0', 'p1', 'p2', }
def ResetState():
    state.OBJECTS = { 'o0': True, 'o1': True, 'o2': True, 'o3': True, 'o4': True, 'o5': True, 'o6': True, }
    state.OBJ_WEIGHT = {'o0': 4.599413029987371, 'o1': 4.599413029987371, 'o2': 3.4035481992311962, 'o3': 4.599413029987371, 'o4': 4.599413029987371, 'o5': 4.599413029987371, 'o6': 4.599413029987371}
    state.OBJ_CLASS = {'type0': ['o0', 'o1'], 'type1': ['o2'], 'type2': ['o3', 'o4', 'o5', 'o6']}
    state.loc = { 'r0': 1, 'm0': 5, 'm1': 4, 'm2': 1, 'p0': 2, 'p1': 1, 'p2': 200, 'o0': 200, 'o1': 3, 'o2': 4, 'o3': 200, 'o4': 0, 'o5': 200, 'o6': 0,}
    state.load = { 'r0': NIL,}
    state.busy = {'r0': False, 'm0': False, 'm1': False, 'm2': False}
    state.numUses = {'m0': 10, 'm1': 13, 'm2': 9}
    state.var1 = {'temp': 'r0', 'temp1': 'r0', 'temp2': 1, 'redoId': 0}
    state.shouldRedo = {}
tasks = {
    4: [['orderStart', ['type0']]],
    6: [['orderStart', ['type0']]],
}
eventsEnv = {
} | 35.164706 | 379 | 0.600201 | 416 | 2,989 | 4.283654 | 0.317308 | 0.039282 | 0.036476 | 0.039282 | 0.274972 | 0.274972 | 0.262626 | 0.262626 | 0.217733 | 0.217733 | 0 | 0.225952 | 0.18267 | 2,989 | 85 | 380 | 35.164706 | 0.503479 | 0 | 0 | 0.307692 | 1 | 0 | 0.107264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107692 | false | 0 | 0.061538 | 0.092308 | 0.261538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
65649872be0991246314d5df6efcac9933c9cfaa | 1,102 | py | Python | src/sims4communitylib/_vanilla_fixes/_sim_full_name.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 118 | 2019-08-31T04:33:18.000Z | 2022-03-28T21:12:14.000Z | src/sims4communitylib/_vanilla_fixes/_sim_full_name.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 15 | 2019-12-05T01:29:46.000Z | 2022-02-18T17:13:46.000Z | src/sims4communitylib/_vanilla_fixes/_sim_full_name.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 28 | 2019-09-07T04:11:05.000Z | 2022-02-07T18:31:40.000Z | """
The Sims 4 Community Library is licensed under the Creative Commons Attribution 4.0 International public license (CC BY 4.0).
https://creativecommons.org/licenses/by/4.0/
https://creativecommons.org/licenses/by/4.0/legalcode
Copyright (c) COLONOLNUTTY
"""
# The purpose of this file is to fix the fact that when trying to access the "full_name" attribute on Sims an empty string is returned.
# noinspection PyBroadException
from sims.sim_info import SimInfo
from sims4communitylib.modinfo import ModInfo
from sims4communitylib.utils.common_injection_utils import CommonInjectionUtils
from sims4communitylib.utils.sims.common_sim_name_utils import CommonSimNameUtils
from sims4communitylib.utils.sims.common_sim_utils import CommonSimUtils
@CommonInjectionUtils.inject_safely_into(ModInfo.get_identity(), SimInfo, 'full_name')
def _common_fix_full_name_returning_empty_string(original, self: SimInfo, *_, **__):
    original_value = original(self, *_, **__)
    if original_value == '':
        return CommonSimNameUtils.get_full_name(CommonSimUtils.get_sim_info(self))
    return original_value
| 47.913043 | 135 | 0.813067 | 146 | 1,102 | 5.910959 | 0.486301 | 0.00927 | 0.013905 | 0.020857 | 0.17613 | 0.17613 | 0.085747 | 0.085747 | 0.085747 | 0.085747 | 0 | 0.013238 | 0.108893 | 1,102 | 22 | 136 | 50.090909 | 0.86558 | 0.378403 | 0 | 0 | 0 | 0 | 0.013314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.454545 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
659e22ad1df6700db9cece7c2733711ba6dc4fea | 183 | py | Python | example/producer.py | Semo/kq | 024cc52b10b2af0c2999a20920faa460442bcbd6 | [
"MIT"
] | 582 | 2016-10-31T04:26:28.000Z | 2022-03-30T12:57:14.000Z | example/producer.py | Semo/kq | 024cc52b10b2af0c2999a20920faa460442bcbd6 | [
"MIT"
] | 17 | 2016-11-01T16:37:16.000Z | 2022-02-10T06:47:36.000Z | example/producer.py | Semo/kq | 024cc52b10b2af0c2999a20920faa460442bcbd6 | [
"MIT"
] | 26 | 2016-11-01T05:06:02.000Z | 2022-02-04T12:44:36.000Z | from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers="127.0.0.1:9092")

for _ in range(10000):
    producer.send("my_topic", b"message")

# send() is asynchronous; flush blocks until all buffered messages are delivered
producer.flush()
| 22.875 | 60 | 0.726776 | 25 | 183 | 5.2 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094937 | 0.136612 | 183 | 7 | 61 | 26.142857 | 0.727848 | 0.087432 | 0 | 0 | 0 | 0 | 0.175758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
659fd83828277eff7d087e06a37e0adb49957a10 | 35,761 | py | Python | festival/management/commands/export_artists.py | mykonosbiennale/mykonosbiennale.github.io | fba479807204768ac440c77c4850b64fb25d113d | [
"Apache-2.0"
] | null | null | null | festival/management/commands/export_artists.py | mykonosbiennale/mykonosbiennale.github.io | fba479807204768ac440c77c4850b64fb25d113d | [
"Apache-2.0"
] | 4 | 2015-04-17T14:10:54.000Z | 2015-04-17T14:13:00.000Z | festival/management/commands/export_artists.py | mykonosbiennale/mykonosbiennale.github.io | fba479807204768ac440c77c4850b64fb25d113d | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
import time, traceback
import collections
import re
import os
import sys
import csv
import pprint, json
import optparse
from django.core.files.base import ContentFile
from nameparser import HumanName
from django.core.serializers import serialize
from django.core.management.base import BaseCommand
from django.conf import settings
from django.utils.text import slugify
from festivaly import models as festivaly_models
from festival import models as festival_models
from filmfestival import models as filmffestival_models
class Command(BaseCommand):
    help = '''rename all images'''

    def handle(self, *args, **kwargs):
        artists = {}
        for artist in json.loads(serialize('json', festival_models.Artist.objects.all())):
            artist['fields']['headshot'] = 'https://s3.amazonaws.com/com.mykonosbiennale.static/' + artist['fields']['headshot']
            artists[artist['pk']] = artist
        festivals = dict((x['pk'], x) for x in json.loads(serialize('json', festival_models.Festival.objects.all())))
        projects = dict((x['pk'], x['fields']) for x in json.loads(serialize('json', festival_models.Project.objects.all())))
        project_art = collections.defaultdict(list)
        for artwork in json.loads(serialize('json', festival_models.Art.objects.all())):
            artwork['fields']['photo'] = 'https://s3.amazonaws.com/com.mykonosbiennale.static/' + artwork['fields']['photo']
            artwork['fields']['artist_name'] = artists[artwork['fields']['artist']]['fields']['name']
            project_art[artwork['fields']['project']].append(artwork)
        festivals_projects = collections.defaultdict(list)
        for project in json.loads(serialize('json', festival_models.Project.objects.all())):
            festivals_projects[project['fields']['festival']].append(project)
            project['art'] = project_art[project['pk']]
        for festival in festivals.values():
            festival['projects'] = festivals_projects[festival['pk']]
        mb = {
            'festivals': festivals,
            'artists': artists,
        }
        with open('mb.json', 'w') as mb_json:
            json.dump(mb, mb_json)
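The handle() method above turns flat serialized records into a nested structure by grouping them under a foreign-key field with collections.defaultdict. A self-contained sketch of that grouping pattern, using made-up records shaped loosely like Django's serialize('json') output rather than the real model data:

```python
import collections

# Hypothetical flat records, not real Art model output.
artworks = [
    {'pk': 1, 'fields': {'project': 'A', 'title': 'x'}},
    {'pk': 2, 'fields': {'project': 'A', 'title': 'y'}},
    {'pk': 3, 'fields': {'project': 'B', 'title': 'z'}},
]

# Group each artwork under its project key, as handle() does with project_art.
project_art = collections.defaultdict(list)
for artwork in artworks:
    project_art[artwork['fields']['project']].append(artwork)
```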
# artist_project = collections.defaultdict(list)
# art = {}
# for artwork in json.loads(serialize('json', festival_models.Art.objects.all())):
# art[artwork['pk']] = art
# artist_project[(artwork['fields']['artist'], artwork['fields']['project'])].append(artwork)
# pprint.pprint(dict(artist_project))
#
#
# # self.rename_poster(film)
# self.rename_stills(film)
#
# def rename_poster(self, film):
# year = film.project.festival.year
# festival = film.project.festival.slug
# if film.poster:
# print film.poster.name
# ext = os.path.splitext(film.poster.name)[-1]
# poster_name = "images/mykonos-biennale-{}-{}-{}-{}-poster{}".format(
# year,
# festival,
# film.project.slug,
# film.slug.strip('-'),
# ext
# )
# poster = ContentFile(film.poster.read())
# poster.name = poster_name
# film.poster = poster
# film.save()
# print film.poster.name
#
# def rename_stills(self, film):
# print film.slug
# for i, image in enumerate(film.filmfestival_image_related.all()):
# print i, image.image.name
#
#
#
# # # for festival in festivaly_models.Festival.objects.all():
# # # print (festival, festival.slug, festival.get_absolute_url())
# # # for project in festivaly_models.Project.objects.all():
# # # print (project, project.slug, project.get_absolute_url())
# # # for festival_project in festivaly_models.FestivalProject.objects.all():
# # # festival_project.save()
# # # print (festival_project, festival_project.slug, festival_project.get_absolute_url())
# #
# # # f_2017 = festivaly_models.Festival.objects.get(year=2017)
# # # for project in festivaly_models.Project.objects.all():
# # # festival_project,_ = festivaly_models.FestivalProject.objects.get_or_create(
# # # festival= f_2017,
# # # project = project,
# # # name = project.name,
# # # text = project.text)
# #
# # # for festival in festival_models.Festival.objects.all():
# # # for projectx in festival.projectx_set.all():
# # # print festival, projectx
# #
# # def rec_2(self):
# # for artist in festival_models.Artist.objects.all():
# # try:
# # festivaly_models.Participant.objects.get(name=artist.name)
# # except:
# # self.mirgate_artist(artist)
# #
# # # for festival_project in festivaly_models.FestivalProject.objects.filter(festival__year=2015):
# # # ps = festival_models.ProjectSeason.objects.filter(project__title = festival_project.name).first()
# # # if ps:
# # # print('found', ps, 'matches',festival_project)
# #
# # # for artist in set([art.artist for art in ps.art_set.all()]):
# #
# # def purge(self):
# # print festivaly_models.FilmDirector.objects.count()
# # pees = [fd.participant for fd in festivaly_models.FilmDirector.objects.all()]
# # map(lambda x: x.delete(), pees)
# #
# # def process_names(self, text):
# # for name in re.split("\s*[\/&,]\s*", text):
# # yield HumanName(name)
# #
# # # build album for each art piece
# #
# # # album = {
# # # 'festivalproject':festivalproject,
# # # 'name': '{} - {}'.format(artist.name, work),
# # # 'text': (works[work][0].description + '\n\n'+ works[work][0].text).strip(),
# # # 'media':[]
# # # }
# # # artwork = festivaly_models.Art.objects.create(
# # # name = album['name'],
# # # text = album['text']
# # # )
# #
# # def migrate_film(self, old_film):
# # # image = ContentFile(old_film.poster.read())
# # dir_list = [str(name) for name in self.process_names(old_film.dir_by)]
# # slug = slugify(old_film.title + '-' + ' '.join(dir_list))
# # festivalproject = festivaly_models.FestivalProject.objects.get(festival__year=2015, project__name={
# # 'Dramatic Nights': 'Dramatic Nights',
# # 'Video Graffiti': 'Video Graffiti',
# # 'Dance': 'Video Graffiti',
# # 'Video Grafitti': 'Video Graffiti',
# # 'Documentary': 'Dramatic Nights',
# # }[old_film.film_type])
# # poster = None
# # if old_film.poster:
# # poster = ContentFile(old_film.poster.read())
# # poster_name = 'mykonos-biennale-{}-{}-{}-post{}'.format(
# # festivalproject.festival.slug,
# # festivalproject.project.slug,
# # old_film.slug,
# # os.path.splitext(old_film.poster.name)[1]
# # )
# #
# # # if old_film.trailer_video:
# # # #trailer_video = ContentFile(old_film.trailer_video.read())
# # # trailer_video_name = 'mykonos-biennale-{}-{}-{}-trailer{}'.format(
# # # festivalproject.festival.slug,
# # # festivalproject.project.slug,
# # # old_film.slug,
# # # os.path.splitext(old_film.poster.name)[1]
# # # )
# # def synopsis(film):
# # if film.log_line:
# # return film.log_line
# # elif film.synopsis_125:
# # return film.synopsis_125
# # elif film.synopsis_250:
# # return film.synopsis_250
# # else:
# # return film.synopsis
# #
# # stills, _ = festivaly_models.Album.objects.get_or_create(
# # name="{} dir. by {}".format(old_film.title, ', '.join(dir_list)),
# # defaults=dict(
# # text=synopsis(old_film),
# # )
# # )
# #
# # film, created = festivaly_models.Film.objects.get_or_create(ref=old_film.ref,
# # defaults=dict(
# # film_source=old_film.source.lower(),
# # ref=old_film.ref,
# # entry_status=old_film.status.lower(),
# # film_type={
# # 'Dramatic Nights': 'short',
# # 'Video Graffiti': 'art_video',
# # 'Video Grafitti': 'art_video',
# # 'Dance': 'dance',
# # 'Documentary': 'documentary',
# # }.get(old_film.film_type),
# # name=old_film.title,
# # original_title=old_film.original_title,
# # sub_by=old_film.sub_by,
# # contact_email=old_film.contact_email,
# # contact_phone=old_film.contact_phone,
# # posted_on_facebook=old_film.posted_on_facebook,
# # subtitles=old_film.subtitles,
# # language=old_film.language,
# # actors=old_film.actors,
# # year=old_film.year,
# # runtime=old_film.runtime,
# # country=old_film.country,
# # projection_copy=old_film.projection_copy,
# # projection_copy_url=old_film.projection_copy_url,
# # coming=False,
# # present=old_film.present,
# # when=old_film.when,
# # log_line=old_film.log_line,
# # synopsis=old_film.synopsis,
# # synopsis_125=old_film.synopsis_125,
# # synopsis_250=old_film.synopsis_250,
# # first_time=old_film.first_time,
# # twitter=old_film.twitter,
# # facebook=old_film.facebook,
# # other_social_media=old_film.other_social_media,
# # url=old_film.url,
# # screenwriters=old_film.screenwriters,
# # producers=old_film.producers,
# # exec_producers=old_film.exec_producers,
# # co_producers=old_film.co_producers,
# # cinematographers=old_film.cinematographers,
# # product_designers=old_film.product_designers,
# # art_directors=old_film.art_directors,
# # editors=old_film.editors,
# # sound_editors=old_film.sound_editors,
# # composers=old_film.composers,
# # crew=old_film.crew,
# # screenings=old_film.screenings,
# # genres=old_film.genres,
# # niches=old_film.niches,
# # info=old_film.info,
# # directors_statement=old_film.directors_statement,
# # production_notes=old_film.production_notes,
# # poster=poster,
# # trailer_url=old_film.trailer_url,
# # trailer_embed=old_film.trailer_embed,
# # stills=stills
# # )
# # )
# #
# # for i, image in enumerate(old_film.filmfestival_image_related.all()):
# #
# # new_image = ContentFile(image.image.read())
# # image_name = 'mykonos-biennale-{}-{}-{}-still{}{}'.format(
# # festivalproject.festival.slug,
# # festivalproject.project.slug,
# # old_film.slug,
# # ('-%d' % i) if i else '',
# # os.path.splitext(image.image.name)[1]
# # )
# # media, created = festivaly_models.Media.objects.get_or_create(
# # name='still of {}{}'.format(film.name, (' (%d)' % i) if i else ''),
# # defaults=dict(
# # text=stills.text,
# # image=new_image,
# # )
# # )
# # if created: film.stills.media.add(media)
# # return film
# #
# # def process_films(self):
# # for director in festivaly_models.FilmDirector.objects.all():
# # print director.participant.name
# #
# # def process_filmdirectors(self):
# # def process_names(text):
# # for name in re.split("\s*[\/&,]\s*", text):
# # yield HumanName(name)
# #
# # for film in filmffestival_models.Film.objects.filter(status=filmffestival_models.Film.SELECTED):
# # if festivaly_models.Film.objects.filter(ref=film.ref).first():
# # continue
# # print film, film.film_type, film.dir_by
# # new_film = self.migrate_film(film)
# # file_type = 'Video Graffiti' if film.film_type == 'Video Grafitti' else 'Dramatic Nights'
# #
# # festivalproject = festivaly_models.FestivalProject.objects.get(festival__year=2015, project__name=file_type)
# # directors = [str(name) for name in process_names(film.dir_by)]
# # # print '\t directors:', map(str, directors)
# # submitter = HumanName(film.sub_by) if film.sub_by.strip() else None
# # director_submitter = None
# # if not submitter:
# # director_submitter = directors[0]
# # elif submitter in directors:
# # # print 'MATCH'
# # director_submitter = directors[directors.index(submitter)]
# # else:
# # pass # print 'NO MATCH', submitter
# # if director_submitter:
# # pass
# # # print '\t\t director submitted', director_submitter
# # # print '\t\t ', film.contact_email
# # # print '\t\t ', film.contact_phone
# # for director in directors:
# # if director == director_submitter:
# # participant, _ = festivaly_models.Participant.objects.get_or_create(name=str(director),
# # defaults=dict(
# # phone=film.contact_phone,
# # email=film.contact_email
# # ))
# # else:
# # participant, _ = festivaly_models.Participant.objects.get_or_create(name=str(director))
# # film_director, created = festivaly_models.FilmDirector.objects.get_or_create(participant=participant,
# # festival_project=festivalproject)
# # film_director.films.add(new_film)
# # # return
# #
# # # if film.actors: print '\t actors:', [str(name) for name in process_names(film.actors)]
# # # if film.producers: print '\t producers:', [str(name) for name in process_names(film.producers)]
# # # if film.exec_producers: print '\t exec_producers:', [str(name) for name in process_names(film.exec_producers)]
# # # if film.co_producers: print '\t co_producers:', [str(name) for name in process_names(film.co_producers)]
# # # if film.cinematographers: print '\t cinematographers:', [str(name) for name in process_names(film.cinematographers)]
# # # if film.screenwriters: print '\t screenwriters:', [str(name) for name in process_names(film.screenwriters)]
# # # if film.editors: print '\t editors:', [str(name) for name in process_names(film.editors)]
# # # if film.sound_editors: print '\t sound_editors:', [str(name) for name in process_names(film.sound_editors)]
# # # if film.composers: print '\t composers:', [str(name) for name in process_names(film.composers)]
# # # if film.art_directors: print '\t art_directors:', [str(name) for name in process_names(film.art_directors)]
# # # if film.crew: print '\t crew:', [str(name) for name in process_names(film.crew)]
# #
# # def list_festivals(self):
# # for festival in festivaly_models.Festival.objects.all():
# # print (festival)
# #
# # def mirgate_artist(self, old_artist):
# # participant = self.add_participant(old_artist)
# # print participant
# # artworks = collections.defaultdict(list)
# # # collect the art by project
# # for art in old_artist.art_set.all():
# # festivalproject = festivaly_models.FestivalProject.objects.get(festival__year=2015,
# # project__name=art.project_x.project.title)
# # artworks[festivalproject].append(art)
# # for festivalproject in artworks:
# # participation, _ = festivaly_models.Artist.objects.get_or_create(festival_project=festivalproject,
# # participant=participant)
# # # collect the images by art piece
# # works = collections.defaultdict(list)
# # for photo in artworks[festivalproject]:
# # works[photo.title].append(photo)
# # # create album for each art piece
# # for work in works:
# # print '{} - {}'.format(participant.name, work)
# # artwork, created = festivaly_models.Art.objects.get_or_create(
# # name='{} - {}'.format(participant.name, work),
# # text=(works[work][0].description + '\n\n' + works[work][0].text).strip(),
# # )
# # print artwork, created
# # if created:
# # participation.artwork.add(artwork)
# # for i, photo in enumerate(works[work]):
# # image = ContentFile(photo.photo.read())
# # image.name = 'mykonos-biennale-{}-{}-{}-{}{}{}'.format(
# # festivalproject.festival.slug,
# # festivalproject.project.slug,
# # photo.artist.slug,
# # photo.slug,
# # ('-%d' % i) if i else '',
# # os.path.splitext(photo.photo.name)[1]
# # )
# # media, created = festivaly_models.Media.objects.get_or_create(
# # name='{} - {}{}'.format(participant.name, photo.title, ('(%d)' % i) if i else ''),
# # defaults=dict(
# # image=image,
# # text=(photo.description + '\n\n' + photo.text).strip(),
# # )
# # )
# # print media, created
# # if created: artwork.media.add(media)
# #
# # # def rec_art(self):
# # # for artist in festivaly_models.Artist.objects.all():
# # # if artist.artwork.first() == None:
# # # print artist
# # # old_artist = festival_models.Artist.objects.get(name=artist.participant.name)
# # # print 'old_artist:', old_artist
# # # projects = collections.defaultdict(list)
# # # for art in old_artist.art_set.all():
# # # projects[art.project_x].append(art)
# # # for project in projects:
# # # old_festival = project.festival
# # # festivalproject = festivaly_models.FestivalProject.objects.get(festival__year=old_festival.year, project__name=project.project.title)
# # # works = collections.defaultdict(list)
# # # for work in projects[project]:
# # # works[work.title].append(work)
# # # for work in works:
# #
# # # album = {
# # # 'festivalproject':festivalproject,
# # # 'name': '{} - {}'.format(artist.name, work),
# # # 'text': (works[work][0].description + '\n\n'+ works[work][0].text).strip(),
# # # 'media':[]
# # # }
# # # artwork = festivaly_models.Art.objects.create(
# # # name = album['name'],
# # # text = album['text']
# # # )
# # # artist.artwork.add(artwork)
# # # print 'artwork', artwork.pk
# # # for i,p in enumerate(works[work]):
# # # image = ContentFile(p.photo.read())
# # # image.name = 'mykonos-biennale-{}-{}-{}-{}{}{}'.format(
# # # festivalproject.festival.slug,
# # # festivalproject.project.slug,
# # # p.artist.slug,
# # # p.slug,
# # # ('-%d' % i) if i else '',
# # # os.path.splitext(p.photo.name)[1]
# # # )
# # # media = festivaly_models.Media.objects.create(
# # # image = image,
# # # name = '{} - {}'.format(artist.name, p.title),
# # # text = (p.description + '\n\n'+ p.text).strip(),
# # # )
# # # artwork.media.add(media)
# # # print 'media', media.pk
# # # print 'album', album
# #
# # # print 'project- art:', project, projects[project]
# # # print '\t old project:', art, art.project_x
# # # festival = art.project_x.festival
# # # print '\t new project:', festivaly_models.FestivalProject.objects.get(festival__year=festival.year, project__name=art.project_x.project.title)
# #
# # # break
# #
# # # for i, art in enumerate(festivaly_models.Art.objects.all()):
# # # print i, 'art', art, 'artist', [a.participant.name for a in art.artist.all()]
# #
# # # def migrate_2015_art(self):
# # # artists = collections.defaultdict(list)
# # # for ps in festival_models.ProjectSeason.objects.all():
# # # for art in ps.art_set.all():
# # # artists[art.artist.name].append((ps, art))
# # # for a in artists:
# # # print a, len(artists[a])
# # # work = {}
# # # work_images = collections.defaultdict(list)
# # # if 'XXVenieri' not in a:
# # # for ips, art in artists[a]:
# #
# # # # print """
# # # # title: {title}
# # # # slug: {slug}
# # # # show: {show}
# # # # leader: {leader}
# # # # description: {description}
# # # # text: {text}
# # # # photo: {photo}
# # # # """.format(**vars(art))
# # # fp = festivaly_models.FestivalProject.objects.get(festival__year=2015, name=ips.project.title)
# # # participant = festivaly_models.Participant.objects.get(name=a)
# # # artistp,_ = festivaly_models.Artist.objects.get_or_create(festival_project=fp, participant=participant)
# # # work[(artistp, art.title)] = art
# # # work_images[(artistp, art.title)].append(art)
# # # #continue
# # # for k in work:
# # # artistp = k[0]
# # # art = work[k]
# # # text = art.description + '\n\n'+ art.text
# # # artwork,created = festivaly_models.Art.objects.get_or_create(
# # # name=art.title,
# # # defaults={ 'text': text.strip()}
# # # )
# # # print artwork,created
# # # if created:
# # # artistp.artwork.add(artwork)
# # # for i, img in enumerate(work_images[k]):
# # # image = ContentFile(img.photo.read())
# # # image.name = 'mykonos-biennale-{}-{}-{}-{}{}{}'.format(
# # # k[0].festival_project.festival.slug,
# # # k[0].festival_project.project.slug,
# # # art.artist.slug,
# # # art.slug,
# # # ('-%d' % i) if i else '',
# # # os.path.splitext(img.photo.name)[1]
# # # )
# # # media = festivaly_models.Media.objects.create(
# # # image = image,
# # # name = artwork.name,
# # # text = artwork.text
# # # )
# # # artwork.media.add(media)
# #
# # # def rec_2015_artists(self):
# # # for artist in festival_models.Artist.objects.filter(visible=True):
# # # try:
# # # found = festivaly_models.Participant.objects.get(name=artist.name)
# # # except:
# # # print "not Found", artist.name
# # # for art in artist.art_set.all():
# # # print '\t', art, art.project_x.pk
# # # if not 'Lommel' in artist.name:
# # # self.add_participant(artist)
# #
# # def add_participant(self, artist):
# #     print ('\t %s' % artist)
# #     if "The" in artist.name:
# #         sort_by = artist.name[4:].strip()
# #     else:
# #         name = HumanName(artist.name)
# #         sort_by = "{} {}".format(name.last, name.first).strip()
# #     headshot = artist.headshot
# #     participant, created = festivaly_models.Participant.objects.get_or_create(
# #         name=artist.name,
# #         defaults=dict(
# #             sort_by=sort_by,
# #             text=artist.bio,
# #             statement=artist.statement,
# #             email=artist.email,
# #             country=artist.country,
# #             phone=artist.phone,
# #             homepage=artist.homepage,
# #         )
# #     )
# #     if created:
# #         if artist.headshot:
# #             participant.headshot = ContentFile(artist.headshot.read())
# #             participant.headshot.name = 'mykonos-biennale-artist-{}{}'.format(
# #                 artist.slug, os.path.splitext(artist.headshot.name)[1])
# #             participant.save()
# #     print participant
# #     return participant
# #
# # # def add_artist(self, artist):
# # # print ('\t %s' % artist)
# # # if "The" in artist.name:
# # # sort_by = artist.name[4:].strip()
# # # else:
# # # name = HumanName(artist.name)
# # # sort_by = "{} {}".format(name.last, name.first).strip()
# # # headshot = artist.headshot
# # # if artist.headshot:
# # # headshot = ContentFile(artist.headshot.read())
# # # headshot.name = 'mykonos-biennale-artist-{}{}'.format(artist.slug, os.path.splitext(artist.headshot.name)[1])
# # # new_artist = festivaly_models.Artist.objects.get_or_create(
# # # festival_project = festival_project,
# # # participant = festivaly_models.Participant.objects.get_or_create(
# # # name = artist.name,
# # # defaults = dict(
# # # sort_by = sort_by,
# # # text = artist.bio,
# # # statement = artist.statement,
# # # email = artist.email,
# # # country = artist.country,
# # # phone = artist.phone,
# # # homepage = artist.homepage,
# # # headshot = headshot
# # # )
# # # )[0]
# # # )
# # # print new_artist
# # # return new_artist
# #
# # # def mirgate_2015_artists(self):
# # # for festival_project in festivaly_models.FestivalProject.objects.filter(festival__year=2015):
# # # ps = festival_models.ProjectSeason.objects.filter(project__title = festival_project.name).first()
# # # if ps:
# # # print('found', ps, 'matches',festival_project)
# #
# # # for artist in set([art.artist for art in ps.art_set.all()]):
# # # print ('\t %s' % artist)
# # # if "The" in artist.name:
# # # sort_by = artist.name[4:].strip()
# # # else:
# # # name = HumanName(artist.name)
# # # sort_by = "{} {}".format(name.last, name.first).strip()
# # # headshot = artist.headshot
# # # if artist.headshot:
# # # headshot = ContentFile(artist.headshot.read())
# # # headshot.name = 'mykonos-biennale-artist-{}{}'.format(artist.slug, os.path.splitext(artist.headshot.name)[1])
# # # artist = festivaly_models.Artist.objects.get_or_create(
# # # festival_project = festival_project,
# # # participant = festivaly_models.Participant.objects.get_or_create(
# # # name = artist.name,
# # # defaults = dict(
# # # sort_by = sort_by,
# # # text = artist.bio,
# # # statement = artist.statement,
# # # email = artist.email,
# # # country = artist.country,
# # # phone = artist.phone,
# # # homepage = artist.homepage,
# # # headshot = headshot
# # # )
# # # )[0]
# # # )
# # # print artist
# # # else:
# # # print('no match for',festival_project)
# #
# #
# #
| 58.433007 | 172 | 0.424597 | 2,813 | 35,761 | 5.244223 | 0.097049 | 0.031318 | 0.013829 | 0.020743 | 0.461226 | 0.412554 | 0.388625 | 0.361646 | 0.320838 | 0.276573 | 0 | 0.00536 | 0.457454 | 35,761 | 611 | 173 | 58.528642 | 0.754974 | 0.884287 | 0 | 0 | 0 | 0 | 0.109964 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.386364 | 0 | 0.454545 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
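The commented-out migration code in the record above leans heavily on Django's `get_or_create(..., defaults=...)` to stay idempotent across reruns: existing rows are fetched, missing ones are created with the defaults merged in. A framework-free sketch of that contract (the `Table` class here is a toy stand-in, not part of the original script or of Django):

```python
class Table:
    """Toy stand-in for a Django manager's get_or_create contract."""

    def __init__(self):
        self.rows = {}  # name -> row dict

    def get_or_create(self, *, name, defaults=None):
        # Return (row, created): fetch by the lookup key if present,
        # otherwise insert a new row merging the key with the defaults.
        if name in self.rows:
            return self.rows[name], False
        row = {"name": name, **(defaults or {})}
        self.rows[name] = row
        return row, True


media = Table()
first, created_first = media.get_or_create(name="a - b", defaults={"text": "t"})
again, created_again = media.get_or_create(name="a - b", defaults={"text": "ignored"})
```

Because the second call finds an existing row, its `defaults` are ignored, which is what makes rerunning the migration safe.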
65a4b3fdaca2febab7995b0b62abcbdeca55c479 | 2,119 | py | Python | pyfr/backends/hip/packing.py | rishit2307/PyFR | 1d249426d8ea4257cb8beff75b6ccd8a058787f4 | [
"BSD-3-Clause"
] | 185 | 2015-01-03T01:06:04.000Z | 2019-09-02T22:10:53.000Z | pyfr/backends/hip/packing.py | rishit2307/PyFR | 1d249426d8ea4257cb8beff75b6ccd8a058787f4 | [
"BSD-3-Clause"
] | 68 | 2015-02-18T13:34:15.000Z | 2019-09-03T13:28:36.000Z | pyfr/backends/hip/packing.py | rishit2307/PyFR | 1d249426d8ea4257cb8beff75b6ccd8a058787f4 | [
"BSD-3-Clause"
] | 105 | 2015-01-09T14:05:22.000Z | 2019-07-25T22:04:00.000Z | # -*- coding: utf-8 -*-
from pyfr.backends.base import NullKernel
from pyfr.backends.hip.provider import (HIPKernel, HIPKernelProvider,
                                        get_grid_for_block)


class HIPPackingKernels(HIPKernelProvider):
    def pack(self, mv):
        hip = self.backend.hip

        # An exchange view is simply a regular view plus an exchange matrix
        m, v = mv.xchgmat, mv.view

        # Compute the grid and thread-block size
        block = (128, 1, 1)
        grid = get_grid_for_block(block, v.n)

        # Render the kernel template
        src = self.backend.lookup.get_template('pack').render(blocksz=block[0])

        # Build
        kern = self._build_kernel('pack_view', src, 'iiiPPPP')

        # Set the arguments
        params = kern.make_params(grid, block)
        params.set_args(v.n, v.nvrow, v.nvcol, v.basedata, v.mapping,
                        v.rstrides or 0, m)

        # If MPI is HIP aware then we just need to pack the buffer
        if self.backend.mpitype == 'hip-aware':
            class PackXchgViewKernel(HIPKernel):
                def add_to_graph(self, graph, deps):
                    pass

                def run(self, stream):
                    kern.exec_async(stream, params)
        # Otherwise, we need to both pack the buffer and copy it back
        else:
            class PackXchgViewKernel(HIPKernel):
                def add_to_graph(self, graph, deps):
                    pass

                def run(self, stream):
                    kern.exec_async(stream, params)
                    hip.memcpy(m.hdata, m.data, m.nbytes, stream)

        return PackXchgViewKernel(mats=[mv])

    def unpack(self, mv):
        hip = self.backend.hip

        if self.backend.mpitype == 'hip-aware':
            return NullKernel()
        else:
            class UnpackXchgMatrixKernel(HIPKernel):
                def add_to_graph(self, graph, deps):
                    pass

                def run(self, stream):
                    hip.memcpy(mv.data, mv.hdata, mv.nbytes, stream)

            return UnpackXchgMatrixKernel(mats=[mv])
| 33.109375 | 79 | 0.562529 | 250 | 2,119 | 4.688 | 0.392 | 0.046928 | 0.038396 | 0.043515 | 0.309727 | 0.309727 | 0.222696 | 0.222696 | 0.222696 | 0.222696 | 0 | 0.005768 | 0.345446 | 2,119 | 63 | 80 | 33.634921 | 0.839221 | 0.138745 | 0 | 0.487179 | 0 | 0 | 0.020925 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205128 | false | 0.076923 | 0.051282 | 0 | 0.435897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
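The PyFR record above builds its pack kernel differently depending on `backend.mpitype`: with HIP-aware MPI the packed exchange matrix can be handed to MPI directly from device memory, otherwise a device-to-host `memcpy` is appended to stage it. That build-time dispatch can be sketched without any GPU dependency (all names below are illustrative, not the PyFR API):

```python
def make_pack_kernel(mpitype, device_buf, host_buf):
    """Return a pack callable; add a device->host copy only when MPI
    cannot read device memory directly (i.e. is not 'hip-aware')."""
    steps = ["pack_view"]           # kernel that gathers the exchange view
    if mpitype != "hip-aware":
        steps.append("memcpy_d2h")  # stage the packed matrix on the host

    def run(log):
        log.extend(steps)
        if "memcpy_d2h" in steps:
            host_buf[:] = device_buf  # simulate hip.memcpy(m.hdata, m.data, ...)

    return run


device = [1, 2, 3]
host = [0, 0, 0]
aware_log, staged_log = [], []
make_pack_kernel("hip-aware", device, host)(aware_log)
make_pack_kernel("mpi", device, host)(staged_log)
```

The decision is made once, when the kernel object is constructed, so the per-timestep `run` path carries no branching beyond the steps it was built with.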
65a80ca222d527565d108fc40d6b9d9c4b29c696 | 746 | py | Python | sapp/pipeline/create_database.py | gracewgao/sapp | a6a48f2a88b0dd9feae75f005b8586d36037a0a6 | [
"MIT"
] | 1 | 2021-07-01T12:08:06.000Z | 2021-07-01T12:08:06.000Z | sapp/pipeline/create_database.py | JayGitH/sapp | 2cb58b4dfc0907c4e6d40ce7612559587c243c55 | [
"MIT"
] | null | null | null | sapp/pipeline/create_database.py | JayGitH/sapp | 2cb58b4dfc0907c4e6d40ce7612559587c243c55 | [
"MIT"
] | null | null | null | # Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from typing import Tuple

from ..db import DB
from ..models import create as create_models
from ..trace_graph import TraceGraph
from . import DictEntries, PipelineStep, Summary

log: logging.Logger = logging.getLogger("sapp")


class CreateDatabase(PipelineStep[DictEntries, DictEntries]):
    def __init__(self, database: DB) -> None:
        super().__init__()
        self.database = database

    def run(self, input: DictEntries, summary: Summary) -> Tuple[DictEntries, Summary]:
        create_models(self.database)
        return input, summary
| 28.692308 | 87 | 0.733244 | 95 | 746 | 5.642105 | 0.557895 | 0.067164 | 0.059701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182306 | 746 | 25 | 88 | 29.84 | 0.878689 | 0.225201 | 0 | 0 | 0 | 0 | 0.006981 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
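The sapp record above shows one link of a typed pipeline: each `PipelineStep[In, Out]` consumes an `(input, summary)` pair and returns the (possibly transformed) pair, so steps with matching types can be chained. A minimal, self-contained version of that protocol (the `Double`/`Stringify` step names are hypothetical, not from sapp):

```python
from typing import Generic, Tuple, TypeVar

T_in = TypeVar("T_in")
T_out = TypeVar("T_out")


class Step(Generic[T_in, T_out]):
    """One pipeline stage: transform the input, optionally annotate the summary."""

    def run(self, input: T_in, summary: dict) -> Tuple[T_out, dict]:
        raise NotImplementedError


class Double(Step[int, int]):
    def run(self, input: int, summary: dict) -> Tuple[int, dict]:
        summary["doubled"] = True
        return input * 2, summary


class Stringify(Step[int, str]):
    def run(self, input: int, summary: dict) -> Tuple[str, dict]:
        return str(input), summary


def run_pipeline(steps, input, summary):
    # Thread the (input, summary) pair through every step in order.
    for step in steps:
        input, summary = step.run(input, summary)
    return input, summary


out, summ = run_pipeline([Double(), Stringify()], 21, {})
```

`CreateDatabase` above is the degenerate case of this pattern: it performs a side effect (creating the schema) and passes `(input, summary)` through unchanged.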
65a8a0c9529b2e9777d5350a74cbfbac3f65b765 | 27,171 | py | Python | tests/test_auto_sharding_bert.py | alpa-projects/alpa | 2c54de2a8fa8a48c77069f4bad802f4e8fa6d126 | [
"Apache-2.0"
] | 114 | 2022-03-02T20:38:16.000Z | 2022-03-31T20:41:50.000Z | tests/test_auto_sharding_bert.py | alpa-projects/alpa | 2c54de2a8fa8a48c77069f4bad802f4e8fa6d126 | [
"Apache-2.0"
] | 6 | 2022-03-09T22:04:50.000Z | 2022-03-30T17:53:15.000Z | tests/test_auto_sharding_bert.py | alpa-projects/alpa | 2c54de2a8fa8a48c77069f4bad802f4e8fa6d126 | [
"Apache-2.0"
] | 5 | 2022-03-05T12:04:31.000Z | 2022-03-31T03:55:42.000Z | """Test auto sharding on transformer layers and bert models."""
import unittest
import jax
import jax.numpy as jnp
import numpy as np
from flax import optim, linen as nn
from alpa import parallelize, ShardParallel, LocalPhysicalDeviceMesh, AutoShardingOption
from alpa.model.bert_model import (BertConfig, FlaxBertLayerCollection,
FlaxBertForMaskedLMModule)
from alpa.util import count_communication_primitives
from test_auto_sharding_mlp import (
assert_all_replicated, assert_close, assert_column_partitioned,
assert_data_parallel_cost, assert_fully_sharded, assert_less_equal,
assert_sharded, assert_replicated_column_partitioned,
assert_replicated_row_partitioned, assert_row_partitioned, is_fully_sharded,
assert_sharding_zero_stage_3)
class AutoShardingAttentionTest(unittest.TestCase):
def setUp(self):
assert len(jax.local_devices()) >= 4
self.physical_mesh = LocalPhysicalDeviceMesh(jax.local_devices()[:4])
self.as_option = AutoShardingOption()
def get_device_mesh(self, shape, mesh_alpha, mesh_beta):
return self.physical_mesh.get_logical_mesh(shape, mesh_alpha, mesh_beta)
def run_bert_layers(self, batch_size, seq_len, num_layers, hidden_size,
num_heads, deterministic, use_remat, device_mesh):
@parallelize(method=ShardParallel(devices=device_mesh,
auto_sharding_option=self.as_option))
def train_step(optimizer, batch, deterministic, apply_fn):
def loss_func(params):
rngs = {"dropout": batch["rng"]}
out = apply_fn(params,
batch["hidden_states"],
batch["attention_mask"],
deterministic,
rngs=rngs)[0]
return jnp.mean((out - batch["label"])**2)
grad = jax.grad(loss_func)(optimizer.target)
new_optimizer = optimizer.apply_gradient(grad)
return new_optimizer
# Init model and optimizer
hidden_states = jnp.ones((batch_size, seq_len, hidden_size),
dtype=jnp.float32)
attention_mask = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
label = jnp.ones((batch_size, seq_len, hidden_size), dtype=jnp.float32)
model = FlaxBertLayerCollection(
BertConfig(num_hidden_layers=num_layers,
hidden_size=hidden_size,
intermediate_size=hidden_size * 4,
num_attention_heads=num_heads,
gradient_checkpointing=use_remat))
rngkey = jax.random.PRNGKey(0)
params = model.init(rngkey, hidden_states, attention_mask)
optimizer = optim.Adam(1e-2).create(params)
# JIT compile
optimizer = train_step(
optimizer, {
"hidden_states": hidden_states,
"attention_mask": attention_mask,
"label": label,
"rng": rngkey
}, deterministic, model.apply)
# Get optimized HLO IR
executable = train_step.get_executable(
optimizer, {
"hidden_states": hidden_states,
"attention_mask": attention_mask,
"label": label,
"rng": rngkey
}, deterministic, model.apply)
return (optimizer, executable.get_hlo_text(),
executable.auto_sharding_objective)
def run_bert_mlm(self, batch_size, seq_len, num_layers, hidden_size,
num_heads, vocab_size, deterministic, device_mesh):
@parallelize(method=ShardParallel(devices=device_mesh,
auto_sharding_option=self.as_option))
def train_step(optimizer, batch):
def loss_func(params):
rngs = {"dropout": batch["rng"]}
logits = model.apply(params,
batch["input_ids"],
batch["attention_mask"],
batch["token_type_ids"],
batch["position_ids"],
deterministic=deterministic,
rngs=rngs)[0]
label_mask = jnp.where(batch["labels"] > 0, 1.0, 0.0)
labels = jax.nn.one_hot(batch["labels"], logits.shape[-1])
loss = -jnp.sum(labels * jax.nn.log_softmax(logits, axis=-1),
axis=-1)
return (label_mask * loss).sum() / label_mask.sum() * 0.1234
grad = jax.grad(loss_func)(optimizer.target)
new_optimizer = optimizer.apply_gradient(grad)
return new_optimizer
# Init model and optimizer
input_ids = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
attention_mask = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
token_type_ids = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
position_ids = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
labels = jnp.ones((batch_size, seq_len), dtype=jnp.int32)
model = FlaxBertForMaskedLMModule(
BertConfig(
num_hidden_layers=num_layers,
hidden_size=hidden_size,
intermediate_size=hidden_size * 4,
num_attention_heads=num_heads,
vocab_size=vocab_size,
max_position_embeddings=seq_len,
))
rngkey = jax.random.PRNGKey(0)
params = model.init(rngkey, input_ids, attention_mask, token_type_ids,
position_ids)
optimizer = optim.Adam(1e-2).create(params)
# JIT compile
optimizer = train_step(
optimizer, {
"input_ids": input_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
"position_ids": position_ids,
"labels": labels,
"rng": rngkey
})
# Get optimized HLO IR
executable = train_step.get_executable(
optimizer, {
"input_ids": input_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
"position_ids": position_ids,
"labels": labels,
"rng": rngkey
})
return (optimizer, executable.get_hlo_text(),
executable.auto_sharding_objective)
def test_bert_layer_data_parallel(self):
batch_size = 64
seq_len = 64
num_layers = 2
hidden_size = 32
num_heads = 8
deterministic = False
use_remat = False
# Test on different logical mesh shapes
for i, mesh_shape in enumerate([(4, 1), (1, 4)]):
device_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
assert_data_parallel_cost(optimizer, hlo_ir, objective, device_mesh,
self.as_option, i)
def test_bert_layer_model_parallel(self):
batch_size = 8
seq_len = 8
num_layers = 2
hidden_size = 128
num_heads = 8
deterministic = False
use_remat = False
# Test on different logical mesh shapes
for i, mesh_shape in enumerate([(4, 1), (1, 4)]):
device_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
# Check communication cost
expected = (num_layers * 4 - 1) * device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, i)
assert_close(objective, expected)
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir))
if self.as_option.prefer_reduce_scatter:
assert n_total == num_layers * 4 - 1
assert n_all_reduce == num_layers * 4 - 1
assert n_total == n_all_reduce
else:
assert n_total == num_layers * 4 - 1
assert n_all_reduce == num_layers * 4 - 1
assert n_total == n_all_reduce
# Check sharding specification
for k in range(num_layers):
params = optimizer.target["params"][str(k)]
weights = [
params["attention"]["self"]["qvk_combined"]["kernel"],
params["attention"]["output"]["dense"]["kernel"],
params["intermediate"]["dense"]["kernel"],
params["output"]["dense"]["kernel"],
]
for j in range(len(weights)):
if j % 2 == 0:
assert_column_partitioned(weights[j], mesh_shape[i], i)
else:
assert_row_partitioned(weights[j], mesh_shape[i], i)
def test_bert_layer_2d_mesh(self):
batch_size = 8
seq_len = 8
num_layers = 2
hidden_size = 128
num_heads = 8
deterministic = False
use_remat = False
# Test on different logical mesh shapes
mesh_shape = [2, 2]
device_mesh = self.get_device_mesh(mesh_shape, [2, 2], [1, 0.1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
# Check communication cost
params = jax.tree_util.tree_leaves(optimizer.target)
expected = (sum(
device_mesh.all_reduce_cost(
np.prod(x.shape) * 4 / mesh_shape[1], 0)
for x in params) + device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4 / mesh_shape[0], 1) *
(num_layers * 4 - 1))
assert_close(objective, expected)
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir,
ignore_scalar_all_reduce=True))
if self.as_option.prefer_reduce_scatter:
assert n_all_reduce == num_layers * 4 - 1
assert n_reduce_scatter == 2
assert n_all_gather == 1
assert n_total == n_all_reduce + n_reduce_scatter + n_all_gather
else:
assert n_all_reduce == num_layers * 4
assert n_total == n_all_reduce
# Check sharding specification
if self.as_option.prefer_reduce_scatter:
for weight in jax.tree_util.tree_leaves(
optimizer.state.param_states):
if len(weight.shape) > 1:
assert_fully_sharded(weight)
else:
for k in range(num_layers):
params = optimizer.target["params"][str(k)]
weights = [
params["attention"]["self"]["qvk_combined"]["kernel"],
params["attention"]["output"]["dense"]["kernel"],
params["intermediate"]["dense"]["kernel"],
params["output"]["dense"]["kernel"],
]
for j in range(len(weights)):
if j % 2 == 0:
assert_replicated_column_partitioned(
weights[j], mesh_shape)
else:
assert_replicated_row_partitioned(
weights[j], mesh_shape)
def test_bert_layer_force_batch_dim_mapping(self):
batch_size = 64
seq_len = 64
num_layers = 2
hidden_size = 32
num_heads = 8
deterministic = False
use_remat = False
self.as_option.force_batch_dim_to_mesh_dim = 0
# data parallel
device_mesh = self.get_device_mesh([4, 1], [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
assert_data_parallel_cost(optimizer, hlo_ir, objective, device_mesh, self.as_option, 0)
# model parallel (case 1)
device_mesh = self.get_device_mesh([1, 4], [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
expected = (num_layers * 4 - 1) * device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, 1)
assert_close(objective, expected)
# model parallel (case 2)
batch_size = 1
device_mesh = self.get_device_mesh([1, 4], [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
expected = (num_layers * 4 - 1) * device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, 1)
assert_close(objective, expected)
def test_embedding_2d_mesh(self):
vocab_size = 1024
hidden_size = 8
batch_size = 8
seq_len = 8
mesh_shape = [2, 2]
# Model and training step definition
class Model(nn.Module):
"""Tied input and output embedding."""
def setup(self):
self.embed = nn.Embed(vocab_size, hidden_size)
def __call__(self, x):
x = self.embed(x)
embed = self.embed.variables["params"]["embedding"]
x = x @ embed.T
return x
logical_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
@parallelize(method=ShardParallel(devices=logical_mesh))
def func(optimizer, x, y):
def loss_func(params):
out = model.apply(params, x)
y_ = jax.nn.one_hot(y, out.shape[-1])
loss = -jnp.sum(y_ * jax.nn.log_softmax(out, axis=-1), axis=-1)
return loss.sum()
grad = jax.grad(loss_func)(optimizer.target)
new_optimizer = optimizer.apply_gradient(grad)
return new_optimizer
# Init model and optimizer
x = jnp.ones((batch_size, seq_len), np.int32)
y = jnp.ones((batch_size, seq_len), np.int32)
model = Model()
rngkey = jax.random.PRNGKey(0)
params = model.init(rngkey, x)
optimizer = optim.Adam(1e-2).create(params)
# JIT Compile
optimize = func(optimizer, x, y)
# Check communication cost
executable = func.get_executable(optimizer, x, y)
hlo_ir = executable.get_hlo_text()
objective = executable.auto_sharding_objective
params = jax.tree_util.tree_leaves(optimizer.target)
expected = (
logical_mesh.all_reduce_cost(
vocab_size * hidden_size * 4 / mesh_shape[1], 0) +
logical_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4 / mesh_shape[0], 1) * 2 +
logical_mesh.all_reduce_cost(
batch_size * seq_len * 4 / mesh_shape[0], 1) * 2)
assert_close(objective, expected)
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir))
assert n_total == n_all_reduce
def test_bert_mlm_data_parallel(self):
batch_size = 32
seq_len = 32
num_layers = 2
hidden_size = 16
num_heads = 4
vocab_size = 128
deterministic = False
# Test on different logical mesh shapes
for i, mesh_shape in enumerate([(4, 1), (1, 4)]):
device_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_mlm(
batch_size, seq_len, num_layers, hidden_size, num_heads,
vocab_size, deterministic, device_mesh)
if self.as_option.force_zero_stage_3:
# only the weight and opt_state of token_embed is not sharded
assert_sharding_zero_stage_3(optimizer, 3)
continue
assert_data_parallel_cost(optimizer, hlo_ir, objective, device_mesh,
self.as_option, i, 1)
@unittest.skip("This test is broken after we disallow some replicated iota."
)
def test_bert_mlm_model_parallel(self):
batch_size = 16
seq_len = 16
num_layers = 2
hidden_size = 128
num_heads = 4
vocab_size = 512
deterministic = False
self.as_option.allow_all_gather = False # Temporary hack
self.as_option.allow_all_to_all = False # Temporary hack
# Test on different logical mesh shapes
for i, mesh_shape in enumerate([(4, 1), (1, 4)]):
device_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_mlm(
batch_size, seq_len, num_layers, hidden_size, num_heads,
vocab_size, deterministic, device_mesh)
# Check communication cost
# expected_cost = embed.forward (1) + embed.backward(2) +
# LM_head.forward (1) + LM_head.backward (1) +
# LM_head.weight.backward (1) + log_softmax.forward (2) +
# transformer.forward (2 * num_layers) + transformer.backward (2 * num_layers)
#
# Note that the final cost is different from this estimated cost in ILP solver.
# The SPMD partitioner will eliminate some unnecessary communication in favor of
            # redundant computation (e.g., it will eliminate the all-reduce in embed.backward).
expected = (
device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, i) * 5 +
device_mesh.all_reduce_cost(hidden_size * hidden_size * 4, i) +
device_mesh.all_reduce_cost(batch_size * seq_len * 4, i) * 2 +
device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, i) * num_layers * 4)
assert_close(objective, expected)
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir))
# real number of all-reduce = transformers (4 * num_layers) + log_softmax (2) +
            # embed.forward (1) + embed.backward (1)
assert n_all_reduce == num_layers * 4 + 4
assert n_total == n_all_reduce
# Check sharding specification
embed_weight = optimizer.target["params"]["bert"]["embeddings"][
"word_embeddings"]["embedding"]
lm_head = optimizer.target["params"]["cls"]["predictions"][
"transform"]["dense"]["kernel"]
assert_row_partitioned(embed_weight, mesh_shape[i], i)
assert_all_replicated(lm_head, np.prod(mesh_shape))
for k in range(num_layers):
params = optimizer.target["params"]["bert"]["encoder"]["layer"][
str(k)]
weights = [
params["attention"]["self"]["qvk_combined"]["kernel"],
params["attention"]["output"]["dense"]["kernel"],
params["intermediate"]["dense"]["kernel"],
params["output"]["dense"]["kernel"],
]
for j in range(len(weights)):
if j % 2 == 0:
assert_column_partitioned(weights[j], mesh_shape[i], i)
else:
assert_row_partitioned(weights[j], mesh_shape[i], i)
def test_bert_mlm_2d_mesh(self):
batch_size = 4
seq_len = 4
num_layers = 2
hidden_size = 512
num_heads = 4
vocab_size = 4096
deterministic = False
# To generate the desired strategy, we have to turn off mixed mesh shape and all-gather
# and enable recomputing heavy ops.
self.as_option.allow_recompute_heavy_op = True
self.as_option.allow_all_gather = False
self.as_option.allow_mixed_mesh_shape = False
mesh_shape = [2, 2]
device_mesh = self.get_device_mesh(mesh_shape, [2, 2], [1, 0.1])
optimizer, hlo_ir, objective = self.run_bert_mlm(
batch_size, seq_len, num_layers, hidden_size, num_heads, vocab_size,
deterministic, device_mesh)
# Check communication cost.
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir,
ignore_scalar_all_reduce=True))
if self.as_option.prefer_reduce_scatter:
assert n_all_reduce == 4 * num_layers + 2 + 2
assert n_reduce_scatter <= 3 # The correct number should be 2,
# but GpuMultiOutputFusion can make
# some reduce-scatter unable to be combined
assert n_all_gather == 1
assert n_total == n_all_reduce + n_all_gather + n_reduce_scatter
else:
# real number of all-reduce = transformers (4 * num_layers) + log_softmax (2) +
            # embed.forward (1) + embed.backward (1) + weights (1)
assert n_all_reduce == 4 * num_layers + 2 + 2 + 1
assert n_total == n_all_reduce
# Check sharding specification
assert "s32[4,4,4096]{2,1,0} iota()" not in hlo_ir
assert "s32[2,4,2048]{2,1,0} iota()" in hlo_ir
if self.as_option.prefer_reduce_scatter:
num_not_sharded = 0 # allow the token_type_embeddings not partitioned.
for weight in jax.tree_util.tree_leaves(
optimizer.state.param_states):
if len(weight.shape) > 1:
if not is_fully_sharded(weight):
num_not_sharded += 1
assert num_not_sharded <= 2
else:
embed_weight = (optimizer.target["params"]["bert"]["embeddings"]
["word_embeddings"]["embedding"])
lm_head = (optimizer.target["params"]["cls"]["predictions"]
["transform"]["dense"]["kernel"])
assert_replicated_row_partitioned(embed_weight, mesh_shape)
assert_all_replicated(lm_head, np.prod(mesh_shape))
for k in range(num_layers):
params = optimizer.target["params"]["bert"]["encoder"]["layer"][
str(k)]
weights = [
params["attention"]["self"]["qvk_combined"]["kernel"],
params["attention"]["output"]["dense"]["kernel"],
params["intermediate"]["dense"]["kernel"],
params["output"]["dense"]["kernel"],
]
for j in range(len(weights)):
if j % 2 == 0:
assert_replicated_column_partitioned(
weights[j], mesh_shape)
else:
assert_replicated_row_partitioned(
weights[j], mesh_shape)
def test_bert_layer_data_parallel_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_layer_data_parallel()
def test_bert_layer_model_parallel_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_layer_model_parallel()
def test_bert_layer_2d_mesh_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_layer_2d_mesh()
def test_bert_mlm_data_parallel_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_mlm_data_parallel()
def test_bert_mlm_data_parallel_reduce_scatter_zero_3(self):
self.as_option.force_zero_stage_3 = True
self.as_option.force_zero_stage_3_all_gather_threshold = 1
self.test_bert_mlm_data_parallel()
@unittest.skip("This test is broken after we disallow some replicated iota."
)
def test_bert_mlm_model_parallel_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_mlm_model_parallel()
def test_bert_mlm_2d_mesh_reduce_scatter(self):
self.as_option.prefer_reduce_scatter = True
self.test_bert_mlm_2d_mesh()
def test_bert_layer_model_parallel_remat(self):
batch_size = 8
seq_len = 8
num_layers = 2
hidden_size = 128
num_heads = 8
deterministic = False
use_remat = True
# Test on different logical mesh shapes
for i, mesh_shape in enumerate([(4, 1), (1, 4)]):
device_mesh = self.get_device_mesh(mesh_shape, [1, 1], [1, 1])
optimizer, hlo_ir, objective = self.run_bert_layers(
batch_size, seq_len, num_layers, hidden_size, num_heads,
deterministic, use_remat, device_mesh)
expected = (num_layers * 6 - 1) * device_mesh.all_reduce_cost(
batch_size * seq_len * hidden_size * 4, i)
assert_close(objective, expected)
n_total, n_all_reduce, n_all_gather, n_reduce_scatter, _ = (
count_communication_primitives(hlo_ir))
assert n_total == num_layers * 6 - 1
assert n_all_reduce == num_layers * 6 - 1
assert n_total == n_all_reduce
def suite():
suite = unittest.TestSuite()
def add(name):
suite.addTest(AutoShardingAttentionTest(name))
add("test_bert_layer_data_parallel")
add("test_bert_layer_model_parallel")
add("test_bert_layer_2d_mesh")
add("test_bert_layer_force_batch_dim_mapping")
add("test_embedding_2d_mesh")
add("test_bert_mlm_data_parallel")
add("test_bert_mlm_model_parallel")
add("test_bert_mlm_2d_mesh")
add("test_bert_layer_data_parallel_reduce_scatter")
add("test_bert_layer_model_parallel_reduce_scatter")
add("test_bert_layer_2d_mesh_reduce_scatter")
add("test_bert_mlm_data_parallel_reduce_scatter")
add("test_bert_mlm_model_parallel_reduce_scatter")
add("test_bert_mlm_2d_mesh_reduce_scatter")
add("test_bert_mlm_data_parallel_reduce_scatter_zero_3")
add("test_bert_layer_model_parallel_remat")
return suite
if __name__ == "__main__":
runner = unittest.TextTestRunner()
runner.run(suite())
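The `suite()` helper above registers named test methods through a small closure. The same pattern in a self-contained form (a sketch; `ExampleTest` is a stand-in for `AutoShardingAttentionTest`):

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("abc".upper(), "ABC")


def build_suite():
    suite = unittest.TestSuite()

    # Closure captures `suite`, so each add(name) registers one named test.
    def add(name):
        suite.addTest(ExampleTest(name))

    add("test_upper")
    return suite


runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(build_suite())
print(result.wasSuccessful())
```

This keeps the test order explicit, which matters when earlier tests mutate shared options (as `self.as_option` is mutated above).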
# === infrastructure-provisioning/scripts/post-deployment_configuration.py (roolrd/incubator-datalab, Apache-2.0) ===
#!/usr/bin/python3
# *****************************************************************************
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# ******************************************************************************
import argparse
import requests
import uuid
from Crypto.PublicKey import RSA
from fabric import *
import subprocess
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--keycloak_realm_name', type=str, default='KEYCLOAK_REALM_NAME', help='Keycloak Realm name')
parser.add_argument('--keycloak_auth_server_url', type=str, default='KEYCLOAK_AUTH_SERVER_URL', help='Keycloak auth server URL')
parser.add_argument('--keycloak_client_name', type=str, default='KEYCLOAK_CLIENT_NAME', help='Keycloak client name')
parser.add_argument('--keycloak_client_secret', type=str, default='KEYCLOAK_CLIENT_SECRET', help='Keycloak client secret')
parser.add_argument('--keycloak_user', type=str, default='KEYCLOAK_USER', help='Keycloak user')
parser.add_argument('--keycloak_admin_password', type=str, default='KEYCLOAK_ADMIN_PASSWORD',
help='Keycloak admin password')
args = parser.parse_args()
headers = {
'Metadata-Flavor': 'Google',
}
print("Getting cloud and instance parameters")
server_external_ip = requests.get(
'http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip',
headers=headers).text
datalab_sbn = requests.get('http://metadata/computeMetadata/v1/instance/name', headers=headers).text
datalab_ssn_static_ip_name = datalab_sbn + '-ip'
datalab_zone = requests.get('http://metadata/computeMetadata/v1/instance/zone', headers=headers).text.split('/')[-1]
datalab_region = '-'.join(datalab_zone.split('-', 2)[:2])
deployment_vpcId = subprocess.run(
"sudo gcloud compute instances describe {0} --zone {1} --format 'value(networkInterfaces.network)' | sed 's|.*/||'".format(
datalab_sbn, datalab_zone), capture_output=True, shell=True, check=True).stdout.decode('UTF-8').rstrip("\n\r")
deployment_subnetId = subprocess.run(
"sudo gcloud compute instances describe {0} --zone {1} --format 'value(networkInterfaces.subnetwork)' | sed 's|.*/||'".format(
datalab_sbn, datalab_zone), capture_output=True, shell=True, check=True).stdout.decode('UTF-8').rstrip("\n\r")
gcp_projectId = requests.get('http://metadata/computeMetadata/v1/project/project-id', headers=headers).text
keycloak_redirectUri = 'http://{}'.format(server_external_ip)
    print("Generating SSH keyfile for datalab-user")
key = RSA.generate(2048)
    subprocess.run("sudo sh -c 'echo \"{}\" > /home/datalab-user/keys/KEY-FILE.pem'".format(key.exportKey('PEM').decode()), shell=True, check=True)
subprocess.run("sudo chmod 600 /home/datalab-user/keys/KEY-FILE.pem", shell=True, check=True)
pubkey = key.publickey()
    subprocess.run("sudo sh -c 'echo \"{}\" > /home/datalab-user/.ssh/authorized_keys'".format(pubkey.exportKey('OpenSSH').decode()), shell=True, check=True)
    print("Generating MongoDB password")
mongo_pwd = uuid.uuid4().hex
try:
subprocess.run(
"sudo echo -e 'db.changeUserPassword(\"admin\", \"{}\")' | mongo datalabdb --port 27017 -u admin -p MONGO_PASSWORD".format(
mongo_pwd), shell=True, check=True)
subprocess.run('sudo sed -i "s|MONGO_PASSWORD|{}|g" /opt/datalab/conf/billing.yml'.format(mongo_pwd), shell=True, check=True)
subprocess.run('sudo sed -i "s|MONGO_PASSWORD|{}|g" /opt/datalab/conf/ssn.yml'.format(mongo_pwd), shell=True, check=True)
    except subprocess.CalledProcessError:
print('Mongo password was already changed')
print('Reserving external IP')
static_address_exist = subprocess.run(
"sudo gcloud compute addresses list --filter='address={}'".format(server_external_ip), capture_output=True, shell=True, check=True).stdout.decode('UTF-8').rstrip("\n\r")
if static_address_exist:
print('Address is already static')
else:
subprocess.run("sudo gcloud compute addresses create {0} --addresses {1} --region {2}".format(datalab_ssn_static_ip_name,
server_external_ip,
datalab_region),
capture_output=True, shell=True, check=True)
print("Overwriting SSN parameters")
if deployment_subnetId == 'default':
subprocess.run(
'sudo sed -i "s|# user_subnets_range|user_subnets_range|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini', shell=True, check=True)
subprocess.run('sudo sed -i "s|DATALAB_SBN|{}|g" /opt/datalab/conf/self-service.yml'.format(datalab_sbn), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_REDIRECTURI|{}|g" /opt/datalab/conf/self-service.yml'.format(keycloak_redirectUri), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_REALM_NAME|{}|g" /opt/datalab/conf/self-service.yml'.format(args.keycloak_realm_name), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_AUTH_SERVER_URL|{}|g" /opt/datalab/conf/self-service.yml'.format(
args.keycloak_auth_server_url), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_CLIENT_NAME|{}|g" /opt/datalab/conf/self-service.yml'.format(
args.keycloak_client_name), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_CLIENT_SECRET|{}|g" /opt/datalab/conf/self-service.yml'.format(
args.keycloak_client_secret), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_REALM_NAME|{}|g" /opt/datalab/conf/provisioning.yml'.format(args.keycloak_realm_name), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_AUTH_SERVER_URL|{}|g" /opt/datalab/conf/provisioning.yml'.format(
args.keycloak_auth_server_url), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_CLIENT_NAME|{}|g" /opt/datalab/conf/provisioning.yml'.format(
args.keycloak_client_name), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_CLIENT_SECRET|{}|g" /opt/datalab/conf/provisioning.yml'.format(
args.keycloak_client_secret), shell=True, check=True)
subprocess.run('sudo sed -i "s|DATALAB_SBN|{}|g" /opt/datalab/conf/provisioning.yml'.format(datalab_sbn), shell=True, check=True)
subprocess.run('sudo sed -i "s|SUBNET_ID|{}|g" /opt/datalab/conf/provisioning.yml'.format(deployment_subnetId), shell=True, check=True)
subprocess.run('sudo sed -i "s|DATALAB_REGION|{}|g" /opt/datalab/conf/provisioning.yml'.format(datalab_region), shell=True, check=True)
subprocess.run('sudo sed -i "s|DATALAB_ZONE|{}|g" /opt/datalab/conf/provisioning.yml'.format(datalab_zone), shell=True, check=True)
subprocess.run('sudo sed -i "s|SSN_VPC_ID|{}|g" /opt/datalab/conf/provisioning.yml'.format(deployment_vpcId), shell=True, check=True)
subprocess.run('sudo sed -i "s|GCP_PROJECT_ID|{}|g" /opt/datalab/conf/provisioning.yml'.format(gcp_projectId), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_USER|{}|g" /opt/datalab/conf/provisioning.yml'.format(args.keycloak_user), shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYCLOAK_ADMIN_PASSWORD|{}|g" /opt/datalab/conf/provisioning.yml'.format(
args.keycloak_admin_password), shell=True, check=True)
subprocess.run('sudo sed -i "s|DATALAB_SBN|{}|g" /opt/datalab/conf/billing.yml'.format(datalab_sbn), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|DATALAB_SBN|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
datalab_sbn), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|GCP_PROJECT_ID|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
gcp_projectId), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|DATALAB_REGION|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
datalab_region), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|DATALAB_ZONE|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
datalab_zone), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_REALM_NAME|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_realm_name), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_AUTH_SERVER_URL|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_auth_server_url), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_CLIENT_NAME|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_client_name), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_CLIENT_SECRET|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_client_secret), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_USER|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_user), shell=True, check=True)
subprocess.run(
'sudo sed -i "s|KEYCLOAK_ADMIN_PASSWORD|{}|g" /opt/datalab/sources/infrastructure-provisioning/src/general/conf/overwrite.ini'.format(
args.keycloak_admin_password), shell=True, check=True)
print('SSL certificate generating')
keystore_passwd = uuid.uuid4().hex
subprocess.run('sudo rm /home/datalab-user/keys/ssn*', shell=True, check=True)
subprocess.run('sudo rm /etc/ssl/certs/datalab*', shell=True, check=True)
subprocess.run('sudo keytool -delete -noprompt -trustcacerts -alias ssn -storepass changeit -keystore '
'/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts', shell=True, check=True)
subprocess.run(
'sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/certs/datalab.key -out '
'/etc/ssl/certs/datalab.crt -subj "/C=US/ST=US/L=US/O=datalab/CN=localhost/subjectAltName={0}"'.format(
server_external_ip), shell=True, check=True)
subprocess.run(
'sudo openssl pkcs12 -export -in /etc/ssl/certs/datalab.crt -inkey /etc/ssl/certs/datalab.key -name ssn -out '
'/home/datalab-user/keys/ssn.p12 -password pass:{0}'.format(keystore_passwd), shell=True, check=True)
subprocess.run(
'sudo keytool -importkeystore -srckeystore /home/datalab-user/keys/ssn.p12 -srcstoretype PKCS12 -alias '
'ssn -destkeystore /home/datalab-user/keys/ssn.keystore.jks -deststorepass {0} -srcstorepass {0}'.format(
keystore_passwd), shell=True, check=True)
subprocess.run(
'sudo keytool -importcert -trustcacerts -alias ssn -file /etc/ssl/certs/datalab.crt -noprompt -storepass '
'changeit -keystore /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts', shell=True, check=True)
subprocess.run('sudo sed -i "s|KEYSTORE_PASSWORD|{}|g" /opt/datalab/conf/ssn.yml'.format(keystore_passwd), shell=True, check=True)
print('Nginx configuration updating')
subprocess.run('sudo sed -i "s|SERVER_IP|{}|g" /etc/nginx/conf.d/nginx_proxy.conf'.format(server_external_ip), shell=True, check=True)
subprocess.run('sudo systemctl restart nginx', shell=True, check=True)
subprocess.run('sudo supervisorctl restart all', shell=True, check=True)
print('Rebuilding docker images')
subprocess.run('cd /opt/datalab/sources/infrastructure-provisioning/src/ && sudo docker-build all', shell=True, check=True)
print('[SUMMARY]')
print('Mongo password stored in /opt/datalab/conf/ssn.yml')
print('SSH key for datalab-user stored in /home/datalab-user/keys/KEY-FILE.pem')
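The long run of `sudo sed -i "s|TOKEN|value|g"` calls above is a simple token-substitution pass over config templates. The same idea in pure Python (a sketch; the file contents and example values are invented for illustration):

```python
def substitute_tokens(text, tokens):
    # Replace every occurrence of each placeholder token with its value,
    # mirroring what the repeated `sed -i "s|TOKEN|value|g"` calls do.
    for token, value in tokens.items():
        text = text.replace(token, value)
    return text


config = "url: KEYCLOAK_AUTH_SERVER_URL\nrealm: KEYCLOAK_REALM_NAME\n"
tokens = {
    "KEYCLOAK_AUTH_SERVER_URL": "https://keycloak.example.com/auth",  # example value
    "KEYCLOAK_REALM_NAME": "datalab",
}
print(substitute_tokens(config, tokens))
```

Doing the substitution in-process avoids one subprocess per token, though the script above shells out so the files can be edited with sudo.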
# === lib/mayaUsd/resources/scripts/mayaUsdMayaReferenceUtils.py (sun-frog/maya-usd, Apache-2.0) ===
#
] | null | null | null | #
# Copyright 2022 Autodesk
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import maya.cmds as cmds
# These names should not be localized as Usd only accepts [a-z,A-Z] as valid characters.
kDefaultMayaReferencePrimName = 'MayaReference1'
kDefaultVariantSetName = 'Representation'
kDefaultVariantName = 'MayaReference'
def defaultMayaReferencePrimName():
return kDefaultMayaReferencePrimName
def defaultVariantSetName():
return kDefaultVariantSetName
def defaultVariantName():
return kDefaultVariantName
class SetParentContext():
'''Simple context helper to go up one parent level when exiting.'''
def __init__(self, parent):
cmds.setParent(parent)
pass
def __enter__(self):
pass
def __exit__(self, mytype, value, tb):
cmds.setParent('..')
def pushOptionsUITemplate():
'''Standardize the look of the options UI.
Python translation of fileOptions.mel:pushOptionsUITemplate(),
which is not a global proc.
'''
if not cmds.uiTemplate('optionsTemplate', exists=True):
cmds.uiTemplate('optionsTemplate')
cmds.frameLayout(defineTemplate='optionsTemplate',
collapsable=True,
collapse=False,
labelVisible=True,
labelIndent=5,
marginWidth=5,
marginHeight=5)
cmds.columnLayout(defineTemplate='optionsTemplate',
adjustableColumn=True)
cmds.setUITemplate('optionsTemplate', pushTemplate=True)
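`SetParentContext` follows the standard context-manager protocol: act on construction, undo on `__exit__`. A Maya-free sketch of the same pattern (`FakeCmds` is a stand-in for `maya.cmds`, invented here so the example runs anywhere):

```python
class FakeCmds:
    """Minimal stand-in for maya.cmds that tracks a parent stack."""

    def __init__(self):
        self.stack = ["root"]

    def setParent(self, parent):
        # '..' pops one level, mirroring Maya's setParent('..') behaviour.
        if parent == "..":
            self.stack.pop()
        else:
            self.stack.append(parent)


cmds = FakeCmds()


class SetParentContext:
    """Go down one parent level on entry, back up one on exit."""

    def __init__(self, parent):
        cmds.setParent(parent)

    def __enter__(self):
        pass

    def __exit__(self, mytype, value, tb):
        cmds.setParent('..')


with SetParentContext("columnLayout1"):
    print(cmds.stack)  # ['root', 'columnLayout1'] while inside the block
print(cmds.stack)      # ['root'] again after exiting
```

Note that the parent change happens in `__init__`, not `__enter__`, so merely constructing the object already changes state; the `with` statement only guarantees the restore.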
# === composite_pk/util.py (SelfHacked/django-composite-pk, MIT) ===
from django.utils.functional import (
cached_property as _cached_property,
)
cached_property = _cached_property
class ConstantValueProperty(object):
def __init__(self, val):
self.val = val
def __get__(self, instance, cls=None):
if instance is None:
return self
return self.val
def __set__(self, instance, value):
# ignore
pass
class AttrTupleProperty(object):
def __init__(self, *attrs: str):
self.attrs = attrs
def __get__(self, instance, cls=None):
if instance is None:
return self
return tuple(
getattr(instance, attr)
for attr in self.attrs
)
def __set__(self, instance, value):
# ignore
pass
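As a quick illustration of how these two descriptors behave, here is a sketch using stand-alone copies of the classes above (duplicated so it runs without Django; the `Row` class and its attributes are invented for the example):

```python
# Stand-alone copies of the two descriptors defined above.
class ConstantValueProperty:
    def __init__(self, val):
        self.val = val

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        return self.val

    def __set__(self, instance, value):
        pass  # writes are silently ignored


class AttrTupleProperty:
    def __init__(self, *attrs):
        self.attrs = attrs

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        return tuple(getattr(instance, attr) for attr in self.attrs)

    def __set__(self, instance, value):
        pass  # writes are silently ignored


class Row:
    kind = ConstantValueProperty("composite")
    pk = AttrTupleProperty("a", "b")  # composite key built from two attributes

    def __init__(self, a, b):
        self.a = a
        self.b = b


row = Row(1, 2)
print(row.kind)   # the constant, regardless of instance state
print(row.pk)     # tuple assembled from the named attributes
row.pk = (9, 9)   # assignment is a no-op by design
print(row.pk)
```

Because both classes define `__set__`, they are data descriptors: attribute assignment goes through them (and is discarded) rather than shadowing them in the instance `__dict__`.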
# === applications/main/tests/test_models.py (uottawapython/uopy, BSD-3-Clause) ===
import pytest
from mixer.backend.django import mixer
# Give every test in this module access to the test database
pytestmark = pytest.mark.django_db
class TestClubEvent:
def test_init(self):
        obj = mixer.blend('main.ClubEvent')
        assert obj.pk == 1, 'Should save an instance'
# TODO: Create other tests
# === plugin_skill/__init__.py (forslund/mycroft-skill-plugin, Apache-2.0) ===
# Copyright 2021 Åke Forslund
#
# This test plugin is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This module is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.
from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler
class PluginSkill(MycroftSkill):
@intent_handler(IntentBuilder('').require('PluginKeyword'))
def handle_plugin_intent(self, message):
self.speak_dialog("plugin")
def create_skill():
return PluginSkill()
# === parser/team17/Interprete/Primitivos/BOOLEANO.py (webdev188/tytus, MIT) ===
from Interprete.NodoAST import NodoArbol
from Interprete.Tabla_de_simbolos import Tabla_de_simbolos
from Interprete.Arbol import Arbol
from Interprete.Valor.Valor import Valor
from Interprete.Primitivos.TIPO import TIPO
class BOOLEANO(NodoArbol):
def __init__(self, data, line, column):
super().__init__(line, column)
self.data = data
def execute(self, entorno:Tabla_de_simbolos, arbol:Arbol):
value:Valor = Valor(TIPO.BOOLEAN, self.data)
        return value
# === pyeip1962/group.py (HarryR/pyeip1962, MIT) ===
class AbstractPoint(object):
__slots__ = ('x', 'y')
def __init__(self, x, y):
self.x = self.field()(x)
self.y = self.field()(y)
def __neg__(self):
return self.neg()
def __add__(self, other):
return self.add(other)
def __sub__(self, other):
return self.add(other.neg())
def __mul__(self, n):
return self.mul(n)
def __iter__(self):
return iter([self.x, self.y])
def __eq__(self, other: 'AbstractPoint'):
return other is not None and self.x == other.x and self.y == other.y
def __bool__(self):
return self != self.zero()
def __repr__(self):
return f'{type(self).__name__}(x={self.x}, y={self.y})'
@classmethod
def generator(cls):
raise NotImplementedError
@classmethod
def zero(cls):
return None
@classmethod
def field(cls):
# Field used for X and Y coordinates
raise NotImplementedError
@classmethod
def order(cls):
return cls.group().order()
@classmethod
def group(cls):
# Group
raise NotImplementedError
def neg(self):
raise NotImplementedError
def add(self, other: 'AbstractPoint'):
raise NotImplementedError
def double(self, other: 'AbstractPoint'):
raise NotImplementedError
def mul(self, scalar):
scalar = int(scalar)
if scalar == 1:
return self
p = self
a = self.zero()
while scalar != 0:
if (scalar & 1) != 0:
a = p.add(a)
p = p.double()
scalar = scalar // 2
return a
class AbstractPointG1(AbstractPoint):
def pairing(self, other: 'AbstractPointG2'):
return self.group().pairing(self, other)
class AbstractPointG2(AbstractPoint):
def pairing(self, other: AbstractPointG1):
return self.group().pairing(other, self)
class AbstractGroup(object):
@classmethod
def order(cls):
raise NotImplementedError
@classmethod
def G1(cls):
"""Returns class for G1 group"""
raise NotImplementedError
@classmethod
def G2(cls):
"""Returns class for G2 group"""
raise NotImplementedError
@classmethod
def GT(cls):
"""Returns class for target group"""
raise NotImplementedError
@classmethod
def pairing(cls, a: AbstractPointG1, b: AbstractPointG2):
        assert isinstance(a, cls.G1())
        assert isinstance(b, cls.G2())
raise NotImplementedError
# === distinct_numbers.py (kshitijgupta01/CSES-Problem-Set, MIT) ===
# This is my solution for the Distinct Numbers problem in the CSES problem set
no_of_integers = int(input())
array = list(map(int,input().split()))
print(len(set(array)))
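The same idea as a small testable function (a sketch; note the CSES input also supplies the count `n` on the first line, which the solution above reads and discards):

```python
def count_distinct(numbers):
    # A set keeps one copy of each value, so its size is the distinct count.
    return len(set(numbers))


print(count_distinct([2, 3, 2, 2, 3]))  # the CSES sample answer: 2
```

This runs in O(n) expected time via hashing, versus O(n log n) for the sort-and-scan alternative.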
# === service/machines/admin.py (TAHER-MOSTAFA/service-app, MIT) ===
from django.contrib import admin
from .models import Company, Machine, Specs, Conatact, Category
from django.utils.html import format_html
class StackedSpecsAdmin(admin.TabularInline):
model = Specs
fields = ("label", "value", "machine")
readonly_fields = ("machine",)
extra = 1
def has_add_permission(self, request, *args, **kwargs):
return True
@admin.register(Machine)
class MachineAdmin(admin.ModelAdmin):
list_display = ("name", "company_website", "company", "is_active", "description")
list_filter = ("is_active",)
search_fields = ("name__icontains",)
readonly_fields = ("created", "modified")
inlines = (StackedSpecsAdmin,)
def company_website(self, obj):
return format_html(f'<a href="{obj.company.website}">company Website</a>')
class StackedMachinesAdmin(admin.TabularInline):
model = Machine
fields = ("name", "is_active", "created", "modified")
readonly_fields = ("created", "modified")
extra = 0
def has_add_permission(self, request, *args, **kwargs):
return False
@admin.register(Company)
class CompanyAdmin(admin.ModelAdmin):
inlines = (StackedMachinesAdmin,)
@admin.register(Conatact)
class ContactAdmin(admin.ModelAdmin):
pass
@admin.register(Category)
class CategoryAdmin(admin.ModelAdmin):
    pass
# === examples/get_vms_paged.py (tmongero/tintri-python-sdk, BSD-3-Clause) ===
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# The MIT License (MIT)
#
# Copyright (c) 2015 Tintri, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import sys
from tintri.v310 import Tintri
from tintri.v310 import VirtualMachineFilterSpec
"""
This Python script gets all the VMs in paged invocation.
Here the script manages the pages.
Paged invocations are useful so that the client doesn't have to suck-in
all the information at one time.
Command usage: get_vms_paged <server_name> <userName> <password>
"""
# For exhaustive messages on the console, set debug_mode to True; otherwise keep it False
debug_mode = False
def print_with_prefix(prefix, out):
print(prefix + out)
return
def print_debug(out):
if debug_mode:
print_with_prefix("[DEBUG] : ", out)
return
def print_info(out):
print_with_prefix("[INFO] : ", out)
return
def print_error(out):
print_with_prefix("[ERROR] : ", out)
return
# main
if len(sys.argv) < 4:
print("\nPrints VM information using pagination\n")
print("Usage: " + sys.argv[0] + " server_name user_name password\n")
sys.exit(-1)
server_name = sys.argv[1]
user_name = sys.argv[2]
password = sys.argv[3]
page_size = 2
# instantiate the Tintri server.
tintri = Tintri(server_name, auto_page = False)
# Get version and product
version_info = tintri.version
product = version_info.productName
print("Product: " + product + " (" + version_info.preferredVersion + ")")
print ""
# Login to VMstore
tintri.login(user_name, password)
# Define the filter with page size and live VMs.
vm_filter_spec = VirtualMachineFilterSpec()
vm_filter_spec.limit = page_size
vm_filter_spec.live = "true"
# Prime the VM information pump
vms = tintri.get_vms(filters = vm_filter_spec)
if vms.filteredTotal == 0:
    print_error("No Live VMs present")
    tintri.logout()
    sys.exit()
count = 1
done = False
print "Live Total: " + str(vms.filteredTotal)
# Get more VM information until done.
try:
    while vms:
        for vm in vms:
            vm_name = vm.vmware.name
            vm_uuid = vm.uuid.uuid
            print(str(count) + ": " + vm_name + ", " + vm_uuid)
            count += 1
        vms = vms.get_next_page()
except StopIteration:
    pass  # Expected: raised when there are no more pages
# All done, log out
tintri.logout()
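The prime-then-iterate pattern above generalizes to any paged API. Below is a minimal self-contained sketch of the same loop; `FakePage` and its `get_next_page` are stand-ins for illustration, not part of the Tintri SDK:

```python
class FakePage:
    """Stand-in for a paged API response: iterable, plus a next-page hook."""

    def __init__(self, items, remaining_pages):
        self._items = items
        self._remaining_pages = remaining_pages

    def __iter__(self):
        return iter(self._items)

    def get_next_page(self):
        if not self._remaining_pages:
            raise StopIteration  # mirrors the end-of-pages signal caught above
        return FakePage(self._remaining_pages[0], self._remaining_pages[1:])


seen = []
page = FakePage(['vm1', 'vm2'], [['vm3', 'vm4'], ['vm5']])
try:
    while page:
        for vm in page:
            seen.append(vm)
        page = page.get_next_page()
except StopIteration:
    pass  # no more pages

assert seen == ['vm1', 'vm2', 'vm3', 'vm4', 'vm5']
```

The client only ever holds one page in memory, which is the point of the paged invocation described in the docstring.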
| 27.081967 | 79 | 0.714588 | 486 | 3,304 | 4.76749 | 0.425926 | 0.03798 | 0.025896 | 0.022011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007907 | 0.196126 | 3,304 | 121 | 80 | 27.305785 | 0.864458 | 0.425545 | 0 | 0.113208 | 0 | 0 | 0.102079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.075472 | 0.056604 | null | null | 0.283019 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
02c5694ad44ef93cb82d72f35ed2f9c05c18d13d | 1,243 | py | Python | pyfakefs/tests/fixtures/module_with_attributes.py | kmerenkov/pyfakefs | dca7de71f100a45ad0c715cfe5618ff083122a57 | [
"Apache-2.0"
] | 422 | 2015-03-19T06:03:48.000Z | 2022-03-31T00:06:45.000Z | pyfakefs/tests/fixtures/module_with_attributes.py | kmerenkov/pyfakefs | dca7de71f100a45ad0c715cfe5618ff083122a57 | [
"Apache-2.0"
] | 510 | 2015-03-19T18:35:04.000Z | 2022-03-28T20:59:01.000Z | pyfakefs/tests/fixtures/module_with_attributes.py | kmerenkov/pyfakefs | dca7de71f100a45ad0c715cfe5618ff083122a57 | [
"Apache-2.0"
] | 95 | 2015-03-17T20:25:57.000Z | 2022-03-28T18:04:45.000Z | # Copyright 2017 John McGehee
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module is for testing pyfakefs
:py:class:`fake_filesystem_unittest.Patcher`. It defines attributes that have
the same names as file modules, such as `io` and `path`. Since these are not
modules, :py:class:`fake_filesystem_unittest.Patcher` should not patch them.
Whenever a new module is added to
:py:meth:`fake_filesystem_unittest.Patcher._findModules`, the corresponding
attribute should be added here and in the test
:py:class:`fake_filesystem_unittest_test.TestAttributesWithFakeModuleNames`.
"""
os = 'os attribute value'
path = 'path attribute value'
pathlib = 'pathlib attribute value'
shutil = 'shutil attribute value'
io = 'io attribute value'
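The reason these string-valued attributes must not be patched can be checked mechanically: only attributes that are real module objects are patch candidates. The `is_patchable` helper below is a hypothetical sketch of that check, not pyfakefs API:

```python
import types

# Same names as the attributes above: plain strings, not modules.
os = 'os attribute value'
path = 'path attribute value'
import io as real_io  # an actual module object, for contrast


def is_patchable(obj):
    # Hypothetical check: only genuine modules should be patch targets.
    return isinstance(obj, types.ModuleType)


assert not is_patchable(os)
assert not is_patchable(path)
assert is_patchable(real_io)
```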
| 40.096774 | 78 | 0.776348 | 185 | 1,243 | 5.162162 | 0.562162 | 0.062827 | 0.092147 | 0.065969 | 0.105759 | 0.075393 | 0 | 0 | 0 | 0 | 0 | 0.007554 | 0.148029 | 1,243 | 30 | 79 | 41.433333 | 0.89424 | 0.849558 | 0 | 0 | 0 | 0 | 0.60119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
02d3c3e42d5b3f989b288416035479d17f111820 | 2,994 | py | Python | code/duke.py | jordankeener/ncaa_rosters | 12e66e9ef7502ab6869e7352ae673c46680eedd0 | [
"MIT"
] | null | null | null | code/duke.py | jordankeener/ncaa_rosters | 12e66e9ef7502ab6869e7352ae673c46680eedd0 | [
"MIT"
] | null | null | null | code/duke.py | jordankeener/ncaa_rosters | 12e66e9ef7502ab6869e7352ae673c46680eedd0 | [
"MIT"
] | null | null | null | from urllib.request import urlopen
from urllib.request import FancyURLopener
from bs4 import BeautifulSoup
import pandas as pd
import os
import _proj_functions as proj
import _lookups as lookups
import re
outdir = '../output'
##### duke ###########################
school = 'duke'
sports_dict = lookups.get_sports_dict()
# {'sport_id' : ['full sport url']}
sports_dict['baseball'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1850&SPSID=22852&DB_OEM_ID=4200']
sports_dict['mens basketball'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1845&SPSID=22727&DB_OEM_ID=4200']
sports_dict['womens basketball'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1846&SPSID=22763&DB_OEM_ID=4200']
sports_dict['mixed cross country'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1831&SPSID=22403&DB_OEM_ID=4200']
sports_dict['mixed fencing'] = ['http://www.goduke.com/SportSelect.dbml?SPID=2028&SPSID=25950&DB_OEM_ID=4200']
sports_dict['womens field hockey'] = ['http://www.goduke.com/SportSelect.dbml?SPID=2029&SPSID=25945&DB_OEM_ID=4200']
sports_dict['football'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1843&SPSID=22667&DB_OEM_ID=4200']
sports_dict['mens golf'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1837&SPSID=22554&DB_OEM_ID=4200']
sports_dict['womens golf'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1838&SPSID=22564&DB_OEM_ID=4200']
sports_dict['mens lacrosse'] = ['http://www.goduke.com/SportSelect.dbml?SPID=2027&SPSID=25941&DB_OEM_ID=4200']
sports_dict['womens lacrosse'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1832&SPSID=22430&DB_OEM_ID=4200']
sports_dict['womens rowing'] = ['http://www.goduke.com/SportSelect.dbml?SPID=2031&SPSID=25949&DB_OEM_ID=4200']
sports_dict['mens soccer'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1833&SPSID=22446&DB_OEM_ID=4200']
sports_dict['womens soccer'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1842&SPSID=22660&DB_OEM_ID=4200']
sports_dict['softball'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1851&SPSID=22879&DB_OEM_ID=4200']
sports_dict['mixed swimming'] = ['http://www.goduke.com/SportSelect.dbml?SPID=2182&SPSID=27950&DB_OEM_ID=4200']
sports_dict['mens tennis'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1839&SPSID=22590&DB_OEM_ID=4200']
sports_dict['womens tennis'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1840&SPSID=22608&DB_OEM_ID=4200']
sports_dict['mixed track'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1835&SPSID=22497&DB_OEM_ID=4200']
sports_dict['womens volleyball'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1844&SPSID=22705&DB_OEM_ID=4200']
sports_dict['wrestling'] = ['http://www.goduke.com/SportSelect.dbml?SPID=1834&SPSID=22470&DB_OEM_ID=4200']
# remove empty sports
for (key, value) in sports_dict.copy().items():
    if value == []:
        del sports_dict[key]
# loop through sports collecting rosters
rosters = proj.gather_rosters_grid(sports_dict)
rosters['college'] = school
csvname = school + '_rosters.csv'
rosters.to_csv(os.path.join(outdir, csvname))
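As an aside, the copy-and-delete loop used above to drop empty sports can also be written as a dict comprehension, which sidesteps mutating the dict while iterating. A small sketch with placeholder data:

```python
sports = {
    'baseball': ['http://example.com/baseball'],  # placeholder URL
    'rowing': [],                                 # empty: no roster page
}

# Keep only sports that actually have a roster URL.
sports = {key: value for key, value in sports.items() if value}

assert 'baseball' in sports
assert 'rowing' not in sports
```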
| 61.102041 | 116 | 0.756179 | 467 | 2,994 | 4.683084 | 0.263383 | 0.118884 | 0.124829 | 0.153635 | 0.599451 | 0.599451 | 0.570645 | 0.191129 | 0 | 0 | 0 | 0.097026 | 0.05678 | 2,994 | 48 | 117 | 62.375 | 0.677408 | 0.032732 | 0 | 0 | 0 | 0.538462 | 0.655245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.205128 | 0 | 0.205128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
02d96248c22be4751eaae58fef6939b6f3f460f3 | 742 | py | Python | chapter_6-DesignPatternsWithFirst-ClassFunctions/promotions.py | zixingkong/Fluent-Python | 6f05ec5f27a1cf582ddc5a09959f446d455b6c4a | [
"MIT"
] | null | null | null | chapter_6-DesignPatternsWithFirst-ClassFunctions/promotions.py | zixingkong/Fluent-Python | 6f05ec5f27a1cf582ddc5a09959f446d455b6c4a | [
"MIT"
] | null | null | null | chapter_6-DesignPatternsWithFirst-ClassFunctions/promotions.py | zixingkong/Fluent-Python | 6f05ec5f27a1cf582ddc5a09959f446d455b6c4a | [
"MIT"
] | 1 | 2022-03-14T15:04:58.000Z | 2022-03-14T15:04:58.000Z | # -*- coding: utf-8 -*-
"""
-------------------------------------------------
File Name: promotions
Description :
date: 2022/2/13
-------------------------------------------------
"""
def fidelity_promo(order):
    """5% discount for customers with 1000 or more fidelity points"""
    return order.total() * .05 if order.customer.fidelity >= 1000 else 0


def bulk_item_promo(order):
    """10% discount for each line item with 20 or more units"""
    discount = 0
    for item in order.cart:
        if item.quantity >= 20:
            discount += item.total() * .1
    return discount


def large_order_promo(order):
    """7% discount for orders with 10 or more distinct items"""
    distinct_items = {item.product for item in order.cart}
    if len(distinct_items) >= 10:
        return order.total() * .07
    return 0
| 26.5 | 72 | 0.52965 | 79 | 742 | 4.886076 | 0.544304 | 0.07772 | 0.082902 | 0.072539 | 0.103627 | 0.103627 | 0 | 0 | 0 | 0 | 0 | 0.062176 | 0.219677 | 742 | 28 | 73 | 26.5 | 0.604491 | 0.351752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2
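The strategy functions above can be exercised with minimal stand-ins for the order model; the `Customer`, `LineItem`, and `Order` classes below are hypothetical test doubles (and `fidelity_promo` is repeated so the sketch is self-contained):

```python
from collections import namedtuple

Customer = namedtuple('Customer', 'name fidelity')


class LineItem:
    def __init__(self, product, quantity, price):
        self.product, self.quantity, self.price = product, quantity, price

    def total(self):
        return self.quantity * self.price


class Order:
    def __init__(self, customer, cart):
        self.customer, self.cart = customer, cart

    def total(self):
        return sum(item.total() for item in self.cart)


def fidelity_promo(order):
    """5% discount for customers with 1000 or more fidelity points"""
    return order.total() * .05 if order.customer.fidelity >= 1000 else 0


ann = Customer('Ann', 1100)
cart = [LineItem('banana', 30, 0.5), LineItem('apple', 10, 1.5)]
order = Order(ann, cart)

assert order.total() == 30.0
assert fidelity_promo(order) == 1.5  # 5% of 30.0
```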
02e51142160ed375afe6453fc1e50d6771aa26ed | 150 | py | Python | api/gradebook/apps.py | Latiftanga/TagnatekAPI | 8ad609a5d7097051796d25d9b532c770d32ac304 | [
"MIT"
] | null | null | null | api/gradebook/apps.py | Latiftanga/TagnatekAPI | 8ad609a5d7097051796d25d9b532c770d32ac304 | [
"MIT"
] | 5 | 2022-02-05T00:10:27.000Z | 2022-03-23T19:45:38.000Z | api/gradebook/apps.py | Latiftanga/TagnatekAPI | 8ad609a5d7097051796d25d9b532c770d32ac304 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class GradebookConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'gradebook'
| 21.428571 | 56 | 0.766667 | 17 | 150 | 6.647059 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146667 | 150 | 6 | 57 | 25 | 0.882813 | 0 | 0 | 0 | 0 | 0 | 0.253333 | 0.193333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
02ea408316f34a34779491f9ebf203a60a082d85 | 4,314 | py | Python | src/examples/smart-mirror/kitchen_sink.py | adams-liu/SmartMirrorG23 | bdc60083d03aaae75e7b66e97a0af4977945260f | [
"MIT"
] | 1 | 2020-01-09T19:41:15.000Z | 2020-01-09T19:41:15.000Z | src/examples/smart-mirror/kitchen_sink.py | adams-liu/SmartMirrorG36 | bdc60083d03aaae75e7b66e97a0af4977945260f | [
"MIT"
] | null | null | null | src/examples/smart-mirror/kitchen_sink.py | adams-liu/SmartMirrorG36 | bdc60083d03aaae75e7b66e97a0af4977945260f | [
"MIT"
] | null | null | null | #
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# These materials are licensed under the Amazon Software License in connection with the Alexa Gadgets Program.
# The Agreement is available at https://aws.amazon.com/asl/.
# See the Agreement for the specific terms and conditions of the Agreement.
# Capitalized terms not defined in this file have the meanings given to them in the Agreement.
#
import logging
import sys
from agt import AlexaGadget
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
logging.getLogger('agt.alexa_gadget').setLevel(logging.DEBUG)
class KitchenSinkGadget(AlexaGadget):
    """
    Class that logs each directive received from the Echo device.
    """

    def on_connected(self, device_addr):
        """
        Gadget connected to the paired Echo device.

        :param device_addr: the address of the device we connected to
        """
        pass

    def on_disconnected(self, device_addr):
        """
        Gadget disconnected from the paired Echo device.

        :param device_addr: the address of the device we disconnected from
        """
        pass

    def on_alexa_gadget_statelistener_stateupdate(self, directive):
        """
        Alexa.Gadget.StateListener StateUpdate directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/alexa-gadget-statelistener-interface.html#StateUpdate-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.

        To get the specific state update name, the following code snippet can be used:

            # Extract first available state (name & value) from directive payload
            if len(directive.payload.states) > 0:
                state = directive.payload.states[0]
                name = state.name
                value = state.value
                print('state name:{}, state value:{}'.format(name, value))
        """
        pass

    def on_notifications_setindicator(self, directive):
        """
        Notifications SetIndicator directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/notifications-interface.html#SetIndicator-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass

    def on_notifications_clearindicator(self, directive):
        """
        Notifications ClearIndicator directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/notifications-interface.html#ClearIndicator-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass

    def on_alexa_gadget_speechdata_speechmarks(self, directive):
        """
        Alexa.Gadget.SpeechData Speechmarks directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/alexa-gadget-speechdata-interface.html#Speechmarks-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass

    def on_alexa_gadget_musicdata_tempo(self, directive):
        """
        Alexa.Gadget.MusicData Tempo directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/alexa-gadget-musicdata-interface.html#Tempo-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass

    def on_alerts_setalert(self, directive):
        """
        Alerts SetAlert directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/alerts-interface.html#SetAlert-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass

    def on_alerts_deletealert(self, directive):
        """
        Alerts DeleteAlert directive received.

        For more info, visit:
        https://developer.amazon.com/docs/alexa-gadgets-toolkit/alerts-interface.html#DeleteAlert-directive

        :param directive: Protocol Buffer Message that was sent by the Echo device.
        """
        pass


if __name__ == '__main__':
    KitchenSinkGadget().main()
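Each handler name in the class above follows a mechanical derivation from the directive's interface and name. The sketch below shows that naming convention; it is an assumption about how `agt` dispatches, not its actual code:

```python
def handler_name(namespace, directive_name):
    # e.g. ('Alexa.Gadget.StateListener', 'StateUpdate')
    #   -> 'on_alexa_gadget_statelistener_stateupdate'
    combined = namespace + '.' + directive_name
    return 'on_' + combined.replace('.', '_').lower()


assert handler_name('Alexa.Gadget.StateListener', 'StateUpdate') == \
    'on_alexa_gadget_statelistener_stateupdate'
assert handler_name('Notifications', 'SetIndicator') == \
    'on_notifications_setindicator'
assert handler_name('Alerts', 'DeleteAlert') == 'on_alerts_deletealert'
```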
| 33.703125 | 131 | 0.680575 | 500 | 4,314 | 5.794 | 0.262 | 0.03797 | 0.024853 | 0.057991 | 0.470142 | 0.463238 | 0.463238 | 0.463238 | 0.463238 | 0.463238 | 0 | 0.001826 | 0.238526 | 4,314 | 127 | 132 | 33.968504 | 0.880061 | 0.65554 | 0 | 0.333333 | 0 | 0 | 0.023622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.111111 | 0 | 0.481481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f31af71189c0278b02e9b7d22f145978921271f1 | 1,442 | py | Python | tests/test_damerau_levenshtein_distance.py | nokia/PyBGL | e9868361e5a3870b5247872a8c8c91a1c065fe84 | [
"BSD-3-Clause"
] | 11 | 2019-05-20T16:47:03.000Z | 2021-12-17T10:24:22.000Z | tests/test_damerau_levenshtein_distance.py | nokia/PyBGL | e9868361e5a3870b5247872a8c8c91a1c065fe84 | [
"BSD-3-Clause"
] | null | null | null | tests/test_damerau_levenshtein_distance.py | nokia/PyBGL | e9868361e5a3870b5247872a8c8c91a1c065fe84 | [
"BSD-3-Clause"
] | 3 | 2019-05-24T02:24:30.000Z | 2020-03-17T09:55:40.000Z | #!/usr/bin/env pytest-3
# -*- coding: utf-8 -*-
__author__ = "Marc-Olivier Buob"
__maintainer__ = "Marc-Olivier Buob"
__email__ = "marc-olivier.buob@nokia-bell-labs.com"
__copyright__ = "Copyright (C) 2021, Nokia"
__license__ = "BSD-3"
from pybgl.damerau_levenshtein_distance import *
WORDS = [
    "book", "books", "cake",
    "boo", "boon", "cook", "cake", "cape", "cart"
]
def test_damerau_levenshtein_distance_identity():
    for w in ["", "abc"]:
        assert damerau_levenshtein_distance(w, w) == 0


def test_damerau_levenshtein_distance_symmetry():
    w1 = "abcd"
    w2 = "ebgce"
    assert damerau_levenshtein_distance(w1, "") == damerau_levenshtein_distance("", w1)
    assert damerau_levenshtein_distance(w2, "") == damerau_levenshtein_distance("", w2)
    assert damerau_levenshtein_distance(w1, w2) == damerau_levenshtein_distance(w2, w1)


def test_damerau_levenshtein_distance_matches_naive():
    for wi in WORDS:
        for wj in WORDS:
            if wi < wj:
                d1 = damerau_levenshtein_distance_naive(wi, wj)
                d2 = damerau_levenshtein_distance(wi, wj)
                assert d1 == d2


def test_damerau_levenshtein_distance_known_values():
    map_xy_expected = {
        ("ab", "ba"): 1,
        ("ba", "abc"): 2,
        ("fee", "deed"): 2,
    }
    for ((x, y), expected) in map_xy_expected.items():
        obtained = damerau_levenshtein_distance(x, y)
        assert obtained == expected
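For reference, the expected values in the last test can be reproduced with a short optimal string alignment (restricted Damerau-Levenshtein) implementation. This is an illustrative sketch, not pybgl's code:

```python
def osa_distance(a, b):
    """Optimal string alignment (restricted Damerau-Levenshtein) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]


assert osa_distance("abc", "abc") == 0
assert osa_distance("ab", "ba") == 1    # one transposition
assert osa_distance("ba", "abc") == 2   # transpose, then insert 'c'
assert osa_distance("fee", "deed") == 2  # substitute 'f', insert 'd'
```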
| 32.044444 | 87 | 0.635922 | 171 | 1,442 | 5.005848 | 0.415205 | 0.315421 | 0.455607 | 0.116822 | 0.303738 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022462 | 0.228155 | 1,442 | 44 | 88 | 32.772727 | 0.746631 | 0.030513 | 0 | 0.057143 | 0 | 0 | 0.118195 | 0.026504 | 0 | 0 | 0 | 0 | 0.171429 | 1 | 0.114286 | false | 0 | 0.028571 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b827b1a522736058911316912c15784f94c517de | 959 | py | Python | peeringdb/resource.py | alarig/peeringdb-py | 917cda69f7bc05be008faa66875827d408328609 | [
"Apache-2.0"
] | null | null | null | peeringdb/resource.py | alarig/peeringdb-py | 917cda69f7bc05be008faa66875827d408328609 | [
"Apache-2.0"
] | null | null | null | peeringdb/resource.py | alarig/peeringdb-py | 917cda69f7bc05be008faa66875827d408328609 | [
"Apache-2.0"
] | null | null | null | """
PeeringDB resource definitions
"""
from collections import OrderedDict
# Generate classes
_NAMES = OrderedDict(
    [
        ("org", "Organization"),
        ("fac", "Facility"),
        ("net", "Network"),
        ("ix", "InternetExchange"),
        ("ixfac", "InternetExchangeFacility"),
        ("ixlan", "InternetExchangeLan"),
        ("ixpfx", "InternetExchangeLanPrefix"),
        ("netfac", "NetworkFacility"),
        ("netixlan", "NetworkIXLan"),
        ("poc", "NetworkContact"),
    ]
)
RESOURCES_BY_TAG = OrderedDict()
for tag, name in _NAMES.items():
    class Meta(type):
        def __repr__(cls, _name=name):
            return _name

    Class = Meta(name, (), {"tag": tag})
    RESOURCES_BY_TAG[tag] = Class
    locals()[name] = Class
is_resource_tag = RESOURCES_BY_TAG.__contains__
def get_resource(tag):
    return RESOURCES_BY_TAG[tag]


get = get_resource


def all_resources():
    return list(RESOURCES_BY_TAG.values())
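The generation loop above relies on a metaclass trick: defining `__repr__` on the metaclass makes the class object itself print as its friendly name. A minimal self-contained sketch of the same trick:

```python
class Meta(type):
    def __repr__(cls, _name='Organization'):
        # The default argument freezes the name at definition time, mirroring
        # the closure-over-loop-variable idiom in the loop above.
        return _name


# type(name, bases, namespace) via the metaclass builds a real class.
Organization = Meta('Organization', (), {'tag': 'org'})

assert repr(Organization) == 'Organization'
assert Organization.tag == 'org'
assert isinstance(Organization, type)  # it is a genuine class object
```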
| 20.404255 | 47 | 0.616267 | 92 | 959 | 6.130435 | 0.554348 | 0.097518 | 0.124113 | 0.060284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233577 | 959 | 46 | 48 | 20.847826 | 0.767347 | 0.050052 | 0 | 0 | 1 | 0 | 0.219269 | 0.054264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.034483 | 0.103448 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b827c5049998aee470f52ac1b088f9ea6f219154 | 457 | py | Python | tests/db/sql/clauses/test_array_agg.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | [
"MIT"
] | 4 | 2019-08-10T12:56:12.000Z | 2020-01-21T09:51:20.000Z | tests/db/sql/clauses/test_array_agg.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | [
"MIT"
] | 71 | 2019-04-09T05:39:21.000Z | 2020-05-16T23:09:24.000Z | tests/db/sql/clauses/test_array_agg.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | [
"MIT"
] | null | null | null | from unittest.mock import MagicMock
import pytest
from polecat.db.sql.expression.array_agg import ArrayAgg
from polecat.db.sql.sql import Sql
from .conftest import SqlTermTester
def test_to_sql():
    term = ArrayAgg('test')
    sql = Sql(term.to_sql())
    assert str(sql) == 'array_agg("test")'


@pytest.mark.parametrize('test_func', SqlTermTester.ALL_TESTS)
def test_sql_term_methods(test_func):
    term = ArrayAgg(MagicMock())
    test_func(term)
| 22.85 | 62 | 0.746171 | 66 | 457 | 4.984848 | 0.409091 | 0.06383 | 0.079027 | 0.097264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140044 | 457 | 19 | 63 | 24.052632 | 0.83715 | 0 | 0 | 0 | 0 | 0 | 0.065646 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.153846 | false | 0 | 0.384615 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b82e0f8ad8aa623885eb167fbebcf7ad8e1d3f53 | 548 | py | Python | scipy_central/person/tests.py | wangvictor2012/liuwei | 0a06f8fd56d78162f81f1e7e7def7bfdeb4472e1 | [
"BSD-3-Clause"
] | 7 | 2016-02-03T12:44:33.000Z | 2020-08-26T09:22:23.000Z | scipy_central/person/tests.py | wangvictor2012/liuwei | 0a06f8fd56d78162f81f1e7e7def7bfdeb4472e1 | [
"BSD-3-Clause"
] | 19 | 2015-01-20T11:27:22.000Z | 2017-09-23T22:26:18.000Z | scipy_central/person/tests.py | wangvictor2012/liuwei | 0a06f8fd56d78162f81f1e7e7def7bfdeb4472e1 | [
"BSD-3-Clause"
] | 9 | 2015-01-03T02:56:33.000Z | 2021-02-20T10:45:11.000Z | from django.test import TestCase
"""
Tests to add:
* User account: may not contain characters such as #!$&*() etc
* Password must be at least 1 character long
* Reset the password; check that email is received;
* Try reset with an incorrect email link
* Reset with correct email link should work
* Reset with correct email link, but enter two different passwords in the form
"""
class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)
| 24.909091 | 78 | 0.687956 | 80 | 548 | 4.6875 | 0.7125 | 0.072 | 0.085333 | 0.112 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016627 | 0.231752 | 548 | 21 | 79 | 26.095238 | 0.874109 | 0.060219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b83bb5360f919b4728106dee5150ed0baf861cd4 | 7,444 | py | Python | PySimService.py | tjsego/simservice | 1ca1df4c6644f22217645575719cfa72f5b9f895 | [
"MIT"
] | 1 | 2021-08-08T03:15:47.000Z | 2021-08-08T03:15:47.000Z | PySimService.py | tjsego/simservice | 1ca1df4c6644f22217645575719cfa72f5b9f895 | [
"MIT"
] | null | null | null | PySimService.py | tjsego/simservice | 1ca1df4c6644f22217645575719cfa72f5b9f895 | [
"MIT"
] | null | null | null | """
Defines the base class for simulation services
"""
from enum import Enum
from typing import Callable, Optional
class SimStatus(Enum):
    """Simulation status enum"""

    SIM_REGISTERED = 0
    SIM_LOADED = 1
    SIM_INITIALIZED = 2
    SIM_STARTED = 3
    SIM_RUNNING = 4
    SIM_STOPPED = 5
    SIM_FINISHED = 6
    SIM_FAILED = -1
class PySimService:
    """
    Client-side interface for simulation service processes

    Implementations should derive a wrap for an underlying service from this class

    Basic usage is

        sim = PySimService()
        sim.run()
        sim.init()
        sim.start()
        for s in range(S):
            sim.step()
        sim.finish()

    Status reporting is as follows

        sim = PySimService()          : sim.status -> SimStatus.SIM_REGISTERED
        sim.run()                     : sim.status -> SimStatus.SIM_LOADED
        sim.init()                    : sim.status -> SimStatus.SIM_INITIALIZED
        sim.start()                   : sim.status -> SimStatus.SIM_STARTED
        sim.step()                    : sim.status -> SimStatus.SIM_RUNNING
        sim.finish()                  : sim.status -> SimStatus.SIM_FINISHED
        sim.stop()                    : sim.status -> SimStatus.SIM_FINISHED
        sim.stop(terminate_sim=False) : sim.status -> SimStatus.SIM_STOPPED
    """

    def __init__(self, sim_name: str = '', *args, **kwargs):
        # Simulation details
        self._sim_name: str = sim_name
        """Name of simulation"""

        self.beginning_step: int = -1
        """First simulation step"""

        self._current_step: int = -1
        """Current simulation step"""

        # In case of failure
        self._error_message: Optional[str] = None
        """Error message to report on demand, if any"""

        self.status: SimStatus = SimStatus.SIM_REGISTERED
        """Current service status"""

        self._inside_run: Callable[[PySimService], None] = self.inside_run
        """Hook for control in parallel applications"""

    @property
    def current_step(self) -> int:
        """Current simulation step"""
        return self._current_step

    @property
    def error_message(self) -> Optional[str]:
        """Error message to report on demand, if any"""
        return self._error_message
    def run(self):
        """
        Initialize underlying simulation

        All prep for the underlying simulation is complete after this call

        Returned dictionary contains
        - name: the name of the simulation
        - sim: this service instance

        :return: name and reference of this service instance
        """
        self._run()
        self.status = SimStatus.SIM_LOADED

        self._inside_run(self)

        return {'name': self._sim_name, 'sim': self}

    @staticmethod
    def inside_run(self) -> None:
        """
        Called inside run; this supports parallel applications

        To support running a service in parallel, overload this or set it
        via set_inside_run with what to do when this service acts without
        further control from the calling process

        :param self: this service instance
        :type self: PySimService
        :return: None
        """

    def set_inside_run(self, _inside_run_func) -> None:
        """
        Set inside run function

        :param _inside_run_func: inside run function
        :type _inside_run_func: (PySimService) -> None
        :return: None
        """
        self._inside_run = _inside_run_func

    def set_sim_name(self, _sim_name: str) -> None:
        """
        Set simulation name after instantiation

        :param _sim_name: name of the simulation
        :type _sim_name: str
        :return: None
        """
        self._sim_name = _sim_name
    def init(self) -> bool:
        """
        Initialize underlying simulation

        :return: True if initialized; False if further init calls are required
        :rtype: bool
        """
        init_status: bool = self._init()
        if init_status:
            self.status = SimStatus.SIM_INITIALIZED
        return init_status

    def start(self) -> bool:
        """
        After simulation and before stepping

        :return: True if started; False if further start calls are required
        :rtype: bool
        """
        start_status: bool = self._start()
        if start_status:
            self._current_step = self.beginning_step
            self.status = SimStatus.SIM_STARTED
        return start_status

    def step(self) -> bool:
        """
        Execute a step of the underlying simulation

        :return: True if successful, False if something failed
        :rtype: bool
        """
        step_status = self._step()
        if step_status:
            self.status = SimStatus.SIM_RUNNING
            self._current_step += 1
        return step_status

    def finish(self) -> None:
        """
        Execute underlying simulation finish

        :return: None
        """
        self._finish()
        self.status = SimStatus.SIM_FINISHED

    def stop(self, terminate_sim: bool = True) -> None:
        """
        Execute underlying stop

        :param terminate_sim: terminates the simulation if True; default True
        :type terminate_sim: bool
        :return: None
        """
        self._stop(terminate_sim=terminate_sim)
        if terminate_sim:
            self.status = SimStatus.SIM_FINISHED
        else:
            self.status = SimStatus.SIM_STOPPED
    def _run(self) -> None:
        """
        Called by run; all prep for the underlying simulation is complete after this call!

        :return: None
        """
        raise NotImplementedError

    def _init(self) -> bool:
        """
        Called by init; initialize underlying simulation

        :return: True if initialized; False if further init calls are required
        :rtype: bool
        """
        raise NotImplementedError

    def _start(self) -> bool:
        """
        Called by start; after simulation and before stepping

        Should set self.beginning_step to the first step of the current_step counter

        :return: True if started; False if further start calls are required
        :rtype: bool
        """
        raise NotImplementedError

    def _step(self) -> bool:
        """
        Called by step; execute a step of the underlying simulation

        :return: True if successful, False if something failed
        :rtype: bool
        """
        raise NotImplementedError

    def _finish(self) -> None:
        """
        Called by finish; execute underlying simulation finish

        :return: None
        """
        raise NotImplementedError

    def _stop(self, terminate_sim: bool = True) -> None:
        """
        Called by stop; execute underlying simulation stop

        :param terminate_sim: terminates the simulation if True; default True
        :type terminate_sim: bool
        :return: None
        """

    def steer(self) -> bool:
        """
        Execute steering; calling signal for ad-hoc changes to service and underlying simulation data

        :return: True if OK, False if something went wrong
        :rtype: bool
        """
        return True

    @property
    def profiler_report(self) -> str:
        """
        Return on-demand profiling information about the simulation service

        :return: profiling information
        :rtype: str
        """
        return ""
| 27.069091 | 101 | 0.59417 | 827 | 7,444 | 5.204353 | 0.195889 | 0.055762 | 0.05855 | 0.034154 | 0.321794 | 0.245353 | 0.225372 | 0.225372 | 0.193773 | 0.193773 | 0 | 0.00219 | 0.325228 | 7,444 | 274 | 102 | 27.167883 | 0.854669 | 0.464938 | 0 | 0.128205 | 0 | 0 | 0.002385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25641 | false | 0 | 0.025641 | 0 | 0.512821 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b849efed2771731f61e06629a1e09b140def91dd | 365 | py | Python | nequip/model/__init__.py | schiotz/nequip | c343ce25ecfeb64f6df92e96022e673a7714e3a6 | [
"MIT"
] | null | null | null | nequip/model/__init__.py | schiotz/nequip | c343ce25ecfeb64f6df92e96022e673a7714e3a6 | [
"MIT"
] | null | null | null | nequip/model/__init__.py | schiotz/nequip | c343ce25ecfeb64f6df92e96022e673a7714e3a6 | [
"MIT"
] | null | null | null | from ._eng import EnergyModel
from ._grads import ForceOutput
from ._scaling import RescaleEnergyEtc, PerSpeciesRescale
from ._weight_init import uniform_initialize_FCs
from ._build import model_from_config
__all__ = [
"EnergyModel",
"ForceOutput",
"RescaleEnergyEtc",
"PerSpeciesRescale",
"uniform_initialize_FCs",
"model_from_config",
]
| 22.8125 | 57 | 0.769863 | 37 | 365 | 7.108108 | 0.486486 | 0.250951 | 0.152091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156164 | 365 | 15 | 58 | 24.333333 | 0.853896 | 0 | 0 | 0 | 0 | 0 | 0.257534 | 0.060274 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
b859a4d8cdc702c42d0301b920b261c2cf9c08ae | 3,422 | py | Python | mindware/optimizers/base_optimizer.py | aman-gupta-1995/Machine-Learning-Mindware | 8b3050720711730520683c89949e3dbdfb168961 | [
"MIT"
] | 27 | 2021-07-19T09:03:34.000Z | 2022-03-31T06:19:23.000Z | mindware/optimizers/base_optimizer.py | aman-gupta-1995/Machine-Learning-Mindware | 8b3050720711730520683c89949e3dbdfb168961 | [
"MIT"
] | 4 | 2021-07-15T12:17:10.000Z | 2022-01-26T17:16:58.000Z | mindware/optimizers/base_optimizer.py | aman-gupta-1995/Machine-Learning-Mindware | 8b3050720711730520683c89949e3dbdfb168961 | [
"MIT"
] | 17 | 2020-05-12T20:24:50.000Z | 2021-07-11T03:31:38.000Z | import abc
import os
import time
import numpy as np
import pickle as pkl
from mindware.utils.constant import MAX_INT
from mindware.utils.logging_utils import get_logger
from mindware.components.evaluators.base_evaluator import _BaseEvaluator
from mindware.components.utils.topk_saver import CombinedTopKModelSaver


class BaseOptimizer(object):
    def __init__(self, evaluator: _BaseEvaluator, config_space, name, timestamp, eval_type, output_dir=None, seed=None):
        self.evaluator = evaluator
        self.config_space = config_space
        assert name in ['hpo', 'fe']
        self.name = name
        self.seed = np.random.random_integers(MAX_INT) if seed is None else seed
        self.start_time = time.time()
        self.timing_list = list()
        self.incumbent = None
        self.eval_type = eval_type
        self.logger = get_logger(self.__module__ + "." + self.__class__.__name__)
        self.init_hpo_iter_num = None
        self.early_stopped_flag = False
        self.timestamp = timestamp
        self.output_dir = output_dir
        self.topk_saver = CombinedTopKModelSaver(k=50, model_dir=self.output_dir, identifier=self.timestamp)

    @abc.abstractmethod
    def run(self):
        pass

    @abc.abstractmethod
    def iterate(self, budget=MAX_INT):
        pass

    # TODO: Refactor the other optimizers
    def update_saver(self, config_list, perf_list):
        # Check whether all the configs are valid, to avoid storing None into the config file
        all_invalid = True
        for i, perf in enumerate(perf_list):
            if np.isfinite(perf) and perf != MAX_INT:
                all_invalid = False
                if not isinstance(config_list[i], dict):
                    config = config_list[i].get_dictionary().copy()
                else:
                    config = config_list[i].copy()
                if self.evaluator.fixed_config is not None:
                    if not isinstance(self.evaluator.fixed_config, dict):
                        fixed_config = self.evaluator.fixed_config.get_dictionary().copy()
                    else:
                        fixed_config = self.evaluator.fixed_config.copy()
                    config.update(fixed_config)
                classifier_id = config['algorithm']
                # -perf: the larger, the better.
                save_flag, model_path, delete_flag, model_path_deleted = self.topk_saver.add(config, -perf,
                                                                                             classifier_id)
                # By default, the evaluator has already stored the models.
                if self.eval_type in ['holdout', 'partial']:
                    if not save_flag:
                        os.remove(model_path)
                        self.logger.info("Model deleted from %s" % model_path)
                    try:
                        if delete_flag:
                            os.remove(model_path_deleted)
                            self.logger.info("Model deleted from %s" % model_path_deleted)
                    except OSError:
                        pass
            else:
                continue
        if not all_invalid:
            self.topk_saver.save_topk_config()

    def get_evaluation_stats(self):
        return

    def gc(self):
        return
| 38.886364 | 120 | 0.570427 | 380 | 3,422 | 4.905263 | 0.336842 | 0.041309 | 0.038627 | 0.051502 | 0.080472 | 0.080472 | 0.042918 | 0.042918 | 0.042918 | 0 | 0 | 0.000911 | 0.35827 | 3,422 | 87 | 121 | 39.333333 | 0.847905 | 0.058738 | 0 | 0.194444 | 0 | 0 | 0.022077 | 0 | 0 | 0 | 0 | 0.011494 | 0.013889 | 1 | 0.083333 | false | 0.069444 | 0.125 | 0.027778 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
b8654db26747ae44ec12c43d203554508c2c6ed3 | 119 | py | Python | tests/fixtures/tabs/tab1_out.py | spulec/pep8ify | cf1815f7bad9882027289bdb2f77604b68962ca7 | [
"Apache-2.0"
] | 50 | 2015-01-04T04:24:41.000Z | 2021-09-13T01:01:29.000Z | tests/fixtures/tabs/tab1_out.py | michaelBenin/pep8ify | 3cd8a9e8baaab33d66e792185a6870338bdf0911 | [
"Apache-2.0"
] | 2 | 2016-05-15T12:01:28.000Z | 2019-03-08T03:27:20.000Z | tests/fixtures/tabs/tab1_out.py | michaelBenin/pep8ify | 3cd8a9e8baaab33d66e792185a6870338bdf0911 | [
"Apache-2.0"
] | 5 | 2016-07-27T13:44:54.000Z | 2020-11-19T16:00:08.000Z | import foo


class testing():
    def tester(self):
        return self.blah


def tester2():
    print "bleh"
| 10.818182 | 24 | 0.563025 | 14 | 119 | 4.785714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.336134 | 119 | 10 | 25 | 11.9 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0.033613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8806b8f4c29befbbd329c1dfaae5baf9400c456 | 964 | py | Python | mcl/structures/Fp.py | michal21/mcl-python | 12967a2812e50c3a71a9f275b12d3da9d97911ed | [
"MIT"
] | 1 | 2020-10-29T02:32:50.000Z | 2020-10-29T02:32:50.000Z | mcl/structures/Fp.py | michal21/mcl-python | 12967a2812e50c3a71a9f275b12d3da9d97911ed | [
"MIT"
] | null | null | null | mcl/structures/Fp.py | michal21/mcl-python | 12967a2812e50c3a71a9f275b12d3da9d97911ed | [
"MIT"
] | 3 | 2020-10-16T05:55:02.000Z | 2022-02-13T21:27:38.000Z | import ctypes

from .. import builder
from .. import consts
from . import base


@builder.provide_methods(
    builder.method("__add__").using(builder.buildThreeOp).with_args("add"),
    builder.method("__eq__").using(builder.buildIsEqual),
    builder.method("__invert__").using(builder.buildTwoOp).with_args("inv"),
    builder.method("__mul__").using(builder.buildThreeOp).with_args("mul"),
    builder.method("__neg__").using(builder.buildTwoOp).with_args("neg"),
    builder.method("__sub__").using(builder.buildThreeOp).with_args("sub"),
    builder.method("__truediv__").using(builder.buildThreeOp).with_args("div"),
    builder.method("deserialize"),
    builder.method("getStr"),
    builder.method("isOne"),
    builder.method("isZero"),
    builder.method("serialize"),
    builder.method("setByCSPRNG"),
    builder.method("setInt"),
    builder.method("setStr"),
)
class Fp(base.Structure):
    _fields_ = [("v", ctypes.c_ulonglong * consts.FP_SIZE)]
| 35.703704 | 79 | 0.711618 | 110 | 964 | 5.881818 | 0.363636 | 0.301391 | 0.148377 | 0.173107 | 0.290572 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109959 | 964 | 26 | 80 | 37.076923 | 0.754079 | 0 | 0 | 0 | 0 | 0 | 0.139004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b883fce07d08e7dee0c7164c5465cda42f705cd9 | 986 | py | Python | step04.py | xuantum/StockmanDuck | aa328d00511a1024138af2eece9f31f694889f31 | [
"MIT"
] | 1 | 2020-12-24T06:36:26.000Z | 2020-12-24T06:36:26.000Z | step04.py | xuantum/StockmanDuck | aa328d00511a1024138af2eece9f31f694889f31 | [
"MIT"
] | null | null | null | step04.py | xuantum/StockmanDuck | aa328d00511a1024138af2eece9f31f694889f31 | [
"MIT"
] | null | null | null | # Step04 needs "ticker_list.pkl" generated in Step03.
# Step04 downloads half-year PD data of stocks that were picked in Step03, using pandas_datareader.
# Step04 then pickles the PD data into "pd_data.pkl".
print('==============================================')
print(' Step04 Started')
print('==============================================')
import pandas_datareader as pdr
from time import sleep
from pandas import to_pickle, read_pickle
from datetime import date, timedelta
# Load ticker_list generated in Step03
ticker_list = read_pickle('ticker_list.pkl')
print(ticker_list)
# Get half-year data from Yahoo
today = date.today()
startday = today - timedelta(days=183)
pd_data = pdr.get_data_yahoo(ticker_list, startday, today)
# Pickle the pd object(input data of Step05)
to_pickle(pd_data, 'pd_data.pkl')
sleep(1)
print('==============================================')
print(' Step04 Completed!')
print('==============================================')
| 36.518519 | 99 | 0.608519 | 122 | 986 | 4.770492 | 0.418033 | 0.103093 | 0.044674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025522 | 0.125761 | 986 | 26 | 100 | 37.923077 | 0.649652 | 0.315416 | 0 | 0.235294 | 1 | 0 | 0.396707 | 0.275449 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.235294 | 0 | 0.235294 | 0.411765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
b8846436f0713620ffbb7130c2d3cffc2f0c7030 | 1,110 | py | Python | setup.py | ingresso-group/django-debug-toolbar-api-requests | c95a515bba61e1be1ee6870d734b71491a019ace | [
"MIT"
] | 1 | 2020-05-19T20:07:28.000Z | 2020-05-19T20:07:28.000Z | setup.py | ingresso-group/django-debug-toolbar-api-requests | c95a515bba61e1be1ee6870d734b71491a019ace | [
"MIT"
] | 1 | 2018-08-08T14:59:44.000Z | 2018-08-08T14:59:44.000Z | setup.py | ingresso-group/django-debug-toolbar-api-requests | c95a515bba61e1be1ee6870d734b71491a019ace | [
"MIT"
] | null | null | null | # import os
from setuptools import setup

# allow setup.py to be run from any path
# os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))

setup(
    name='django-debug-toolbar-api-requests',
    version='0.1.0',
    packages=['djdt_api_requests', 'djdt_api_requests.panels'],
    include_package_data=True,
    description=(
        'A plugin to the Django Debug Toolbar to record stats on requests '
        'made to APIs using the requests library'
    ),
    long_description=open('README.md').read(),
    author='Ingresso',
    author_email='systems@ingresso.co.uk',
    install_requires=['django-debug-toolbar'],
    classifiers=[
        'Development Status :: 4 - Beta',
        'Framework :: Django',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Natural Language :: English',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',
    ],
)
| 32.647059 | 79 | 0.636036 | 130 | 1,110 | 5.330769 | 0.615385 | 0.137085 | 0.180375 | 0.075036 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011641 | 0.226126 | 1,110 | 33 | 80 | 33.636364 | 0.795111 | 0.113514 | 0 | 0 | 0 | 0 | 0.572449 | 0.080612 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8b6ea646be9425d8fdb344d8e14fe9e914e2b64 | 719 | py | Python | python/p_lesson_7/task2.py | korwil/lessons | cbeb375cea38e143498979b63e133fdf71bf83bf | [
"Apache-2.0"
] | null | null | null | python/p_lesson_7/task2.py | korwil/lessons | cbeb375cea38e143498979b63e133fdf71bf83bf | [
"Apache-2.0"
] | null | null | null | python/p_lesson_7/task2.py | korwil/lessons | cbeb375cea38e143498979b63e133fdf71bf83bf | [
"Apache-2.0"
] | null | null | null | # Task 2
from abc import ABC, abstractmethod


class Clothes(ABC):
    name = None

    @property
    @abstractmethod
    def fabric_consumption(self):
        pass


class Coat(Clothes):
    name = 'Coat'

    def __init__(self, size):
        self.size = size

    @property
    def fabric_consumption(self):
        return self.size / 6.5 + 0.5


class Suit(Clothes):
    name = 'Suit'

    def __init__(self, growth):
        self.growth = growth

    @property
    def fabric_consumption(self):
        return 2 * self.growth + 0.3


c = Coat(65)
s = Suit(180)
print(f'Fabric usage for {c.name} with size {c.size} = {c.fabric_consumption}')
print(f'Fabric usage for {s.name} with height {s.growth} = {s.fabric_consumption}')
| 17.119048 | 70 | 0.62726 | 97 | 719 | 4.515464 | 0.402062 | 0.194064 | 0.136986 | 0.164384 | 0.173516 | 0.173516 | 0 | 0 | 0 | 0 | 0 | 0.024164 | 0.251739 | 719 | 41 | 71 | 17.536585 | 0.789963 | 0.015299 | 0 | 0.24 | 0 | 0 | 0.187234 | 0.062411 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.04 | 0.04 | 0.08 | 0.56 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b8bc962f0301402c9c2acf9b1b47bcb1b3b39107 | 184 | py | Python | cloudsql/utilities.py | amuetter/cloudSQL | c1881a51aa61bf1667cabb3f723b35de6fde96be | [
"MIT"
] | null | null | null | cloudsql/utilities.py | amuetter/cloudSQL | c1881a51aa61bf1667cabb3f723b35de6fde96be | [
"MIT"
] | null | null | null | cloudsql/utilities.py | amuetter/cloudSQL | c1881a51aa61bf1667cabb3f723b35de6fde96be | [
"MIT"
] | null | null | null | def parse_args(r_args):
    columns = r_args.get('columns')
    args = {key: value for (key, value) in r_args.items() if key not in ('columns', 'api_key')}
    return columns, args | 30.666667 | 93 | 0.646739 | 30 | 184 | 3.8 | 0.5 | 0.131579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206522 | 184 | 6 | 94 | 30.666667 | 0.780822 | 0 | 0 | 0 | 0 | 0 | 0.113514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b211c71ee95b7527d604be8209afe870fad6cedb | 600 | py | Python | myweb/xsite.py | RainYang0925/testcase_web | 6698190c426be56bfc54e92b6f99a3de335d5e82 | [
"CC-BY-4.0"
] | 7 | 2017-08-03T08:02:11.000Z | 2021-02-22T02:25:03.000Z | myweb/xsite.py | kian11/testcase_web | 6698190c426be56bfc54e92b6f99a3de335d5e82 | [
"CC-BY-4.0"
] | null | null | null | myweb/xsite.py | kian11/testcase_web | 6698190c426be56bfc54e92b6f99a3de335d5e82 | [
"CC-BY-4.0"
] | null | null | null | # -*- coding:utf-8 -*-
from xadmin import Settings
from xadmin.views.list import ListAdminView
from xadmin.views import CommAdminView
from xadmin.plugins.actions import BaseActionView, ActionPlugin
from django.http import HttpResponse, HttpResponseRedirect


class Base(Settings):
    enable_themes = True
    use_bootswatch = True
    # menu_style = 'default'


# class List(ListAdminView):
#     ListAdminView.list_per_page = 20


class GlobalSetting(CommAdminView):
    CommAdminView.site_title = u'自动化测试用例管理'
    CommAdminView.site_footer = u'轻松筹'


class Comm(Settings):
    menu_style = 'accordion'
| 27.272727 | 62 | 0.768333 | 70 | 600 | 6.471429 | 0.585714 | 0.0883 | 0.066225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005871 | 0.148333 | 600 | 21 | 63 | 28.571429 | 0.880626 | 0.171667 | 0 | 0 | 0 | 0 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.846154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b224a64cf7cc85d948580ba95f6790032e7f2a0a | 1,658 | py | Python | tests/views/test_auth.py | koonwen/YNC-Gym-Counter | e5000f1e8665c275f0a3c66ade7084d94e6526ad | [
"Apache-2.0"
] | null | null | null | tests/views/test_auth.py | koonwen/YNC-Gym-Counter | e5000f1e8665c275f0a3c66ade7084d94e6526ad | [
"Apache-2.0"
] | null | null | null | tests/views/test_auth.py | koonwen/YNC-Gym-Counter | e5000f1e8665c275f0a3c66ade7084d94e6526ad | [
"Apache-2.0"
] | 1 | 2021-11-05T06:15:33.000Z | 2021-11-05T06:15:33.000Z | import pytest
from flask import g, session

from app.db.models import Admin

# TODO implement skipping for pytest
# @pytest.mark.parametrize(('username', 'password', 'message'), (
#     ('', '', b'Username is required.'),
#     ('a', '', b'Password is required.'),
#     ('test', 'test', b'already registered'),
# ))
# def test_register_validate_input(client, username, password, message):
#     response = client.post(
#         '/register',
#         data={'username': username, 'password': password}
#     )
#     assert message in response.data


# def test_register(client, app):
#     assert client.get('/register').status_code == 200
#     response = client.post(
#         '/register', data={'username': 'a', 'password': 'a'}
#     )
#     assert response.headers['Location'] == 'http://localhost/login'
#     with app.app_context():
#         assert Admin.verify_user(username='a', password='a') is not None


@pytest.mark.parametrize(('username', 'password', 'message'), (
    ('a', 'test', b'Incorrect username.'),
    ('test', 'a', b'Incorrect password.')
))
def test_login_validate_input(auth, username, password, message):
    response = auth.login(username, password)
    assert message in response.data


def test_login(client, auth):
    assert client.get('/login').status_code == 200
    response = auth.login()
    assert response.headers["Location"] == 'http://localhost/admin/'

    with client:
        client.get('/')
        assert session['user_id'] == 1
        assert g.user.username == 'test'


def test_logout(client, auth):
    auth.login()

    with client:
        auth.logout()
        assert 'user_id' not in session
| 30.145455 | 74 | 0.627262 | 192 | 1,658 | 5.338542 | 0.291667 | 0.093659 | 0.089756 | 0.056585 | 0.323902 | 0.323902 | 0.081951 | 0.081951 | 0 | 0 | 0 | 0.005307 | 0.204463 | 1,658 | 54 | 75 | 30.703704 | 0.771797 | 0.4807 | 0 | 0.086957 | 0 | 0 | 0.15119 | 0 | 0 | 0 | 0 | 0.018519 | 0.26087 | 1 | 0.130435 | false | 0.173913 | 0.130435 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
b224cde180e86605ee3a29c4c55a1a85f558060d | 971 | py | Python | Brain_Waves_Analysis/Training_Data_Acquisition/src/Board.py | ViniciusALS/Cerebro | 473866662fc93512b90d234f0cee8fea256b22e3 | [
"MIT"
] | 3 | 2020-11-05T15:36:22.000Z | 2021-01-25T20:54:32.000Z | Brain_Waves_Analysis/Training_Data_Acquisition/src/Board.py | ViniciusALS/Cerebro | 473866662fc93512b90d234f0cee8fea256b22e3 | [
"MIT"
] | 2 | 2021-01-27T07:42:58.000Z | 2021-01-27T07:53:28.000Z | Brain_Waves_Analysis/Training_Data_Acquisition/src/Board.py | ViniciusALS/Cerebro | 473866662fc93512b90d234f0cee8fea256b22e3 | [
"MIT"
] | null | null | null | import time

import numpy as np
import brainflow
from brainflow import BoardIds
from brainflow.board_shim import BoardShim, BrainFlowInputParams


class Board:
    def __init__(self):
        self.__board_ID = brainflow.board_shim.BoardIds(BoardIds.CYTON_BOARD)
        self.__parameters = self.__setBoardParameters()
        self.__board = self.__setBoard()
        self.__board.prepare_session()

    def startDataAcquisition(self):
        self.__board.start_stream()

    def getEGG_Data(self):
        self.__board.stop_stream()
        data = self.__board.get_board_data()
        eeg_channels = BoardShim.get_eeg_channels(self.__board_ID)
        eeg_data = data[eeg_channels]
        return eeg_data

    def releaseBoardSession(self):
        self.__board.release_session()

    def __setBoardParameters(self):
        parameters = BrainFlowInputParams()
        parameters.serial_port = "COM3"
        return parameters

    def __setBoard(self):
        BoardShim.enable_dev_board_logger()
        return BoardShim(self.__board_ID, self.__parameters)
| 19.42 | 71 | 0.775489 | 117 | 971 | 5.948718 | 0.350427 | 0.116379 | 0.074713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001198 | 0.140062 | 971 | 49 | 72 | 19.816327 | 0.832335 | 0 | 0 | 0 | 0 | 0 | 0.004119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.178571 | 0 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b2305aa91fe5b020c41774b34b8c9cfbff9c2994 | 8,117 | py | Python | app/accounts/models.py | HenriqueLR/payments | f2f7316fe12b683705e9a78813a86e43c08a2cf6 | [
"MIT"
] | null | null | null | app/accounts/models.py | HenriqueLR/payments | f2f7316fe12b683705e9a78813a86e43c08a2cf6 | [
"MIT"
] | 9 | 2017-06-01T12:28:25.000Z | 2017-10-26T11:21:37.000Z | app/accounts/models.py | HenriqueLR/payments | f2f7316fe12b683705e9a78813a86e43c08a2cf6 | [
"MIT"
] | null | null | null | # coding: utf-8
import re

from django.db.models import Q
from django.db import models
from django.conf import settings
from django.core import validators
from accounts.localflavor.br.br_states import STATE_CHOICES
from django.contrib.auth.models import (AbstractBaseUser, PermissionsMixin, UserManager)


class Account(models.Model):
    id_account = models.AutoField(primary_key=True, verbose_name=u'Cod Account', db_column='id_account')
    cpf = models.CharField(max_length=30, verbose_name=u'Cpf', db_column='cpf', unique=True)
    status_account = models.BooleanField(verbose_name=u'Status', default=False, db_column='status_account')
    status_payment = models.BooleanField(verbose_name=u'Status Payments', default=False, db_column='status_payment')
    updated_at = models.DateTimeField(verbose_name=u'Atualizado em', auto_now=True, db_column='updated_at')
    created_at = models.DateTimeField(verbose_name=u'Criado em', auto_now_add=True)

    def __unicode__(self):
        return u'%s' % self.cpf

    class Meta:
        verbose_name = 'Account'
        verbose_name_plural = 'Account'
        ordering = ['-created_at']
        db_table = 'account'
        permissions = (
            ('add_new_account', 'add account'),
            ('list_new_account', 'list account'),
            ('delete_new_account', 'delete account'),
            ('change_new_account', 'change account'),
            ('detail_new_account', 'detail account'),
        )


class UserManager(models.Manager):
    def list_user(self, user):
        qs = super(UserManager, self).get_queryset().filter(~Q(pk=user.pk))
        if not user.is_superuser and user.is_active:
            qs = qs.filter(account=user.account)
        elif user.is_superuser:
            qs = qs
        else:
            qs = qs.none()
        return qs

    def get_by_natural_key(self, email):
        return self.get(email=email)


class User(AbstractBaseUser, PermissionsMixin):
    username = models.CharField(
        verbose_name=u'Nome de Usuário', max_length=30,
        validators=[validators.RegexValidator(re.compile('^[\A-Za-z]+$'),
                    'O nome de usuário só pode conter letras, digitos ou os '
                    'seguintes caracteres: @/./+/-/_', 'invalid')]
    )
    email = models.EmailField(verbose_name=u'E-mail', unique=True)
    is_active = models.BooleanField(verbose_name=u'Está ativo?', default=False)
    is_staff = models.BooleanField(verbose_name=u'É da equipe?', default=False)
    date_joined = models.DateTimeField(verbose_name=u'Data de Entrada', auto_now_add=True)
    updated_at = models.DateTimeField(verbose_name=u'Atualizado em', auto_now=True, db_column='updated_at')
    account = models.ForeignKey(Account, verbose_name='Conta', related_name='user_account',
                                on_delete=models.CASCADE, null=False)

    objects = UserManager()

    USERNAME_FIELD = 'email'

    def __unicode__(self):
        return u'%s' % self.email

    def natural_key(self):
        return (self.email)

    @property
    def name(self):
        return self.email

    def get_short_name(self):
        return self.email

    def get_full_name(self):
        return str(self)

    @models.permalink
    def get_edit_profile(self):
        return ('accounts:edit_profile', {})

    @models.permalink
    def get_detail_profile(self):
        return ('accounts:detail_profile', {})

    @models.permalink
    def get_edit_user(self):
        return ('accounts:edit_user', [int(self.pk)], {})

    @models.permalink
    def get_delete_user(self):
        return ('accounts:delete_user', [int(self.pk)], {})

    class Meta:
        verbose_name = 'User'
        verbose_name_plural = 'User'
        ordering = ['-date_joined']
        db_table = 'user'
        permissions = (
            ('add_new_user', 'add user'),
            ('list_new_user', 'list user'),
            ('delete_new_user', 'delete user'),
            ('change_new_user', 'change user'),
            ('detail_new_user', 'detail user'),
        )


class PasswordReset(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name='Usuário', related_name='resets_user',
                             on_delete=models.CASCADE, null=False)
    key = models.CharField(verbose_name=u'Chave', max_length=100, unique=True)
    created_at = models.DateTimeField(verbose_name=u'Criado em', auto_now_add=True)
    confirmed = models.BooleanField(verbose_name=u'Confirmado?', default=False)

    def __unicode__(self):
        return u'%s' % self.user

    class Meta:
        verbose_name = 'New Password'
        verbose_name_plural = 'New Password'
        ordering = ['-created_at']
        db_table = 'passwordreset'
        permissions = (
            ('add_new_password', 'add password'),
            ('list_new_password', 'list password'),
            ('delete_new_password', 'delete password'),
            ('change_new_password', 'change password'),
            ('detail_new_password', 'detail password'),
        )


class ProfileManager(models.Manager):
    def list_account(self, user):
        qs = super(ProfileManager, self).get_queryset().filter(~Q(user=user))
        if not user.is_superuser and user.is_active:
            qs = qs.filter(user__account=user.account)
        elif user.is_superuser:
            qs = qs
        else:
            qs = qs.none()
        return qs


class Profile(models.Model):
    ORDER_CHOICE = ((0, 'super'), (1, 'user'))

    id_profile = models.AutoField(primary_key=True, verbose_name=u'id_profile', db_column='id_profile')
    first_name = models.CharField(max_length=50, verbose_name=u'Nome', db_column='first_name',
                                  validators=[validators.RegexValidator(re.compile('^[\A-Za-z]+$'),
                                              'O nome de usuário só pode conter letras', 'invalid')])
    last_name = models.CharField(max_length=100, verbose_name=u'Sobrenome', db_column='last_name')
    status_profile = models.BooleanField(verbose_name=u'Status', default=False, db_column='status_profile')
    state = models.CharField(choices=STATE_CHOICES, max_length=10, verbose_name=u'Estado', db_column='state')
    created_at = models.DateTimeField(verbose_name=u'Data de criação', auto_now_add=True, db_column='date_created')
    updated_at = models.DateTimeField(verbose_name=u'Atualizado em', auto_now=True, db_column='updated_at')
    birthday = models.DateField(verbose_name=u'Aniversario', db_column='birthday', blank=True, null=True)
    url = models.CharField(max_length=100, verbose_name=u'Site', db_column='url', blank=True, null=True)
    description = models.TextField(db_column='description', blank=True, null=True, verbose_name=u'Descricao')
    user = models.OneToOneField(settings.AUTH_USER_MODEL, verbose_name='Usuário', related_name='profile_user',
                                on_delete=models.CASCADE, null=False)
    order = models.IntegerField(choices=ORDER_CHOICE, verbose_name=u'Ordem', db_column='order',
                                default=ORDER_CHOICE[1][0])

    objects = ProfileManager()

    def __unicode__(self):
        return u'%s' % self.first_name

    def get_short_name(self):
        return self.first_name

    def get_full_name(self):
        return str(self)

    @models.permalink
    def active_account(self):
        return ('accounts:active_account', [int(self.pk)], {})

    class Meta:
        verbose_name = 'Profile'
        verbose_name_plural = 'Profile'
        ordering = ['-created_at']
        db_table = 'profile'
        permissions = (
            ('add_new_profile', 'add profile'),
            ('list_new_profile', 'list profile'),
            ('delete_new_profile', 'delete profile'),
            ('change_new_profile', 'change profile'),
            ('detail_new_profile', 'detail profile'),
) | 39.024038 | 117 | 0.628434 | 955 | 8,117 | 5.099476 | 0.183246 | 0.083573 | 0.064066 | 0.043121 | 0.427926 | 0.345175 | 0.331622 | 0.249692 | 0.216838 | 0.196304 | 0 | 0.003626 | 0.252433 | 8,117 | 208 | 118 | 39.024038 | 0.798945 | 0.001602 | 0 | 0.310559 | 0 | 0 | 0.185387 | 0.008484 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111801 | false | 0.055901 | 0.043478 | 0.099379 | 0.534161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
b247f037ae61375f2b14b99fb0d6e385812f3e13 | 367 | py | Python | backend/models.py | MichalKandybowicz/cicida | 92cd0ae5551c0a2e3386feb758bf3b7daf2d205f | [
"MIT"
] | null | null | null | backend/models.py | MichalKandybowicz/cicida | 92cd0ae5551c0a2e3386feb758bf3b7daf2d205f | [
"MIT"
] | null | null | null | backend/models.py | MichalKandybowicz/cicida | 92cd0ae5551c0a2e3386feb758bf3b7daf2d205f | [
"MIT"
] | null | null | null | from django.core.validators import MinValueValidator, MaxValueValidator
from django.db import models
from accounts.models import CustomUser
from django.utils.translation import ugettext_lazy as _
from accounts.models import CustomUser
class Purchase(models.Model):
user = models.ForeignKey(CustomUser, related_name="purchase_user", on_delete=models.PROTECT)
| 28.230769 | 96 | 0.831063 | 46 | 367 | 6.521739 | 0.565217 | 0.1 | 0.12 | 0.16 | 0.226667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106267 | 367 | 12 | 97 | 30.583333 | 0.914634 | 0 | 0 | 0.285714 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.714286 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b2567e4c1272d4eecac9f7a4ed88557d78012849 | 4,370 | py | Python | crypto/bls/bls_bft_replica.py | rhzs/indy-plenum | a1ee6f3d081e802b404637026dc6f8ef3ec82a40 | [
"Apache-2.0"
] | null | null | null | crypto/bls/bls_bft_replica.py | rhzs/indy-plenum | a1ee6f3d081e802b404637026dc6f8ef3ec82a40 | [
"Apache-2.0"
] | null | null | null | crypto/bls/bls_bft_replica.py | rhzs/indy-plenum | a1ee6f3d081e802b404637026dc6f8ef3ec82a40 | [
"Apache-2.0"
] | 1 | 2020-01-24T09:36:13.000Z | 2020-01-24T09:36:13.000Z | from abc import ABCMeta, abstractmethod

from crypto.bls.bls_bft import BlsBft
from plenum.common.messages.node_messages import PrePrepare, Prepare, Commit


class BlsBftReplica(metaclass=ABCMeta):
    PPR_BLS_MULTISIG_WRONG = 1
    CM_BLS_SIG_WRONG = 2

    def __init__(self,
                 bls_bft: BlsBft,
                 is_master):
        self._bls_bft = bls_bft
        self._is_master = is_master

    @abstractmethod
    def validate_pre_prepare(self, pre_prepare: PrePrepare, sender):
        '''
        Validates PrePrepare for correct BLS signatures.
        Raises SuspiciousNode exception if there are errors
        :param pre_prepare: pre-prepare to be validated
        :param sender: sender's Node name
        :return:
        '''
        pass

    @abstractmethod
    def validate_prepare(self, prepare: Prepare, sender):
        '''
        Validates Prepare for correct BLS signatures.
        Raises SuspiciousNode exception if there are errors
        :param prepare: prepare to be validated
        :param sender: sender's Node name
        :return:
        '''
        pass

    @abstractmethod
    def validate_commit(self, commit: Commit, sender, pre_prepare: PrePrepare):
        '''
        Validates Commit for correct BLS signatures.
        Raises SuspiciousNode exception if there are errors
        :param commit: commit to be validated
        :param sender: sender's Node name
        :param pre_prepare: PrePrepare associated with the Commit
        :return:
        '''
        pass

    @abstractmethod
    def process_pre_prepare(self, pre_prepare: PrePrepare, sender):
        '''
        Performs BLS-related logic for a given PrePrepare (for example,
        saving multi-signature calculated by Pre-Prepare for last batches).
        :param pre_prepare: pre-prepare to be processed
        :param sender: sender's Node name
        :return:
        '''
        pass

    @abstractmethod
    def process_prepare(self, prepare: Prepare, sender):
        '''
        Performs BLS-related logic for a given Prepare
        :param prepare: prepare to be processed
        :param sender: sender's Node name
        :return:
        '''
        pass

    @abstractmethod
    def process_commit(self, commit: Commit, sender):
        '''
        Performs BLS-related logic for a given Commit (for example, saving BLS signatures from this Commit)
        :param commit: commit to be processed
        :param sender: sender's Node name
        :return:
        '''
        pass

    @abstractmethod
    def process_order(self, key, quorums, pre_prepare: PrePrepare):
        '''
        Performs BLS-related logic when Ordering (for example, calculate a temporary multi-sig by the current Node
        which will be replaced by Primary's multi-sig in process_prepare).
        :param key: 3PC-key related to the Ordered message
        :param quorums: quorums
        :param pre_prepare: PrePrepare associated with the ordered messages
        :return:
        '''
        pass

    @abstractmethod
    def update_pre_prepare(self, pre_prepare_params, ledger_id):
        '''
        Adds BLS-related parameters to be used for creation of a new PrePrepare
        :param pre_prepare_params: a list of existing parameters
        :param ledger_id: ledger's ID
        :return: pre_prepare_params updated with BLS ones
        '''
        pass

    @abstractmethod
    def update_prepare(self, prepare_params, ledger_id):
        '''
        Adds BLS-related parameters to be used for creation of a new Prepare
        :param prepare_params: a list of existing parameters
        :param ledger_id: ledger's ID
        :return: prepare_params updated with BLS ones
        '''
        pass

    @abstractmethod
    def update_commit(self, commit_params, pre_prepare: PrePrepare):
        '''
        Adds BLS-related parameters to be used for creation of a new Commit
        :param commit_params: a list of existing parameters
        :param pre_prepare: PrePrepare associated with the Commit
        :return: commit_params updated with BLS ones
        '''
        pass

    @abstractmethod
    def gc(self, key_3PC=None):
        """
        Do some cleaning if needed
        :param key_3PC: 3PC-key till which cleaning must be done
        (all is cleaned if not provided)
        """
        pass
| 32.857143 | 114 | 0.641419 | 522 | 4,370 | 5.247126 | 0.224138 | 0.07667 | 0.07667 | 0.069003 | 0.638919 | 0.566265 | 0.566265 | 0.501278 | 0.459657 | 0.408178 | 0 | 0.00195 | 0.295881 | 4,370 | 132 | 115 | 33.106061 | 0.888203 | 0.51373 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.25 | 0.068182 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
b25746127a291678ddbe683728c485ff67e3f35a | 318 | py | Python | ex000.py | Laurahpro/EXERCICIOS-EM-PYTHON | 7eb896e322f183fdbcf3a7d93b21ab5b6162f303 | [
"MIT"
] | null | null | null | ex000.py | Laurahpro/EXERCICIOS-EM-PYTHON | 7eb896e322f183fdbcf3a7d93b21ab5b6162f303 | [
"MIT"
] | null | null | null | ex000.py | Laurahpro/EXERCICIOS-EM-PYTHON | 7eb896e322f183fdbcf3a7d93b21ab5b6162f303 | [
"MIT"
] | null | null | null | nome = input('Qual o seu nome?')
print('É um grande prazer te conhecer,', nome)
idade = input('Quantos anos você tem?')
print('Bacana que você tem', idade,'anos', nome,'!')
filho = input('Você tem filhos?')
print('Que bacana!')
nomeFilho = input('Qual o nome do seu filho?')
print('Então ele se chama', nomeFilho, '!') | 39.75 | 52 | 0.679245 | 50 | 318 | 4.32 | 0.54 | 0.097222 | 0.092593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141509 | 318 | 8 | 53 | 39.75 | 0.791209 | 0 | 0 | 0 | 0 | 0 | 0.514107 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
b260c0aa25c629eb40bb0e5bdaa0ce6efb625770 | 1,197 | py | Python | mempyasync/resourcehelper.py | sumsted/mempy-async | cbbfe8c972c8076668e74fbf1f79299d65e6336a | [
"Apache-2.0"
] | null | null | null | mempyasync/resourcehelper.py | sumsted/mempy-async | cbbfe8c972c8076668e74fbf1f79299d65e6336a | [
"Apache-2.0"
] | null | null | null | mempyasync/resourcehelper.py | sumsted/mempy-async | cbbfe8c972c8076668e74fbf1f79299d65e6336a | [
"Apache-2.0"
] | null | null | null | __author__ = 'scottumsted'
import gc
import resource
import time
class ResourceHelper:
def __init__(self, module=None):
self.module = '' if module is None else '\nmodule:\t'+module+'\n'
gc.disable()
self.reset()
def reset(self):
self.start_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
self.start_utime = resource.getrusage(resource.RUSAGE_SELF).ru_utime
self.start_stime = resource.getrusage(resource.RUSAGE_SELF).ru_stime
self.start_time = time.time()
def usage(self):
stop_utime = resource.getrusage(resource.RUSAGE_SELF).ru_utime
stop_stime = resource.getrusage(resource.RUSAGE_SELF).ru_stime
diff_utime = stop_utime - self.start_utime
diff_stime = stop_stime - self.start_stime
diff_cpu = (stop_utime + stop_stime) - (self.start_utime + self.start_stime)
diff_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss - self.start_mem
        diff_time = time.time() - self.start_time
print('{}mem:\t{:,}\nutime:\t{:f}\nstime:\t{:f}\ncpu:\t{:f}\ntime:\t{:f}\n'.format(self.module, diff_mem, diff_utime, diff_stime, diff_cpu, diff_time)) | 46.038462 | 159 | 0.679198 | 163 | 1,197 | 4.705521 | 0.245399 | 0.11734 | 0.195567 | 0.242503 | 0.388527 | 0.388527 | 0.388527 | 0.388527 | 0.143416 | 0.143416 | 0 | 0 | 0.193818 | 1,197 | 26 | 159 | 46.038462 | 0.794819 | 0 | 0 | 0 | 0 | 0.043478 | 0.07596 | 0.055927 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.130435 | 0 | 0.304348 | 0.043478 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b267a0be277dc89d036ac9ce801fd45bd41ed451 | 770 | py | Python | intro/summary-exercises/examples/plot_optimize_lidar_data_fit.py | zmoon/scipy-lecture-notes | 75a89ddedeb48930dbdb6fe25a76e9ef0587ae21 | [
"CC-BY-4.0"
] | 2,538 | 2015-01-01T04:58:41.000Z | 2022-03-31T21:06:05.000Z | intro/summary-exercises/examples/plot_optimize_lidar_data_fit.py | zmoon/scipy-lecture-notes | 75a89ddedeb48930dbdb6fe25a76e9ef0587ae21 | [
"CC-BY-4.0"
] | 362 | 2015-01-18T14:16:23.000Z | 2021-11-18T16:24:34.000Z | intro/summary-exercises/examples/plot_optimize_lidar_data_fit.py | zmoon/scipy-lecture-notes | 75a89ddedeb48930dbdb6fe25a76e9ef0587ae21 | [
"CC-BY-4.0"
] | 1,127 | 2015-01-05T14:39:29.000Z | 2022-03-25T08:38:39.000Z | """
The lidar system, data and fit (1 of 2 datasets)
================================================
Generate a chart of the data fitted by Gaussian curve
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq
def model(t, coeffs):
return coeffs[0] + coeffs[1] * np.exp(- ((t-coeffs[2])/coeffs[3])**2)
def residuals(coeffs, y, t):
return y - model(t, coeffs)
waveform_1 = np.load('waveform_1.npy')
t = np.arange(len(waveform_1))
x0 = np.array([3, 30, 15, 1], dtype=float)
x, flag = leastsq(residuals, x0, args=(waveform_1, t))
print(x)
fig, ax = plt.subplots(figsize=(8, 6))
plt.plot(t, waveform_1, t, model(t, x))
plt.xlabel('Time [ns]')
plt.ylabel('Amplitude [bins]')
plt.legend(['Waveform', 'Model'])
plt.show()
| 22 | 73 | 0.628571 | 124 | 770 | 3.862903 | 0.548387 | 0.093946 | 0.050104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033384 | 0.144156 | 770 | 34 | 74 | 22.647059 | 0.693475 | 0.197403 | 0 | 0 | 1 | 0 | 0.085246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.166667 | 0.111111 | 0.388889 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b26d586e3c72dd097aa813377afb6cb1bec96e72 | 535 | py | Python | app/main/forms.py | Nyirabazungu/pitcher-app | 65f9a98d4df6a2966840d0703bf081aa27d3b324 | [
"MIT"
] | null | null | null | app/main/forms.py | Nyirabazungu/pitcher-app | 65f9a98d4df6a2966840d0703bf081aa27d3b324 | [
"MIT"
] | null | null | null | app/main/forms.py | Nyirabazungu/pitcher-app | 65f9a98d4df6a2966840d0703bf081aa27d3b324 | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField,TextAreaField,SubmitField
from wtforms.validators import Required
class PitchForm(FlaskForm):
    pitches = TextAreaField('Write your Pitch', validators=[Required()])
submit = SubmitField('Submit')
class UpdateProfile(FlaskForm):
    bio = TextAreaField('Tell us about you.', validators=[Required()])
submit = SubmitField('Submit')
class CommentForm(FlaskForm):
    comment = TextAreaField('Comment', validators=[Required()])
submit = SubmitField('Submit') | 29.722222 | 71 | 0.75514 | 54 | 535 | 7.462963 | 0.481481 | 0.133995 | 0.17866 | 0.260546 | 0.330025 | 0.228288 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130841 | 535 | 18 | 72 | 29.722222 | 0.866667 | 0 | 0 | 0.25 | 0 | 0 | 0.110075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b29521c7a015bd44e52db5b543c3a1f07ffbf977 | 6,055 | py | Python | app/template_controller.py | MaxKusnadi/tax-calculator | 9de1bd426b26e43e8cb23daca668b07dd1c68535 | [
"MIT"
] | null | null | null | app/template_controller.py | MaxKusnadi/tax-calculator | 9de1bd426b26e43e8cb23daca668b07dd1c68535 | [
"MIT"
] | 3 | 2018-12-05T14:55:18.000Z | 2018-12-07T15:52:00.000Z | app/template_controller.py | MaxKusnadi/tax-calculator | 9de1bd426b26e43e8cb23daca668b07dd1c68535 | [
"MIT"
] | null | null | null | import dash_html_components as html
from dash.dependencies import Input, Output, State
from app import app
from app.utils import Utils
from app.logic.tax_calculator import TaxCalculator
tax_calculator = TaxCalculator()
@app.callback(
Output('annual-salary', 'children'),
[Input('monthly-salary', 'n_submit'), Input('monthly-salary', 'n_blur')],
[State('monthly-salary', 'value')]
)
def get_annual_salary(n_submit, n_blur, value):
annual_salary = Utils.get_annual_income(value)
return html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Annual salary: "]),
html.Div(className="col-sm-3",
children=["S$ {0:.2f}".format(annual_salary)])
])
@app.callback(
Output('taxable-salary', 'children'),
[Input('monthly-salary', 'n_submit'), Input('monthly-salary', 'n_blur'),
Input('annual-bonus', 'n_submit'), Input('annual-bonus', 'n_blur'),
Input('tax-rebates', 'n_submit'), Input('tax-rebates', 'n_blur'),
Input('donation', 'value')],
[State('monthly-salary', 'value'),
State('annual-bonus', 'value'),
State('tax-rebates', 'value')]
)
def get_taxable_salary(ns1, nb1, ns2, nb2, ns3, nb3, donation, monthly_salary, annual_bonus, tax_rebates):
annual_salary = Utils.get_annual_income(monthly_salary)
taxable_salary = Utils.get_taxable_income(annual_salary, annual_bonus, tax_rebates, donation)
return html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Taxable salary: "]),
html.Div(className="col-sm-3",
children=["S$ {0:.2f}".format(taxable_salary)])
])
@app.callback(
Output('donation', 'max'),
[Input('monthly-salary', 'n_submit'), Input('monthly-salary', 'n_blur'),
Input('annual-bonus', 'n_submit'), Input('annual-bonus', 'n_blur')],
[State('monthly-salary', 'value'),
State('annual-bonus', 'value')]
)
def get_maximum_donation(ns1, nb1, ns2, nb2, monthly_salary, annual_bonus):
annual_salary = Utils.get_annual_income(monthly_salary)
return annual_salary + annual_bonus
@app.callback(
Output('donation', 'marks'),
[Input('monthly-salary', 'n_submit'), Input('monthly-salary', 'n_blur'),
Input('annual-bonus', 'n_submit'), Input('annual-bonus', 'n_blur')],
[State('monthly-salary', 'value'),
State('annual-bonus', 'value')]
)
def get_maximum_donation_mark(ns1, nb1, ns2, nb2, monthly_salary, annual_bonus):
annual_salary = Utils.get_annual_income(monthly_salary)
max_donation = annual_salary + annual_bonus
return {
0: 'S$ 0',
max_donation: "{0:.2f}".format(max_donation)
}
@app.callback(
Output('current-donation', 'value'),
[Input('donation', 'value')]
)
def get_maximum_donation_label(donation):
return donation
@app.callback(
Output('donation', 'value'),
[Input('current-donation', 'n_submit'), Input('current-donation', 'n_blur')],
[State('current-donation', 'value')]
)
def set_donation_value(ns, nb, donation):
return donation
@app.callback(
Output('tax-info', 'children'),
[Input('monthly-salary', 'n_submit'), Input('monthly-salary', 'n_blur'),
Input('annual-bonus', 'n_submit'), Input('annual-bonus', 'n_blur'),
Input('tax-rebates', 'n_submit'), Input('tax-rebates', 'n_blur'),
Input('donation', 'value')],
[State('monthly-salary', 'value'),
State('annual-bonus', 'value'),
State('tax-rebates', 'value')]
)
def get_tax_info(ns1, nb1, ns2, nb2, ns3, nb3, donation, monthly_salary, annual_bonus, tax_rebates):
tax_info = tax_calculator.calculate_tax(monthly_salary, annual_bonus, tax_rebates, donation)
print(tax_info)
return [
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Total Annual Tax: "]),
html.Div(className="col-sm-3",
children=["S$ {0:.2f}".format(tax_info.total_tax)])
]),
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Total Monthly Tax: "]),
html.Div(className="col-sm-3",
children=["S$ {0:.2f}".format(tax_info.total_tax / 12)])
]),
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Tax Tier: "]),
html.Div(className="col-sm-3",
children=["{}".format(tax_info.tier_level)])
]),
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Tax Rate: "]),
html.Div(className="col-sm-3",
children=["{0:.2f}%".format(tax_info.tier_info['tax_rate'])])
]),
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Tier Upper Limit: "]),
html.Div(className="col-sm-3",
children=["S$ {}".format(tax_info.tier_info['upper_limit'])])
]),
html.Div(className="row mt-1",
children=[
html.Div(className="col-sm-3 mr-1",
children=["Tier Lower Limit: "]),
html.Div(className="col-sm-3",
children=["S$ {}".format(tax_info.tier_info['lower_limit'])])
]),
]
| 39.835526 | 106 | 0.546821 | 682 | 6,055 | 4.695015 | 0.104106 | 0.052467 | 0.119925 | 0.094941 | 0.768582 | 0.752967 | 0.687383 | 0.644285 | 0.630231 | 0.630231 | 0 | 0.015814 | 0.289843 | 6,055 | 151 | 107 | 40.099338 | 0.728837 | 0 | 0 | 0.567164 | 0 | 0 | 0.216185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052239 | false | 0 | 0.037313 | 0.014925 | 0.141791 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b295adcbecace89aae63b84ee21aa2cb2868aee6 | 7,370 | py | Python | tests/ut/python/pipeline/parse/test_parse.py | Gavin-Hoang/mindspore | f745ae0799a0840ebba18021c250f0089325a414 | [
"Apache-2.0"
] | 2 | 2020-08-12T16:14:40.000Z | 2020-12-04T03:05:57.000Z | tests/ut/python/pipeline/parse/test_parse.py | Gavin-Hoang/mindspore | f745ae0799a0840ebba18021c250f0089325a414 | [
"Apache-2.0"
] | null | null | null | tests/ut/python/pipeline/parse/test_parse.py | Gavin-Hoang/mindspore | f745ae0799a0840ebba18021c250f0089325a414 | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
@File : test_parse.py
@Author:
@Date : 2019-01-23 17:13
@Desc :
"""
import logging
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor
from mindspore import context
from mindspore.ops import composite as C
from mindspore.common.api import ms_function, _executor
from mindspore.ops._grad.grad_base import bprop_getters
from mindspore.ops.primitive import prim_attr_register, PrimitiveWithInfer
from mindspore.ops.functional import tensor_add
from ...ut_filter import non_graph_engine
# pylint: disable=W0613,W0612
# W0613: unused-argument
log = logging.getLogger("test")
log.setLevel(level=logging.ERROR)
context.set_context(mode=context.GRAPH_MODE)
# Test case: use the parse-object interface with a default parameter
class Net(nn.Cell):
""" Net definition """
def __init__(self, dim):
super(Net, self).__init__()
self.softmax1 = nn.Softmax(dim)
self.softmax2 = nn.Softmax(dim + 1)
def construct(self, input_data, input1=ms.Tensor(np.random.randn(2, 3, 4, 5).astype(np.float32))):
return self.softmax1(input_data)
@non_graph_engine
def test_parse_defalut_parameter_case2():
""" test_parse_defalut_parameter_case2 """
log.debug("begin test_parse_defalut_parameter_case2")
net = Net(0)
npd = np.array([[1.2, 2.1], [2.2, 3.2]]).astype('float32')
log.debug("input value is: %r", npd)
input_data = ms.Tensor(npd)
input_data.set_dtype(ms.float32)
log.debug("start run")
output = net(input_data)
value = output.asnumpy()
log.debug("output value = %r", value)
# Test case: use the variable parameter for parse object
class Net1(nn.Cell):
""" Net1 definition """
def __init__(self):
super(Net1, self).__init__()
def construct(self, *args):
x = args[0]
return x
def test_var_parameter_case2():
""" test_var_parameter_case2 """
log.debug("begin test_var_parameter_case2")
net = Net1()
npd = np.array([[1.2, 2.1], [2.2, 3.2]]).astype('float32')
log.debug("input value is: %r", npd)
input_data = ms.Tensor(npd)
input_data.set_dtype(ms.float32)
np1 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input1 = ms.Tensor(np1)
np2 = np.random.randn(2, 3, 4, 5).astype(np.float32)
input2 = ms.Tensor(np2)
_executor.compile(net, input_data, input1, input2)
# Test case: test the global flag
g_x = Tensor(np.ones([3, 3]).astype(np.float32))
@ms_function
def tensor_add_global(x):
""" tensor_add_global """
global g_x
res = tensor_add(x, g_x)
return res
@non_graph_engine
def test_global_flag():
""" test_global_flag """
log.debug("begin test_global_flag")
x = Tensor(np.ones([3, 3]).astype(np.float32))
res = tensor_add_global(x)
log.debug("finished test_global_flag, ret = %r", res)
class NetWithNDarray(nn.Cell):
""" NetWithNDarray definition """
def __init__(self, dim):
super(NetWithNDarray, self).__init__()
self.softmax = nn.Softmax(dim)
self.x = ms.Tensor(np.ones(shape=(1)).astype(np.float32))
def construct(self, input_data):
return self.softmax(input_data) * self.x
@non_graph_engine
def test_net_with_ndarray():
""" test_net_with_ndarray """
net = NetWithNDarray(0)
input_data = np.array([[1.2, 2.1], [2.2, 3.2]]).astype('float32')
net(ms.Tensor(input_data))
def test_bprop_with_wrong_output_num():
context.set_context(check_bprop=True)
class BpropWithWrongOutputNum(PrimitiveWithInfer):
@prim_attr_register
def __init__(self):
super(BpropWithWrongOutputNum, self).__init__('BpropWithWrongOutputNum')
def __call__(self, x, y):
return x
def infer_shape(self, x_shape, yshape):
return x_shape
def infer_dtype(self, x_type, y_type):
return x_type
@bprop_getters.register(BpropWithWrongOutputNum)
def get_bprop_with_wrong_output_num(self):
"""Generate bprop for BpropWithWrongOutputNum"""
def bprop(x, y, out, dout):
return (dout,)
return bprop
class BpropWithWrongOutputNumCell(nn.Cell):
def __init__(self):
super(BpropWithWrongOutputNumCell, self).__init__()
def construct(self, x, y):
return BpropWithWrongOutputNum()(x, y)
with pytest.raises(TypeError):
C.grad_all(BpropWithWrongOutputNumCell())(1, 2)
def test_bprop_with_wrong_output_type():
context.set_context(check_bprop=True)
class BpropWithWrongOutputType(PrimitiveWithInfer):
@prim_attr_register
def __init__(self):
super(BpropWithWrongOutputType, self).__init__('BpropWithWrongOutputType')
def __call__(self, x):
return x
def infer_shape(self, x_shape):
return x_shape
def infer_dtype(self, x_type):
return x_type
@bprop_getters.register(BpropWithWrongOutputType)
def get_bprop_with_wrong_output_type(self):
"""Generate bprop for BpropWithWrongOutputType"""
def bprop(x, out, dout):
return (1,)
return bprop
class BpropWithWrongOutputTypeCell(nn.Cell):
def __init__(self):
super(BpropWithWrongOutputTypeCell, self).__init__()
def construct(self, x):
return BpropWithWrongOutputType()(x)
with pytest.raises(TypeError):
C.grad_all(BpropWithWrongOutputTypeCell())(Tensor(np.ones([64, 10]).astype(np.int32)))
def test_bprop_with_wrong_output_shape():
context.set_context(check_bprop=True)
class BpropWithWrongOutputShape(PrimitiveWithInfer):
@prim_attr_register
def __init__(self):
super(BpropWithWrongOutputShape, self).__init__('BpropWithWrongOutputShape')
def __call__(self, x):
return x
def infer_shape(self, x_shape):
return x_shape
def infer_dtype(self, x_type):
return x_type
@bprop_getters.register(BpropWithWrongOutputShape)
def get_bprop_with_wrong_output_shape(self):
"""Generate bprop for BpropWithWrongOutputShape"""
ones = Tensor(np.ones([2,]).astype(np.int32))
def bprop(x, out, dout):
return (ones,)
return bprop
class BpropWithWrongOutputShapeCell(nn.Cell):
def __init__(self):
super(BpropWithWrongOutputShapeCell, self).__init__()
def construct(self, x):
return BpropWithWrongOutputShape()(x)
with pytest.raises(TypeError):
net = BpropWithWrongOutputShapeCell()
net.set_grad()
C.grad_all(net)(Tensor(np.ones([64, 10]).astype(np.int32)))
| 29.015748 | 102 | 0.669878 | 943 | 7,370 | 4.988335 | 0.223754 | 0.014881 | 0.021046 | 0.02381 | 0.385417 | 0.330995 | 0.242985 | 0.185374 | 0.134779 | 0.114796 | 0 | 0.023838 | 0.20882 | 7,370 | 253 | 103 | 29.130435 | 0.782885 | 0.1654 | 0 | 0.342105 | 0 | 0 | 0.047179 | 0.021445 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.085526 | 0.111842 | 0.539474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b297e45185fae6af8c48b427526a973481d69e16 | 283 | py | Python | matzip_image/views.py | Qone2/real-matzip-backend | 8448e28c928dd9117163055643746b86d99ae692 | [
"MIT"
] | null | null | null | matzip_image/views.py | Qone2/real-matzip-backend | 8448e28c928dd9117163055643746b86d99ae692 | [
"MIT"
] | null | null | null | matzip_image/views.py | Qone2/real-matzip-backend | 8448e28c928dd9117163055643746b86d99ae692 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
from django.template import loader
def index(request):
template = loader.get_template('matzip_image/index.html')
    return HttpResponse(template.render({}, request))
# def image_show(request, file_name):
#
| 23.583333 | 61 | 0.787986 | 37 | 283 | 5.918919 | 0.540541 | 0.136986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123675 | 283 | 11 | 62 | 25.727273 | 0.883065 | 0.123675 | 0 | 0 | 0 | 0 | 0.093878 | 0.093878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b29d5a4083ec5d6c11d0029ec08b9c76886af89b | 1,763 | py | Python | games/spiders/spider.py | winkelmantanner/mmai20-dumbsophomores | bd2c88ea34cf9be83e2efd59abf2adf9a1a056fa | [
"MIT"
] | null | null | null | games/spiders/spider.py | winkelmantanner/mmai20-dumbsophomores | bd2c88ea34cf9be83e2efd59abf2adf9a1a056fa | [
"MIT"
] | null | null | null | games/spiders/spider.py | winkelmantanner/mmai20-dumbsophomores | bd2c88ea34cf9be83e2efd59abf2adf9a1a056fa | [
"MIT"
] | null | null | null | # Spider: A Spider in the game. The most basic unit.
# DO NOT MODIFY THIS FILE
# Never try to directly create an instance of this class, or modify its member variables.
# Instead, you should only be reading its variables and calling its functions.
from games.spiders.game_object import GameObject
# <<-- Creer-Merge: imports -->> - Code you add between this comment and the end comment will be preserved between Creer re-runs.
# you can add additional import(s) here
# <<-- /Creer-Merge: imports -->>
class Spider(GameObject):
"""The class representing the Spider in the Spiders game.
A Spider in the game. The most basic unit.
"""
def __init__(self):
"""Initializes a Spider with basic logic as provided by the Creer code generator."""
GameObject.__init__(self)
# private attributes to hold the properties so they appear read only
self._is_dead = False
self._nest = None
self._owner = None
@property
def is_dead(self):
"""If this Spider is dead and has been removed from the game.
:rtype: bool
"""
return self._is_dead
@property
def nest(self):
"""The Nest that this Spider is currently on. None when moving on a Web.
:rtype: Nest
"""
return self._nest
@property
def owner(self):
"""The Player that owns this Spider, and can command it.
:rtype: Player
"""
return self._owner
# <<-- Creer-Merge: functions -->> - Code you add between this comment and the end comment will be preserved between Creer re-runs.
# if you want to add any client side logic (such as state checking functions) this is where you can add them
# <<-- /Creer-Merge: functions -->>
| 32.054545 | 135 | 0.655133 | 249 | 1,763 | 4.566265 | 0.437751 | 0.03518 | 0.029024 | 0.021108 | 0.191733 | 0.191733 | 0.191733 | 0.191733 | 0.191733 | 0.135444 | 0 | 0 | 0.261486 | 1,763 | 54 | 136 | 32.648148 | 0.873272 | 0.669881 | 0 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.0625 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a22c7d4bf85e09e6ea43da9912a236c43d81caf2 | 638 | py | Python | aoc_2020/run_all.py | n1ckdm/advent-of-code-2020 | 913ea4cff29fa76df15c0c22616cc1eebb903490 | [
"MIT"
] | 1 | 2020-12-05T09:25:03.000Z | 2020-12-05T09:25:03.000Z | aoc_2020/run_all.py | n1ckdm/advent-of-code-2020 | 913ea4cff29fa76df15c0c22616cc1eebb903490 | [
"MIT"
] | null | null | null | aoc_2020/run_all.py | n1ckdm/advent-of-code-2020 | 913ea4cff29fa76df15c0c22616cc1eebb903490 | [
"MIT"
] | null | null | null | from . import day1, day2, day3, day4, day5, day6, day7, day8
from . import inputs
if __name__ == "__main__":
day1.part1(inputs.day1.data)
day1.part2(inputs.day1.data)
day2.part1(inputs.day2.data)
day2.part2(inputs.day2.data)
day3.part1(inputs.day3.data)
day3.part2(inputs.day3.data)
day4.part1(inputs.day4.data)
day4.part2(inputs.day4.data)
day5.part1(inputs.day5.data)
day5.part2(inputs.day5.data)
day6.part1(inputs.day6.data)
day6.part2(inputs.day6.data)
day7.part1(inputs.day7.data)
day7.part2(inputs.day7.data)
day8.part1(inputs.day8.data)
day8.part2(inputs.day8.data)
| 30.380952 | 60 | 0.695925 | 96 | 638 | 4.541667 | 0.177083 | 0.201835 | 0.06422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104089 | 0.15674 | 638 | 20 | 61 | 31.9 | 0.70632 | 0 | 0 | 0 | 0 | 0 | 0.012539 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a238a1ceea17ace23e3bc7ba0790cd397f800bc7 | 1,936 | py | Python | lsg/gene.py | nemanja-m/learning-space-generator | 376147fa4240aef5ab5fea9953ac026622943c4b | [
"MIT"
] | 2 | 2021-01-06T15:00:08.000Z | 2021-03-18T20:58:55.000Z | lsg/gene.py | nemanja-m/learning-space-generator | 376147fa4240aef5ab5fea9953ac026622943c4b | [
"MIT"
] | null | null | null | lsg/gene.py | nemanja-m/learning-space-generator | 376147fa4240aef5ab5fea9953ac026622943c4b | [
"MIT"
] | 1 | 2021-07-07T12:42:58.000Z | 2021-07-07T12:42:58.000Z | import random
from .structure import KnowledgeState
class Gene:
def __init__(self, key):
self.key = key
    def __eq__(self, other: 'Gene') -> bool:
return self.key == other.key
def distance(self, other: 'Gene') -> int:
raise NotImplementedError()
def copy(self) -> 'Gene':
raise NotImplementedError()
def crossover(self, other: 'Gene') -> 'Gene':
raise NotImplementedError()
def mutate(self) -> None:
raise NotImplementedError()
class KnowledgeStateGene(Gene):
def __init__(self, state: KnowledgeState):
key = state.to_bitstring()
super().__init__(key)
self.knowledge_state = state
def distance(self, other: 'KnowledgeStateGene') -> int:
return self.knowledge_state.distance(other.knowledge_state)
def copy(self) -> Gene:
return KnowledgeStateGene(state=self.knowledge_state)
def crossover(self, other: 'KnowledgeStateGene') -> Gene:
assert self.key == other.key, 'Gene keys must be same.'
# Inherit attributes from random parent.
state = self.knowledge_state if random.random() > 0.5 else other.knowledge_state
return KnowledgeStateGene(state=state)
def mutate(self) -> Gene:
bitarray = self.knowledge_state._bitarray
        # Only flip 0 bits to 1. This should speed up convergence.
zero_bit_indexes = [bit_idx for bit_idx, bit in enumerate(bitarray) if not bit]
# No new gene is created when all bits are 1.
if not zero_bit_indexes:
return self
idx = random.choice(zero_bit_indexes)
new_bitarray = [0] * len(bitarray)
new_bitarray[idx] = 1
state_mask = KnowledgeState(new_bitarray)
new_state = self.knowledge_state | state_mask
return KnowledgeStateGene(state=new_state)
def __lt__(self, other):
return self.knowledge_state < other.knowledge_state
| 29.333333 | 88 | 0.657025 | 230 | 1,936 | 5.326087 | 0.3 | 0.114286 | 0.102857 | 0.056327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004814 | 0.248967 | 1,936 | 65 | 89 | 29.784615 | 0.837689 | 0.071281 | 0 | 0.097561 | 0 | 0 | 0.044036 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 1 | 0.292683 | false | 0 | 0.04878 | 0.097561 | 0.560976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a257742eec39f35c6d5fd82dfdb85767267136de | 997 | py | Python | binarysearch.io/22_add_binary_numbers.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | 2 | 2020-06-25T21:10:32.000Z | 2020-12-10T06:53:45.000Z | binarysearch.io/22_add_binary_numbers.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | null | null | null | binarysearch.io/22_add_binary_numbers.py | mishrakeshav/Competitive-Programming | b25dcfeec0fb9a9c71bf3a05644b619f4ca83dd2 | [
"MIT"
] | 3 | 2020-05-15T14:17:09.000Z | 2021-07-25T13:18:20.000Z | class Solution:
def solve(self, a, b):
def bin_sum(x,y,c):
x = int(x)
y = int(y)
c = int(c)
si = (x+y+c)%2
c = (x+y+c)//2
if c:
c = 1
return str(si),str(c)
n , m = len(a),len(b)
i,j = n-1,m-1
ans = ''
c = '0'
while i >= 0 and j >= 0:
si,c = bin_sum(a[i],b[j],c)
ans = si + ans
i -= 1
j -= 1
while i >= 0:
if c:
si,c = bin_sum(0,a[i],c)
ans = si + ans
else:
ans = a[:i+1] + ans
i -= 1
while j >= 0:
if c:
si , c = bin_sum(0,b[j],c)
ans = si + ans
else:
ans = b[:j+1] + ans
j -= 1
ans = c + ans
ans = str(int(ans))
return ans
| 23.738095 | 42 | 0.268806 | 135 | 997 | 1.955556 | 0.214815 | 0.090909 | 0.034091 | 0.102273 | 0.276515 | 0.276515 | 0.106061 | 0.106061 | 0 | 0 | 0 | 0.045 | 0.598796 | 997 | 42 | 43 | 23.738095 | 0.615 | 0 | 0 | 0.324324 | 0 | 0 | 0.001002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0 | 0 | 0.135135 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |