hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26da2fe9f135fe6b562ac20a884cdc708ce8db44 | 298 | py | Python | sparse/__init__.py | pbloem/sparse-hyper | a1b2186e2d0c9047085ef4273909d207facbeeb3 | [
"MIT"
] | 47 | 2018-10-23T11:20:30.000Z | 2020-12-29T18:31:43.000Z | sparse/__init__.py | pbloem/sparse-hyper | a1b2186e2d0c9047085ef4273909d207facbeeb3 | [
"MIT"
] | null | null | null | sparse/__init__.py | pbloem/sparse-hyper | a1b2186e2d0c9047085ef4273909d207facbeeb3 | [
"MIT"
] | 3 | 2018-10-23T22:02:19.000Z | 2019-09-23T16:23:04.000Z | from .sort import Split, SortLayer
from .layers import SparseLayer, NASLayer, Convolution, transform_means, transform_sigmas
from .layers import ngenerate, transform_means, densities
from .tensors import contract, logsoftmax, batchmm, simple_normalize
# from .tensors import flatten_indices_mat
| 33.111111 | 89 | 0.832215 | 36 | 298 | 6.722222 | 0.638889 | 0.082645 | 0.132231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114094 | 298 | 8 | 90 | 37.25 | 0.916667 | 0.134228 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
26f01a8e2a7a43f1518dc97d9edc0dac95fd28e1 | 58 | py | Python | makesure/__init__.py | faris404/make-sure | 00dc5728a1fb6aece54392922b0eb8237e4866ad | [
"MIT"
] | 2 | 2021-05-25T08:24:33.000Z | 2021-08-20T02:57:37.000Z | makesure/__init__.py | faris404/make-sure | 00dc5728a1fb6aece54392922b0eb8237e4866ad | [
"MIT"
] | null | null | null | makesure/__init__.py | faris404/make-sure | 00dc5728a1fb6aece54392922b0eb8237e4866ad | [
"MIT"
] | null | null | null | from makesure.main import make_sure,Regx,MakeSureException | 58 | 58 | 0.896552 | 8 | 58 | 6.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 58 | 1 | 58 | 58 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f88d3584fb96e766b457253c62da17d828f3acd4 | 44 | py | Python | snoop/data/management/commands/__init__.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | null | null | null | snoop/data/management/commands/__init__.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | 168 | 2019-11-07T12:38:07.000Z | 2021-04-19T09:53:51.000Z | snoop/data/management/commands/__init__.py | liquidinvestigations/hoover-snoop2 | 28e328401609f53fb56abaa4817619085aa3fbee | [
"MIT"
] | null | null | null | """Django management commands directory."""
| 22 | 43 | 0.75 | 4 | 44 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.825 | 0.840909 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f89bbcc510da6f5331da41bd5dd8a97d49d2d9b3 | 98 | py | Python | powerlibs/gdal/utils/gdal2tiles/__init__.py | DroneMapp/powerlibs-gdal-utils | eb1714c240c995868d0ac4d3dbe8ae20bdf401b7 | [
"MIT"
] | null | null | null | powerlibs/gdal/utils/gdal2tiles/__init__.py | DroneMapp/powerlibs-gdal-utils | eb1714c240c995868d0ac4d3dbe8ae20bdf401b7 | [
"MIT"
] | null | null | null | powerlibs/gdal/utils/gdal2tiles/__init__.py | DroneMapp/powerlibs-gdal-utils | eb1714c240c995868d0ac4d3dbe8ae20bdf401b7 | [
"MIT"
] | 1 | 2021-05-24T14:34:40.000Z | 2021-05-24T14:34:40.000Z | from .non_raster import Geodetic, Mercator # NOQA: F401
from .raster import Raster # NOQA: F401
| 32.666667 | 56 | 0.755102 | 14 | 98 | 5.214286 | 0.571429 | 0.328767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0.173469 | 98 | 2 | 57 | 49 | 0.82716 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f8c5157a234c564b3c211768bf6f86ee6fa4eba1 | 52 | py | Python | paysage/backends/__init__.py | fyumoto/RBMs | 57a3b2a346cea940bd83f0174c741e8760594112 | [
"MIT"
] | 124 | 2017-02-01T23:52:58.000Z | 2022-03-03T11:51:15.000Z | paysage/backends/__init__.py | fyumoto/RBMs | 57a3b2a346cea940bd83f0174c741e8760594112 | [
"MIT"
] | 58 | 2017-01-29T15:55:57.000Z | 2021-08-25T14:53:18.000Z | paysage/backends/__init__.py | fyumoto/RBMs | 57a3b2a346cea940bd83f0174c741e8760594112 | [
"MIT"
] | 37 | 2017-02-12T12:55:51.000Z | 2022-03-06T16:57:50.000Z | from .select_backend import *
from .common import *
| 17.333333 | 29 | 0.769231 | 7 | 52 | 5.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 52 | 2 | 30 | 26 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3e5024d1e970482e384ffaee17516c7f90293c12 | 64 | py | Python | utils/models/anatomynet/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | 3 | 2022-01-18T19:25:46.000Z | 2022-02-05T18:53:24.000Z | utils/models/anatomynet/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | utils/models/anatomynet/__init__.py | bhklab/ptl-oar-segmentation | 354c3ee7f042a025f74e210a7b8462beac9b727d | [
"Apache-2.0"
] | null | null | null | from .model import AnatomyNet3D
from .loss import FocalDiceLoss
| 21.333333 | 31 | 0.84375 | 8 | 64 | 6.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0.125 | 64 | 2 | 32 | 32 | 0.946429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3e5285e178cc4489a8c66c1b5ae66eada3292c40 | 499 | py | Python | pk_model/model_factory.py | SABS-best-team/SABS-Pharmokinetics-Project | 608993c0056d2f273f164e3cdb23e6365fe2acfd | [
"MIT"
] | 1 | 2021-11-12T20:06:35.000Z | 2021-11-12T20:06:35.000Z | pk_model/model_factory.py | SABS-best-team/SABS-Pharmokinetics-Project | 608993c0056d2f273f164e3cdb23e6365fe2acfd | [
"MIT"
] | 1 | 2021-10-21T14:49:23.000Z | 2021-10-21T14:49:23.000Z | pk_model/model_factory.py | SABS-best-team/SABS-Pharmokinetics-Project | 608993c0056d2f273f164e3cdb23e6365fe2acfd | [
"MIT"
] | null | null | null | from .models.iv_model_scipy import IvModelScipy
from .models.sub_model_scipy import SubModelScipy
from .models.iv_model_ncompt_scipy import NComptIvModelScipy
from .models.sub_model_ncompt_scipy import NComptSubModelScipy
class ModelFactory():
def getIvModelScipy():
return IvModelScipy
def getSubModelScipy():
return SubModelScipy
def getNCompIVModel():
return NComptIvModelScipy
def getNCompSubCutModel():
return NComptSubModelScipy | 27.722222 | 62 | 0.757515 | 48 | 499 | 7.666667 | 0.416667 | 0.108696 | 0.065217 | 0.092391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196393 | 499 | 18 | 63 | 27.722222 | 0.917706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | true | 0 | 0.307692 | 0.307692 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
3e56be0d79db377584a9dbb7e777feca302b2f55 | 231 | py | Python | social/dto.py | Odreystella/My_Place_Record_Project2 | b26fdd3d07fbd5bc428f004a826eda37ee0d7533 | [
"MIT"
] | null | null | null | social/dto.py | Odreystella/My_Place_Record_Project2 | b26fdd3d07fbd5bc428f004a826eda37ee0d7533 | [
"MIT"
] | null | null | null | social/dto.py | Odreystella/My_Place_Record_Project2 | b26fdd3d07fbd5bc428f004a826eda37ee0d7533 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from user.models import User
from place.models import Place
from .models import Comment
@dataclass
class CommentCreateDto():
place : Place
commenter : User
content : str
pk : str
| 17.769231 | 33 | 0.74026 | 29 | 231 | 5.896552 | 0.482759 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 231 | 12 | 34 | 19.25 | 0.93956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.9 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3e57e1c7f3d3230b0652c29140043dd32c21d573 | 178 | py | Python | feedback/admin.py | City-of-Helsinki/palautebot | 601e732f0d8868874676ba5629d83155e3b5afb6 | [
"MIT"
] | 2 | 2017-09-21T14:46:11.000Z | 2020-07-29T06:59:51.000Z | feedback/admin.py | City-of-Helsinki/palautebot | 601e732f0d8868874676ba5629d83155e3b5afb6 | [
"MIT"
] | 47 | 2018-01-22T17:31:24.000Z | 2021-06-10T20:54:47.000Z | feedback/admin.py | City-of-Helsinki/palautebot | 601e732f0d8868874676ba5629d83155e3b5afb6 | [
"MIT"
] | 2 | 2018-02-12T19:54:14.000Z | 2018-07-24T07:29:24.000Z | from django.contrib import admin
from .models import DirectMessage, Feedback, Tweet
admin.site.register(DirectMessage)
admin.site.register(Feedback)
admin.site.register(Tweet)
| 22.25 | 50 | 0.825843 | 23 | 178 | 6.391304 | 0.478261 | 0.183673 | 0.346939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08427 | 178 | 7 | 51 | 25.428571 | 0.90184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e40d421375ee7f80eee85ccbc2bc275350be6567 | 167 | py | Python | students/K33401/Tikhonova_Elena/Lr2/airport_board/board/admin.py | TikhonovaElena/ITMO_ICT_WebDevelopment_2020-2021 | a0ceb17b3ec0e53fa4b92af507251381a6760e6c | [
"MIT"
] | null | null | null | students/K33401/Tikhonova_Elena/Lr2/airport_board/board/admin.py | TikhonovaElena/ITMO_ICT_WebDevelopment_2020-2021 | a0ceb17b3ec0e53fa4b92af507251381a6760e6c | [
"MIT"
] | null | null | null | students/K33401/Tikhonova_Elena/Lr2/airport_board/board/admin.py | TikhonovaElena/ITMO_ICT_WebDevelopment_2020-2021 | a0ceb17b3ec0e53fa4b92af507251381a6760e6c | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Flight, User, Reservation
admin.site.register(Flight)
admin.site.register(User)
admin.site.register(Reservation)
| 23.857143 | 45 | 0.820359 | 23 | 167 | 5.956522 | 0.478261 | 0.19708 | 0.372263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083832 | 167 | 6 | 46 | 27.833333 | 0.895425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e41fc2f3945ab03ae49147a905f72bf705825f06 | 81,077 | py | Python | scripts/lib/python/dn_base_vrf_svcs_config.py | open-switch/opx-nas-linux | 073b287c7c998b0dc16bc732fa37bbdddfd69d66 | [
"CC-BY-4.0"
] | 1 | 2017-12-28T16:57:02.000Z | 2017-12-28T16:57:02.000Z | scripts/lib/python/dn_base_vrf_svcs_config.py | open-switch/opx-nas-linux | 073b287c7c998b0dc16bc732fa37bbdddfd69d66 | [
"CC-BY-4.0"
] | 10 | 2017-08-07T22:43:34.000Z | 2021-06-09T13:34:01.000Z | scripts/lib/python/dn_base_vrf_svcs_config.py | open-switch/opx-nas-linux | 073b287c7c998b0dc16bc732fa37bbdddfd69d66 | [
"CC-BY-4.0"
] | 14 | 2017-01-05T19:18:42.000Z | 2020-03-06T10:01:04.000Z | # Copyright (c) 2019 Dell Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# THIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT
# LIMITATION ANY IMPLIED WARRANTIES OR CONDITIONS OF TITLE, FITNESS
# FOR A PARTICULAR PURPOSE, MERCHANTABLITY OR NON-INFRINGEMENT.
#
# See the Apache Version 2.0 License for specific language governing
# permissions and limitations under the License.
"""
This module provides support for caching VRF incoming & outgoing IP rules, as well as rule
configuration through iptables
"""
from dn_base_vrf_tool import iplink_cmd, run_command, log_info, log_err, get_ip_str,\
rej_rule_mark_value, vrf_chain_name, _vrf_name_to_id
from dn_base_vrf_tool import process_outgoing_ip_svcs_sub_net_config, get_veth_ip_str
from dn_base_id_tool import IdGenerator
import cps_object
import socket
import bisect
import threading
import logging
import copy
import binascii
from StringIO import StringIO
from enum import IntEnum
import re
import netaddr
import ipaddress
DEFAULT_VRF_ID = 0
MGMT_VRF_ID = 1024
""" Object to define VRF services rule definitions like
Type, Action, Protocol, Group Priority.
"""
class VrfSvcsRuleType(IntEnum):
RULE_TYPE_IP = 1
RULE_TYPE_ACL = 2
RULE_TYPE_OUT_IP = 3
RULE_TYPE_SNAT = 4
class VrfSvcsRuleAction(IntEnum):
""" actions defined in model,
insert new action values per model before internal actions.
"""
RULE_ACTION_ALLOW = 1
RULE_ACTION_DENY = 2
""" internal actions defined for kernel programming """
RULE_ACTION_DNAT = 3
RULE_ACTION_REJECT = 4
RULE_ACTION_SNAT = 5
class VrfSvcsRuleProto(IntEnum):
RULE_PROTO_TCP = 1
RULE_PROTO_UDP = 2
RULE_PROTO_ICMP = 3
RULE_PROTO_ALL = 4
RULE_PROTO_ICMPV6 = 5
class VrfSvcsRuleGroupPrio(IntEnum):
HIGH_GRP_PRIO = 1
DEFAULT_GRP_PRIO = 10
""" Object to define one VRF incoming IP service rule """
class VrfIncomingSvcsRule(object):
KEY_ATTRS = ['rule_type', 'vrf_name', 'af', 'src_ip', 'src_prefix_len',
'protocol', 'dst_port', 'low_dst_port', 'high_dst_port',
'action', 'in_intf', 'dst_ip', 'dst_prefix_len']
@classmethod
def is_rule_equal(cls, r1, r2):
if r1 is None or r2 is None:
if r1 is None and r2 is None:
return True
else:
return False
r1_attrs = vars(r1)
r2_attrs = vars(r2)
for key_attr in cls.KEY_ATTRS:
if key_attr in r1_attrs and key_attr in r2_attrs:
if r1_attrs[key_attr] != r2_attrs[key_attr]:
return False
else:
if key_attr not in r1_attrs and key_attr not in r2_attrs:
continue
return False
return True
def normalize_ip_prefix(self, ip_attr, prefix_attr):
if ip_attr not in vars(self) or prefix_attr not in vars(self):
return
ip = vars(self)[ip_attr]
prefix_len = vars(self)[prefix_attr]
if ip is not None and prefix_len is not None:
ip_net = netaddr.IPNetwork('%s/%d' % (socket.inet_ntop(self.af, ip),
prefix_len))
ip = ip_net.cidr.ip.packed
# anywhere IP address
elif ip is None or ip == '\x00' * len(ip):
prefix_len = 0
if ip is None:
ip_str = '0.0.0.0' if self.af == socket.AF_INET else '::'
ip = socket.inet_pton(self.af, ip_str)
else:
return
self.__setattr__(ip_attr, ip)
self.__setattr__(prefix_attr, prefix_len)
def __init__(self, rule_type, vrf_name, action, af, src_ip = None, src_prefix_len = None,
protocol = None, dst_port = None, dst_ip = None, dst_prefix_len = None, low_dst_port = None,
high_dst_port = None, seq_num = 0, rule_id = None, high_prio = False,
in_intf = None):
"""
Constructor to create a ACL rule object
@rule_type - either IP or ACL rule
@vrf_name - namespace
@action - 1: accept 2: drop 3: dnat
@af - address family, either IPv4 or IPv6, it could be direct number or string
@src_ip - matched source IP address
@src_prefix_len - prefix length to specify source subnet
@protocol - IP protocol: 1: tcp, 2: udp, 3: icmp, 4: all
@dst_port - L4 destination port
@low_dst_port - lower L4 destination port (inclusive)
@high_dst_port - upper L4 destination port (inclusive)
@dst_ip - specify destination IP address
@dst_prefix_len - prefix length to specify destination subnet
@seq_num - sequence number of the rule
@rule_id - rule ID. it is optional
@high_prio - if it is high priority rule
@in_intf - interface where packets coming from
"""
self.rule_type = rule_type
self.vrf_name = vrf_name
self.af = af
self.src_ip = src_ip
self.src_prefix_len = src_prefix_len
self.protocol = protocol
self.dst_port = dst_port
self.low_dst_port = low_dst_port
self.high_dst_port = high_dst_port
self.dst_ip = dst_ip
self.dst_prefix_len = dst_prefix_len
self.seq_num = seq_num
self.action = action
self.grp_priority = high_prio
self.packet_count = None
self.byte_count = None
if self.action == VrfSvcsRuleAction.RULE_ACTION_DNAT and self.dst_ip is None:
log_err('Destination IP is mandatory for DNAT action')
raise ValueError
if self.action == VrfSvcsRuleAction.RULE_ACTION_DENY and self.rule_type == VrfSvcsRuleType.RULE_TYPE_ACL:
# For ACL rule, use REJECT action instead of DROP
self.action = VrfSvcsRuleAction.RULE_ACTION_REJECT
self.rule_id = rule_id
self.in_intf = in_intf
self.normalize_ip_prefix('src_ip', 'src_prefix_len')
self.normalize_ip_prefix('dst_ip', 'dst_prefix_len')
def __setattr__(self, key, val):
if key == 'grp_priority' and val is not None:
if val:
val = VrfSvcsRuleGroupPrio.HIGH_GRP_PRIO
else:
val = VrfSvcsRuleGroupPrio.DEFAULT_GRP_PRIO
elif key == 'in_intf' and val is not None:
if len(val) > 0 and val[0] == '!':
val = val[1:]
super(VrfIncomingSvcsRule, self).__setattr__('negative', True)
else:
super(VrfIncomingSvcsRule, self).__setattr__('negative', False)
super(VrfIncomingSvcsRule, self).__setattr__(key, val)
if key == 'rule_type' and self.get_rule_type_name() is None:
log_err('Invalid rule type %s' % str(self.rule_type))
raise ValueError
elif key == 'af' and self.get_af_name() is None:
log_err('Invalid address family number %s' % str(self.af))
raise ValueError
elif key == 'action' and self.get_action_name() is None:
log_err('Invalid action ID %s' % str(self.action))
raise ValueError
if val is not None:
if key == 'protocol' and self.get_proto_name() is None:
log_err('Invalid protocol number %s' % str(self.protocol))
raise ValueError
def __eq__(self, other):
return VrfIncomingSvcsRule.is_rule_equal(self, other)
def __ne__(self, other):
return not VrfIncomingSvcsRule.is_rule_equal(self, other)
def __hash__(self):
hash_val = 0
attrs = vars(self)
for key_attr in self.KEY_ATTRS:
if key_attr in attrs and attrs[key_attr] is not None:
hash_val ^= hash(attrs[key_attr])
return hash_val
def get_rule_type_name(self):
type_name_map = {VrfSvcsRuleType.RULE_TYPE_IP: 'IP',
VrfSvcsRuleType.RULE_TYPE_ACL: 'ACL'}
if self.rule_type in type_name_map:
return type_name_map[self.rule_type]
else:
return None
def get_af_name(self):
af_name_map = {socket.AF_INET: 'IPv4', socket.AF_INET6: 'IPv6'}
if self.af in af_name_map:
return af_name_map[self.af]
else:
return None
def get_action_name(self, for_ipt_target = False):
action_name_map = {VrfSvcsRuleAction.RULE_ACTION_ALLOW: ('allow', 'accept'),
VrfSvcsRuleAction.RULE_ACTION_DENY: ('deny', 'drop'),
VrfSvcsRuleAction.RULE_ACTION_DNAT: 'dnat',
VrfSvcsRuleAction.RULE_ACTION_REJECT: ('reject', 'mark')}
if self.action in action_name_map:
action_name = action_name_map[self.action]
if type(action_name) is tuple:
return action_name[1] if for_ipt_target else action_name[0]
else:
return action_name
else:
return None
def get_proto_name(self):
proto_name_map = {VrfSvcsRuleProto.RULE_PROTO_TCP: 'tcp',
VrfSvcsRuleProto.RULE_PROTO_UDP: 'udp',
VrfSvcsRuleProto.RULE_PROTO_ICMP: 'icmp',
VrfSvcsRuleProto.RULE_PROTO_ICMPV6: 'icmpv6',
VrfSvcsRuleProto.RULE_PROTO_ALL: 'ip'}
if self.protocol in proto_name_map:
return proto_name_map[self.protocol]
else:
return None
def __str__(self):
# <type> <id> VRF <vrf> SEQ <prio>-<seq> RULE <action> AF <af> [src_ip/pfx][dst_ip/pfx][proto][port][port_range][iif]
ret_str = ('%-5s %-8sVRF %-10s SEQ %5d-%-4d RULE %-10s AF %s %s%s%s%s%s%s' %
(self.get_rule_type_name(),
('-' if self.rule_id is None else ('%d' % self.rule_id)),
self.vrf_name, self.grp_priority, self.seq_num,
('%s' % self.get_action_name() if self.action is not None else ''),
self.get_af_name(),
(' SIP %s/%d' % (socket.inet_ntop(self.af, self.src_ip), self.src_prefix_len) \
if self.src_ip is not None and self.src_prefix_len is not None else ''),
(' DIP %s/%d' % (socket.inet_ntop(self.af, self.dst_ip), self.dst_prefix_len) \
if self.dst_ip is not None and self.dst_prefix_len is not None else ''),
(' %s' % self.get_proto_name() if self.protocol is not None else ''),
(' DST_PORT %d' % self.dst_port if self.dst_port is not None else ''),
(' DST_PORT RANGE %d-%d' % (self.low_dst_port, self.high_dst_port) \
if self.low_dst_port is not None else ''),
(' IIF %s%s' % (('not ' if self.negative else ''), self.in_intf) if self.in_intf is not None else '')))
return ret_str
def to_cps_obj(self):
cps_attr_map = {
'vrf_name': 'ni-name',
'af': 'af',
'src_ip': 'src-ip',
'src_prefix_len': 'src-prefix-len',
'dst_ip': 'dst-ip',
'dst_prefix_len': 'dst-prefix-len',
'protocol': 'protocol',
'dst_port': 'dst-port',
'low_dst_port': 'lower-dst-port',
'high_dst_port': 'upper-dst-port',
'action': 'action',
'seq_num': 'seq-num',
'in_intf': 'ifname',
'rule_id': 'id',
'packet_count': 'matched-packets',
'byte_count': 'matched-bytes'}
obj = cps_object.CPSObject('vrf-firewall/ns-incoming-service')
for attr_name, attr_val in vars(self).items():
if attr_name in cps_attr_map and attr_val is not None:
if attr_name == 'action':
if attr_val == VrfSvcsRuleAction.RULE_ACTION_DNAT:
# Always use allow action for DNAT
attr_val = VrfSvcsRuleAction.RULE_ACTION_ALLOW
elif attr_val == VrfSvcsRuleAction.RULE_ACTION_REJECT:
# Use deny action for cps get output
attr_val = VrfSvcsRuleAction.RULE_ACTION_DENY
if attr_name == 'src_ip':
attr_val = binascii.hexlify(attr_val)
if attr_name == 'in_intf' and self.negative is True:
attr_val = '!'+ attr_val
if (attr_name == 'dst_ip' or attr_name == 'dst_prefix_len') and \
self.rule_type == VrfSvcsRuleType.RULE_TYPE_IP:
# Only show dst_ip for ACL rule
continue
if attr_name == 'dst_ip':
attr_val = binascii.hexlify(attr_val)
if attr_name == 'packet_count' and attr_val is None:
continue
if attr_name == 'byte_count' and attr_val is None:
continue
obj.add_attr(cps_attr_map[attr_name], attr_val)
return obj
def match(self, **params):
attrs = vars(self)
for key, val in params.items():
if key not in attrs:
return False
if val is not None and val != attrs[key]:
return False
return True
class VrfIncomingSvcsRuleList(list):
def __init__(self):
super(VrfIncomingSvcsRuleList, self).__init__()
# sorted list of all seq num
self.seq_num_list = []
# map: rule_id => rule position in list
self.rule_id_map = {}
def __str__(self):
str_buf = StringIO()
for rule in self:
str_buf.write('%s\n' % rule)
out_str = str_buf.getvalue()
str_buf.close()
return out_str
def __eq__(self, other):
return super(VrfIncomingSvcsRuleList, self).__eq__(other)
def __ne__(self, other):
return super(VrfIncomingSvcsRuleList, self).__ne__(other)
def update_rule_id_map(self, start_idx):
for idx in range(start_idx, len(self)):
self.rule_id_map[self[idx].rule_id] = idx
# insert rule to list, update seq_num list and rule_id map
def insert(self, rule):
if rule.rule_id is None or rule.rule_id in self.rule_id_map:
# rule ID should be assigned and not used by another rule
log_err('Rule ID is not assigned' if rule.rule_id is None
else ('Rule ID %d is used' % rule.rule_id))
return None
try:
idx = self.index(rule)
except ValueError:
idx = None
if idx is not None:
# same rule already in list
log_err('Rule to be inserted is already in list')
return None
idx = bisect.bisect_right(self.seq_num_list, (rule.grp_priority, rule.seq_num))
super(VrfIncomingSvcsRuleList, self).insert(idx, rule)
self.seq_num_list.insert(idx, (rule.grp_priority, rule.seq_num))
self.update_rule_id_map(idx)
return idx
def remove(self, rule):
if rule.rule_id is not None and rule.rule_id in self.rule_id_map:
idx = self.rule_id_map[rule.rule_id]
rule_id = rule.rule_id
else:
try:
idx = self.index(rule)
except ValueError:
# rule not found
log_err('Rule not found for delete')
return None
rule_id = self[idx].rule_id
orig_rule = self[idx]
del self[idx]
del self.seq_num_list[idx]
del self.rule_id_map[rule_id]
self.update_rule_id_map(idx)
return (orig_rule, idx)
def remove_by_id(self, rule_id):
if rule_id not in self.rule_id_map:
return None
idx = self.rule_id_map[rule_id]
orig_rule = self[idx]
del self[idx]
del self.seq_num_list[idx]
del self.rule_id_map[rule_id]
self.update_rule_id_map(idx)
return orig_rule
def clear(self):
del self[:]
del self.seq_num_list[:]
self.rule_id_map.clear()
#IP tables handler for both incoming & outgoing service configurations
class IptablesHandler:
PKT_CNT_TK_ID = 0
BYTE_CNT_TK_ID = 1
TARGET_TK_ID = 2
PROTO_TK_ID = 3
IN_IF_TK_ID = 4
OUT_IF_TK_ID = 5
SRC_IP_TK_ID = 6
DST_IP_TK_ID = 7
OPT_TK_ID = 8
MIN_TK_NUM = 9
@classmethod
def is_vrf_valid(cls, vrf_name):
cmd = [iplink_cmd, 'netns', 'show']
res = []
if run_command(cmd, res) != 0:
log_err('Failed to run command: %s' % ' '.join(cmd))
return False
for token in res:
m = re.search('(.*)\s+\(id:\s+(\S+)\)', token)
if m is not None:
vrf, _ = m.groups()
else:
vrf = token
if vrf == vrf_name:
return True
return False
@classmethod
def get_chain_name(cls, vrf_name, rule_type):
if rule_type == VrfSvcsRuleType.RULE_TYPE_IP:
chain_name = 'INPUT' if vrf_name == 'default' else vrf_chain_name
elif rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
chain_name = 'PREROUTING'
elif rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
chain_name = 'POSTROUTING'
else:
chain_name = vrf_chain_name
return chain_name
@classmethod
def get_ipt_cmd_prefix(cls, rule_type, af, vrf_name):
iptables = 'iptables' if af == socket.AF_INET else 'ip6tables'
if vrf_name == 'default':
ipt_prefix = ['/sbin/%s' % iptables]
#tbl_name = (None if rule_type == VrfSvcsRuleType.RULE_TYPE_IP else 'raw')
if rule_type == VrfSvcsRuleType.RULE_TYPE_IP:
tbl_name = None
elif rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
tbl_name = 'nat'
else :
tbl_name = 'raw'
else:
ipt_prefix = [iplink_cmd, 'netns', 'exec', vrf_name, iptables]
#tbl_name = ('nat' if rule_type == VrfSvcsRuleType.RULE_TYPE_IP else 'raw')
if rule_type == VrfSvcsRuleType.RULE_TYPE_IP or\
rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP or\
rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
tbl_name = 'nat'
else :
tbl_name = 'raw'
if tbl_name is not None:
ipt_prefix += ['-t', tbl_name]
return ipt_prefix
@classmethod
def find_stats_info(cls, pattern, line):
"""Method to parse regex matching return value"""
match = re.match(pattern, line)
if match is None:
return (None, None)
match = match.group()
stats_pattern = re.compile(r'\d+')
stats_list = stats_pattern.findall(match)
#number at first position is packet count and the one at second position is byte count.
return (stats_list[0], stats_list[1])
@classmethod
def parse_rule_stats(cls, result):
packets, bytes = cls.find_stats_info(r'\s+\d+\s+\d+\s+', result)
return (packets, bytes)
@classmethod
def proc_rule(cls, op, rule, idx = None):
if (op.lower() == 'delete' and rule.vrf_name != 'default' and
not cls.is_vrf_valid(rule.vrf_name)):
log_info('VRF %s is not opened, bypass iptables setting.' % rule.vrf_name)
return True
if (op.lower() == 'get' and rule.vrf_name != 'default' and
not IptablesHandler.is_vrf_valid(rule.vrf_name)):
log_info('VRF %s is not opened, bypass iptables get.' % rule.vrf_name)
return True
""" outgoing IP services rules are not allowed in default VRF """
if rule.vrf_name == 'default' and rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
log_info('Invalid Rule. rule type %s not supported in VRF:%s.' % (rule.rule_type, rule.vrf_name))
return False
if op == 'replace' and idx is None:
log_err('Missing rule index for replace operation')
return False
ipt_prefix = cls.get_ipt_cmd_prefix(rule.rule_type, rule.af, rule.vrf_name)
# Set protocol related filtering options
flt_args = []
if rule.src_ip is not None:
flt_args += ['-s', '%s%s' % (socket.inet_ntop(rule.af, rule.src_ip),
('/%d' % rule.src_prefix_len if rule.src_prefix_len is not None else ''))]
if rule.protocol is not None:
flt_args += ['-p', rule.get_proto_name()]
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
flt_args += ['--dport', str(rule.private_port)]
elif rule.dst_port is not None:
flt_args += ['--dport', str(rule.dst_port)]
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_ACL and rule.low_dst_port is not None:
flt_args += ['--match', 'multiport', '--dports',
'%s:%s' % (str(rule.low_dst_port), str(rule.high_dst_port))]
if (rule.rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT or\
rule.rule_type == VrfSvcsRuleType.RULE_TYPE_ACL or \
(rule.vrf_name == 'default' and \
rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP)) and rule.dst_ip is not None:
flt_args += ['-d', '%s%s' % (socket.inet_ntop(rule.af, rule.dst_ip),
('/%d' % rule.dst_prefix_len if rule.dst_prefix_len is not None else ''))]
# Set interface filtering options
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP:
# Allow the IP services only from mgmt and data VRFs if configured, any IP services received
# in front panel ports will be ignored.
if rule.vrf_name == 'default':
if rule.in_intf is not None:
flt_args += ['-i', rule.in_intf]
else:
flt_args += ['!', '-i', 'vdst-nsid+']
elif rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
vrf_id = None
vrf_id = _vrf_name_to_id.get(rule.vrf_name, None)
if vrf_id is not None:
flt_args += ['-i', 'vdst-nsid%d'%DEFAULT_VRF_ID]
# Do the DNAT operation for IP services destined to interval veth IP,
# all the other traffic forwarding via veth should not affect this DNAT rule.
flt_args += ['-d', '%s' % (get_veth_ip_str(rule.af, rule.vrf_name, False))]
elif rule.rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
#in management/data vrf, apply SNAT rules on interfaces other than internal veth interfaces.
if rule.vrf_name != 'default':
flt_args += ['!', '-o', 'vdst-nsid%d'%DEFAULT_VRF_ID]
else:
if rule.in_intf is not None:
if rule.negative:
flt_args.append('!')
flt_args += ['-i', rule.in_intf]
# Set rule action
flt_args += ['-j', rule.get_action_name(True).upper()]
if rule.action == VrfSvcsRuleAction.RULE_ACTION_DNAT:
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
if rule.af == socket.AF_INET:
flt_args += ['--to-destination',
'%s:%s' % (socket.inet_ntop(rule.af, rule.dst_ip),str(rule.dst_port))]
else:
flt_args += ['--to-destination',
'[%s]:%s' % (socket.inet_ntop(rule.af, rule.dst_ip),str(rule.dst_port))]
else:
flt_args += ['--to-destination', '%s' % (socket.inet_ntop(rule.af, rule.dst_ip))]
elif rule.action == VrfSvcsRuleAction.RULE_ACTION_REJECT:
flt_args += ['--set-mark', str(rej_rule_mark_value)]
elif rule.action == VrfSvcsRuleAction.RULE_ACTION_SNAT:
flt_args += ['--to-source', '%s' % (socket.inet_ntop(rule.af, rule.out_src_ip))]
# Chain configuration
chain_name = cls.get_chain_name(rule.vrf_name, rule.rule_type)
if op.lower() == 'insert':
if idx is None:
chain_args = ['-A', chain_name]
else:
chain_args = ['-I', chain_name, '%d' % (idx + 1)]
cmd = ipt_prefix + chain_args + flt_args
elif op.lower() == 'delete':
chain_args = ['-D', chain_name]
if idx is not None:
chain_args.append('%d' % (idx + 1))
cmd = ipt_prefix + chain_args
if idx is None:
cmd += flt_args
elif op.lower() == 'replace':
chain_args = ['-R', chain_name]
chain_args.append('%d' % (idx + 1))
cmd = ipt_prefix + chain_args + flt_args
elif op.lower() == 'check':
chain_args = ['-C', chain_name]
cmd = ipt_prefix + chain_args + flt_args
elif op.lower() == 'get':
if idx is None:
log_err('Invalid idx for operation %s' % op)
return False
chain_args = ['-xvL', chain_name, '%d' % (idx + 1)]
cmd = ipt_prefix + chain_args
log_info('GET CMD: %s' % ' '.join(cmd))
res = []
if run_command(cmd, res, op.lower() != 'check') != 0:
log_err('Invalid idx for operation:%s, error:%s' % (op, res))
return False
else:
if res[0] is not None:
rule.packet_count, rule.byte_count = cls.parse_rule_stats(res[0])
log_info('Rule stats, packet_count:%s, byte_count:%s' %(rule.packet_count, rule.byte_count))
return True
else:
log_err('Invalid operation %s' % op)
return False
log_info('CMD: %s' % ' '.join(cmd))
res = []
return run_command(cmd, res, op.lower() != 'check') == 0
@classmethod
def get_l4_port_num(cls, port_str):
if port_str.isdigit():
port_num = int(port_str)
else:
try:
port_num = socket.getservbyname(port_str)
except socket.error:
return None
return port_num
@classmethod
def get_ip_prefix(cls, af, ip_str):
if ip_str == 'anywhere':
return (None, None)
else:
try:
ip_mask = ip_str.split('/')
ip_addr = socket.inet_pton(af, ip_mask[0])
if len(ip_mask) > 1:
prefix_len = int(ip_mask[1])
else:
prefix_len = ipaddress.IPV4LENGTH if af == socket.AF_INET else ipaddress.IPV6LENGTH
except ValueError, socket.error:
return (None, None)
return (ip_addr, prefix_len)
@classmethod
def ipt_tokens_to_rule(cls, rule_type, af, vrf_name, tokens):
if len(tokens) < cls.MIN_TK_NUM:
return None
log_info('TOKENS: %s' % tokens)
dst_ip = None
dst_prefix_len = None
dst_port = None
low_port = high_port = None
if tokens[cls.TARGET_TK_ID] == 'ACCEPT':
action = VrfSvcsRuleAction.RULE_ACTION_ALLOW
elif tokens[cls.TARGET_TK_ID] == 'DROP':
action = VrfSvcsRuleAction.RULE_ACTION_DENY
elif tokens[cls.TARGET_TK_ID] == 'MARK':
mo = re.search('MARK\s+set\s+(\S+)', tokens[cls.OPT_TK_ID])
if mo is None or len(mo.groups()) < 1:
return None
try:
mark_num = int(mo.groups()[0], 16)
except ValueError:
return None
if mark_num != rej_rule_mark_value:
return None
action = VrfSvcsRuleAction.RULE_ACTION_REJECT
elif tokens[cls.TARGET_TK_ID] == 'DNAT':
mo = re.search('to:(\S+)', tokens[cls.OPT_TK_ID])
if mo is None or len(mo.groups()) < 1:
return None
try:
dst_ip = socket.inet_pton(af, mo.groups()[0])
except socket.error:
return None
action = VrfSvcsRuleAction.RULE_ACTION_DNAT
else:
log_err('Invalid target %s' % tokens[cls.TARGET_TK_ID])
return None
protocol = None
proto_type_map = {'tcp': VrfSvcsRuleProto.RULE_PROTO_TCP,
'udp': VrfSvcsRuleProto.RULE_PROTO_UDP,
'icmp': VrfSvcsRuleProto.RULE_PROTO_ICMP,
'icmpv6': VrfSvcsRuleProto.RULE_PROTO_ICMPV6}
if tokens[cls.PROTO_TK_ID] in proto_type_map:
protocol = proto_type_map[tokens[cls.PROTO_TK_ID]]
if tokens[cls.IN_IF_TK_ID] == 'any':
in_intf = None
else:
in_intf = tokens[cls.IN_IF_TK_ID]
if (rule_type == VrfSvcsRuleType.RULE_TYPE_IP and vrf_name == 'default' and
in_intf == '!vdst-nsid+'):
in_intf = None
src_ip, src_prefix_len = cls.get_ip_prefix(af, tokens[cls.SRC_IP_TK_ID])
if rule_type == VrfSvcsRuleType.RULE_TYPE_ACL:
dst_ip, dst_prefix_len = cls.get_ip_prefix(af, tokens[cls.DST_IP_TK_ID])
mo = re.search('dpt:(\S+)', tokens[cls.OPT_TK_ID])
if mo is not None and mo.groups() > 0:
dst_port = cls.get_l4_port_num(mo.groups()[0])
if dst_port is None:
log_err('Failed to get L4 destination port from token %s' % tokens[cls.OPT_TK_ID])
return None
mo = re.search('multiport\s+dports\s+(\S+):(\S+)', tokens[cls.OPT_TK_ID])
if mo is not None and mo.groups() >= 2:
low_port = cls.get_l4_port_num(mo.groups()[0])
high_port = cls.get_l4_port_num(mo.groups()[1])
if low_port is None or high_port is None:
log_err('Failed to get L4 destination port range from token %s' % tokens[cls.OPT_TK_ID])
return None
rule = VrfIncomingSvcsRule(rule_type, vrf_name, action, af, src_ip = src_ip, src_prefix_len = src_prefix_len,
protocol = protocol, dst_port = dst_port,
dst_ip = dst_ip, dst_prefix_len = dst_prefix_len, in_intf = in_intf,
low_dst_port = low_port, high_dst_port = high_port)
log_info('RULE: %s' % rule)
return rule
@classmethod
def get_rule_from_ipt(cls, rule_type, af, vrf_name, rule_list):
cmd = cls.get_ipt_cmd_prefix(rule_type, af, vrf_name)
cmd += ['-L', cls.get_chain_name(vrf_name, rule_type), '-v']
log_info('GET_CMD: %s' % ' '.join(cmd))
res = []
if run_command(cmd, res) != 0:
log_err('Failed to read iptables rules')
return False
start = False
for line in res:
if start:
max_split = cls.MIN_TK_NUM if af == socket.AF_INET else cls.MIN_TK_NUM - 1
tokens = line.split(None, max_split)
if len(tokens) < cls.MIN_TK_NUM - 1:
log_err('Invalid number of tokens in line: %s' % tokens)
continue
if af == socket.AF_INET:
# OPT field
del tokens[4]
if len(tokens) < cls.MIN_TK_NUM:
# add empty optional field
tokens.append('')
rule = cls.ipt_tokens_to_rule(rule_type, af, vrf_name, tokens)
if rule is None:
log_err('Failed to get ACL rule from tokens: %s' % tokens)
continue
rule_list.append(rule)
else:
ret_val = re.match('pkts\s+bytes\s+target\s+prot\s+opt\s+in\s+out\s+source\s+destination',
line.lstrip())
if ret_val is not None:
start = True
return True
class VrfIncomingSvcsRuleCache:
# map: af, vrf_name => rule_list
acl_rules = {socket.AF_INET: {}, socket.AF_INET6: {}}
ip_rules = {socket.AF_INET: {}, socket.AF_INET6: {}}
mutex = threading.RLock()
id_generator = IdGenerator()
@classmethod
def insert_rule(cls, rule):
log_info('Handling add rule: %s' % rule)
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af] if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[rule.af]
if rule.vrf_name not in rule_list:
rule_list[rule.vrf_name] = VrfIncomingSvcsRuleList()
if rule.rule_id is None:
rule.rule_id = cls.id_generator.get_new_id()
if rule.rule_id is None:
log_err('Could not generate new rule ID')
log_info(str(cls.id_generator))
cls.mutex.release()
return False
else:
if cls.id_generator.is_id_used(rule.rule_id):
log_err('Given rule ID %d is used' % rule.rule_id)
log_info(str(cls.id_generator))
cls.mutex.release()
return False
if not cls.id_generator.reserve_id(rule.rule_id):
log_err('Failed to reserve rule ID %d' % rule.rule_id)
log_info(str(cls.id_generator))
cls.mutex.release()
return False
ret_val = True
idx = rule_list[rule.vrf_name].insert(rule)
if idx is not None:
if not IptablesHandler.proc_rule('insert', rule, idx):
log_err('Failed to call iptables to insert ACL rule')
# rollback
rule_list[rule.vrf_name].remove(rule)
ret_val = False
else:
log_err('Failed to insert rule to cache')
cls.id_generator.release_id(rule.rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
log_info('Rule added, ID=%d' % rule.rule_id)
return ret_val
@classmethod
def delete_rule(cls, rule):
log_info('Handling delete rule: %s' % rule)
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af] if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[rule.af]
if rule.vrf_name not in rule_list:
log_err('VRF name %s not found in cache' % rule.vrf_name)
cls.mutex.release()
return False
ret_val = rule_list[rule.vrf_name].remove(rule)
if ret_val is None:
log_err('Failed to delete rule from cache')
cls.mutex.release()
return False
del_rule, idx = ret_val
ret_val = True
if not IptablesHandler.proc_rule('delete', del_rule, idx):
log_err('Failed to call iptables to delete ACL rule')
# rollback
rule_list[del_rule.vrf_name].insert(del_rule)
ret_val = False
if len(rule_list[rule.vrf_name]) == 0:
del rule_list[rule.vrf_name]
cls.mutex.release()
if ret_val:
if not cls.id_generator.release_id(del_rule.rule_id):
log_err('Failed to release rule ID %d' % del_rule.rule_id)
log_info(str(cls.id_generator))
log_info('Rule deleted')
return ret_val
@classmethod
def find_rule_by_id(cls, rule_id):
ret_val = None
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
for vrf_name, rule_list in cls.ip_rules[af].items():
if rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule_id]
ret_val = (rule_list[idx], idx)
IptablesHandler.proc_rule('get', rule_list[idx], idx)
break
if ret_val is not None:
break
for vrf_name, rule_list in cls.acl_rules[af].items():
if rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule_id]
ret_val = (rule_list[idx], idx)
IptablesHandler.proc_rule('get', rule_list[idx], idx)
break
if ret_val is not None:
break
cls.mutex.release()
return ret_val
@classmethod
def find_rule_by_match(cls, rule):
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af] if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[rule.af]
if rule.vrf_name not in rule_list:
cls.mutex.release()
return None
try:
idx = rule_list[rule.vrf_name].index(rule)
except ValueError:
cls.mutex.release()
return None
cls.mutex.release()
return rule_list[rule.vrf_name][idx]
@classmethod
def delete_rule_by_id(cls, rule_id):
log_info('Handling delete rule by ID: %d' % rule_id)
cls.mutex.acquire()
ret_val = True
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is not None:
del_rule, idx = found_rule
rule_list = cls.ip_rules[del_rule.af] \
if del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[del_rule.af]
if rule_list[del_rule.vrf_name].remove_by_id(rule_id) is None:
log_err('Failed to remove rule with ID %d' % rule_id)
cls.mutex.release()
return False
if not IptablesHandler.proc_rule('delete', del_rule, idx):
log_err('Failed to call iptables to delete ACL rule')
# rollback
rule_list = cls.ip_rules[del_rule.af] \
if del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[del_rule.af]
rule_list[del_rule.vrf_name].insert(del_rule)
ret_val = False
if len(rule_list[del_rule.vrf_name]) == 0:
del rule_list[del_rule.vrf_name]
else:
log_err('Rule ID %d not found for delete' % rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
if not cls.id_generator.release_id(rule_id):
log_err('Failed to release rule ID %d' % rule_id)
log_info(str(cls.id_generator))
log_info('Rule deleted')
return ret_val
@classmethod
def replace_rule(cls, rule_id, new_rule):
log_info('Handling replace rule with ID %d' % rule_id)
cls.mutex.acquire()
ret_val = True
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is not None:
old_rule, idx = found_rule
rule_list = cls.ip_rules[old_rule.af] \
if old_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_IP else cls.acl_rules[old_rule.af]
if not IptablesHandler.proc_rule('replace', new_rule, idx):
log_err('Failed to call iptables to replace ACL rule')
ret_val = False
else:
rule_list[old_rule.vrf_name][idx] = new_rule
else:
log_err('Rule ID %d not found for replace' % rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
log_info('Rule replaced')
return ret_val
@classmethod
def update_rule(cls, rule_id, **params):
log_info('Handling update rule: ID %d param %s' % (rule_id, params))
cls.mutex.acquire()
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is None:
log_err('Rule ID %d not found for update' % rule_id)
cls.mutex.release()
return False
old_rule, _ = found_rule
upd_rule = copy.deepcopy(old_rule)
changed_attrs = set()
def check_and_set(rule, key, val):
try:
orig_val = getattr(rule, key)
except AttributeError:
orig_val = None
if val != orig_val:
if val is not None:
setattr(rule, key, val)
else:
delattr(rule, key)
changed_attrs.add(key)
for key, val in params.items():
if val is not None:
check_and_set(upd_rule, key, val)
if len(changed_attrs) == 0:
log_info('There is no change to be updated, just return')
cls.mutex.release()
return True
log_info('old_rule: %s' % old_rule)
log_info('new_rule: %s' % upd_rule)
match_rule = cls.find_rule_by_match(upd_rule)
if match_rule is not None and match_rule.rule_id != old_rule.rule_id:
log_err('The updating rule already exists')
cls.mutex.release()
return False
repl_attrs = {'src_ip', 'src_prefix_len', 'dst_ip', 'dst_prefix_len', 'protocol', 'dst_port',
'low_dst_port', 'high_dst_port', 'action', 'in_intf'}
no_repl_attrs = changed_attrs.difference(repl_attrs)
if len(no_repl_attrs) > 0:
log_info('Non-replacable attributes %s changed, delete and add rule' % no_repl_attrs)
if not cls.delete_rule_by_id(rule_id):
log_err('Failed to delete existing rule by ID')
cls.mutex.release()
return False
ret_val = cls.insert_rule(upd_rule)
if not ret_val:
log_err('Failed to insert updated rule')
cls.insert_rule(old_rule)
else:
log_info('Only replacable attributes changed, just replace rule')
ret_val = cls.replace_rule(rule_id, upd_rule)
if not ret_val:
log_err('Failed to replace rule')
cls.mutex.release()
if ret_val:
log_info('Rule updated')
return ret_val
@classmethod
def clear_all_rules(cls, flt_vrf = None, flt_af = None):
log_info('Handling clear all rules for VRF %s and AF %s' % (
flt_vrf if flt_vrf is not None else '-',
str(flt_af) if flt_af is not None else '-'))
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
if flt_af is not None and flt_af != af:
continue
for vrf_name, rule_list in cls.ip_rules[af].items():
if flt_vrf is not None and flt_vrf != vrf_name:
continue
for rule in rule_list:
IptablesHandler.proc_rule('delete', rule)
cls.id_generator.release_id(rule.rule_id)
rule_list.clear()
del cls.ip_rules[af][vrf_name]
for vrf_name, rule_list in cls.acl_rules[af].items():
if flt_vrf is not None and flt_vrf != vrf_name:
continue
for rule in rule_list:
IptablesHandler.proc_rule('delete', rule)
cls.id_generator.release_id(rule.rule_id)
rule_list.clear()
del cls.acl_rules[af][vrf_name]
cls.mutex.release()
@classmethod
def dump_rules(cls):
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
for vrf_name, rule_list in cls.ip_rules[af].items():
log_info('-------------------------------------')
log_info(' %s IP Rules of VRF %s' % ('IPv4' if af == socket.AF_INET else 'IPv6', vrf_name))
log_info('-------------------------------------')
log_info('\n%s\n' % rule_list)
for vrf_name, rule_list in cls.acl_rules[af].items():
log_info('-------------------------------------')
log_info(' %s ACL Rules of VRF %s' % ('IPv4' if af == socket.AF_INET else 'IPv6', vrf_name))
log_info('-------------------------------------')
log_info('\n%s\n' % rule_list)
cls.mutex.release()
@classmethod
def get_all_rules(cls, rule_type = None, vrf_name = None, **params):
log_info('Handling get rules for vrf %s' % (vrf_name if vrf_name is not None else 'ALL'))
cls.mutex.acquire()
ret_list = []
for af in [socket.AF_INET, socket.AF_INET6]:
if rule_type is None or rule_type == VrfSvcsRuleType.RULE_TYPE_IP:
for vrf, rule_list in cls.ip_rules[af].items():
if vrf_name is not None and vrf != vrf_name:
continue
for rule in rule_list:
if not rule.match(**params):
continue
if rule.rule_id is not None and rule.rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule.rule_id]
IptablesHandler.proc_rule('get', rule, idx)
ret_list.append(copy.deepcopy(rule))
if rule_type is None or rule_type == VrfSvcsRuleType.RULE_TYPE_ACL:
for vrf, rule_list in cls.acl_rules[af].items():
if vrf_name is not None and vrf != vrf_name:
continue
for rule in rule_list:
if not rule.match(**params):
continue
if rule.rule_id is not None and rule.rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule.rule_id]
IptablesHandler.proc_rule('get', rule, idx)
ret_list.append(copy.deepcopy(rule))
cls.mutex.release()
return ret_list
@classmethod
def check_ipt_rules(cls, rule_type, af, vrf_name):
log_info('Checking if rules in cache are in sync with system')
cls.mutex.acquire()
if rule_type == VrfSvcsRuleType.RULE_TYPE_IP and vrf_name in cls.ip_rules[af]:
rule_list = cls.ip_rules[af][vrf_name]
elif rule_type == VrfSvcsRuleType.RULE_TYPE_ACL and vrf_name in cls.acl_rules[af]:
rule_list = cls.acl_rules[af][vrf_name]
else:
rule_list = []
ipt_rule_list = []
if not IptablesHandler.get_rule_from_ipt(rule_type, af, vrf_name, ipt_rule_list):
log_err('Failed to get iptables rule for rule_type %d af %d vrf %s' %
(rule_type, af, vrf_name))
cls.mutex.release()
return False
is_sync = (rule_list == ipt_rule_list)
if not is_sync:
log_err('Rules of cache is not sync with those in system: TYPE %d AF %d VRF %s' %
(rule_type, af, vrf_name))
log_info('-------------------------------')
log_info('%d rules in cache' % len(rule_list))
log_info('-------------------------------')
for idx in range(len(rule_list)):
log_info('%3d: %s' % (idx, rule_list[idx]))
log_info('-------------------------------')
log_info('%d rules in system' % len(ipt_rule_list))
log_info('-------------------------------')
for idx in range(len(ipt_rule_list)):
log_info('%3d: %s' % (idx, ipt_rule_list[idx]))
cls.mutex.release()
return is_sync
def process_vrf_svcs_rule_add(rule_type, vrf_name, action, af, **params):
try:
rule = VrfIncomingSvcsRule(rule_type, vrf_name, action, af, **params)
if not VrfIncomingSvcsRuleCache.insert_rule(rule):
return None
except ValueError:
log_err('Failed to initiate rule object')
return None
except Exception as ex:
logging.exception(ex)
return None
return rule.rule_id
def process_vrf_svcs_rule_set(rule_id, **params):
try:
if not VrfIncomingSvcsRuleCache.update_rule(rule_id, **params):
return False
except ValueError:
log_err('Failed to update rule with params: %s' % params)
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_svcs_rule_del(rule_type, vrf_name, action, af, **params):
try:
rule = VrfIncomingSvcsRule(rule_type, vrf_name, action, af, **params)
if not VrfIncomingSvcsRuleCache.delete_rule(rule):
return False
except ValueError:
log_err('Failed to initiate rule object')
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_svcs_rule_del_by_id(rule_id):
try:
if not VrfIncomingSvcsRuleCache.delete_rule_by_id(rule_id):
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_svcs_rule_get(resp, rule_id = None, rule_type = None, vrf_name = None, **params):
try:
if rule_id is not None:
found_rule = VrfIncomingSvcsRuleCache.find_rule_by_id(rule_id)
if found_rule is not None:
resp.append(found_rule[0].to_cps_obj().get())
else:
rule_list = VrfIncomingSvcsRuleCache.get_all_rules(rule_type, vrf_name, **params)
for rule in rule_list:
resp.append(rule.to_cps_obj().get())
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_svcs_clear_rules(vrf_name = None):
try:
VrfIncomingSvcsRuleCache.clear_all_rules(vrf_name)
except Exception as ex:
logging.exception(ex)
return False
return True
""" Object to define one VRF outgoing IP service rule """
class VrfOutgoingSvcsRule(object):
KEY_ATTRS = ['rule_type', 'vrf_name', 'af', 'dst_ip',
'protocol', 'dst_port', 'out_src_ip']
@classmethod
def is_rule_equal(cls, r1, r2):
r1_attrs = vars(r1)
r2_attrs = vars(r2)
for key_attr in cls.KEY_ATTRS:
if key_attr in r1_attrs and key_attr in r2_attrs:
if r1_attrs[key_attr] != r2_attrs[key_attr]:
return False
else:
return False
return True
def __init__(self, rule_type, vrf_name, action, af, dst_ip = None, protocol = None,
dst_port = None, out_src_ip = None, private_ip = None, private_port = None,
seq_num = 0, rule_id = None, high_prio = False):
"""
Constructor to create a outgoing service IP rule object
@rule_type - either IP or SNAT rule
@vrf_name - namespace
@action - 1: dnat 2: snat
@af - address family, either IPv4 or IPv6, it could be direct number or string
@dst_ip - matched destination IP address
@protocol - IP protocol: 1: tcp, 2: udp, 3: icmp
@dst_port - L4 destination port
@out_src_ip - outgoing source IP when action is SNAT
@private_ip - private IP for outgoing IP services. it is optional
@private_port - private PORT for outgoing IP services. it is optional
@seq_num - sequence number of the rule. it is optional
@rule_id - rule ID. it is optional
@high_prio - if it is high priority rule
"""
self.rule_type = rule_type
self.vrf_name = vrf_name
self.af = af
self.dst_ip = dst_ip
self.dst_prefix_len = None # For compatibility purpose
self.protocol = protocol
self.dst_port = dst_port
self.out_src_ip = out_src_ip
self.action = action
self.seq_num = seq_num
self.grp_priority = high_prio
self.packet_count = None
self.byte_count = None
self.private_ip = private_ip
self.private_port = private_port
# following attributes are added to outgoing service rule,
# just to keep the rule common b/w incoming & outgoing service rule.
self.src_ip = None
if self.rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
if self.action != VrfSvcsRuleAction.RULE_ACTION_SNAT:
log_err('for SNAT rule, action should be SNAT')
raise ValueError
if self.out_src_ip is None:
log_err('Outgoing Source IP is mandatory for SNAT action')
raise ValueError
self.rule_id = rule_id
def __setattr__(self, key, val):
if key == 'grp_priority' and val is not None:
if val:
val = VrfSvcsRuleGroupPrio.HIGH_GRP_PRIO
else:
val = VrfSvcsRuleGroupPrio.DEFAULT_GRP_PRIO
super(VrfOutgoingSvcsRule, self).__setattr__(key, val)
if key == 'rule_type' and self.get_rule_type_name() is None:
log_err('Invalid rule type %s' % str(self.rule_type))
raise ValueError
elif key == 'af' and self.get_af_name() is None:
log_err('Invalid address family number %s' % str(self.af))
raise ValueError
elif key == 'action' and self.get_action_name() is None:
log_err('Invalid action ID %s' % str(self.action))
raise ValueError
if val is not None:
if key == 'protocol' and self.get_proto_name() is None:
log_err('Invalid protocol number %s' % str(self.protocol))
raise ValueError
def __eq__(self, other):
return VrfOutgoingSvcsRule.is_rule_equal(self, other)
def __ne__(self, other):
return not VrfOutgoingSvcsRule.is_rule_equal(self, other)
def __hash__(self):
hash_val = 0
attrs = vars(self)
for key_attr in self.KEY_ATTRS:
if key_attr in attrs and attrs[key_attr] is not None:
hash_val ^= hash(attrs[key_attr])
return hash_val
def get_rule_type_name(self):
type_name_map = {VrfSvcsRuleType.RULE_TYPE_OUT_IP: 'IP',
VrfSvcsRuleType.RULE_TYPE_SNAT: 'SNAT'}
if self.rule_type in type_name_map:
return type_name_map[self.rule_type]
else:
return None
def get_af_name(self):
af_name_map = {socket.AF_INET: 'IPv4', socket.AF_INET6: 'IPv6'}
if self.af in af_name_map:
return af_name_map[self.af]
else:
return None
def get_action_name(self, for_ipt_target = False):
action_name_map = {VrfSvcsRuleAction.RULE_ACTION_DNAT: 'dnat',
VrfSvcsRuleAction.RULE_ACTION_SNAT: 'snat'}
if self.action in action_name_map:
action_name = action_name_map[self.action]
if type(action_name) is tuple:
return action_name[1] if for_ipt_target else action_name[0]
else:
return action_name
else:
return None
def get_proto_name(self):
proto_name_map = {VrfSvcsRuleProto.RULE_PROTO_TCP: 'tcp',
VrfSvcsRuleProto.RULE_PROTO_UDP: 'udp',
VrfSvcsRuleProto.RULE_PROTO_ICMP: 'icmp',
VrfSvcsRuleProto.RULE_PROTO_ICMPV6: 'icmpv6'}
if self.protocol in proto_name_map:
return proto_name_map[self.protocol]
else:
return None
def __str__(self):
ret_str = ('%-5s %-8sVRF: %-10s SEQ: %5d-%-4d RULE: %-10s %s%s%s%s' %
(self.get_rule_type_name(),
('-' if self.rule_id is None else ('%d' % self.rule_id)),
self.vrf_name, self.grp_priority, self.seq_num,
('%s' % self.get_action_name() if self.action is not None else ''),
self.get_af_name(),
('dst_ip %s' % (socket.inet_ntop(self.af, self.dst_ip))\
if self.dst_ip is not None else ''),
(' %s' % self.get_proto_name() if self.protocol is not None else ''),
(' dst_port %d' % self.dst_port if self.dst_port is not None else '')))
if self.action == VrfSvcsRuleAction.RULE_ACTION_SNAT:
ret_str += ('%s' %
('outgoing source IP %s' % (socket.inet_ntop(self.af, self.out_src_ip) \
if self.af is not None and self.out_src_ip is not None else '')))
if self.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
ret_str += ('%s%s' %
(('private-ip %s' % (socket.inet_ntop(self.af, self.private_ip) \
if self.af is not None and self.private_ip is not None else '')),
('private-port %s' % (str(self.private_port) if self.private_port is not None else ''))))
return ret_str
def to_cps_obj(self):
cps_attr_map = {
'vrf_name': 'ni-name',
'af': 'af',
'dst_ip': 'public-ip',
'protocol': 'protocol',
'dst_port': 'public-port',
'out_src_ip': 'outgoing-source-ip',
'private_ip': 'private-ip',
'private_port': 'private-port',
'rule_id': 'id'}
obj = cps_object.CPSObject('vrf-firewall/ns-outgoing-service')
for attr_name, attr_val in vars(self).items():
if attr_name in cps_attr_map and attr_val is not None:
if attr_name == 'dst_ip' or attr_name == 'out_src_ip' or attr_name == 'private_ip':
attr_val = binascii.hexlify(attr_val)
obj.add_attr(cps_attr_map[attr_name], attr_val)
return obj
def match(self, **params):
attrs = vars(self)
for key, val in params.items():
if key not in attrs:
return False
if val is not None and val != attrs[key]:
return False
return True
class VrfOutgoingSvcsRuleList(list):
def __init__(self):
super(VrfOutgoingSvcsRuleList, self).__init__()
# sorted list of all seq num
self.seq_num_list = []
# map: rule_id => rule position in list
self.rule_id_map = {}
def __str__(self):
str_buf = StringIO()
for rule in self:
str_buf.write('%s\n' % rule)
out_str = str_buf.getvalue()
str_buf.close()
return out_str
def __eq__(self, other):
return super(VrfOutgoingSvcsRuleList, self).__eq__(other)
def __ne__(self, other):
return super(VrfOutgoingSvcsRuleList, self).__ne__(other)
def update_rule_id_map(self, start_idx):
for idx in range(start_idx, len(self)):
self.rule_id_map[self[idx].rule_id] = idx
# insert rule to list, update seq_num list and rule_id map
def insert(self, rule):
if rule.rule_id is None or rule.rule_id in self.rule_id_map:
# rule ID should be assigned and not used by another rule
log_err('Rule ID is not assigned' if rule.rule_id is None
else ('Rule ID %d is used' % rule.rule_id))
return None
try:
idx = self.index(rule)
except ValueError:
idx = None
if idx is not None:
# same rule already in list
log_err('Rule to be inserted is already in list')
return None
idx = bisect.bisect_right(self.seq_num_list, (rule.grp_priority, rule.seq_num))
super(VrfOutgoingSvcsRuleList, self).insert(idx, rule)
self.seq_num_list.insert(idx, (rule.grp_priority, rule.seq_num))
self.update_rule_id_map(idx)
return idx
def remove(self, rule):
if rule.rule_id is not None and rule.rule_id in self.rule_id_map:
idx = self.rule_id_map[rule.rule_id]
rule_id = rule.rule_id
else:
try:
idx = self.index(rule)
except ValueError:
# rule not found
log_err('Rule not found for delete')
return None
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
if rule.private_ip is not None and \
(self[idx].private_ip is None or \
rule.private_ip != self[idx].private_ip):
# rule not found
log_err('Rule (private-ip) not found for delete.')
return None
if rule.private_port is not None and \
(self[idx].private_port is None or \
rule.private_port != self[idx].private_port):
# rule not found
log_err('Rule (private-port) not found for delete.')
return None
rule_id = self[idx].rule_id
orig_rule = self[idx]
del self[idx]
del self.seq_num_list[idx]
del self.rule_id_map[rule_id]
self.update_rule_id_map(idx)
return (orig_rule, idx)
def remove_by_id(self, rule_id):
if rule_id not in self.rule_id_map:
return None
idx = self.rule_id_map[rule_id]
orig_rule = self[idx]
del self[idx]
del self.seq_num_list[idx]
del self.rule_id_map[rule_id]
self.update_rule_id_map(idx)
return orig_rule
def clear(self):
del self[:]
del self.seq_num_list[:]
self.rule_id_map.clear()
class VrfOutgoingSvcsRuleCache:
# map: af, vrf_name => rule_list
snat_rules = {socket.AF_INET: {}, socket.AF_INET6: {}}
ip_rules = {socket.AF_INET: {}, socket.AF_INET6: {}}
mutex = threading.RLock()
id_generator = IdGenerator()
@classmethod
def outgoing_ip_svcs_rule_sub_net_config(cls, op, rule):
if rule.rule_type != VrfSvcsRuleType.RULE_TYPE_OUT_IP:
log_err('Cannot update private IP/Port for rule that is not of outgoing service binding config')
return False
#During rule add, retrieve new private IP & port information.
if (op.lower() == 'insert'):
ret, private_ip, private_port = process_outgoing_ip_svcs_sub_net_config(True,\
rule.vrf_name, rule.af, rule.protocol, rule.dst_ip, rule.dst_port)
if ret is False:
log_err('Failed to allocate private IP, port for outgoing service binding config.'
' VRF %s %s%s%s%s%s' % (
rule.vrf_name,
('AF %d' % rule.af if rule.af is not None else ' '),
(' PROTO %d' % rule.protocol if rule.protocol is not None else ''),
(' DST IP %s' % get_ip_str(rule.af, rule.dst_ip) if rule.dst_ip is not None else ''),
(' PORT %d' % rule.dst_port if rule.dst_port is not None else ''),
(' ID %d' % rule.rule_id if rule.rule_id is not None else '')))
else:
#update the allocated private IP and port to the rule cache.
rule.private_ip = private_ip
rule.private_port = private_port
return ret
elif (op.lower() == 'delete'):
#During rule delete, release private IP & port information.
ret, private_ip, private_port = process_outgoing_ip_svcs_sub_net_config(False,\
rule.vrf_name, rule.af, rule.protocol, rule.dst_ip, rule.dst_port)
if ret is False:
log_err('Failed to release private IP, port for outgoing service binding config.'
' VRF %s %s%s%s%s%s' % (
rule.vrf_name,
('AF %d' % rule.af if rule.af is not None else ' '),
(' PROTO %d' % rule.protocol if rule.protocol is not None else ''),
(' DST IP %s' % get_ip_str(rule.af, rule.dst_ip) if rule.dst_ip is not None else ''),
(' PORT %d' % rule.dst_port if rule.dst_port is not None else ''),
(' ID %d' % rule.rule_id if rule.rule_id is not None else '')))
return ret
return False
@classmethod
def insert_rule(cls, rule):
log_info('Handling add rule: %s' % rule)
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af]\
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP else cls.snat_rules[rule.af]
if rule.vrf_name not in rule_list:
rule_list[rule.vrf_name] = VrfOutgoingSvcsRuleList()
if rule.rule_id is None:
rule.rule_id = cls.id_generator.get_new_id()
if rule.rule_id is None:
log_err('Could not generate new rule ID')
log_info(str(cls.id_generator))
cls.mutex.release()
return False
else:
if cls.id_generator.is_id_used(rule.rule_id):
log_err('Given rule ID %d is used' % rule.rule_id)
log_info(str(cls.id_generator))
cls.mutex.release()
return False
if not cls.id_generator.reserve_id(rule.rule_id):
log_err('Failed to reserve rule ID %d' % rule.rule_id)
log_info(str(cls.id_generator))
cls.mutex.release()
return False
ret_val = True
#allocate private IP and port for outgoing service binding config
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP and \
not cls.outgoing_ip_svcs_rule_sub_net_config('insert', rule):
log_err('Failed to retrieve private IP, port for outgoing service binding config')
cls.id_generator.release_id(rule.rule_id)
cls.mutex.release()
return False
idx = rule_list[rule.vrf_name].insert(rule)
if idx is not None:
if not IptablesHandler.proc_rule('insert', rule, idx):
log_err('Failed to call iptables to insert rule')
cls.id_generator.release_id(rule.rule_id)
# rollback
rule_list[rule.vrf_name].remove(rule)
ret_val = False
#on failure, release private IP and port for outgoing service binding config
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
cls.outgoing_ip_svcs_rule_sub_net_config('delete', rule)
else:
log_err('Failed to insert rule to cache')
#on failure, release private IP and port for outgoing service binding config
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
cls.outgoing_ip_svcs_rule_sub_net_config('delete', rule)
cls.id_generator.release_id(rule.rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
log_info('Rule added, ID=%d' % rule.rule_id)
return ret_val
@classmethod
def delete_rule(cls, rule):
log_info('Handling delete rule: %s' % rule)
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af]\
if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP else cls.snat_rules[rule.af]
if rule.vrf_name not in rule_list:
log_err('VRF name %s not found in cache' % rule.vrf_name)
cls.mutex.release()
return False
ret_val = rule_list[rule.vrf_name].remove(rule)
if ret_val is None:
log_err('Failed to delete rule from cache')
cls.mutex.release()
return False
del_rule, idx = ret_val
ret_val = True
if not IptablesHandler.proc_rule('delete', del_rule, idx):
log_err('Failed to call iptables to delete rule')
# rollback
rule_list[del_rule.vrf_name].insert(del_rule)
ret_val = False
#release private IP and port for outgoing service binding config
if ret_val is True and del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
cls.outgoing_ip_svcs_rule_sub_net_config('delete', del_rule)
if len(rule_list[rule.vrf_name]) == 0:
del rule_list[rule.vrf_name]
cls.mutex.release()
if ret_val:
if not cls.id_generator.release_id(del_rule.rule_id):
log_err('Failed to release rule ID %d' % del_rule.rule_id)
log_info(str(cls.id_generator))
log_info('Rule deleted')
return ret_val
@classmethod
def find_rule_by_id(cls, rule_id):
ret_val = None
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
for vrf_name, rule_list in cls.ip_rules[af].items():
if rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule_id]
ret_val = (rule_list[idx], idx)
break
if ret_val is not None:
break
for vrf_name, rule_list in cls.snat_rules[af].items():
if rule_id in rule_list.rule_id_map:
idx = rule_list.rule_id_map[rule_id]
ret_val = (rule_list[idx], idx)
break
if ret_val is not None:
break
cls.mutex.release()
return ret_val
@classmethod
def find_rule_by_match(cls, rule):
cls.mutex.acquire()
rule_list = cls.ip_rules[rule.af] if rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP else cls.snat_rules[rule.af]
if rule.vrf_name not in rule_list:
cls.mutex.release()
return None
try:
idx = rule_list[rule.vrf_name].index(rule)
except ValueError:
cls.mutex.release()
return None
cls.mutex.release()
return rule_list[rule.vrf_name][idx]
@classmethod
def delete_rule_by_id(cls, rule_id):
log_info('Handling delete rule by ID: %d' % rule_id)
cls.mutex.acquire()
ret_val = True
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is not None:
del_rule, idx = found_rule
rule_list = cls.ip_rules[del_rule.af] \
if del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP \
else cls.snat_rules[del_rule.af]
if rule_list[del_rule.vrf_name].remove_by_id(rule_id) is None:
log_err('Failed to remove rule with ID %d' % rule_id)
cls.mutex.release()
return False
if not IptablesHandler.proc_rule('delete', del_rule, idx):
log_err('Failed to call iptables to delete rule')
# rollback
rule_list = cls.ip_rules[del_rule.af] \
if del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP \
else cls.snat_rules[del_rule.af]
rule_list[del_rule.vrf_name].insert(del_rule)
ret_val = False
#release private IP and port for outgoing service binding config
if ret_val is True and del_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
cls.outgoing_ip_svcs_rule_sub_net_config('delete', del_rule)
if len(rule_list[del_rule.vrf_name]) == 0:
del rule_list[del_rule.vrf_name]
else:
log_err('Rule ID %d not found for delete' % rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
if not cls.id_generator.release_id(rule_id):
log_err('Failed to release rule ID %d' % rule_id)
log_info(str(cls.id_generator))
log_info('Rule deleted')
return ret_val
@classmethod
def replace_rule(cls, rule_id, new_rule):
log_info('Handling replace rule with ID %d' % rule_id)
cls.mutex.acquire()
ret_val = True
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is not None:
old_rule, idx = found_rule
rule_list = cls.ip_rules[old_rule.af] \
if old_rule.rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP else cls.snat_rules[old_rule.af]
if not IptablesHandler.proc_rule('replace', new_rule, idx):
log_err('Failed to call iptables to replace rule')
ret_val = False
else:
rule_list[old_rule.vrf_name][idx] = new_rule
else:
log_err('Rule ID %d not found for replace' % rule_id)
ret_val = False
cls.mutex.release()
if ret_val:
log_info('Rule replaced')
return ret_val
@classmethod
def update_rule(cls, rule_id, **params):
log_info('Handling update rule: ID %d param %s' % (rule_id, params))
cls.mutex.acquire()
found_rule = cls.find_rule_by_id(rule_id)
if found_rule is None:
log_err('Rule ID %d not found for update' % rule_id)
cls.mutex.release()
return False
old_rule, _ = found_rule
upd_rule = copy.deepcopy(old_rule)
changed_attrs = set()
def check_and_set(rule, key, val):
try:
orig_val = getattr(rule, key)
except AttributeError:
orig_val = None
if val != orig_val:
if val is not None:
setattr(rule, key, val)
else:
delattr(rule, key)
changed_attrs.add(key)
for key, val in params.items():
if val is not None:
check_and_set(upd_rule, key, val)
if len(changed_attrs) == 0:
log_info('There is no change to be updated, just return')
cls.mutex.release()
return True
log_info('old_rule: %s' % old_rule)
log_info('new_rule: %s' % upd_rule)
match_rule = cls.find_rule_by_match(upd_rule)
if match_rule is not None and match_rule.rule_id != old_rule.rule_id:
log_err('The updating rule already exists')
cls.mutex.release()
return False
repl_attrs = {'dst_ip', 'dst_port', 'out_src_ip', 'action'}
no_repl_attrs = changed_attrs.difference(repl_attrs)
if len(no_repl_attrs) > 0:
log_info('Non-replacable attributes %s changed, delete and add rule' % no_repl_attrs)
if not cls.delete_rule_by_id(rule_id):
log_err('Failed to delete existing rule by ID')
cls.mutex.release()
return False
ret_val = cls.insert_rule(upd_rule)
if not ret_val:
log_err('Failed to insert updated rule')
cls.insert_rule(old_rule)
else:
log_info('Only replacable attributes changed, just replace rule')
ret_val = cls.replace_rule(rule_id, upd_rule)
if not ret_val:
log_err('Failed to replace rule')
cls.mutex.release()
if ret_val:
log_info('Rule updated')
return ret_val
@classmethod
def clear_all_rules(cls, flt_vrf = None, flt_af = None):
log_info('Handling clear all rules for VRF %s and AF %s' % (
flt_vrf if flt_vrf is not None else '-',
str(flt_af) if flt_af is not None else '-'))
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
if flt_af is not None and flt_af != af:
continue
for vrf_name, rule_list in cls.ip_rules[af].items():
if flt_vrf is not None and flt_vrf != vrf_name:
continue
for rule in rule_list:
IptablesHandler.proc_rule('delete', rule)
cls.outgoing_ip_svcs_rule_sub_net_config('delete', rule)
cls.id_generator.release_id(rule.rule_id)
rule_list.clear()
del cls.ip_rules[af][vrf_name]
for vrf_name, rule_list in cls.snat_rules[af].items():
if flt_vrf is not None and flt_vrf != vrf_name:
continue
for rule in rule_list:
IptablesHandler.proc_rule('delete', rule)
cls.id_generator.release_id(rule.rule_id)
rule_list.clear()
del cls.snat_rules[af][vrf_name]
cls.mutex.release()
@classmethod
def dump_rules(cls):
cls.mutex.acquire()
for af in [socket.AF_INET, socket.AF_INET6]:
for vrf_name, rule_list in cls.ip_rules[af].items():
log_info('-------------------------------------')
log_info(' %s IP Rules of VRF %s' % ('IPv4' if af == socket.AF_INET else 'IPv6', vrf_name))
log_info('-------------------------------------')
log_info('\n%s\n' % rule_list)
for vrf_name, rule_list in cls.snat_rules[af].items():
log_info('-------------------------------------')
log_info(' %s SNAT Rules of VRF %s' % ('IPv4' if af == socket.AF_INET else 'IPv6', vrf_name))
log_info('--------------------------------------')
log_info('\n%s\n' % rule_list)
cls.mutex.release()
@classmethod
def get_all_rules(cls, rule_type = None, vrf_name = None, **params):
log_info('Handling get rules for vrf %s' % (vrf_name if vrf_name is not None else 'ALL'))
cls.mutex.acquire()
ret_list = []
for af in [socket.AF_INET, socket.AF_INET6]:
if rule_type is None or rule_type == VrfSvcsRuleType.RULE_TYPE_OUT_IP:
for vrf, rule_list in cls.ip_rules[af].items():
if vrf_name is not None and vrf != vrf_name:
continue
for rule in rule_list:
if not rule.match(**params):
continue
ret_list.append(copy.deepcopy(rule))
if rule_type is None or rule_type == VrfSvcsRuleType.RULE_TYPE_SNAT:
for vrf, rule_list in cls.snat_rules[af].items():
if vrf_name is not None and vrf != vrf_name:
continue
for rule in rule_list:
if not rule.match(**params):
continue
ret_list.append(copy.deepcopy(rule))
cls.mutex.release()
return ret_list
def process_vrf_outgoing_svcs_rule_add(rule_type, vrf_name, action, af, **params):
try:
rule = VrfOutgoingSvcsRule(rule_type, vrf_name, action, af, **params)
if not VrfOutgoingSvcsRuleCache.insert_rule(rule):
return (None, None, None)
except ValueError:
log_err('Failed to initiate rule object')
return (None, None, None)
except Exception as ex:
logging.exception(ex)
return (None, None, None)
return (rule.rule_id, rule.private_ip, rule.private_port)
def process_vrf_outgoing_svcs_rule_set(rule_id, **params):
try:
if not VrfOutgoingSvcsRuleCache.update_rule(rule_id, **params):
return False
except ValueError:
log_err('Failed to update rule with params: %s' % params)
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_outgoing_svcs_rule_del(rule_type, vrf_name, action, af, **params):
try:
rule = VrfOutgoingSvcsRule(rule_type, vrf_name, action, af, **params)
if not VrfOutgoingSvcsRuleCache.delete_rule(rule):
return None
except ValueError:
log_err('Failed to initiate rule object')
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_outgoing_svcs_rule_del_by_id(rule_id):
try:
if not VrfOutgoingSvcsRuleCache.delete_rule_by_id(rule_id):
return False
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_outgoing_svcs_rule_get(resp, rule_id = None, rule_type = None, vrf_name = None, **params):
try:
if rule_id is not None:
found_rule = VrfOutgoingSvcsRuleCache.find_rule_by_id(rule_id)
if found_rule is not None:
resp.append(found_rule[0].to_cps_obj().get())
else:
rule_list = VrfOutgoingSvcsRuleCache.get_all_rules(rule_type, vrf_name, **params)
for rule in rule_list:
resp.append(rule.to_cps_obj().get())
except Exception as ex:
logging.exception(ex)
return False
return True
def process_vrf_outgoing_svcs_clear_rules(vrf_name = None):
try:
VrfOutgoingSvcsRuleCache.clear_all_rules(vrf_name)
except Exception as ex:
logging.exception(ex)
return False
return True
| 42.271637 | 128 | 0.57063 | 10,749 | 81,077 | 4.045586 | 0.047167 | 0.030631 | 0.02111 | 0.031665 | 0.792416 | 0.748609 | 0.71538 | 0.668744 | 0.646783 | 0.623925 | 0 | 0.003791 | 0.333054 | 81,077 | 1,917 | 129 | 42.293688 | 0.800403 | 0.036472 | 0 | 0.702115 | 0 | 0.001813 | 0.092784 | 0.008016 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.001208 | 0.009063 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e46ef0ba1964bdc9f7db56535239826fcdb762b1 | 133 | py | Python | tests/conftest.py | Dakhnovskiy/background_worker_lib | cd19a3b263ef7a3a4f7ac8a5986f1bbfff16ec38 | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Dakhnovskiy/background_worker_lib | cd19a3b263ef7a3a4f7ac8a5986f1bbfff16ec38 | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Dakhnovskiy/background_worker_lib | cd19a3b263ef7a3a4f7ac8a5986f1bbfff16ec38 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import sys
import os
LIBRARY_DIR = os.path.dirname(os.path.dirname(__file__))
sys.path.append(LIBRARY_DIR)
| 16.625 | 56 | 0.721805 | 21 | 133 | 4.285714 | 0.571429 | 0.222222 | 0.288889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008475 | 0.112782 | 133 | 7 | 57 | 19 | 0.754237 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e4884fe24f334edcf66893b4e2970c2e149529bd | 6,478 | py | Python | route/prj.py | KeitaShiratori/ripple | 183abdbb5f3853acab459c703c3c10d3fcd47e9b | [
"MIT"
] | null | null | null | route/prj.py | KeitaShiratori/ripple | 183abdbb5f3853acab459c703c3c10d3fcd47e9b | [
"MIT"
] | null | null | null | route/prj.py | KeitaShiratori/ripple | 183abdbb5f3853acab459c703c3c10d3fcd47e9b | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request, Blueprint, session, redirect, url_for
from mw.prj import scan_prj, get_prj, create_prj, update_prj, upd_prj, join_prj, add_timeline, add_link, complete_prj
from mw.img import upload_img, get_img
prj = Blueprint('prj', __name__, url_prefix='/prj')
@prj.route('/')
def index():
session['referer'] = '/prj'
return render_template('prj/search.html', title='Project検索', prj={})
@prj.route('/show/<prj_id>', methods=['GET'])
def show(prj_id):
if prj_id is None:
# 早期リターン
return render_template('prj/show.html', title='Project詳細', data={})
# sessionからユーザ情報を取得
user_id = session['userID'] if "userID" in session else ""
# prj_idでAWSのDBを検索
data = get_prj(prj_id, user_id)
if data is None:
# データが取得できなかった場合、
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
# データが取得できた場合、sessionに参照しているプロジェクト情報をセットする
session['LatestViewProjectId'] = prj_id
session['referer'] = '/prj/show'
# 詳細ページを表示
return render_template('prj/show.html', title='Project詳細', data=data)
@prj.route('/jrnpoiajoiaj/<photo_id>', methods=['GET'])
def show_img(photo_id):
if photo_id is None:
return ""
return get_img(photo_id)
@prj.route('/entry', methods=['GET'])
def entry():
session['referer'] = '/prj/entry'
return render_template('prj/entry.html', title='Project作成')
@prj.route('/create', methods=['POST'])
def create():
data = dict(request.form)
if 'name' not in data or len(data['name']) is 0:
# 早期リターン
return jsonify({'code': 'W00100','message': '入力された値が無効です。'})
user_id = session.get('userID')
user_name = session.get('name')
if user_id is None or user_name is None:
# ユーザ情報がなかったら後続処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
# 画像を取得
if 'photo' in request.files and len(request.files['photo'].filename) > 0:
photo = request.files['photo']
photo_id = upload_img(photo)
if photo_id is not None:
data['photo_id'] = photo_id
# prjを更新、更新後の情報をprjに再セット
prj_id = create_prj(data, user_id, user_name)
session['referer'] = '/prj/create'
# 詳細ページを表示
return redirect(url_for('prj.show', prj_id=prj_id))
@prj.route('/update_entry/<prj_id>', methods=['GET'])
def update_entry(prj_id):
if prj_id is None:
# 早期リターン
return render_template('prj/show.html', title='Project詳細', data={})
# sessionからユーザ情報を取得
user_id = session['userID'] if "userID" in session else ""
# prj_idでAWSのDBを検索
data = get_prj(prj_id, user_id)
session['referer'] = '/prj/update_entry'
# 詳細ページを表示
return render_template('prj/update_entry.html', title='Project更新', data=data)
@prj.route('/update', methods=['POST'])
def update():
new_data = dict(request.form)
if 'name' not in new_data or len(new_data['name']) is 0:
# 早期リターン
return jsonify({'code': 'W00100','message': '入力された値が無効です。'})
user_id = session.get('userID')
user_name = session.get('name')
prj_id = session.get('LatestViewProjectId')
if user_id is None or user_name is None:
# ユーザ情報がなかったら後続処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
# 画像を取得
if 'photo' in request.files and len(request.files['photo'].filename) > 0:
photo = request.files['photo']
photo_id = upload_img(photo)
if photo_id is not None:
new_data['photo_id'] = photo_id
# prjを更新、更新後の情報をprjに再セット
data = update_prj(prj_id, new_data)
session['referer'] = '/prj/update'
# 詳細ページを表示
return redirect(url_for('prj.show', prj_id=prj_id))
@prj.route('/joinProject', methods=['GET'])
def joinProject():
if session['referer'] != '/prj/show':
# プロジェクト参加が可能なのは、プロジェクト詳細ページからプロジェクト参加ボタンを押されたときのみ。
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
user_id = session.get('userID')
user_name = session.get('name')
prj_id = session.get('LatestViewProjectId')
if user_id is None or user_name is None or prj_id is None:
# ユーザ情報またはプロジェクトIDがなかったらプロジェクト参加処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
data = join_prj(prj_id, user_id, user_name)
if data is None:
# データが取得できなかった場合、
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
session['referer'] = '/prj/joinProject'
return redirect(url_for('prj.show', prj_id=prj_id))
@prj.route('/addTimeline', methods=['POST'])
def addTimeline():
if session['referer'] != '/prj/show':
# プロジェクト参加が可能なのは、プロジェクト詳細ページからプロジェクト参加ボタンを押されたときのみ。
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
prj_id = session.get('LatestViewProjectId')
if prj_id is None:
# ユーザ情報またはプロジェクトIDがなかったらプロジェクト参加処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
form = dict(request.form)
data = add_timeline(prj_id, form)
if data is None:
# データが取得できなかった場合、
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
session['referer'] = '/prj/addTimeline'
return redirect(url_for('prj.show', prj_id=prj_id))
@prj.route('/addLink', methods=['POST'])
def addLink():
if session['referer'] != '/prj/show':
# プロジェクト参加が可能なのは、プロジェクト詳細ページからプロジェクト参加ボタンを押されたときのみ。
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
prj_id = session.get('LatestViewProjectId')
if prj_id is None:
# ユーザ情報またはプロジェクトIDがなかったらプロジェクト参加処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
form = dict(request.form)
data = add_link(prj_id, form)
if data is None:
# データが取得できなかった場合、
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
session['referer'] = '/prj/addLink'
return redirect(url_for('prj.show', prj_id=prj_id))
@prj.route('/complete', methods=['POST'])
def complete():
if session['referer'] != '/prj/show':
# プロジェクト参加が可能なのは、プロジェクト詳細ページからプロジェクト参加ボタンを押されたときのみ。
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
prj_id = session.get('LatestViewProjectId')
if prj_id is None:
# ユーザ情報またはプロジェクトIDがなかったらプロジェクト参加処理を実行できないのでエラー
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
form = dict(request.form)
data = complete_prj(prj_id, form)
if data is None:
# データが取得できなかった場合、
return render_template('err/404.html', title='404 | 指定された情報が見つかりませんでした')
session['referer'] = '/prj/complete'
return redirect(url_for('prj.show', prj_id=prj_id))
| 31.294686 | 117 | 0.705465 | 837 | 6,478 | 5.297491 | 0.107527 | 0.040595 | 0.094723 | 0.077808 | 0.756653 | 0.731394 | 0.731394 | 0.713802 | 0.700271 | 0.68922 | 0 | 0.018841 | 0.147885 | 6,478 | 206 | 118 | 31.446602 | 0.78442 | 0.115468 | 0 | 0.540323 | 0 | 0 | 0.252062 | 0.011761 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08871 | false | 0 | 0.024194 | 0 | 0.362903 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e49be29820386b62ea6678e09bca7cb9f4c1433a | 456 | py | Python | vnpy/trader/app/__init__.py | black0144/vnpy | 0d0ea30dad14a0150f7500ff9a62528030321426 | [
"MIT"
] | 5 | 2019-01-17T12:14:14.000Z | 2021-05-30T10:24:42.000Z | vnpy/trader/app/__init__.py | black0144/vnpy | 0d0ea30dad14a0150f7500ff9a62528030321426 | [
"MIT"
] | null | null | null | vnpy/trader/app/__init__.py | black0144/vnpy | 0d0ea30dad14a0150f7500ff9a62528030321426 | [
"MIT"
] | 5 | 2019-03-26T03:17:45.000Z | 2019-11-05T08:08:18.000Z | # encoding: UTF-8
########################################################################
class AppEngine(object):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
pass
#----------------------------------------------------------------------
def stop(self):
""""""
raise NotImplementedError
| 20.727273 | 75 | 0.212719 | 16 | 456 | 5.8125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002933 | 0.252193 | 456 | 22 | 76 | 20.727273 | 0.269795 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
e4c7f63cf986f84f26885ea4bb8436d00c5e1137 | 62 | py | Python | pyheif/__init__.py | L3o-pold/pyheif | 082c58d4275f702da23f7b6207c37bc4a090aa9c | [
"Apache-2.0"
] | null | null | null | pyheif/__init__.py | L3o-pold/pyheif | 082c58d4275f702da23f7b6207c37bc4a090aa9c | [
"Apache-2.0"
] | null | null | null | pyheif/__init__.py | L3o-pold/pyheif | 082c58d4275f702da23f7b6207c37bc4a090aa9c | [
"Apache-2.0"
] | null | null | null | from .reader import read_heif
from .writer import write_heif
| 15.5 | 30 | 0.822581 | 10 | 62 | 4.9 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 62 | 3 | 31 | 20.666667 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4e1d5a9b53e534b29a7cb852549c587799ec2a5 | 7,811 | py | Python | code/nin.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | 1 | 2020-06-03T12:53:43.000Z | 2020-06-03T12:53:43.000Z | code/nin.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | null | null | null | code/nin.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | null | null | null | from caffe import layers as L
from caffe import params as P
import caffe
def conv_relu(bottom, num_output=64, kernel_size=3, stride=1, pad=0):
conv = L.Convolution(bottom, num_output=num_output, kernel_size=kernel_size, stride=stride, pad=pad,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0))
relu = L.ReLU(conv, in_place=True)
return conv, relu
def fc_relu_drop(bottom, fc_num_output=4096, dropout_ratio=0.5):
fc = L.InnerProduct(bottom, num_output=fc_num_output,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0)
)
relu = L.ReLU(fc, in_place=True)
drop = L.Dropout(fc, in_place=True, dropout_param=dict(dropout_ratio=dropout_ratio))
return fc, relu, drop
def conv_bn_scale_relu(bottom, num_output=64, kernel_size=3, stride=1, pad=0):
conv = L.Convolution(bottom, num_output=num_output, kernel_size=kernel_size, stride=stride, pad=pad,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0))
bn = L.BatchNorm(conv, use_global_stats=False, in_place=True)
scale = L.Scale(conv, scale_param=dict(bias_term=True), in_place=True)
relu = L.ReLU(conv, in_place=True)
return conv, bn, scale, relu
def accuracy_top1_top5(bottom, label):
accuracy_top1 = L.Accuracy(bottom, label, include=dict(phase=1))
accuracy_top5 = L.Accuracy(bottom, label, include=dict(phase=1), accuracy_param=dict(top_k=5))
return accuracy_top1, accuracy_top5
class NIN(object):
def __init__(self, lmdb_train, lmdb_test, num_output):
self.train_data = lmdb_train
self.test_data = lmdb_test
self.classifier_num = num_output
def nin_proto(self, batch_size, phase='TRAIN'):
n = caffe.NetSpec()
if phase == 'TRAIN':
source_data = self.train_data
mirror = True
else:
source_data = self.test_data
mirror = False
n.data, n.label = L.Data(source=source_data, backend=P.Data.LMDB, batch_size=batch_size, ntop=2,
transform_param=dict(crop_size=224, mean_value=[104, 117, 123], mirror=mirror))
n.conv1, n.relu0 = conv_relu(n.data, num_output=96, kernel_size=11, stride=4) # 96x53x53
n.cccp1, n.relu1 = conv_relu(n.conv1, num_output=96, kernel_size=1, stride=1)
n.cccp2, n.relu2 = conv_relu(n.cccp1, num_output=96, kernel_size=1, stride=1)
n.pool1 = L.Pooling(n.cccp2, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 96x26x26
n.conv2, n.relu3 = conv_relu(n.pool1, num_output=256, kernel_size=5, stride=1, pad=2) # 256x26x26
n.cccp3, n.relu4 = conv_relu(n.conv2, num_output=256, kernel_size=1, stride=1)
n.cccp4, n.relu5 = conv_relu(n.cccp3, num_output=256, kernel_size=1, stride=1)
n.pool2 = L.Pooling(n.cccp4, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 256x13x13
n.conv3, n.relu6 = conv_relu(n.pool2, num_output=384, kernel_size=3, stride=1, pad=1) # 384x13x13
n.cccp5, n.relu7 = conv_relu(n.conv3, num_output=384, kernel_size=1, stride=1)
n.cccp6, n.relu8 = conv_relu(n.cccp5, num_output=384, kernel_size=1, stride=1)
n.pool3 = L.Pooling(n.cccp6, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 384x6x6
n.drop7 = L.Dropout(n.pool3, in_place=True, dropout_param=dict(dropout_ratio=0.5))
n.conv4, n.relu9 = conv_relu(n.pool3, num_output=1024, kernel_size=3, stride=1, pad=1) # 1024x6x6
n.cccp7, n.relu10 = conv_relu(n.conv4, num_output=1024, kernel_size=1, stride=1)
n.cccp8, n.relu11 = conv_relu(n.cccp7, num_output=1024, kernel_size=1, stride=1)
n.pool4 = L.Pooling(n.cccp8, pool=P.Pooling.AVE, kernel_size=6, stride=1) # 1024x1x1
n.classifier = L.InnerProduct(n.pool4, num_output=self.classifier_num,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0)
)
n.loss = L.SoftmaxWithLoss(n.classifier, n.label)
if phase == 'TRAIN':
pass
else:
n.accuracy_top1, n.accuracy_top5 = accuracy_top1_top5(n.pool4, n.label)
return n.to_proto()
def nin_bn_proto(self, batch_size, phase='TRAIN'):
n = caffe.NetSpec()
if phase == 'TRAIN':
source_data = self.train_data
mirror = True
else:
source_data = self.test_data
mirror = False
n.data, n.label = L.Data(source=source_data, backend=P.Data.LMDB, batch_size=batch_size, ntop=2,
transform_param=dict(crop_size=227, mean_value=[104, 117, 123], mirror=mirror))
n.conv1, n.bn0, n.scale0, n.relu0 = conv_bn_scale_relu(n.data, num_output=96, kernel_size=11, stride=4)
n.cccp1, n.bn1, n.scale1, n.relu1 = conv_bn_scale_relu(n.conv1, num_output=96, kernel_size=1, stride=1)
n.cccp2, n.bn2, n.scale2, n.relu2 = conv_bn_scale_relu(n.cccp1, num_output=96, kernel_size=1, stride=1)
n.pool1 = L.Pooling(n.cccp2, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 96x26x26
n.conv2, n.bn3, n.scale3, n.relu3 = conv_bn_scale_relu(n.pool1, num_output=256, kernel_size=5, stride=1, pad=2)
n.cccp3, n.bn4, n.scale4, n.relu4 = conv_bn_scale_relu(n.conv2, num_output=256, kernel_size=1, stride=1)
n.cccp4, n.bn5, n.scale5, n.relu5 = conv_bn_scale_relu(n.cccp3, num_output=256, kernel_size=1, stride=1)
n.pool2 = L.Pooling(n.cccp4, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 256x13x13
n.conv3, n.bn6, n.scale6, n.relu6 = conv_bn_scale_relu(n.pool2, num_output=384, kernel_size=3, stride=1, pad=1)
n.cccp5, n.bn7, n.scale7, n.relu7 = conv_bn_scale_relu(n.conv3, num_output=384, kernel_size=1, stride=1)
n.cccp6, n.bn8, n.scale8, n.relu8 = conv_bn_scale_relu(n.cccp5, num_output=384, kernel_size=1, stride=1)
n.pool3 = L.Pooling(n.cccp6, pool=P.Pooling.MAX, kernel_size=3, stride=2) # 384x6x6
n.drop7 = L.Dropout(n.pool3, in_place=True, dropout_param=dict(dropout_ratio=0.5))
n.conv4, n.bn9, n.scale9, n.relu9 = conv_bn_scale_relu(n.pool3, num_output=1024, kernel_size=3, stride=1, pad=1)
n.cccp7, n.bn10, n.scale10, n.relu10 = conv_bn_scale_relu(n.conv4, num_output=1024, kernel_size=1, stride=1)
n.cccp8, n.bn11, n.scale11, n.relu11 = conv_bn_scale_relu(n.cccp7, num_output=1024, kernel_size=1, stride=1)
n.pool4 = L.Pooling(n.cccp8, pool=P.Pooling.AVE, kernel_size=6, stride=1) # 1024x1x1
n.classifier = L.InnerProduct(n.pool4, num_output=self.classifier_num,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0)
)
n.loss = L.SoftmaxWithLoss(n.classifier, n.label)
if phase == 'TRAIN':
pass
else:
n.accuracy_top1, n.accuracy_top5 = accuracy_top1_top5(n.pool4, n.label)
return n.to_proto()
| 56.194245 | 120 | 0.629369 | 1,220 | 7,811 | 3.835246 | 0.138525 | 0.081214 | 0.037615 | 0.058132 | 0.794401 | 0.766189 | 0.766189 | 0.766189 | 0.757427 | 0.725582 | 0 | 0.075189 | 0.238894 | 7,811 | 138 | 121 | 56.601449 | 0.711859 | 0.013955 | 0 | 0.522124 | 0 | 0 | 0.014306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061947 | false | 0.017699 | 0.026549 | 0 | 0.150442 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9018e97ded4d5af635be2e75d49910976c65c224 | 90 | py | Python | python/build/PaxHeaders.127271/soutil.py | ictyangye/ovs-c2ratelimiter | c0e1ada35b3b5f2524fbba6324c9e996e84ac9bc | [
"Apache-2.0"
] | null | null | null | python/build/PaxHeaders.127271/soutil.py | ictyangye/ovs-c2ratelimiter | c0e1ada35b3b5f2524fbba6324c9e996e84ac9bc | [
"Apache-2.0"
] | null | null | null | python/build/PaxHeaders.127271/soutil.py | ictyangye/ovs-c2ratelimiter | c0e1ada35b3b5f2524fbba6324c9e996e84ac9bc | [
"Apache-2.0"
] | null | null | null | 30 mtime=1527291425.657873726
30 atime=1527291425.777874152
30 ctime=1527291454.661972623
| 22.5 | 29 | 0.866667 | 12 | 90 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.75 | 0.066667 | 90 | 3 | 30 | 30 | 0.178571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9047403f89e05ac27f3e8196446deeefcdb574c1 | 49 | py | Python | qimageview/__main__.py | nevion/pyqimageview | 0f0e2966d2a2a089ec80b5bf777a773443df7f9e | [
"MIT"
] | 9 | 2015-10-30T12:14:45.000Z | 2022-03-11T15:38:55.000Z | qimageview/__main__.py | nevion/pyqimageview | 0f0e2966d2a2a089ec80b5bf777a773443df7f9e | [
"MIT"
] | null | null | null | qimageview/__main__.py | nevion/pyqimageview | 0f0e2966d2a2a089ec80b5bf777a773443df7f9e | [
"MIT"
] | 5 | 2016-05-31T15:09:02.000Z | 2019-07-10T21:47:04.000Z | import qimageview.view
qimageview.viewer.main()
| 12.25 | 24 | 0.816327 | 6 | 49 | 6.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 3 | 25 | 16.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
5f69f718fc986e0536f81ece04347d83b3612ddf | 22 | py | Python | tests/__init__.py | EpicWink/seddy | 5cbd8fc561fa829e14e4e2f10e81154e896ad919 | [
"MIT"
] | null | null | null | tests/__init__.py | EpicWink/seddy | 5cbd8fc561fa829e14e4e2f10e81154e896ad919 | [
"MIT"
] | 23 | 2020-02-18T12:32:21.000Z | 2020-07-23T09:02:03.000Z | tests/__init__.py | EpicWink/seddy | 5cbd8fc561fa829e14e4e2f10e81154e896ad919 | [
"MIT"
] | null | null | null | """Test ``seddy``."""
| 11 | 21 | 0.409091 | 2 | 22 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 1 | 22 | 22 | 0.45 | 0.681818 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5fa00b4f38400ae5dcd9e8f2652a9dc704e0554d | 114 | py | Python | python/6Kyu/Sum of Digits Digital Root.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | python/6Kyu/Sum of Digits Digital Root.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | python/6Kyu/Sum of Digits Digital Root.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | def digital_root(n):
if n < 10:
return n
return digital_root(sum([int(c) for c in list(str(n))])) | 28.5 | 60 | 0.587719 | 21 | 114 | 3.095238 | 0.666667 | 0.338462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.263158 | 114 | 4 | 60 | 28.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
5fa44ee3d46ec2a10e87a9fc6dc3ea6d543167d1 | 174 | py | Python | src/lightextclassification/vocab.py | duoan/light-text-classification | 6c96c9fb6b52abd42e4b4358cb85c44473731668 | [
"MIT"
] | 1 | 2021-03-20T20:59:57.000Z | 2021-03-20T20:59:57.000Z | src/lightextclassification/vocab.py | classtag/light-text-classification | 6c96c9fb6b52abd42e4b4358cb85c44473731668 | [
"MIT"
] | 244 | 2018-11-22T13:37:48.000Z | 2021-07-14T18:40:29.000Z | src/lightextclassification/vocab.py | duoan/light-text-classification | 6c96c9fb6b52abd42e4b4358cb85c44473731668 | [
"MIT"
] | 1 | 2018-11-22T12:03:13.000Z | 2018-11-22T12:03:13.000Z | from torchtext.vocab import Vectors
class LocalVectors(Vectors):
def __init__(self, vector_path, **kwargs):
super(LocalVectors, self).__init__(vector_path, **kwargs) | 24.857143 | 61 | 0.764368 | 21 | 174 | 5.857143 | 0.666667 | 0.162602 | 0.260163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12069 | 174 | 7 | 61 | 24.857143 | 0.803922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
39e16bb137291b16f83138d86714800d886d33a7 | 205 | py | Python | Python_Exercicios/Dobro.py | thalles-dreissig20/Quebra_Cabeca | eeb9458dbabac72d9867e5ec5d7f1aa9b5993d79 | [
"MIT"
] | null | null | null | Python_Exercicios/Dobro.py | thalles-dreissig20/Quebra_Cabeca | eeb9458dbabac72d9867e5ec5d7f1aa9b5993d79 | [
"MIT"
] | 1 | 2021-11-29T18:37:14.000Z | 2021-11-29T18:37:14.000Z | Python_Exercicios/Dobro.py | thalles-dreissig20/Quebra_Cabeca | eeb9458dbabac72d9867e5ec5d7f1aa9b5993d79 | [
"MIT"
] | null | null | null | var = int(input("Digite um valor: "))
print("O dobro desse valor é {}".format(var * 2))
print("O triplo desse valor é {}".format(var * 3))
print("A raiz quadrada desse valor é {:.2f}".format(var ** (1/2))) | 51.25 | 66 | 0.639024 | 36 | 205 | 3.638889 | 0.555556 | 0.229008 | 0.251908 | 0.259542 | 0.305344 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0.146341 | 205 | 4 | 66 | 51.25 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0.495146 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
f2ea1fc8cb88f0226c27324561d0cc93f7141f70 | 4,679 | py | Python | crawlers/crawlers/spiders/kaletou.py | jackliusr/scrapy-crawlers | f3af825bf240c4b53aa9fa61903d89f09386c6a2 | [
"Apache-2.0"
] | 3 | 2015-02-18T21:03:27.000Z | 2019-06-27T13:23:25.000Z | crawlers/crawlers/spiders/kaletou.py | jackliusr/scrapy-crawlers | f3af825bf240c4b53aa9fa61903d89f09386c6a2 | [
"Apache-2.0"
] | 3 | 2021-03-24T11:34:33.000Z | 2021-03-24T11:34:33.000Z | crawlers/crawlers/spiders/kaletou.py | jackliusr/scrapy-crawlers | f3af825bf240c4b53aa9fa61903d89f09386c6a2 | [
"Apache-2.0"
] | null | null | null | import scrapy
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from crawlers.items import GameItem
from scrapy.http import TextResponse,FormRequest,Request
import json
class KaletouSpider(CrawlSpider):
name = 'kaletou'
allowed_domains = ['kaletou.com']
#start_urls = ['http://www.kaletou.com/']
start_urls = ['http://www.kaletou.com/category.php?id=13',
'http://www.kaletou.com/category.php?id=17',
'http://www.kaletou.com/category.php?id=159',
'http://www.kaletou.com/category.php?id=160',
'http://www.kaletou.com/category.php?id=161',
'http://www.kaletou.com/category.php?id=162',
'http://www.kaletou.com/category.php?id=1',
'http://www.kaletou.com/category.php?id=5',
'http://www.kaletou.com/category.php?id=49',
'http://www.kaletou.com/category.php?id=163',
'http://www.kaletou.com/category.php?id=164',
'http://www.kaletou.com/category.php?id=165',
'http://www.kaletou.com/category.php?id=14',
'http://www.kaletou.com/category.php?id=25',
'http://www.kaletou.com/category.php?id=29',
'http://www.kaletou.com/category.php?id=92',
'http://www.kaletou.com/category.php?id=34',
'http://www.kaletou.com/category.php?id=166',
'http://www.kaletou.com/category.php?id=167',
'http://www.kaletou.com/category.php?id=2',
'http://www.kaletou.com/category.php?id=3',
'http://www.kaletou.com/category.php?id=4',
'http://www.kaletou.com/category.php?id=6',
'http://www.kaletou.com/category.php?id=15',
'http://www.kaletou.com/category.php?id=16',
'http://www.kaletou.com/category.php?id=18',
'http://www.kaletou.com/category.php?id=19',
'http://www.kaletou.com/category.php?id=20',
'http://www.kaletou.com/category.php?id=22',
'http://www.kaletou.com/category.php?id=23',
'http://www.kaletou.com/category.php?id=24',
'http://www.kaletou.com/category.php?id=26',
'http://www.kaletou.com/category.php?id=27',
'http://www.kaletou.com/category.php?id=28',
'http://www.kaletou.com/category.php?id=30',
'http://www.kaletou.com/category.php?id=35',
'http://www.kaletou.com/category.php?id=36',
'http://www.kaletou.com/category.php?id=37',
'http://www.kaletou.com/category.php?id=40',
'http://www.kaletou.com/category.php?id=41',
'http://www.kaletou.com/category.php?id=45',
'http://www.kaletou.com/category.php?id=48',
'http://www.kaletou.com/category.php?id=50',
'http://www.kaletou.com/category.php?id=51',
'http://www.kaletou.com/category.php?id=54',
'http://www.kaletou.com/category.php?id=55',
'http://www.kaletou.com/category.php?id=57',
'http://www.kaletou.com/category.php?id=61',
'http://www.kaletou.com/category.php?id=64',
'http://www.kaletou.com/category.php?id=70',
'http://www.kaletou.com/category.php?id=80',
'http://www.kaletou.com/category.php?id=87',
'http://www.kaletou.com/category.php?id=90',
'http://www.kaletou.com/category.php?id=97',
'http://www.kaletou.com/category.php?id=114',
'http://www.kaletou.com/category.php?id=115',
'http://www.kaletou.com/category.php?id=121',
'http://www.kaletou.com/category.php?id=129',
'http://www.kaletou.com/category.php?id=155',
'http://www.kaletou.com/category.php?id=168',
'http://www.kaletou.com/category.php?id=170',
'http://www.kaletou.com/category.php?id=174'
]
rules = (
Rule(LinkExtractor(allow=r'goods\.php\?id='), callback='parse_item', follow=False),
)
def parse_item(self, response):
sel = Selector(response)
i = GameItem()
i['name'] = sel.xpath("//div[@id='content']/div/div[1]/div[1]/div[2]/h1/text()").extract()[0]
i['description'] = ' '.join(sel.xpath("//div[@id='description']/div[2]/div/*").extract())
cntLinks = sel.xpath("//p[@class='breadcrumbs']/a")
if len(cntLinks) >2:
i['category'] = sel.xpath("//p[@class='breadcrumbs']/a[3]/text()").extract()[0]
else:
i['category'] = i['name']
i['price'] = sel.xpath("//form[@id='purchase_form']/ul/li[1]/em/text()").extract()[0]
i['image_urls'] = ["http://www.kaletou.com/" + sel.xpath("//div[@id='gallery']/a/img/@src").extract()[0]]
return i;
| 48.237113 | 112 | 0.593503 | 660 | 4,679 | 4.19697 | 0.201515 | 0.234657 | 0.323466 | 0.39278 | 0.714079 | 0.706498 | 0.687726 | 0.022383 | 0 | 0 | 0 | 0.038848 | 0.19128 | 4,679 | 96 | 113 | 48.739583 | 0.693182 | 0.008549 | 0 | 0 | 0 | 0.011236 | 0.626051 | 0.050248 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011236 | false | 0 | 0.078652 | 0 | 0.157303 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
845609152ea648c197a7a7fad813f37ca8eeddfd | 2,104 | py | Python | tests/test_graph.py | FilipKlaesson/cops | 67d2e5dd4534b3f3eec95b6cfda9d4c9c1746ef0 | [
"BSD-3-Clause"
] | null | null | null | tests/test_graph.py | FilipKlaesson/cops | 67d2e5dd4534b3f3eec95b6cfda9d4c9c1746ef0 | [
"BSD-3-Clause"
] | null | null | null | tests/test_graph.py | FilipKlaesson/cops | 67d2e5dd4534b3f3eec95b6cfda9d4c9c1746ef0 | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
import networkx as nx
from cops.graph import Graph
def test_pre_S():
G = Graph()
connectivity_edges = [0, 1, 2, 3] # directed connectivity path (one way)
transition_edges = [0, 1, 2, 3] # directed transition path (one way)
# Add edges to graph
G.add_transition_path(transition_edges)
G.add_connectivity_path(connectivity_edges)
S_v_t = set([(1, 0), (0, 1)])
np.testing.assert_equal(G.pre_tran_vt(S_v_t), set([(0, 0, 0)]))
np.testing.assert_equal(
G.pre_conn_vt(S_v_t), set([(0, 1, 0), (2, 1, 0), (1, 0, 1)])
)
def test_pre_S2():
G = Graph()
connectivity_edges = [0, 1, 2, 3] # directed connectivity path (one way)
transition_edges = [0, 1, 2, 3] # directed transition path (one way)
# Add edges to graph
G.add_transition_path(transition_edges)
G.add_connectivity_path(connectivity_edges)
S_v_t = set([(1, 0), (2, 1), (0, 2), (1, 2), (3, 2)])
# np.testing.assert_equal(G.pre_tran_vt(S_v_t),
# set([(0,0,0)]))
np.testing.assert_equal(
G.pre_conn_vt(S_v_t),
set([(0, 1, 0), (2, 1, 0), (1, 2, 1), (3, 2, 1), (2, 1, 2), (2, 3, 2)]),
)
np.testing.assert_equal(
G.pre_tran_vt(S_v_t),
set(
[
(3, 2, 0),
(0, 1, 1),
(0, 0, 1),
(1, 0, 1),
(2, 2, 0),
(3, 3, 1),
(1, 1, 1),
]
),
)
def test_pre():
G = Graph()
nx.add_path(G, [0, 1, 2, 3], type="transition")
nx.add_path(G, [3, 2], type="connectivity")
nx.add_path(G, [1, 0], type="connectivity")
np.testing.assert_equal(G.pre_tran([2, 3]), set([1, 2]))
np.testing.assert_equal(G.pre_tran([2]), set([1]))
np.testing.assert_equal(G.post_tran([2, 3]), set([3]))
np.testing.assert_equal(G.post_tran([2]), set([3]))
np.testing.assert_equal(G.pre_conn([2]), set([3]))
np.testing.assert_equal(G.pre_conn([0, 2]), set([1, 3]))
np.testing.assert_equal(G.post_conn([0, 1, 2, 3]), set([2, 0]))
| 28.432432 | 80 | 0.536122 | 347 | 2,104 | 3.057637 | 0.112392 | 0.028275 | 0.169651 | 0.226202 | 0.780396 | 0.769086 | 0.767201 | 0.738926 | 0.62771 | 0.62771 | 0 | 0.076066 | 0.27519 | 2,104 | 73 | 81 | 28.821918 | 0.619672 | 0.126901 | 0 | 0.27451 | 0 | 0 | 0.0186 | 0 | 0 | 0 | 0 | 0 | 0.215686 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
845ee11ee61cd1812f102e16965e8973a7f943ce | 10,271 | py | Python | TheImperialGod/cogs/info/math.py | JakeLion/TheImperialGod | 2473cc42f97e9eb5f2a6c52d299be17fa9d5c1b0 | [
"CC0-1.0"
] | null | null | null | TheImperialGod/cogs/info/math.py | JakeLion/TheImperialGod | 2473cc42f97e9eb5f2a6c52d299be17fa9d5c1b0 | [
"CC0-1.0"
] | null | null | null | TheImperialGod/cogs/info/math.py | JakeLion/TheImperialGod | 2473cc42f97e9eb5f2a6c52d299be17fa9d5c1b0 | [
"CC0-1.0"
] | null | null | null | import discord
from discord.ext import commands
from discord.ext.commands import cooldown, BucketType
import math as m
class Mathematics(commands.Cog):
def __init__(self, client):
self.client = client
@commands.Cog.listener()
async def on_ready(self):
print("Mathematics commands are ready!")
def get_out(_type, num1, num2=None):
try:
num1 = float(num1)
num2 = float(num2) if num2 is not None and _type in ["a", "s", "m", "d"] else None
except ValueError:
return False
if _type == "a":
return num1 + num2
elif _type == "s":
return num1 - num2
elif _type == "m":
return num1 * num2
elif _type == "d":
return num1 / num2
elif _type == "sq":
return num1 * num1
@commands.command()
@cooldown(1, 5, BucketType.user)
async def add(self, ctx, num1, num2):
output = get_out("a", num1, num2)
if output == False:
await ctx.send("Both your input must be numbers! Next time add numbers!")
return
em = discord.Embed(title = "<:success:761297849475399710> Adding Successful", color = ctx.author.color)
em.add_field(name = "Number 1:", value = f"`{num1}`")
em.add_field(name= "Number 2:", value = f"`{num2}`")
em.add_field(name = "Answer", value = f"`{output}`", inline = False)
return await ctx.send(embed = em)
@add.error
async def add_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Adding Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide all 2 numbers to add!")
em.add_field(name = "Usage:", value = f"```\nimp add <num1> <num2>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to middle school? Stop constantly adding!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
@commands.command(aliases=["sub"])
@cooldown(1, 5, BucketType.user)
async def subtract(self, ctx, num1, num2):
output = get_out("s", num1, num2)
if output == False:
await ctx.send("Both your input must be numbers! Next time subtract numbers!")
return
em = discord.Embed(title = "<:success:761297849475399710> Subtracting Successful", color = ctx.author.color)
em.add_field(name = "Number 1:", value = f"`{num1}`")
em.add_field(name= "Number 2:", value = f"`{num2}`")
em.add_field(name = "Answer", value = f"`{output}`", inline = False)
return await ctx.send(embed = em)
@subtract.error
async def subtract_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Subtracting Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide all 2 numbers to subtract!")
em.add_field(name = "Usage:", value = f"```\nimp sub <num1> <num2>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to middle school? Stop constantly subtracting!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
@commands.command(aliases=["mul"])
@cooldown(1, 5, BucketType.user)
async def multiply(self, ctx, num1, num2):
output = get_out("m", num1, num2)
if output == False:
await ctx.send("Both your input must be numbers! Next time multiply numbers!")
return
em = discord.Embed(title = "<:success:761297849475399710> Multiplication Successful", color = ctx.author.color)
em.add_field(name = "Number 1:", value = f"`{num1}`")
em.add_field(name= "Number 2:", value = f"`{num2}`")
em.add_field(name = "Answer", value = f"`{output}`", inline = False)
return await ctx.send(embed = em)
@multiply.error
async def multiply_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Multiplication Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide all 2 numbers to multiply!")
em.add_field(name = "Usage:", value = f"```\nimp mul <num1> <num2>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to middle school? Stop constantly multiplying!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
@commands.command(aliases=["div"])
@cooldown(1, 5, BucketType.user)
async def divide(self, ctx, num1, num2):
output = get_out("d", num1, num2)
if output == False:
await ctx.send("Both your input must be numbers! Next time divide numbers!")
return
em = discord.Embed(title = "<:success:761297849475399710> Division Successful", color = ctx.author.color)
em.add_field(name = "Number 1:", value = f"`{num1}`")
em.add_field(name= "Number 2:", value = f"`{num2}`")
em.add_field(name = "Answer", value = f"`{output}`", inline = False)
return await ctx.send(embed = em)
@divide.error
async def divide_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Division Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide all 2 numbers to divide!")
em.add_field(name = "Usage:", value = f"```\nimp div <num1> <num2>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to middle school? Stop constantly dividing!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
@commands.command(aliases=["sq"])
@cooldown(1, 30, BucketType.user)
async def square(self, ctx, num):
output = get_out("sq", num)
if output == False:
return await ctx.channel.send("Your input has to be an number")
else:
em = discord.Embed(title = "<:success:761297849475399710> Getting Squared Successful", color = ctx.author.color)
em.add_field(name = "Number", value = f"`{num}`")
em.add_field(name = "Answer", value = f"`{output}`", inline = False)
return await ctx.send(embed = em)
@square.error
async def square_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Getting Squared Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide 1 number to square!")
em.add_field(name = "Usage:", value = f"```\nimp sq <num>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to high school? Stop constantly getting a square of a number!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
@commands.command(aliases=["sqrt"])
@cooldown(1, 30, BucketType.user)
async def squareroot(self, ctx, num):
try:
num1 = float(num)
except:
return await ctx.channel.send("Your input has to be a number")
else:
em = discord.Embed(title = f"<:success:761297849475399710> Getting Square Root of {num} is Successful", color = ctx.author.color)
em.add_field(name = "Number", value = f"`{num}`")
em.add_field(name = "Answer", value = f"`{m.sqrt(num)}`", inline = False)
return await ctx.send(embed = em)
@squareroot.error
async def squareroot_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
em = discord.Embed(title = "<:fail:761292267360485378> Getting Square Root Failed", color = ctx.author.color)
em.add_field(name = "Reason:", value = "You didn't provide 1 number to get the square root of!")
em.add_field(name = "Usage:", value = f"```\nimp sqrt <num>\n```")
await ctx.send(embed = em)
if isinstance(error, commands.CommandOnCooldown):
em = discord.Embed(title = "<:fail:761292267360485378> Slow it down C'mon", color = ctx.author.color)
em.add_field(name = "Reason:", value = "Did you go to high school? Stop constantly getting a square root of a number!")
em.add_field(name = "Try again in:", value = "{:.2f}".format(error.retry_after))
await ctx.send(embed = em)
def setup(client):
client.add_cog(Mathematics(client))
| 53.217617 | 142 | 0.594879 | 1,280 | 10,271 | 4.717969 | 0.109375 | 0.033946 | 0.066236 | 0.092731 | 0.830436 | 0.815864 | 0.80212 | 0.752111 | 0.679583 | 0.679583 | 0 | 0.054255 | 0.267842 | 10,271 | 192 | 143 | 53.494792 | 0.748803 | 0 | 0 | 0.444444 | 0 | 0 | 0.251315 | 0.048219 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.023392 | 0 | 0.152047 | 0.005848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ffcf8dd25a7d569e92cc74ffea72e4c653fc35c2 | 1,191 | py | Python | analyst/resources/tokens.py | jobscry/soc-analyst | 95f3736017a3960c5fe1bd3b93e8b89418f5f527 | [
"MIT"
] | null | null | null | analyst/resources/tokens.py | jobscry/soc-analyst | 95f3736017a3960c5fe1bd3b93e8b89418f5f527 | [
"MIT"
] | null | null | null | analyst/resources/tokens.py | jobscry/soc-analyst | 95f3736017a3960c5fe1bd3b93e8b89418f5f527 | [
"MIT"
] | null | null | null | import falcon
from peewee import DoesNotExist
from analyst.models.user import User
from analyst.resources import BaseResource
class TokensResource(BaseResource):
def on_put(self, req: falcon.Request, resp: falcon.Response, username: str):
try:
user = User.get(username=username)
if not req.context["user"].is_admin and req.context["user"].id != user.id:
raise falcon.HTTPForbidden(
"Forbidden", "Insufficient privileges for operation."
)
user.generate_token()
user.save()
resp.media = {"token": user.token}
except DoesNotExist:
raise falcon.HTTPNotFound()
def on_get(self, req: falcon.Request, resp: falcon.Response, username: str):
try:
user = User.get(username=username)
if not req.context["user"].is_admin and req.context["user"].id != user.id:
raise falcon.HTTPForbidden(
"Forbidden", "Insufficient privileges for operation."
)
resp.media = {"token": user.token}
except DoesNotExist:
raise falcon.HTTPNotFound()
| 30.538462 | 86 | 0.594458 | 125 | 1,191 | 5.624 | 0.352 | 0.056899 | 0.079659 | 0.056899 | 0.742532 | 0.742532 | 0.742532 | 0.742532 | 0.742532 | 0.742532 | 0 | 0 | 0.304786 | 1,191 | 38 | 87 | 31.342105 | 0.849034 | 0 | 0 | 0.592593 | 0 | 0 | 0.100756 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.148148 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ffe1936b42acd66811d2520d779044b136158179 | 329 | py | Python | python/light_curve_service/version.py | jchiang87/light_curve_service | c523c72901aba072477ba70bb3d4d8cd9989514a | [
"BSD-3-Clause"
] | null | null | null | python/light_curve_service/version.py | jchiang87/light_curve_service | c523c72901aba072477ba70bb3d4d8cd9989514a | [
"BSD-3-Clause"
] | null | null | null | python/light_curve_service/version.py | jchiang87/light_curve_service | c523c72901aba072477ba70bb3d4d8cd9989514a | [
"BSD-3-Clause"
] | null | null | null | #--------- This file is automatically generated by LSST's sconsUtils ---------#
__version__ = 'unknown'
__repo_version__ = 'unknown'
__repo_version__ = 'unknown'
__fingerprint__ = 'HEAD'
__dependency_versions__ = {
}
__all__ = ('__version__', '__repo_version__', '__repo_version__', '__fingerprint__', '__dependency_versions__')
| 36.555556 | 111 | 0.738602 | 30 | 329 | 6.433333 | 0.566667 | 0.227979 | 0.186529 | 0.259067 | 0.259067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100304 | 329 | 8 | 112 | 41.125 | 0.652027 | 0.234043 | 0 | 0.285714 | 1 | 0 | 0.424 | 0.092 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
081521c3229c855c014ac3d031cfae590a2b279e | 249 | py | Python | home/views.py | Pilgriman/Heroku-app | 4b2f1414ffa681b95423962b89e93b2f367cfcc2 | [
"MIT"
] | 1 | 2020-11-11T21:04:12.000Z | 2020-11-11T21:04:12.000Z | home/views.py | Pilgriman/Heroku-app | 4b2f1414ffa681b95423962b89e93b2f367cfcc2 | [
"MIT"
] | null | null | null | home/views.py | Pilgriman/Heroku-app | 4b2f1414ffa681b95423962b89e93b2f367cfcc2 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.contrib.auth import logout
# Create your views here.
def home(request):
return render(request,'home.html')
def my_logout(request):
logout(request)
return render(request,'home.html')
| 15.5625 | 38 | 0.73494 | 34 | 249 | 5.352941 | 0.529412 | 0.10989 | 0.208791 | 0.285714 | 0.373626 | 0.373626 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164659 | 249 | 15 | 39 | 16.6 | 0.875 | 0.092369 | 0 | 0.285714 | 0 | 0 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0.142857 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f2387c4a574b925d5f43486432b2ad55865290e6 | 224 | py | Python | by-session/ta-921/j1/turtle2.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | 2 | 2015-04-29T20:59:35.000Z | 2018-09-26T13:33:43.000Z | by-session/ta-921/j1/turtle2.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null | by-session/ta-921/j1/turtle2.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null | import turtle
while turtle.heading() < 360:
turtle.left(1)
turtle.forward(5)
if turtle.xcor() > 100:
turtle.right(40)
if turtle.xcor() < -100:
turtle.right(100)
turtle.done()
| 16 | 30 | 0.558036 | 28 | 224 | 4.464286 | 0.535714 | 0.216 | 0.192 | 0.24 | 0.416 | 0.416 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.303571 | 224 | 13 | 31 | 17.230769 | 0.698718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f26193da1e5860edb593a47592509b4e791a4465 | 51 | py | Python | src/satextractor/plugins/__init__.py | oxfordeo/sat-extractor | 1d6841751e8b2ce65a02f5d3d608f181a31ab917 | [
"BSD-2-Clause"
] | 29 | 2021-11-02T16:07:04.000Z | 2022-03-14T00:16:27.000Z | src/satextractor/plugins/__init__.py | oxfordeo/sat-extractor | 1d6841751e8b2ce65a02f5d3d608f181a31ab917 | [
"BSD-2-Clause"
] | 16 | 2021-11-01T16:23:01.000Z | 2022-03-24T11:44:13.000Z | src/satextractor/plugins/__init__.py | oxfordeo/sat-extractor | 1d6841751e8b2ce65a02f5d3d608f181a31ab917 | [
"BSD-2-Clause"
] | 6 | 2021-11-09T01:10:30.000Z | 2022-03-14T18:04:32.000Z | from .gcp_mtl_plugin import copy_mtl_files # noqa
| 25.5 | 50 | 0.823529 | 9 | 51 | 4.222222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 51 | 1 | 51 | 51 | 0.863636 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f275b5bda9fee8b5e2b52c36e8a0881ebccb16bb | 21 | py | Python | dirstat/app/app.py | peter201943/misc-code | f7a50e1ca141a41b6b3e625dd2f20d72f3b288b0 | [
"CC0-1.0"
] | null | null | null | dirstat/app/app.py | peter201943/misc-code | f7a50e1ca141a41b6b3e625dd2f20d72f3b288b0 | [
"CC0-1.0"
] | null | null | null | dirstat/app/app.py | peter201943/misc-code | f7a50e1ca141a41b6b3e625dd2f20d72f3b288b0 | [
"CC0-1.0"
] | null | null | null |
# Yep, Python
| 2.333333 | 13 | 0.428571 | 2 | 21 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.47619 | 21 | 8 | 14 | 2.625 | 0.818182 | 0.52381 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b588ac075d4f5733384fcddc6ce25d506ad20e4 | 205 | py | Python | pybluedot/cli.py | commitmas/santapy | 472aec24aec04130bbf863b93ad58a13d2748c18 | [
"Apache-2.0"
] | null | null | null | pybluedot/cli.py | commitmas/santapy | 472aec24aec04130bbf863b93ad58a13d2748c18 | [
"Apache-2.0"
] | 12 | 2016-11-28T16:07:41.000Z | 2021-02-08T15:57:22.000Z | pybluedot/cli.py | commitmas/santapy | 472aec24aec04130bbf863b93ad58a13d2748c18 | [
"Apache-2.0"
] | 5 | 2016-11-28T03:40:02.000Z | 2016-12-22T17:19:01.000Z | """
cli.py constructs a cli interface of commands, options, and subcommands in argparse, given data structures from commands.py
"""
import argparse
import json
from pybluedot.commands import cmd, subcmd
| 25.625 | 123 | 0.790244 | 29 | 205 | 5.586207 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 205 | 7 | 124 | 29.285714 | 0.925714 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b8c80401879d637a33bbc7fe2c4e1254e94e676 | 49 | py | Python | askanna/core/__init__.py | askanna-io/askanna-python | b6a1dec3b9911888f3d769cef46ce8c2d5cb0dfe | [
"Apache-2.0"
] | 1 | 2021-02-22T15:53:47.000Z | 2021-02-22T15:53:47.000Z | askanna/core/__init__.py | askanna-io/askanna-python | b6a1dec3b9911888f3d769cef46ce8c2d5cb0dfe | [
"Apache-2.0"
] | null | null | null | askanna/core/__init__.py | askanna-io/askanna-python | b6a1dec3b9911888f3d769cef46ce8c2d5cb0dfe | [
"Apache-2.0"
] | null | null | null | from .apiclient import Client
client = Client()
| 12.25 | 29 | 0.755102 | 6 | 49 | 6.166667 | 0.666667 | 0.648649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 49 | 3 | 30 | 16.333333 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4b253ffbe5dd2ed7d4b78b9d03843a9c485887fc | 28 | py | Python | odoo-13.0/venv/lib/python3.8/site-packages/ImageShow.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 2 | 2021-06-20T16:56:45.000Z | 2021-06-20T17:30:18.000Z | odoo-13.0/venv/lib/python3.8/site-packages/ImageShow.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | odoo-13.0/venv/lib/python3.8/site-packages/ImageShow.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | from PIL.ImageShow import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d9c5f661784b37f0405d53a1ce9c0bf553abcb28 | 80 | py | Python | supermario/supermario 1124/show.flies.py | Kimmiryeong/2DGP_GameProject | ad3fb197aab27227fc92fd404b2c310f8d0827ca | [
"MIT"
] | null | null | null | supermario/supermario 1124/show.flies.py | Kimmiryeong/2DGP_GameProject | ad3fb197aab27227fc92fd404b2c310f8d0827ca | [
"MIT"
] | null | null | null | supermario/supermario 1124/show.flies.py | Kimmiryeong/2DGP_GameProject | ad3fb197aab27227fc92fd404b2c310f8d0827ca | [
"MIT"
] | null | null | null | import
os
file_name_list = os.listdir()
for name in file_name_list:
print (name) | 16 | 29 | 0.7875 | 15 | 80 | 3.933333 | 0.6 | 0.271186 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 80 | 5 | 30 | 16 | 0.842857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d9ef6ae97d2b38d2708b0c06f874d0a00d7e93b0 | 259 | py | Python | hardware_usage_notifier/comparators/example_comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | hardware_usage_notifier/comparators/example_comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | hardware_usage_notifier/comparators/example_comparator.py | ovidiupw/HardwareUsageNotifier | b5f600fa66c1ede1a2337c4a39fc6ec8a209dcf5 | [
"MIT"
] | null | null | null | from hardware_usage_notifier.comparators.comparator import Comparator
class ExampleComparator(Comparator):
def __init__(self, reference_value):
super(ExampleComparator, self).__init__(reference_value)
def compare(self, value):
pass
| 25.9 | 69 | 0.764479 | 27 | 259 | 6.888889 | 0.62963 | 0.150538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 259 | 9 | 70 | 28.777778 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
8a3d461befa4fc111e9303ea80855e31d4df771f | 35 | py | Python | rviz_pyplot/python/rviz_pyplot/PlotObject.py | uschwes/rviz_pyplot | 7080a0730bf20aa12b7f533a78a29e4d9e4b49c1 | [
"BSD-3-Clause"
] | 1 | 2017-08-29T10:19:23.000Z | 2017-08-29T10:19:23.000Z | rviz_pyplot/python/rviz_pyplot/PlotObject.py | uschwes/rviz_pyplot | 7080a0730bf20aa12b7f533a78a29e4d9e4b49c1 | [
"BSD-3-Clause"
] | null | null | null | rviz_pyplot/python/rviz_pyplot/PlotObject.py | uschwes/rviz_pyplot | 7080a0730bf20aa12b7f533a78a29e4d9e4b49c1 | [
"BSD-3-Clause"
] | null | null | null | class PlotObject(object):
pass
| 11.666667 | 25 | 0.714286 | 4 | 35 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 2 | 26 | 17.5 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
8a4c3ee15e268141081a73c01430511aecd9f932 | 942 | py | Python | asura/example/python3/abci/msg.py | teragrid/dgrid | 3746bc3dd5ab51af9571720df93f0d69e97f5bbb | [
"Apache-2.0"
] | 1 | 2018-11-06T07:59:07.000Z | 2018-11-06T07:59:07.000Z | asura/example/python3/abci/msg.py | teragrid/dgrid | 3746bc3dd5ab51af9571720df93f0d69e97f5bbb | [
"Apache-2.0"
] | 3 | 2018-10-11T12:03:40.000Z | 2018-10-22T14:09:28.000Z | asura/example/python3/abci/msg.py | teragrid/teragrid | 3746bc3dd5ab51af9571720df93f0d69e97f5bbb | [
"Apache-2.0"
] | null | null | null | from .wire import decode_string
# map type_byte to message name
message_types = {
0x01: "echo",
0x02: "flush",
0x03: "info",
0x04: "set_option",
0x21: "deliver_tx",
0x22: "check_tx",
0x23: "commit",
0x24: "add_listener",
0x25: "rm_listener",
}
# return the decoded arguments of asura messages
class RequestDecoder():
def __init__(self, reader):
self.reader = reader
def echo(self):
return decode_string(self.reader)
def flush(self):
return
def info(self):
return
def set_option(self):
return decode_string(self.reader), decode_string(self.reader)
def deliver_tx(self):
return decode_string(self.reader)
def check_tx(self):
return decode_string(self.reader)
def commit(self):
return
def add_listener(self):
# TODO
return
def rm_listener(self):
# TODO
return
| 18.470588 | 69 | 0.612527 | 115 | 942 | 4.826087 | 0.408696 | 0.126126 | 0.144144 | 0.198198 | 0.299099 | 0.254054 | 0.196396 | 0.133333 | 0 | 0 | 0 | 0.040119 | 0.285563 | 942 | 50 | 70 | 18.84 | 0.784547 | 0.091295 | 0 | 0.242424 | 0 | 0 | 0.082256 | 0 | 0 | 0 | 0.042303 | 0.02 | 0 | 1 | 0.30303 | false | 0 | 0.030303 | 0.272727 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a535a1c112fb5fd458a8419ef54a0106b0d038e | 126 | py | Python | sodre/turbo_memory/cli.py | sodre/turbo-memory | 9a71627bd9fed99722039c155117e899e65aea63 | [
"BSD-3-Clause"
] | null | null | null | sodre/turbo_memory/cli.py | sodre/turbo-memory | 9a71627bd9fed99722039c155117e899e65aea63 | [
"BSD-3-Clause"
] | 2 | 2020-01-02T17:18:16.000Z | 2020-01-05T22:13:04.000Z | sodre/turbo_memory/cli.py | sodre/turbo-memory | 9a71627bd9fed99722039c155117e899e65aea63 | [
"BSD-3-Clause"
] | null | null | null | """Console script for turbo-memory."""
import sys
if __name__ == "__main__":
sys.exit(turbo_memory) # pragma: no cover | 18 | 46 | 0.68254 | 17 | 126 | 4.529412 | 0.823529 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174603 | 126 | 7 | 46 | 18 | 0.740385 | 0.396825 | 0 | 0 | 0 | 0 | 0.112676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8a7eb2c8be3799a1791e3b1d61212f1e5e3aea75 | 13,653 | py | Python | watertap/unit_models/zero_order/tests/test_air_flotation_zo.py | srikanthallu/watertap | 6ad5552b91163917fb19342754b9b57b3d9cbd85 | [
"BSD-3-Clause-LBNL"
] | 1 | 2022-01-14T19:28:27.000Z | 2022-01-14T19:28:27.000Z | watertap/unit_models/zero_order/tests/test_air_flotation_zo.py | srikanthallu/watertap | 6ad5552b91163917fb19342754b9b57b3d9cbd85 | [
"BSD-3-Clause-LBNL"
] | null | null | null | watertap/unit_models/zero_order/tests/test_air_flotation_zo.py | srikanthallu/watertap | 6ad5552b91163917fb19342754b9b57b3d9cbd85 | [
"BSD-3-Clause-LBNL"
] | null | null | null | ###############################################################################
# WaterTAP Copyright (c) 2021, The Regents of the University of California,
# through Lawrence Berkeley National Laboratory, Oak Ridge National
# Laboratory, National Renewable Energy Laboratory, and National Energy
# Technology Laboratory (subject to receipt of any required approvals from
# the U.S. Dept. of Energy). All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and license
# information, respectively. These files are also available online at the URL
# "https://github.com/watertap-org/watertap/"
#
###############################################################################
"""
Tests for zero-order air flotation model
"""
import pytest
from io import StringIO
from pyomo.environ import (
Block,
ConcreteModel,
Constraint,
value,
Var,
assert_optimal_termination,
)
from pyomo.util.check_units import assert_units_consistent
from idaes.core import FlowsheetBlock
from idaes.core.util import get_solver
from idaes.core.util.model_statistics import degrees_of_freedom
from idaes.core.util.testing import initialization_tester
from idaes.generic_models.costing import UnitModelCostingBlock
from watertap.unit_models.zero_order import AirFlotationZO
from watertap.core.wt_database import Database
from watertap.core.zero_order_properties import WaterParameterBlock
from watertap.core.zero_order_costing import ZeroOrderCosting
solver = get_solver()
class TestAirFloatationZO:
@pytest.fixture(scope="class")
def model(self):
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(default={"solute_list": ["tss"]})
m.fs.unit = AirFlotationZO(
default={"property_package": m.fs.params, "database": m.db}
)
m.fs.unit.inlet.flow_mass_comp[0, "H2O"].fix(10)
m.fs.unit.inlet.flow_mass_comp[0, "tss"].fix(1)
return m
@pytest.mark.unit
def test_build(self, model):
assert model.fs.unit.config.database == model.db
assert isinstance(model.fs.unit.electricity, Var)
assert isinstance(model.fs.unit.energy_electric_flow_vol_inlet, Var)
assert isinstance(model.fs.unit.electricity_consumption, Constraint)
@pytest.mark.component
def test_load_parameters(self, model):
data = model.db.get_unit_operation_parameters("air_flotation")
model.fs.unit.load_parameters_from_database()
assert model.fs.unit.recovery_frac_mass_H2O[0].fixed
assert (
model.fs.unit.recovery_frac_mass_H2O[0].value
== data["recovery_frac_mass_H2O"]["value"]
)
for (t, j), v in model.fs.unit.removal_frac_mass_solute.items():
assert v.fixed
assert v.value == data["removal_frac_mass_solute"][j]["value"]
assert model.fs.unit.energy_electric_flow_vol_inlet.fixed
assert (
model.fs.unit.energy_electric_flow_vol_inlet.value
== data["energy_electric_flow_vol_inlet"]["value"]
)
@pytest.mark.component
def test_degrees_of_freedom(self, model):
assert degrees_of_freedom(model.fs.unit) == 0
@pytest.mark.component
def test_unit_consistency(self, model):
assert_units_consistent(model.fs.unit)
@pytest.mark.component
def test_initialize(self, model):
initialization_tester(model)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solve(self, model):
results = solver.solve(model)
# Check for optimal solution
assert_optimal_termination(results)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solution(self, model):
assert pytest.approx(1.1e-2, rel=1e-5) == value(
model.fs.unit.properties_in[0].flow_vol
)
assert pytest.approx(90.9091, rel=1e-5) == value(
model.fs.unit.properties_in[0].conc_mass_comp["tss"]
)
assert pytest.approx(0.010099, rel=1e-5) == value(
model.fs.unit.properties_treated[0].flow_vol
)
assert pytest.approx(9.90197, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["tss"]
)
assert pytest.approx(9.01e-4, rel=1e-5) == value(
model.fs.unit.properties_byproduct[0].flow_vol
)
assert pytest.approx(998.890, rel=1e-5) == value(
model.fs.unit.properties_byproduct[0].conc_mass_comp["tss"]
)
assert pytest.approx(11.88, abs=1e-5) == value(model.fs.unit.electricity[0])
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_conservation(self, model):
for j in model.fs.params.component_list:
assert 1e-6 >= abs(
value(
model.fs.unit.inlet.flow_mass_comp[0, j]
- model.fs.unit.treated.flow_mass_comp[0, j]
- model.fs.unit.byproduct.flow_mass_comp[0, j]
)
)
@pytest.mark.component
def test_report(self, model):
stream = StringIO()
model.fs.unit.report(ostream=stream)
output = """
====================================================================================
Unit : fs.unit Time: 0.0
------------------------------------------------------------------------------------
Unit Performance
Variables:
Key : Value : Fixed : Bounds
Electricity Demand : 11.880 : False : (0, None)
Electricity Intensity : 0.30000 : True : (None, None)
Solute Removal [tss] : 0.90000 : True : (0, None)
Water Recovery : 0.99990 : True : (1e-08, 1.0000001)
------------------------------------------------------------------------------------
Stream Table
Inlet Treated Byproduct
Volumetric Flowrate 0.011000 0.010099 0.00090100
Mass Concentration H2O 909.09 990.10 1.1099
Mass Concentration tss 90.909 9.9020 998.89
====================================================================================
"""
assert output in stream.getvalue()
class TestAirFlotationZO_w_default_removal:
@pytest.fixture(scope="class")
def model(self):
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(default={"solute_list": ["tss", "foo"]})
m.fs.unit = AirFlotationZO(
default={"property_package": m.fs.params, "database": m.db}
)
m.fs.unit.inlet.flow_mass_comp[0, "H2O"].fix(10)
m.fs.unit.inlet.flow_mass_comp[0, "tss"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "foo"].fix(1)
return m
@pytest.mark.unit
def test_build(self, model):
assert model.fs.unit.config.database == model.db
assert isinstance(model.fs.unit.electricity, Var)
assert isinstance(model.fs.unit.energy_electric_flow_vol_inlet, Var)
assert isinstance(model.fs.unit.electricity_consumption, Constraint)
@pytest.mark.component
def test_load_parameters(self, model):
data = model.db.get_unit_operation_parameters("air_flotation")
model.fs.unit.load_parameters_from_database(use_default_removal=True)
assert model.fs.unit.recovery_frac_mass_H2O[0].fixed
assert (
model.fs.unit.recovery_frac_mass_H2O[0].value
== data["recovery_frac_mass_H2O"]["value"]
)
for (t, j), v in model.fs.unit.removal_frac_mass_solute.items():
assert v.fixed
if j == "foo":
assert v.value == data["default_removal_frac_mass_solute"]["value"]
else:
assert v.value == data["removal_frac_mass_solute"][j]["value"]
assert model.fs.unit.energy_electric_flow_vol_inlet.fixed
assert (
model.fs.unit.energy_electric_flow_vol_inlet.value
== data["energy_electric_flow_vol_inlet"]["value"]
)
@pytest.mark.component
def test_degrees_of_freedom(self, model):
assert degrees_of_freedom(model.fs.unit) == 0
@pytest.mark.component
def test_unit_consistency(self, model):
assert_units_consistent(model.fs.unit)
@pytest.mark.component
def test_initialize(self, model):
initialization_tester(model)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solve(self, model):
results = solver.solve(model)
# Check for optimal solution
assert_optimal_termination(results)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solution(self, model):
assert pytest.approx(1.2e-2, rel=1e-5) == value(
model.fs.unit.properties_in[0].flow_vol
)
assert pytest.approx(83.3333, rel=1e-5) == value(
model.fs.unit.properties_in[0].conc_mass_comp["tss"]
)
assert pytest.approx(83.3333, rel=1e-5) == value(
model.fs.unit.properties_in[0].conc_mass_comp["foo"]
)
assert pytest.approx(0.011099, rel=1e-5) == value(
model.fs.unit.properties_treated[0].flow_vol
)
assert pytest.approx(9.00982, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["tss"]
)
assert pytest.approx(90.0982, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["foo"]
)
assert pytest.approx(9.01e-4, rel=1e-5) == value(
model.fs.unit.properties_byproduct[0].flow_vol
)
assert pytest.approx(998.890, rel=1e-5) == value(
model.fs.unit.properties_byproduct[0].conc_mass_comp["tss"]
)
assert pytest.approx(8.8790e-7, rel=1e-5) == value(
model.fs.unit.properties_byproduct[0].conc_mass_comp["foo"]
)
assert pytest.approx(12.96, abs=1e-5) == value(model.fs.unit.electricity[0])
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_conservation(self, model):
for j in model.fs.params.component_list:
assert 1e-6 >= abs(
value(
model.fs.unit.inlet.flow_mass_comp[0, j]
- model.fs.unit.treated.flow_mass_comp[0, j]
- model.fs.unit.byproduct.flow_mass_comp[0, j]
)
)
@pytest.mark.component
def test_report(self, model):
stream = StringIO()
model.fs.unit.report(ostream=stream)
output = """
====================================================================================
Unit : fs.unit Time: 0.0
------------------------------------------------------------------------------------
Unit Performance
Variables:
Key : Value : Fixed : Bounds
Electricity Demand : 12.960 : False : (0, None)
Electricity Intensity : 0.30000 : True : (None, None)
Solute Removal [foo] : 0.0000 : True : (0, None)
Solute Removal [tss] : 0.90000 : True : (0, None)
Water Recovery : 0.99990 : True : (1e-08, 1.0000001)
------------------------------------------------------------------------------------
Stream Table
Inlet Treated Byproduct
Volumetric Flowrate 0.012000 0.011099 0.00090100
Mass Concentration H2O 833.33 900.89 1.1099
Mass Concentration tss 83.333 9.0098 998.89
Mass Concentration foo 83.333 90.098 8.8790e-07
====================================================================================
"""
assert output in stream.getvalue()
def test_costing():
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(default={"solute_list": ["sulfur", "toc", "tss"]})
m.fs.costing = ZeroOrderCosting()
m.fs.unit1 = AirFlotationZO(
default={"property_package": m.fs.params, "database": m.db}
)
m.fs.unit1.inlet.flow_mass_comp[0, "H2O"].fix(10000)
m.fs.unit1.inlet.flow_mass_comp[0, "sulfur"].fix(1)
m.fs.unit1.inlet.flow_mass_comp[0, "toc"].fix(2)
m.fs.unit1.inlet.flow_mass_comp[0, "tss"].fix(3)
m.fs.unit1.load_parameters_from_database(use_default_removal=True)
assert degrees_of_freedom(m.fs.unit1) == 0
m.fs.unit1.costing = UnitModelCostingBlock(
default={"flowsheet_costing_block": m.fs.costing}
)
assert isinstance(m.fs.costing.air_flotation, Block)
assert isinstance(m.fs.costing.air_flotation.capital_a_parameter, Var)
assert isinstance(m.fs.costing.air_flotation.capital_b_parameter, Var)
assert isinstance(m.fs.costing.air_flotation.reference_state, Var)
assert isinstance(m.fs.unit1.costing.capital_cost, Var)
assert isinstance(m.fs.unit1.costing.capital_cost_constraint, Constraint)
assert_units_consistent(m.fs)
assert degrees_of_freedom(m.fs.unit1) == 0
assert m.fs.unit1.electricity[0] in m.fs.costing._registered_flows["electricity"]
| 36.800539 | 88 | 0.603311 | 1,674 | 13,653 | 4.770609 | 0.152927 | 0.043576 | 0.067493 | 0.038067 | 0.794641 | 0.767844 | 0.767844 | 0.761583 | 0.726647 | 0.693589 | 0 | 0.039465 | 0.227935 | 13,653 | 370 | 89 | 36.9 | 0.718148 | 0.045851 | 0 | 0.64311 | 0 | 0 | 0.211673 | 0.068405 | 0 | 0 | 0 | 0 | 0.212014 | 1 | 0.074205 | false | 0 | 0.045936 | 0 | 0.134276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a87965dd427eacbc7e46b5fa29ffbb8e4a07d81 | 2,026 | py | Python | cogs/moderation.py | xSOGIx/Seina | 5972992e11b7ce15513b1cfcd430e3a28b2a49e1 | [
"MIT"
] | 1 | 2021-11-10T10:41:51.000Z | 2021-11-10T10:41:51.000Z | cogs/moderation.py | xSOGIx/Seina | 5972992e11b7ce15513b1cfcd430e3a28b2a49e1 | [
"MIT"
] | null | null | null | cogs/moderation.py | xSOGIx/Seina | 5972992e11b7ce15513b1cfcd430e3a28b2a49e1 | [
"MIT"
] | null | null | null | import discord
import os
import datetime
import asyncio
from discord.ext import commands
class Moderation(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
@commands.has_permissions(kick_members=True)
@commands.cooldown(1, 5, commands.BucketType.guild)
async def kick(self, ctx, member: discord.Member, reason="No Reason"):
if member == None:
embed = discord.Embed(f"{ctx.message.author}, Please enter a valid user!")
await ctx.reply(embed=embed)
else:
guild = ctx.guild
embed = discord.Embed(title="Kicked!", description=f"{member.mention} has been kicked!!", colour=0xce2167, timestamp=datetime.datetime.utcnow())
embed.add_field(name="Reason: ", value=reason, inline=False)
await ctx.reply(embed=embed)
await guild.kick(user=member)
@commands.command()
@commands.has_permissions(kick_members=True)
@commands.cooldown(1, 5, commands.BucketType.guild)
async def ban(self, ctx, member: discord.Member, reason="No Reason"):
if member == None:
embed = discord.Embed(f"{ctx.message.author}, Please enter a valid user!")
await ctx.reply(embed=embed)
else:
guild = ctx.guild
embed = discord.Embed(title="Banned!", description=f"{member.mention} has been banned!", colour=0xce2167, timestamp=datetime.datetime.utcnow())
embed.add_field(name="Reason: ", value=reason, inline=False)
await ctx.reply(embed=embed)
await guild.ban(user=member)
@commands.command()
@commands.has_permissions(kick_members=True)
@commands.cooldown(1, 5, commands.BucketType.guild)
async def unban(self, ctx, user: discord.User):
if user == None:
embed = discord.Embed(f"{ctx.message.author}, Please enter a valid user!")
await ctx.reply(embed=embed)
else:
guild = ctx.guild
embed = discord.Embed(title="Unbanned!", description=f"{user.display_name} has been unbanned!", colour=0xce2167, timestamp=datetime.datetime.utcnow())
await ctx.reply(embed=embed)
await guild.unban(user=user)
def setup(bot):
bot.add_cog(Moderation(bot)) | 31.169231 | 153 | 0.729023 | 282 | 2,026 | 5.187943 | 0.234043 | 0.049214 | 0.06972 | 0.073821 | 0.799727 | 0.799727 | 0.725222 | 0.702666 | 0.702666 | 0.702666 | 0 | 0.011925 | 0.1308 | 2,026 | 65 | 154 | 31.169231 | 0.818853 | 0 | 0 | 0.583333 | 0 | 0 | 0.150962 | 0.03108 | 0 | 0 | 0.01184 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.104167 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a998b7b6c1867c145bbb0fde6d4cd7652afe054 | 258 | py | Python | cms/templatetags/limitador.py | eliasfernandez/django-simplecms | fd2d4b0c53a1262a7253e2428e5bd8a4e8074ce4 | [
"BSD-2-Clause"
] | null | null | null | cms/templatetags/limitador.py | eliasfernandez/django-simplecms | fd2d4b0c53a1262a7253e2428e5bd8a4e8074ce4 | [
"BSD-2-Clause"
] | null | null | null | cms/templatetags/limitador.py | eliasfernandez/django-simplecms | fd2d4b0c53a1262a7253e2428e5bd8a4e8074ce4 | [
"BSD-2-Clause"
] | null | null | null | from django.template import Library
register = Library()
def limita_caracteres(cadena, num_caracteres):
# import pdb;pdb.set_trace()
return cadena[:num_caracteres] + "..." if cadena.__len__() >num_caracteres else cadena
register.filter(limita_caracteres) | 28.666667 | 87 | 0.786822 | 33 | 258 | 5.848485 | 0.575758 | 0.202073 | 0.196891 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100775 | 258 | 9 | 88 | 28.666667 | 0.831897 | 0.100775 | 0 | 0 | 0 | 0 | 0.012987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8aa9d45c0bc2a97f273b035575d429684a0ce13b | 28 | py | Python | src/scheduler.py | Landowon/Atomic | 85ab70e36552d74dfa6e1269e9b4da53dfbba6df | [
"Apache-2.0"
] | null | null | null | src/scheduler.py | Landowon/Atomic | 85ab70e36552d74dfa6e1269e9b4da53dfbba6df | [
"Apache-2.0"
] | null | null | null | src/scheduler.py | Landowon/Atomic | 85ab70e36552d74dfa6e1269e9b4da53dfbba6df | [
"Apache-2.0"
] | null | null | null | #schedule nodes and subnodes | 28 | 28 | 0.857143 | 4 | 28 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0.964286 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8ab6a873eea8aedb21d5f7ada918194665bc5b9f | 118 | py | Python | .history/py/main_20201230171126.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null | .history/py/main_20201230171126.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null | .history/py/main_20201230171126.py | minefarmer/Comprehensive-Python | f97b9b83ec328fc4e4815607e6a65de90bb8de66 | [
"Unlicense"
] | null | null | null |
>>> fruits = ["grapes", "berries"]
my_fruits = ["grapes", "berries"]
fav_fruits = fruits
print(fruits is fav_fruits) | 19.666667 | 34 | 0.677966 | 15 | 118 | 5.133333 | 0.466667 | 0.311688 | 0.493506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 118 | 6 | 35 | 19.666667 | 0.754902 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8ac5f2b6e1d8d99c3465bd346091240da050103e | 62 | py | Python | vmprof_viewer_client/__init__.py | blue-yonder/vmprof-viewer-client | 6b7219e04a1e224bcb46f486b1acc58817af3b33 | [
"MIT"
] | 4 | 2018-09-06T23:36:56.000Z | 2020-01-06T12:07:03.000Z | vmprof_viewer_client/__init__.py | blue-yonder/vmprof-viewer-client | 6b7219e04a1e224bcb46f486b1acc58817af3b33 | [
"MIT"
] | null | null | null | vmprof_viewer_client/__init__.py | blue-yonder/vmprof-viewer-client | 6b7219e04a1e224bcb46f486b1acc58817af3b33 | [
"MIT"
] | 1 | 2020-11-06T09:31:52.000Z | 2020-11-06T09:31:52.000Z | from vmprof_viewer_client.decorator import configure, profile
| 31 | 61 | 0.887097 | 8 | 62 | 6.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 62 | 1 | 62 | 62 | 0.929825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0a099105f86b80d3de1c1a865ad1b00efde476e1 | 14,465 | py | Python | show/muxcable.py | bratashX/sonic-utilities | ff226d0a9cd7829151b401573fb1e391cb10176b | [
"Apache-2.0"
] | null | null | null | show/muxcable.py | bratashX/sonic-utilities | ff226d0a9cd7829151b401573fb1e391cb10176b | [
"Apache-2.0"
] | 4 | 2020-04-17T06:53:05.000Z | 2020-12-01T02:37:34.000Z | show/muxcable.py | bratashX/sonic-utilities | ff226d0a9cd7829151b401573fb1e391cb10176b | [
"Apache-2.0"
] | null | null | null | import json
import sys
import click
import utilities_common.cli as clicommon
from sonic_py_common import multi_asic
from swsscommon import swsscommon
from swsssdk import ConfigDBConnector
from tabulate import tabulate
from utilities_common import platform_sfputil_helper
platform_sfputil = None
REDIS_TIMEOUT_MSECS = 0
CONFIG_SUCCESSFUL = 101
CONFIG_FAIL = 1
STATUS_FAIL = 1
STATUS_SUCCESSFUL = 102
#
# 'muxcable' command ("show muxcable")
#
@click.group(name='muxcable', cls=clicommon.AliasedGroup)
def muxcable():
"""SONiC command line - 'show muxcable' command"""
global platform_sfputil
# Load platform-specific sfputil class
platform_sfputil_helper.load_platform_sfputil()
# Load port info
platform_sfputil_helper.platform_sfputil_read_porttab_mappings()
platform_sfputil = platform_sfputil_helper.platform_sfputil
def get_value_for_key_in_dict(mdict, port, key, table_name):
value = mdict.get(key, None)
if value is None:
click.echo("could not retrieve key {} value for port {} inside table {}".format(key, port, table_name))
sys.exit(STATUS_FAIL)
return value
def get_value_for_key_in_config_tbl(config_db, port, key, table):
info_dict = {}
info_dict = config_db.get_entry(table, port)
if info_dict is None:
click.echo("could not retrieve key {} value for port {} inside table {}".format(key, port, table))
sys.exit(STATUS_FAIL)
value = get_value_for_key_in_dict(info_dict, port, key, table)
return value
def get_switch_name(config_db):
info_dict = {}
info_dict = config_db.get_entry("DEVICE_METADATA", "localhost")
#click.echo("{} ".format(info_dict))
switch_name = get_value_for_key_in_dict(info_dict, "localhost", "peer_switch", "DEVICE_METADATA")
if switch_name is not None:
return switch_name
else:
click.echo("could not retreive switch name")
sys.exit(STATUS_FAIL)
def create_json_dump_per_port_status(port_status_dict, muxcable_info_dict, asic_index, port):
status_value = get_value_for_key_in_dict(muxcable_info_dict[asic_index], port, "state", "MUX_CABLE_TABLE")
port_status_dict["MUX_CABLE"][port] = {}
port_status_dict["MUX_CABLE"][port]["STATUS"] = status_value
# TODO : Fix the health status of the port
port_status_dict["MUX_CABLE"][port]["HEALTH"] = "HEALTHY"
def create_table_dump_per_port_status(print_data, muxcable_info_dict, asic_index, port):
print_port_data = []
status_value = get_value_for_key_in_dict(muxcable_info_dict[asic_index], port, "state", "MUX_CABLE_TABLE")
#status_value = get_value_for_key_in_tbl(y_cable_asic_table, port, "status")
print_port_data.append(port)
print_port_data.append(status_value)
print_port_data.append("HEALTHY")
print_data.append(print_port_data)
def create_table_dump_per_port_config(print_data, per_npu_configdb, asic_id, port):
port_list = []
port_list.append(port)
state_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "state", "MUX_CABLE")
port_list.append(state_value)
ipv4_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "server_ipv4", "MUX_CABLE")
port_list.append(ipv4_value)
ipv6_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "server_ipv6", "MUX_CABLE")
port_list.append(ipv6_value)
print_data.append(port_list)
def create_json_dump_per_port_config(port_status_dict, per_npu_configdb, asic_id, port):
state_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "state", "MUX_CABLE")
port_status_dict["MUX_CABLE"]["PORTS"][port] = {"STATE": state_value}
port_status_dict["MUX_CABLE"]["PORTS"][port]["SERVER"] = {}
ipv4_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "server_ipv4", "MUX_CABLE")
port_status_dict["MUX_CABLE"]["PORTS"][port]["SERVER"]["IPv4"] = ipv4_value
ipv6_value = get_value_for_key_in_config_tbl(per_npu_configdb[asic_id], port, "server_ipv6", "MUX_CABLE")
port_status_dict["MUX_CABLE"]["PORTS"][port]["SERVER"]["IPv6"] = ipv6_value
@muxcable.command()
@click.argument('port', required=False, default=None)
@click.option('--json', 'json_output', required=False, is_flag=True, type=click.BOOL, help="display the output in json format")
def status(port, json_output):
"""Show muxcable status information"""
port_table_keys = {}
per_npu_statedb = {}
muxcable_info_dict = {}
# Getting all front asic namespace and correspding config and state DB connector
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
per_npu_statedb[asic_id] = swsscommon.SonicV2Connector(use_unix_socket_path=False, namespace=namespace)
per_npu_statedb[asic_id].connect(per_npu_statedb[asic_id].STATE_DB)
port_table_keys[asic_id] = per_npu_statedb[asic_id].keys(
per_npu_statedb[asic_id].STATE_DB, 'MUX_CABLE_TABLE|*')
if port is not None:
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
sys.exit(STATUS_FAIL)
muxcable_info_dict[asic_index] = per_npu_statedb[asic_index].get_all(
per_npu_statedb[asic_index].STATE_DB, 'MUX_CABLE_TABLE|{}'.format(port))
if muxcable_info_dict[asic_index] is not None:
logical_key = "MUX_CABLE_TABLE"+"|"+port
if logical_key in port_table_keys[asic_index]:
if json_output:
port_status_dict = {}
port_status_dict["MUX_CABLE"] = {}
create_json_dump_per_port_status(port_status_dict, muxcable_info_dict, asic_index, port)
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
sys.exit(STATUS_SUCCESSFUL)
else:
print_data = []
create_table_dump_per_port_status(print_data, muxcable_info_dict, asic_index, port)
headers = ['PORT', 'STATUS', 'HEALTH']
click.echo(tabulate(print_data, headers=headers))
sys.exit(STATUS_SUCCESSFUL)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(STATUS_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(STATUS_FAIL)
else:
if json_output:
port_status_dict = {}
port_status_dict["MUX_CABLE"] = {}
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
port = key.split("|")[1]
muxcable_info_dict[asic_id] = per_npu_statedb[asic_id].get_all(
per_npu_statedb[asic_id].STATE_DB, 'MUX_CABLE_TABLE|{}'.format(port))
create_json_dump_per_port_status(port_status_dict, muxcable_info_dict, asic_id, port)
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
else:
print_data = []
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
port = key.split("|")[1]
muxcable_info_dict[asic_id] = per_npu_statedb[asic_id].get_all(
per_npu_statedb[asic_id].STATE_DB, 'MUX_CABLE_TABLE|{}'.format(port))
create_table_dump_per_port_status(print_data, muxcable_info_dict, asic_id, port)
headers = ['PORT', 'STATUS', 'HEALTH']
click.echo(tabulate(print_data, headers=headers))
sys.exit(STATUS_SUCCESSFUL)
@muxcable.command()
@click.argument('port', required=False, default=None)
@click.option('--json', 'json_output', required=False, is_flag=True, type=click.BOOL, help="display the output in json format")
def config(port, json_output):
"""Show muxcable config information"""
port_mux_tbl_keys = {}
asic_start_idx = None
per_npu_configdb = {}
mux_tbl_cfg_db = {}
peer_switch_tbl_cfg_db = {}
# Getting all front asic namespace and correspding config and state DB connector
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
if asic_start_idx is None:
asic_start_idx = asic_id
# TO-DO replace the macros with correct swsscommon names
#config_db[asic_id] = swsscommon.DBConnector("CONFIG_DB", REDIS_TIMEOUT_MSECS, True, namespace)
#mux_tbl_cfg_db[asic_id] = swsscommon.Table(config_db[asic_id], swsscommon.CFG_MUX_CABLE_TABLE_NAME)
per_npu_configdb[asic_id] = ConfigDBConnector(use_unix_socket_path=False, namespace=namespace)
per_npu_configdb[asic_id].connect()
mux_tbl_cfg_db[asic_id] = per_npu_configdb[asic_id].get_table("MUX_CABLE")
peer_switch_tbl_cfg_db[asic_id] = per_npu_configdb[asic_id].get_table("PEER_SWITCH")
#peer_switch_tbl_cfg_db[asic_id] = swsscommon.Table(config_db[asic_id], swsscommon.CFG_PEER_SWITCH_TABLE_NAME)
port_mux_tbl_keys[asic_id] = mux_tbl_cfg_db[asic_id].keys()
if port is not None:
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
sys.exit(CONFIG_FAIL)
port_status_dict = {}
port_status_dict["MUX_CABLE"] = {}
port_status_dict["MUX_CABLE"]["PEER_TOR"] = {}
peer_switch_value = None
switch_name = get_switch_name(per_npu_configdb[asic_start_idx])
if asic_start_idx is not None:
peer_switch_value = get_value_for_key_in_config_tbl(
per_npu_configdb[asic_start_idx], switch_name, "address_ipv4", "PEER_SWITCH")
port_status_dict["MUX_CABLE"]["PEER_TOR"] = peer_switch_value
if port_mux_tbl_keys[asic_id] is not None:
if port in port_mux_tbl_keys[asic_id]:
if json_output:
port_status_dict["MUX_CABLE"] = {}
port_status_dict["MUX_CABLE"]["PORTS"] = {}
create_json_dump_per_port_config(port_status_dict, per_npu_configdb, asic_id, port)
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
sys.exit(CONFIG_SUCCESSFUL)
else:
print_data = []
print_peer_tor = []
create_table_dump_per_port_config(print_data, per_npu_configdb, asic_id, port)
headers = ['SWITCH_NAME', 'PEER_TOR']
peer_tor_data = []
peer_tor_data.append(switch_name)
peer_tor_data.append(peer_switch_value)
print_peer_tor.append(peer_tor_data)
click.echo(tabulate(print_peer_tor, headers=headers))
headers = ['port', 'state', 'ipv4', 'ipv6']
click.echo(tabulate(print_data, headers=headers))
sys.exit(CONFIG_SUCCESSFUL)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(CONFIG_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(CONFIG_FAIL)
else:
port_status_dict = {}
port_status_dict["MUX_CABLE"] = {}
port_status_dict["MUX_CABLE"]["PEER_TOR"] = {}
peer_switch_value = None
switch_name = get_switch_name(per_npu_configdb[asic_start_idx])
if asic_start_idx is not None:
peer_switch_value = get_value_for_key_in_config_tbl(
per_npu_configdb[asic_start_idx], switch_name, "address_ipv4", "PEER_SWITCH")
port_status_dict["MUX_CABLE"]["PEER_TOR"] = peer_switch_value
if json_output:
port_status_dict["MUX_CABLE"]["PORTS"] = {}
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for port in port_mux_tbl_keys[asic_id]:
create_json_dump_per_port_config(port_status_dict, per_npu_configdb, asic_id, port)
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
else:
print_data = []
print_peer_tor = []
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for port in port_mux_tbl_keys[asic_id]:
create_table_dump_per_port_config(print_data, per_npu_configdb, asic_id, port)
headers = ['SWITCH_NAME', 'PEER_TOR']
peer_tor_data = []
peer_tor_data.append(switch_name)
peer_tor_data.append(peer_switch_value)
print_peer_tor.append(peer_tor_data)
click.echo(tabulate(print_peer_tor, headers=headers))
headers = ['port', 'state', 'ipv4', 'ipv6']
click.echo(tabulate(print_data, headers=headers))
sys.exit(CONFIG_SUCCESSFUL)
| 42.669617 | 127 | 0.671275 | 1,954 | 14,465 | 4.560389 | 0.08956 | 0.037706 | 0.050275 | 0.0404 | 0.821008 | 0.764897 | 0.737515 | 0.708787 | 0.686904 | 0.668948 | 0 | 0.003252 | 0.234704 | 14,465 | 338 | 128 | 42.795858 | 0.801716 | 0.075354 | 0 | 0.62963 | 0 | 0 | 0.09976 | 0 | 0.004115 | 0 | 0 | 0.002959 | 0 | 1 | 0.041152 | false | 0 | 0.045267 | 0 | 0.098765 | 0.106996 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a206e6a4bc28eef7aefaa430364aa68a1f52205 | 3,157 | py | Python | guillotina/tests/cache/test_cache_store.py | Qiwn/guillotina | 1ed545497424d23de63984961cab5243f4a200ec | [
"BSD-2-Clause"
] | null | null | null | guillotina/tests/cache/test_cache_store.py | Qiwn/guillotina | 1ed545497424d23de63984961cab5243f4a200ec | [
"BSD-2-Clause"
] | null | null | null | guillotina/tests/cache/test_cache_store.py | Qiwn/guillotina | 1ed545497424d23de63984961cab5243f4a200ec | [
"BSD-2-Clause"
] | null | null | null | from guillotina.component import get_utility
from guillotina.tests import mocks
from guillotina.contrib.cache.strategy import BasicCache
from guillotina.contrib.cache import CACHE_PREFIX
from guillotina.contrib.cache import serialize
from guillotina.interfaces import ICacheUtility
from guillotina.utils import resolve_dotted_name
import pytest
DEFAULT_SETTINGS = {
'applications': [
'guillotina',
'guillotina.contrib.redis',
'guillotina.contrib.cache',
],
'cache': {
'updates_channel': None,
'driver': 'guillotina.contrib.redis'
}
}
@pytest.mark.app_settings(DEFAULT_SETTINGS)
async def test_cache_set(redis_container, guillotina_main, loop):
util = get_utility(ICacheUtility)
await util.initialize()
assert util.initialized
assert util._obj_driver is not None
trns = mocks.MockTransaction(mocks.MockTransactionManager())
trns.added = trns.deleted = {}
rcache = BasicCache(trns)
await rcache.clear()
await rcache.set('bar', oid='foo')
# make sure it is in redis
driver = await resolve_dotted_name('guillotina.contrib.redis').get_driver()
val = await driver.get(CACHE_PREFIX + 'root-foo')
assert serialize.loads(val) == "bar"
# but also in memory
assert util._memory_cache.get('root-foo') == 'bar'
# and api matches..
assert await rcache.get(oid='foo') == 'bar'
await util.finalize(None)
@pytest.mark.app_settings(DEFAULT_SETTINGS)
async def test_cache_delete(redis_container, guillotina_main, loop):
util = get_utility(ICacheUtility)
await util.initialize()
assert util.initialized
assert util._obj_driver is not None
trns = mocks.MockTransaction(mocks.MockTransactionManager())
trns.added = trns.deleted = {}
rcache = BasicCache(trns)
await rcache.clear()
await rcache.set('bar', oid='foo')
# make sure it is in redis
driver = await resolve_dotted_name('guillotina.contrib.redis').get_driver()
assert serialize.loads(
await driver.get(CACHE_PREFIX + 'root-foo')) == "bar"
assert util._memory_cache.get('root-foo') == 'bar'
assert await rcache.get(oid='foo') == 'bar'
# now delete
await rcache.delete('root-foo')
assert await rcache.get(oid='foo') is None
await util.finalize(None)
@pytest.mark.app_settings(DEFAULT_SETTINGS)
async def test_cache_clear(redis_container, guillotina_main, loop):
util = get_utility(ICacheUtility)
await util.initialize()
assert util.initialized
assert util._obj_driver is not None
trns = mocks.MockTransaction(mocks.MockTransactionManager())
trns.added = trns.deleted = {}
rcache = BasicCache(trns)
await rcache.clear()
await rcache.set('bar', oid='foo')
# make sure it is in redis
driver = await resolve_dotted_name('guillotina.contrib.redis').get_driver()
assert serialize.loads(
await driver.get(CACHE_PREFIX + 'root-foo')) == "bar"
assert util._memory_cache.get('root-foo') == 'bar'
assert await rcache.get(oid='foo') == 'bar'
await rcache.clear()
assert await rcache.get(oid='foo') is None
await util.finalize(None)
| 31.257426 | 79 | 0.704783 | 398 | 3,157 | 5.462312 | 0.180905 | 0.065777 | 0.050598 | 0.045998 | 0.777829 | 0.74839 | 0.74839 | 0.733671 | 0.702392 | 0.702392 | 0 | 0 | 0.180868 | 3,157 | 100 | 80 | 31.57 | 0.840681 | 0.038644 | 0 | 0.662162 | 0 | 0 | 0.101785 | 0.047588 | 0 | 0 | 0 | 0 | 0.22973 | 1 | 0 | false | 0 | 0.108108 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a67c16d32bf7b78edce4ee46db1e948dbb9f9f5 | 166 | py | Python | decorator/confirmation.py | rlelito/DesignPatterns | 4e59442a10c1407ed4d9cdceea790263c30223b3 | [
"MIT"
] | null | null | null | decorator/confirmation.py | rlelito/DesignPatterns | 4e59442a10c1407ed4d9cdceea790263c30223b3 | [
"MIT"
] | null | null | null | decorator/confirmation.py | rlelito/DesignPatterns | 4e59442a10c1407ed4d9cdceea790263c30223b3 | [
"MIT"
] | null | null | null | from component import Component
class Confirmation(Component):
def print(self) -> None:
# here code printing confirmation
print("CONFIRMATION")
| 20.75 | 41 | 0.692771 | 17 | 166 | 6.764706 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228916 | 166 | 7 | 42 | 23.714286 | 0.898438 | 0.186747 | 0 | 0 | 0 | 0 | 0.090226 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 5 |
6a5ce2d60c660a55e97d852e746524ac18d1d12b | 358 | py | Python | djwebsockets/mixins/__init__.py | Extremus-io/djwebsockets | 777f5e9908d48560a6da5174e356c570a27f3a1f | [
"MIT"
] | 34 | 2015-09-15T17:52:58.000Z | 2018-02-06T04:39:47.000Z | djwebsockets/mixins/__init__.py | Extremus-io/djwebsockets | 777f5e9908d48560a6da5174e356c570a27f3a1f | [
"MIT"
] | 3 | 2016-02-17T22:31:38.000Z | 2016-06-07T13:08:20.000Z | djwebsockets/mixins/__init__.py | Extremus-io/djwebsockets | 777f5e9908d48560a6da5174e356c570a27f3a1f | [
"MIT"
] | 5 | 2015-10-17T19:22:44.000Z | 2019-01-25T10:45:00.000Z | class BaseWSMixin(object):
@classmethod
def on_connect(cls, socket, path):
pass
@classmethod
def on_message(cls, socket, message):
pass
@classmethod
def on_close(cls, socket):
pass
class MixinFail(Exception):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
pass
| 17.9 | 41 | 0.606145 | 39 | 358 | 5.282051 | 0.512821 | 0.203884 | 0.23301 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282123 | 358 | 19 | 42 | 18.842105 | 0.801556 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.285714 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
6a8f7cb7963928f8716a5eb94e1fa6d310ace1ac | 20 | py | Python | checkov/version.py | c0mmad0r3/checkov | 1375fffe54fb3c94757b7829d1076b59af599c6a | [
"Apache-2.0"
] | null | null | null | checkov/version.py | c0mmad0r3/checkov | 1375fffe54fb3c94757b7829d1076b59af599c6a | [
"Apache-2.0"
] | null | null | null | checkov/version.py | c0mmad0r3/checkov | 1375fffe54fb3c94757b7829d1076b59af599c6a | [
"Apache-2.0"
] | null | null | null | version = '2.0.861'
| 10 | 19 | 0.6 | 4 | 20 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 0.15 | 20 | 1 | 20 | 20 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6aa4b1f25b3ba240f4193b8104500ee154d9e02d | 1,711 | py | Python | src/launches/migrations/0003_auto_20190816_1051.py | SakshiUppoor/unicode-backend | 57562eac9a0c7006cc393c2ee17b76e3e12df0fa | [
"MIT"
] | null | null | null | src/launches/migrations/0003_auto_20190816_1051.py | SakshiUppoor/unicode-backend | 57562eac9a0c7006cc393c2ee17b76e3e12df0fa | [
"MIT"
] | null | null | null | src/launches/migrations/0003_auto_20190816_1051.py | SakshiUppoor/unicode-backend | 57562eac9a0c7006cc393c2ee17b76e3e12df0fa | [
"MIT"
] | 1 | 2021-08-25T05:27:30.000Z | 2021-08-25T05:27:30.000Z | # Generated by Django 2.2.4 on 2019-08-16 05:21
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('launches', '0002_auto_20190816_1035'),
]
operations = [
migrations.AlterField(
model_name='launches',
name='article_link',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='launches',
name='details',
field=models.CharField(blank=True, max_length=1000, null=True),
),
migrations.AlterField(
model_name='launches',
name='launch_success',
field=models.CharField(blank=True, max_length=10, null=True),
),
migrations.AlterField(
model_name='launches',
name='mission_patch_link',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='launches',
name='reddit_launch',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='launches',
name='rocket_name',
field=models.CharField(blank=True, max_length=50, null=True),
),
migrations.AlterField(
model_name='launches',
name='video_link',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='launches',
name='wikipedia',
field=models.CharField(blank=True, max_length=200, null=True),
),
]
| 31.685185 | 75 | 0.574518 | 172 | 1,711 | 5.563953 | 0.290698 | 0.167189 | 0.208986 | 0.242424 | 0.755486 | 0.755486 | 0.712644 | 0.593521 | 0.439916 | 0.439916 | 0 | 0.04557 | 0.307423 | 1,711 | 53 | 76 | 32.283019 | 0.762025 | 0.0263 | 0 | 0.617021 | 1 | 0 | 0.113582 | 0.013822 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6ab658cf7a8ae576767a1ddededbf946d13213d0 | 13,647 | py | Python | dojo/api_v2/permissions.py | iainrawson/django-DefectDojo | b080a8ee56fdbab7f60514625f83765c96718cbb | [
"BSD-3-Clause"
] | null | null | null | dojo/api_v2/permissions.py | iainrawson/django-DefectDojo | b080a8ee56fdbab7f60514625f83765c96718cbb | [
"BSD-3-Clause"
] | 562 | 2019-06-21T18:44:38.000Z | 2022-03-28T18:09:08.000Z | dojo/api_v2/permissions.py | iainrawson/django-DefectDojo | b080a8ee56fdbab7f60514625f83765c96718cbb | [
"BSD-3-Clause"
] | null | null | null | import re
from rest_framework.exceptions import ParseError
from dojo.models import Endpoint, Engagement, Finding, Product_Type, Product, Test, Dojo_Group
from django.shortcuts import get_object_or_404
from rest_framework import permissions
from dojo.authorization.authorization import user_has_permission
from dojo.authorization.roles_permissions import Permissions
def check_post_permission(request, post_model, post_pk, post_permission):
if request.method == 'POST':
if request.data.get(post_pk) is None:
raise ParseError('Attribute \'{}\' is required'.format(post_pk))
object = get_object_or_404(post_model, pk=request.data.get(post_pk))
return user_has_permission(request.user, object, post_permission)
else:
return True
def check_object_permission(request, object, get_permission, put_permission, delete_permission, post_permission=None):
if request.method == 'GET':
return user_has_permission(request.user, object, get_permission)
elif request.method == 'PUT' or request.method == 'PATCH':
return user_has_permission(request.user, object, put_permission)
elif request.method == 'DELETE':
return user_has_permission(request.user, object, delete_permission)
elif request.method == 'POST':
return user_has_permission(request.user, object, post_permission)
else:
return False
class UserHasAppAnalysisPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product, 'product', Permissions.Technology_Add)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj.product, Permissions.Technology_View, Permissions.Technology_Edit, Permissions.Technology_Delete)
class UserHasDojoGroupPermission(permissions.BasePermission):
def has_permission(self, request, view):
if request.method == 'POST':
return request.user.is_staff
else:
return True
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Group_View, Permissions.Group_Edit, Permissions.Group_Delete)
class UserHasDojoGroupMemberPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Dojo_Group, 'group', Permissions.Group_Manage_Members)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Group_View, Permissions.Group_Manage_Members, Permissions.Group_Member_Delete)
class UserHasDojoMetaPermission(permissions.BasePermission):
def has_permission(self, request, view):
if request.method == 'POST':
has_permission_result = True
product_id = request.data.get('product', None)
if product_id:
object = get_object_or_404(Product, pk=product_id)
has_permission_result = has_permission_result and \
user_has_permission(request.user, object, Permissions.Product_Edit)
finding_id = request.data.get('finding', None)
if finding_id:
object = get_object_or_404(Finding, pk=finding_id)
has_permission_result = has_permission_result and \
user_has_permission(request.user, object, Permissions.Finding_Edit)
endpoint_id = request.data.get('endpoint', None)
if endpoint_id:
object = get_object_or_404(Endpoint, pk=endpoint_id)
has_permission_result = has_permission_result and \
user_has_permission(request.user, object, Permissions.Endpoint_Edit)
return has_permission_result
else:
return True
def has_object_permission(self, request, view, obj):
has_permission_result = True
product = obj.product
if product:
has_permission_result = has_permission_result and \
check_object_permission(request, product, Permissions.Product_View, Permissions.Product_Edit, Permissions.Product_Edit)
finding = obj.finding
if finding:
has_permission_result = has_permission_result and \
check_object_permission(request, finding, Permissions.Finding_View, Permissions.Finding_Edit, Permissions.Finding_Edit)
endpoint = obj.endpoint
if endpoint:
has_permission_result = has_permission_result and \
check_object_permission(request, endpoint, Permissions.Endpoint_View, Permissions.Endpoint_Edit, Permissions.Endpoint_Edit)
return has_permission_result
class UserHasEndpointPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product, 'product', Permissions.Endpoint_Add)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Endpoint_View, Permissions.Endpoint_Edit, Permissions.Endpoint_Delete)
class UserHasEndpointStatusPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Endpoint, 'endpoint', Permissions.Endpoint_Edit)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj.endpoint, Permissions.Endpoint_View, Permissions.Endpoint_Edit, Permissions.Endpoint_Edit)
class UserHasEngagementPermission(permissions.BasePermission):
# Permission checks for related objects (like notes or metadata) can be moved
# into a seperate class, when the legacy authorization will be removed.
path_engagement_post = re.compile(r'^/api/v2/engagements/$')
path_engagement = re.compile(r'^/api/v2/engagements/\d+/$')
def has_permission(self, request, view):
if UserHasEngagementPermission.path_engagement_post.match(request.path) or \
UserHasEngagementPermission.path_engagement.match(request.path):
return check_post_permission(request, Product, 'product', Permissions.Engagement_Add)
else:
# related object only need object permission
return True
def has_object_permission(self, request, view, obj):
if UserHasEngagementPermission.path_engagement_post.match(request.path) or \
UserHasEngagementPermission.path_engagement.match(request.path):
return check_object_permission(request, obj, Permissions.Engagement_View, Permissions.Engagement_Edit, Permissions.Engagement_Delete)
else:
return check_object_permission(request, obj, Permissions.Engagement_View, Permissions.Engagement_Edit, Permissions.Engagement_Edit, Permissions.Engagement_Edit)
class UserHasFindingPermission(permissions.BasePermission):
# Permission checks for related objects (like notes or metadata) can be moved
# into a seperate class, when the legacy authorization will be removed.
path_finding_post = re.compile(r'^/api/v2/findings/$')
path_finding = re.compile(r'^/api/v2/findings/\d+/$')
path_stub_finding_post = re.compile(r'^/api/v2/stub_findings/$')
path_stub_finding = re.compile(r'^/api/v2/stub_findings/\d+/$')
def has_permission(self, request, view):
if UserHasFindingPermission.path_finding_post.match(request.path) or \
UserHasFindingPermission.path_finding.match(request.path) or \
UserHasFindingPermission.path_stub_finding_post.match(request.path) or \
UserHasFindingPermission.path_stub_finding.match(request.path):
return check_post_permission(request, Test, 'test', Permissions.Finding_Add)
else:
# related object only need object permission
return True
def has_object_permission(self, request, view, obj):
if UserHasFindingPermission.path_finding_post.match(request.path) or \
UserHasFindingPermission.path_finding.match(request.path) or \
UserHasFindingPermission.path_stub_finding_post.match(request.path) or \
UserHasFindingPermission.path_stub_finding.match(request.path):
return check_object_permission(request, obj, Permissions.Finding_View, Permissions.Finding_Edit, Permissions.Finding_Delete)
else:
return check_object_permission(request, obj, Permissions.Finding_View, Permissions.Finding_Edit, Permissions.Finding_Edit, Permissions.Finding_Edit)
class UserHasImportPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Engagement, 'engagement', Permissions.Import_Scan_Result)
class UserHasProductPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product_Type, 'prod_type', Permissions.Product_Type_Add_Product)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_View, Permissions.Product_Edit, Permissions.Product_Delete)
class UserHasProductMemberPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product, 'product', Permissions.Product_Manage_Members)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_View, Permissions.Product_Manage_Members, Permissions.Product_Member_Delete)
class UserHasProductGroupPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product, 'product', Permissions.Product_Group_Add)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_Group_View, Permissions.Product_Group_Edit, Permissions.Product_Group_Delete)
class UserHasProductTypePermission(permissions.BasePermission):
def has_permission(self, request, view):
if request.method == 'POST':
return request.user.is_staff
else:
return True
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_Type_View, Permissions.Product_Type_Edit, Permissions.Product_Type_Delete)
class UserHasProductTypeMemberPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product_Type, 'product_type', Permissions.Product_Type_Manage_Members)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_Type_View, Permissions.Product_Type_Manage_Members, Permissions.Product_Type_Member_Delete)
class UserHasProductTypeGroupPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product_Type, 'product_type', Permissions.Product_Type_Group_Add)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Product_Type_Group_View, Permissions.Product_Type_Group_Edit, Permissions.Product_Type_Group_Delete)
class UserHasReimportPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Test, 'test', Permissions.Import_Scan_Result)
class UserHasTestPermission(permissions.BasePermission):
# Permission checks for related objects (like notes or metadata) can be moved
# into a seperate class, when the legacy authorization will be removed.
path_tests_post = re.compile(r'^/api/v2/tests/$')
path_tests = re.compile(r'^/api/v2/tests/\d+/$')
def has_permission(self, request, view):
if UserHasTestPermission.path_tests_post.match(request.path) or \
UserHasTestPermission.path_tests.match(request.path):
return check_post_permission(request, Engagement, 'engagement', Permissions.Test_Add)
else:
# related object only need object permission
return True
def has_object_permission(self, request, view, obj):
if UserHasTestPermission.path_tests_post.match(request.path) or \
UserHasTestPermission.path_tests.match(request.path):
return check_object_permission(request, obj, Permissions.Test_View, Permissions.Test_Edit, Permissions.Test_Delete)
else:
return check_object_permission(request, obj, Permissions.Test_View, Permissions.Test_Edit, Permissions.Test_Edit, Permissions.Test_Edit)
class UserHasTestImportPermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Test, 'test', Permissions.Test_Edit)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj.test, Permissions.Test_View, Permissions.Test_Edit, Permissions.Test_Delete)
class UserHasLanguagePermission(permissions.BasePermission):
def has_permission(self, request, view):
return check_post_permission(request, Product, 'product', Permissions.Language_Add)
def has_object_permission(self, request, view, obj):
return check_object_permission(request, obj, Permissions.Language_View, Permissions.Language_Edit, Permissions.Language_Delete)
class IsSuperUser(permissions.BasePermission):
def has_permission(self, request, view):
return request.user and request.user.is_superuser
| 50.732342 | 172 | 0.75174 | 1,548 | 13,647 | 6.366925 | 0.078165 | 0.082792 | 0.078835 | 0.093851 | 0.763088 | 0.745333 | 0.716416 | 0.688718 | 0.647829 | 0.625406 | 0 | 0.002029 | 0.169415 | 13,647 | 268 | 173 | 50.921642 | 0.86749 | 0.041474 | 0 | 0.458763 | 0 | 0 | 0.029072 | 0.00941 | 0 | 0 | 0 | 0 | 0 | 1 | 0.201031 | false | 0 | 0.061856 | 0.139175 | 0.680412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
0a75ce8ab2a27cc36a2eac1df80a40ff512f466d | 1,047 | py | Python | tests/tagtrain/test_create.py | c17r/TagTrain | 5aa1ca36439cc5e81d0c691f905a4bb879b78399 | [
"MIT"
] | null | null | null | tests/tagtrain/test_create.py | c17r/TagTrain | 5aa1ca36439cc5e81d0c691f905a4bb879b78399 | [
"MIT"
] | 7 | 2020-03-24T17:54:31.000Z | 2021-09-21T12:34:34.000Z | tests/tagtrain/test_create.py | c17r/TagTrain | 5aa1ca36439cc5e81d0c691f905a4bb879b78399 | [
"MIT"
] | null | null | null | from datetime import datetime
from unittest.mock import MagicMock, patch
from tagtrain import data
from . import fake
from tagtrain.tagtrain.tt_create import Create
@patch('tagtrain.data.by_owner.create_group')
def test_existing(create_group):
group = fake.create_group(name='GroupName', member_count=1)
create_group.return_value = (group, False)
app, reply, message, match = fake.create_all()
Create(app).run(reply, message, match)
create_group.assert_called_once_with('AuthorName', 'GroupName')
reply.append.assert_called_once_with('Group `GroupName` already exists. Skipping.')
@patch('tagtrain.data.by_owner.create_group')
def test_good(create_group):
group = fake.create_group(name='GroupName', member_count=1)
create_group.return_value = (group, True)
app, reply, message, match = fake.create_all()
Create(app).run(reply, message, match)
create_group.assert_called_once_with('AuthorName', 'GroupName')
reply.append.assert_called_once_with('Group `GroupName` created, no Members.')
| 31.727273 | 88 | 0.757402 | 142 | 1,047 | 5.352113 | 0.323944 | 0.144737 | 0.089474 | 0.105263 | 0.742105 | 0.742105 | 0.742105 | 0.742105 | 0.742105 | 0.631579 | 0 | 0.002193 | 0.12894 | 1,047 | 32 | 89 | 32.71875 | 0.83114 | 0 | 0 | 0.47619 | 0 | 0 | 0.198663 | 0.066858 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.095238 | false | 0 | 0.238095 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0accc51b15188a20a300dd56084035fa6bbf9940 | 1,088 | py | Python | test/test_repository_helm_app_spec.py | RyanSiu1995/argocd-python-client | 2e8f097fe09f247a46ac70692241a93d1acd076a | [
"MIT"
] | 1 | 2021-11-20T13:37:43.000Z | 2021-11-20T13:37:43.000Z | test/test_repository_helm_app_spec.py | RyanSiu1995/argocd-python-client | 2e8f097fe09f247a46ac70692241a93d1acd076a | [
"MIT"
] | null | null | null | test/test_repository_helm_app_spec.py | RyanSiu1995/argocd-python-client | 2e8f097fe09f247a46ac70692241a93d1acd076a | [
"MIT"
] | null | null | null | """
Consolidate Services
Description of all APIs # noqa: E501
The version of the OpenAPI document: version not set
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import argocd_python_client
from argocd_python_client.model.v1alpha1_helm_file_parameter import V1alpha1HelmFileParameter
from argocd_python_client.model.v1alpha1_helm_parameter import V1alpha1HelmParameter
globals()['V1alpha1HelmFileParameter'] = V1alpha1HelmFileParameter
globals()['V1alpha1HelmParameter'] = V1alpha1HelmParameter
from argocd_python_client.model.repository_helm_app_spec import RepositoryHelmAppSpec
class TestRepositoryHelmAppSpec(unittest.TestCase):
"""RepositoryHelmAppSpec unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testRepositoryHelmAppSpec(self):
"""Test RepositoryHelmAppSpec"""
# FIXME: construct object with mandatory attributes with example values
# model = RepositoryHelmAppSpec() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 27.2 | 93 | 0.761029 | 109 | 1,088 | 7.376147 | 0.550459 | 0.059701 | 0.089552 | 0.08209 | 0.130597 | 0.097015 | 0.097015 | 0 | 0 | 0 | 0 | 0.024336 | 0.169118 | 1,088 | 39 | 94 | 27.897436 | 0.865044 | 0.311581 | 0 | 0.176471 | 1 | 0 | 0.075736 | 0.064516 | 0 | 0 | 0 | 0.025641 | 0 | 1 | 0.176471 | false | 0.176471 | 0.352941 | 0 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
0adab32d6dddb9bb0655d5853fe66934a9e1f15a | 85 | py | Python | pythoncalculator/__init__.py | jem00re/python-calculator | 46887d55d979418b0d59a3383a5a4d2fda82dddf | [
"MIT"
] | null | null | null | pythoncalculator/__init__.py | jem00re/python-calculator | 46887d55d979418b0d59a3383a5a4d2fda82dddf | [
"MIT"
] | 3 | 2021-05-24T14:19:56.000Z | 2021-05-25T07:50:30.000Z | pythoncalculator/__init__.py | jem00re/python-calculator | 46887d55d979418b0d59a3383a5a4d2fda82dddf | [
"MIT"
] | null | null | null | from .add import add
from .multiply import multiply
from .subtract import subtract
| 21.25 | 31 | 0.8 | 12 | 85 | 5.666667 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164706 | 85 | 3 | 32 | 28.333333 | 0.957746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0ae2dc2de2014652d062812fc5f90b4b20bbf9a1 | 296 | py | Python | venv/Lib/site-packages/pybrain/structure/networks/__init__.py | ishatserka/MachineLearningAndDataAnalysisCoursera | e82e772df2f4aec162cb34ac6127df10d14a625a | [
"MIT"
] | 4 | 2015-01-01T14:57:38.000Z | 2018-07-12T04:21:36.000Z | pybrain/structure/networks/__init__.py | abhishekgahlot/pybrain | c54661f13857d5bcb0095ba2fb12f5a403a4a70f | [
"BSD-3-Clause"
] | null | null | null | pybrain/structure/networks/__init__.py | abhishekgahlot/pybrain | c54661f13857d5bcb0095ba2fb12f5a403a4a70f | [
"BSD-3-Clause"
] | 2 | 2015-01-23T09:23:58.000Z | 2019-02-22T05:42:29.000Z | from swiping import SwipingNetwork
from borderswiping import BorderSwipingNetwork
from neurondecomposable import NeuronDecomposableNetwork
from feedforward import FeedForwardNetwork
from recurrent import RecurrentNetwork
from network import Network
from bidirectional import BidirectionalNetwork
| 37 | 56 | 0.905405 | 28 | 296 | 9.571429 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094595 | 296 | 7 | 57 | 42.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0ae9a4fd9cdc5504da4791ede08b495e487ab8e4 | 21,600 | py | Python | src/query_optimizer/tests/t_rule_query_optimizer.py | kcaras/Eva | c8344e3aa52f4a0e1378154f939293b87fef08c1 | [
"Apache-2.0"
] | null | null | null | src/query_optimizer/tests/t_rule_query_optimizer.py | kcaras/Eva | c8344e3aa52f4a0e1378154f939293b87fef08c1 | [
"Apache-2.0"
] | null | null | null | src/query_optimizer/tests/t_rule_query_optimizer.py | kcaras/Eva | c8344e3aa52f4a0e1378154f939293b87fef08c1 | [
"Apache-2.0"
] | null | null | null | from query_optimizer.rule_query_optimizer import RuleQueryOptimizer, Rules
from expression.comparison_expression import ComparisonExpression
from expression.abstract_expression import ExpressionType
from expression.constant_value_expression import ConstantValueExpression
from expression.tuple_value_expression import TupleValueExpression
from query_planner.seq_scan_plan import SeqScanPlan
from query_planner.logical_inner_join_plan import LogicalInnerJoinPlan
from query_planner.logical_projection_plan import LogicalProjectionPlan
from query_parser.table_ref import TableRef, TableInfo
from loaders.video_loader import SimpleVideoLoader
from models.catalog.video_info import VideoMetaInfo
from models.catalog.properties import VideoFormat
def test_simple_predicate_pushdown(verbose=False):
# Creating the videos
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
projection_output = ['v1.1', 'v2.2']
root = LogicalProjectionPlan(videos=[video1, video2], column_ids=projection_output, foreign_column_ids=[])
# Creating Expression for Select: Expression is basically where v1.1 == 4
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(projection_output[0].split('.')[1]))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
# used both videos because purposely placed BEFORE the join
s1 = SeqScanPlan(predicate=expression, column_ids=['v1.1'], videos=[video1, video2], foreign_column_ids=[])
s1.parent = root
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.3', 'v2.3'])
j1.parent = s1
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
s1.set_children([j1])
t1.parent = j1
t2.parent = j1
j1.set_children([t1, t2])
root.set_children([s1])
rule_list = [Rules.PREDICATE_PUSHDOWN]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert root.children == [j1]
assert j1.parent == root
assert j1.children == [s1, t2]
assert s1.parent == j1
assert s1.videos == [video1]
assert t2.parent == j1
assert s1.children == [t1]
assert t1.parent == s1
print('Simple Predicate Pushdown Succeeded!')
def test_simple_projection_pushdown_join(verbose=False):
# Creating the videos
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
projection_output = ['v1.3', 'v2.4']
root = LogicalProjectionPlan(videos=[video1, video2], column_ids=projection_output, foreign_column_ids=[])
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.1', 'v2.1'])
j1.parent = root
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
t1.parent = j1
t2.parent = j1
j1.set_children([t1, t2])
root.set_children([j1])
rule_list = [Rules.PROJECTION_PUSHDOWN_JOIN]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert root.children == [j1]
assert j1.parent == root
assert type(j1.children[0]) == LogicalProjectionPlan
assert type(j1.children[1]) == LogicalProjectionPlan
assert 'v1.1' in j1.children[0].column_ids
assert 'v1.3' in j1.children[0].column_ids
assert 'v2.1' in j1.children[1].column_ids
assert 'v2.4' in j1.children[1].column_ids
assert type(t2.parent) == LogicalProjectionPlan
assert type(t1.parent) == LogicalProjectionPlan
assert j1.children[0].children == [t1]
assert j1.children[1].children == [t2]
print('Simple Projection Pushdown Join Test Successful!')
def test_simple_projection_pushdown_select(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
# Creating Expression for Select: Expression is basically where v2.7 == 4
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(7))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
s1 = SeqScanPlan(predicate=expression, column_ids=['v1.7'], videos=[video1], foreign_column_ids=[])
projection_output = ['v1.3', 'v1.4']
root = LogicalProjectionPlan(videos=[video1], column_ids=projection_output, foreign_column_ids=[])
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
root.set_children([s1])
s1.parent = root
s1.set_children([t1])
t1.parent = s1
rule_list = [Rules.PROJECTION_PUSHDOWN_SELECT]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert root.children == [s1]
assert s1.parent == root
assert len(s1.children) == 1
assert type(s1.children[0]) == LogicalProjectionPlan
assert 'v1.7' in s1.children[0].column_ids
assert 'v1.3' in s1.children[0].column_ids
assert 'v1.4' in s1.children[0].column_ids
assert type(t1.parent) == LogicalProjectionPlan
assert s1.children[0].children == [t1]
print('Simple Projection Pushdown Select Test Successful!')
def test_combined_projection_pushdown(verbose=False):
# Creating the videos
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
projection_output = ['v1.3', 'v2.4']
root = LogicalProjectionPlan(videos=[video1, video2], column_ids=projection_output, foreign_column_ids=[])
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.1', 'v2.1'])
j1.parent = root
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(7))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
s1 = SeqScanPlan(predicate=expression, column_ids=['v2.7'], videos=[video1], foreign_column_ids=[])
s1.parent = j1
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
s1.set_children([t2])
t1.parent = j1
t2.parent = s1
j1.set_children([t1, s1])
root.set_children([j1])
rule_list = [Rules.PROJECTION_PUSHDOWN_JOIN, Rules.PROJECTION_PUSHDOWN_SELECT]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert root.children == [j1]
assert j1.parent == root
assert type(j1.children[0]) == LogicalProjectionPlan
assert type(j1.children[1]) == LogicalProjectionPlan
assert type(s1.parent) == LogicalProjectionPlan
assert 'v2.1' in s1.parent.column_ids
assert 'v2.4' in s1.parent.column_ids
assert s1.parent in j1.children
assert len(s1.children) == 1
assert type(s1.children[0]) == LogicalProjectionPlan
assert 'v2.7' in s1.children[0].column_ids
assert 'v2.1' in s1.children[0].column_ids
assert 'v2.4' in s1.children[0].column_ids
assert type(t1.parent) == LogicalProjectionPlan
assert 'v1.1' in t1.parent.column_ids
assert 'v1.3' in t1.parent.column_ids
assert t1.parent in j1.children
assert s1.children[0].children == [t2]
print('Combined Projection Pushdown Select Test Successful!')
def test_both_projection_pushdown_and_predicate_pushdown(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
projection_output = ['v1.1', 'v2.2']
root = LogicalProjectionPlan(videos=[video1, video2], column_ids=projection_output, foreign_column_ids=[])
# Creating Expression for Select: Expression is basically where v1.1 == 4
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(projection_output[0].split('.')[1]))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
# used both videos because purposely placed BEFORE the join
s1 = SeqScanPlan(predicate=expression, column_ids=['v1.1'], videos=[video1, video2], foreign_column_ids=[])
s1.parent = root
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.3', 'v2.3'])
j1.parent = s1
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
s1.set_children([j1])
t1.parent = j1
t2.parent = j1
j1.set_children([t1, t2])
root.set_children([s1])
rule_list = [Rules.PREDICATE_PUSHDOWN, Rules.PROJECTION_PUSHDOWN_JOIN, Rules.PROJECTION_PUSHDOWN_SELECT]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert root.children == [j1]
assert j1.parent == root
assert len(j1.children) == 2
assert s1 in j1.children
assert s1.parent == j1
assert s1.videos == [video1]
assert len(s1.children) == 1
assert type(s1.children[0]) == LogicalProjectionPlan
assert 'v1.1' in s1.children[0].column_ids
assert 'v1.3' in s1.children[0].column_ids
assert s1.children[0].children == [t1]
assert t1.parent == s1.children[0]
s1_ix = j1.children.index(s1)
if s1_ix == 0:
proj_ix = 1
else:
proj_ix = 0
assert type(j1.children[proj_ix]) == LogicalProjectionPlan
assert j1.children[proj_ix].parent == j1
assert 'v2.3' in j1.children[proj_ix].column_ids
assert 'v2.2' in j1.children[proj_ix].column_ids
assert t2.parent == j1.children[proj_ix]
print('Combined Projection Pushdown and Predicate Pushdown Test Successful!')
def test_double_join_predicate_pushdown(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
meta3 = VideoMetaInfo(file='v3', c_format=VideoFormat.MOV, fps=30)
video3 = SimpleVideoLoader(video_metadata=meta3)
projection_output = ['v1.1', 'v2.2', 'v3.4']
root = LogicalProjectionPlan(videos=[video1, video2, video3], column_ids=projection_output, foreign_column_ids=[])
# Creating Expression for Select: Expression is basically where v3.3 == 4
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(projection_output[2].split('.')[1]))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
# used both videos because purposely placed BEFORE the join
s1 = SeqScanPlan(predicate=expression, column_ids=['v3.3'], videos=[video1, video2, video3], foreign_column_ids=[])
s1.parent = root
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.3', 'v2.3'])
j2 = LogicalInnerJoinPlan(videos=[video1, video2, video3], join_ids=['v1.3', 'v2.3', 'v3.3'])
j1.parent = j2
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
t3 = TableRef(video=video3, table_info=TableInfo(table_name='v3'))
s1.set_children([j2])
t1.parent = j1
t2.parent = j1
j2.set_children([j1,t3])
t3.parent = j2
j1.set_children([t1, t2])
root.set_children([s1])
rule_list = [Rules.PREDICATE_PUSHDOWN]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert len(root.children) == 1
assert root.children[0].parent == root
assert j2.parent == root
assert len(j2.children) == 2
assert j2.children[0] == j1
assert j2.children[1] == s1
assert s1.parent == j2
assert j1.parent == j2
assert len(s1.videos) == 1
assert s1.videos[0] == video3
assert len(s1.children) == 1
assert s1.children[0] == t3
assert t3.parent == s1
assert len(j1.children) == 2
assert j1.children[0] == t1
assert j1.children[1] == t2
assert t1.parent == j1
assert t2.parent == j1
print('Double join predicate Pushdown Successful!')
def test_double_join_projection_pushdown(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
meta3 = VideoMetaInfo(file='v3', c_format=VideoFormat.MOV, fps=30)
video3 = SimpleVideoLoader(video_metadata=meta3)
projection_output = ['v1.1', 'v2.2', 'v3.4']
root = LogicalProjectionPlan(videos=[video1, video2, video3], column_ids=projection_output, foreign_column_ids=[])
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.3', 'v2.3'])
j2 = LogicalInnerJoinPlan(videos=[video1, video2, video3], join_ids=['v1.3', 'v2.3', 'v3.3'])
j1.parent = j2
j2.parent = root
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
t3 = TableRef(video=video3, table_info=TableInfo(table_name='v3'))
t1.parent = j1
t2.parent = j1
j2.set_children([t3, j1])
t3.parent = j2
j1.set_children([t1, t2])
root.set_children([j2])
rule_list = [Rules.PROJECTION_PUSHDOWN_JOIN]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert 'v1.1'in root.column_ids
assert 'v2.2'in root.column_ids
assert 'v3.4'in root.column_ids
assert len(root.children) == 1
assert root.children[0] == j2
assert j2.parent == root
assert len(j2.videos) == 3
assert video1 in j2.videos
assert video2 in j2.videos
assert video3 in j2.videos
assert len(j2.children) == 2
assert j1 in j2.children
j1_ix = j2.children.index(j1)
pix = 1 - j1_ix
assert type(j2.children[pix]) == LogicalProjectionPlan
assert len(j2.children[pix].column_ids) == 2
assert 'v3.4' in j2.children[pix].column_ids
assert 'v3.3' in j2.children[pix].column_ids
assert j2.children[pix].parent == j2
assert len(j2.children[pix].children) == 1
assert j2.children[pix].children[0] == t3
assert len(j1.videos) == 2
assert video1 in j1.videos
assert video2 in j1.videos
assert len(j1.children) == 2
assert type(j1.children[0]) == LogicalProjectionPlan
assert type(j1.children[1]) == LogicalProjectionPlan
assert len(j1.children[0].column_ids) == 2
assert len(j1.children[0].children) == 1
assert j1.children[0].children[0] == t1
assert len(j1.children[1].children) == 1
assert j1.children[1].children[0] == t2
assert 'v1.3' in j1.children[0].column_ids
assert 'v1.1' in j1.children[0].column_ids
assert len(j1.children[1].column_ids) == 2
assert 'v2.3' in j1.children[1].column_ids
assert 'v2.2' in j1.children[1].column_ids
assert j1.children[0].parent == j1
assert j1.children[1].parent == j1
print('Double join Projection Pushdown Successful!')
def test_join_elimination(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
meta2 = VideoMetaInfo(file='v2', c_format=VideoFormat.MOV, fps=30)
video2 = SimpleVideoLoader(video_metadata=meta2)
projection_output = ['v1.1', 'v2.2']
root = LogicalProjectionPlan(videos=[video1, video2], column_ids=projection_output, foreign_column_ids=['v2.2'])
# Creating Expression for Select: Expression is basically where v1.1 == v2.2
# Also creating a foreign key constraint for v1 where it requires v2.2
# hence join elimination should delete the join node and just return all of v1.1 for select
tup1 = TupleValueExpression(col_idx=1)
tup2 = TupleValueExpression(col_idx=2)
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup1, right=tup2)
# used both videos because purposely placed BEFORE the join
s1 = SeqScanPlan(predicate=expression, column_ids=['v1.1', 'v2.2'], videos=[video1, video2],
foreign_column_ids=['v2.2'])
s1.parent = root
j1 = LogicalInnerJoinPlan(videos=[video1, video2], join_ids=['v1.1', 'v2.2'])
j1.parent = s1
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
t2 = TableRef(video=video2, table_info=TableInfo(table_name='v2'))
t1.parent = j1
t2.parent = j1
root.set_children([s1])
s1.set_children([j1])
j1.set_children([t1, t2])
rule_list = [Rules.JOIN_ELIMINATION]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
assert root.parent is None
assert (type(t1.parent) == SeqScanPlan)
assert (type(s1.children[0]) == TableRef)
assert (len(s1.children) == 1)
assert (len(s1.foreign_column_ids) == 0)
assert ('v1.2' in root.column_ids)
assert (len(root.column_ids) == 2)
assert (len(root.foreign_column_ids) == 0)
assert (type(root.children[0]) == SeqScanPlan)
def test_shouldnot_simply_predicate(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
# Creating Expression for Select: Expression is basically where v1.7 == 4
const = ConstantValueExpression(value=4)
tup = TupleValueExpression(col_idx=int(7))
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=tup, right=const)
s1 = SeqScanPlan(predicate=expression, column_ids=['v1.7'], videos=[video1], foreign_column_ids=[])
projection_output = ['v1.3', 'v1.4']
root = LogicalProjectionPlan(videos=[video1], column_ids=projection_output, foreign_column_ids=[])
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
root.set_children([s1])
s1.parent = root
s1.set_children([t1])
t1.parent = s1
rule_list = [Rules.SIMPLIFY_PREDICATE]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
def test_should_simply_predicate(verbose=False):
meta1 = VideoMetaInfo(file='v1', c_format=VideoFormat.MOV, fps=30)
video1 = SimpleVideoLoader(video_metadata=meta1)
# Creating Expression for Select: Expression is basically where 0==1
const1 = ConstantValueExpression(value=0)
const2 = ConstantValueExpression(value=1)
# Comparing if 0 is equal to 1
# It will be False which will trigger the simplify predicate function
expression = ComparisonExpression(exp_type=ExpressionType.COMPARE_EQUAL, left=const1, right=const2)
s1 = SeqScanPlan(predicate=expression, column_ids=[], videos=[], foreign_column_ids=[])
projection_output = ['v1.3', 'v1.4']
root = LogicalProjectionPlan(videos=[video1], column_ids=projection_output, foreign_column_ids=[])
t1 = TableRef(video=video1, table_info=TableInfo(table_name='v1'))
root.set_children([s1])
s1.parent = root
s1.set_children([t1])
t1.parent = s1
rule_list = [Rules.SIMPLIFY_PREDICATE]
if verbose:
print('Original Plan Tree')
print(root)
qo = RuleQueryOptimizer()
new_tree = qo.run(root, rule_list)
if verbose:
print('New Plan Tree')
print(new_tree)
if __name__ == '__main__':
test_simple_predicate_pushdown()
test_simple_projection_pushdown_select()
test_simple_projection_pushdown_join()
test_combined_projection_pushdown()
test_both_projection_pushdown_and_predicate_pushdown()
test_double_join_predicate_pushdown()
test_double_join_projection_pushdown()
test_join_elimination()
#test_shouldnot_simply_predicate()
#test_should_simply_predicate()
| 37.762238 | 119 | 0.701019 | 2,866 | 21,600 | 5.135729 | 0.057572 | 0.042802 | 0.028535 | 0.027108 | 0.837421 | 0.779401 | 0.738637 | 0.712277 | 0.688838 | 0.673755 | 0 | 0.0459 | 0.177963 | 21,600 | 571 | 120 | 37.828371 | 0.783059 | 0.051481 | 0 | 0.662971 | 0 | 0 | 0.051595 | 0 | 0 | 0 | 0 | 0 | 0.299335 | 1 | 0.022173 | false | 0 | 0.026608 | 0 | 0.04878 | 0.104213 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0aec990841e6253fa4b9c319dbc08cd9d69e7f6e | 55 | py | Python | sherlock/__init__.py | avinashshenoy97/sherlock | 7c9298cd35f7afce401b13d363d2a9f53c8fc86b | [
"MIT"
] | null | null | null | sherlock/__init__.py | avinashshenoy97/sherlock | 7c9298cd35f7afce401b13d363d2a9f53c8fc86b | [
"MIT"
] | null | null | null | sherlock/__init__.py | avinashshenoy97/sherlock | 7c9298cd35f7afce401b13d363d2a9f53c8fc86b | [
"MIT"
] | null | null | null | from .constants import *
from .sherlock import Sherlock | 27.5 | 30 | 0.818182 | 7 | 55 | 6.428571 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 55 | 2 | 30 | 27.5 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0af5f0e04b9f5265e8a3393358bfde2241f92dbc | 42,836 | py | Python | tensornetwork/backends/symmetric/symmetric_backend_test.py | adityasharma3/TensorNetwork | 02a290576cab4adbd7dcfeb727eddc49f598b328 | [
"Apache-2.0"
] | null | null | null | tensornetwork/backends/symmetric/symmetric_backend_test.py | adityasharma3/TensorNetwork | 02a290576cab4adbd7dcfeb727eddc49f598b328 | [
"Apache-2.0"
] | 1 | 2020-08-27T14:38:25.000Z | 2020-08-27T19:01:51.000Z | tensornetwork/backends/symmetric/symmetric_backend_test.py | adityasharma3/TensorNetwork | 02a290576cab4adbd7dcfeb727eddc49f598b328 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pytest
from tensornetwork.backends.symmetric import symmetric_backend
from tensornetwork.backends.numpy import numpy_backend
from tensornetwork.block_sparse.charge import (U1Charge, charge_equal,
BaseCharge, fuse_charges)
from tensornetwork.block_sparse.blocksparse_utils import _find_diagonal_sparse_blocks #pylint: disable=line-too-long
from tensornetwork.block_sparse.index import Index
from tensornetwork.block_sparse import (tensordot, BlockSparseTensor, transpose,
sqrt, ChargeArray, diag, trace, norm,
eye, ones, zeros, randn, random, eigh,
inv)
from tensornetwork.block_sparse.caching import get_cacher, get_caching_status
from tensornetwork.ncon_interface import ncon
np_randn_dtypes = [np.float32, np.float16, np.float64]
np_dtypes = np_randn_dtypes + [np.complex64, np.complex128]
np_tensordot_dtypes = [np.float16, np.float64, np.complex128]
def get_tensor(R, num_charges, dtype=np.float64):
Ds = np.random.randint(8, 12, R)
charges = [
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges) for n in range(R)
]
flows = list(np.full(R, fill_value=False, dtype=np.bool))
indices = [Index(charges[n], flows[n]) for n in range(R)]
return BlockSparseTensor.random(indices=indices, dtype=dtype)
def get_square_matrix(num_charges, dtype=np.float64):
D = np.random.randint(40, 60)
charges = BaseCharge(
np.random.randint(-5, 6, (D, num_charges)),
charge_types=[U1Charge] * num_charges)
flows = [False, True]
indices = [Index(charges, flows[n]) for n in range(2)]
return BlockSparseTensor.random(indices=indices, dtype=dtype)
def get_hermitian_matrix(num_charges, dtype=np.float64):
D = np.random.randint(40, 60)
charges = BaseCharge(
np.random.randint(-5, 6, (D, num_charges)),
charge_types=[U1Charge] * num_charges)
flows = [False, True]
indices = [Index(charges, flows[n]) for n in range(2)]
A = BlockSparseTensor.random(indices=indices, dtype=dtype)
return A + A.conj().T
def get_chargearray(num_charges, dtype=np.float64):
D = np.random.randint(8, 12)
charge = BaseCharge(
np.random.randint(-5, 6, (D, num_charges)),
charge_types=[U1Charge] * num_charges)
flow = False
index = Index(charge, flow)
return ChargeArray.random(indices=[index], dtype=dtype)
def get_contractable_tensors(R1, R2, cont, dtype, num_charges):
DsA = np.random.randint(5, 10, R1)
DsB = np.random.randint(5, 10, R2)
assert R1 >= cont
assert R2 >= cont
chargesA = [
BaseCharge(
np.random.randint(-5, 6, (DsA[n], num_charges)),
charge_types=[U1Charge] * num_charges) for n in range(R1 - cont)
]
commoncharges = [
BaseCharge(
np.random.randint(-5, 6, (DsA[n + R1 - cont], num_charges)),
charge_types=[U1Charge] * num_charges) for n in range(cont)
]
chargesB = [
BaseCharge(
np.random.randint(-5, 6, (DsB[n], num_charges)),
charge_types=[U1Charge] * num_charges) for n in range(R2 - cont)
]
#contracted indices
indsA = np.random.choice(np.arange(R1), cont, replace=False)
indsB = np.random.choice(np.arange(R2), cont, replace=False)
flowsA = np.full(R1, False, dtype=np.bool)
flowsB = np.full(R2, False, dtype=np.bool)
flowsB[indsB] = True
indicesA = [None for _ in range(R1)]
indicesB = [None for _ in range(R2)]
for n, iA in enumerate(indsA):
indicesA[iA] = Index(commoncharges[n], flowsA[iA])
indicesB[indsB[n]] = Index(commoncharges[n], flowsB[indsB[n]])
compA = list(set(np.arange(R1)) - set(indsA))
compB = list(set(np.arange(R2)) - set(indsB))
for n, cA in enumerate(compA):
indicesA[cA] = Index(chargesA[n], flowsA[cA])
for n, cB in enumerate(compB):
indicesB[cB] = Index(chargesB[n], flowsB[cB])
indices_final = []
for n in sorted(compA):
indices_final.append(indicesA[n])
for n in sorted(compB):
indices_final.append(indicesB[n])
A = BlockSparseTensor.random(indices=indicesA, dtype=dtype)
B = BlockSparseTensor.random(indices=indicesB, dtype=dtype)
return A, B, indsA, indsB
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R1, R2, cont", [(4, 4, 2), (4, 3, 3), (3, 4, 3)])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_tensordot(R1, R2, cont, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a, b, indsa, indsb = get_contractable_tensors(R1, R2, cont, dtype,
num_charges)
actual = backend.tensordot(a, b, (indsa, indsb))
expected = tensordot(a, b, (indsa, indsb))
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
def test_gmres_not_implemented():
backend = symmetric_backend.SymmetricBackend()
with pytest.raises(NotImplementedError):
backend.gmres(lambda x: x, np.ones((2)))
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5, 6, 7])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_reshape(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
shape = a.shape
partitions = np.append(
np.append(
0,
np.sort(
np.random.choice(
np.arange(1, R), np.random.randint(1, R), replace=False))), R)
new_shape = tuple([
np.prod(shape[partitions[n - 1]:partitions[n]])
for n in range(1, len(partitions))
])
actual = backend.shape_tuple(backend.reshape(a, new_shape))
assert actual == new_shape
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5, 6, 7])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_transpose(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
order = np.arange(R)
np.random.shuffle(order)
actual = backend.transpose(a, order)
expected = transpose(a, order)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5, 6, 7])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_transpose_default(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
order = np.arange(R)[::-1]
np.random.shuffle(order)
actual = backend.transpose(a)
expected = transpose(a, order)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
def test_shape_concat():
backend = symmetric_backend.SymmetricBackend()
a = np.asarray((2 * np.ones((1, 3, 1))))
b = np.asarray(np.ones((1, 2, 1)))
expected = backend.shape_concat((a, b), axis=1)
actual = np.array([[[2.0], [2.0], [2.0], [1.0], [1.0]]])
np.testing.assert_allclose(expected, actual)
def test_shape_tensor():
backend = symmetric_backend.SymmetricBackend()
a = np.asarray(np.ones([2, 3, 4]))
assert isinstance(backend.shape_tensor(a), tuple)
actual = backend.shape_tensor(a)
expected = np.array([2, 3, 4])
np.testing.assert_allclose(expected, actual)
def test_shape_tuple():
backend = symmetric_backend.SymmetricBackend()
a = np.asarray(np.ones([2, 3, 4]))
actual = backend.shape_tuple(a)
assert actual == (2, 3, 4)
def test_shape_prod():
backend = symmetric_backend.SymmetricBackend()
a = np.array(2 * np.ones([1, 2, 3, 4]))
actual = np.array(backend.shape_prod(a))
assert actual == 2**24
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5, 6, 7])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_sqrt(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
actual = backend.sqrt(a)
expected = sqrt(a)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R1, R2", [(2, 2), (2, 3), (3, 3)])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_outer_product(R1, R2, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R1, num_charges, dtype)
b = get_tensor(R2, num_charges, dtype)
actual = backend.outer_product(a, b)
expected = tensordot(a, b, 0)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_norm(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
assert backend.norm(a) == norm(a)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_eye(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
index = Index(
BaseCharge(
np.random.randint(-5, 6, (100, num_charges)),
charge_types=[U1Charge] * num_charges), False)
actual = backend.eye(index, dtype=dtype)
expected = eye(index, dtype=dtype)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_eye_dtype(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
index = Index(
BaseCharge(
np.random.randint(-5, 6, (100, num_charges)),
charge_types=[U1Charge] * num_charges), False)
actual = backend.eye(index, dtype=dtype)
assert actual.dtype == dtype
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_ones(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.ones(indices, dtype=dtype)
expected = ones(indices, dtype=dtype)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_ones_dtype(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.ones(indices, dtype=dtype)
assert actual.dtype == dtype
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_zeros(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.zeros(indices, dtype=dtype)
expected = zeros(indices, dtype=dtype)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_zeros_dtype(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.zeros(indices, dtype=dtype)
assert actual.dtype == dtype
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_randn(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.randn(indices, dtype=dtype, seed=10)
np.random.seed(10)
expected = randn(indices, dtype=dtype)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_randn_dtype(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.randn(indices, dtype=dtype, seed=10)
assert actual.dtype == dtype
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_random_uniform(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.random_uniform(indices, dtype=dtype, seed=10)
np.random.seed(10)
expected = random(indices, dtype=dtype)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_random_uniform_dtype(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.random_uniform(indices, dtype=dtype, seed=10)
assert actual.dtype == dtype
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("dtype", [np.complex64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_randn_non_zero_imag(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.randn(indices, dtype=dtype, seed=10)
assert np.linalg.norm(np.imag(actual.data)) != 0.0
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("dtype", [np.complex64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_random_uniform_non_zero_imag(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
actual = backend.random_uniform(indices, dtype=dtype, seed=10)
assert np.linalg.norm(np.imag(actual.data)) != 0.0
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_randn_seed(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
a = backend.randn(indices, dtype=dtype, seed=10)
b = backend.randn(indices, dtype=dtype, seed=10)
np.testing.assert_allclose(a.data, b.data)
assert np.all([
charge_equal(a._charges[n], b._charges[n])
for n in range(len(a._charges))
])
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_random_uniform_seed(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
a = backend.random_uniform(indices, dtype=dtype, seed=10)
b = backend.random_uniform(indices, dtype=dtype, seed=10)
np.testing.assert_allclose(a.data, b.data)
assert np.all([
charge_equal(a._charges[n], b._charges[n])
for n in range(len(a._charges))
])
@pytest.mark.parametrize("dtype", np_randn_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_random_uniform_boundaries(dtype, num_charges):
np.random.seed(10)
lb = 1.2
ub = 4.8
R = 4
backend = symmetric_backend.SymmetricBackend()
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (10, num_charges)),
charge_types=[U1Charge] * num_charges), False) for _ in range(R)
]
a = backend.random_uniform(indices, seed=10, dtype=dtype)
b = backend.random_uniform(indices, (lb, ub), seed=10, dtype=dtype)
assert ((a.data >= 0).all() and (a.data <= 1).all() and
(b.data >= lb).all() and (b.data <= ub).all())
@pytest.mark.parametrize(
"dtype", [np.complex64, np.complex128, np.float64, np.float32, np.float16])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_conj(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
aconj = backend.conj(a)
np.testing.assert_allclose(aconj.data, np.conj(a.data))
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_addition(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = BlockSparseTensor.random(a.sparse_shape)
res = backend.addition(a, b)
np.testing.assert_allclose(res.data, a.data + b.data)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_addition_raises(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = get_tensor(R + 1, num_charges, dtype)
with pytest.raises(ValueError):
backend.addition(a, b)
shape = b.sparse_shape
c = BlockSparseTensor.random([shape[n] for n in reversed(range(len(shape)))])
with pytest.raises(ValueError):
backend.addition(a, c)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_subtraction(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = BlockSparseTensor.random(a.sparse_shape)
res = backend.subtraction(a, b)
np.testing.assert_allclose(res.data, a.data - b.data)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("R", [2, 3, 4, 5])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_subbtraction_raises(R, dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = get_tensor(R + 1, num_charges, dtype)
with pytest.raises(ValueError):
backend.subtraction(a, b)
shape = b.sparse_shape
c = BlockSparseTensor.random([shape[n] for n in reversed(range(len(shape)))])
with pytest.raises(ValueError):
backend.subtraction(a, c)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_multiply(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
res = backend.multiply(a, 5.1)
np.testing.assert_allclose(res.data, a.data * 5.1)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_multiply_raises(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
with pytest.raises(TypeError):
backend.multiply(a, np.array([5.1]))
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_truediv(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
res = backend.divide(a, 5.1)
np.testing.assert_allclose(res.data, a.data / 5.1)
@pytest.mark.parametrize("dtype", np_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_truediv_raises(dtype, num_charges):
np.random.seed(10)
R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
with pytest.raises(TypeError):
backend.divide(a, np.array([5.1]))
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_eigh(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
H = get_hermitian_matrix(num_charges, dtype)
eta, U = backend.eigh(H)
eta_ac, U_ac = eigh(H)
np.testing.assert_allclose(eta.data, eta_ac.data)
np.testing.assert_allclose(U.data, U_ac.data)
assert charge_equal(eta._charges[0], eta_ac._charges[0])
assert np.all([
charge_equal(U._charges[n], U_ac._charges[n])
for n in range(len(U._charges))
])
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_matrix_inv(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
H = get_hermitian_matrix(num_charges, dtype)
Hinv = backend.inv(H)
Hinv_ac = inv(H)
np.testing.assert_allclose(Hinv_ac.data, Hinv.data)
assert np.all([
charge_equal(Hinv._charges[n], Hinv_ac._charges[n])
for n in range(len(Hinv._charges))
])
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_matrix_inv_raises(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
H = get_tensor(3, num_charges, dtype)
with pytest.raises(ValueError):
backend.inv(H)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_broadcast_right_multiplication(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
Ds = [10, 30, 24]
R = len(Ds)
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges), False) for n in range(R)
]
tensor1 = backend.randn(indices, dtype=dtype)
tensor2 = ChargeArray.random(
indices=[indices[-1].copy().flip_flow()], dtype=dtype)
t1dense = tensor1.todense()
t2dense = tensor2.todense()
out = backend.broadcast_right_multiplication(tensor1, tensor2)
dense = t1dense * t2dense
np.testing.assert_allclose(out.todense(), dense)
def test_broadcast_right_multiplication_raises():
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
num_charges = 1
Ds = [10, 30, 24]
R = len(Ds)
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges), False) for n in range(R)
]
tensor1 = backend.randn(indices)
tensor2 = ChargeArray.random(indices=indices)
with pytest.raises(ValueError):
backend.broadcast_right_multiplication(tensor1, tensor2)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_broadcast_left_multiplication(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
Ds = [10, 30, 24]
R = len(Ds)
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges), False) for n in range(R)
]
tensor1 = ChargeArray.random(indices=[indices[0]], dtype=dtype)
tensor2 = backend.randn(indices, dtype=dtype)
t1dense = tensor1.todense()
t2dense = tensor2.todense()
out = backend.broadcast_left_multiplication(tensor1, tensor2)
dense = np.reshape(t1dense, (10, 1, 1)) * t2dense
np.testing.assert_allclose(out.todense(), dense)
def test_broadcast_left_multiplication_raises():
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
num_charges = 1
Ds = [10, 30, 24]
R = len(Ds)
indices = [
Index(
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges), False) for n in range(R)
]
tensor1 = ChargeArray.random(indices=indices)
tensor2 = backend.randn(indices)
with pytest.raises(ValueError):
backend.broadcast_left_multiplication(tensor1, tensor2)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("num_charges", [1, 2])
def test_sparse_shape(dtype, num_charges):
np.random.seed(10)
Ds = [11, 12, 13]
R = len(Ds)
charges = [
BaseCharge(
np.random.randint(-5, 6, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges) for n in range(R)
]
flows = list(np.full(R, fill_value=False, dtype=np.bool))
indices = [Index(charges[n], flows[n]) for n in range(R)]
a = BlockSparseTensor.random(indices=indices, dtype=dtype)
backend = symmetric_backend.SymmetricBackend()
for s1, s2 in zip(a.sparse_shape, backend.sparse_shape(a)):
assert s1 == s2
#################################################################
# the following are sanity checks for eigsh_lanczos which do not
# really use block sparsity (all charges are identity charges)
#################################################################
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
def test_eigsh_valid_init_operator_with_shape_sanity_check(dtype):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
D = 16
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
a = BlockSparseTensor.random(indices, dtype=dtype)
H = a + a.T.conj()
def mv(vec, mat):
return mat @ vec
init = BlockSparseTensor.random([index], dtype=dtype)
eta1, U1 = backend.eigsh_lanczos(mv, [H], init)
v1 = np.reshape(U1[0].todense(), (D))
v1 = v1 / sum(v1)
eta2, U2 = np.linalg.eigh(H.todense())
v2 = U2[:, 0]
v2 = v2 / sum(v2)
np.testing.assert_allclose(eta1[0], min(eta2))
np.testing.assert_allclose(v1, v2)
def test_eigsh_small_number_krylov_vectors_sanity_check():
np.random.seed(10)
dtype = np.float64
backend = symmetric_backend.SymmetricBackend()
index = Index(U1Charge.random(2, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
H.data = np.array([1, 2, 3, 4], dtype=np.float64)
init = BlockSparseTensor.random([index], dtype=dtype)
init.data = np.array([1, 1], dtype=np.float64)
def mv(x, mat):
return mat @ x
eta, _ = backend.eigsh_lanczos(mv, [H], init, num_krylov_vecs=1)
np.testing.assert_allclose(eta[0], 5)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
def test_eigsh_lanczos_sanity_check_1(dtype):
np.random.seed(10)
D = 16
backend = symmetric_backend.SymmetricBackend()
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
H = H + H.conj().T
init = BlockSparseTensor.random([index], dtype=dtype)
def mv(x, mat):
return mat @ x
eta1, U1 = backend.eigsh_lanczos(mv, [H], init)
eta2, U2 = np.linalg.eigh(H.todense())
v1 = np.reshape(U1[0].todense(), (D))
v1 = v1 / sum(v1)
v2 = U2[:, 0]
v2 = v2 / sum(v2)
np.testing.assert_allclose(eta1[0], min(eta2))
np.testing.assert_allclose(v1, v2)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
def test_eigsh_lanczos_sanity_check_2(dtype):
np.random.seed(10)
D = 16
backend = symmetric_backend.SymmetricBackend()
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
H = H + H.conj().T
def mv(x, mat):
return mat @ x
eta1, U1 = backend.eigsh_lanczos(
mv, [H], shape=(H.sparse_shape[1].flip_flow(),), dtype=dtype)
eta2, U2 = np.linalg.eigh(H.todense())
v1 = np.reshape(U1[0].todense(), (D))
v1 = v1 / sum(v1)
v2 = U2[:, 0]
v2 = v2 / sum(v2)
np.testing.assert_allclose(eta1[0], min(eta2))
np.testing.assert_allclose(v1, v2)
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
@pytest.mark.parametrize("numeig", [1, 2, 3, 4])
def test_eigsh_lanczos_reorthogonalize_sanity_check(dtype, numeig):
np.random.seed(10)
D = 24
backend = symmetric_backend.SymmetricBackend()
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
H = H + H.conj().T
def mv(x, mat):
return mat @ x
eta1, U1 = backend.eigsh_lanczos(
mv, [H],
shape=(H.sparse_shape[1].flip_flow(),),
dtype=dtype,
numeig=numeig,
num_krylov_vecs=D,
reorthogonalize=True,
ndiag=1,
tol=10**(-12),
delta=10**(-12))
eta2, U2 = np.linalg.eigh(H.todense())
np.testing.assert_allclose(eta1[0:numeig], eta2[0:numeig])
for n in range(numeig):
v2 = U2[:, n]
v2 /= np.sum(v2) #fix phases
v1 = np.reshape(U1[n].todense(), (D))
v1 /= np.sum(v1)
np.testing.assert_allclose(v1, v2, rtol=10**(-5), atol=10**(-5))
#################################################################
# finished sanity checks
#################################################################
def test_eigsh_lanczos_raises():
backend = symmetric_backend.SymmetricBackend()
with pytest.raises(
ValueError, match='`num_krylov_vecs` >= `numeig` required!'):
backend.eigsh_lanczos(lambda x: x, numeig=10, num_krylov_vecs=9)
with pytest.raises(
ValueError,
match="Got numeig = 2 > 1 and `reorthogonalize = False`. "
"Use `reorthogonalize=True` for `numeig > 1`"):
backend.eigsh_lanczos(lambda x: x, numeig=2, reorthogonalize=False)
with pytest.raises(
ValueError,
match="if no `initial_state` is passed, then `shape` and"
"`dtype` have to be provided"):
backend.eigsh_lanczos(lambda x: x, shape=(10,), dtype=None)
with pytest.raises(
ValueError,
match="if no `initial_state` is passed, then `shape` and"
"`dtype` have to be provided"):
backend.eigsh_lanczos(lambda x: x, shape=None, dtype=np.float64)
with pytest.raises(
ValueError,
match="if no `initial_state` is passed, then `shape` and"
"`dtype` have to be provided"):
backend.eigsh_lanczos(lambda x: x)
with pytest.raises(
TypeError, match="Expected a `BlockSparseTensor`. Got <class 'list'>"):
backend.eigsh_lanczos(lambda x: x, initial_state=[1, 2, 3])
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
def test_eigsh_valid_init_operator_with_shape(dtype):
np.random.seed(100)
backend = symmetric_backend.SymmetricBackend()
np_backend = numpy_backend.NumPyBackend()
D = 16
index = Index(U1Charge.random(D, -1, 1), True)
indices = [index, index.copy().flip_flow()]
a = BlockSparseTensor.random(indices, dtype=dtype)
H = a + a.T.conj()
def mv(vec, mat):
return mat @ vec
init = BlockSparseTensor.random([index], dtype=dtype)
# note: this will only find eigenvalues in the charge (0,0)
# block of H because `init` only has non-zero values there.
# To find eigen values in other sectors we need to support non-zero
# divergence for block-sparse tensors
eta1, U1 = backend.eigsh_lanczos(mv, [H], init)
eta2, U2 = np_backend.eigsh_lanczos(mv, [H.todense()], init.todense())
v1 = np.reshape(U1[0].todense(), (D))
v1 = v1 / sum(v1)
v1 /= np.linalg.norm(v1)
v2 = np.reshape(U2[0], (D))
v2 = v2 / sum(v2)
v2[np.abs(v2) < 1E-12] = 0.0
v2 /= np.linalg.norm(v2)
np.testing.assert_allclose(eta1[0], min(eta2))
np.testing.assert_allclose(v1, v2)
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
def test_diagflat(dtype, num_charges):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(3, num_charges, dtype)
with pytest.raises(ValueError):
backend.diagflat(a)
b = get_chargearray(num_charges, dtype)
expected = diag(b)
actual = backend.diagflat(b)
np.testing.assert_allclose(expected.data, actual.data)
assert np.all([
charge_equal(expected._charges[n], actual._charges[n])
for n in range(len(actual._charges))
])
with pytest.raises(
NotImplementedError, match="Can't specify k with Symmetric backend"):
actual = backend.diagflat(b, k=1)
@pytest.mark.parametrize('dtype', np_dtypes)
@pytest.mark.parametrize('num_charges', [1, 2, 3])
@pytest.mark.parametrize('Ds', [[200, 100], [100, 200]])
@pytest.mark.parametrize('flow', [False, True])
def test_diagonal(Ds, dtype, num_charges, flow):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
np_flow = -np.int((np.int(flow) - 0.5) * 2)
indices = [
Index(
BaseCharge(
np.random.randint(-2, 3, (Ds[n], num_charges)),
charge_types=[U1Charge] * num_charges), flow) for n in range(2)
]
arr = BlockSparseTensor.random(indices, dtype=dtype)
fused = fuse_charges(arr.flat_charges, arr.flat_flows)
inds = np.nonzero(fused == np.zeros((1, num_charges), dtype=np.int16))[0]
# pylint: disable=no-member
left, _ = np.divmod(inds, Ds[1])
unique = np.unique(
np_flow * (indices[0]._charges[0].charges[left, :]), axis=0)
diagonal = backend.diagonal(arr)
sparse_blocks, _, block_shapes = _find_diagonal_sparse_blocks(
arr.flat_charges, arr.flat_flows, 1)
data = np.concatenate([
np.diag(np.reshape(arr.data[sparse_blocks[n]], block_shapes[:, n]))
for n in range(len(sparse_blocks))
])
np.testing.assert_allclose(data, diagonal.data)
np.testing.assert_allclose(unique, diagonal.flat_charges[0].unique_charges)
with pytest.raises(NotImplementedError):
diagonal = backend.diagonal(arr, axis1=0)
with pytest.raises(NotImplementedError):
diagonal = backend.diagonal(arr, axis2=1)
with pytest.raises(NotImplementedError):
diagonal = backend.diagonal(arr, offset=1)
@pytest.mark.parametrize("dtype", np_tensordot_dtypes)
@pytest.mark.parametrize("num_charges", [1, 2])
@pytest.mark.parametrize("offset", [0, 1])
@pytest.mark.parametrize("axis1", range(0, 1))
@pytest.mark.parametrize("axis2", range(0, 1))
def test_trace(dtype, num_charges, offset, axis1, axis2):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
a = get_square_matrix(num_charges, dtype)
if offset != 0:
with pytest.raises(NotImplementedError):
actual = backend.trace(a, offset=offset, axis1=axis1, axis2=axis2)
else:
if axis1 == axis2:
with pytest.raises(ValueError):
actual = backend.trace(a, offset=offset, axis1=axis1, axis2=axis2)
else:
actual = backend.trace(a, offset=offset, axis1=axis1, axis2=axis2)
expected = trace(a, [axis1, axis2])
np.testing.assert_allclose(actual.data, expected.data)
def test_pivot_not_implemented():
backend = symmetric_backend.SymmetricBackend()
with pytest.raises(NotImplementedError):
backend.pivot(np.ones((2, 2)))
def test_eigsh_lanczos_caching():
def matvec(mps, A, B, C):
return ncon([A, mps, B, C],
[[3, 1, -1], [1, 2, 4], [3, 5, -2, 2], [5, 4, -3]],
backend='symmetric')
backend = symmetric_backend.SymmetricBackend()
D = 100
M = 5
mpsinds = [
Index(U1Charge(np.random.randint(5, 15, D, dtype=np.int16)), False),
Index(U1Charge(np.array([0, 1, 2, 3], dtype=np.int16)), False),
Index(U1Charge(np.random.randint(5, 18, D, dtype=np.int16)), True)
]
mpoinds = [
Index(U1Charge(np.random.randint(0, 5, M)), False),
Index(U1Charge(np.random.randint(0, 10, M)), True), mpsinds[1],
mpsinds[1].flip_flow()
]
Linds = [mpoinds[0].flip_flow(), mpsinds[0].flip_flow(), mpsinds[0]]
Rinds = [mpoinds[1].flip_flow(), mpsinds[2].flip_flow(), mpsinds[2]]
mps = BlockSparseTensor.random(mpsinds)
mpo = BlockSparseTensor.random(mpoinds)
L = BlockSparseTensor.random(Linds)
R = BlockSparseTensor.random(Rinds)
ncv = 20
backend.eigsh_lanczos(
matvec, [L, mpo, R], initial_state=mps, num_krylov_vecs=ncv)
assert get_cacher().cache == {}
def test_eigsh_lanczos_cache_exception():
dtype = np.float64
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
D = 16
index = Index(U1Charge.random(D, 0, 0), True)
def mv(vec):
raise ValueError()
init = BlockSparseTensor.random([index], dtype=dtype)
with pytest.raises(ValueError):
backend.eigsh_lanczos(mv, [], init)
cacher = get_cacher()
assert not cacher.do_caching
assert not get_caching_status()
assert cacher.cache == {}
def compare_eigvals_and_eigvecs(U, eta, U_exact, eta_exact, thresh=1E-8):
_, iy = np.nonzero(np.abs(eta[:, None] - eta_exact[None, :]) < thresh)
U_exact_perm = U_exact[:, iy]
U_exact_perm = U_exact_perm / np.expand_dims(np.sum(U_exact_perm, axis=0), 0)
U = U / np.expand_dims(np.sum(U, axis=0), 0)
np.testing.assert_allclose(U_exact_perm, U)
np.testing.assert_allclose(eta, eta_exact[iy])
#################################################################
# the following is a sanity check for eigs which does not
# really use block sparsity (all charges are identity charges)
#################################################################
@pytest.mark.parametrize("dtype", [np.float64, np.complex128])
def test_eigs_valid_init_operator_with_shape_sanity_check(dtype):
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
D = 16
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
def mv(vec, mat):
return mat @ vec
init = BlockSparseTensor.random([index], dtype=dtype)
eta1, U1 = backend.eigs(mv, [H], init)
eta2, U2 = np.linalg.eig(H.todense())
compare_eigvals_and_eigvecs(
np.stack([u.todense() for u in U1], axis=1), eta1, U2, eta2, thresh=1E-8)
def test_eigs_cache_exception():
dtype = np.float64
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
D = 16
index = Index(U1Charge.random(D, 0, 0), True)
def mv(vec):
raise ValueError()
init = BlockSparseTensor.random([index], dtype=dtype)
with pytest.raises(ValueError):
backend.eigs(mv, [], init)
cacher = get_cacher()
assert not cacher.do_caching
assert not get_caching_status()
assert cacher.cache == {}
def test_eigs_raises():
np.random.seed(10)
dtype = np.float64
backend = symmetric_backend.SymmetricBackend()
D = 16
index = Index(U1Charge.random(D, 0, 0), True)
indices = [index, index.copy().flip_flow()]
H = BlockSparseTensor.random(indices, dtype=dtype)
init = BlockSparseTensor.random([index], dtype=dtype)
with pytest.raises(
ValueError, match='which = SI is currently not supported.'):
backend.eigs(lambda x: x, [H], initial_state=init, which='SI')
with pytest.raises(
ValueError, match='which = LI is currently not supported.'):
backend.eigs(lambda x: x, [H], initial_state=init, which='LI')
with pytest.raises(
ValueError,
match="if no `initial_state` is passed, then `shape` and"
"`dtype` have to be provided"):
backend.eigs(lambda x: x, [H])
with pytest.raises(ValueError, match="`num_krylov_vecs`"):
backend.eigs(lambda x: x, [H], numeig=3, num_krylov_vecs=3)
with pytest.raises(TypeError, match="Expected a"):
backend.eigs(lambda x: x, [H], initial_state=[])
def test_decomps_raise():
np.random.seed(10)
dtype = np.float64
backend = symmetric_backend.SymmetricBackend()
D = 16
R = 3
indices = [Index(U1Charge.random(D, -5, 5), True) for _ in range(R)]
H = BlockSparseTensor.random(indices, dtype=dtype)
with pytest.raises(
NotImplementedError,
match="Can't specify non_negative_diagonal with BlockSparse."):
backend.qr(H, non_negative_diagonal=True)
with pytest.raises(
NotImplementedError,
match="Can't specify non_negative_diagonal with BlockSparse."):
backend.rq(H, non_negative_diagonal=True)
def test_convert_to_tensor_raises():
np.random.seed(10)
backend = symmetric_backend.SymmetricBackend()
with pytest.raises(TypeError, match="cannot convert tensor of type"):
backend.convert_to_tensor(np.random.rand(3, 3))
def test_einsum_raises():
backend = symmetric_backend.SymmetricBackend()
with pytest.raises(
NotImplementedError, match="`einsum` currently not implemented"):
backend.einsum('', [])
| 33.755713 | 117 | 0.676697 | 5,962 | 42,836 | 4.724254 | 0.060382 | 0.060001 | 0.082759 | 0.087233 | 0.801924 | 0.758574 | 0.728183 | 0.702869 | 0.6723 | 0.6543 | 0 | 0.030901 | 0.166706 | 42,836 | 1,268 | 118 | 33.782334 | 0.758173 | 0.01319 | 0 | 0.622517 | 0 | 0 | 0.03683 | 0.001529 | 0 | 0 | 0 | 0 | 0.075686 | 1 | 0.07474 | false | 0.003784 | 0.009461 | 0.007569 | 0.0965 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e40a760b77fb785161d12a76b0a5ad1602d1d59a | 89 | py | Python | project0/companies/admin.py | priyanshgupta1998/Python-django | 00ed73ddf22ebaaec9221c766f934d4fe42a3778 | [
"MIT"
] | null | null | null | project0/companies/admin.py | priyanshgupta1998/Python-django | 00ed73ddf22ebaaec9221c766f934d4fe42a3778 | [
"MIT"
] | null | null | null | project0/companies/admin.py | priyanshgupta1998/Python-django | 00ed73ddf22ebaaec9221c766f934d4fe42a3778 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Stock
admin.site.register(Stock) | 22.25 | 33 | 0.797753 | 13 | 89 | 5.461538 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 89 | 4 | 34 | 22.25 | 0.922078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c158913793b9714649054c6bc8fb3fc2423cb12 | 96 | py | Python | bitmovin_api_sdk/encoding/filters/audio_mix/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/filters/audio_mix/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/filters/audio_mix/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.filters.audio_mix.customdata.customdata_api import CustomdataApi
| 48 | 95 | 0.90625 | 13 | 96 | 6.384615 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 96 | 1 | 96 | 96 | 0.902174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c57db5de8cf2bdfae342534b6f3d08decee4825 | 476 | py | Python | guillotina/api/__init__.py | onna/guillotina | e4b05d97a1a8e4c211c6b97b2770ced1d97c4569 | [
"BSD-2-Clause"
] | null | null | null | guillotina/api/__init__.py | onna/guillotina | e4b05d97a1a8e4c211c6b97b2770ced1d97c4569 | [
"BSD-2-Clause"
] | null | null | null | guillotina/api/__init__.py | onna/guillotina | e4b05d97a1a8e4c211c6b97b2770ced1d97c4569 | [
"BSD-2-Clause"
] | null | null | null | # these imports are done to force loading services
from . import addons # noqa
from . import app # noqa
from . import behaviors # noqa
from . import container # noqa
from . import content # noqa
from . import dynamic # noqa
from . import files # noqa
from . import registry # noqa
from . import search # noqa
from . import storage # noqa
from . import types # noqa
from . import user # noqa
from . import ws # noqa
from guillotina.json import definitions # noqa
| 29.75 | 50 | 0.714286 | 66 | 476 | 5.151515 | 0.409091 | 0.382353 | 0.494118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22479 | 476 | 15 | 51 | 31.733333 | 0.921409 | 0.247899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c5e6abb85c67218da386ebc3a4d8493e93bc0f2 | 32 | py | Python | pydumpi/__init__.py | justacid/pydumpi | b2fd6dd37d85fe80e941d68005ccd3502f5cee4d | [
"MIT"
] | null | null | null | pydumpi/__init__.py | justacid/pydumpi | b2fd6dd37d85fe80e941d68005ccd3502f5cee4d | [
"MIT"
] | null | null | null | pydumpi/__init__.py | justacid/pydumpi | b2fd6dd37d85fe80e941d68005ccd3502f5cee4d | [
"MIT"
] | null | null | null | from .undumpi import DumpiTrace
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
7c787b4c7348a50c77ea6b03aaa8419f36f63aa1 | 176 | py | Python | src/globus_sdk/services/transfer/response/__init__.py | sirosen/globus-sdk-python | 0d4e420f52329ab8f993bfe6f86729fb1ef07570 | [
"ECL-2.0",
"Apache-2.0"
] | 47 | 2016-04-13T21:28:19.000Z | 2022-02-28T18:28:18.000Z | src/globus_sdk/services/transfer/response/__init__.py | sirosen/globus-sdk-python | 0d4e420f52329ab8f993bfe6f86729fb1ef07570 | [
"ECL-2.0",
"Apache-2.0"
] | 314 | 2016-04-12T15:07:32.000Z | 2022-03-14T21:00:50.000Z | src/globus_sdk/services/transfer/response/__init__.py | sirosen/globus-sdk-python | 0d4e420f52329ab8f993bfe6f86729fb1ef07570 | [
"ECL-2.0",
"Apache-2.0"
] | 36 | 2016-06-14T14:05:13.000Z | 2022-02-18T17:20:51.000Z | from .activation import ActivationRequirementsResponse
from .iterable import IterableTransferResponse
__all__ = ["IterableTransferResponse", "ActivationRequirementsResponse"]
| 35.2 | 72 | 0.869318 | 11 | 176 | 13.545455 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073864 | 176 | 4 | 73 | 44 | 0.91411 | 0 | 0 | 0 | 0 | 0 | 0.306818 | 0.306818 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c9d72436dee7c49600faf3f4e958eca53acd0fe | 47 | py | Python | companies.py | eeti3084/python_learning | 38f2394bbabf6d836905495619ad9b4e5dd8f0af | [
"MIT"
] | null | null | null | companies.py | eeti3084/python_learning | 38f2394bbabf6d836905495619ad9b4e5dd8f0af | [
"MIT"
] | null | null | null | companies.py | eeti3084/python_learning | 38f2394bbabf6d836905495619ad9b4e5dd8f0af | [
"MIT"
] | null | null | null |
i = 0
for i in range(1, 10):
print(10)
i += 1
| 7.833333 | 22 | 0.531915 | 12 | 47 | 2.083333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205882 | 0.276596 | 47 | 5 | 23 | 9.4 | 0.529412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7cbc5ceb2bd2a119a9340ff4f0187c46c2ec3280 | 1,148 | py | Python | logs_manager/migrations/0007_auto_20200724_0725.py | adwait-thattey/raygun_api | 9d5571de452fbf70d34b9583ebc42eb662292f61 | [
"MIT"
] | null | null | null | logs_manager/migrations/0007_auto_20200724_0725.py | adwait-thattey/raygun_api | 9d5571de452fbf70d34b9583ebc42eb662292f61 | [
"MIT"
] | 7 | 2020-06-06T01:40:06.000Z | 2022-02-10T09:12:56.000Z | logs_manager/migrations/0007_auto_20200724_0725.py | adwait-thattey/raygun_api | 9d5571de452fbf70d34b9583ebc42eb662292f61 | [
"MIT"
] | 1 | 2021-08-16T13:23:34.000Z | 2021-08-16T13:23:34.000Z | # Generated by Django 2.1.5 on 2020-07-24 07:25
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('logs_manager', '0006_auto_20200724_0722'),
]
operations = [
migrations.AlterField(
model_name='userinteraction',
name='element',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='userinteraction',
name='elementId',
field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='userinteraction',
name='innerText',
field=models.TextField(blank=True, null=True),
),
migrations.AlterField(
model_name='userinteraction',
name='location',
field=models.TextField(blank=True, null=True),
),
migrations.AlterField(
model_name='userinteraction',
name='timestamp',
field=models.CharField(blank=True, max_length=200, null=True),
),
]
| 29.435897 | 74 | 0.582753 | 109 | 1,148 | 6.027523 | 0.40367 | 0.152207 | 0.190259 | 0.2207 | 0.701674 | 0.701674 | 0.628615 | 0.628615 | 0.628615 | 0.628615 | 0 | 0.049938 | 0.302265 | 1,148 | 38 | 75 | 30.210526 | 0.770287 | 0.039199 | 0 | 0.625 | 1 | 0 | 0.138056 | 0.02089 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7cc868991267e5a83c6d269f5bad71e2804e1fbb | 15,540 | py | Python | sushy_tools/tests/unit/emulator/test_libvirt.py | vcrhonek/sushy-tools | ff4e7f4e339a1bce09204dc1bcccc261b3b6615e | [
"Apache-2.0"
] | null | null | null | sushy_tools/tests/unit/emulator/test_libvirt.py | vcrhonek/sushy-tools | ff4e7f4e339a1bce09204dc1bcccc261b3b6615e | [
"Apache-2.0"
] | null | null | null | sushy_tools/tests/unit/emulator/test_libvirt.py | vcrhonek/sushy-tools | ff4e7f4e339a1bce09204dc1bcccc261b3b6615e | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
import libvirt
from oslotest import base
from six.moves import mock
from sushy_tools.emulator.drivers.libvirtdriver import LibvirtDriver
from sushy_tools import error
class LibvirtDriverTestCase(base.BaseTestCase):
name = 'QEmu-fedora-i686'
uuid = 'c7a5fdbd-cdaf-9455-926a-d65c16db1809'
def setUp(self):
self.test_driver = LibvirtDriver()
super(LibvirtDriverTestCase, self).setUp()
@mock.patch('libvirt.open', autospec=True)
def test__get_domain_by_name(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
lookupByUUID_mock = conn_mock.lookupByUUID
domain_mock = lookupByUUID_mock.return_value
domain_mock.UUIDString.return_value = self.uuid
self.assertRaises(
error.AliasAccessError, self.test_driver._get_domain, self.name)
@mock.patch('libvirt.open', autospec=True)
def test__get_domain_by_uuid(self, libvirt_mock):
domain_id = uuid.UUID(self.uuid)
conn_mock = libvirt_mock.return_value
lookupByUUID_mock = conn_mock.lookupByUUID
self.test_driver._get_domain(str(domain_id))
lookupByUUID_mock.assert_called_once_with(domain_id.bytes)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_uuid(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByName.return_value
domain_mock.UUIDString.return_value = self.uuid
self.assertRaises(error.AliasAccessError,
self.test_driver.uuid, 'name')
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_systems(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain = mock.MagicMock()
domain.UUIDString.return_value = self.uuid
conn_mock.listDefinedDomains.return_value = [domain]
systems = self.test_driver.systems
self.assertEqual([self.uuid], systems)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_power_state_on(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.UUIDString.return_value = self.uuid
domain_mock.isActive.return_value = True
domain_mock.maxMemory.return_value = 1024 * 1024
domain_mock.maxVcpus.return_value = 2
power_state = self.test_driver.get_power_state(self.uuid)
self.assertEqual('On', power_state)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_power_state_off(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = False
power_state = self.test_driver.get_power_state(self.uuid)
self.assertEqual('Off', power_state)
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_on(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = False
self.test_driver.set_power_state(self.uuid, 'On')
domain_mock.create.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_forceon(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = False
self.test_driver.set_power_state(self.uuid, 'ForceOn')
domain_mock.create.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_forceoff(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
self.test_driver.set_power_state(self.uuid, 'ForceOff')
domain_mock.destroy.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_gracefulshutdown(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
self.test_driver.set_power_state(self.uuid, 'GracefulShutdown')
domain_mock.shutdown.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_gracefulrestart(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
self.test_driver.set_power_state(self.uuid, 'GracefulRestart')
domain_mock.reboot.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_forcerestart(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
self.test_driver.set_power_state(self.uuid, 'ForceRestart')
domain_mock.reset.assert_called_once_with()
@mock.patch('libvirt.open', autospec=True)
def test_set_power_state_nmi(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
self.test_driver.set_power_state(self.uuid, 'Nmi')
domain_mock.injectNMI.assert_called_once_with()
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_boot_device(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml', 'r') as f:
data = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = data
boot_device = self.test_driver.get_boot_device(self.uuid)
self.assertEqual('Cd', boot_device)
@mock.patch('libvirt.open', autospec=True)
def test_set_boot_device(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml', 'r') as f:
data = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = data
self.test_driver.set_boot_device(self.uuid, 'Hdd')
conn_mock.defineXML.assert_called_once_with(mock.ANY)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_boot_mode(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml', 'r') as f:
data = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = data
boot_mode = self.test_driver.get_boot_mode(self.uuid)
self.assertEqual('Legacy', boot_mode)
@mock.patch('libvirt.open', autospec=True)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_set_boot_mode(self, libvirt_mock, libvirt_rw_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml', 'r') as f:
data = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = data
self.test_driver.set_boot_mode(self.uuid, 'Uefi')
conn_mock = libvirt_rw_mock.return_value
conn_mock.defineXML.assert_called_once_with(mock.ANY)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_total_memory(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.maxMemory.return_value = 1024 * 1024
memory = self.test_driver.get_total_memory(self.uuid)
self.assertEqual(1, memory)
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_total_cpus(self, libvirt_mock):
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.isActive.return_value = True
domain_mock.XMLDesc.return_value = b'<empty/>'
domain_mock.maxVcpus.return_value = 2
cpus = self.test_driver.get_total_cpus(self.uuid)
self.assertEqual(2, cpus)
@mock.patch('libvirt.open', autospec=True)
def test_get_bios(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
bios_attributes = self.test_driver.get_bios(self.uuid)
self.assertEqual(LibvirtDriver.DEFAULT_BIOS_ATTRIBUTES,
bios_attributes)
conn_mock.defineXML.assert_called_once_with(mock.ANY)
@mock.patch('libvirt.open', autospec=True)
def test_get_bios_existing(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain_bios.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
bios_attributes = self.test_driver.get_bios(self.uuid)
self.assertEqual({"BootMode": "Bios",
"EmbeddedSata": "Raid",
"NicBoot1": "NetworkBoot",
"ProcTurboMode": "Disabled"},
bios_attributes)
conn_mock.defineXML.assert_not_called()
@mock.patch('libvirt.open', autospec=True)
def test_set_bios(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain_bios.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
self.test_driver.set_bios(self.uuid,
{"BootMode": "Uefi",
"ProcTurboMode": "Enabled"})
conn_mock.defineXML.assert_called_once_with(mock.ANY)
@mock.patch('libvirt.open', autospec=True)
def test_reset_bios(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain_bios.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
self.test_driver.reset_bios(self.uuid)
conn_mock.defineXML.assert_called_once_with(mock.ANY)
def test__process_bios_attributes_get_default(self):
with open('sushy_tools/tests/unit/emulator/domain.xml') as f:
domain_xml = f.read()
result = self.test_driver._process_bios_attributes(domain_xml)
self.assertTrue(result.attributes_written)
self.assertEqual(LibvirtDriver.DEFAULT_BIOS_ATTRIBUTES,
result.bios_attributes)
self._assert_bios_xml(result.tree)
def test__process_bios_attributes_get_default_metadata_exists(self):
with open('sushy_tools/tests/unit/emulator/'
'domain_metadata.xml') as f:
domain_xml = f.read()
result = self.test_driver._process_bios_attributes(domain_xml)
self.assertTrue(result.attributes_written)
self.assertEqual(LibvirtDriver.DEFAULT_BIOS_ATTRIBUTES,
result.bios_attributes)
self._assert_bios_xml(result.tree)
def test__process_bios_attributes_get_existing(self):
with open('sushy_tools/tests/unit/emulator/domain_bios.xml') as f:
domain_xml = f.read()
result = self.test_driver._process_bios_attributes(domain_xml)
self.assertFalse(result.attributes_written)
self.assertEqual({"BootMode": "Bios",
"EmbeddedSata": "Raid",
"NicBoot1": "NetworkBoot",
"ProcTurboMode": "Disabled"},
result.bios_attributes)
self._assert_bios_xml(result.tree)
def test__process_bios_attributes_update(self):
with open('sushy_tools/tests/unit/emulator/domain_bios.xml') as f:
domain_xml = f.read()
result = self.test_driver._process_bios_attributes(
domain_xml,
{"BootMode": "Uefi",
"ProcTurboMode": "Enabled"},
True)
self.assertTrue(result.attributes_written)
self.assertEqual({"BootMode": "Uefi",
"ProcTurboMode": "Enabled"},
result.bios_attributes)
self._assert_bios_xml(result.tree)
def _assert_bios_xml(self, tree):
ns = {'sushy': 'http://openstack.org/xmlns/libvirt/sushy'}
self.assertIsNotNone(tree.find('metadata')
.find('sushy:bios', ns)
.find('sushy:attributes', ns))
@mock.patch('libvirt.open', autospec=True)
def test__process_bios_error(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
conn_mock.defineXML.side_effect = libvirt.libvirtError(
'because I can')
self.assertRaises(error.FishyError,
self.test_driver._process_bios,
'xxx-yyy-zzz',
{"BootMode": "Uefi",
"ProcTurboMode": "Enabled"})
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_nics(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain_nics.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
nics = self.test_driver.get_nics(self.uuid)
self.assertEqual([{'id': '00:11:22:33:44:55',
'mac': '00:11:22:33:44:55'},
{'id': '52:54:00:4e:5d:37',
'mac': '52:54:00:4e:5d:37'}],
sorted(nics, key=lambda k: k['id']))
@mock.patch('libvirt.openReadOnly', autospec=True)
def test_get_nics_empty(self, libvirt_mock):
with open('sushy_tools/tests/unit/emulator/domain.xml') as f:
domain_xml = f.read()
conn_mock = libvirt_mock.return_value
domain_mock = conn_mock.lookupByUUID.return_value
domain_mock.XMLDesc.return_value = domain_xml
nics = self.test_driver.get_nics(self.uuid)
self.assertEqual([], nics)
| 39.744246 | 78 | 0.67677 | 1,930 | 15,540 | 5.14456 | 0.110881 | 0.090845 | 0.095881 | 0.101521 | 0.796958 | 0.773089 | 0.746601 | 0.729177 | 0.705711 | 0.683453 | 0 | 0.007499 | 0.227735 | 15,540 | 390 | 79 | 39.846154 | 0.819848 | 0.035135 | 0 | 0.622837 | 0 | 0 | 0.11209 | 0.045797 | 0 | 0 | 0 | 0 | 0.145329 | 1 | 0.110727 | false | 0 | 0.020761 | 0 | 0.141869 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7cd939ae3940c5c2c223b3db009166e805f85572 | 208 | py | Python | Python Tutorial Reinforcement Learning/9_muzero/games/utils/physics.py | PaulPan00/donkey_wrapper | a03cf0f42f65625fbce792b06c98acd153c5d6c8 | [
"MIT"
] | 6 | 2021-03-26T01:42:31.000Z | 2021-04-11T16:17:42.000Z | Python Tutorial Reinforcement Learning/9_muzero/games/utils/physics.py | packetsss/Python | a03cf0f42f65625fbce792b06c98acd153c5d6c8 | [
"MIT"
] | null | null | null | Python Tutorial Reinforcement Learning/9_muzero/games/utils/physics.py | packetsss/Python | a03cf0f42f65625fbce792b06c98acd153c5d6c8 | [
"MIT"
] | 7 | 2021-04-06T06:55:22.000Z | 2021-05-03T11:26:38.000Z | # Create by Packetsss
# Personal use is allowed
# Commercial use is prohibited
import numpy as np
def distance_between_two_points(p1, p2):
return np.sqrt(((p1[0] - p2[0]) ** 2) + ((p1[1] - p2[1]) ** 2)) | 26 | 67 | 0.658654 | 35 | 208 | 3.828571 | 0.714286 | 0.074627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070588 | 0.182692 | 208 | 8 | 67 | 26 | 0.717647 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
6b1b7902186f76bd74575a9557d662ac79cf40b3 | 740 | py | Python | lib/gateway/trading_gateway.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | 1 | 2018-06-28T09:49:08.000Z | 2018-06-28T09:49:08.000Z | lib/gateway/trading_gateway.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | null | null | null | lib/gateway/trading_gateway.py | myron0330/metatrade | b0358ad3dce6ba50e4801b6af557d7883d8a5d9a | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
# **********************************************************************************#
# File:
# **********************************************************************************#
from . base_gateway import BaseTradingGateway
class TradingGateway(BaseTradingGateway):
def start(self):
"""
Start
"""
raise NotImplementedError
def stop(self):
"""
Stop
"""
raise NotImplementedError
def on_bar(self, *args, **kwargs):
"""
On bar response
"""
raise NotImplementedError
def on_portfolio(self, *args, **kwargs):
"""
On portfolio response
"""
raise NotImplementedError
| 22.424242 | 85 | 0.409459 | 46 | 740 | 6.521739 | 0.5 | 0.32 | 0.27 | 0.193333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00188 | 0.281081 | 740 | 32 | 86 | 23.125 | 0.56203 | 0.333784 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.1 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
6b39e32498bfb7f59da24e2922a0528e3eb234ca | 780 | py | Python | ROS workspaces/husky_ws/build/husky_base/catkin_generated/pkg.develspace.context.pc.py | arimendelow/automated_pipeline_inspector | 0b3977eb86e66ae3ef056a48fd04dc5198864dc7 | [
"MIT"
] | null | null | null | ROS workspaces/husky_ws/build/husky_base/catkin_generated/pkg.develspace.context.pc.py | arimendelow/automated_pipeline_inspector | 0b3977eb86e66ae3ef056a48fd04dc5198864dc7 | [
"MIT"
] | null | null | null | ROS workspaces/husky_ws/build/husky_base/catkin_generated/pkg.develspace.context.pc.py | arimendelow/automated_pipeline_inspector | 0b3977eb86e66ae3ef056a48fd04dc5198864dc7 | [
"MIT"
] | null | null | null | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "/home/nvidia/husky_ws/src/husky_base/include;/usr/include".split(';') if "/home/nvidia/husky_ws/src/husky_base/include;/usr/include" != "" else []
PROJECT_CATKIN_DEPENDS = "diagnostic_updater;hardware_interface;husky_msgs;roscpp;sensor_msgs".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "-lhorizon_legacy;-l:/usr/lib/aarch64-linux-gnu/libboost_chrono.so;-l:/usr/lib/aarch64-linux-gnu/libboost_system.so".split(';') if "-lhorizon_legacy;-l:/usr/lib/aarch64-linux-gnu/libboost_chrono.so;-l:/usr/lib/aarch64-linux-gnu/libboost_system.so" != "" else []
PROJECT_NAME = "husky_base"
PROJECT_SPACE_DIR = "/home/nvidia/husky_ws/devel"
PROJECT_VERSION = "0.2.6"
| 86.666667 | 296 | 0.776923 | 118 | 780 | 4.864407 | 0.466102 | 0.027875 | 0.04878 | 0.097561 | 0.473868 | 0.473868 | 0.473868 | 0.473868 | 0.473868 | 0.473868 | 0 | 0.014845 | 0.05 | 780 | 8 | 297 | 97.5 | 0.759784 | 0.069231 | 0 | 0 | 1 | 0.285714 | 0.628453 | 0.60221 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6b445eb10d40ce44fdaa3841ad48e12c4a070817 | 149 | py | Python | tests/web_platform/CSS2/positioning/test_abspos_overflow.py | fletchgraham/colosseum | 77be4896ee52b8f5956a3d77b5f2ccd2c8608e8f | [
"BSD-3-Clause"
] | null | null | null | tests/web_platform/CSS2/positioning/test_abspos_overflow.py | fletchgraham/colosseum | 77be4896ee52b8f5956a3d77b5f2ccd2c8608e8f | [
"BSD-3-Clause"
] | null | null | null | tests/web_platform/CSS2/positioning/test_abspos_overflow.py | fletchgraham/colosseum | 77be4896ee52b8f5956a3d77b5f2ccd2c8608e8f | [
"BSD-3-Clause"
] | 1 | 2020-01-16T01:56:41.000Z | 2020-01-16T01:56:41.000Z | from tests.utils import W3CTestCase
class TestAbsposOverflow(W3CTestCase):
vars().update(W3CTestCase.find_tests(__file__, 'abspos-overflow-'))
| 24.833333 | 71 | 0.791946 | 16 | 149 | 7.0625 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.09396 | 149 | 5 | 72 | 29.8 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
863b0906e7311671c8675d8441ca648cb7418391 | 171 | py | Python | app/main/routing.py | t3m8ch/true-di-fastapi | 5f55db0c20a4e2b4977fb2a170acbc6e6a5c6c62 | [
"MIT"
] | null | null | null | app/main/routing.py | t3m8ch/true-di-fastapi | 5f55db0c20a4e2b4977fb2a170acbc6e6a5c6c62 | [
"MIT"
] | null | null | null | app/main/routing.py | t3m8ch/true-di-fastapi | 5f55db0c20a4e2b4977fb2a170acbc6e6a5c6c62 | [
"MIT"
] | null | null | null | from fastapi import FastAPI
from app.modules.todo.endpoints import router as todo_router
def include_routers(app: FastAPI) -> None:
app.include_router(todo_router)
| 21.375 | 60 | 0.795322 | 25 | 171 | 5.28 | 0.52 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134503 | 171 | 7 | 61 | 24.428571 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
863d3c235a763ad10aff1c8ee194451214a10578 | 311 | py | Python | Python/Tests/TestData/Coverage/Expressions/test.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 404 | 2019-05-07T02:21:57.000Z | 2022-03-31T17:03:04.000Z | Python/Tests/TestData/Coverage/Expressions/test.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 1,672 | 2019-05-06T21:09:38.000Z | 2022-03-31T23:16:04.000Z | Python/Tests/TestData/Coverage/Expressions/test.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 186 | 2019-05-13T03:17:37.000Z | 2022-03-31T16:24:05.000Z | min
1
1 + 2
{42:100}
[2,3,4]
(min)
min(2,3)
1 and 2
`min`
[x for x in 'abc']
[x for x in 'abc' if x == 42]
42 if min else 100
{x:y for x,y in {2:3}.items()}
(x for x in [1,2,3])
y = [42][0]
y = min.__name__
1 or 2
x = (2,3,4)
x = [1,2,3][0:2]
x = {2,3,4}
x = +100
x = -100
{x for x in [2,3,4]}
x = lambda x: 42
| 12.44 | 30 | 0.504823 | 89 | 311 | 1.719101 | 0.224719 | 0.104575 | 0.078431 | 0.183007 | 0.20915 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0.229787 | 0.244373 | 311 | 24 | 31 | 12.958333 | 0.421277 | 0 | 0 | 0 | 0 | 0 | 0.019293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
86496c0ae6e7526f22d070aac27c30f17d844f31 | 218 | py | Python | users/serializers.py | gallardowolfcode/test_django | f09a1cfefa0370fd8dd1390d7f99fd4f554b61fb | [
"MIT"
] | null | null | null | users/serializers.py | gallardowolfcode/test_django | f09a1cfefa0370fd8dd1390d7f99fd4f554b61fb | [
"MIT"
] | null | null | null | users/serializers.py | gallardowolfcode/test_django | f09a1cfefa0370fd8dd1390d7f99fd4f554b61fb | [
"MIT"
] | null | null | null | from rest_framework import serializers
from django.contrib.auth.models import User
from .models import User
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
exclude = [] | 24.222222 | 50 | 0.743119 | 25 | 218 | 6.44 | 0.64 | 0.149068 | 0.198758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197248 | 218 | 9 | 51 | 24.222222 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
864d31db2269c155705f34b6bcf65938587b2b3f | 8,172 | py | Python | faster_rcnn/data/build.py | Shuntw6096/swda-detectron2 | 6ca9cfc2487b979f38adff5cd9138f233a6578cd | [
"MIT"
] | 4 | 2021-09-16T10:29:35.000Z | 2022-03-13T14:28:23.000Z | faster_rcnn/data/build.py | Shuntw6096/swda-detectron2 | 6ca9cfc2487b979f38adff5cd9138f233a6578cd | [
"MIT"
] | 2 | 2021-10-07T00:24:06.000Z | 2022-03-04T10:17:16.000Z | faster_rcnn/data/build.py | Shuntw6096/swda-detectron2 | 6ca9cfc2487b979f38adff5cd9138f233a6578cd | [
"MIT"
] | null | null | null | # modified from https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/build.py
import logging
import torch.utils.data as torchdata
from detectron2.config import configurable
from detectron2.data import get_detection_dataset_dicts, DatasetMapper, build_batch_data_loader
from detectron2.data.samplers import TrainingSampler, RepeatFactorTrainingSampler
from detectron2.data.samplers import RandomSubsetTrainingSampler
from detectron2.utils.logger import _log_api_usage
from detectron2.data.common import DatasetFromList, MapDataset
def _DA_train_loader_from_config(cfg, mapper=None, *, dataset_domain=None, dataset=None, sampler=None):
if dataset is None:
if dataset_domain == 'source':
dataset = get_detection_dataset_dicts(
cfg.DATASETS.SOURCE_DOMAIN.TRAIN,
filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
if cfg.MODEL.KEYPOINT_ON
else 0,
proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
)
_log_api_usage("dataset." + cfg.DATASETS.SOURCE_DOMAIN.TRAIN[0])
elif dataset_domain == 'target':
dataset = get_detection_dataset_dicts(
cfg.DATASETS.TARGET_DOMAIN.TRAIN,
filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
if cfg.MODEL.KEYPOINT_ON
else 0,
proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
)
_log_api_usage("dataset." + cfg.DATASETS.TARGET_DOMAIN.TRAIN[0])
if mapper is None:
mapper = DatasetMapper(cfg, True)
if sampler is None:
sampler_name = cfg.DATALOADER.SAMPLER_TRAIN
logger = logging.getLogger(__name__)
logger.info("Using training sampler {}".format(sampler_name))
if sampler_name == "TrainingSampler":
sampler = TrainingSampler(len(dataset))
elif sampler_name == "RepeatFactorTrainingSampler":
repeat_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency(
dataset, cfg.DATALOADER.REPEAT_THRESHOLD
)
sampler = RepeatFactorTrainingSampler(repeat_factors)
elif sampler_name == "RandomSubsetTrainingSampler":
sampler = RandomSubsetTrainingSampler(len(dataset), cfg.DATALOADER.RANDOM_SUBSET_RATIO)
else:
raise ValueError("Unknown training sampler: {}".format(sampler_name))
return {
"dataset": dataset,
"sampler": sampler,
"mapper": mapper,
"total_batch_size": cfg.SOLVER.IMS_PER_BATCH,
"aspect_ratio_grouping": cfg.DATALOADER.ASPECT_RATIO_GROUPING,
"num_workers": cfg.DATALOADER.NUM_WORKERS,
}
@configurable(from_config=_DA_train_loader_from_config)
def build_DA_detection_train_loader(
dataset, *, mapper, sampler=None, total_batch_size, aspect_ratio_grouping=True, num_workers=0
):
"""
Build a one2one domain adptation dataloader for object detection with some default features.
This interface is experimental.
Args:
dataset (list or torch.utils.data.Dataset): a list of dataset dicts,
or a pytorch dataset (either map-style or iterable). It can be obtained
by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`.
mapper (callable): a callable which takes a sample (dict) from dataset and
returns the format to be consumed by the model.
When using cfg, the default choice is ``DatasetMapper(cfg, is_train=True)``.
sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces
indices to be applied on ``dataset``.
If ``dataset`` is map-style, the default sampler is a :class:`TrainingSampler`,
which coordinates an infinite random shuffle sequence across all workers.
Sampler must be None if ``dataset`` is iterable.
total_batch_size (int): total batch size across all workers. Batching
simply puts data into a list.
aspect_ratio_grouping (bool): whether to group images with similar
aspect ratio for efficiency. When enabled, it requires each
element in dataset be a dict with keys "width" and "height".
num_workers (int): number of parallel data loading workers
Returns:
torch.utils.data.DataLoader:
a dataloader. Each output from it is a ``list[mapped_element]`` of length
``total_batch_size / num_workers``, where ``mapped_element`` is produced
by the ``mapper``.
"""
if isinstance(dataset, list):
dataset = DatasetFromList(dataset, copy=False)
if mapper is not None:
dataset = MapDataset(dataset, mapper)
if isinstance(dataset, torchdata.IterableDataset):
assert sampler is None, "sampler must be None if dataset is IterableDataset"
else:
if sampler is None:
sampler = TrainingSampler(len(dataset))
assert isinstance(sampler, torchdata.Sampler), f"Expect a Sampler but got {type(sampler)}"
return build_batch_data_loader(
dataset,
sampler,
total_batch_size,
aspect_ratio_grouping=aspect_ratio_grouping,
num_workers=num_workers,
)
# @configurable(from_config=_DA_train_loader_from_config(dataset_source='traget'))
# def build_target_domain_detection_train_loader(
# dataset, *, mapper, sampler=None, total_batch_size, aspect_ratio_grouping=True, num_workers=0
# ):
# """
# Build a dataloader for object detection with some default features.
# This interface is experimental.
# Args:
# dataset (list or torch.utils.data.Dataset): a list of dataset dicts,
# or a pytorch dataset (either map-style or iterable). It can be obtained
# by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`.
# mapper (callable): a callable which takes a sample (dict) from dataset and
# returns the format to be consumed by the model.
# When using cfg, the default choice is ``DatasetMapper(cfg, is_train=True)``.
# sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces
# indices to be applied on ``dataset``.
# If ``dataset`` is map-style, the default sampler is a :class:`TrainingSampler`,
# which coordinates an infinite random shuffle sequence across all workers.
# Sampler must be None if ``dataset`` is iterable.
# total_batch_size (int): total batch size across all workers. Batching
# simply puts data into a list.
# aspect_ratio_grouping (bool): whether to group images with similar
# aspect ratio for efficiency. When enabled, it requires each
# element in dataset be a dict with keys "width" and "height".
# num_workers (int): number of parallel data loading workers
# Returns:
# torch.utils.data.DataLoader:
# a dataloader. Each output from it is a ``list[mapped_element]`` of length
# ``total_batch_size / num_workers``, where ``mapped_element`` is produced
# by the ``mapper``.
# """
# if isinstance(dataset, list):
# dataset = DatasetFromList(dataset, copy=False)
# if mapper is not None:
# dataset = MapDataset(dataset, mapper)
# if isinstance(dataset, torchdata.IterableDataset):
# assert sampler is None, "sampler must be None if dataset is IterableDataset"
# else:
# if sampler is None:
# sampler = TrainingSampler(len(dataset))
# assert isinstance(sampler, torchdata.Sampler), f"Expect a Sampler but got {type(sampler)}"
# return build_batch_data_loader(
# dataset,
# sampler,
# total_batch_size,
# aspect_ratio_grouping=aspect_ratio_grouping,
# num_workers=num_workers,
# ) | 50.134969 | 103 | 0.678536 | 978 | 8,172 | 5.492843 | 0.195297 | 0.024572 | 0.028667 | 0.022338 | 0.770663 | 0.724497 | 0.724497 | 0.708861 | 0.708861 | 0.708861 | 0 | 0.002421 | 0.241679 | 8,172 | 163 | 104 | 50.134969 | 0.864451 | 0.514929 | 0 | 0.236842 | 0 | 0 | 0.080946 | 0.019711 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.026316 | false | 0 | 0.105263 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8666b2cd8df9be349b46ad1144d6065c8287c043 | 671 | py | Python | flexs/baselines/models/__init__.py | an1lam/FLEXS | f261a77b4b3ac862d9a1647d5df34d07956ad2f0 | [
"Apache-2.0"
] | 89 | 2020-10-04T21:47:35.000Z | 2022-03-31T14:42:19.000Z | flexs/baselines/models/__init__.py | an1lam/FLEXS | f261a77b4b3ac862d9a1647d5df34d07956ad2f0 | [
"Apache-2.0"
] | 9 | 2020-10-05T17:05:30.000Z | 2022-02-10T03:31:58.000Z | flexs/baselines/models/__init__.py | an1lam/FLEXS | f261a77b4b3ac862d9a1647d5df34d07956ad2f0 | [
"Apache-2.0"
] | 14 | 2020-10-12T04:08:56.000Z | 2022-03-01T14:27:21.000Z | """`baselines.models` module."""
from flexs.baselines.models.adaptive_ensemble import AdaptiveEnsemble # noqa: F401
from flexs.baselines.models.cnn import CNN # noqa: F401
from flexs.baselines.models.global_epistasis_model import ( # noqa: F401
GlobalEpistasisModel,
)
from flexs.baselines.models.keras_model import KerasModel # noqa: F401
from flexs.baselines.models.mlp import MLP # noqa: F401
from flexs.baselines.models.noisy_abstract_model import NoisyAbstractModel # noqa: F401
from flexs.baselines.models.sklearn_models import ( # noqa: F401
LinearRegression,
LogisticRegression,
RandomForest,
SklearnClassifier,
SklearnRegressor,
)
| 39.470588 | 88 | 0.783905 | 77 | 671 | 6.74026 | 0.363636 | 0.231214 | 0.242775 | 0.323699 | 0.308285 | 0.308285 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 0.134128 | 671 | 16 | 89 | 41.9375 | 0.857143 | 0.154993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.466667 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
866a776d09c0656996bd52fa264c4835513d75ac | 47 | py | Python | kite-go/client/internal/kitelocal/internal/signatures/internal/python/testdata/json.loads.kwArg.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 17 | 2022-01-10T11:01:50.000Z | 2022-03-25T03:21:08.000Z | kite-go/client/internal/kitelocal/internal/signatures/internal/python/testdata/json.loads.kwArg.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 1 | 2022-01-13T14:28:47.000Z | 2022-01-13T14:28:47.000Z | kite-go/client/internal/kitelocal/internal/signatures/internal/python/testdata/json.loads.kwArg.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 7 | 2022-01-07T03:58:10.000Z | 2022-03-24T07:38:20.000Z | import json
json.loads(0,1,strict=true<caret>)
| 15.666667 | 34 | 0.765957 | 9 | 47 | 4 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.06383 | 47 | 2 | 35 | 23.5 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
869b5a1e2101e191f88f22d0fd8c509b642f4fd8 | 106 | py | Python | start.py | TNTwise/Rife-Vulkan-GUI-Linux | 9bf4002493597fee1f44d1f30d295a97a3b9fe7d | [
"MIT"
] | 1 | 2021-08-29T16:03:31.000Z | 2021-08-29T16:03:31.000Z | start.py | TNTwise/Rife-Vulkan-GUI | 9bf4002493597fee1f44d1f30d295a97a3b9fe7d | [
"MIT"
] | null | null | null | start.py | TNTwise/Rife-Vulkan-GUI | 9bf4002493597fee1f44d1f30d295a97a3b9fe7d | [
"MIT"
] | null | null | null | import os
os.system('pip install xterm')
os.system("xterm -e 'bash -c \"python GUI.py; sleep 1000000\" '") | 35.333333 | 65 | 0.688679 | 18 | 106 | 4.055556 | 0.777778 | 0.219178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 0.122642 | 106 | 3 | 65 | 35.333333 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0.35514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
86bc90f290613564926230b13f601582b8c65068 | 508 | py | Python | desafios/desafio009.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | desafios/desafio009.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | desafios/desafio009.py | carlosdaniel-cyber/my-python-exercises | 0d6b2874448e0bc1f8c4a5948b0beae56b95ba6b | [
"MIT"
] | null | null | null | a = int(input('Digite um número para ver sua tabuada: '))
print('-'*12)
print('{} x {:2} = {}'.format(a, 1, a*1))
print('{} x {:2} = {}'.format(a, 2, a*2))
print('{} x {:2} = {}'.format(a, 3, a*3))
print('{} x {:2} = {}'.format(a, 4, a*4))
print('{} x {:2} = {}'.format(a, 5, a*5))
print('{} x {:2} = {}'.format(a, 6, a*6))
print('{} x {:2} = {}'.format(a, 7, a*7))
print('{} x {:2} = {}'.format(a, 8, a*8))
print('{} x {:2} = {}'.format(a, 9, a*9))
print('{} x {:2} = {}'.format(a, 10, a*10))
print('-'*12)
| 36.285714 | 57 | 0.444882 | 94 | 508 | 2.404255 | 0.255319 | 0.265487 | 0.309735 | 0.575221 | 0.619469 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084507 | 0.161417 | 508 | 13 | 58 | 39.076923 | 0.446009 | 0 | 0 | 0.153846 | 0 | 0 | 0.356299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.923077 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
86e42ebeebc4753cd2c5be313e120ebf9d678205 | 291 | py | Python | bvulist.py | jbshep/bvulist | 3252adab5ef1b76ffa69854ac414efcab78c1900 | [
"Apache-2.0"
] | null | null | null | bvulist.py | jbshep/bvulist | 3252adab5ef1b76ffa69854ac414efcab78c1900 | [
"Apache-2.0"
] | null | null | null | bvulist.py | jbshep/bvulist | 3252adab5ef1b76ffa69854ac414efcab78c1900 | [
"Apache-2.0"
] | null | null | null | class bvulist(list):
''' Documentation will go here! '''
def prepend(self, data):
self.insert(0, data)
def pop_front(self):
#return self.pop(0)
# This will make the test fail.
return self.pop()
def pop_back(self):
return self.pop()
| 20.785714 | 39 | 0.570447 | 39 | 291 | 4.205128 | 0.564103 | 0.182927 | 0.237805 | 0.207317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009901 | 0.305842 | 291 | 13 | 40 | 22.384615 | 0.80198 | 0.264605 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
86f60ab87d593ae898d1eafa023c6b6ab1bf0462 | 1,132 | py | Python | tests/test_window.py | JhonJYJ/Marvel | 0f367a0ba00d76090cb6d766b61b0c9c364ff501 | [
"MIT"
] | null | null | null | tests/test_window.py | JhonJYJ/Marvel | 0f367a0ba00d76090cb6d766b61b0c9c364ff501 | [
"MIT"
] | null | null | null | tests/test_window.py | JhonJYJ/Marvel | 0f367a0ba00d76090cb6d766b61b0c9c364ff501 | [
"MIT"
] | null | null | null | from unittest import TestCase
class TestWindow(TestCase):
def test_allows_minimum_size(self):
import arcade
window = arcade.Window(200, 100, resizable=True)
window.set_min_size(200, 200)
window.close()
def test_disallows_minimum_size(self):
import arcade
window = arcade.Window(200, 100, resizable=False)
self.assertRaises(ValueError, window.set_min_size, 200, 200)
window.close()
def test_allows_maximum_size(self):
import arcade
window = arcade.Window(200, 100, resizable=True)
window.set_max_size(200, 200)
window.close()
def test_disallows_maximum_size(self):
import arcade
window = arcade.Window(200, 100, resizable=False)
self.assertRaises(ValueError, window.set_max_size, 200, 200)
window.close()
# def test_set_size_location(self):
# import arcade
# window = arcade.Window(200, 100, resizable=True)
# window.set_size(900, 800)
# self.assertEqual(window.width, 900)
# self.assertEqual(window.height, 800)
# window.close()
| 30.594595 | 68 | 0.65106 | 137 | 1,132 | 5.20438 | 0.240876 | 0.168303 | 0.112202 | 0.154278 | 0.771389 | 0.771389 | 0.771389 | 0.771389 | 0.746143 | 0.746143 | 0 | 0.07783 | 0.250883 | 1,132 | 36 | 69 | 31.444444 | 0.762972 | 0.206714 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.181818 | false | 0 | 0.227273 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8101aa603231bc52fb5c2b2db97eb2ec018ee8d9 | 32 | py | Python | drench/__main__.py | techtonik/drench | e99a8bf844a61d909d2d57629937ac672810469c | [
"MIT"
] | 64 | 2015-01-31T20:53:14.000Z | 2022-01-29T21:49:58.000Z | drench/__main__.py | techtonik/drench | e99a8bf844a61d909d2d57629937ac672810469c | [
"MIT"
] | 1 | 2015-12-11T18:23:59.000Z | 2015-12-11T18:23:59.000Z | drench/__main__.py | arthurcolle/drench-udp | 4555b5cbb1a47324e1f4093c4b8a3605f76b9a5c | [
"MIT"
] | 24 | 2015-02-06T08:08:13.000Z | 2019-12-22T18:33:07.000Z |
from drench import main
main()
| 8 | 23 | 0.75 | 5 | 32 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 32 | 3 | 24 | 10.666667 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
81209fb434cc5f4b4792df63bc796230db358b37 | 118 | py | Python | setup.py | hiroharu-kato/deep_dream_3d | 8f9f4ab0897bb7c453a09bf11652c4dbe80cb714 | [
"MIT"
] | 76 | 2018-01-18T02:23:39.000Z | 2022-03-31T19:51:03.000Z | setup.py | EXYNOS-999/deep_dream_3d | 8f9f4ab0897bb7c453a09bf11652c4dbe80cb714 | [
"MIT"
] | null | null | null | setup.py | EXYNOS-999/deep_dream_3d | 8f9f4ab0897bb7c453a09bf11652c4dbe80cb714 | [
"MIT"
] | 14 | 2018-01-18T13:23:22.000Z | 2022-02-21T21:40:15.000Z | import setuptools
setuptools.setup(
name='deep_dream_3d',
version='0.0.1',
packages=['deep_dream_3d'],
)
| 14.75 | 31 | 0.669492 | 16 | 118 | 4.6875 | 0.6875 | 0.24 | 0.293333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05102 | 0.169492 | 118 | 7 | 32 | 16.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.262712 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d496067c7e6526f29b7416b7d3eabc0cbfdc0d72 | 211 | py | Python | PythonExercicios/ex011.py | marcoantonio97/Curso-de-Python | b99c6c75e2b845086203b84a71a4013c0658dc44 | [
"MIT"
] | null | null | null | PythonExercicios/ex011.py | marcoantonio97/Curso-de-Python | b99c6c75e2b845086203b84a71a4013c0658dc44 | [
"MIT"
] | null | null | null | PythonExercicios/ex011.py | marcoantonio97/Curso-de-Python | b99c6c75e2b845086203b84a71a4013c0658dc44 | [
"MIT"
] | null | null | null | l = float(input('Digite a largura da parede: '))
a = float(input('Digite a altura da parede: '))
print('A área da parede é de {}m² e será necessário {} litro(s) de tinta para pintá-la'.format((l*a), ((l*a)/2)))
| 52.75 | 113 | 0.654028 | 40 | 211 | 3.45 | 0.625 | 0.173913 | 0.231884 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.156398 | 211 | 3 | 114 | 70.333333 | 0.764045 | 0 | 0 | 0 | 0 | 0.333333 | 0.635071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d4d609acfb3dff8864a47b5014a644000d6ee114 | 13,734 | py | Python | scripts/wsi_context_roi.py | higex/qpath | 0377f2fdadad6e02ecde8ba2557fe9b957280fa1 | [
"MIT"
] | 6 | 2017-03-18T19:17:42.000Z | 2019-05-05T14:57:31.000Z | WSItk/tools/wsi_context_roi.py | vladpopovici/WSItk | 02db9dbf1148106a576d7b4dd7965c73607efdae | [
"MIT"
] | null | null | null | WSItk/tools/wsi_context_roi.py | vladpopovici/WSItk | 02db9dbf1148106a576d7b4dd7965c73607efdae | [
"MIT"
] | 4 | 2015-11-29T14:47:25.000Z | 2019-11-28T03:16:39.000Z | #!/usr/bin/env python2
from __future__ import (absolute_import, division, print_function, unicode_literals)
__version__ = 0.01
__author__ = 'Vlad Popovici'
import os
import argparse as opt
import math as mh
import numpy as np
from scipy.cluster.hierarchy import *
from skimage.io import imread, imsave
from stain.he import rgb2he2
from descriptors.txtgrey import GaborDescriptor, LBPDescriptor, MFSDescriptor
from descriptors.extract import *
from util.explore import sliding_window_on_regions
from util.visualization import enhance_patches
def main():
p = opt.ArgumentParser(description="""
Segments a number of rectangular contexts from a H&E slide. The contexts are clusters
of similar regions of the image. The similarity is based on various textural
descriptors.
""")
p.add_argument('img_file', action='store', help='RGB image file')
p.add_argument('ctxt', action='store', help='Number of contexts to extract', type=int)
p.add_argument('wsize', action='store', help='Size of the (square) regions', type=int)
p.add_argument('roi', action='store', help='a file with ROI coordinates (and context descriptors)')
p.add_argument('label', action='store', help='the cluster label of interest')
p.add_argument('--prefix', action='store',
help='optional prefix for the resulting files',
default=None)
p.add_argument('--gabor', action='store_true',
help='compute Gabor descriptors and generate the corresponding contexts')
p.add_argument('--lbp', action='store_true',
help='compute LBP (local binary patterns) descriptors and generate the corresponding contexts')
p.add_argument('--mfs', action='store_true',
help='compute fractal descriptors and generate the corresponding contexts')
p.add_argument('--eosine', action='store_true', help='should also Eosine component be processed?')
p.add_argument('--scale', action='store', type=float, default=1.0,
help='scaling factor for ROI coordinates')
args = p.parse_args()
base_name = os.path.basename(args.img_file).split('.')
if len(base_name) > 1: # at least 1 suffix .ext
base_name.pop() # drop the extension
base_name = '.'.join(base_name) # reassemble the rest of the list into file name
if args.prefix is not None:
pfx = args.prefix
else:
pfx = base_name
ROIs = []
for l in file(args.roi).readlines():
# extract the coordinates and the label from each ROI
# (one per row):
lb, row_min, row_max, col_min, col_max = map(lambda _x: int(float(_x)), l.split('\t')[1:5])
row_min = int(mh.floor(row_min * args.scale))
row_max = int(mh.floor(row_max * args.scale))
col_min = int(mh.floor(col_min * args.scale))
col_max = int(mh.floor(col_max * args.scale))
if lb == args.label:
ROIs.append([row_min, row_max, col_min, col_max])
im = imread(args.img_file)
print("Original image size:", im.shape)
# get the H and E planes:
h, e, _ = rgb2he2(im)
if args.gabor:
print("---------> Gabor descriptors:")
g = GaborDescriptor()
desc_label = 'gabor'
print("------------> H plane")
# on H-plane:
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(h, g, img_iterator, desc_label)
dst = pdist_gabor(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_h.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_h_'+str(k)+'.ppm', im2)
if args.eosine:
# repeat on E plane:
print("------------> E plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(e, g, img_iterator, desc_label)
dst = pdist_gabor(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_e.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_e_'+str(k)+'.ppm', im2)
print("OK")
if args.haralick:
print("---------> Haralick descriptors:")
g = GLCMDescriptor()
desc_label = 'haralick'
print("------------> H plane")
# on H-plane:
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(h, g, img_iterator, desc_label)
dst = pdist_gabor(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_h.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_h_'+str(k)+'.ppm', im2)
if args.eosine:
# repeat on E plane:
print("------------> E plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(e, g, img_iterator, desc_label)
dst = pdist_gabor(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_e.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_e_'+str(k)+'.ppm', im2)
print("OK")
if args.lbp:
print("---------> LBP descriptors:")
g = LBPDescriptor()
desc_label = 'lbp'
# on H-plane:
print("------------> H plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(h, g, img_iterator, desc_label)
dst = pdist_lbp(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_h.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_h_'+str(k)+'.ppm', im2)
if args.eosine:
# repeat on E plane:
print("------------> E plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(e, g, img_iterator, desc_label)
dst = pdist_lbp(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_e.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_e_'+str(k)+'.ppm', im2)
print("OK")
if args.mfs:
print("---------> MFS descriptors:")
g = MFSDescriptor()
desc_label = 'mfs'
# on H-plane:
print("------------> H plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(h, g, img_iterator, desc_label)
dst = pdist_mfs(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_h.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_h_'+str(k)+'.ppm', im2)
if args.eosine:
# repeat on E plane:
print("------------> E plane")
img_iterator = sliding_window_on_regions(h.shape, ROIs, (args.wsize,args.wsize),
step=(args.wsize,args.wsize))
dsc = get_local_desc(e, g, img_iterator, desc_label)
dst = pdist_mfs(dsc)
cl = average(dst)
id = fcluster(cl, t=args.ctxt, criterion='maxclust') # get the various contexts
# save clustering/contexts - remember, the coordinates are in the
# current image system which might have been cropped from the original ->
# should add back the shift
z1 = desc_to_matrix(dsc, desc_label) # col 0: row_min, col 2: col_min
z1[:, 0] += row_min + dh
z1[:, 2] += col_min + dw
z2 = np.matrix(id).transpose()
z2 = np.hstack( (z2, z1) )
np.savetxt(pfx+'_'+desc_label+'_e.dat', z2, delimiter="\t")
# save visualizations
for k in range(1,1+args.ctxt):
i = np.where(id == k)[0]
p = [dsc[j]['roi'] for j in i]
im2 = enhance_patches(im, p)
imsave(pfx+'_'+desc_label+'_e_'+str(k)+'.ppm', im2)
print("OK")
return
# end
if __name__ == '__main__':
main() | 40.040816 | 114 | 0.552061 | 1,805 | 13,734 | 4.052632 | 0.125762 | 0.044293 | 0.028435 | 0.039371 | 0.733561 | 0.717703 | 0.717703 | 0.717703 | 0.711141 | 0.687355 | 0 | 0.015961 | 0.315713 | 13,734 | 343 | 115 | 40.040816 | 0.762396 | 0.164409 | 0 | 0.663755 | 0 | 0 | 0.125394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004367 | false | 0 | 0.052402 | 0 | 0.061135 | 0.078603 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d4f9fed1b15c1336ca3d70184c9d0b3d7ac1100f | 70 | py | Python | class 200821/matriz.py | Gabriel-Fernandes1917/lab-the-python | 0ed4fe7cf5e6c5447d3f021e50d390fc3af1b0d7 | [
"MIT"
] | null | null | null | class 200821/matriz.py | Gabriel-Fernandes1917/lab-the-python | 0ed4fe7cf5e6c5447d3f021e50d390fc3af1b0d7 | [
"MIT"
] | null | null | null | class 200821/matriz.py | Gabriel-Fernandes1917/lab-the-python | 0ed4fe7cf5e6c5447d3f021e50d390fc3af1b0d7 | [
"MIT"
] | null | null | null | ## Matriz int Python
M = [[2,-3,4],[0,7,5]]
print(M)
print(M[0][2]) | 10 | 22 | 0.514286 | 16 | 70 | 2.25 | 0.6875 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 0.157143 | 70 | 7 | 23 | 10 | 0.474576 | 0.242857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
be216eeeab0d08b702265f47b35b1ab8fe463e68 | 175 | py | Python | plaid/internal/utils.py | InspiredMember/plaid-python | 8bee5a907e0fd03c406b24a7b62166f86a42ca6b | [
"MIT"
] | null | null | null | plaid/internal/utils.py | InspiredMember/plaid-python | 8bee5a907e0fd03c406b24a7b62166f86a42ca6b | [
"MIT"
] | null | null | null | plaid/internal/utils.py | InspiredMember/plaid-python | 8bee5a907e0fd03c406b24a7b62166f86a42ca6b | [
"MIT"
] | 1 | 2021-01-11T22:16:06.000Z | 2021-01-11T22:16:06.000Z | __all__ = ['urljoin', 'urlencode']
try:
from urllib.parse import urljoin, urlencode
except ImportError:
from urllib import urlencode
from urlparse import urljoin
| 21.875 | 47 | 0.742857 | 20 | 175 | 6.3 | 0.55 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188571 | 175 | 7 | 48 | 25 | 0.887324 | 0 | 0 | 0 | 0 | 0 | 0.091429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
be250efa57b023e2356734115071d7cc9b9f2deb | 115 | py | Python | tests/conftest.py | mscarey/justopinion | bbba061f39851c19fc31ed7a2a86e9fbbc67f446 | [
"FTL"
] | null | null | null | tests/conftest.py | mscarey/justopinion | bbba061f39851c19fc31ed7a2a86e9fbbc67f446 | [
"FTL"
] | 3 | 2021-08-10T07:06:09.000Z | 2021-08-12T05:55:12.000Z | tests/conftest.py | mscarey/justopinion | bbba061f39851c19fc31ed7a2a86e9fbbc67f446 | [
"FTL"
] | null | null | null | import pytest
@pytest.fixture(scope="module")
def vcr_config():
return {"filter_headers": ["authorization"]}
| 16.428571 | 48 | 0.713043 | 13 | 115 | 6.153846 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121739 | 115 | 6 | 49 | 19.166667 | 0.792079 | 0 | 0 | 0 | 0 | 0 | 0.286957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
076d98a06b981190c0029865ce6dc7253a3e73a4 | 45 | py | Python | python/testData/optimizeImports/split.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/optimizeImports/split.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/optimizeImports/split.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | import sys, datetime
sys.path
datetime.time
| 9 | 20 | 0.8 | 7 | 45 | 5.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 4 | 21 | 11.25 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
078627afc5e7cd91ef0d29684d9a5f5986df1b60 | 359 | py | Python | tests/unit/props/utils/test_is_instance_or_type.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | null | null | null | tests/unit/props/utils/test_is_instance_or_type.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | 136 | 2019-05-15T07:30:47.000Z | 2021-07-19T05:21:39.000Z | tests/unit/props/utils/test_is_instance_or_type.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | 1 | 2019-05-15T10:00:34.000Z | 2019-05-15T10:00:34.000Z | from jason.props import utils
def test_is_instance():
assert utils.is_instance_or_type(123, int) is True
def test_is_not_instance():
assert utils.is_instance_or_type("nope", int) is False
def test_is_type():
assert utils.is_instance_or_type(int, int) is True
def test_is_not_type():
assert utils.is_instance_or_type(str, int) is False
| 19.944444 | 58 | 0.75766 | 63 | 359 | 3.968254 | 0.301587 | 0.2 | 0.144 | 0.336 | 0.696 | 0.696 | 0.696 | 0 | 0 | 0 | 0 | 0.009901 | 0.155989 | 359 | 17 | 59 | 21.117647 | 0.815182 | 0 | 0 | 0 | 0 | 0 | 0.011142 | 0 | 0 | 0 | 0 | 0 | 0.444444 | 1 | 0.444444 | true | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
078dac3738875764892558d18271e7b88dec83bf | 174 | py | Python | src/external/elf-loader/get-valgrind-cflags.py | Tiamat-Tech/shadow | 482458a2ff0573d4b5eec0c588852af46aead24a | [
"BSD-3-Clause"
] | 1 | 2021-04-17T01:00:35.000Z | 2021-04-17T01:00:35.000Z | src/external/elf-loader/get-valgrind-cflags.py | Tiamat-Tech/shadow | 482458a2ff0573d4b5eec0c588852af46aead24a | [
"BSD-3-Clause"
] | 18 | 2020-12-15T07:11:46.000Z | 2022-02-15T00:07:52.000Z | src/external/elf-loader/get-valgrind-cflags.py | Tiamat-Tech/shadow | 482458a2ff0573d4b5eec0c588852af46aead24a | [
"BSD-3-Clause"
] | 1 | 2021-09-21T22:20:32.000Z | 2021-09-21T22:20:32.000Z | #!/usr/bin/env python3
from __future__ import print_function
import os
if os.path.exists('/usr/include/valgrind/valgrind.h'):
print('-DHAVE_VALGRIND_H -I/usr/include')
| 21.75 | 54 | 0.752874 | 27 | 174 | 4.592593 | 0.666667 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00641 | 0.103448 | 174 | 7 | 55 | 24.857143 | 0.788462 | 0.12069 | 0 | 0 | 0 | 0 | 0.421053 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
079cc4066ea6b59ac84d40040b8698d3cf47986c | 141 | py | Python | acsl-pydev/hackerrank/string_easy/week05_3.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | null | null | null | acsl-pydev/hackerrank/string_easy/week05_3.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | 3 | 2021-04-17T18:36:24.000Z | 2022-03-04T20:30:09.000Z | acsl-pydev/hackerrank/string_easy/week05_3.py | odys-z/hello | 39ca67cae34eb4bc4cbd848a06b3c0d65a995954 | [
"MIT"
] | null | null | null | '''
https://www.hackerrank.com/challenges/merge-the-tools/problem
'''
def merge_the_tools(string, k):
# your code goes here
pass
| 20.142857 | 65 | 0.680851 | 20 | 141 | 4.7 | 0.85 | 0.170213 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 141 | 6 | 66 | 23.5 | 0.803419 | 0.58156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
07a04b7aea5ee227077a44ce4968eddb69ca17e6 | 501 | py | Python | roles/lib_utils/src/lib/import.py | shgriffi/openshift-ansible | 6313f519307cf50055589c3876d8bec398bbc4d4 | [
"Apache-2.0"
] | 1 | 2017-12-13T01:11:57.000Z | 2017-12-13T01:11:57.000Z | roles/lib_utils/src/lib/import.py | shgriffi/openshift-ansible | 6313f519307cf50055589c3876d8bec398bbc4d4 | [
"Apache-2.0"
] | 6 | 2018-02-01T11:13:38.000Z | 2018-02-02T08:01:15.000Z | roles/lib_utils/src/lib/import.py | shgriffi/openshift-ansible | 6313f519307cf50055589c3876d8bec398bbc4d4 | [
"Apache-2.0"
] | 2 | 2018-10-16T05:11:13.000Z | 2018-11-07T01:46:29.000Z | # flake8: noqa
# pylint: skip-file
# pylint: disable=wrong-import-order,wrong-import-position,unused-import
from __future__ import print_function # noqa: F401
import copy # noqa: F401
import json # noqa: F401
import os # noqa: F401
import re # noqa: F401
import shutil # noqa: F401
import tempfile # noqa: F401
import time # noqa: F401
try:
import ruamel.yaml as yaml # noqa: F401
except ImportError:
import yaml # noqa: F401
from ansible.module_utils.basic import AnsibleModule
| 23.857143 | 72 | 0.736527 | 71 | 501 | 5.112676 | 0.478873 | 0.220386 | 0.269972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075795 | 0.183633 | 501 | 20 | 73 | 25.05 | 0.811736 | 0.421158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.923077 | 0 | 0.923077 | 0.076923 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
07b0db8c2e6d6d54b3cb07b52baf1e7c79b90ac9 | 645 | py | Python | data/IEEE802_16_2009_R3_4_B.py | DELTA37/LDPC-code | e055685d8dfebd50fa19a2638a8934b4a43b7ad3 | [
"BSD-2-Clause"
] | 2 | 2020-10-21T07:13:28.000Z | 2020-10-21T17:03:35.000Z | data/IEEE802_16_2009_R3_4_B.py | DELTA37/LDPC-code | e055685d8dfebd50fa19a2638a8934b4a43b7ad3 | [
"BSD-2-Clause"
] | null | null | null | data/IEEE802_16_2009_R3_4_B.py | DELTA37/LDPC-code | e055685d8dfebd50fa19a2638a8934b4a43b7ad3 | [
"BSD-2-Clause"
] | null | null | null | import numpy as np
C = np.array([
-1, 81, -1, 28, -1, -1, 14, 25, 17, -1, -1, 85, 29, 52, 78, 95, 22, 92, 0, 0, -1, -1, -1, -1,
42, -1, 14, 68, 32, -1, -1, -1, -1, 70, 43, 11, 36, 40, 33, 57, 38, 24, -1, 0, 0, -1, -1, -1,
-1, -1, 20, -1, -1, 63, 39, -1, 70, 67, -1, 38, 4, 72, 47, 29, 60, 5, 80, -1, 0, 0, -1, -1,
64, 2, -1, -1, 63, -1, -1, 3, 51, -1, 81, 15, 94, 9, 85, 36, 14, 19, -1, -1, -1, 0, 0, -1,
-1, 53, 60, 80, -1, 26, 75, -1, -1, -1, -1, 86, 77, 1, 3, 72, 60, 25, -1, -1, -1, -1, 0, 0,
77, -1, -1, -1, 15, 28, -1, 35, -1, 72, 30, 68, 85, 84, 26, 64, 11, 89, 0, -1, -1, -1, -1, 0
]).reshape((6, -1))
r = 96
| 46.071429 | 97 | 0.384496 | 156 | 645 | 1.589744 | 0.403846 | 0.241935 | 0.181452 | 0.112903 | 0.149194 | 0.048387 | 0 | 0 | 0 | 0 | 0 | 0.470716 | 0.285271 | 645 | 13 | 98 | 49.615385 | 0.067245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
07b24c132f099be67c67394e45826dc4c7a08466 | 76 | py | Python | setup.py | i-wanna-be-a-programmer/stealer | c875e4733d53ccbcfde68fee2dc440f86f362c48 | [
"MIT"
] | null | null | null | setup.py | i-wanna-be-a-programmer/stealer | c875e4733d53ccbcfde68fee2dc440f86f362c48 | [
"MIT"
] | 4 | 2021-02-11T18:54:15.000Z | 2021-03-05T14:50:05.000Z | setup.py | i-wanna-be-a-programmer/stealer | c875e4733d53ccbcfde68fee2dc440f86f362c48 | [
"MIT"
] | 4 | 2021-02-20T16:11:09.000Z | 2021-04-11T07:53:22.000Z | # >_ python setup.py
import os
from setuptools import setup, find_packages
| 15.2 | 43 | 0.789474 | 11 | 76 | 5.272727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 76 | 4 | 44 | 19 | 0.90625 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
07d9c602cbcffa315c6ae0af6c74cd9d30fdf0ff | 171 | py | Python | main.py | manupeco/MediumPythonExercise | fbfbc70a899319f9663b46350196c88905ff57f4 | [
"MIT"
] | null | null | null | main.py | manupeco/MediumPythonExercise | fbfbc70a899319f9663b46350196c88905ff57f4 | [
"MIT"
] | null | null | null | main.py | manupeco/MediumPythonExercise | fbfbc70a899319f9663b46350196c88905ff57f4 | [
"MIT"
] | null | null | null | from callsupUseCase import sendCallsUp
def pdfmedium(request):
#req_data = request.args
req_data = request.get_json()
return sendCallsUp(req_data) | 28.5 | 40 | 0.707602 | 20 | 171 | 5.85 | 0.65 | 0.179487 | 0.239316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 171 | 6 | 40 | 28.5 | 0.879699 | 0.134503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
07dec95355c8f44e6b78d3f83028434f2a321330 | 177 | py | Python | tests/test_utils.py | hudson-ayers/safe-sports-streams | eb5fff6781a54c411d25596d1d42c25ae41c429b | [
"MIT"
] | 1 | 2019-03-25T10:48:00.000Z | 2019-03-25T10:48:00.000Z | tests/test_utils.py | hudson-ayers/safe-sports-streams | eb5fff6781a54c411d25596d1d42c25ae41c429b | [
"MIT"
] | 1 | 2018-10-25T21:06:49.000Z | 2018-10-25T21:06:49.000Z | tests/test_utils.py | hudson-ayers/safe-sports-streams | eb5fff6781a54c411d25596d1d42c25ae41c429b | [
"MIT"
] | null | null | null | from streamscrape.utils import get_ip_address
def test_get_ip_address():
test_url = "https://sing.stanford.edu/site"
assert "171.67.76.12" == get_ip_address(test_url)
| 25.285714 | 53 | 0.745763 | 29 | 177 | 4.241379 | 0.689655 | 0.121951 | 0.292683 | 0.260163 | 0.308943 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058442 | 0.129944 | 177 | 6 | 54 | 29.5 | 0.74026 | 0 | 0 | 0 | 0 | 0 | 0.237288 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed45b6c0ad7db03f3cce4243e74cc70767074bfb | 2,603 | py | Python | descriptors/getRcpiResults.py | kennethriva/Machine-Learning-for-drugs-cytokines | a26e00ec072f27f90a0d5971d90c05a5a5d6e803 | [
"Apache-2.0"
] | 4 | 2018-11-17T22:02:37.000Z | 2019-12-11T11:49:47.000Z | descriptors/getRcpiResults.py | kennethriva/Machine-Learning-for-drugs-cytokines | a26e00ec072f27f90a0d5971d90c05a5a5d6e803 | [
"Apache-2.0"
] | 4 | 2018-11-06T00:19:13.000Z | 2018-11-14T14:43:15.000Z | descriptors/getRcpiResults.py | kennethriva/Machine-Learning-for-drugs-cytokines | a26e00ec072f27f90a0d5971d90c05a5a5d6e803 | [
"Apache-2.0"
] | 1 | 2018-11-14T17:06:24.000Z | 2018-11-14T17:06:24.000Z | #get Rcpi results from folder CSVs
import os
sFolder = "Rcpi"
no=0
list = os.listdir(sFolder) # dir is your directory path
print "No,CanSMILES,ALogP,ALogp2,AMR,apol,nA,nR,nN,nD,nC,nF,nQ,nE,nG,nH,nI,nP,nL,nK,nM,nS,nT,nY,nV,nW,naAromAtom,nAromBond,nAtom,ATSc1,ATSc2,ATSc3,ATSc4,ATSc5,ATSm1,ATSm2,ATSm3,ATSm4,ATSm5,ATSp1,ATSp2,ATSp3,ATSp4,ATSp5,BCUTw.1l,BCUTw.1h,BCUTc.1l,BCUTc.1h,BCUTp.1l,BCUTp.1h,bpol,nB,PPSA.1,PPSA.2,PPSA.3,PNSA.1,PNSA.2,PNSA.3,DPSA.1,DPSA.2,DPSA.3,FPSA.1,FPSA.2,FPSA.3,FNSA.1,FNSA.2,FNSA.3,WPSA.1,WPSA.2,WPSA.3,WNSA.1,WNSA.2,WNSA.3,RPCG,RNCG,RPCS,RNCS,THSA,TPSA,RHSA,RPSA,C1SP1,C2SP1,C1SP2,C2SP2,C3SP2,C1SP3,C2SP3,C3SP3,C4SP3,SCH.3,SCH.4,SCH.5,SCH.6,SCH.7,VCH.3,VCH.4,VCH.5,VCH.6,VCH.7,SC.3,SC.4,SC.5,SC.6,VC.3,VC.4,VC.5,VC.6,SPC.4,SPC.5,SPC.6,VPC.4,VPC.5,VPC.6,SP.0,SP.1,SP.2,SP.3,SP.4,SP.5,SP.6,SP.7,VP.0,VP.1,VP.2,VP.3,VP.4,VP.5,VP.6,VP.7,ECCEN,FMF,fragC,GRAV.1,GRAV.2,GRAV.3,GRAVH.1,GRAVH.2,GRAVH.3,GRAV.4,GRAV.5,GRAV.6,nHBAcc,nHBDon,HybRatio,MolIP,Kier1,Kier2,Kier3,khs.sLi,khs.ssBe,khs.ssssBe,khs.ssBH,khs.sssB,khs.ssssB,khs.sCH3,khs.dCH2,khs.ssCH2,khs.tCH,khs.dsCH,khs.aaCH,khs.sssCH,khs.ddC,khs.tsC,khs.dssC,khs.aasC,khs.aaaC,khs.ssssC,khs.sNH3,khs.sNH2,khs.ssNH2,khs.dNH,khs.ssNH,khs.aaNH,khs.tN,khs.sssNH,khs.dsN,khs.aaN,khs.sssN,khs.ddsN,khs.aasN,khs.ssssN,khs.sOH,khs.dO,khs.ssO,khs.aaO,khs.sF,khs.sSiH3,khs.ssSiH2,khs.sssSiH,khs.ssssSi,khs.sPH2,khs.ssPH,khs.sssP,khs.dsssP,khs.sssssP,khs.sSH,khs.dS,khs.ssS,khs.aaS,khs.dssS,khs.ddssS,khs.sCl,khs.sGeH3,khs.ssGeH2,khs.sssGeH,khs.ssssGe,khs.sAsH2,khs.ssAsH,khs.sssAs,khs.sssdAs,khs.sssssAs,khs.sSeH,khs.dSe,khs.ssSe,khs.aaSe,khs.dssSe,khs.ddssSe,khs.sBr,khs.sSnH3,khs.ssSnH2,khs.sssSnH,khs.ssssSn,khs.sI,khs.sPbH3,khs.ssPbH2,khs.sssPbH,khs.ssssPb,nAtomLC,nAtomP,LOBMAX,LOBMIN,nAtomLAC,MDEC.11,MDEC.12,MDEC.13,MDEC.14,MDEC.22,MDEC.23,MDEC.24,MDEC.33,MDEC.34,MDEC.44,MDEO.11,MDEO.12,MDEO.22,MDEN.11,MDEN.12,MDEN.13,MDEN.22,MDEN.23,MDEN.33,MLogP,MOMI.X,MOMI.Y,MOMI.Z,MOMI.XY,MOMI.XZ,MOMI.YZ,MOMI.R,PetitjeanNumber,topoShape,geomShape,nRotB,LipinskiFailures,TopoPSA,VABC,VAdjMat,Wlambda1.unity,Wlambda2.unity,Wlambda3.unity,Wnu1.unity,Wnu2.unity,Wgamma1.unity,Wgamma2.unity,Wgamma3.unity,Weta1.unity,Weta2.unity,Weta3.unity,WT.unity,WA.unity,WV.unity,WK.unity,WG.unity,WD.unity,MW,WTPT.1,WTPT.2,WTPT.3,WTPT.4,WTPT.5,WPATH,WPOL,XLogP,Zagreb"
for item in list:
if item[-3:]=="csv":
sParts = item.split(".")
f=open(os.path.join(sFolder,item),"r")
lines = f.readlines()
print str((item.split(".")[0]).split("-")[1])+","+lines[1][:-1]
f.close()
no +=1
#print "CSV files =", no
| 137 | 2,193 | 0.73761 | 560 | 2,603 | 3.428571 | 0.489286 | 0.003125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075939 | 0.038801 | 2,603 | 18 | 2,194 | 144.611111 | 0.691447 | 0.031886 | 0 | 0 | 0 | 0.076923 | 0.872865 | 0.868097 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed55ba18ffd20967930d62df547ab99991e27eda | 215 | py | Python | test/test_generator.py | xoxlov/ironpython_training | bda562aa2e47159a4fca6aa49d0d4c62cad6b56c | [
"Apache-2.0"
] | null | null | null | test/test_generator.py | xoxlov/ironpython_training | bda562aa2e47159a4fca6aa49d0d4c62cad6b56c | [
"Apache-2.0"
] | null | null | null | test/test_generator.py | xoxlov/ironpython_training | bda562aa2e47159a4fca6aa49d0d4c62cad6b56c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from generator.group import GroupGenerator
def test_generator(app):
gg = GroupGenerator(app)
gg.generate()
def test_reader(app):
gg = GroupGenerator(app)
print(gg.read())
| 16.538462 | 42 | 0.669767 | 27 | 215 | 5.259259 | 0.592593 | 0.105634 | 0.267606 | 0.309859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005747 | 0.190698 | 215 | 12 | 43 | 17.916667 | 0.810345 | 0.097674 | 0 | 0.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.428571 | 0.142857 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed65ccf58176cc3e0ae2423874def5f3315cbdc1 | 56 | py | Python | bip_utils/utils/base32/__init__.py | MIPPLTeam/bip_utils | c66446e7ac3879d2cf6308c5b8eb7f7705292660 | [
"MIT"
] | 149 | 2020-05-15T08:11:43.000Z | 2022-03-29T16:34:42.000Z | bip_utils/utils/base32/__init__.py | MIPPLTeam/bip_utils | c66446e7ac3879d2cf6308c5b8eb7f7705292660 | [
"MIT"
] | 41 | 2020-04-03T15:57:56.000Z | 2022-03-31T08:25:11.000Z | bip_utils/utils/base32/__init__.py | MIPPLTeam/bip_utils | c66446e7ac3879d2cf6308c5b8eb7f7705292660 | [
"MIT"
] | 55 | 2020-04-03T17:05:15.000Z | 2022-03-24T12:43:42.000Z | from bip_utils.utils.base32.base32 import Base32Encoder
| 28 | 55 | 0.875 | 8 | 56 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0.071429 | 56 | 1 | 56 | 56 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.