hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4bf5d91066f325eb10f718cf17653424ef557a11 | 482 | py | Python | clippy/__init__.py | gowithfloat/clippy | 0d537ff5fb8d2be79f33aa8451fc9facc75c372a | [
"MIT"
] | 2 | 2021-01-16T07:14:11.000Z | 2021-01-30T00:57:05.000Z | clippy/__init__.py | gowithfloat/clippy | 0d537ff5fb8d2be79f33aa8451fc9facc75c372a | [
"MIT"
] | 1 | 2020-02-11T19:29:52.000Z | 2020-02-17T16:03:55.000Z | clippy/__init__.py | gowithfloat/clippy | 0d537ff5fb8d2be79f33aa8451fc9facc75c372a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Clippy (Command Line Interface Parser for Python) crawls the abstract syntax tree (AST) of a Python file and generates a simple command-line interface.
Any function annotated with `@clippy` will have its name, parameters, type annotation, and documentation parsed to generate commands.
"""
# flake8: noqa
from .clip import begin_clippy # pylint: disable=unused-import
from .clip import clippy # pylint: disable=unused-import
| 34.428571 | 151 | 0.753112 | 69 | 482 | 5.246377 | 0.73913 | 0.060773 | 0.110497 | 0.138122 | 0.171271 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00489 | 0.151452 | 482 | 13 | 152 | 37.076923 | 0.880196 | 0.8361 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ef01459d292c37967d4fc784a80bd2bf31727bab | 50 | py | Python | xspectre/calibration/__init__.py | njmiller/pyspextool | 00783a6bc0bc96441e362f7060fd7c250d16e1a1 | [
"MIT"
] | null | null | null | xspectre/calibration/__init__.py | njmiller/pyspextool | 00783a6bc0bc96441e362f7060fd7c250d16e1a1 | [
"MIT"
] | null | null | null | xspectre/calibration/__init__.py | njmiller/pyspextool | 00783a6bc0bc96441e362f7060fd7c250d16e1a1 | [
"MIT"
] | 2 | 2020-03-13T20:33:53.000Z | 2020-03-20T20:56:38.000Z | from xspectre.calibration.arcs import CombinedArc
| 25 | 49 | 0.88 | 6 | 50 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ef17337f786cb54fc99f34185d73ae47fbc4c67e | 84 | py | Python | cgn/regop/__init__.py | FabianKP/cgn | 9963e60c4a4bf4f3869e43d1dfbe11da74887ba5 | [
"MIT"
] | 1 | 2022-03-21T00:40:23.000Z | 2022-03-21T00:40:23.000Z | cgn/regop/__init__.py | FabianKP/cgn | 9963e60c4a4bf4f3869e43d1dfbe11da74887ba5 | [
"MIT"
] | null | null | null | cgn/regop/__init__.py | FabianKP/cgn | 9963e60c4a4bf4f3869e43d1dfbe11da74887ba5 | [
"MIT"
] | null | null | null | from .regularization_operator import RegularizationOperator
from .operators import * | 42 | 59 | 0.880952 | 8 | 84 | 9.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 84 | 2 | 60 | 42 | 0.948052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ef1cbd006320b9527035d1e7dc3db40092c3fee5 | 84 | py | Python | flags/ctf/admin.py | fredericoricco-debug/groupproject | bf22303cff7a2a4bc097f568f0df0b5d72000c3b | [
"MIT"
] | null | null | null | flags/ctf/admin.py | fredericoricco-debug/groupproject | bf22303cff7a2a4bc097f568f0df0b5d72000c3b | [
"MIT"
] | null | null | null | flags/ctf/admin.py | fredericoricco-debug/groupproject | bf22303cff7a2a4bc097f568f0df0b5d72000c3b | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import User
admin.site.register(User) | 21 | 32 | 0.821429 | 13 | 84 | 5.307692 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 84 | 4 | 33 | 21 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ef485c18164994bc09da83904180f75ff3fc09be | 95 | py | Python | import/main.py | seemoo-lab/polypyus_pdom | c08d06d3268578566978697d0560c34f3b224b55 | [
"Apache-2.0"
] | null | null | null | import/main.py | seemoo-lab/polypyus_pdom | c08d06d3268578566978697d0560c34f3b224b55 | [
"Apache-2.0"
] | null | null | null | import/main.py | seemoo-lab/polypyus_pdom | c08d06d3268578566978697d0560c34f3b224b55 | [
"Apache-2.0"
] | null | null | null | from gui.gui import GuiFromWindow
win = GuiFromWindow()
win.Show("Import PDOM signatures", 32) | 23.75 | 38 | 0.778947 | 13 | 95 | 5.692308 | 0.692308 | 0.432432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.115789 | 95 | 4 | 38 | 23.75 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.229167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ef50b0f95f1e6506a84ff99f8009842533a31c28 | 145 | py | Python | anagram_solver/__main__.py | patrickleweryharris/anagram-solver | 176154a889cea9f228e0fde2f563e44e31a12bb5 | [
"MIT"
] | 12 | 2016-08-30T19:31:05.000Z | 2021-12-26T02:49:53.000Z | anagram_solver/__main__.py | patrickleweryharris/anagram-solver | 176154a889cea9f228e0fde2f563e44e31a12bb5 | [
"MIT"
] | null | null | null | anagram_solver/__main__.py | patrickleweryharris/anagram-solver | 176154a889cea9f228e0fde2f563e44e31a12bb5 | [
"MIT"
] | 3 | 2017-09-12T02:28:43.000Z | 2020-09-10T07:04:12.000Z | # -*- coding: utf-8 -*-
"""anagram_solver.__main__: executed when directory is called as script."""
from .anagram_solver import main
main()
| 14.5 | 75 | 0.696552 | 19 | 145 | 5 | 0.789474 | 0.273684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0.158621 | 145 | 9 | 76 | 16.111111 | 0.770492 | 0.634483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
32423f71c51ebcc2de8383d01839296ad8838c31 | 374 | py | Python | jobs/admin.py | Qlwentt/qually | 550efa326532d6fbb154f9d244905e957f30f3ea | [
"MIT"
] | 1 | 2017-06-20T23:18:15.000Z | 2017-06-20T23:18:15.000Z | jobs/admin.py | Qlwentt/qually | 550efa326532d6fbb154f9d244905e957f30f3ea | [
"MIT"
] | 8 | 2017-02-05T05:51:48.000Z | 2017-08-27T15:44:05.000Z | jobs/admin.py | Qlwentt/qually | 550efa326532d6fbb154f9d244905e957f30f3ea | [
"MIT"
] | null | null | null | from django.contrib import admin
from jobs.models import Keyword
from jobs.models import SavedJob
from jobs.models import Resume
from jobs.models import Profile
from jobs.models import CachedJob
# Register your models here.
admin.site.register(Keyword)
admin.site.register(SavedJob)
admin.site.register(Resume)
admin.site.register(Profile)
admin.site.register(CachedJob) | 23.375 | 33 | 0.826203 | 54 | 374 | 5.722222 | 0.296296 | 0.12945 | 0.226537 | 0.323625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09893 | 374 | 16 | 34 | 23.375 | 0.916914 | 0.069519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.545455 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
32878215c8f58ea9df89eb66b0342ef7120bf888 | 101 | py | Python | flakehell/__main__.py | ffix/flakehell-py27 | c5f0f68c58cbd39c677e27b3057a52efdcf4080b | [
"MIT"
] | null | null | null | flakehell/__main__.py | ffix/flakehell-py27 | c5f0f68c58cbd39c677e27b3057a52efdcf4080b | [
"MIT"
] | null | null | null | flakehell/__main__.py | ffix/flakehell-py27 | c5f0f68c58cbd39c677e27b3057a52efdcf4080b | [
"MIT"
] | null | null | null | from __future__ import absolute_import, unicode_literals
from ._cli import entrypoint
entrypoint()
| 16.833333 | 56 | 0.841584 | 12 | 101 | 6.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118812 | 101 | 5 | 57 | 20.2 | 0.876404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
32a58b883a66f9c41a771541e0399bfb1a503354 | 629 | py | Python | app/models/user.py | julianamonr03/SunSquare | 7e4d3bcc3b3f2e91710b835d1e19e3f55055429b | [
"MIT"
] | null | null | null | app/models/user.py | julianamonr03/SunSquare | 7e4d3bcc3b3f2e91710b835d1e19e3f55055429b | [
"MIT"
] | null | null | null | app/models/user.py | julianamonr03/SunSquare | 7e4d3bcc3b3f2e91710b835d1e19e3f55055429b | [
"MIT"
] | 1 | 2022-01-13T15:33:39.000Z | 2022-01-13T15:33:39.000Z | from config import db
from flask_login import UserMixin
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(100), unique=True)
password = db.Column(db.String(45), nullable=False)
#name = db.Column(db.String(45), nullable=False)
#lastname = db.Column(db.String(45), nullable=False)
#age = db.Column(db.Integer, nullable=False)
#typeofwork = db.Column(db.String(100), nullable=False)
#incomes = db.Column(db.Integer, nullable=False)
#phone = db.Column(db.Integer, nullable=False)
#references = db.Column(db.String(200), nullable=False)
| 41.933333 | 59 | 0.704293 | 91 | 629 | 4.846154 | 0.351648 | 0.181406 | 0.226757 | 0.217687 | 0.501134 | 0.414966 | 0.210884 | 0 | 0 | 0 | 0 | 0.027933 | 0.146264 | 629 | 14 | 60 | 44.928571 | 0.793296 | 0.54213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
086946b588febbc6e181914e74192844634e5492 | 39 | py | Python | basics/solutions/rms.py | carlosal1015/ACM-Python-Tutorials-KAUST-2015 | 688acf1017dba7687254a8c880b7f19c6f939c3f | [
"CC-BY-3.0"
] | 5 | 2019-01-16T14:43:43.000Z | 2021-06-29T02:20:47.000Z | basics/solutions/rms.py | carlosal1015/ACM-Python-Tutorials-KAUST-2015 | 688acf1017dba7687254a8c880b7f19c6f939c3f | [
"CC-BY-3.0"
] | null | null | null | basics/solutions/rms.py | carlosal1015/ACM-Python-Tutorials-KAUST-2015 | 688acf1017dba7687254a8c880b7f19c6f939c3f | [
"CC-BY-3.0"
] | 3 | 2017-02-21T06:19:19.000Z | 2021-06-29T02:20:54.000Z | rms = sqrt(sse / y.shape[0])
print rms
| 13 | 28 | 0.641026 | 8 | 39 | 3.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.179487 | 39 | 2 | 29 | 19.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
08a9596cac187df6e6db91920770d8638e0ce04d | 44 | py | Python | src/runscraper.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | 2 | 2018-02-27T10:45:36.000Z | 2018-03-23T11:40:47.000Z | src/runscraper.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | null | null | null | src/runscraper.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | null | null | null | from db import operations
operations.main()
| 14.666667 | 25 | 0.818182 | 6 | 44 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 2 | 26 | 22 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
08b48e08020c924691652fcc6f4e94fda2a7726f | 82 | py | Python | scripts/field/autogen_ZipanguScene_6_6.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | scripts/field/autogen_ZipanguScene_6_6.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | scripts/field/autogen_ZipanguScene_6_6.py | hsienjan/SideQuest-Server | 3e88debaf45615b759d999255908f99a15283695 | [
"MIT"
] | null | null | null | # Character field ID when accessed: 800025005
# ObjectID: 0
# ParentID: 800025005
| 20.5 | 45 | 0.768293 | 10 | 82 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275362 | 0.158537 | 82 | 3 | 46 | 27.333333 | 0.637681 | 0.914634 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3eaf613110f51eb51c719d0f4210b7ad47102419 | 338 | py | Python | abctenant/aws/utils.py | mufasa101/python_daraja | d5c3f4e8357e1054834a4735386f6fc622ef40d3 | [
"MIT"
] | null | null | null | abctenant/aws/utils.py | mufasa101/python_daraja | d5c3f4e8357e1054834a4735386f6fc622ef40d3 | [
"MIT"
] | null | null | null | abctenant/aws/utils.py | mufasa101/python_daraja | d5c3f4e8357e1054834a4735386f6fc622ef40d3 | [
"MIT"
] | null | null | null | from storages.backends.s3boto3 import S3Boto3Storage
# def StaticRootS3BotoStorage(): return S3Boto3Storage(location='static')
# def MediaRootS3BotoStorage(): return S3Boto3Storage(location='media')
StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
| 33.8 | 73 | 0.819527 | 28 | 338 | 9.892857 | 0.5 | 0.31769 | 0.202166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051447 | 0.079882 | 338 | 9 | 74 | 37.555556 | 0.839228 | 0.41716 | 0 | 0 | 0 | 0 | 0.056701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f5aa87bbb81b4cf530d4712791a914ae8dcf9d32 | 46 | py | Python | gip/logging/autoinit.py | elda27/GIP | b79a444a411042967484d7c5d88639084748d1b0 | [
"MIT"
] | null | null | null | gip/logging/autoinit.py | elda27/GIP | b79a444a411042967484d7c5d88639084748d1b0 | [
"MIT"
] | null | null | null | gip/logging/autoinit.py | elda27/GIP | b79a444a411042967484d7c5d88639084748d1b0 | [
"MIT"
] | null | null | null | from gip import logging
logging.initialize()
| 11.5 | 23 | 0.804348 | 6 | 46 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 46 | 3 | 24 | 15.333333 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f5dfb4ff4036da14b7adf30b6d0f8c681601097c | 215 | py | Python | models/models.py | noahkw/jag-url | 4e24ce251213efbbb366bf686beab47a217c44ae | [
"MIT"
] | null | null | null | models/models.py | noahkw/jag-url | 4e24ce251213efbbb366bf686beab47a217c44ae | [
"MIT"
] | null | null | null | models/models.py | noahkw/jag-url | 4e24ce251213efbbb366bf686beab47a217c44ae | [
"MIT"
] | null | null | null | from sqlalchemy import Column, String
from database import Base
class Url(Base):
__tablename__ = 'urls'
short_url = Column(String, primary_key=True, index=True)
long_url = Column(String, index=True)
| 19.545455 | 60 | 0.730233 | 29 | 215 | 5.172414 | 0.586207 | 0.24 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181395 | 215 | 10 | 61 | 21.5 | 0.852273 | 0 | 0 | 0 | 0 | 0 | 0.018605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
eb219ff0c73974990b0284e10070ea141530c7cd | 87 | py | Python | Python Programming/02. Control Flow/01-Comparing Operators.py | luckyrabbit85/Python | ed134fd70b4a7b84b183b87b85ad5190f54c9526 | [
"MIT"
] | 1 | 2021-07-15T18:40:26.000Z | 2021-07-15T18:40:26.000Z | Python Programming/02. Control Flow/01-Comparing Operators.py | luckyrabbit85/Python | ed134fd70b4a7b84b183b87b85ad5190f54c9526 | [
"MIT"
] | null | null | null | Python Programming/02. Control Flow/01-Comparing Operators.py | luckyrabbit85/Python | ed134fd70b4a7b84b183b87b85ad5190f54c9526 | [
"MIT"
] | null | null | null | print(1 > 2)
print("tiger" > "elephant")
print(ord("t"))
print(ord("a"))
print(9 == 9)
| 14.5 | 27 | 0.574713 | 15 | 87 | 3.333333 | 0.6 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.126437 | 87 | 5 | 28 | 17.4 | 0.605263 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
de6ed9b0ff179bb3730324525d43ef0ad5c4f26f | 231 | py | Python | HeterogeneousCore/CUDATest/python/prod1FromCUDA_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | HeterogeneousCore/CUDATest/python/prod1FromCUDA_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | HeterogeneousCore/CUDATest/python/prod1FromCUDA_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from HeterogeneousCore.CUDATest.testCUDAProducerGPUtoCPU_cfi import testCUDAProducerGPUtoCPU as _testCUDAProducerGPUtoCPU
prod1FromCUDA = _testCUDAProducerGPUtoCPU.clone(src = "prod1CUDA")
| 46.2 | 121 | 0.883117 | 20 | 231 | 10.05 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009302 | 0.069264 | 231 | 4 | 122 | 57.75 | 0.925581 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
de6f9bd8f89729f01c6cb11464d8212ca33a68a3 | 341 | py | Python | stockpyle/stores.py | mjpizz/stockpyle | 627ab3038daff4a68ccfc5359b329b253fabc8ef | [
"BSD-3-Clause"
] | 4 | 2016-04-16T20:55:58.000Z | 2021-03-07T00:27:26.000Z | stockpyle/stores.py | rblomberg/stockpyle | 627ab3038daff4a68ccfc5359b329b253fabc8ef | [
"BSD-3-Clause"
] | 1 | 2016-03-19T15:28:24.000Z | 2016-03-21T08:32:57.000Z | stockpyle/stores.py | rblomberg/stockpyle | 627ab3038daff4a68ccfc5359b329b253fabc8ef | [
"BSD-3-Clause"
] | 1 | 2016-03-11T18:09:33.000Z | 2016-03-11T18:09:33.000Z | from stockpyle._base import BaseStore, BaseDictionaryStore
from stockpyle._threadlocal import ThreadLocalStore
from stockpyle._procmem import ProcessMemoryStore
from stockpyle._memcache import MemcacheStore
from stockpyle._shove import ShoveStore
from stockpyle._sqlalchemy import SqlAlchemyStore
from stockpyle._chained import ChainedStore
| 42.625 | 58 | 0.891496 | 36 | 341 | 8.25 | 0.5 | 0.306397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085044 | 341 | 7 | 59 | 48.714286 | 0.951923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
de8a6d4e5e62e79d25a9e228bca7281fd5f21d4d | 41 | py | Python | tests/__init__.py | theislab/pertpy | 54a9244fd032cdab2fb7fc0e4a2208ba088ff54e | [
"MIT"
] | 1 | 2021-06-23T14:16:14.000Z | 2021-06-23T14:16:14.000Z | tests/__init__.py | theislab/pertpy | 54a9244fd032cdab2fb7fc0e4a2208ba088ff54e | [
"MIT"
] | 36 | 2021-07-12T10:42:03.000Z | 2022-03-29T13:07:01.000Z | tests/__init__.py | theislab/pertpy | 54a9244fd032cdab2fb7fc0e4a2208ba088ff54e | [
"MIT"
] | 1 | 2022-01-28T13:27:58.000Z | 2022-01-28T13:27:58.000Z | """Test suite for the pertpy package."""
| 20.5 | 40 | 0.682927 | 6 | 41 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 1 | 41 | 41 | 0.8 | 0.829268 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
72062b105fcdaf01bbabf192fa814be2d64a35cd | 30 | py | Python | oruliner/__init__.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | 3 | 2019-06-11T16:19:09.000Z | 2020-05-25T18:43:38.000Z | oruliner/__init__.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | null | null | null | oruliner/__init__.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | null | null | null | from .oruliner import oruline
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9d29c2d04a0f95eeaf1301c0b041273c5207db65 | 89 | py | Python | Uche Clare/Phase 1/Python Basic 1/Day 12/Task 105.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Uche Clare/Phase 1/Python Basic 1/Day 12/Task 105.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Uche Clare/Phase 1/Python Basic 1/Day 12/Task 105.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #Write a Python program to get the users environment.
import os
print()
print(os.environ) | 22.25 | 53 | 0.786517 | 15 | 89 | 4.666667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 89 | 4 | 54 | 22.25 | 0.909091 | 0.58427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
9d2e42a3abb9432f2d0ad86786cb782fd707cebf | 36 | py | Python | tasker/tasker/figurer/test/resources/nested-imports.py | sara-nl/data-exchange | 52b9c2554a52b56686f3a06f583a7a6454bf6df6 | [
"Apache-2.0"
] | 4 | 2020-12-03T14:13:29.000Z | 2021-04-19T03:03:19.000Z | tasker/tasker/figurer/test/resources/nested-imports.py | sara-nl/data-exchange | 52b9c2554a52b56686f3a06f583a7a6454bf6df6 | [
"Apache-2.0"
] | null | null | null | tasker/tasker/figurer/test/resources/nested-imports.py | sara-nl/data-exchange | 52b9c2554a52b56686f3a06f583a7a6454bf6df6 | [
"Apache-2.0"
] | null | null | null | import foo
def bar:
import bar | 7.2 | 14 | 0.666667 | 6 | 36 | 4 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 36 | 5 | 14 | 7.2 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.666667 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
19dc2408d050d0563b3ba56582d9efd18c9db60c | 4,432 | py | Python | pyaz/batch/pool/autoscale/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/batch/pool/autoscale/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/batch/pool/autoscale/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | '''
Manage automatic scaling of Batch pools.
'''
from .... pyaz_utils import _call_az
def disable(pool_id, account_endpoint=None, account_key=None, account_name=None):
'''
Required Parameters:
- pool_id -- The ID of the Pool on which to disable automatic scaling.
Optional Parameters:
- account_endpoint -- Batch service endpoint. Alternatively, set by environment variable: AZURE_BATCH_ENDPOINT
- account_key -- Batch account key. Alternatively, set by environment variable: AZURE_BATCH_ACCESS_KEY
- account_name -- Batch account name. Alternatively, set by environment variable: AZURE_BATCH_ACCOUNT
'''
return _call_az("az batch pool autoscale disable", locals())
def enable(pool_id, account_endpoint=None, account_key=None, account_name=None, auto_scale_evaluation_interval=None, auto_scale_formula=None, if_match=None, if_modified_since=None, if_none_match=None, if_unmodified_since=None):
'''
Required Parameters:
- pool_id -- The ID of the Pool on which to enable automatic scaling.
Optional Parameters:
- account_endpoint -- Batch service endpoint. Alternatively, set by environment variable: AZURE_BATCH_ENDPOINT
- account_key -- Batch account key. Alternatively, set by environment variable: AZURE_BATCH_ACCESS_KEY
- account_name -- Batch account name. Alternatively, set by environment variable: AZURE_BATCH_ACCOUNT
- auto_scale_evaluation_interval -- The default value is 15 minutes. The minimum and maximum value are 5 minutes and 168 hours respectively. If you specify a value less than 5 minutes or greater than 168 hours, the Batch service rejects the request with an invalid property value error; if you are calling the REST API directly, the HTTP status code is 400 (Bad Request). If you specify a new interval, then the existing autoscale evaluation schedule will be stopped and a new autoscale evaluation schedule will be started, with its starting time being the time when this request was issued.
- auto_scale_formula -- The formula is checked for validity before it is applied to the Pool. If the formula is not valid, the Batch service rejects the request with detailed error information. For more information about specifying this formula, see Automatically scale Compute Nodes in an Azure Batch Pool (https://azure.microsoft.com/en-us/documentation/articles/batch-automatic-scaling).
- if_match -- An ETag value associated with the version of the resource known to the client. The operation will be performed only if the resource's current ETag on the service exactly matches the value specified by the client.
- if_modified_since -- A timestamp indicating the last modified time of the resource known to the client. The operation will be performed only if the resource on the service has been modified since the specified time.
- if_none_match -- An ETag value associated with the version of the resource known to the client. The operation will be performed only if the resource's current ETag on the service does not match the value specified by the client.
- if_unmodified_since -- A timestamp indicating the last modified time of the resource known to the client. The operation will be performed only if the resource on the service has not been modified since the specified time.
'''
return _call_az("az batch pool autoscale enable", locals())
def evaluate(auto_scale_formula, pool_id, account_endpoint=None, account_key=None, account_name=None):
'''
Required Parameters:
- auto_scale_formula -- The formula is validated and its results calculated, but it is not applied to the Pool. To apply the formula to the Pool, 'Enable automatic scaling on a Pool'. For more information about specifying this formula, see Automatically scale Compute Nodes in an Azure Batch Pool (https://azure.microsoft.com/en-us/documentation/articles/batch-automatic-scaling).
- pool_id -- The ID of the Pool on which to evaluate the automatic scaling formula.
Optional Parameters:
- account_endpoint -- Batch service endpoint. Alternatively, set by environment variable: AZURE_BATCH_ENDPOINT
- account_key -- Batch account key. Alternatively, set by environment variable: AZURE_BATCH_ACCESS_KEY
- account_name -- Batch account name. Alternatively, set by environment variable: AZURE_BATCH_ACCOUNT
'''
return _call_az("az batch pool autoscale evaluate", locals())
| 77.754386 | 595 | 0.777753 | 651 | 4,432 | 5.168971 | 0.235023 | 0.032689 | 0.048143 | 0.077563 | 0.721843 | 0.702229 | 0.665973 | 0.617236 | 0.617236 | 0.617236 | 0 | 0.003521 | 0.166968 | 4,432 | 56 | 596 | 79.142857 | 0.907909 | 0.813628 | 0 | 0 | 0 | 0 | 0.13964 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0.142857 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
19f2b4d8c895d124b6fd863a915ddb83a70cbd17 | 165 | py | Python | obm/__init__.py | aploium/obm | d68a15d2b53accbb661eb66f6610825285f4ef7b | [
"MIT"
] | null | null | null | obm/__init__.py | aploium/obm | d68a15d2b53accbb661eb66f6610825285f4ef7b | [
"MIT"
] | null | null | null | obm/__init__.py | aploium/obm | d68a15d2b53accbb661eb66f6610825285f4ef7b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# coding=utf-8
from .model import Model
from .fields import IntField, BytesField, BitsField
from .fields_extended import PrefixedOptionsField
| 27.5 | 51 | 0.812121 | 22 | 165 | 6.045455 | 0.727273 | 0.150376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013605 | 0.109091 | 165 | 5 | 52 | 33 | 0.891156 | 0.206061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c22f9b147f63cbfb9fcbd76a7a3749cac6bb85af | 326 | py | Python | msg.py | lobadic/gladiatus-api | 819351b7f771ba70f2fd3800986fb2143bbaf92f | [
"MIT"
] | 3 | 2019-02-14T17:43:42.000Z | 2019-11-27T15:12:25.000Z | msg.py | lobadic/gladiatus-api | 819351b7f771ba70f2fd3800986fb2143bbaf92f | [
"MIT"
] | null | null | null | msg.py | lobadic/gladiatus-api | 819351b7f771ba70f2fd3800986fb2143bbaf92f | [
"MIT"
] | null | null | null | import time
#time.strftime("%d-%m-%Y %H:%M:%S", time.gmtime())
def info(text):
    print(f'[INFO {time.strftime("%H:%M:%S", time.gmtime())}] {text}')


def warning(text):
    print(f'[WARNING {time.strftime("%H:%M:%S", time.gmtime())}] {text}')


def error(text):
print(f'[ERROR {time.strftime("%H:%M:%S", time.gmtime())}] {text}') | 27.166667 | 70 | 0.601227 | 54 | 326 | 3.62963 | 0.296296 | 0.244898 | 0.061224 | 0.142857 | 0.540816 | 0.47449 | 0.47449 | 0.47449 | 0.326531 | 0 | 0 | 0 | 0.088957 | 326 | 12 | 71 | 27.166667 | 0.659933 | 0.150307 | 0 | 0 | 0 | 0 | 0.620939 | 0.281588 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0.142857 | 0 | 0.571429 | 0.428571 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 5 |
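# The helpers above stamp each message with time.strftime("%H:%M:%S",
# time.gmtime()). A quick illustration of that format, pinned to the Unix
# epoch so the value is deterministic:

```python
import time

# gmtime(0) is 1970-01-01 00:00:00 UTC, so the clock part reads "00:00:00"
stamp = time.strftime("%H:%M:%S", time.gmtime(0))
print(f'[INFO {stamp}] ready')  # → [INFO 00:00:00] ready
```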
dfa58769055e9aa6d9389e6e8922733c1b2c168a | 101 | py | Python | gui/hooks/hook-scipy.py | duke-lungmap-team/lungmap-pipeline | 8f27c76ec42f1cb1f8cbc8063848f808bb160ce7 | [
"BSD-2-Clause"
] | null | null | null | gui/hooks/hook-scipy.py | duke-lungmap-team/lungmap-pipeline | 8f27c76ec42f1cb1f8cbc8063848f808bb160ce7 | [
"BSD-2-Clause"
] | 6 | 2019-06-03T15:05:43.000Z | 2019-06-28T14:36:20.000Z | gui/hooks/hook-scipy.py | duke-lungmap-team/lungmap-pipeline | 8f27c76ec42f1cb1f8cbc8063848f808bb160ce7 | [
"BSD-2-Clause"
] | 1 | 2019-06-07T14:57:42.000Z | 2019-06-07T14:57:42.000Z | from PyInstaller.utils.hooks import collect_submodules
hiddenimports = collect_submodules('scipy')
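# collect_submodules walks a package and returns the dotted names of
# everything under it, so PyInstaller bundles submodules that scipy only
# imports dynamically. A rough stdlib analogue of that one-level walk, using
# pkgutil and demonstrated on a stdlib package rather than scipy (which may
# not be installed):

```python
import importlib
import pkgutil

def list_submodules(package_name):
    # Walk one level of a package and return dotted submodule names,
    # roughly what PyInstaller's collect_submodules gathers.
    pkg = importlib.import_module(package_name)
    return sorted(package_name + '.' + info.name
                  for info in pkgutil.iter_modules(pkg.__path__))

# json is a package with submodules such as json.decoder and json.tool
print(list_submodules('json'))
```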
| 20.2 | 54 | 0.841584 | 11 | 101 | 7.545455 | 0.818182 | 0.409639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089109 | 101 | 4 | 55 | 25.25 | 0.902174 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
dfd45a5b153b179ccd6c7103301eae6994c00b4d | 66 | py | Python | RL/utils/__init__.py | JUSTLOVELE/MobileDevStudy | ddcfd67d9ad66dd710fcbb355406bab3679ebaf7 | [
"MIT"
] | 1 | 2021-09-26T04:31:52.000Z | 2021-09-26T04:31:52.000Z | RL/utils/__init__.py | JUSTLOVELE/MobileDevStudy | ddcfd67d9ad66dd710fcbb355406bab3679ebaf7 | [
"MIT"
] | null | null | null | RL/utils/__init__.py | JUSTLOVELE/MobileDevStudy | ddcfd67d9ad66dd710fcbb355406bab3679ebaf7 | [
"MIT"
] | 1 | 2020-06-28T01:04:38.000Z | 2020-06-28T01:04:38.000Z | import numpy as np
import torch as torch
print(torch.__version__) | 16.5 | 24 | 0.818182 | 11 | 66 | 4.545455 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 66 | 4 | 24 | 16.5 | 0.877193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
dfe5a09adcaae8eba498ceb70308db4a0a3f5dc5 | 737 | py | Python | apgl/graph/__init__.py | mathemaphysics/APGL | 6ca7c176e04017feeae00c4cee069fd126df0fbc | [
"BSD-3-Clause"
] | 13 | 2015-02-19T14:39:09.000Z | 2021-04-12T01:22:32.000Z | apgl/graph/__init__.py | mathemaphysics/APGL | 6ca7c176e04017feeae00c4cee069fd126df0fbc | [
"BSD-3-Clause"
] | 1 | 2020-07-29T07:09:33.000Z | 2020-07-29T07:09:33.000Z | apgl/graph/__init__.py | mathemaphysics/APGL | 6ca7c176e04017feeae00c4cee069fd126df0fbc | [
"BSD-3-Clause"
] | 7 | 2015-03-16T07:26:49.000Z | 2021-01-12T06:57:27.000Z | from apgl.graph.SparseGraph import SparseGraph
from apgl.graph.GeneralVertexList import GeneralVertexList
from apgl.graph.DenseGraph import DenseGraph
from apgl.graph.DictGraph import DictGraph
from apgl.graph.VertexList import VertexList
from apgl.graph.GraphUtils import GraphUtils
from apgl.graph.GraphStatistics import GraphStatistics
from apgl.graph.AbstractSingleGraph import AbstractSingleGraph
from apgl.graph.AbstractMatrixGraph import AbstractMatrixGraph
# Optional modules are tried and ignored if not present
try:
    from apgl.graph.PySparseGraph import PySparseGraph
except ImportError:
    pass

try:
    from apgl.graph.CsArrayGraph import CsArrayGraph
except ImportError:
    pass
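# The try/except blocks above implement the common optional-dependency
# pattern: attempt the import and silently skip the feature when the package
# is absent. A self-contained sketch of the same idea, factored into a helper
# (the helper name is an invention for illustration):

```python
import importlib

def optional_import(module_name):
    # Return the module if available, else None; callers can then gate
    # optional features on the result instead of wrapping every import
    # site in try/except.
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

json_mod = optional_import('json')                  # stdlib: always present
missing = optional_import('no_such_module_xyz_123')  # absent: yields None
print(json_mod is not None, missing is None)  # → True True
```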
| 35.095238 | 63 | 0.822252 | 87 | 737 | 6.965517 | 0.344828 | 0.145215 | 0.235974 | 0.052805 | 0.092409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141113 | 737 | 20 | 64 | 36.85 | 0.957346 | 0.071913 | 0 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.117647 | 0.764706 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
dff5f460d1f5c6edc026c55ad01df99d6b48eeb8 | 128 | py | Python | AnovaMaster/__init__.py | lolouk44/anovamaster | 5054d027e691dd43be02e50be9b2bfeb3a5dfcbd | [
"BSD-2-Clause"
] | 2 | 2020-09-08T21:12:09.000Z | 2022-02-13T05:57:19.000Z | AnovaMaster/__init__.py | lolouk44/anovamaster | 5054d027e691dd43be02e50be9b2bfeb3a5dfcbd | [
"BSD-2-Clause"
] | null | null | null | AnovaMaster/__init__.py | lolouk44/anovamaster | 5054d027e691dd43be02e50be9b2bfeb3a5dfcbd | [
"BSD-2-Clause"
] | 1 | 2020-09-28T21:18:23.000Z | 2020-09-28T21:18:23.000Z | from .AnovaConfiguration import AnovaConfiguration
from .AnovaMaster import AnovaMaster
from .AnovaStatus import AnovaStatus
| 32 | 51 | 0.859375 | 12 | 128 | 9.166667 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117188 | 128 | 3 | 52 | 42.666667 | 0.973451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a03b735ebc54e3a5c7e8f784cb5609781b344cb7 | 111 | py | Python | Jeewa/chapter 5/3.py | Eurydia/Xian-assignment | 4a7e4bcd3d4999ea7429054fec1792064c96ff30 | [
"MIT"
] | null | null | null | Jeewa/chapter 5/3.py | Eurydia/Xian-assignment | 4a7e4bcd3d4999ea7429054fec1792064c96ff30 | [
"MIT"
] | null | null | null | Jeewa/chapter 5/3.py | Eurydia/Xian-assignment | 4a7e4bcd3d4999ea7429054fec1792064c96ff30 | [
"MIT"
] | null | null | null | n = int(input('Enter an integer(n): '))
print(f'Fac({n}) => {"*".join(str(i) for i in range(1, n+1))} = {f}')
| 27.75 | 69 | 0.513514 | 22 | 111 | 2.590909 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.153153 | 111 | 3 | 70 | 37 | 0.585106 | 0 | 0 | 0 | 0 | 0.5 | 0.720721 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
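# The script above reads n interactively; the same factorial-with-expansion
# output can be produced non-interactively. Using math.factorial here is a
# convenient equivalent, not necessarily what the original author intended:

```python
import math

def fac_report(n):
    # Build "1*2*...*n" alongside the computed factorial
    expansion = '*'.join(str(i) for i in range(1, n + 1))
    return f'Fac({n}) => {expansion} = {math.factorial(n)}'

print(fac_report(5))  # → Fac(5) => 1*2*3*4*5 = 120
```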
a03d132123290dbd402ce6ce24330fb27a1b924b | 81 | py | Python | src/autotests/conftest.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | 3 | 2019-10-27T07:40:26.000Z | 2020-04-18T20:44:08.000Z | src/autotests/conftest.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | 1 | 2019-11-04T04:16:29.000Z | 2019-11-04T04:16:29.000Z | src/autotests/conftest.py | NSKgooner/hack_moscow | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | [
"Apache-2.0"
] | null | null | null | import pytest
@pytest.fixture
def url():
return 'http://206.81.5.208:8080'
| 11.571429 | 37 | 0.666667 | 13 | 81 | 4.153846 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191176 | 0.160494 | 81 | 6 | 38 | 13.5 | 0.602941 | 0 | 0 | 0 | 0 | 0 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
a07666e340ef2d93d5ba996ce114464365405086 | 339 | py | Python | core/__init__.py | Pomb/Yahtzee | b15af59299ef68bdf44c56fb314514fe76d416ca | [
"MIT"
] | null | null | null | core/__init__.py | Pomb/Yahtzee | b15af59299ef68bdf44c56fb314514fe76d416ca | [
"MIT"
] | null | null | null | core/__init__.py | Pomb/Yahtzee | b15af59299ef68bdf44c56fb314514fe76d416ca | [
"MIT"
] | null | null | null | from .save.jsonSave import JsonSave # noqa
from .dice import d6Set # noqa
from .command import * # noqa
from .player import Player # noqa
from .turn import Turn # noqa
from .titleVisualizer import TitleVisualizer # noqa
from .scoreRules import ScoreRules # noqa
from .gameData import GameData # noqa
from .menu import Menu # noqa
| 33.9 | 52 | 0.755162 | 45 | 339 | 5.688889 | 0.311111 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003623 | 0.185841 | 339 | 9 | 53 | 37.666667 | 0.923913 | 0.129794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a0a948e92093e299e5e3f500126265645d2c3bc3 | 67 | py | Python | pykeos/__init__.py | yop0/pykeos | 02c141cfb3a56a221a3a47ab90b8da047bee2daf | [
"BSD-3-Clause"
] | null | null | null | pykeos/__init__.py | yop0/pykeos | 02c141cfb3a56a221a3a47ab90b8da047bee2daf | [
"BSD-3-Clause"
] | null | null | null | pykeos/__init__.py | yop0/pykeos | 02c141cfb3a56a221a3a47ab90b8da047bee2daf | [
"BSD-3-Clause"
] | null | null | null | from .tools import *
from .systems import *
from .measures import * | 22.333333 | 23 | 0.746269 | 9 | 67 | 5.555556 | 0.555556 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164179 | 67 | 3 | 23 | 22.333333 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
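# The star imports above re-export whatever each submodule makes public;
# defining __all__ in a module is how its authors control exactly what a
# `from module import *` pulls in. A small demonstration with a synthetic
# module (the module and function names are invented):

```python
import sys
import types

# Build a throwaway module: two public names, but __all__ exports only one
mod = types.ModuleType('demo_measures')
mod.rms = lambda xs: (sum(x * x for x in xs) / len(xs)) ** 0.5
mod.mean = lambda xs: sum(xs) / len(xs)
mod.__all__ = ['rms']
sys.modules['demo_measures'] = mod

ns = {}
exec('from demo_measures import *', ns)
print(sorted(k for k in ns if not k.startswith('__')))  # → ['rms']
```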
260d01a152294bf619300c6eeb10d8d2f53b071b | 11,516 | py | Python | test/console/pathtests.py | geoco84/comodit-client | 4cf47e60a6739ed8b88ce8b955ed57375c4d400d | [
"MIT"
] | 1 | 2015-01-20T17:24:34.000Z | 2015-01-20T17:24:34.000Z | test/console/pathtests.py | geoco84/comodit-client | 4cf47e60a6739ed8b88ce8b955ed57375c4d400d | [
"MIT"
] | null | null | null | test/console/pathtests.py | geoco84/comodit-client | 4cf47e60a6739ed8b88ce8b955ed57375c4d400d | [
"MIT"
] | 24 | 2016-09-07T15:28:00.000Z | 2021-12-08T16:03:16.000Z | import unittest
import comodit_client.console.items as items
from comodit_client.console.paths import resolve_dots, absolute_elems, PathModel
from unittest import TestCase
class Mock:
pass
class PathResolving(TestCase):
def test_resolve_dots(self):
self.assertEqual(resolve_dots(['x', '.']), ['x'])
self.assertEqual(resolve_dots(['x', '.', 'y']), ['x', 'y'])
self.assertEqual(resolve_dots(['x', 'y', '..']), ['x'])
self.assertEqual(resolve_dots(['x', '..']), [])
self.assertEqual(resolve_dots(['x', '..', 'y']), ['y'])
self.assertEqual(resolve_dots(['x', 'y', 'z', '..', '..']), ['x'])
self.assertEqual(resolve_dots(['..', '..']), [])
def test_absolute_elems(self):
current = Mock()
current.absolute_path = 'x'
self.assertEqual(absolute_elems(current, 'y'), ['x', 'y'])
def test_absolute_elems_slash(self):
current = Mock()
self.assertEqual(absolute_elems(current, '/'), [])
def test_resolve_root(self):
(res_vars, node_type) = PathModel().resolve([])
self.assertEqual(node_type['label'], items.RootItem)
self.assertEqual(res_vars, {})
def test_resolve_org(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name'])
self.assertEqual(node_type['label'], items.OrganizationItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_org_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name', 'settings'])
self.assertEqual(node_type['label'], items.OrgSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_org_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'settings', 'name2'])
self.assertEqual(node_type['label'], items.OrgSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'key': 'name2'})
def test_resolve_envs(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name', 'environments'])
self.assertEqual(node_type['label'], items.EnvironmentsItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_env(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2'])
self.assertEqual(node_type['label'], items.EnvironmentItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2'})
def test_resolve_env_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'settings'])
self.assertEqual(node_type['label'], items.EnvSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2'})
def test_resolve_env_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'settings', 'name3'])
self.assertEqual(node_type['label'], items.EnvSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'key': 'name3'})
def test_resolve_hosts(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts'])
self.assertEqual(node_type['label'], items.HostsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2'})
def test_resolve_host(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3'])
self.assertEqual(node_type['label'], items.HostItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'settings'])
self.assertEqual(node_type['label'], items.HostSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'settings', 'name4'])
self.assertEqual(node_type['label'], items.HostSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'key': 'name4'})
def test_resolve_apps(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name', 'applications'])
self.assertEqual(node_type['label'], items.ApplicationsItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_app(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'applications', 'name2'])
self.assertEqual(node_type['label'], items.ApplicationItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'app_name': 'name2'})
def test_resolve_dists(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name', 'distributions'])
self.assertEqual(node_type['label'], items.DistributionsItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_dist(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'distributions', 'name2'])
self.assertEqual(node_type['label'], items.DistributionItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'dist_name': 'name2'})
def test_resolve_dist_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'distributions', 'name2', 'settings'])
self.assertEqual(node_type['label'], items.DistSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'dist_name': 'name2'})
def test_resolve_dist_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'distributions', 'name2', 'settings', 'name3'])
self.assertEqual(node_type['label'], items.DistSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'dist_name': 'name2', 'key': 'name3'})
def test_resolve_plats(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name', 'platforms'])
self.assertEqual(node_type['label'], items.PlatformsItem)
self.assertEqual(res_vars, {'org_name': 'name'})
def test_resolve_plat(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'platforms', 'name2'])
self.assertEqual(node_type['label'], items.PlatformItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'plat_name': 'name2'})
def test_resolve_plat_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'platforms', 'name2', 'settings'])
self.assertEqual(node_type['label'], items.PlatSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'plat_name': 'name2'})
def test_resolve_plat_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'platforms', 'name2', 'settings', 'name3'])
self.assertEqual(node_type['label'], items.PlatSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'plat_name': 'name2', 'key': 'name3'})
def test_resolve_host_apps(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'applications'])
self.assertEqual(node_type['label'], items.HostAppsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_app(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'applications', 'name4'])
self.assertEqual(node_type['label'], items.HostAppItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'app_name': 'name4'})
def test_resolve_host_dist(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'distribution'])
self.assertEqual(node_type['label'], items.HostDistItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_plat(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'platform'])
self.assertEqual(node_type['label'], items.HostPlatItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_app_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'applications', 'name4', 'settings'])
self.assertEqual(node_type['label'], items.HostAppSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'app_name': 'name4'})
def test_resolve_host_app_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'applications', 'name4', 'settings', 'name5'])
self.assertEqual(node_type['label'], items.HostAppSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'app_name': 'name4', 'key': 'name5'})
def test_resolve_host_dist_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'distribution', 'settings'])
self.assertEqual(node_type['label'], items.HostDistSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_dist_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'distribution', 'settings', 'name4'])
self.assertEqual(node_type['label'], items.HostDistSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'key': 'name4'})
def test_resolve_host_plat_settings(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'platform', 'settings'])
self.assertEqual(node_type['label'], items.HostPlatSettingsItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
def test_resolve_host_plat_setting(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'platform', 'settings', 'name4'])
self.assertEqual(node_type['label'], items.HostPlatSettingItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3', 'key': 'name4'})
def test_resolve_host_instance(self):
(res_vars, node_type) = PathModel().resolve(['organizations', 'name1', 'environments', 'name2', 'hosts', 'name3', 'instance'])
self.assertEqual(node_type['label'], items.HostInstanceItem)
self.assertEqual(res_vars, {'org_name': 'name1', 'env_name': 'name2', 'host_name': 'name3'})
if __name__ == '__main__':
unittest.main()
| 57.293532 | 168 | 0.658562 | 1,293 | 11,516 | 5.611756 | 0.075793 | 0.155044 | 0.065601 | 0.068219 | 0.860943 | 0.837238 | 0.788175 | 0.680816 | 0.651461 | 0.590546 | 0 | 0.015604 | 0.154133 | 11,516 | 200 | 169 | 57.58 | 0.729289 | 0 | 0 | 0.179487 | 0 | 0 | 0.237061 | 0 | 0 | 0 | 0 | 0 | 0.480769 | 1 | 0.230769 | false | 0.00641 | 0.025641 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
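# The PathModel().resolve(...) calls above map a list of path elements onto a
# node type while capturing named segments such as org_name. A minimal sketch
# of that kind of template matcher; the '{var}' template syntax is an
# invention for illustration, not PathModel's actual machinery:

```python
def match_template(template, elems):
    # template: list of literals and '{var}' placeholders.
    # Returns captured variables on a match, else None.
    if len(template) != len(elems):
        return None
    captured = {}
    for pattern, elem in zip(template, elems):
        if pattern.startswith('{') and pattern.endswith('}'):
            captured[pattern[1:-1]] = elem
        elif pattern != elem:
            return None
    return captured

tpl = ['organizations', '{org_name}', 'environments', '{env_name}']
print(match_template(tpl, ['organizations', 'name1', 'environments', 'name2']))
# → {'org_name': 'name1', 'env_name': 'name2'}
```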
13fdbd4a2394334c1021403873dadb3dbe7525c4 | 2,618 | py | Python | tests/parsers/c_parser/exprs/binary_ops/neq_op_expr_tests.py | mehrdad-shokri/retdec-regression-tests-framework | 9c3edcd0a7bc292a0d5b5cbfb4315010c78d3bc3 | [
"MIT"
] | 21 | 2017-12-12T20:38:43.000Z | 2019-04-14T12:46:10.000Z | tests/parsers/c_parser/exprs/binary_ops/neq_op_expr_tests.py | mehrdad-shokri/retdec-regression-tests-framework | 9c3edcd0a7bc292a0d5b5cbfb4315010c78d3bc3 | [
"MIT"
] | 6 | 2018-01-06T13:32:23.000Z | 2018-09-14T15:09:11.000Z | tests/parsers/c_parser/exprs/binary_ops/neq_op_expr_tests.py | mehrdad-shokri/retdec-regression-tests-framework | 9c3edcd0a7bc292a0d5b5cbfb4315010c78d3bc3 | [
"MIT"
] | 11 | 2017-12-12T20:38:46.000Z | 2018-07-19T03:12:03.000Z | """
Tests for the
:module`regression_tests.parsers.c_parser.exprs.binary_ops.neq_op_expr`
module.
"""
from tests.parsers.c_parser import WithModuleTests
class NeqOpExprTests(WithModuleTests):
"""Tests for `NeqOpExpr`."""
def test_neq_op_expr_is_neq_op(self):
neq_op_expr = self.get_expr('1 != 2', 'int')
self.assertTrue(neq_op_expr.is_neq_op())
def test_neq_op_expr_is_no_other_expr(self):
neq_op_expr = self.get_expr('1 != 2', 'int')
self.assertFalse(neq_op_expr.is_eq_op())
self.assertFalse(neq_op_expr.is_gt_op())
self.assertFalse(neq_op_expr.is_gt_eq_op())
self.assertFalse(neq_op_expr.is_lt_op())
self.assertFalse(neq_op_expr.is_lt_eq_op())
self.assertFalse(neq_op_expr.is_add_op())
self.assertFalse(neq_op_expr.is_sub_op())
self.assertFalse(neq_op_expr.is_mul_op())
self.assertFalse(neq_op_expr.is_mod_op())
self.assertFalse(neq_op_expr.is_div_op())
self.assertFalse(neq_op_expr.is_and_op())
self.assertFalse(neq_op_expr.is_or_op())
self.assertFalse(neq_op_expr.is_bit_and_op())
self.assertFalse(neq_op_expr.is_bit_or_op())
self.assertFalse(neq_op_expr.is_bit_xor_op())
self.assertFalse(neq_op_expr.is_bit_shl_op())
self.assertFalse(neq_op_expr.is_bit_shr_op())
self.assertFalse(neq_op_expr.is_not_op())
self.assertFalse(neq_op_expr.is_neg_op())
self.assertFalse(neq_op_expr.is_assign_op())
self.assertFalse(neq_op_expr.is_address_op())
self.assertFalse(neq_op_expr.is_deref_op())
self.assertFalse(neq_op_expr.is_array_index_op())
self.assertFalse(neq_op_expr.is_comma_op())
self.assertFalse(neq_op_expr.is_ternary_op())
self.assertFalse(neq_op_expr.is_call())
self.assertFalse(neq_op_expr.is_cast())
self.assertFalse(neq_op_expr.is_pre_increment_op())
self.assertFalse(neq_op_expr.is_post_increment_op())
self.assertFalse(neq_op_expr.is_pre_decrement_op())
self.assertFalse(neq_op_expr.is_post_decrement_op())
self.assertFalse(neq_op_expr.is_compound_assign_op())
self.assertFalse(neq_op_expr.is_struct_ref_op())
self.assertFalse(neq_op_expr.is_struct_deref_op())
def test_repr_returns_correct_repr(self):
add_op_expr = self.get_expr('1 != 2', 'int')
self.assertEqual(repr(add_op_expr), '<NeqOpExpr lhs=1 rhs=2>')
def test_str_returns_correct_str(self):
add_op_expr = self.get_expr('1 != 2', 'int')
self.assertEqual(str(add_op_expr), '1 != 2')
| 42.918033 | 75 | 0.703972 | 410 | 2,618 | 4.017073 | 0.173171 | 0.160291 | 0.218579 | 0.247116 | 0.76867 | 0.76867 | 0.704918 | 0.464481 | 0.133576 | 0.093503 | 0 | 0.005556 | 0.174943 | 2,618 | 60 | 76 | 43.633333 | 0.756944 | 0.044309 | 0 | 0.085106 | 0 | 0 | 0.026241 | 0 | 0 | 0 | 0 | 0 | 0.787234 | 1 | 0.085106 | false | 0 | 0.021277 | 0 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
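# The long run of assertFalse calls in test_neq_op_expr_is_no_other_expr
# could be data-driven: keep the predicate names in a list and dispatch with
# getattr. A sketch of the idea on a stand-in object (the real expression
# type comes from regression_tests.parsers.c_parser, not shown here):

```python
class FakeExpr:
    # Stand-in with the same predicate-style interface as the parsed exprs
    def is_neq_op(self):
        return True

    def is_eq_op(self):
        return False

    def is_add_op(self):
        return False

other_predicates = ['is_eq_op', 'is_add_op']
expr = FakeExpr()
assert expr.is_neq_op()
for name in other_predicates:
    # getattr turns the predicate name into the bound method
    assert not getattr(expr, name)()
print('all predicates checked')
```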
cd25a1185185ccf421781ed099b85849c622386f | 15,769 | py | Python | tests/tools/emr/test_report_long_jobs.py | mikiec84/mrjob | 801fffffdc6af860edd7813c948f9da341305b21 | [
"Apache-2.0"
] | 1,538 | 2015-01-02T10:22:17.000Z | 2022-03-29T16:42:33.000Z | tests/tools/emr/test_report_long_jobs.py | mikiec84/mrjob | 801fffffdc6af860edd7813c948f9da341305b21 | [
"Apache-2.0"
] | 1,027 | 2015-01-09T21:30:37.000Z | 2022-02-26T18:21:42.000Z | tests/tools/emr/test_report_long_jobs.py | mikiec84/mrjob | 801fffffdc6af860edd7813c948f9da341305b21 | [
"Apache-2.0"
] | 403 | 2015-01-06T15:49:44.000Z | 2022-03-29T16:42:34.000Z | # Copyright 2011-2012 Yelp
# Copyright 2015-2017 Yelp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Very basic tests for the audit_usage script"""
import sys
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from dateutil.tz import tzutc
from mrjob.py2 import StringIO
from mrjob.tools.emr.report_long_jobs import _find_long_running_jobs
from mrjob.tools.emr.report_long_jobs import main
from tests.mock_boto3 import MockBoto3TestCase
CLUSTERS = [
dict(
Id='j-STARTING',
Name='mr_grieving',
Status=dict(
State='STARTING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:05:00Z'),
),
),
Tags=[],
_steps=[],
),
dict(
Id='j-BOOTSTRAPPING',
Name='mr_grieving',
Status=dict(
State='BOOTSTRAPPING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:05:00Z'),
),
),
Tags=[],
_steps=[],
),
dict(
Id='j-RUNNING1STEP',
Name='mr_grieving',
Status=dict(
State='RUNNING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[dict(Key='my_key', Value='my_value')],
_Steps=[
dict(
Name='mr_denial: Step 1 of 5',
Status=dict(
State='RUNNING',
Timeline=dict(
StartDateTime=parse('2010-06-06T00:20:00Z'),
),
),
),
],
),
dict(
Id='j-RUNNING2STEPS',
Name='mr_grieving',
Status=dict(
State='RUNNING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[],
_Steps=[
dict(
Name='mr_denial: Step 1 of 5',
Status=dict(
State='COMPLETED',
Timeline=dict(
EndDateTime=parse('2010-06-06T00:25:00Z'),
StartDateTime=parse('2010-06-06T00:20:00Z'),
),
),
),
dict(
Name='mr_anger: Step 2 of 5',
Status=dict(
State='RUNNING',
Timeline=dict(
StartDateTime=parse('2010-06-06T00:30:00Z'),
),
),
),
]
),
dict(
Id='j-RUNNINGANDPENDING',
Name='mr_grieving',
Status=dict(
State='RUNNING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[],
_Steps=[
dict(
Name='mr_denial: Step 1 of 5',
Status=dict(
State='COMPLETED',
Timeline=dict(
EndDateTime=parse('2010-06-06T00:25:00Z'),
StartDateTime=parse('2010-06-06T00:20:00Z'),
),
),
),
dict(
Name='mr_anger: Step 2 of 5',
Status=dict(
State='RUNNING',
Timeline=dict(
StartDateTime=parse('2010-06-06T00:30:00Z'),
),
),
),
dict(
Name='mr_bargaining: Step 3 of 5',
Status=dict(
State='PENDING',
),
),
]
),
dict(
Id='j-PENDING1STEP',
Name='mr_grieving',
Status=dict(
State='RUNNING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[],
_Steps=[
dict(
Name='mr_bargaining: Step 3 of 5',
Status=dict(
State='PENDING',
),
),
]
),
dict(
Id='j-PENDING2STEPS',
Name='mr_grieving',
Status=dict(
State='RUNNING',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[],
_Steps=[
dict(
Name='mr_bargaining: Step 3 of 5',
Status=dict(
State='COMPLETED',
Timeline=dict(
EndDateTime=parse('2010-06-06T00:35:00Z'),
StartDateTime=parse('2010-06-06T00:20:00Z'),
),
),
),
dict(
Name='mr_depression: Step 4 of 5',
Status=dict(
State='PENDING',
),
),
]
),
dict(
Id='j-COMPLETED',
Name='mr_grieving',
Status=dict(
State='COMPLETED',
Timeline=dict(
CreationDateTime=parse('2010-06-06T00:00:00Z'),
ReadyDateTime=parse('2010-06-06T00:15:00Z'),
),
),
Tags=[],
_Steps=[
dict(
Name='mr_acceptance: Step 5 of 5',
Status=dict(
State='COMPLETED',
Timeline=dict(
EndDateTime=parse('2010-06-06T00:40:00Z'),
StartDateTime=parse('2010-06-06T00:20:00Z'),
),
),
),
]
),
]
CLUSTERS_BY_ID = dict((cluster['Id'], cluster) for cluster in CLUSTERS)
CLUSTER_SUMMARIES_BY_ID = dict(
(cluster['Id'], dict(
Id=cluster['Id'],
Name=cluster['Name'],
Status=cluster['Status']))
for cluster in CLUSTERS)
class ReportLongJobsTestCase(MockBoto3TestCase):
def setUp(self):
super(ReportLongJobsTestCase, self).setUp()
# redirect print statements to self.stdout
self._real_stdout = sys.stdout
self.stdout = StringIO()
sys.stdout = self.stdout
def tearDown(self):
sys.stdout = self._real_stdout
super(ReportLongJobsTestCase, self).tearDown()
def test_with_no_clusters(self):
main(['-q', '--no-conf']) # just make sure it doesn't crash
def test_with_all_clusters(self):
for cluster in CLUSTERS:
self.add_mock_emr_cluster(cluster)
emr_client = self.client('emr')
emr_client.run_job_flow(
Name='no name',
Instances=dict(
MasterInstanceType='m1.medium',
InstanceCount=1,
),
JobFlowRole='fake-instance-profile',
ReleaseLabel='emr-4.0.0',
ServiceRole='fake-service-role',
)
main(['-q', '--no-conf'])
lines = [line for line in StringIO(self.stdout.getvalue())]
self.assertEqual(len(lines), len(CLUSTERS_BY_ID) - 1)
self.assertNotIn('j-COMPLETED', self.stdout.getvalue())
def test_exclude(self):
for cluster in CLUSTERS:
self.add_mock_emr_cluster(cluster)
main(['-q', '--no-conf', '-x', 'my_key,my_value'])
lines = [line for line in StringIO(self.stdout.getvalue())]
self.assertEqual(len(lines), len(CLUSTERS_BY_ID) - 2)
self.assertNotIn('j-COMPLETED', self.stdout.getvalue())
self.assertNotIn('j-RUNNING1STEP', self.stdout.getvalue())
class FindLongRunningJobsTestCase(MockBoto3TestCase):
maxDiff = None # show whole diff when tests fail
def setUp(self):
super(FindLongRunningJobsTestCase, self).setUp()
for cluster in CLUSTERS:
self.add_mock_emr_cluster(cluster)
def _find_long_running_jobs(self, cluster_summaries, min_time, now):
emr_client = self.client('emr')
return _find_long_running_jobs(
emr_client,
cluster_summaries,
min_time=min_time,
now=now)
def test_starting(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-STARTING']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-STARTING',
'name': u'mr_grieving',
'state': u'STARTING',
'time': timedelta(hours=3, minutes=55)}])
def test_bootstrapping(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-BOOTSTRAPPING']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-BOOTSTRAPPING',
'name': u'mr_grieving',
'state': u'BOOTSTRAPPING',
'time': timedelta(hours=3, minutes=55)}])
def test_running_one_step(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNING1STEP']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-RUNNING1STEP',
'name': u'mr_denial: Step 1 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=40)}])
# job hasn't been running for 1 day
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNING1STEP']],
min_time=timedelta(days=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[])
def test_running_two_steps(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNING2STEPS']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-RUNNING2STEPS',
'name': u'mr_anger: Step 2 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=30)}])
# job hasn't been running for 1 day
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNING2STEPS']],
min_time=timedelta(days=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[])
def test_running_and_pending(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNINGANDPENDING']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-RUNNINGANDPENDING',
'name': u'mr_anger: Step 2 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=30)}])
# job hasn't been running for 1 day
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-RUNNINGANDPENDING']],
min_time=timedelta(days=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[])
def test_pending_one_step(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-PENDING1STEP']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-PENDING1STEP',
'name': u'mr_bargaining: Step 3 of 5',
'state': u'PENDING',
'time': timedelta(hours=3, minutes=45)}])
# job hasn't been running for 1 day
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-PENDING1STEP']],
min_time=timedelta(days=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[])
def test_pending_two_steps(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-PENDING2STEPS']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-PENDING2STEPS',
'name': u'mr_depression: Step 4 of 5',
'state': u'PENDING',
'time': timedelta(hours=3, minutes=25)}])
# job hasn't been running for 1 day
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-PENDING2STEPS']],
min_time=timedelta(days=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[])
def test_completed(self):
self.assertEqual(
list(self._find_long_running_jobs(
[CLUSTER_SUMMARIES_BY_ID['j-COMPLETED']],
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[]
)
def test_all_together(self):
self.assertEqual(
list(self._find_long_running_jobs(
CLUSTERS,
min_time=timedelta(hours=1),
now=datetime(2010, 6, 6, 4, tzinfo=tzutc())
)),
[{'cluster_id': u'j-STARTING',
'name': u'mr_grieving',
'state': u'STARTING',
'time': timedelta(hours=3, minutes=55)},
{'cluster_id': u'j-BOOTSTRAPPING',
'name': u'mr_grieving',
'state': u'BOOTSTRAPPING',
'time': timedelta(hours=3, minutes=55)},
{'cluster_id': u'j-RUNNING1STEP',
'name': u'mr_denial: Step 1 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=40)},
{'cluster_id': u'j-RUNNING2STEPS',
'name': u'mr_anger: Step 2 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=30)},
{'cluster_id': u'j-RUNNINGANDPENDING',
'name': u'mr_anger: Step 2 of 5',
'state': u'RUNNING',
'time': timedelta(hours=3, minutes=30)},
{'cluster_id': u'j-PENDING1STEP',
'name': u'mr_bargaining: Step 3 of 5',
'state': u'PENDING',
'time': timedelta(hours=3, minutes=45)},
{'cluster_id': u'j-PENDING2STEPS',
'name': u'mr_depression: Step 4 of 5',
'state': u'PENDING',
'time': timedelta(hours=3, minutes=25)}])
| 32.98954 | 74 | 0.49889 | 1,626 | 15,769 | 4.687577 | 0.137761 | 0.047756 | 0.03608 | 0.05248 | 0.740226 | 0.727499 | 0.712936 | 0.701653 | 0.684072 | 0.673839 | 0 | 0.061142 | 0.37561 | 15,769 | 477 | 75 | 33.0587 | 0.71298 | 0.05625 | 0 | 0.753589 | 0 | 0 | 0.160215 | 0.001414 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.038278 | false | 0 | 0.021531 | 0 | 0.069378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
cd408d849c3cc799fd5bdb6542af5d05f1721324 | 110 | py | Python | api/tests/placeholder_test.py | fitzypop/todo-app-farm | bdc953cb8e0ebc430492e79c60fd051b6a1f0abe | [
"Unlicense"
] | null | null | null | api/tests/placeholder_test.py | fitzypop/todo-app-farm | bdc953cb8e0ebc430492e79c60fd051b6a1f0abe | [
"Unlicense"
] | 15 | 2021-07-21T22:11:26.000Z | 2021-07-22T18:39:52.000Z | api/tests/placeholder_test.py | langnostic/todo-app | bdc953cb8e0ebc430492e79c60fd051b6a1f0abe | [
"Unlicense"
] | null | null | null | from pytest_mock import MockerFixture
def test_placeholder(mocker: MockerFixture) -> None:
assert False
| 18.333333 | 52 | 0.790909 | 13 | 110 | 6.538462 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154545 | 110 | 5 | 53 | 22 | 0.913978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
cd4dd881c7729b66df879cf5ab78a436a6c90db9 | 62 | py | Python | pysql/exceptions/core/constraints.py | jha-hitesh/pysql | ad7c7e4e7e65a97e4dc15cda395678e0c09b02ab | [
"MIT"
] | null | null | null | pysql/exceptions/core/constraints.py | jha-hitesh/pysql | ad7c7e4e7e65a97e4dc15cda395678e0c09b02ab | [
"MIT"
] | null | null | null | pysql/exceptions/core/constraints.py | jha-hitesh/pysql | ad7c7e4e7e65a97e4dc15cda395678e0c09b02ab | [
"MIT"
] | null | null | null | class InvalidConstraintPropertyException(Exception):
pass
| 20.666667 | 52 | 0.83871 | 4 | 62 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 2 | 53 | 31 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
cd79531e6d378f48cc6fe64697f8bd6deeefa9f5 | 11,635 | py | Python | lib/RESKESearchDemo/RESKESearchDemoClient.py | kbaseapps/RESKESearchDemo | 9140c6726e3ae26ae7ba20b3fdcfed96049393d5 | [
"MIT"
] | null | null | null | lib/RESKESearchDemo/RESKESearchDemoClient.py | kbaseapps/RESKESearchDemo | 9140c6726e3ae26ae7ba20b3fdcfed96049393d5 | [
"MIT"
] | null | null | null | lib/RESKESearchDemo/RESKESearchDemoClient.py | kbaseapps/RESKESearchDemo | 9140c6726e3ae26ae7ba20b3fdcfed96049393d5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
############################################################
#
# Autogenerated by the KBase type compiler -
# any changes made here will be overwritten
#
############################################################
from __future__ import print_function
# the following is a hack to get the baseclient to import whether we're in a
# package or not. This makes pep8 unhappy hence the annotations.
try:
# baseclient and this client are in a package
from .baseclient import BaseClient as _BaseClient # @UnusedImport
except:
# no they aren't
from baseclient import BaseClient as _BaseClient # @Reimport
class RESKESearchDemo(object):
def __init__(
self, url=None, timeout=30 * 60, user_id=None,
password=None, token=None, ignore_authrc=False,
trust_all_ssl_certificates=False,
auth_svc='https://kbase.us/services/authorization/Sessions/Login'):
if url is None:
raise ValueError('A url is required')
self._service_ver = None
self._client = _BaseClient(
url, timeout=timeout, user_id=user_id, password=password,
token=token, ignore_authrc=ignore_authrc,
trust_all_ssl_certificates=trust_all_ssl_certificates,
auth_svc=auth_svc)
def add_workspace_to_index(self, params, context=None):
"""
This operation means that given workspace will be shared with
system indexing user with write access. User calling this
function should be owner of this workspace.
:param params: instance of type "AddWorkspaceToIndexInput" ->
structure: parameter "ws_name" of String, parameter "ws_id" of Long
"""
return self._client.call_method(
'RESKESearchDemo.add_workspace_to_index',
[params], self._service_ver, context)
def search_types(self, params, context=None):
"""
:param params: instance of type "SearchTypesInput" -> structure:
parameter "match_filter" of type "MatchFilter" -> structure:
parameter "full_text_in_all" of String, parameter
"access_group_id" of Long, parameter "object_name" of String,
parameter "parent_guid" of type "GUID" (Global user identificator.
It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "timestamp" of type "MatchValue" -> structure: parameter
"value" of String, parameter "int_value" of Long, parameter
"double_value" of Double, parameter "bool_value" of type "boolean"
(A boolean. 0 = false, other = true.), parameter "min_int" of
Long, parameter "max_int" of Long, parameter "min_date" of Long,
parameter "max_date" of Long, parameter "min_double" of Double,
parameter "max_double" of Double, parameter "lookupInKeys" of
mapping from String to type "MatchValue" -> structure: parameter
"value" of String, parameter "int_value" of Long, parameter
"double_value" of Double, parameter "bool_value" of type "boolean"
(A boolean. 0 = false, other = true.), parameter "min_int" of
Long, parameter "max_int" of Long, parameter "min_date" of Long,
parameter "max_date" of Long, parameter "min_double" of Double,
parameter "max_double" of Double, parameter "access_filter" of
type "AccessFilter" -> structure: parameter "with_private" of type
"boolean" (A boolean. 0 = false, other = true.), parameter
"with_public" of type "boolean" (A boolean. 0 = false, other =
true.), parameter "with_all_history" of type "boolean" (A boolean.
0 = false, other = true.)
:returns: instance of type "SearchTypesOutput" -> structure:
parameter "type_to_count" of mapping from String to Long,
parameter "search_time" of Long
"""
return self._client.call_method(
'RESKESearchDemo.search_types',
[params], self._service_ver, context)
def search_objects(self, params, context=None):
"""
:param params: instance of type "SearchObjectsInput" -> structure:
parameter "object_type" of String, parameter "match_filter" of
type "MatchFilter" -> structure: parameter "full_text_in_all" of
String, parameter "access_group_id" of Long, parameter
"object_name" of String, parameter "parent_guid" of type "GUID"
(Global user identificator. It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "timestamp" of type "MatchValue" -> structure: parameter
"value" of String, parameter "int_value" of Long, parameter
"double_value" of Double, parameter "bool_value" of type "boolean"
(A boolean. 0 = false, other = true.), parameter "min_int" of
Long, parameter "max_int" of Long, parameter "min_date" of Long,
parameter "max_date" of Long, parameter "min_double" of Double,
parameter "max_double" of Double, parameter "lookupInKeys" of
mapping from String to type "MatchValue" -> structure: parameter
"value" of String, parameter "int_value" of Long, parameter
"double_value" of Double, parameter "bool_value" of type "boolean"
(A boolean. 0 = false, other = true.), parameter "min_int" of
Long, parameter "max_int" of Long, parameter "min_date" of Long,
parameter "max_date" of Long, parameter "min_double" of Double,
parameter "max_double" of Double, parameter "sorting_rules" of
list of type "SortingRule" -> structure: parameter "is_timestamp"
of type "boolean" (A boolean. 0 = false, other = true.), parameter
"is_object_name" of type "boolean" (A boolean. 0 = false, other =
true.), parameter "key_name" of String, parameter "descending" of
type "boolean" (A boolean. 0 = false, other = true.), parameter
"access_filter" of type "AccessFilter" -> structure: parameter
"with_private" of type "boolean" (A boolean. 0 = false, other =
true.), parameter "with_public" of type "boolean" (A boolean. 0 =
false, other = true.), parameter "with_all_history" of type
"boolean" (A boolean. 0 = false, other = true.), parameter
"pagination" of type "Pagination" -> structure: parameter "start"
of Long, parameter "count" of Long, parameter "post_processing" of
type "PostProcessing" (ids_only - shortcut to mark all three skips
as true.) -> structure: parameter "ids_only" of type "boolean" (A
boolean. 0 = false, other = true.), parameter "skip_info" of type
"boolean" (A boolean. 0 = false, other = true.), parameter
"skip_keys" of type "boolean" (A boolean. 0 = false, other =
true.), parameter "skip_data" of type "boolean" (A boolean. 0 =
false, other = true.), parameter "data_includes" of list of String
:returns: instance of type "SearchObjectsOutput" -> structure:
parameter "pagination" of type "Pagination" -> structure:
parameter "start" of Long, parameter "count" of Long, parameter
"sorting_rules" of list of type "SortingRule" -> structure:
parameter "is_timestamp" of type "boolean" (A boolean. 0 = false,
other = true.), parameter "is_object_name" of type "boolean" (A
boolean. 0 = false, other = true.), parameter "key_name" of
String, parameter "descending" of type "boolean" (A boolean. 0 =
false, other = true.), parameter "objects" of list of type
"ObjectData" -> structure: parameter "guid" of type "GUID" (Global
user identificator. It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "parent_guid" of type "GUID" (Global user identificator.
It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "object_name" of String, parameter "timestamp" of Long,
parameter "parent_data" of unspecified object, parameter "data" of
unspecified object, parameter "key_props" of mapping from String
to String, parameter "total" of Long, parameter "search_time" of
Long
"""
return self._client.call_method(
'RESKESearchDemo.search_objects',
[params], self._service_ver, context)
def get_objects(self, params, context=None):
"""
:param params: instance of type "GetObjectsInput" -> structure:
parameter "guids" of list of type "GUID" (Global user
identificator. It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "post_processing" of type "PostProcessing" (ids_only -
shortcut to mark all three skips as true.) -> structure: parameter
"ids_only" of type "boolean" (A boolean. 0 = false, other =
true.), parameter "skip_info" of type "boolean" (A boolean. 0 =
false, other = true.), parameter "skip_keys" of type "boolean" (A
boolean. 0 = false, other = true.), parameter "skip_data" of type
"boolean" (A boolean. 0 = false, other = true.), parameter
"data_includes" of list of String
:returns: instance of type "GetObjectsOutput" -> structure: parameter
"objects" of list of type "ObjectData" -> structure: parameter
"guid" of type "GUID" (Global user identificator. It has structure
like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "parent_guid" of type "GUID" (Global user identificator.
It has structure like this:
<data-source-code>:<full-reference>[:<sub-type>/<sub-id>]),
parameter "object_name" of String, parameter "timestamp" of Long,
parameter "parent_data" of unspecified object, parameter "data" of
unspecified object, parameter "key_props" of mapping from String
to String, parameter "search_time" of Long
"""
return self._client.call_method(
'RESKESearchDemo.get_objects',
[params], self._service_ver, context)
def list_types(self, params, context=None):
"""
:param params: instance of type "ListTypesInput" (type_name -
optional parameter; if not specified all types are described.) ->
structure: parameter "type_name" of String
:returns: instance of type "ListTypesOutput" -> structure: parameter
"types" of mapping from String to type "TypeDescriptor" (TODO: add
more details like parent type, relations, primary key, ...) ->
structure: parameter "type_name" of String, parameter
"type_ui_title" of String, parameter "keys" of list of type
"KeyDescription" -> structure: parameter "key_name" of String,
parameter "key_ui_title" of String, parameter "key_value_type" of
String
"""
return self._client.call_method(
'RESKESearchDemo.list_types',
[params], self._service_ver, context)
def status(self, context=None):
return self._client.call_method('RESKESearchDemo.status',
[], self._service_ver, context)
| 57.315271 | 79 | 0.628964 | 1,375 | 11,635 | 5.189091 | 0.152727 | 0.046251 | 0.060967 | 0.047092 | 0.786125 | 0.77295 | 0.729082 | 0.703013 | 0.696426 | 0.696426 | 0 | 0.0035 | 0.263343 | 11,635 | 202 | 80 | 57.59901 | 0.828958 | 0.72067 | 0 | 0.238095 | 1 | 0 | 0.120278 | 0.08499 | 0 | 0 | 0 | 0.004951 | 0 | 1 | 0.166667 | false | 0.047619 | 0.071429 | 0.02381 | 0.404762 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
26db29318fc4052d5a5a0f51251223d21b5af67a | 179 | py | Python | stests/workers/interactive_1.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 4 | 2020-03-10T15:28:17.000Z | 2021-10-02T11:41:17.000Z | stests/workers/interactive_1.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 1 | 2020-03-25T11:31:44.000Z | 2020-03-25T11:31:44.000Z | stests/workers/interactive_1.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 9 | 2020-02-25T18:43:42.000Z | 2021-08-10T17:08:42.000Z | from stests.workers.utils import setup_interactive
from stests.workers.utils import start_monitoring
# Setup.
setup_interactive()
# Start chain monitoring.
start_monitoring()
| 16.272727 | 50 | 0.821229 | 22 | 179 | 6.5 | 0.454545 | 0.13986 | 0.237762 | 0.307692 | 0.391608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111732 | 179 | 10 | 51 | 17.9 | 0.899371 | 0.167598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
26fedf21e4a110487c9949afff020c30cbd1e394 | 40 | py | Python | mpf/benchmarks/__init__.py | Scottacus64/mpf | fcfb6c5698b9c7d8bf0eb64b021aaa389ea6478a | [
"MIT"
] | 163 | 2015-01-25T02:19:50.000Z | 2022-03-26T12:00:28.000Z | mpf/benchmarks/__init__.py | Scottacus64/mpf | fcfb6c5698b9c7d8bf0eb64b021aaa389ea6478a | [
"MIT"
] | 1,086 | 2015-03-23T19:53:17.000Z | 2022-03-24T20:46:11.000Z | mpf/benchmarks/__init__.py | Scottacus64/mpf | fcfb6c5698b9c7d8bf0eb64b021aaa389ea6478a | [
"MIT"
] | 148 | 2015-01-28T02:31:39.000Z | 2022-03-22T13:54:01.000Z | """Benchmarks for the MPF framework."""
| 20 | 39 | 0.7 | 5 | 40 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 1 | 40 | 40 | 0.8 | 0.825 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f854b22113d78398c95810519c4f8b88448bf0b3 | 146 | py | Python | digitemp/device/__init__.py | tharrrk/pydigitemp | b88c1069b1a7d0b74d09fe2af9d8687aaca01ff9 | [
"CNRI-Python"
] | 8 | 2018-09-06T04:48:15.000Z | 2020-10-07T03:28:57.000Z | digitemp/device/__init__.py | tharrrk/pydigitemp | b88c1069b1a7d0b74d09fe2af9d8687aaca01ff9 | [
"CNRI-Python"
] | 5 | 2018-07-26T06:36:08.000Z | 2021-04-13T06:29:00.000Z | digitemp/device/__init__.py | tharrrk/pydigitemp | b88c1069b1a7d0b74d09fe2af9d8687aaca01ff9 | [
"CNRI-Python"
] | 8 | 2018-06-28T09:42:26.000Z | 2021-04-10T18:00:24.000Z | from .generic import AddressableDevice
from .factories import TemperatureSensor
from .thermometer import DS18S20, DS1820, DS1920, DS18B20, DS1822
| 36.5 | 65 | 0.842466 | 16 | 146 | 7.6875 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0.109589 | 146 | 3 | 66 | 48.666667 | 0.792308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f862f71f017bc9e19c013ba8e23d17e1108c8fb4 | 162 | py | Python | lost_years/__init__.py | gojiplus/lost_years | b7af037827bbb478b8defb7db007bfc950d9ebfe | [
"MIT"
] | 5 | 2020-04-02T05:43:43.000Z | 2020-04-02T18:26:13.000Z | lost_years/__init__.py | gojiplus/lost_years | b7af037827bbb478b8defb7db007bfc950d9ebfe | [
"MIT"
] | null | null | null | lost_years/__init__.py | gojiplus/lost_years | b7af037827bbb478b8defb7db007bfc950d9ebfe | [
"MIT"
] | null | null | null | from .hld import lost_years_hld
from .ssa import lost_years_ssa
from .who import lost_years_who
__all__ = ['lost_years_ssa', 'lost_years_hld', 'lost_years_who']
| 27 | 64 | 0.802469 | 28 | 162 | 4.071429 | 0.285714 | 0.473684 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 162 | 5 | 65 | 32.4 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f87dc45d179d82778d6187ae1ffe9a18371296e8 | 75 | py | Python | lib/data/__init__.py | Hirosaji/PIFu | 4b3e55ce4e613640d567a5629fe2163248427773 | [
"MIT"
] | 1,359 | 2020-02-19T10:06:31.000Z | 2022-03-30T19:40:58.000Z | lib/data/__init__.py | GabbyYam/PIFu | f0a9c99ef887e1eb360e865a87aa5f166231980e | [
"MIT"
] | 117 | 2020-02-23T16:37:38.000Z | 2022-03-09T00:53:34.000Z | lib/data/__init__.py | GabbyYam/PIFu | f0a9c99ef887e1eb360e865a87aa5f166231980e | [
"MIT"
] | 308 | 2020-02-20T05:23:31.000Z | 2022-03-30T12:22:11.000Z | from .EvalDataset import EvalDataset
from .TrainDataset import TrainDataset | 37.5 | 38 | 0.88 | 8 | 75 | 8.25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093333 | 75 | 2 | 38 | 37.5 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f8bdd8a3ce206067b3f74f27f0a97c78e7d5a9f7 | 51 | py | Python | src/foremast_ui/routes/__init__.py | e4r7hbug/foremast_ui | ae9f864f90fbae2341a8c1e75e1e763e9b389b6b | [
"Apache-2.0"
] | 1 | 2018-07-02T19:48:43.000Z | 2018-07-02T19:48:43.000Z | src/foremast_ui/routes/__init__.py | e4r7hbug/foremast_ui | ae9f864f90fbae2341a8c1e75e1e763e9b389b6b | [
"Apache-2.0"
] | null | null | null | src/foremast_ui/routes/__init__.py | e4r7hbug/foremast_ui | ae9f864f90fbae2341a8c1e75e1e763e9b389b6b | [
"Apache-2.0"
] | null | null | null | from ..application import APP
from . import index
| 12.75 | 29 | 0.764706 | 7 | 51 | 5.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 51 | 3 | 30 | 17 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4462e21b79804b9d024323f6be7841b7111d05d | 38 | py | Python | QCAT_Basic/QCAT_Basic/__init__.py | gmg2719/python_qcat_apis | 6e1f29bfa768e53dc310598401d3446c4bd7cea0 | [
"MIT"
] | 1 | 2022-02-27T08:30:29.000Z | 2022-02-27T08:30:29.000Z | QCAT_Basic/QCAT_Basic/__init__.py | gmg2719/python_qcat_apis | 6e1f29bfa768e53dc310598401d3446c4bd7cea0 | [
"MIT"
] | null | null | null | QCAT_Basic/QCAT_Basic/__init__.py | gmg2719/python_qcat_apis | 6e1f29bfa768e53dc310598401d3446c4bd7cea0 | [
"MIT"
] | 1 | 2022-02-27T08:30:15.000Z | 2022-02-27T08:30:15.000Z | import QCAT_Basic
name = "QCAT_Basic" | 19 | 19 | 0.789474 | 6 | 38 | 4.666667 | 0.666667 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 2 | 19 | 19 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e48c0fc6b8cd635c2b6212543ce2863463d08a78 | 81 | py | Python | python/decorator/01_decorator.py | rrbb014/rrbb-playground | 5487078b882c4f0df3bacc7f55c47216bc201c04 | [
"MIT"
] | null | null | null | python/decorator/01_decorator.py | rrbb014/rrbb-playground | 5487078b882c4f0df3bacc7f55c47216bc201c04 | [
"MIT"
] | null | null | null | python/decorator/01_decorator.py | rrbb014/rrbb-playground | 5487078b882c4f0df3bacc7f55c47216bc201c04 | [
"MIT"
] | null | null | null | from common import identity
@identity
def foo():
return 'bar'
print(foo())
| 10.125 | 27 | 0.679012 | 11 | 81 | 5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197531 | 81 | 7 | 28 | 11.571429 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0.2 | 0.6 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
900bdab732ce4efbedaeff0b96d08ccc513172c7 | 382 | py | Python | app/__init__.py | sabiharustam/TBD5 | 2dafad06e866dabc7f16c51d8961e905991a1287 | [
"MIT"
] | 3 | 2020-07-01T17:42:18.000Z | 2021-03-04T19:59:45.000Z | app/__init__.py | sabiharustam/TBD5 | 2dafad06e866dabc7f16c51d8961e905991a1287 | [
"MIT"
] | 4 | 2019-03-18T03:20:04.000Z | 2019-03-22T16:28:13.000Z | app/__init__.py | sabiharustam/TBD5 | 2dafad06e866dabc7f16c51d8961e905991a1287 | [
"MIT"
] | 5 | 2019-03-22T15:38:24.000Z | 2019-11-21T01:57:40.000Z | from __future__ import absolute_import, division, print_function
#import sys
#sys.path.append("../voltcycle/functions_and_tests")
__all__ = ["baseline","calculations","file_read", "peak_detection_fxn"]
from voltcycle/submodule import baseline
from voltcycle/submodule import calculations
from voltcycle/submodule import file_read
from voltcycle/submodule import peak_detection_fxn
| 38.2 | 71 | 0.837696 | 48 | 382 | 6.291667 | 0.479167 | 0.172185 | 0.291391 | 0.370861 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078534 | 382 | 9 | 72 | 42.444444 | 0.857955 | 0.159686 | 0 | 0 | 0 | 0 | 0.147335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.833333 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
90345e4af3afc86c49b327d9582ccec144618f67 | 124 | py | Python | pip_api/exceptions.py | sugatoray/pip-api | dec3a5e30c911b794763483ed985960a6732a40e | [
"Apache-2.0"
] | 81 | 2018-03-21T02:09:38.000Z | 2022-02-11T09:30:13.000Z | pip_api/exceptions.py | sugatoray/pip-api | dec3a5e30c911b794763483ed985960a6732a40e | [
"Apache-2.0"
] | 67 | 2018-09-27T16:03:02.000Z | 2022-03-11T20:05:37.000Z | pip_api/exceptions.py | sugatoray/pip-api | dec3a5e30c911b794763483ed985960a6732a40e | [
"Apache-2.0"
] | 15 | 2018-03-31T01:15:18.000Z | 2022-03-10T08:13:23.000Z | class Incompatible(Exception):
pass
class InvalidArguments(Exception):
pass
class PipError(Exception):
pass
| 11.272727 | 34 | 0.725806 | 12 | 124 | 7.5 | 0.5 | 0.433333 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201613 | 124 | 10 | 35 | 12.4 | 0.909091 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
5f72621bd4393fe8fdd8697e053e31d6af7d0804 | 924 | py | Python | build/warehouse/cmake/warehouse-genmsg-context.py | frozaidi/RobotSystems | e2c2f9e1623c71d6f5889e84bd9b4ff1d2043a1e | [
"BSD-3-Clause"
] | null | null | null | build/warehouse/cmake/warehouse-genmsg-context.py | frozaidi/RobotSystems | e2c2f9e1623c71d6f5889e84bd9b4ff1d2043a1e | [
"BSD-3-Clause"
] | null | null | null | build/warehouse/cmake/warehouse-genmsg-context.py | frozaidi/RobotSystems | e2c2f9e1623c71d6f5889e84bd9b4ff1d2043a1e | [
"BSD-3-Clause"
] | null | null | null | # generated from genmsg/cmake/pkg-genmsg.context.in
messages_str = "/home/ubuntu/armpi_fpv/src/warehouse/msg/Rotate.msg;/home/ubuntu/armpi_fpv/src/warehouse/msg/Pose.msg;/home/ubuntu/armpi_fpv/src/warehouse/msg/Grasp.msg"
services_str = "/home/ubuntu/armpi_fpv/src/warehouse/srv/SetInTarget.srv;/home/ubuntu/armpi_fpv/src/warehouse/srv/SetOutTarget.srv;/home/ubuntu/armpi_fpv/src/warehouse/srv/SetExchangeTarget.srv"
pkg_name = "warehouse"
dependencies_str = "std_msgs;std_srvs;geometry_msgs"
langs = "gencpp;geneus;genlisp;gennodejs;genpy"
dep_include_paths_str = "warehouse;/home/ubuntu/armpi_fpv/src/warehouse/msg;std_msgs;/opt/ros/melodic/share/std_msgs/cmake/../msg;geometry_msgs;/opt/ros/melodic/share/geometry_msgs/cmake/../msg"
PYTHON_EXECUTABLE = "/usr/bin/python2"
package_has_static_sources = '' == 'TRUE'
genmsg_check_deps_script = "/opt/ros/melodic/share/genmsg/cmake/../../../lib/genmsg/genmsg_check_deps.py"
| 77 | 194 | 0.80303 | 141 | 924 | 5.049645 | 0.411348 | 0.098315 | 0.147472 | 0.176966 | 0.411517 | 0.349719 | 0.349719 | 0.202247 | 0 | 0 | 0 | 0.001124 | 0.036797 | 924 | 11 | 195 | 84 | 0.798876 | 0.05303 | 0 | 0 | 1 | 0.333333 | 0.767469 | 0.73425 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5fb977850034eb416797a116b12f0ff7838a7e36 | 9,960 | py | Python | tests/activity/test_activity_copy_glencoe_still_images.py | gnott/elife-bot | 584c315d15d1289e0d2c27c28aaaae31174812e4 | [
"MIT"
] | null | null | null | tests/activity/test_activity_copy_glencoe_still_images.py | gnott/elife-bot | 584c315d15d1289e0d2c27c28aaaae31174812e4 | [
"MIT"
] | null | null | null | tests/activity/test_activity_copy_glencoe_still_images.py | gnott/elife-bot | 584c315d15d1289e0d2c27c28aaaae31174812e4 | [
"MIT"
] | null | null | null | import unittest
import settings_mock
from activity.activity_CopyGlencoeStillImages import activity_CopyGlencoeStillImages
from mock import patch, MagicMock
from classes_mock import FakeSession
from classes_mock import FakeStorageContext
from classes_mock import FakeLogger
import test_activity_data as test_activity_data
import provider.glencoe_check as glencoe_check
class TestCopyGlencoeStillImages(unittest.TestCase):
def setUp(self):
self.copyglencoestillimages = activity_CopyGlencoeStillImages(settings_mock, None, None, None, None)
self.copyglencoestillimages.logger = FakeLogger()
@patch.object(activity_CopyGlencoeStillImages, 'list_files_from_cdn')
@patch.object(activity_CopyGlencoeStillImages, 'store_jpgs')
@patch('provider.glencoe_check.metadata')
@patch('activity.activity_CopyGlencoeStillImages.StorageContext')
@patch('activity.activity_CopyGlencoeStillImages.Session')
@patch.object(activity_CopyGlencoeStillImages, 'emit_monitor_event')
def test_do_activity(self, fake_emit, fake_session, fake_storage_context, fake_glencoe_metadata,
fake_store_jpgs, fake_list_files_from_cdn):
# Given
activity_data = test_activity_data.data_example_before_publish
fake_storage_context.return_value = FakeStorageContext()
fake_session.return_value = FakeSession(test_activity_data.session_example)
fake_glencoe_metadata.return_value = test_activity_data.glencoe_metadata
fake_store_jpgs.return_value = test_activity_data.jpgs_added_in_cdn
fake_list_files_from_cdn.return_value = test_activity_data.cdn_folder_files + \
test_activity_data.jpgs_added_in_cdn
# When
result = self.copyglencoestillimages.do_activity(activity_data)
# Then
self.assertEqual(self.copyglencoestillimages.ACTIVITY_SUCCESS, result)
@patch.object(activity_CopyGlencoeStillImages, 'list_files_from_cdn')
@patch.object(activity_CopyGlencoeStillImages, 'store_jpgs')
@patch('provider.glencoe_check.metadata')
@patch('activity.activity_CopyGlencoeStillImages.StorageContext')
@patch('activity.activity_CopyGlencoeStillImages.Session')
@patch.object(activity_CopyGlencoeStillImages, 'emit_monitor_event')
def test_do_activity_success_no_videos_for_article(self, fake_emit, fake_session, fake_storage_context, fake_glencoe_metadata,
fake_store_jpgs, fake_list_files_from_cdn):
# Given
activity_data = test_activity_data.data_example_before_publish
fake_storage_context.return_value = FakeStorageContext()
fake_session.return_value = FakeSession(test_activity_data.session_example)
fake_glencoe_metadata.side_effect = AssertionError("article has no videos - url requested: ...")
# When
result = self.copyglencoestillimages.do_activity(activity_data)
# Then
self.assertEqual(self.copyglencoestillimages.ACTIVITY_SUCCESS, result)
@patch('activity.activity_CopyGlencoeStillImages.Session')
@patch.object(activity_CopyGlencoeStillImages, 'emit_monitor_event')
def test_do_activity_success_POA(self, fake_emit, fake_session):
# Given
activity_data = test_activity_data.data_example_before_publish
session_POA = test_activity_data.session_example.copy()
session_POA['file_name'] = 'elife-00353-poa-v1.zip'
fake_session.return_value = FakeSession(session_POA)
# When
result = self.copyglencoestillimages.do_activity(activity_data)
# Then
self.assertEqual(self.copyglencoestillimages.ACTIVITY_SUCCESS, result)
@patch.object(activity_CopyGlencoeStillImages, 'store_jpgs')
@patch('provider.glencoe_check.metadata')
@patch('activity.activity_CopyGlencoeStillImages.StorageContext')
@patch('activity.activity_CopyGlencoeStillImages.Session')
@patch.object(activity_CopyGlencoeStillImages, 'emit_monitor_event')
def test_do_activity_error(self, fake_emit, fake_session, fake_storage_context, fake_glencoe_metadata, fake_store_jpgs):
# Given
activity_data = test_activity_data.data_example_before_publish
fake_storage_context.return_value = FakeStorageContext()
fake_session.return_value = FakeSession(test_activity_data.session_example)
fake_glencoe_metadata.return_value = test_activity_data.glencoe_metadata
fake_store_jpgs.side_effect = Exception("Something went wrong!")
# When
result = self.copyglencoestillimages.do_activity(activity_data)
# Then
self.assertEqual(result, self.copyglencoestillimages.ACTIVITY_PERMANENT_FAILURE)
fake_emit.assert_called_with(settings_mock,
activity_data["article_id"],
activity_data["version"],
activity_data["run"],
self.copyglencoestillimages.pretty_name,
"error",
"An error occurred when checking/copying Glencoe still images. Article " +
activity_data["article_id"] + "; message: Something went wrong!")
@patch.object(activity_CopyGlencoeStillImages, 'list_files_from_cdn')
@patch.object(activity_CopyGlencoeStillImages, 'store_jpgs')
@patch('provider.glencoe_check.metadata')
@patch('activity.activity_CopyGlencoeStillImages.StorageContext')
@patch('activity.activity_CopyGlencoeStillImages.Session')
@patch.object(activity_CopyGlencoeStillImages, 'emit_monitor_event')
def test_do_activity_bad_files(self, fake_emit, fake_session, fake_storage_context, fake_glencoe_metadata,
fake_store_jpgs, fake_list_files_from_cdn):
# Given
activity_data = test_activity_data.data_example_before_publish
fake_storage_context.return_value = FakeStorageContext()
fake_session.return_value = FakeSession(test_activity_data.session_example)
fake_glencoe_metadata.return_value = test_activity_data.glencoe_metadata
self.copyglencoestillimages.logger = MagicMock()
fake_list_files_from_cdn.return_value = test_activity_data.cdn_folder_files
fake_store_jpgs.return_value = test_activity_data.jpgs_added_in_cdn
# When
result = self.copyglencoestillimages.do_activity(activity_data)
# Then
self.assertEqual(result, self.copyglencoestillimages.ACTIVITY_PERMANENT_FAILURE)
fake_emit.assert_called_with(settings_mock,
activity_data["article_id"],
activity_data["version"],
activity_data["run"],
self.copyglencoestillimages.pretty_name,
"error",
"Not all still images .jpg have a video with the same name " +
"missing videos file names: ['elife-12620-media1', 'elife-12620-media2']" +
" Please check them against CDN files. Article: 00353")
def test_validate_jpgs_against_cdn(self):
# Given
cdn_all_files = test_activity_data.cdn_folder_files_article_07398
cdn_still_jpgs = test_activity_data.cdn_folder_jpgs_article_07398
# When
result_bad_files = self.copyglencoestillimages.validate_jpgs_against_cdn(cdn_all_files, cdn_still_jpgs,
"07398")
# Then
self.assertEqual(0, len(result_bad_files))
    def test_validate_jpgs_against_cdn_long_article_ids(self):
# Given
cdn_all_files = ["elife-1234500230-media1-v1.wmv", "elife-1234500230-media2-v1.mp4",
"elife-1234500230-media1-v1.jpg", "elife-1234500230-media2-v1.jpg",
"elife-1234500230-fig1-figsupp1-v2-1084w.jpg"]
cdn_still_jpgs = ["elife-1234500230-media1-v1.jpg", "elife-1234500230-media2-v1.jpg"]
# When
result_bad_files = self.copyglencoestillimages.validate_jpgs_against_cdn(cdn_all_files, cdn_still_jpgs,
"00230")
# Then
self.assertEqual(0, len(result_bad_files))
    def test_validate_jpgs_against_cdn_long_article_ids_one_missing(self):
# Given
cdn_all_files = ["elife-1234500230-media1-v1.wmv", "elife-1234500230-media2-v1.mp4",
"elife-1234500230-media1-v1.jpg", "elife-1234500230-media2-v1.jpg",
"elife-1234500230-fig1-figsupp1-v2-1084w.jpg"]
cdn_still_jpgs = ["elife-1234500230-media1-v1.jpg", "elife-1234500230-media2-v1.jpg",
"elife-1234500230-media3-v1.jpg"]
# When
result_bad_files = self.copyglencoestillimages.validate_jpgs_against_cdn(cdn_all_files, cdn_still_jpgs,
"00230")
# Then
self.assertEqual(1, len(result_bad_files))
@patch('requests.get')
@patch('activity.activity_CopyGlencoeStillImages.StorageContext')
def test_store_file_according_to_the_current_article_id_whatever_is_the_filename_on_glencoe(self, fake_storage_context, fake_requests_get):
fake_storage_context.return_value = FakeStorageContext()
fake_requests_get.return_value = MagicMock()
fake_requests_get.return_value.status_code = 200
cdn_jpg_filename = self.copyglencoestillimages.store_file("http://glencoe.com/some-dir/elife-00666-media1.jpg", "12345600666")
self.assertEqual(cdn_jpg_filename, "elife-12345600666-media1.jpg")
if __name__ == '__main__':
unittest.main()
| 52.698413 | 143 | 0.691767 | 1,048 | 9,960 | 6.175573 | 0.144084 | 0.072312 | 0.054388 | 0.07602 | 0.781829 | 0.745365 | 0.740729 | 0.732849 | 0.732849 | 0.732849 | 0 | 0.035952 | 0.232028 | 9,960 | 188 | 144 | 52.978723 | 0.810171 | 0.012751 | 0 | 0.62406 | 0 | 0 | 0.189947 | 0.123165 | 0 | 0 | 0 | 0 | 0.090226 | 1 | 0.075188 | false | 0 | 0.067669 | 0 | 0.150376 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
399481c043816e9e8c26ed396fa0b050e2f14f3b | 106 | py | Python | enthought/block_canvas/canvas/font_metrics_cache.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/block_canvas/canvas/font_metrics_cache.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/block_canvas/canvas/font_metrics_cache.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from __future__ import absolute_import
from blockcanvas.canvas.font_metrics_cache import *
| 26.5 | 51 | 0.858491 | 14 | 106 | 6 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103774 | 106 | 3 | 52 | 35.333333 | 0.884211 | 0.113208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
399976837577da837075de176597b4c22bc9ff96 | 125 | py | Python | token_handle/admin.py | swapancell/token_app | 4702ae12675ef5b4e466c8044795a3e70a367ee0 | [
"Apache-2.0"
] | null | null | null | token_handle/admin.py | swapancell/token_app | 4702ae12675ef5b4e466c8044795a3e70a367ee0 | [
"Apache-2.0"
] | null | null | null | token_handle/admin.py | swapancell/token_app | 4702ae12675ef5b4e466c8044795a3e70a367ee0 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from token_handle.models import *
# Register your models here.
admin.site.register(tokens) | 20.833333 | 33 | 0.808 | 18 | 125 | 5.555556 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 125 | 6 | 34 | 20.833333 | 0.909091 | 0.208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
39d2a20d12f770051926b9bdec467ba99791d043 | 18 | py | Python | app/views.py | matthewacha/LectureApp | 8dbac3d747087cc90db8c16a08eb17c5587005eb | [
"OLDAP-2.2.1"
] | null | null | null | app/views.py | matthewacha/LectureApp | 8dbac3d747087cc90db8c16a08eb17c5587005eb | [
"OLDAP-2.2.1"
] | null | null | null | app/views.py | matthewacha/LectureApp | 8dbac3d747087cc90db8c16a08eb17c5587005eb | [
"OLDAP-2.2.1"
] | null | null | null | #views for the app | 18 | 18 | 0.777778 | 4 | 18 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.933333 | 0.944444 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
841dd9f1ac44118da8bd189aaad82c7bd570b070 | 30 | py | Python | temp.py | shatheesh171/stacks_and_queues_problems | 2ba487539b327dc7795873eb879e29983b48963d | [
"MIT"
] | null | null | null | temp.py | shatheesh171/stacks_and_queues_problems | 2ba487539b327dc7795873eb879e29983b48963d | [
"MIT"
] | null | null | null | temp.py | shatheesh171/stacks_and_queues_problems | 2ba487539b327dc7795873eb879e29983b48963d | [
"MIT"
] | null | null | null | if (None<5):
print("goya") | 15 | 17 | 0.533333 | 5 | 30 | 3.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.2 | 30 | 2 | 17 | 15 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
845c17508493945a422e1f3b667d5f4449af63c6 | 4,989 | py | Python | 2020/12/12.py | Sveder/advent_of_code | 57a36bc3066fcba6330d4d579de053e8b7e78c74 | [
"CC0-1.0"
] | null | null | null | 2020/12/12.py | Sveder/advent_of_code | 57a36bc3066fcba6330d4d579de053e8b7e78c74 | [
"CC0-1.0"
] | null | null | null | 2020/12/12.py | Sveder/advent_of_code | 57a36bc3066fcba6330d4d579de053e8b7e78c74 | [
"CC0-1.0"
] | null | null | null | input = """R90
W5
R90
F3
E4
L90
N5
F100
W1
R180
F11
S1
E4
F75
N4
W1
F66
N3
L90
F2
R90
E1
S3
F84
L90
N1
R90
F49
S1
R90
E2
R90
N1
L90
S1
R90
F100
E4
R90
N3
W1
L180
E4
F30
N5
F40
R270
E4
L90
F94
L90
E3
F61
S4
F78
S2
R180
W2
F14
L90
F100
L90
F15
E4
L90
F77
S2
F58
R90
E2
F41
N4
R180
S2
F91
S3
F52
W4
S3
F73
S4
F50
W3
L90
F70
N5
N2
L90
F100
W1
R90
N4
E1
R90
F63
L90
W4
F30
N3
L90
F76
R90
W1
N4
R180
F20
S4
F8
R90
F47
W2
L180
W2
F75
F56
L90
F35
R90
S2
L90
F1
W3
F44
L90
S2
F93
E1
N5
F83
F28
S1
R90
N4
W2
E4
F48
N3
F65
E1
R180
W4
S1
W5
S3
F18
E3
S3
R90
W5
R180
W2
R90
F71
W2
R90
S4
L180
N4
W3
S5
F12
L90
F83
E5
W2
N3
E1
S4
F95
R90
F77
L270
N2
W4
F45
W5
R90
W5
N1
R90
E2
F5
L270
E5
F79
L90
F57
R90
E5
F46
N5
F30
E3
N4
W2
F100
E3
S5
W5
F34
E4
S3
F30
S4
L90
S2
F51
W5
F41
E5
N1
R90
S4
F97
L180
S1
F38
N3
E3
R90
S5
F59
L270
W2
F71
L90
N3
F36
L90
S1
F24
S1
R90
F56
S3
F53
E1
E3
F78
R180
N2
E4
R90
E1
S2
W3
S3
F64
W1
R180
F73
F92
S5
R180
W4
N5
E2
R270
E5
N2
E3
S3
R90
S4
W2
N4
E5
L180
F6
R90
W2
N1
L90
F16
N4
R90
F65
R90
N3
L90
W5
R90
N1
F41
W2
S1
W3
F69
W5
N4
F71
W4
R90
F94
W1
F30
W2
N2
F65
R90
N4
R90
E1
L180
E2
L180
S4
R90
E4
R90
E5
S3
F73
N4
W4
S2
E4
L90
N4
F100
S4
L90
E3
N3
R90
N1
W4
S3
S5
F46
N5
R180
F75
S5
L90
F42
L90
L90
W2
F67
W2
W3
E3
L90
W3
F72
N3
W2
L90
N4
F12
W2
F20
W2
F5
N5
W5
L180
W2
F45
W4
L90
E3
L90
S2
F69
R90
W3
R180
N5
E3
F8
S5
R90
S5
F64
R90
W4
F46
R90
W3
N1
F6
N4
R90
F38
F5
E5
N4
R90
W1
F66
R270
W3
R90
W5
R90
W2
S4
W2
R270
E5
R90
S5
R90
S1
L90
N2
W5
S3
W3
L90
E2
L90
F51
R90
L90
N3
W4
N1
W4
W4
L90
F7
S4
E1
S1
R90
F3
E4
F73
W4
L90
W4
F4
R90
E5
S2
E3
L90
F77
W3
L90
S5
W4
S3
E4
R270
S5
F99
E4
E5
L90
N2
F58
R90
E2
N1
W4
F85
W4
N2
E3
L90
E4
R90
N4
E3
F64
W5
S5
F89
F29
L90
F80
N1
L90
W4
S5
F76
E5
F83
E2
F60
W3
N4
W1
R90
F25
W5
S3
W5
L180
E4
F79
S1
W5
F42
W3
F6
E5
L90
S3
L90
N2
L270
F80
W3
L180
E4
N2
F87
W5
W5
R90
L90
W3
R180
F69
L90
F9
E4
F37
L90
S3
F50
L90
W1
F70
N5
L90
W4
N2
W5
F19
N4
W2
N1
W4
R90
F56
W1
L90
E4
R90
S5
F2
L90
N2
F77
E1
L270
F31
W1
N4
L90
W2
L180
W1
S2
E3
F93
N5
E4
F39
S4
W3
L90
N1
R90
S2
F11
F95
E3
S4
W4
R90
F56
N3
F16
L270
E2
S3
F56
W4
N1
E1
N5
R90
F86
N5
R90
S1
S4
L90
E2
N1
F28
E5
R180
F93
L90
F84
E5
R180
E4
F25
L90
E5
N2
R270
F13
N4
F91
E3
F7
S1
W5
W1
F67
S4
W5
R90
L180
E4
S2
R90
S2
F45
N4
E3
N4
F53
N3
E3
S4
R90
S1
F52
N2
S5
E2
N3
F1
L180
N3
E2
F31
S2
S5
N2
F5
W4
R90
R90
W1
F34
W2
L90
E2
R180
N5
W2
R180
N2
S2
N3
F48
S2
L90
F81
N2
F16
W4
F40
L90
N3
L180
N2
L90
E5
F8
F6
R90
S3
W5
W1
R90
F18
S3
L90
F19
R90
F46
F37
R90
N3
W4
E3
F28
E3
N5
L90
S4
E1
F28
L90
N5
R180
N1
L90
E3
S1
R180
F32
R90
N4
F22
L90
S5
E4
R90
F70
W4
F39
S4
L90
N1
L90
F6
L90
F92
L90
N5
L90
L90
N5
E2
L180
E5
R90
F95
N4
E1
F77
N3
R180
S2
W2
F71
F59
N3
F10
E4
L90
N5
F9
R180
W1
S5
W5
F71
E1
F35
R90
F45
N1
F54"""
#
# input = """F10
# N3
# F7
# R90
# F11"""
input = input.split('\n')
x, y = 0, 0
direction = (0, 1)
map = {
"N": (1, 0),
"S": (-1, 0),
"W": (0, -1),
"E": (0, 1),
}
r_map = {
(0, 1): (-1, 0),
(-1, 0): (0, -1),
(0, -1): (1, 0),
(1, 0): (0, 1),
}
l_map = {
(0, 1): (1, 0),
(1, 0): (0, -1),
(0, -1): (-1, 0),
(-1, 0): (0, 1),
}
# for i in input:
# instruction, amount = i[0], int(i[1:])
#
# print("Instuction %s in amount %s from %s" % (instruction, amount, (x,y)))
#
# if instruction in map.keys():
# x += map[instruction][0] * amount
# y += map[instruction][1] * amount
#
# elif instruction == "F":
# x += direction[0] * amount
# y += direction[1] * amount
#
# elif instruction in "RL":
# amount %= 360
#         amount //= 90  # integer division, so range(amount) below gets an int
# print("Instruction:", amount)
#
# if instruction == "R":
# for _ in range(amount):
# direction = r_map[direction]
#
# if instruction == "L":
# for _ in range(amount):
# direction = l_map[direction]
#
# print("Part 1 manhattan distance: ", x , "+", y, "=", abs(x)+abs(y))
x, y = 0, 0
wx, wy = 1, 10
direction = (0, 1)
print("Start: ship: %s --- waypoint: %s" % ((x,y), (wx, wy)))
for i in input:
instruction, amount = i[0], int(i[1:])
print("Instruction:", i)
if instruction in map.keys():
wx += map[instruction][0] * amount
wy += map[instruction][1] * amount
elif instruction == "F":
x += (wx) * amount
y += (wy) * amount
# wx += wx * amount
# wy += wy * amount
elif instruction in "RL":
amount %= 360
print("Amount:", amount)
if amount == 180:
wx, wy = -wx, -wy
elif amount == 90:
if instruction == "L":
wx, wy = wy, -wx
if instruction == "R":
wx, wy = -wy, wx
elif amount == 270:
if instruction == "L":
wx, wy = -wy, wx
if instruction == "R":
wx, wy = wy, -wx
print("End : ship: %s --- waypoint: %s" % ((x, y), (wx, wy)))
print("Part 2 manhattan distance: ", x , "+", y, "=", abs(x)+abs(y))
| 5.695205 | 80 | 0.586089 | 1,063 | 4,989 | 2.745061 | 0.145814 | 0.010966 | 0.006169 | 0.005483 | 0.178204 | 0.14599 | 0.14599 | 0.11035 | 0.070596 | 0.070596 | 0 | 0.370338 | 0.317499 | 4,989 | 875 | 81 | 5.701714 | 0.486637 | 0.165564 | 0 | 0.919414 | 0 | 0 | 0.686455 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.006105 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ffef0b9a0d51339172fabc98f2eb6e718de7cc74 | 152 | py | Python | sld/types.py | instituteofcancerresearch/slidebook-python | 0477478c68b435e52e52f438411dba954a50bf04 | [
"MIT"
] | 1 | 2022-01-26T17:32:58.000Z | 2022-01-26T17:32:58.000Z | sld/types.py | instituteofcancerresearch/slidebook-python | 0477478c68b435e52e52f438411dba954a50bf04 | [
"MIT"
] | 8 | 2022-01-26T18:08:49.000Z | 2022-02-15T13:48:39.000Z | sld/types.py | instituteofcancerresearch/slidebook-python | 0477478c68b435e52e52f438411dba954a50bf04 | [
"MIT"
] | 1 | 2022-03-24T08:42:26.000Z | 2022-03-24T08:42:26.000Z | from typing import TYPE_CHECKING, Union
import numpy as np
if TYPE_CHECKING:
import dask.array
ArrayLike = Union[np.ndarray, "dask.array.Array"]
| 16.888889 | 49 | 0.763158 | 23 | 152 | 4.956522 | 0.608696 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 152 | 8 | 50 | 19 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
081b6d8660001255c7e8971a20c750f1a5003257 | 42 | py | Python | script/scaffold/templates/tests/__init__.py | psicot/home-assistant | 60f0988435ac104f83ace36fa762bb27cb093509 | [
"Apache-2.0"
] | null | null | null | script/scaffold/templates/tests/__init__.py | psicot/home-assistant | 60f0988435ac104f83ace36fa762bb27cb093509 | [
"Apache-2.0"
] | null | null | null | script/scaffold/templates/tests/__init__.py | psicot/home-assistant | 60f0988435ac104f83ace36fa762bb27cb093509 | [
"Apache-2.0"
] | null | null | null | """Tests for the NEW_NAME integration."""
| 21 | 41 | 0.714286 | 6 | 42 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.783784 | 0.833333 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
082fbf5774c8e4aa2e7004df623539d76b8d37fa | 112 | py | Python | cythonTest/callFromC/setup.py | terasakisatoshi/pythonCodes | baee095ecee96f6b5ec6431267cdc6c40512a542 | [
"MIT"
] | null | null | null | cythonTest/callFromC/setup.py | terasakisatoshi/pythonCodes | baee095ecee96f6b5ec6431267cdc6c40512a542 | [
"MIT"
] | null | null | null | cythonTest/callFromC/setup.py | terasakisatoshi/pythonCodes | baee095ecee96f6b5ec6431267cdc6c40512a542 | [
"MIT"
] | null | null | null | from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules=cythonize("cytest.pyx")) | 28 | 42 | 0.821429 | 16 | 112 | 5.6875 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089286 | 112 | 4 | 42 | 28 | 0.892157 | 0 | 0 | 0 | 0 | 0 | 0.088496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
08429fee12844c7f1ac82f89f5bc06b033673f2c | 105 | py | Python | src/ui/core/property/__init__.py | temportalflux/MelodyBot | f5172a6895b8dc4ee89bf6327fd8663456390861 | [
"Apache-2.0"
] | null | null | null | src/ui/core/property/__init__.py | temportalflux/MelodyBot | f5172a6895b8dc4ee89bf6327fd8663456390861 | [
"Apache-2.0"
] | null | null | null | src/ui/core/property/__init__.py | temportalflux/MelodyBot | f5172a6895b8dc4ee89bf6327fd8663456390861 | [
"Apache-2.0"
] | null | null | null | from .single_line_text_field import SingleLineTextField
from .playlist_tag_table import PlaylistTagTable
| 35 | 55 | 0.904762 | 13 | 105 | 6.923077 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07619 | 105 | 2 | 56 | 52.5 | 0.927835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
084e9f73d0fb73a8087fa303b25f9d5ff0e706bc | 439 | py | Python | RecoParticleFlow/PFClusterProducer/python/particleFlowClusterTimeAssigner_cfi.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 6 | 2017-09-08T14:12:56.000Z | 2022-03-09T23:57:01.000Z | RecoParticleFlow/PFClusterProducer/python/particleFlowClusterTimeAssigner_cfi.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 545 | 2017-09-19T17:10:19.000Z | 2022-03-07T16:55:27.000Z | RecoParticleFlow/PFClusterProducer/python/particleFlowClusterTimeAssigner_cfi.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 14 | 2017-10-04T09:47:21.000Z | 2019-10-23T18:04:45.000Z | import FWCore.ParameterSet.Config as cms
from RecoParticleFlow.PFClusterProducer.particleFlowClusterTimeAssignerDefault_cfi import *
particleFlowTimeAssignerECAL = particleFlowClusterTimeAssignerDefault.clone()
particleFlowTimeAssignerECAL.timeSrc = cms.InputTag('ecalBarrelClusterFastTimer:PerfectResolutionModel')
particleFlowTimeAssignerECAL.timeResoSrc = cms.InputTag('ecalBarrelClusterFastTimer:PerfectResolutionModelResolution')
| 43.9 | 118 | 0.895216 | 27 | 439 | 14.518519 | 0.703704 | 0.056122 | 0.188776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047836 | 439 | 9 | 119 | 48.777778 | 0.937799 | 0 | 0 | 0 | 0 | 0 | 0.24714 | 0.24714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f229c88eb3135443b84a13569aaf91cd6d64bb46 | 215 | py | Python | src/wreadinput/__init__.py | WisconsinRobotics/wreadinput | 8d63d130dd2e6a8e440b9ca1cee571c01d4091ea | [
"MIT"
] | null | null | null | src/wreadinput/__init__.py | WisconsinRobotics/wreadinput | 8d63d130dd2e6a8e440b9ca1cee571c01d4091ea | [
"MIT"
] | 1 | 2022-03-16T06:49:21.000Z | 2022-03-16T06:49:21.000Z | src/wreadinput/__init__.py | WisconsinRobotics/wreadinput | 8d63d130dd2e6a8e440b9ca1cee571c01d4091ea | [
"MIT"
] | null | null | null | from . import default_node
from .control import *
from .device import InputDevice
from .finder import DeviceFinder
from .shape import DeviceShape
from .wreadinput import WReadInput
from .util.evdev_const import *
| 21.5 | 34 | 0.813953 | 28 | 215 | 6.178571 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 215 | 9 | 35 | 23.888889 | 0.935135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f2a6203014c5e0e8af2db5a3821c9e353df148d3 | 1,760 | py | Python | gripql/python/gripql/operators.py | jordan2lee/grip | 64c4ae923ae8819569fe584c6750a24e51932aec | [
"MIT"
] | 17 | 2018-10-25T15:33:51.000Z | 2022-03-16T19:07:22.000Z | gripql/python/gripql/operators.py | jordan2lee/grip | 64c4ae923ae8819569fe584c6750a24e51932aec | [
"MIT"
] | 80 | 2018-08-28T17:15:31.000Z | 2022-01-25T02:01:16.000Z | gripql/python/gripql/operators.py | jordan2lee/grip | 64c4ae923ae8819569fe584c6750a24e51932aec | [
"MIT"
] | 5 | 2017-09-12T05:40:41.000Z | 2018-06-29T10:00:39.000Z | from __future__ import absolute_import, print_function, unicode_literals
def and_(*expressions):
return {"and": {"expressions": expressions}}
def or_(*expressions):
return {"or": {"expressions": expressions}}
def not_(expression):
return {"not": expression}
def eq(key, value):
return {"condition": {"key": key, "value": value, "condition": "EQ"}}
def neq(key, value):
return {"condition": {"key": key, "value": value, "condition": "NEQ"}}
def gt(key, value):
return {"condition": {"key": key, "value": value, "condition": "GT"}}
def gte(key, value):
return {"condition": {"key": key, "value": value, "condition": "GTE"}}
def lt(key, value):
return {"condition": {"key": key, "value": value, "condition": "LT"}}
def lte(key, value):
return {"condition": {"key": key, "value": value, "condition": "LTE"}}
def inside(key, lower, upper):
return {"condition": {"key": key, "value": [lower, upper], "condition": "INSIDE"}}
def outside(key, lower, upper):
return {"condition": {"key": key, "value": [lower, upper], "condition": "OUTSIDE"}}
def between(key, lower, upper):
return {"condition": {"key": key, "value": [lower, upper], "condition": "BETWEEN"}}
def within(key, values):
if not isinstance(values, list):
if not isinstance(values, dict):
values = [values]
return {"condition": {"key": key, "value": values, "condition": "WITHIN"}}
def without(key, values):
if not isinstance(values, list):
if not isinstance(values, dict):
values = [values]
return {"condition": {"key": key, "value": values, "condition": "WITHOUT"}}
def contains(key, value):
return {"condition": {"key": key, "value": value, "condition": "CONTAINS"}}
| 25.882353 | 87 | 0.613068 | 203 | 1,760 | 5.26601 | 0.182266 | 0.142189 | 0.202058 | 0.235734 | 0.686623 | 0.686623 | 0.686623 | 0.686623 | 0.686623 | 0.372311 | 0 | 0 | 0.182386 | 1,760 | 67 | 88 | 26.268657 | 0.742877 | 0 | 0 | 0.162162 | 0 | 0 | 0.226136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.405405 | false | 0 | 0.027027 | 0.351351 | 0.837838 | 0.027027 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
4b574c5e7977306ac2695c9a7441fa3f0da7210b | 4,659 | py | Python | notebook/str_num_determine.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 174 | 2018-05-30T21:14:50.000Z | 2022-03-25T07:59:37.000Z | notebook/str_num_determine.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 5 | 2019-08-10T03:22:02.000Z | 2021-07-12T20:31:17.000Z | notebook/str_num_determine.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 53 | 2018-04-27T05:26:35.000Z | 2022-03-25T07:59:37.000Z | s = '1234567890'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = 1234567890
# isdecimal: True
# isdigit: True
# isnumeric: True
s = '１２３４５６７８９０'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = １２３４５６７８９０
# isdecimal: True
# isdigit: True
# isnumeric: True
s = '-1.23'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = -1.23
# isdecimal: False
# isdigit: False
# isnumeric: False
s = '10\u00B2'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = 10²
# isdecimal: False
# isdigit: True
# isnumeric: True
s = '\u00BD'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = ½
# isdecimal: False
# isdigit: False
# isnumeric: True
s = '\u2166'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = Ⅶ
# isdecimal: False
# isdigit: False
# isnumeric: True
s = '一二三四五六七八九〇'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = 一二三四五六七八九〇
# isdecimal: False
# isdigit: False
# isnumeric: True
s = '壱億参阡萬'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = 壱億参阡萬
# isdecimal: False
# isdigit: False
# isnumeric: True
s = 'abc'
print('s =', s)
print('isalpha:', s.isalpha())
# s = abc
# isalpha: True
s = 'あいうえお'
print('s =', s)
print('isalpha:', s.isalpha())
# s = あいうえお
# isalpha: True
s = 'アイウエオ'
print('s =', s)
print('isalpha:', s.isalpha())
# s = アイウエオ
# isalpha: True
s = '漢字'
print('s =', s)
print('isalpha:', s.isalpha())
# s = 漢字
# isalpha: True
s = '1234567890'
print('s =', s)
print('isalpha:', s.isalpha())
# s = 1234567890
# isalpha: False
s = '１２３４５６７８９０'
print('s =', s)
print('isalpha:', s.isalpha())
# s = １２３４５６７８９０
# isalpha: False
s = '一二三四五六七八九'
print('s =', s)
print('isalpha:', s.isalpha())
# s = 一二三四五六七八九
# isalpha: True
s = '壱億参阡萬'
print('s =', s)
print('isalpha:', s.isalpha())
# s = 壱億参阡萬
# isalpha: True
s = '〇'
print('s =', s)
print('isalpha:', s.isalpha())
# s = 〇
# isalpha: False
s = '\u2166'
print('s =', s)
print('isalpha:', s.isalpha())
# s = Ⅶ
# isalpha: False
s = 'abc123'
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
# s = abc123
# isalnum: True
# isalpha: False
# isdecimal: False
# isdigit: False
# isnumeric: False
s = 'abc123+-,.&'
print('s =', s)
print('isascii:', s.isascii())
print('isalnum:', s.isalnum())
# s = abc123+-,.&
# isascii: True
# isalnum: False
s = 'あいうえお'
print('s =', s)
print('isascii:', s.isascii())
print('isalnum:', s.isalnum())
# s = あいうえお
# isascii: False
# isalnum: True
s = ''
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
print('isascii:', s.isascii())
# s =
# isalnum: False
# isalpha: False
# isdecimal: False
# isdigit: False
# isnumeric: False
# isascii: True
print(bool(''))
# False
print(bool('abc123'))
# True
s = '-1.23'
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
print('isascii:', s.isascii())
# s = -1.23
# isalnum: False
# isalpha: False
# isdecimal: False
# isdigit: False
# isnumeric: False
# isascii: True
print(float('-1.23'))
# -1.23
print(type(float('-1.23')))
# <class 'float'>
# print(float('abc'))
# ValueError: could not convert string to float: 'abc'
def is_num(s):
try:
float(s)
except ValueError:
return False
else:
return True
print(is_num('123'))
# True
print(is_num('-1.23'))
# True
print(is_num('+1.23e10'))
# True
print(is_num('abc'))
# False
print(is_num('10,000,000'))
# False
def is_num_delimiter(s):
try:
float(s.replace(',', ''))
except ValueError:
return False
else:
return True
print(is_num_delimiter('10,000,000'))
# True
def is_num_delimiter2(s):
try:
float(s.replace(',', '').replace(' ', ''))
except ValueError:
return False
else:
return True
print(is_num_delimiter2('10,000,000'))
# True
print(is_num_delimiter2('10 000 000'))
# True
| 17.581132 | 54 | 0.616656 | 627 | 4,659 | 4.566188 | 0.08453 | 0.050297 | 0.056235 | 0.096402 | 0.81942 | 0.790779 | 0.782745 | 0.699267 | 0.599022 | 0.580161 | 0 | 0.045907 | 0.158403 | 4,659 | 264 | 55 | 17.647727 | 0.680949 | 0.26422 | 0 | 0.783582 | 0 | 0 | 0.232084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022388 | false | 0 | 0 | 0 | 0.067164 | 0.671642 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
4b5c792b081b69fb295bc0474c99b0974975dce8 | 241 | py | Python | server/statistics/micro_statistics/schemas/tracks_selected.py | ndjuric93/MusicOrganizer | ef2e50abfeb1629325274b260f654935fd3f2740 | [
"Apache-2.0"
] | 1 | 2019-09-13T18:05:27.000Z | 2019-09-13T18:05:27.000Z | server/statistics/micro_statistics/schemas/tracks_selected.py | ndjuric93/MusicOrganizer | ef2e50abfeb1629325274b260f654935fd3f2740 | [
"Apache-2.0"
] | 5 | 2021-03-09T00:49:53.000Z | 2022-02-17T20:03:16.000Z | server/statistics/micro_statistics/schemas/tracks_selected.py | ndjuric93/MusicOrganizer | ef2e50abfeb1629325274b260f654935fd3f2740 | [
"Apache-2.0"
] | null | null | null | from micro_statistics import ma
from micro_statistics.models.track_selected_count import TrackSelectedCount
class TrackSelected(ma.ModelSchema):
class Meta:
model = TrackSelectedCount
track_selected = TrackSelected(many=True)
| 24.1 | 75 | 0.813278 | 27 | 241 | 7.074074 | 0.62963 | 0.094241 | 0.198953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136929 | 241 | 9 | 76 | 26.777778 | 0.918269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b712b9f3c524f9ba569e0e06982fcfefd0a1e13 | 148 | py | Python | back/app/routes/__init__.py | davidroeca/simple_graphql | a6b2b20b6458b6b2fa9363a542015ab42761bd98 | [
"MIT"
] | null | null | null | back/app/routes/__init__.py | davidroeca/simple_graphql | a6b2b20b6458b6b2fa9363a542015ab42761bd98 | [
"MIT"
] | null | null | null | back/app/routes/__init__.py | davidroeca/simple_graphql | a6b2b20b6458b6b2fa9363a542015ab42761bd98 | [
"MIT"
] | null | null | null | from flask import Blueprint
api_v1 = Blueprint('api_v1', __name__)
# Enforce mutations from sub-modules
from . import (
graphql,
hello,
)
| 14.8 | 38 | 0.709459 | 19 | 148 | 5.210526 | 0.684211 | 0.242424 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0.202703 | 148 | 9 | 39 | 16.444444 | 0.822034 | 0.22973 | 0 | 0 | 0 | 0 | 0.053571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4ba3bcf676532fcf782354b4561b7cc3a6582cd8 | 172 | py | Python | hlrl/torch/vision/transforms/__init__.py | Chainso/HLRL | 584f4ed2fa4d8b311a21dbd862ec9434833dd7cd | [
"MIT"
] | null | null | null | hlrl/torch/vision/transforms/__init__.py | Chainso/HLRL | 584f4ed2fa4d8b311a21dbd862ec9434833dd7cd | [
"MIT"
] | null | null | null | hlrl/torch/vision/transforms/__init__.py | Chainso/HLRL | 584f4ed2fa4d8b311a21dbd862ec9434833dd7cd | [
"MIT"
] | null | null | null | from .interpolate import Interpolate
from .grayscale import Grayscale
from .convert_dimension_order import ConvertDimensionOrder
from .stack_dimension import StackDimension | 43 | 58 | 0.889535 | 19 | 172 | 7.894737 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087209 | 172 | 4 | 59 | 43 | 0.955414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4bb4bb2eeebcf0e4d59e512366887657596aebcf | 69 | py | Python | pmxbot/__main__.py | adriandiazgar/pmxbot | ba3ef406c5359c3fde496bd7d22bfd4a758484da | [
"MIT"
] | 17 | 2016-01-27T12:10:03.000Z | 2019-08-28T23:02:51.000Z | pmxbot/__main__.py | adriandiazgar/pmxbot | ba3ef406c5359c3fde496bd7d22bfd4a758484da | [
"MIT"
] | 79 | 2015-12-02T16:02:01.000Z | 2020-02-09T01:51:05.000Z | pmxbot/__main__.py | adriandiazgar/pmxbot | ba3ef406c5359c3fde496bd7d22bfd4a758484da | [
"MIT"
] | 8 | 2016-06-27T11:07:42.000Z | 2019-01-24T20:21:42.000Z | import pmxbot.core
if __name__ == '__main__':
pmxbot.core.run()
| 13.8 | 26 | 0.681159 | 9 | 69 | 4.333333 | 0.777778 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 69 | 4 | 27 | 17.25 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4bb664579ad58f9e83230268996a29cd465a44d1 | 96 | py | Python | nvdbapiv3/__init__.py | alexdiem/nvdbapi-V3 | 18265ee6d02aed17d6199e5ed42fe731c9320a08 | [
"MIT"
] | null | null | null | nvdbapiv3/__init__.py | alexdiem/nvdbapi-V3 | 18265ee6d02aed17d6199e5ed42fe731c9320a08 | [
"MIT"
] | null | null | null | nvdbapiv3/__init__.py | alexdiem/nvdbapi-V3 | 18265ee6d02aed17d6199e5ed42fe731c9320a08 | [
"MIT"
] | null | null | null | from .nvdbapiv3 import *
from .nvdb2geojson import *
from .apiforbindelse import apiforbindelse
| 24 | 42 | 0.822917 | 10 | 96 | 7.9 | 0.5 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.125 | 96 | 3 | 43 | 32 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
29975bcf657921f98fdfcad7684b9c5cd83e4b8f | 141 | py | Python | samples/nlp_sentiment_analysis/sentiment_analysis/data/__init__.py | katyamust/ml-expr-fw | 5ede3ff1f777430cf25e8731e4798fc37387fb9d | [
"MIT"
] | null | null | null | samples/nlp_sentiment_analysis/sentiment_analysis/data/__init__.py | katyamust/ml-expr-fw | 5ede3ff1f777430cf25e8731e4798fc37387fb9d | [
"MIT"
] | null | null | null | samples/nlp_sentiment_analysis/sentiment_analysis/data/__init__.py | katyamust/ml-expr-fw | 5ede3ff1f777430cf25e8731e4798fc37387fb9d | [
"MIT"
] | null | null | null | from .data_loader import DataLoader
from .nlp_sample_data_loader import NLPSampleDataLoader
__all__ = ['DataLoader', "NLPSampleDataLoader"]
| 28.2 | 55 | 0.836879 | 15 | 141 | 7.333333 | 0.6 | 0.181818 | 0.290909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092199 | 141 | 4 | 56 | 35.25 | 0.859375 | 0 | 0 | 0 | 0 | 0 | 0.205674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
29d15602d0ffd00098d0c2ac7da37179f8d97e09 | 1,033 | py | Python | simuvex/simuvex/engines/vex/statements/__init__.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 86 | 2015-08-06T23:25:07.000Z | 2022-02-17T14:58:22.000Z | simuvex/simuvex/engines/vex/statements/__init__.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 132 | 2015-09-10T19:06:59.000Z | 2018-10-04T20:36:45.000Z | simuvex/simuvex/engines/vex/statements/__init__.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 80 | 2015-08-07T10:30:20.000Z | 2020-03-21T14:45:28.000Z | from angr.engines.vex.statements.base import SimIRStmt
from angr.engines.vex.statements.noop import SimIRStmt_NoOp
from angr.engines.vex.statements.imark import SimIRStmt_IMark
from angr.engines.vex.statements.abihint import SimIRStmt_AbiHint
from angr.engines.vex.statements.wrtmp import SimIRStmt_WrTmp
from angr.engines.vex.statements.put import SimIRStmt_Put
from angr.engines.vex.statements.store import SimIRStmt_Store
from angr.engines.vex.statements.mbe import SimIRStmt_MBE
from angr.engines.vex.statements.dirty import SimIRStmt_Dirty
from angr.engines.vex.statements.exit import SimIRStmt_Exit
from angr.engines.vex.statements.cas import SimIRStmt_CAS
from angr.engines.vex.statements.storeg import SimIRStmt_StoreG
from angr.engines.vex.statements.loadg import SimIRStmt_LoadG
from angr.engines.vex.statements.llsc import SimIRStmt_LLSC
from angr.engines.vex.statements.puti import SimIRStmt_PutI
from angr.errors import UnsupportedIRStmtError, UnsupportedDirtyError, SimStatementError
from angr import sim_options as o
| 54.368421 | 88 | 0.868345 | 148 | 1,033 | 5.959459 | 0.202703 | 0.154195 | 0.255102 | 0.306122 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070668 | 1,033 | 18 | 89 | 57.388889 | 0.91875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b17dc9ef9d8017b0de4ad1dcce2dea49d5395df | 48 | py | Python | magic/gateway/entity/payment_type/payment_type_exception.py | DomAmato/magic-agent | 5ab477feba00538f5e59aae644c212c45c4955e1 | [
"MIT"
] | 7 | 2019-01-16T20:49:48.000Z | 2019-06-04T08:43:42.000Z | magic/gateway/entity/payment_type/payment_type_exception.py | DomAmato/magic-agent | 5ab477feba00538f5e59aae644c212c45c4955e1 | [
"MIT"
] | 5 | 2019-01-09T20:36:44.000Z | 2019-07-11T16:45:18.000Z | magic/gateway/entity/payment_type/payment_type_exception.py | DomAmato/magic-agent | 5ab477feba00538f5e59aae644c212c45c4955e1 | [
"MIT"
] | 2 | 2019-02-27T20:00:43.000Z | 2019-06-08T04:13:18.000Z | class PaymentTypeException(Exception):
pass
| 16 | 38 | 0.791667 | 4 | 48 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 39 | 24 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
4b1b8c29ba32da9d7fe064d6e8346db4a141a0c3 | 190 | py | Python | tests/test_package.py | lukasheinrich/awkward-xaod-bridge | cb78e69fcfc15fb7e5731a02794b87b11122920c | [
"BSD-3-Clause"
] | 2 | 2021-06-08T14:42:49.000Z | 2021-06-11T16:27:08.000Z | tests/test_package.py | lukasheinrich/awkward-xaod-bridge | cb78e69fcfc15fb7e5731a02794b87b11122920c | [
"BSD-3-Clause"
] | 7 | 2021-06-23T17:10:58.000Z | 2021-12-15T17:11:51.000Z | tests/test_package.py | lukasheinrich/awkward-xaod-bridge | cb78e69fcfc15fb7e5731a02794b87b11122920c | [
"BSD-3-Clause"
] | null | null | null | import awkward_xaod_bridge as xaodbridge
def test_pybind11():
assert xaodbridge.calibrate([1,2,3], [4,5,6]).tolist() == [5,7,9]
def test_version():
assert xaodbridge.__version__
| 19 | 69 | 0.710526 | 28 | 190 | 4.535714 | 0.75 | 0.110236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0.142105 | 190 | 9 | 70 | 21.111111 | 0.711656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
d9c1d4c478441b118c6cdc371465ba26068405a3 | 109 | py | Python | mango/hedging/__init__.py | neilfutureftr/mango-explorer | d8d5740cc53da90414323e8f62f5b6fbfc6ea5ed | [
"MIT"
] | null | null | null | mango/hedging/__init__.py | neilfutureftr/mango-explorer | d8d5740cc53da90414323e8f62f5b6fbfc6ea5ed | [
"MIT"
] | null | null | null | mango/hedging/__init__.py | neilfutureftr/mango-explorer | d8d5740cc53da90414323e8f62f5b6fbfc6ea5ed | [
"MIT"
] | null | null | null | from .hedger import Hedger
from .nullhedger import NullHedger
from .perptospothedger import PerpToSpotHedger
| 27.25 | 46 | 0.862385 | 12 | 109 | 7.833333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110092 | 109 | 3 | 47 | 36.333333 | 0.969072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a0b6053c7edcde3119e2ddbfa1cef6c28d814c0 | 162 | py | Python | aerolyzer/__init__.py | wingarlo/Aerolyzer | 16c91740ba561b988e67fdcd6ef802ed8a826da2 | [
"Apache-2.0"
] | null | null | null | aerolyzer/__init__.py | wingarlo/Aerolyzer | 16c91740ba561b988e67fdcd6ef802ed8a826da2 | [
"Apache-2.0"
] | null | null | null | aerolyzer/__init__.py | wingarlo/Aerolyzer | 16c91740ba561b988e67fdcd6ef802ed8a826da2 | [
"Apache-2.0"
] | null | null | null | version = "0.0.0.4"
#import image_restriction_functions
#import image_restriction_main
#import retrieve_image_data
from wunderData import *
from horizon import *
| 23.142857 | 35 | 0.82716 | 23 | 162 | 5.565217 | 0.565217 | 0.03125 | 0.34375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027586 | 0.104938 | 162 | 6 | 36 | 27 | 0.855172 | 0.549383 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a0f856a64fe7334e45d6ba2575e63f778636cf5 | 75 | py | Python | discord_interactions/ocm/__init__.py | lyczak/discord-interactions.py | e4e448ed9ab922b7e5ba3965dcdb708473f92b7c | [
"MIT"
] | null | null | null | discord_interactions/ocm/__init__.py | lyczak/discord-interactions.py | e4e448ed9ab922b7e5ba3965dcdb708473f92b7c | [
"MIT"
] | null | null | null | discord_interactions/ocm/__init__.py | lyczak/discord-interactions.py | e4e448ed9ab922b7e5ba3965dcdb708473f92b7c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from .command import Command, Option, OptionChoices
| 18.75 | 51 | 0.773333 | 10 | 75 | 5.8 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 75 | 3 | 52 | 25 | 0.878788 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a1555d32a5315cf3d7664cb18a929783705ca0d | 153 | py | Python | apps/common/views.py | thunderwin/bbs | 2ac7ef2c61ddba11c61aeefa25f8dfc22a014cc9 | [
"Apache-2.0"
] | null | null | null | apps/common/views.py | thunderwin/bbs | 2ac7ef2c61ddba11c61aeefa25f8dfc22a014cc9 | [
"Apache-2.0"
] | null | null | null | apps/common/views.py | thunderwin/bbs | 2ac7ef2c61ddba11c61aeefa25f8dfc22a014cc9 | [
"Apache-2.0"
] | null | null | null | from flask import Blueprint
common_bp = Blueprint('common',__name__,url_prefix='/common')
@common_bp.route('/')
def index():
return 'common index'
| 19.125 | 61 | 0.72549 | 20 | 153 | 5.2 | 0.65 | 0.288462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124183 | 153 | 7 | 62 | 21.857143 | 0.776119 | 0 | 0 | 0 | 0 | 0 | 0.169935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a1d42f41a2098403d349610a226eef3dea82b9e | 72 | py | Python | tests/conftest.py | slxiao/partition | 53ff137b26816d64bf002baf269dbf97b601a3ca | [
"MIT"
] | 5 | 2019-11-22T08:34:12.000Z | 2021-09-21T03:18:31.000Z | tests/conftest.py | slxiao/partition | 53ff137b26816d64bf002baf269dbf97b601a3ca | [
"MIT"
] | 3 | 2019-12-22T10:28:44.000Z | 2021-10-09T19:14:31.000Z | tests/conftest.py | slxiao/partition | 53ff137b26816d64bf002baf269dbf97b601a3ca | [
"MIT"
] | 1 | 2020-12-01T15:31:30.000Z | 2020-12-01T15:31:30.000Z | import pytest
@pytest.fixture
def numbers():
return [4, 5, 6, 7, 8] | 14.4 | 26 | 0.638889 | 12 | 72 | 3.833333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 0.208333 | 72 | 5 | 26 | 14.4 | 0.719298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
8a307c7fd0204d5d2c1dfba53ccb3657fb7d90e6 | 72 | py | Python | app/gws/gis/ows/__init__.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | app/gws/gis/ows/__init__.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | app/gws/gis/ows/__init__.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | from . import request, formats, error
from .util import shared_provider
| 24 | 37 | 0.805556 | 10 | 72 | 5.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 72 | 2 | 38 | 36 | 0.919355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a5d04ee33b2f052981c1ad1a87e3f9fcfcc6647 | 6,718 | py | Python | highlights/highlights_state_selection.py | yotamitai/Highway_Disagreements | 18dccaa67b238700691c0f89b9fc2dfc2dab6751 | [
"MIT"
] | null | null | null | highlights/highlights_state_selection.py | yotamitai/Highway_Disagreements | 18dccaa67b238700691c0f89b9fc2dfc2dab6751 | [
"MIT"
] | null | null | null | highlights/highlights_state_selection.py | yotamitai/Highway_Disagreements | 18dccaa67b238700691c0f89b9fc2dfc2dab6751 | [
"MIT"
] | null | null | null | import numpy as np
from scipy.spatial import distance
from bisect import bisect
from bisect import insort_left
def compute_states_importance(states_q_values_df, compare_to='worst'):
if compare_to == 'worst':
states_q_values_df['importance'] = states_q_values_df['q_values'].apply(
lambda x: np.max(x) - np.min(x))
elif compare_to == 'second':
states_q_values_df['importance'] = states_q_values_df['q_values'].apply(
lambda x: np.max(x) - np.partition(x.flatten(), -2)[-2])
else:
raise Exception('No importance criteria provided.')
return states_q_values_df
def highlights(state_importance_df, trace_lens, budget, context_length, minimum_gap):
''' generate highlights summary
:param state_importance_df: dataframe with 2 columns: state and importance score of the state
:param budget: allowed length of summary - note this includes only the important states, it doesn't count context
around them
:param context_length: how many states to show around the chosen important state (e.g., if context_lenght=10, we
will show 10 states before and 10 states after the important state
:param minimum_gap: how many states should we skip after showing the context for an important state. For example, if
we chose state 200, and the context length is 10, we will show states 189-211. If minimum_gap=10, we will not
consider states 212-222 and states 178-198 because they are too close
:return: a list with the indices of the important states, and a list with all summary states (includes the context)
'''
sorted_df = state_importance_df.sort_values(['importance'], ascending=False)
summary_states = []
for index, row in sorted_df.iterrows():
state_index = row['state']
index_in_summary = bisect(summary_states, state_index)
state_before = None
state_after = None
if index_in_summary > 0:
state_before = summary_states[index_in_summary-1]
if index_in_summary < len(summary_states):
state_after = summary_states[index_in_summary]
if state_after is not None:
if state_index[0] == state_after[0]:
if state_index[1]+context_length+minimum_gap > state_after[1]:
continue
if state_before is not None:
if state_index[0] == state_before[0]:
if state_index[1]-context_length-minimum_gap < state_before[1]:
continue
insort_left(summary_states,state_index)
if len(summary_states) == budget:
break
summary_states_with_context = {}
for state in summary_states:
s, e = max(state[1]-context_length,0), min(state[1]+context_length, trace_lens[state[0]]-1)
summary_states_with_context[state] = [(state[0], x) for x in (range(s,e))]
return summary_states_with_context
def find_similar_state_in_summary(state_importance_df, threshold, summary_states, new_state):
most_similar_state = None
minimal_distance = threshold
for state in summary_states:
state_features = state_importance_df.loc[state_importance_df['state'] == state].iloc[
0].features
distance = sum(state_features - new_state)
if distance < minimal_distance:
minimal_distance = distance
most_similar_state = state
return most_similar_state, minimal_distance
def highlights_div(state_importance_df, trace_lens, budget, context_length, minimum_gap,
threshold=50000):
''' generate highlights-div summary
:param state_importance_df: dataframe with 2 columns: state and importance score of the state
:param budget: allowed length of summary - note this includes only the important states, it doesn't count context
around them
:param context_length: how many states to show around the chosen important state (e.g., if context_lenght=10, we
will show 10 states before and 10 states after the important state
:param minimum_gap: how many states should we skip after showing the context for an important state. For example, if
we chose state 200, and the context length is 10, we will show states 189-211. If minimum_gap=10, we will not
consider states 212-222 and states 178-198 because they are too close
:param distance_metric: metric to use for comparing states (function)
:param percentile_threshold: what minimal distance to allow between states in summary
:param subset_threshold: number of random states to be used as basis for the div-threshold
:return: a list with the indices of the important states, and a list with all summary states (includes the context)
'''
sorted_df = state_importance_df.sort_values(['importance'], ascending=False)
summary_states = []
summary_states_with_context = []
num_chosen_states = 0
for index, row in sorted_df.iterrows():
state_index = row['state']
index_in_summary = bisect(summary_states, state_index)
state_before = None
state_after = None
if index_in_summary > 0:
state_before = summary_states[index_in_summary - 1]
if index_in_summary < len(summary_states):
state_after = summary_states[index_in_summary]
if state_after is not None:
if state_index[0] == state_after[0]:
if state_index[1] + context_length + minimum_gap > state_after[1]:
continue
if state_before is not None:
if state_index[0] == state_before[0]:
if state_index[1] - context_length - minimum_gap < state_before[1]:
continue
most_similar_state, min_distance = \
find_similar_state_in_summary(state_importance_df, threshold,
summary_states_with_context, row['features'])
if most_similar_state is None:
insort_left(summary_states, state_index)
num_chosen_states += 1
print('summary_states:', summary_states)
else:
if min_distance > threshold:
insort_left(summary_states, state_index)
num_chosen_states += 1
print('summary_states:', summary_states)
summary_states_with_context = {}
for state in summary_states:
s, e = max(state[1] - context_length, 0), min(state[1] + context_length,
trace_lens[state[0]] - 1)
summary_states_with_context[state] = [(state[0], x) for x in (range(s, e))]
if len(summary_states) == budget:
break
return summary_states_with_context | 49.036496 | 120 | 0.676689 | 920 | 6,718 | 4.698913 | 0.165217 | 0.099237 | 0.039325 | 0.044414 | 0.789035 | 0.764053 | 0.737451 | 0.737451 | 0.737451 | 0.737451 | 0 | 0.021251 | 0.250521 | 6,718 | 137 | 121 | 49.036496 | 0.837339 | 0.287734 | 0 | 0.645833 | 0 | 0 | 0.033554 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.15625 | 0 | 0.239583 | 0.020833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a682cd255cfdd83d060fcb7f4bbd6e7236292ac | 184 | py | Python | pyemto/EOS/__init__.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | [
"MIT"
] | 11 | 2018-04-10T02:01:12.000Z | 2021-12-10T06:44:54.000Z | pyemto/EOS/__init__.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | [
"MIT"
] | null | null | null | pyemto/EOS/__init__.py | hitliaomq/pyemto | 334903fbe626fa8b9f7e1f7a089b7c0e30edba46 | [
"MIT"
] | 2 | 2020-02-01T19:59:50.000Z | 2020-04-07T20:53:40.000Z | # -*- coding: utf-8 -*-
"""
Created on Wed Dec 3 15:01:34 2014
@author: Matti Ropo
@author: Henrik Levämäki
"""
from __future__ import print_function
from pyemto.EOS.EOS import EOS
| 16.727273 | 37 | 0.706522 | 29 | 184 | 4.310345 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 0.168478 | 184 | 10 | 38 | 18.4 | 0.738562 | 0.565217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
8a68e31e26bc0cb21ab9b6a7be5c10fa10fcee0c | 20,795 | py | Python | packages/python/plotly/plotly/graph_objs/layout/_grid.py | eranws/plotly.py | 5b0e8d3ccab55fe1a6e4ba123cfc9d718a9ffc5a | [
"MIT"
] | null | null | null | packages/python/plotly/plotly/graph_objs/layout/_grid.py | eranws/plotly.py | 5b0e8d3ccab55fe1a6e4ba123cfc9d718a9ffc5a | [
"MIT"
] | null | null | null | packages/python/plotly/plotly/graph_objs/layout/_grid.py | eranws/plotly.py | 5b0e8d3ccab55fe1a6e4ba123cfc9d718a9ffc5a | [
"MIT"
] | null | null | null | from plotly.basedatatypes import BaseLayoutHierarchyType as _BaseLayoutHierarchyType
import copy as _copy
class Grid(_BaseLayoutHierarchyType):
# class properties
# --------------------
_parent_path_str = "layout"
_path_str = "layout.grid"
_valid_props = {
"columns",
"domain",
"pattern",
"roworder",
"rows",
"subplots",
"xaxes",
"xgap",
"xside",
"yaxes",
"ygap",
"yside",
}
# columns
# -------
@property
def columns(self):
"""
The number of columns in the grid. If you provide a 2D
`subplots` array, the length of its longest row is used as the
default. If you give an `xaxes` array, its length is used as
the default. But it's also possible to have a different length,
if you want to leave a row at the end for non-cartesian
subplots.
The 'columns' property is a integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [1, 9223372036854775807]
Returns
-------
int
"""
return self["columns"]
@columns.setter
def columns(self, val):
self["columns"] = val
# domain
# ------
@property
def domain(self):
"""
The 'domain' property is an instance of Domain
that may be specified as:
- An instance of :class:`plotly.graph_objs.layout.grid.Domain`
- A dict of string/value properties that will be passed
to the Domain constructor
Supported dict properties:
x
Sets the horizontal domain of this grid subplot
(in plot fraction). The first and last cells
end exactly at the domain edges, with no grout
around the edges.
y
Sets the vertical domain of this grid subplot
(in plot fraction). The first and last cells
end exactly at the domain edges, with no grout
around the edges.
Returns
-------
plotly.graph_objs.layout.grid.Domain
"""
return self["domain"]
@domain.setter
def domain(self, val):
self["domain"] = val
# pattern
# -------
@property
def pattern(self):
"""
If no `subplots`, `xaxes`, or `yaxes` are given but we do have
`rows` and `columns`, we can generate defaults using
consecutive axis IDs, in two ways: "coupled" gives one x axis
per column and one y axis per row. "independent" uses a new xy
pair for each cell, left-to-right across each row then
iterating rows according to `roworder`.
The 'pattern' property is an enumeration that may be specified as:
- One of the following enumeration values:
['independent', 'coupled']
Returns
-------
Any
"""
return self["pattern"]
@pattern.setter
def pattern(self, val):
self["pattern"] = val
# roworder
# --------
@property
def roworder(self):
"""
Is the first row the top or the bottom? Note that columns are
always enumerated from left to right.
The 'roworder' property is an enumeration that may be specified as:
- One of the following enumeration values:
['top to bottom', 'bottom to top']
Returns
-------
Any
"""
return self["roworder"]
@roworder.setter
def roworder(self, val):
self["roworder"] = val
# rows
# ----
@property
def rows(self):
"""
The number of rows in the grid. If you provide a 2D `subplots`
array or a `yaxes` array, its length is used as the default.
But it's also possible to have a different length, if you want
to leave a row at the end for non-cartesian subplots.
        The 'rows' property is an integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [1, 9223372036854775807]
Returns
-------
int
"""
return self["rows"]
@rows.setter
def rows(self, val):
self["rows"] = val
# subplots
# --------
@property
def subplots(self):
"""
Used for freeform grids, where some axes may be shared across
subplots but others are not. Each entry should be a cartesian
subplot id, like "xy" or "x3y2", or "" to leave that cell
empty. You may reuse x axes within the same column, and y axes
within the same row. Non-cartesian subplots and traces that
support `domain` can place themselves in this grid separately
using the `gridcell` attribute.
The 'subplots' property is an info array that may be specified as:
* a 2D list where:
The 'subplots[i][j]' property is an enumeration that may be specified as:
- One of the following enumeration values:
['']
- A string that matches one of the following regular expressions:
['^x([2-9]|[1-9][0-9]+)?y([2-9]|[1-9][0-9]+)?$']
Returns
-------
list
"""
return self["subplots"]
@subplots.setter
def subplots(self, val):
self["subplots"] = val
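The acceptance rule quoted above — the empty string, or an id matching the given regular expression — can be exercised directly with the stdlib `re` module (`is_valid_cell` is an invented name for illustration):

```python
import re

# Pattern copied from the 'subplots' docstring: axis number 1 is written
# without a digit, so "x1y1" is spelled "xy" and "x1..." never matches.
SUBPLOT_ID = re.compile(r'^x([2-9]|[1-9][0-9]+)?y([2-9]|[1-9][0-9]+)?$')

def is_valid_cell(entry):
    # "" explicitly leaves the cell empty; anything else must match the regex.
    return entry == "" or SUBPLOT_ID.match(entry) is not None
```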
# xaxes
# -----
@property
def xaxes(self):
"""
Used with `yaxes` when the x and y axes are shared across
columns and rows. Each entry should be an x axis id like "x",
"x2", etc., or "" to not put an x axis in that column. Entries
other than "" must be unique. Ignored if `subplots` is present.
If missing but `yaxes` is present, will generate consecutive
IDs.
The 'xaxes' property is an info array that may be specified as:
* a list of elements where:
The 'xaxes[i]' property is an enumeration that may be specified as:
- One of the following enumeration values:
['']
- A string that matches one of the following regular expressions:
['^x([2-9]|[1-9][0-9]+)?$']
Returns
-------
list
"""
return self["xaxes"]
@xaxes.setter
def xaxes(self, val):
self["xaxes"] = val
# xgap
# ----
@property
def xgap(self):
"""
Horizontal space between grid cells, expressed as a fraction of
the total width available to one cell. Defaults to 0.1 for
coupled-axes grids and 0.2 for independent grids.
The 'xgap' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
int|float
"""
return self["xgap"]
@xgap.setter
def xgap(self, val):
self["xgap"] = val
# xside
# -----
@property
def xside(self):
"""
Sets where the x axis labels and titles go. "bottom" means the
very bottom of the grid. "bottom plot" is the lowest plot that
each x axis is used in. "top" and "top plot" are similar.
The 'xside' property is an enumeration that may be specified as:
- One of the following enumeration values:
['bottom', 'bottom plot', 'top plot', 'top']
Returns
-------
Any
"""
return self["xside"]
@xside.setter
def xside(self, val):
self["xside"] = val
# yaxes
# -----
@property
def yaxes(self):
"""
        Used with `xaxes` when the x and y axes are shared across
        columns and rows. Each entry should be a y axis id like "y",
"y2", etc., or "" to not put a y axis in that row. Entries
other than "" must be unique. Ignored if `subplots` is present.
If missing but `xaxes` is present, will generate consecutive
IDs.
The 'yaxes' property is an info array that may be specified as:
* a list of elements where:
The 'yaxes[i]' property is an enumeration that may be specified as:
- One of the following enumeration values:
['']
- A string that matches one of the following regular expressions:
['^y([2-9]|[1-9][0-9]+)?$']
Returns
-------
list
"""
return self["yaxes"]
@yaxes.setter
def yaxes(self, val):
self["yaxes"] = val
# ygap
# ----
@property
def ygap(self):
"""
Vertical space between grid cells, expressed as a fraction of
the total height available to one cell. Defaults to 0.1 for
coupled-axes grids and 0.3 for independent grids.
The 'ygap' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
int|float
"""
return self["ygap"]
@ygap.setter
def ygap(self, val):
self["ygap"] = val
# yside
# -----
@property
def yside(self):
"""
Sets where the y axis labels and titles go. "left" means the
very left edge of the grid. *left plot* is the leftmost plot
that each y axis is used in. "right" and *right plot* are
similar.
The 'yside' property is an enumeration that may be specified as:
- One of the following enumeration values:
['left', 'left plot', 'right plot', 'right']
Returns
-------
Any
"""
return self["yside"]
@yside.setter
def yside(self, val):
self["yside"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
columns
The number of columns in the grid. If you provide a 2D
`subplots` array, the length of its longest row is used
as the default. If you give an `xaxes` array, its
length is used as the default. But it's also possible
to have a different length, if you want to leave a row
at the end for non-cartesian subplots.
domain
:class:`plotly.graph_objects.layout.grid.Domain`
instance or dict with compatible properties
pattern
If no `subplots`, `xaxes`, or `yaxes` are given but we
do have `rows` and `columns`, we can generate defaults
using consecutive axis IDs, in two ways: "coupled"
gives one x axis per column and one y axis per row.
"independent" uses a new xy pair for each cell, left-
to-right across each row then iterating rows according
to `roworder`.
roworder
Is the first row the top or the bottom? Note that
columns are always enumerated from left to right.
rows
The number of rows in the grid. If you provide a 2D
`subplots` array or a `yaxes` array, its length is used
as the default. But it's also possible to have a
different length, if you want to leave a row at the end
for non-cartesian subplots.
subplots
Used for freeform grids, where some axes may be shared
across subplots but others are not. Each entry should
be a cartesian subplot id, like "xy" or "x3y2", or ""
to leave that cell empty. You may reuse x axes within
the same column, and y axes within the same row. Non-
cartesian subplots and traces that support `domain` can
place themselves in this grid separately using the
`gridcell` attribute.
xaxes
Used with `yaxes` when the x and y axes are shared
across columns and rows. Each entry should be an x axis
id like "x", "x2", etc., or "" to not put an x axis in
that column. Entries other than "" must be unique.
Ignored if `subplots` is present. If missing but
`yaxes` is present, will generate consecutive IDs.
xgap
Horizontal space between grid cells, expressed as a
fraction of the total width available to one cell.
Defaults to 0.1 for coupled-axes grids and 0.2 for
independent grids.
xside
Sets where the x axis labels and titles go. "bottom"
means the very bottom of the grid. "bottom plot" is the
lowest plot that each x axis is used in. "top" and "top
plot" are similar.
yaxes
            Used with `xaxes` when the x and y axes are shared
            across columns and rows. Each entry should be a y axis
id like "y", "y2", etc., or "" to not put a y axis in
that row. Entries other than "" must be unique. Ignored
if `subplots` is present. If missing but `xaxes` is
present, will generate consecutive IDs.
ygap
Vertical space between grid cells, expressed as a
fraction of the total height available to one cell.
Defaults to 0.1 for coupled-axes grids and 0.3 for
independent grids.
yside
Sets where the y axis labels and titles go. "left"
means the very left edge of the grid. *left plot* is
the leftmost plot that each y axis is used in. "right"
and *right plot* are similar.
"""
def __init__(
self,
arg=None,
columns=None,
domain=None,
pattern=None,
roworder=None,
rows=None,
subplots=None,
xaxes=None,
xgap=None,
xside=None,
yaxes=None,
ygap=None,
yside=None,
**kwargs
):
"""
Construct a new Grid object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of :class:`plotly.graph_objs.layout.Grid`
columns
The number of columns in the grid. If you provide a 2D
`subplots` array, the length of its longest row is used
as the default. If you give an `xaxes` array, its
length is used as the default. But it's also possible
to have a different length, if you want to leave a row
at the end for non-cartesian subplots.
domain
:class:`plotly.graph_objects.layout.grid.Domain`
instance or dict with compatible properties
pattern
If no `subplots`, `xaxes`, or `yaxes` are given but we
do have `rows` and `columns`, we can generate defaults
using consecutive axis IDs, in two ways: "coupled"
gives one x axis per column and one y axis per row.
"independent" uses a new xy pair for each cell, left-
to-right across each row then iterating rows according
to `roworder`.
roworder
Is the first row the top or the bottom? Note that
columns are always enumerated from left to right.
rows
The number of rows in the grid. If you provide a 2D
`subplots` array or a `yaxes` array, its length is used
as the default. But it's also possible to have a
different length, if you want to leave a row at the end
for non-cartesian subplots.
subplots
Used for freeform grids, where some axes may be shared
across subplots but others are not. Each entry should
be a cartesian subplot id, like "xy" or "x3y2", or ""
to leave that cell empty. You may reuse x axes within
the same column, and y axes within the same row. Non-
cartesian subplots and traces that support `domain` can
place themselves in this grid separately using the
`gridcell` attribute.
xaxes
Used with `yaxes` when the x and y axes are shared
across columns and rows. Each entry should be an x axis
id like "x", "x2", etc., or "" to not put an x axis in
that column. Entries other than "" must be unique.
Ignored if `subplots` is present. If missing but
`yaxes` is present, will generate consecutive IDs.
xgap
Horizontal space between grid cells, expressed as a
fraction of the total width available to one cell.
Defaults to 0.1 for coupled-axes grids and 0.2 for
independent grids.
xside
Sets where the x axis labels and titles go. "bottom"
means the very bottom of the grid. "bottom plot" is the
lowest plot that each x axis is used in. "top" and "top
plot" are similar.
yaxes
            Used with `xaxes` when the x and y axes are shared
            across columns and rows. Each entry should be a y axis
id like "y", "y2", etc., or "" to not put a y axis in
that row. Entries other than "" must be unique. Ignored
if `subplots` is present. If missing but `xaxes` is
present, will generate consecutive IDs.
ygap
Vertical space between grid cells, expressed as a
fraction of the total height available to one cell.
Defaults to 0.1 for coupled-axes grids and 0.3 for
independent grids.
yside
Sets where the y axis labels and titles go. "left"
means the very left edge of the grid. *left plot* is
the leftmost plot that each y axis is used in. "right"
and *right plot* are similar.
Returns
-------
Grid
"""
super(Grid, self).__init__("grid")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.layout.Grid
constructor must be a dict or
an instance of :class:`plotly.graph_objs.layout.Grid`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("columns", None)
_v = columns if columns is not None else _v
if _v is not None:
self["columns"] = _v
_v = arg.pop("domain", None)
_v = domain if domain is not None else _v
if _v is not None:
self["domain"] = _v
_v = arg.pop("pattern", None)
_v = pattern if pattern is not None else _v
if _v is not None:
self["pattern"] = _v
_v = arg.pop("roworder", None)
_v = roworder if roworder is not None else _v
if _v is not None:
self["roworder"] = _v
_v = arg.pop("rows", None)
_v = rows if rows is not None else _v
if _v is not None:
self["rows"] = _v
_v = arg.pop("subplots", None)
_v = subplots if subplots is not None else _v
if _v is not None:
self["subplots"] = _v
_v = arg.pop("xaxes", None)
_v = xaxes if xaxes is not None else _v
if _v is not None:
self["xaxes"] = _v
_v = arg.pop("xgap", None)
_v = xgap if xgap is not None else _v
if _v is not None:
self["xgap"] = _v
_v = arg.pop("xside", None)
_v = xside if xside is not None else _v
if _v is not None:
self["xside"] = _v
_v = arg.pop("yaxes", None)
_v = yaxes if yaxes is not None else _v
if _v is not None:
self["yaxes"] = _v
_v = arg.pop("ygap", None)
_v = ygap if ygap is not None else _v
if _v is not None:
self["ygap"] = _v
_v = arg.pop("yside", None)
_v = yside if yside is not None else _v
if _v is not None:
self["yside"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
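Each `_v = arg.pop(...)` block above applies the same rule: an explicit keyword argument overrides the matching entry in `arg`, and properties that resolve to `None` are dropped. A standalone sketch of that merge rule (hypothetical helper, not part of plotly):

```python
def merge_properties(arg, **kwargs):
    # Explicit keyword arguments win over entries in `arg`; properties
    # that resolve to None are skipped, mirroring the repeated
    # `_v = arg.pop(...)` pattern in Grid.__init__.
    arg = dict(arg or {})
    out = {}
    for name, explicit in kwargs.items():
        popped = arg.pop(name, None)
        value = explicit if explicit is not None else popped
        if value is not None:
            out[name] = value
    return out
```

In the real constructor any keys left in `arg` afterwards are handed to `_process_kwargs` rather than discarded.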
# File: src/pyrocksdb/__init__.py (repo: jdp5/python-rocksdb, MIT)
from .pyrocksdb import *
# File: whatsapp_tracker/mixins/selenium_urls_mixin.py (repo: itay-bardugo/whatsapp_tracker, MIT)
class SeleniumUrlsMixin:
    @classmethod
    def open_url(cls, driver, url):
        return driver.get(url)
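The mixin can be smoke-tested without selenium by passing any object that exposes `get` (the `StubDriver` below is invented for the demo; note that selenium's real `WebDriver.get` returns `None`, while the stub returns the URL so the call is observable):

```python
class SeleniumUrlsMixin:
    @classmethod
    def open_url(cls, driver, url):
        return driver.get(url)


class StubDriver:
    """Records every URL instead of driving a browser."""
    def __init__(self):
        self.visited = []

    def get(self, url):
        self.visited.append(url)
        return url


driver = StubDriver()
result = SeleniumUrlsMixin.open_url(driver, "http://example.com")
```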
# File: named.py (repo: adamsmd/fork-Inline-Python, Artistic-2.0)
from logging import warn


def test_named(a, b):
    return a + b


def test_kwargs(**kwargs):
    return len(kwargs)
# File: tests/unit/states/test_ipset.py (repo: byteskeptical/salt, Apache-2.0)
# -*- coding: utf-8 -*-
'''
:codeauthor: Jayesh Kariya <jayeshk@saltstack.com>
'''
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import skipIf, TestCase
from tests.support.mock import (
NO_MOCK,
NO_MOCK_REASON,
MagicMock,
call,
patch)
# Import Salt Libs
import salt.states.ipset as ipset
@skipIf(NO_MOCK, NO_MOCK_REASON)
class IpsetSetPresentTestCase(TestCase, LoaderModuleMockMixin):
'''
    Test cases for salt.states.ipset.set_present
'''
fake_name = 'fake_ipset'
fake_set_type = {'bitmap': '192.168.0.3'}
def setup_loader_modules(self):
return {ipset: {}}
def _runner(self, expected_ret, test=False, check_set=False, new_set=None,
new_set_assertion=True):
mock_check_set = MagicMock(return_value=check_set)
mock_new_set = MagicMock() if new_set is None else MagicMock(return_value=new_set)
with patch.dict(ipset.__salt__, {'ipset.check_set': mock_check_set,
'ipset.new_set': mock_new_set}):
with patch.dict(ipset.__opts__, {'test': test}):
actual_ret = ipset.set_present(self.fake_name, self.fake_set_type)
mock_check_set.assert_called_once_with(self.fake_name)
if new_set_assertion:
mock_new_set.assert_called_once_with(self.fake_name, self.fake_set_type, 'ipv4')
else:
self.assertTrue(mock_new_set.call_count == 0)
self.assertDictEqual(actual_ret, expected_ret)
def test_already_exists(self):
'''
Test to verify the chain exists when it already exists.
'''
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset set {0} already exists for ipv4'.format(self.fake_name),
'changes': {}}
self._runner(ret, check_set=True, new_set_assertion=False)
def test_needs_update_test_mode(self):
'''
Test to verify that detects need for update but doesn't apply when in test mode.
'''
ret = {'name': self.fake_name,
'result': None,
'comment': 'ipset set {0} would be added for ipv4'.format(self.fake_name),
'changes': {}}
self._runner(ret, test=True, new_set_assertion=False)
def test_creates_set(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset set {0} created successfully for ipv4'.format(self.fake_name),
'changes': {'locale': self.fake_name}}
self._runner(ret, new_set=True)
def test_create_fails(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'Failed to create set {0} for ipv4: '.format(self.fake_name),
'changes': {}}
self._runner(ret, new_set='')
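The mocking pattern these `_runner` helpers rely on — `patch.dict` to swap `MagicMock` callables into a loader dict, followed by call assertions — works the same outside salt. A minimal stdlib-only illustration (the `registry` dict and its key are invented for the demo):

```python
from unittest.mock import MagicMock, patch

# `registry` stands in for the ipset.__salt__ dict that the real tests patch.
registry = {"ipset.check_set": None}
mock_check = MagicMock(return_value=True)

with patch.dict(registry, {"ipset.check_set": mock_check}):
    # Inside the context the mock is wired in, just like in _runner().
    inside = registry["ipset.check_set"]("fake_ipset")

# On exit patch.dict restores the original mapping.
restored = registry["ipset.check_set"]
mock_check.assert_called_once_with("fake_ipset")
```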
@skipIf(NO_MOCK, NO_MOCK_REASON)
class IpsetSetAbsentTestCase(TestCase, LoaderModuleMockMixin):
'''
    Test cases for salt.states.ipset.set_absent
'''
fake_name = 'fake_ipset'
fake_set_type = {'bitmap': '192.168.0.3'}
def setup_loader_modules(self):
return {ipset: {}}
def _runner(self, expected_ret, test=False, check_set=True, delete_set='',
flush_assertion=False, delete_set_assertion=False):
mock_check_set = MagicMock(return_value=check_set)
mock_flush = MagicMock()
mock_delete_set = MagicMock() if delete_set is None else MagicMock(return_value=delete_set)
with patch.dict(ipset.__opts__, {'test': test}):
with patch.dict(ipset.__salt__, {'ipset.check_set': mock_check_set,
'ipset.flush': mock_flush,
'ipset.delete_set': mock_delete_set}):
actual_ret = ipset.set_absent(self.fake_name)
mock_check_set.assert_called_once_with(self.fake_name, 'ipv4')
if flush_assertion:
mock_flush.assert_called_once_with(self.fake_name, 'ipv4')
else:
self.assertTrue(mock_flush.call_count == 0)
if delete_set_assertion:
mock_delete_set.assert_called_once_with(self.fake_name, 'ipv4')
else:
self.assertTrue(mock_delete_set.call_count == 0)
self.assertDictEqual(actual_ret, expected_ret)
def test_already_absent(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset set {0} for ipv4 is already absent'.format(self.fake_name),
'changes': {}}
self._runner(ret, check_set=False, delete_set=None)
def test_remove_test_mode(self):
ret = {'name': self.fake_name,
'result': None,
'comment': 'ipset set {0} for ipv4 would be removed'.format(self.fake_name),
'changes': {}}
self._runner(ret, test=True, delete_set=None)
def test_remove_fails(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'Failed to delete set {0} for ipv4: '.format(self.fake_name),
'changes': {}}
self._runner(ret, flush_assertion=True, delete_set_assertion=True)
def test_remove_success(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset set {0} deleted successfully for family ipv4'.format(self.fake_name),
'changes': {'locale': 'fake_ipset'}}
self._runner(ret, delete_set=True, flush_assertion=True, delete_set_assertion=True)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class IpsetPresentTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.states.ipset.present
'''
fake_name = 'fake_ipset'
fake_entries = ['192.168.0.3', '192.168.1.3']
def setup_loader_modules(self):
return {ipset: {}}
def _runner(self, expected_ret, test=False, check=False, add=False,
add_assertion=False):
mock_check = MagicMock(return_value=check)
mock_add = MagicMock(return_value=add)
with patch.dict(ipset.__opts__, {'test': test}):
with patch.dict(ipset.__salt__, {'ipset.check': mock_check,
'ipset.add': mock_add}):
actual_ret = ipset.present(self.fake_name, self.fake_entries, set_name=self.fake_name)
mock_check.assert_has_calls([call(self.fake_name, e, 'ipv4') for e in self.fake_entries], any_order=True)
if add_assertion:
expected_calls = [call(self.fake_name, e, 'ipv4', set_name=self.fake_name) for e in self.fake_entries]
if add is not True:
# if the add fails, then it will only get called once.
expected_calls = expected_calls[:1]
mock_add.assert_has_calls(expected_calls, any_order=True)
else:
self.assertTrue(mock_add.call_count == 0)
self.assertDictEqual(actual_ret, expected_ret)
def test_entries_already_present(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'entry for 192.168.0.3 already in set {0} for ipv4\n'
'entry for 192.168.1.3 already in set {0} for ipv4\n'
''.format(self.fake_name),
'changes': {}}
self._runner(ret, check=True)
def test_in_test_mode(self):
ret = {'name': self.fake_name,
'result': None,
'comment': 'entry 192.168.0.3 would be added to set {0} for family ipv4\n'
'entry 192.168.1.3 would be added to set {0} for family ipv4\n'
''.format(self.fake_name),
'changes': {}}
self._runner(ret, test=True)
def test_add_fails(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'Failed to add to entry 192.168.1.3 to set {0} for family ipv4.\n'
'Error'.format(self.fake_name),
'changes': {}}
self._runner(ret, add='Error', add_assertion=True)
def test_success(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'entry 192.168.0.3 added to set {0} for family ipv4\n'
'entry 192.168.1.3 added to set {0} for family ipv4\n'
''.format(self.fake_name),
'changes': {'locale': 'fake_ipset'}}
self._runner(ret, add='worked', add_assertion=True)
def test_missing_entry(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'ipset entry must be specified',
'changes': {}}
mock = MagicMock(return_value=True)
with patch.dict(ipset.__salt__, {'ipset.check': mock}):
self.assertDictEqual(ipset.present(self.fake_name), ret)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class IpsetAbsentTestCase(TestCase, LoaderModuleMockMixin):
'''
    Test cases for salt.states.ipset.absent
'''
fake_name = 'fake_ipset'
fake_entries = ['192.168.0.3', '192.168.1.3']
def setup_loader_modules(self):
return {ipset: {}}
def _runner(self, expected_ret, test=False, check=False, delete=False,
delete_assertion=False):
mock_check = MagicMock(return_value=check)
mock_delete = MagicMock(return_value=delete)
with patch.dict(ipset.__opts__, {'test': test}):
with patch.dict(ipset.__salt__, {'ipset.check': mock_check,
'ipset.delete': mock_delete}):
actual_ret = ipset.absent(self.fake_name, self.fake_entries, set_name=self.fake_name)
mock_check.assert_has_calls([call(self.fake_name, e, 'ipv4') for e in self.fake_entries], any_order=True)
if delete_assertion:
expected_calls = [call(self.fake_name, e, 'ipv4', set_name=self.fake_name) for e in self.fake_entries]
if delete is not True:
expected_calls = expected_calls[:1]
mock_delete.assert_has_calls(expected_calls, any_order=True)
else:
self.assertTrue(mock_delete.call_count == 0)
self.assertDictEqual(actual_ret, expected_ret)
def test_already_absent(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset entry for 192.168.0.3 not present in set {0} for ipv4\n'
'ipset entry for 192.168.1.3 not present in set {0} for ipv4\n'
''.format(self.fake_name),
'changes': {}}
self._runner(ret)
def test_in_test_mode(self):
ret = {'name': self.fake_name,
'result': None,
'comment': 'ipset entry 192.168.0.3 would be removed from set {0} for ipv4\n'
'ipset entry 192.168.1.3 would be removed from set {0} for ipv4\n'
''.format(self.fake_name),
'changes': {}}
self._runner(ret, test=True, check=True)
def test_del_fails(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'Failed to delete ipset entry from set {0} for ipv4. Attempted entry was 192.168.1.3.\n'
'Error\n'.format(self.fake_name),
'changes': {}}
self._runner(ret, check=True, delete='Error', delete_assertion=True)
def test_success(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'ipset entry 192.168.0.3 removed from set {0} for ipv4\n'
'ipset entry 192.168.1.3 removed from set {0} for ipv4\n'
''.format(self.fake_name),
'changes': {'locale': 'fake_ipset'}}
self._runner(ret, check=True, delete='worked', delete_assertion=True)
def test_absent(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'ipset entry must be specified',
'changes': {}}
mock = MagicMock(return_value=True)
with patch.dict(ipset.__salt__, {'ipset.check': mock}):
self.assertDictEqual(ipset.absent(self.fake_name), ret)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class IpsetFlushTestCase(TestCase, LoaderModuleMockMixin):
'''
    Test cases for salt.states.ipset.flush
'''
fake_name = 'fake_ipset'
def setup_loader_modules(self):
return {ipset: {}}
def _runner(self, expected_ret, test=False, check_set=True, flush=True,
flush_assertion=True):
mock_check_set = MagicMock(return_value=check_set)
mock_flush = MagicMock(return_value=flush)
with patch.dict(ipset.__opts__, {'test': test}):
with patch.dict(ipset.__salt__, {'ipset.check_set': mock_check_set,
'ipset.flush': mock_flush}):
actual_ret = ipset.flush(self.fake_name)
mock_check_set.assert_called_once_with(self.fake_name)
if flush_assertion:
mock_flush.assert_called_once_with(self.fake_name, 'ipv4')
else:
self.assertTrue(mock_flush.call_count == 0)
self.assertDictEqual(actual_ret, expected_ret)
def test_no_set(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'ipset set {0} does not exist for ipv4'.format(self.fake_name),
'changes': {}}
self._runner(ret, check_set=False, flush_assertion=False)
def test_in_test_mode(self):
ret = {'name': self.fake_name,
'result': None,
'comment': 'ipset entries in set {0} for ipv4 would be flushed'.format(self.fake_name),
'changes': {}}
self._runner(ret, test=True, flush_assertion=False)
def test_flush_fails(self):
ret = {'name': self.fake_name,
'result': False,
'comment': 'Failed to flush ipset entries from set {0} for ipv4'.format(self.fake_name),
'changes': {}}
self._runner(ret, flush=False)
def test_success(self):
ret = {'name': self.fake_name,
'result': True,
'comment': 'Flushed ipset entries from set {0} for ipv4'.format(self.fake_name),
'changes': {'locale': 'fake_ipset'}}
self._runner(ret)
# File: Wrappers/Python/ccpi/filters/regularisers.py (repo: TomasKulhanek/CCPi-Regularisation-Toolkit, Apache-2.0)
"""
script which assigns a proper device core function based on a flag ('cpu' or 'gpu')
"""
from ccpi.filters.cpu_regularisers import TV_ROF_CPU, TV_FGP_CPU, TV_SB_CPU, dTV_FGP_CPU, TNV_CPU, NDF_CPU, Diff4th_CPU, TGV_CPU, LLT_ROF_CPU, PATCHSEL_CPU, NLTV_CPU
try:
from ccpi.filters.gpu_regularisers import TV_ROF_GPU, TV_FGP_GPU, TV_SB_GPU, dTV_FGP_GPU, NDF_GPU, Diff4th_GPU, TGV_GPU, LLT_ROF_GPU, PATCHSEL_GPU
gpu_enabled = True
except ImportError:
gpu_enabled = False
from ccpi.filters.cpu_regularisers import NDF_INPAINT_CPU, NVM_INPAINT_CPU
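Every wrapper below repeats the same cpu/gpu routing; the shared logic can be factored into one standalone dispatcher (a hypothetical refactor sketch, not part of the toolkit — the `cpu_fn`/`gpu_fn` callables stand in for the compiled extensions):

```python
def dispatch(device, gpu_enabled, cpu_fn, gpu_fn, *args):
    # Route to the cpu or gpu implementation exactly as the wrappers
    # in this module do, raising the same errors for bad devices.
    if device == 'cpu':
        return cpu_fn(*args)
    elif device == 'gpu' and gpu_enabled:
        return gpu_fn(*args)
    else:
        if not gpu_enabled and device == 'gpu':
            raise ValueError('GPU is not available')
        raise ValueError('Unknown device {0}. Expecting gpu or cpu'.format(device))
```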
def ROF_TV(inputData, regularisation_parameter, iterations,
time_marching_parameter,device='cpu'):
if device == 'cpu':
return TV_ROF_CPU(inputData,
regularisation_parameter,
iterations,
time_marching_parameter)
elif device == 'gpu' and gpu_enabled:
return TV_ROF_GPU(inputData,
regularisation_parameter,
iterations,
time_marching_parameter)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
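What the TV_ROF_* cores compute can be illustrated in one dimension with plain Python: explicit time-marching on a smoothed ROF energy. This is a toy sketch under stated assumptions — the sign conventions, boundary handling, and smoothing constant `eps` are choices of this example, not the toolkit's C/CUDA implementation:

```python
def rof_tv_1d(f, regularisation_parameter, iterations, time_marching_parameter,
              eps=1e-8):
    # Each step adds a smoothed curvature term div(u'/|u'|) plus a
    # data-fidelity pull back towards the input signal f.
    u = list(f)
    n = len(u)
    for _ in range(iterations):
        nxt = []
        for i in range(n):
            left = u[i - 1] if i > 0 else u[i]        # reflecting boundaries
            right = u[i + 1] if i < n - 1 else u[i]
            curv = ((right - u[i]) / (abs(right - u[i]) + eps)
                    - (u[i] - left) / (abs(u[i] - left) + eps))
            fidelity = (f[i] - u[i]) / regularisation_parameter
            nxt.append(u[i] + time_marching_parameter * (curv + fidelity))
        u = nxt
    return u


def total_variation(v):
    return sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))
```

Running it on an oscillating signal shows the expected edge-preserving smoothing: the total variation of the output drops below that of the input.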
def FGP_TV(inputData, regularisation_parameter,iterations,
tolerance_param, methodTV, nonneg, printM, device='cpu'):
if device == 'cpu':
return TV_FGP_CPU(inputData,
regularisation_parameter,
iterations,
tolerance_param,
methodTV,
nonneg,
printM)
elif device == 'gpu' and gpu_enabled:
return TV_FGP_GPU(inputData,
regularisation_parameter,
iterations,
tolerance_param,
methodTV,
nonneg,
printM)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def SB_TV(inputData, regularisation_parameter, iterations,
tolerance_param, methodTV, printM, device='cpu'):
if device == 'cpu':
return TV_SB_CPU(inputData,
regularisation_parameter,
iterations,
tolerance_param,
methodTV,
printM)
elif device == 'gpu' and gpu_enabled:
return TV_SB_GPU(inputData,
regularisation_parameter,
iterations,
tolerance_param,
methodTV,
printM)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def FGP_dTV(inputData, refdata, regularisation_parameter, iterations,
tolerance_param, eta_const, methodTV, nonneg, printM, device='cpu'):
if device == 'cpu':
return dTV_FGP_CPU(inputData,
refdata,
regularisation_parameter,
iterations,
tolerance_param,
eta_const,
methodTV,
nonneg,
printM)
elif device == 'gpu' and gpu_enabled:
return dTV_FGP_GPU(inputData,
refdata,
regularisation_parameter,
iterations,
tolerance_param,
eta_const,
methodTV,
nonneg,
printM)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def TNV(inputData, regularisation_parameter, iterations, tolerance_param):
return TNV_CPU(inputData,
regularisation_parameter,
iterations,
tolerance_param)
def NDF(inputData, regularisation_parameter, edge_parameter, iterations,
time_marching_parameter, penalty_type, device='cpu'):
if device == 'cpu':
return NDF_CPU(inputData,
regularisation_parameter,
edge_parameter,
iterations,
time_marching_parameter,
penalty_type)
elif device == 'gpu' and gpu_enabled:
return NDF_GPU(inputData,
regularisation_parameter,
edge_parameter,
iterations,
time_marching_parameter,
penalty_type)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def DIFF4th(inputData, regularisation_parameter, edge_parameter, iterations,
time_marching_parameter, device='cpu'):
if device == 'cpu':
return Diff4th_CPU(inputData,
regularisation_parameter,
edge_parameter,
iterations,
time_marching_parameter)
elif device == 'gpu' and gpu_enabled:
return Diff4th_GPU(inputData,
regularisation_parameter,
edge_parameter,
iterations,
time_marching_parameter)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def PatchSelect(inputData, searchwindow, patchwindow, neighbours, edge_parameter, device='cpu'):
if device == 'cpu':
return PATCHSEL_CPU(inputData,
searchwindow,
patchwindow,
neighbours,
edge_parameter)
elif device == 'gpu' and gpu_enabled:
return PATCHSEL_GPU(inputData,
searchwindow,
patchwindow,
neighbours,
edge_parameter)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def NLTV(inputData, H_i, H_j, H_k, Weights, regularisation_parameter, iterations):
return NLTV_CPU(inputData,
H_i,
H_j,
H_k,
Weights,
regularisation_parameter,
iterations)
def TGV(inputData, regularisation_parameter, alpha1, alpha0, iterations,
LipshitzConst, device='cpu'):
if device == 'cpu':
return TGV_CPU(inputData,
regularisation_parameter,
alpha1,
alpha0,
iterations,
LipshitzConst)
elif device == 'gpu' and gpu_enabled:
return TGV_GPU(inputData,
regularisation_parameter,
alpha1,
alpha0,
iterations,
LipshitzConst)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def LLT_ROF(inputData, regularisation_parameterROF, regularisation_parameterLLT, iterations,
time_marching_parameter, device='cpu'):
if device == 'cpu':
return LLT_ROF_CPU(inputData, regularisation_parameterROF, regularisation_parameterLLT, iterations, time_marching_parameter)
elif device == 'gpu' and gpu_enabled:
return LLT_ROF_GPU(inputData, regularisation_parameterROF, regularisation_parameterLLT, iterations, time_marching_parameter)
else:
if not gpu_enabled and device == 'gpu':
raise ValueError ('GPU is not available')
raise ValueError('Unknown device {0}. Expecting gpu or cpu'\
.format(device))
def NDF_INP(inputData, maskData, regularisation_parameter, edge_parameter, iterations,
time_marching_parameter, penalty_type):
return NDF_INPAINT_CPU(inputData, maskData, regularisation_parameter,
edge_parameter, iterations, time_marching_parameter, penalty_type)
def NVM_INP(inputData, maskData, SW_increment, iterations):
return NVM_INPAINT_CPU(inputData, maskData, SW_increment, iterations)
| 41.809302 | 165 | 0.562131 | 835 | 8,989 | 5.815569 | 0.106587 | 0.127883 | 0.131796 | 0.089374 | 0.882619 | 0.865321 | 0.799835 | 0.711285 | 0.652183 | 0.549012 | 0 | 0.003534 | 0.370342 | 8,989 | 214 | 166 | 42.004673 | 0.854417 | 0.009234 | 0 | 0.730392 | 0 | 0 | 0.072825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063725 | false | 0 | 0.019608 | 0.019608 | 0.191176 | 0.044118 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6a4fd33c9ea88f6a6a1c7e5c7c3b7370c41bce6c | 3,469 | py | Python | sudoku/bin/dinkum_sudoku_print_worksheet.py | dinkumsoftware/dinkum | 57fd217b81a5c95b08653977c8df17f7783ae3f6 | [
"Apache-2.0"
] | null | null | null | sudoku/bin/dinkum_sudoku_print_worksheet.py | dinkumsoftware/dinkum | 57fd217b81a5c95b08653977c8df17f7783ae3f6 | [
"Apache-2.0"
] | null | null | null | sudoku/bin/dinkum_sudoku_print_worksheet.py | dinkumsoftware/dinkum | 57fd217b81a5c95b08653977c8df17f7783ae3f6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# dinkum/sudoku/bin/dinkum_sudoku_print_worksheet.py
''' Prints (to stdout) a sudoku board with rows/columns/blocks/cells
labeled with number. An example is shown below:
0 1 2 3 4 5 6 7 8
/---------------------------------------------------------\
|0------------------1------------------2------------------|
|0 1 2 |3 4 5 |6 7 8 ||
0|| | | || | | || | | ||0
|| | | || | | || | | ||
|---------------------------------------------------------|
|9 10 11 |12 13 14 |15 16 17 ||
1|| | | || | | || | | ||1
|| | | || | | || | | ||
|---------------------------------------------------------|
|18 19 20 |21 22 23 |24 25 26 ||
2|| | | || | | || | | ||2
|| | | || | | || | | ||
|---------------------------------------------------------|
|3------------------4------------------5------------------|
|27 28 29 |30 31 32 |33 34 35 ||
3|| | | || | | || | | ||3
|| | | || | | || | | ||
|---------------------------------------------------------|
|36 37 38 |39 40 41 |42 43 44 ||
4|| | | || | | || | | ||4
|| | | || | | || | | ||
|---------------------------------------------------------|
|45 46 47 |48 49 50 |51 52 53 ||
5|| | | || | | || | | ||5
|| | | || | | || | | ||
|---------------------------------------------------------|
|6------------------7------------------8------------------|
|54 55 56 |57 58 59 |60 61 62 ||
6|| | | || | | || | | ||6
|| | | || | | || | | ||
|---------------------------------------------------------|
|63 64 65 |66 67 68 |69 70 71 ||
7|| | | || | | || | | ||7
|| | | || | | || | | ||
|---------------------------------------------------------|
|72 73 74 |75 76 77 |78 79 80 ||
8|| | | || | | || | | ||8
|| | | || | | || | | ||
\---------------------------------------------------------/
0 1 2 3 4 5 6 7 8
'''
# 2019-11-13 tc Initial development
# 2019-11-19 tc refactored replace_substr_at() into dinkum.utils.str_utils.py
# 2019-11-22 tc debugging and putting in block separaters
# 2019-11-23 tc labeling cell# and blk#
# refactored in sudoku.labeled_printer.labeled_print()
# 2019-12-19 tc Renamed dinkum_sudoku_print_worksheet.
# Changed the example
from dinkum.sudoku.labeled_printer import print_labeled_board
if __name__ == "__main__":
    # [] of lines making up output
    # each is NOT terminated by \n
    # With no board supplied, it prints empty board
    # which is what we want for a worksheet
    print_labeled_board()
| 46.878378 | 77 | 0.238109 | 256 | 3,469 | 3.132813 | 0.605469 | 0.05985 | 0.014963 | 0.01995 | 0.03616 | 0.03616 | 0.03616 | 0.03616 | 0.03616 | 0.024938 | 0 | 0.122051 | 0.437878 | 3,469 | 73 | 78 | 47.520548 | 0.289231 | 0.948688 | 0 | 0 | 0 | 0 | 0.053691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
6a7351479c5caf8de5612fd649a9a29fde536572 | 119 | py | Python | Module-2-Python-Basics/Types-And-Variables/print_types.py | CloudSkills/Coding-From-Scratch-Course | 9b2f75f0f39fe3703f6e10b8fed0834078f6cb73 | [
"MIT"
] | null | null | null | Module-2-Python-Basics/Types-And-Variables/print_types.py | CloudSkills/Coding-From-Scratch-Course | 9b2f75f0f39fe3703f6e10b8fed0834078f6cb73 | [
"MIT"
] | null | null | null | Module-2-Python-Basics/Types-And-Variables/print_types.py | CloudSkills/Coding-From-Scratch-Course | 9b2f75f0f39fe3703f6e10b8fed0834078f6cb73 | [
"MIT"
] | null | null | null | one_string = 'one'
one_integer = 1
# Print the type of each variable
print(type(one_string))
print(type(one_integer)) | 17 | 33 | 0.756303 | 20 | 119 | 4.3 | 0.5 | 0.209302 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.134454 | 119 | 7 | 34 | 17 | 0.825243 | 0.260504 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
6a7c6dcefbf57d6c0c7ee1eb769978ed594f80e3 | 32 | py | Python | apps/locations/views/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | null | null | null | apps/locations/views/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | 1 | 2019-09-20T01:33:45.000Z | 2019-09-20T01:33:45.000Z | apps/locations/views/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | null | null | null | from .states import StateViewSet | 32 | 32 | 0.875 | 4 | 32 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
6a7df4904565a1cf44b5a732f09ae3bb7023469c | 1,191 | py | Python | fixtures/pages/login_page.py | SvetaSadikova/qacoursemoodle_ui_test | 08b64a02b770acba5b4330e3ab4856a80b816777 | [
"Apache-2.0"
] | null | null | null | fixtures/pages/login_page.py | SvetaSadikova/qacoursemoodle_ui_test | 08b64a02b770acba5b4330e3ab4856a80b816777 | [
"Apache-2.0"
] | null | null | null | fixtures/pages/login_page.py | SvetaSadikova/qacoursemoodle_ui_test | 08b64a02b770acba5b4330e3ab4856a80b816777 | [
"Apache-2.0"
] | null | null | null | from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webelement import WebElement
from fixtures.models.fake_data import LoginData
from fixtures.pages.base_page import BasePage
class LoginMoodle(BasePage):

    def __password_input(self) -> WebElement:
        return self.find_element_method(locator_name=(By.ID, 'password'), wait_time=3)

    def __login_input(self) -> WebElement:
        return self.find_element_method(locator_name=(By.ID, 'username'))

    def __submit_button(self) -> WebElement:
        return self.find_element_method(locator_name=(By.ID, 'loginbtn'))

    def auth_fun(self, data_for_login: LoginData, is_submit: bool = True):
        """
        Authorization function
        TODO: add a check for is_login
        """
        self.input_data_method(data_for_login.log, locator_name=(By.ID, 'username'))
        self.input_data_method(data_for_login.passw, locator_name=(By.ID, 'password'))
        if is_submit:
            self.click_method(locator_name=(By.ID, 'loginbtn'), wait_time=3)

    def error_text(self):
        error_text = self.find_element_method(locator_name=(By.ID, 'loginerrormessage'))
        return error_text.text
| 37.21875 | 88 | 0.716205 | 158 | 1,191 | 5.107595 | 0.360759 | 0.095415 | 0.112763 | 0.130112 | 0.464684 | 0.387856 | 0.342007 | 0.26518 | 0.22057 | 0.22057 | 0 | 0.002043 | 0.178002 | 1,191 | 31 | 89 | 38.419355 | 0.822268 | 0.04534 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 1 | 0.263158 | false | 0.157895 | 0.210526 | 0.157895 | 0.736842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
6a8669d52bd544262c8ec927af49101590c83288 | 14 | py | Python | abcd.py | salil-gtm/SignDetect | 2ca7d6b2d7b2b615971cf4e743e943915389cbe3 | [
"MIT"
] | 23 | 2018-03-11T07:50:30.000Z | 2021-09-03T11:25:45.000Z | abcd.py | salil-gtm/SignDetect | 2ca7d6b2d7b2b615971cf4e743e943915389cbe3 | [
"MIT"
] | null | null | null | abcd.py | salil-gtm/SignDetect | 2ca7d6b2d7b2b615971cf4e743e943915389cbe3 | [
"MIT"
] | 8 | 2018-03-11T03:15:42.000Z | 2022-02-03T05:01:51.000Z | print 'Hello'; | 14 | 14 | 0.714286 | 2 | 14 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 14 | 1 | 14 | 14 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
6a86ae751bdcc632b652c16fc1bb719b1c89f8bc | 38 | py | Python | mLog/__init__.py | xujiang1/mLog | c1b939040880c9d3f305ea0b61547ce942bdf4c3 | [
"MIT"
] | null | null | null | mLog/__init__.py | xujiang1/mLog | c1b939040880c9d3f305ea0b61547ce942bdf4c3 | [
"MIT"
] | null | null | null | mLog/__init__.py | xujiang1/mLog | c1b939040880c9d3f305ea0b61547ce942bdf4c3 | [
"MIT"
] | null | null | null | #!/usr/local/bin/python
# coding=utf-8 | 19 | 23 | 0.710526 | 7 | 38 | 3.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.052632 | 38 | 2 | 24 | 19 | 0.722222 | 0.921053 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6acafaedb8ac46873ea84b32c9c0288aee5bd982 | 4,015 | py | Python | nba/endpoints/draft.py | rozzac90/new_nba | 87feec5fc8ff6654fdb65229a047e0ff3023a9ff | [
"MIT"
] | 1 | 2017-12-29T05:01:17.000Z | 2017-12-29T05:01:17.000Z | nba/endpoints/draft.py | rozzac90/new_nba | 87feec5fc8ff6654fdb65229a047e0ff3023a9ff | [
"MIT"
] | 2 | 2017-10-26T07:47:15.000Z | 2020-04-18T12:24:36.000Z | nba/endpoints/draft.py | rozzac90/nba | 87feec5fc8ff6654fdb65229a047e0ff3023a9ff | [
"MIT"
] | null | null | null | from nba import enums
from nba.utils import clean_locals
from nba.endpoints.baseendpoint import BaseEndpoint
class Draft(BaseEndpoint):

    def combine_drill_results(
        self, league_id=enums.LeagueID.Default, season_year="2016-17"
    ):
        """
        Combine drill results for a given year.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :param season_year: draft season.
        :type season_year: str('%Y-%y')
        :returns: Combine drill results by player.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "draftcombinedrillresults"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df

    def combine_stationary_shooting(
        self, league_id=enums.LeagueID.Default, season_year="2016-17"
    ):
        """
        Moving shooting scores broken down by movement type.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :param season_year: draft season.
        :type season_year: str('%Y-%y')
        :returns: Movement shooting results by player.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "draftcombinenonstationaryshooting"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df

    def combine_player_anthropology(
        self, league_id=enums.LeagueID.Default, season_year="2016-17"
    ):
        """
        Detailed breakdown of players' measurements and physical stats.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :param season_year: draft season.
        :type season_year: str('%Y-%y')
        :returns: Measurements and physical information by player.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "draftcombineplayeranthro"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df

    def combine_spot_shooting(
        self, league_id=enums.LeagueID.Default, season_year="2016-17"
    ):
        """
        Get raw and pct shooting results from draft combine for a given year.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :param season_year: draft season.
        :type season_year: str('%Y-%y')
        :returns: Combine shooting results by player.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "draftcombinespotshooting"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df

    def combine_stats(self, league_id=enums.LeagueID.Default, season_year="2016-17"):
        """
        Get combine results for a draft year.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :param season_year: draft season.
        :type season_year: str('%Y-%y')
        :returns: Combine results by player.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "draftcombinestats"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df

    def history(self, league_id=enums.LeagueID.Default):
        """
        Breakdown of pick number and player data for historical drafts.

        :param league_id: define league to look at, nba.
        :type league_id: nba.enums.LeagueID
        :returns: Player pick numbers for historic drafts.
        :rtype: Dataframe
        """
        params = clean_locals(locals())
        endpoint = "drafthistory"
        r = self.request(endpoint, params)
        df = self.process_response(r, 0, "resultSets")
        return df
| 33.458333 | 85 | 0.617435 | 463 | 4,015 | 5.235421 | 0.183585 | 0.059406 | 0.029703 | 0.042079 | 0.737624 | 0.737624 | 0.724422 | 0.705858 | 0.705858 | 0.683993 | 0 | 0.012596 | 0.288169 | 4,015 | 119 | 86 | 33.739496 | 0.835549 | 0.389539 | 0 | 0.666667 | 0 | 0 | 0.115307 | 0.05287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6ace578ddd98762078d76e9613d5f8580399c671 | 79 | py | Python | modulation_utils.py | KarlXing/NeuromodulationDQN | 485e7d304f5c534ce25200ce868cb3ca987537f2 | [
"MIT"
] | null | null | null | modulation_utils.py | KarlXing/NeuromodulationDQN | 485e7d304f5c534ce25200ce868cb3ca987537f2 | [
"MIT"
] | null | null | null | modulation_utils.py | KarlXing/NeuromodulationDQN | 485e7d304f5c534ce25200ce868cb3ca987537f2 | [
"MIT"
] | null | null | null | import torch
def tanh_beta(x, beta):
    x = x * beta
    return torch.tanh(x)
6ae9db0817b79df90fec0d99d7f7bdecb9ad2ea6 | 34,682 | py | Python | tests20/python_client/testcases/test_insert.py | chriswarnock/milvus | ff4754a638a491adf7eca9952e1057272ba5d1a4 | [
"Apache-2.0"
] | null | null | null | tests20/python_client/testcases/test_insert.py | chriswarnock/milvus | ff4754a638a491adf7eca9952e1057272ba5d1a4 | [
"Apache-2.0"
] | 1 | 2020-03-10T12:13:12.000Z | 2020-03-10T12:13:12.000Z | tests20/python_client/testcases/test_insert.py | chriswarnock/milvus | ff4754a638a491adf7eca9952e1057272ba5d1a4 | [
"Apache-2.0"
] | 1 | 2020-03-13T15:04:22.000Z | 2020-03-13T15:04:22.000Z | import threading
import numpy as np
import pandas as pd
import pytest
from pymilvus_orm import Index
from base.client_base import TestcaseBase
from utils.util_log import test_log as log
from common import common_func as cf
from common import common_type as ct
from common.common_type import CaseLabel, CheckTasks
prefix = "insert"
exp_name = "name"
exp_schema = "schema"
exp_num = "num_entities"
exp_primary = "primary"
default_schema = cf.gen_default_collection_schema()
default_binary_schema = cf.gen_default_binary_collection_schema()
default_index_params = {"index_type": "IVF_SQ8", "metric_type": "L2", "params": {"nlist": 64}}
default_binary_index_params = {"index_type": "BIN_IVF_FLAT", "metric_type": "JACCARD", "params": {"nlist": 64}}
class TestInsertParams(TestcaseBase):
""" Test case of Insert interface """
@pytest.fixture(scope="function", params=ct.get_invalid_strs)
def get_non_data_type(self, request):
if isinstance(request.param, list) or request.param is None:
pytest.skip("list and None type is valid data type")
yield request.param
@pytest.fixture(scope="module", params=ct.get_invalid_strs)
def get_invalid_field_name(self, request):
if isinstance(request.param, (list, dict)):
pytest.skip()
yield request.param
@pytest.mark.tags(CaseLabel.L0)
def test_insert_dataframe_data(self):
"""
target: test insert DataFrame data
method: 1.create 2.insert dataframe data
expected: assert num entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(ct.default_nb)
mutation_res, _ = collection_w.insert(data=df)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_insert_list_data(self):
"""
target: test insert list-like data
method: 1.create 2.insert list data
expected: assert num entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
data = cf.gen_default_list_data(ct.default_nb)
mutation_res, _ = collection_w.insert(data=data)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == data[0]
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_non_data_type(self, get_non_data_type):
"""
target: test insert with non-dataframe, non-list data
method: insert with data (non-dataframe and non-list type)
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
error = {ct.err_code: 0, ct.err_msg: "Data type is not support"}
collection_w.insert(data=get_non_data_type, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L0)
@pytest.mark.parametrize("data", [[], pd.DataFrame()])
def test_insert_empty_data(self, data):
"""
target: test insert empty data
method: insert empty
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
error = {ct.err_code: 0, ct.err_msg: "The data fields number is not match with schema"}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_dataframe_only_columns(self):
"""
target: test insert with dataframe just columns
method: dataframe just have columns
expected: num entities is zero
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
columns = [ct.default_int64_field_name, ct.default_float_vec_field_name]
df = pd.DataFrame(columns=columns)
error = {ct.err_code: 0, ct.err_msg: "Cannot infer schema from empty dataframe"}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_empty_field_name_dataframe(self):
"""
target: test insert empty field name df
method: dataframe with empty column
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(10)
df.rename(columns={ct.default_int64_field_name: ' '}, inplace=True)
error = {ct.err_code: 0, ct.err_msg: "The types of schema and data do not match"}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_invalid_field_name_dataframe(self, get_invalid_field_name):
"""
target: test insert with invalid dataframe data
method: insert with invalid field name dataframe
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(10)
df.rename(columns={ct.default_int64_field_name: get_invalid_field_name}, inplace=True)
error = {ct.err_code: 0, ct.err_msg: "The types of schema and data do not match"}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
def test_insert_dataframe_index(self):
"""
target: test insert dataframe with index
method: insert dataframe with index
expected: todo
"""
pass
@pytest.mark.tags(CaseLabel.L1)
def test_insert_none(self):
"""
target: test insert None
method: data is None
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
mutation_res, _ = collection_w.insert(data=None)
assert mutation_res.insert_count == 0
assert len(mutation_res.primary_keys) == 0
assert collection_w.is_empty
assert collection_w.num_entities == 0
@pytest.mark.tags(CaseLabel.L1)
def test_insert_numpy_data(self):
"""
target: test insert numpy.ndarray data
method: 1.create by schema 2.insert data
expected: assert num_entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
data = cf.gen_numpy_data(nb=10)
error = {ct.err_code: 0, ct.err_msg: "Data type not support numpy.ndarray"}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_binary_dataframe(self):
"""
target: test insert binary dataframe
method: 1. create by schema 2. insert dataframe
expected: assert num_entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, schema=default_binary_schema)
df, _ = cf.gen_default_binary_dataframe_data(ct.default_nb)
mutation_res, _ = collection_w.insert(data=df)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_insert_binary_data(self):
"""
target: test insert list-like binary data
method: 1. create by schema 2. insert data
expected: assert num_entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, schema=default_binary_schema)
data, _ = cf.gen_default_binary_list_data(ct.default_nb)
mutation_res, _ = collection_w.insert(data=data)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == data[0]
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_insert_single(self):
"""
target: test insert single
method: insert one entity
expected: verify num
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
data = cf.gen_default_list_data(nb=1)
mutation_res, _ = collection_w.insert(data=data)
assert mutation_res.insert_count == 1
assert mutation_res.primary_keys == data[0]
assert collection_w.num_entities == 1
@pytest.mark.tags(CaseLabel.L1)
@pytest.mark.xfail(reason="exception not MilvusException")
def test_insert_dim_not_match(self):
"""
target: test insert with not match dim
method: insert data dim not equal to schema dim
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
dim = 129
df = cf.gen_default_dataframe_data(ct.default_nb, dim=dim)
error = {ct.err_code: 1,
ct.err_msg: f'Collection field dim is {ct.default_dim}, but entities field dim is {dim}'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
@pytest.mark.xfail(reason="exception not MilvusException")
def test_insert_binary_dim_not_match(self):
"""
target: test insert binary with dim not match
method: insert binary data dim not equal to schema
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, schema=default_binary_schema)
dim = 120
df, _ = cf.gen_default_binary_dataframe_data(ct.default_nb, dim=dim)
error = {ct.err_code: 1,
ct.err_msg: f'Collection field dim is {ct.default_dim}, but entities field dim is {dim}'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_field_name_not_match(self):
"""
target: test insert field name not match
method: data field name not match schema
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(10)
df.rename(columns={ct.default_float_field_name: "int"}, inplace=True)
error = {ct.err_code: 0, ct.err_msg: 'The types of schema and data do not match'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_field_value_not_match(self):
"""
target: test insert data value not match
method: insert data value type not match schema
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 10
df = cf.gen_default_dataframe_data(nb)
new_float_value = pd.Series(data=[float(i) for i in range(nb)], dtype="float64")
df.iloc[:, 1] = new_float_value
error = {ct.err_code: 0, ct.err_msg: 'The types of schema and data do not match'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_value_less(self):
"""
target: test insert value less than other
method: int field value less than vec-field value
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 10
int_values = [i for i in range(nb - 1)]
float_values = [np.float32(i) for i in range(nb)]
float_vec_values = cf.gen_vectors(nb, ct.default_dim)
data = [int_values, float_values, float_vec_values]
error = {ct.err_code: 0, ct.err_msg: 'Arrays must all be same length.'}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_vector_value_less(self):
"""
target: test insert vector value less than other
method: vec field value less than int field
expected: todo
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 10
int_values = [i for i in range(nb)]
float_values = [np.float32(i) for i in range(nb)]
float_vec_values = cf.gen_vectors(nb - 1, ct.default_dim)
data = [int_values, float_values, float_vec_values]
error = {ct.err_code: 0, ct.err_msg: 'Arrays must all be same length.'}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_fields_more(self):
"""
target: test insert with fields more
method: field more than schema fields
expected: todo
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(ct.default_nb)
new_values = [i for i in range(ct.default_nb)]
df.insert(3, 'new', new_values)
error = {ct.err_code: 0, ct.err_msg: 'The data fields number is not match with schema.'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_fields_less(self):
"""
target: test insert with fields less
method: fields less than schema fields
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
df = cf.gen_default_dataframe_data(ct.default_nb)
df.drop(ct.default_float_vec_field_name, axis=1, inplace=True)
error = {ct.err_code: 0, ct.err_msg: 'The data fields number is not match with schema.'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_list_order_inconsistent_schema(self):
"""
target: test insert data fields order inconsistent with schema
method: insert list data, data fields order inconsistent with schema
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 10
int_values = [i for i in range(nb)]
float_values = [np.float32(i) for i in range(nb)]
float_vec_values = cf.gen_vectors(nb, ct.default_dim)
data = [float_values, int_values, float_vec_values]
error = {ct.err_code: 0, ct.err_msg: 'The types of schema and data do not match'}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_dataframe_order_inconsistent_schema(self):
"""
target: test insert with dataframe fields inconsistent with schema
method: insert dataframe, and fields order inconsistent with schema
        expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 10
int_values = pd.Series(data=[i for i in range(nb)])
float_values = pd.Series(data=[float(i) for i in range(nb)], dtype="float32")
float_vec_values = cf.gen_vectors(nb, ct.default_dim)
df = pd.DataFrame({
ct.default_float_field_name: float_values,
ct.default_float_vec_field_name: float_vec_values,
ct.default_int64_field_name: int_values
})
error = {ct.err_code: 0, ct.err_msg: 'The types of schema and data do not match'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_inconsistent_data(self):
"""
target: test insert with inconsistent data
        method: insert data where the same column contains values of different types
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
data = cf.gen_default_list_data(nb=100)
data[0][1] = 1.0
error = {ct.err_code: 0, ct.err_msg: "The data in the same column must be of the same type"}
collection_w.insert(data, check_task=CheckTasks.err_res, check_items=error)


class TestInsertOperation(TestcaseBase):
"""
******************************************************************
The following cases are used to test insert interface operations
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L1)
def test_insert_without_connection(self):
"""
target: test insert without connection
method: insert after remove connection
expected: raise exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
self.connection_wrap.remove_connection(ct.default_alias)
res_list, _ = self.connection_wrap.list_connections()
assert ct.default_alias not in res_list
data = cf.gen_default_list_data(10)
error = {ct.err_code: 0, ct.err_msg: 'should create connect first'}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_insert_drop_collection(self):
"""
target: test insert and drop
method: insert data and drop collection
        expected: verify the collection no longer exists after drop
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
collection_list, _ = self.utility_wrap.list_collections()
assert collection_w.name in collection_list
df = cf.gen_default_dataframe_data(ct.default_nb)
collection_w.insert(data=df)
collection_w.drop()
collection_list, _ = self.utility_wrap.list_collections()
assert collection_w.name not in collection_list
@pytest.mark.tags(CaseLabel.L1)
def test_insert_create_index(self):
"""
target: test insert and create index
method: 1. insert 2. create index
expected: verify num entities and index
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(ct.default_nb)
collection_w.insert(data=df)
assert collection_w.num_entities == ct.default_nb
collection_w.create_index(ct.default_float_vec_field_name, default_index_params)
assert collection_w.has_index()
index, _ = collection_w.index()
assert index == Index(collection_w.collection, ct.default_float_vec_field_name, default_index_params)
assert collection_w.indexes[0] == index
@pytest.mark.tags(CaseLabel.L1)
def test_insert_after_create_index(self):
"""
target: test insert after create index
method: 1. create index 2. insert data
expected: verify index and num entities
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
collection_w.create_index(ct.default_float_vec_field_name, default_index_params)
assert collection_w.has_index()
index, _ = collection_w.index()
assert index == Index(collection_w.collection, ct.default_float_vec_field_name, default_index_params)
assert collection_w.indexes[0] == index
df = cf.gen_default_dataframe_data(ct.default_nb)
collection_w.insert(data=df)
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_insert_binary_after_index(self):
"""
target: test insert binary after index
method: 1.create index 2.insert binary data
expected: 1.index ok 2.num entities correct
"""
schema = cf.gen_default_binary_collection_schema()
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix), schema=schema)
collection_w.create_index(ct.default_binary_vec_field_name, default_binary_index_params)
assert collection_w.has_index()
index, _ = collection_w.index()
assert index == Index(collection_w.collection, ct.default_binary_vec_field_name, default_binary_index_params)
assert collection_w.indexes[0] == index
df, _ = cf.gen_default_binary_dataframe_data(ct.default_nb)
collection_w.insert(data=df)
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_create_index(self):
"""
target: test create index in auto_id=True collection
method: 1.create auto_id=True collection and insert 2.create index
expected: index correct
"""
schema = cf.gen_default_collection_schema(auto_id=True)
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix), schema=schema)
df = cf.gen_default_dataframe_data(ct.default_nb)
df.drop(ct.default_int64_field_name, axis=1, inplace=True)
mutation_res, _ = collection_w.insert(data=df)
assert cf._check_primary_keys(mutation_res.primary_keys, ct.default_nb)
assert collection_w.num_entities == ct.default_nb
# create index
collection_w.create_index(ct.default_float_vec_field_name, default_index_params)
assert collection_w.has_index()
index, _ = collection_w.index()
assert index == Index(collection_w.collection, ct.default_float_vec_field_name, default_index_params)
assert collection_w.indexes[0] == index
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_true(self):
"""
target: test insert ids fields values when auto_id=True
method: 1.create collection with auto_id=True 2.insert without ids
expected: verify primary_keys and num_entities
"""
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(auto_id=True)
collection_w = self.init_collection_wrap(name=c_name, schema=schema)
df = cf.gen_default_dataframe_data(ct.default_nb)
df.drop(ct.default_int64_field_name, axis=1, inplace=True)
mutation_res, _ = collection_w.insert(data=df)
assert cf._check_primary_keys(mutation_res.primary_keys, ct.default_nb)
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_twice_auto_id_true(self):
"""
target: test insert ids fields twice when auto_id=True
method: 1.create collection with auto_id=True 2.insert twice
expected: verify primary_keys unique
"""
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(auto_id=True)
nb = 10
collection_w = self.init_collection_wrap(name=c_name, schema=schema)
df = cf.gen_default_dataframe_data(nb)
df.drop(ct.default_int64_field_name, axis=1, inplace=True)
mutation_res, _ = collection_w.insert(data=df)
primary_keys = mutation_res.primary_keys
assert cf._check_primary_keys(primary_keys, nb)
mutation_res_1, _ = collection_w.insert(data=df)
primary_keys.extend(mutation_res_1.primary_keys)
assert cf._check_primary_keys(primary_keys, nb * 2)
assert collection_w.num_entities == nb * 2
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_true_list_data(self):
"""
target: test insert ids fields values when auto_id=True
        method: 1.create collection with auto_id=True 2.insert list data without ids field values
expected: assert num entities
"""
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(auto_id=True)
collection_w = self.init_collection_wrap(name=c_name, schema=schema)
data = cf.gen_default_list_data(nb=ct.default_nb)
mutation_res, _ = collection_w.insert(data=data[1:])
assert mutation_res.insert_count == ct.default_nb
assert cf._check_primary_keys(mutation_res.primary_keys, ct.default_nb)
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_true_with_dataframe_values(self):
"""
target: test insert with auto_id=True
        method: 1.create collection with auto_id=True 2.insert dataframe with ids field values
        expected: raise exception and collection stays empty
"""
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(auto_id=True)
collection_w = self.init_collection_wrap(name=c_name, schema=schema)
df = cf.gen_default_dataframe_data(nb=100)
error = {ct.err_code: 0, ct.err_msg: 'Auto_id is True, primary field should not have data'}
collection_w.insert(data=df, check_task=CheckTasks.err_res, check_items=error)
assert collection_w.is_empty
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_true_with_list_values(self):
"""
target: test insert with auto_id=True
        method: 1.create collection with auto_id=True 2.insert list data with all fields including ids
        expected: raise exception and collection stays empty
"""
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(auto_id=True)
collection_w = self.init_collection_wrap(name=c_name, schema=schema)
data = cf.gen_default_list_data(nb=100)
error = {ct.err_code: 0, ct.err_msg: 'The data fields number is not match with schema'}
collection_w.insert(data=data, check_task=CheckTasks.err_res, check_items=error)
assert collection_w.is_empty
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_false_same_values(self):
"""
target: test insert same ids with auto_id false
method: 1.create collection with auto_id=False 2.insert same int64 field values
        expected: insert succeeds and primary keys equal the given duplicate values
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 100
data = cf.gen_default_list_data(nb=nb)
data[0] = [1 for i in range(nb)]
mutation_res, _ = collection_w.insert(data)
assert mutation_res.insert_count == nb
assert mutation_res.primary_keys == data[0]
@pytest.mark.tags(CaseLabel.L1)
def test_insert_auto_id_false_negative_values(self):
"""
target: test insert negative ids with auto_id false
        method: auto_id=False, primary field values are negative
expected: verify num entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
nb = 100
data = cf.gen_default_list_data(nb)
data[0] = [i for i in range(0, -nb, -1)]
mutation_res, _ = collection_w.insert(data)
assert mutation_res.primary_keys == data[0]
assert collection_w.num_entities == nb
@pytest.mark.tags(CaseLabel.L2)
def test_insert_multi_threading(self):
"""
target: test concurrent insert
method: multi threads insert
expected: verify num entities
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(ct.default_nb)
thread_num = 4
threads = []
primary_keys = df[ct.default_int64_field_name].values.tolist()
def insert(thread_i):
log.debug(f'In thread-{thread_i}')
mutation_res, _ = collection_w.insert(df)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == primary_keys
for i in range(thread_num):
x = threading.Thread(target=insert, args=(i,))
threads.append(x)
x.start()
for t in threads:
t.join()
assert collection_w.num_entities == ct.default_nb * thread_num
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.skip(reason="Currently primary keys are not unique")
def test_insert_multi_threading_auto_id(self):
"""
target: test concurrent insert auto_id=True collection
method: 1.create auto_id=True collection 2.concurrent insert
expected: verify primary keys unique
"""
pass
@pytest.mark.tags(CaseLabel.L2)
def test_insert_multi_times(self):
"""
target: test insert multi times
method: insert data multi times
expected: verify num entities
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name)
step = 120
for _ in range(ct.default_nb // step):
df = cf.gen_default_dataframe_data(step)
mutation_res, _ = collection_w.insert(data=df)
assert mutation_res.insert_count == step
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb


class TestInsertAsync(TestcaseBase):
"""
******************************************************************
The following cases are used to test insert async
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L1)
    def test_insert_async(self):
"""
target: test async insert
method: insert with async=True
expected: verify num entities
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(nb=ct.default_nb)
future, _ = collection_w.insert(data=df, _async=True)
future.done()
mutation_res = future.result()
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_async_false(self):
"""
target: test insert with false async
method: async = false
expected: verify num entities
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(nb=ct.default_nb)
mutation_res, _ = collection_w.insert(data=df, _async=False)
assert mutation_res.insert_count == ct.default_nb
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L1)
def test_insert_async_callback(self):
"""
target: test insert with callback func
method: insert with callback func
expected: verify num entities
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(nb=ct.default_nb)
future, _ = collection_w.insert(data=df, _async=True, _callback=assert_mutation_result)
future.done()
mutation_res = future.result()
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_insert_async_long(self):
"""
target: test insert with async
        method: insert 50,000 entities with _async=True
expected: verify num entities
"""
nb = 50000
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(nb)
future, _ = collection_w.insert(data=df, _async=True)
future.done()
mutation_res = future.result()
assert mutation_res.insert_count == nb
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
assert collection_w.num_entities == nb
@pytest.mark.tags(CaseLabel.L2)
def test_insert_async_callback_timeout(self):
"""
target: test insert async with callback
        method: insert 100,000 entities with _async=True and timeout=1
expected: raise exception
"""
nb = 100000
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data(nb)
future, _ = collection_w.insert(data=df, _async=True, _callback=assert_mutation_result, timeout=1)
with pytest.raises(Exception):
future.result()
@pytest.mark.tags(CaseLabel.L2)
def test_insert_async_invalid_data(self):
"""
target: test insert async with invalid data
method: insert async with invalid data
expected: raise exception
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
columns = [ct.default_int64_field_name, ct.default_float_vec_field_name]
df = pd.DataFrame(columns=columns)
error = {ct.err_code: 0, ct.err_msg: "Cannot infer schema from empty dataframe"}
collection_w.insert(data=df, _async=True, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_insert_async_invalid_partition(self):
"""
target: test insert async with invalid partition
method: insert async with invalid partition
expected: raise exception
"""
collection_w = self.init_collection_wrap(name=cf.gen_unique_str(prefix))
df = cf.gen_default_dataframe_data()
err_msg = "partitionID of partitionName:p can not be find"
future, _ = collection_w.insert(data=df, partition_name="p", _async=True)
future.done()
with pytest.raises(Exception, match=err_msg):
future.result()


def assert_mutation_result(mutation_res):
assert mutation_res.insert_count == ct.default_nb
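The async tests above all follow the same pattern: kick off the insert, then block on `future.result()` and assert on the returned mutation result. A minimal framework-free sketch of that pattern, with a plain `ThreadPoolExecutor` and a hypothetical `fake_insert` standing in for `collection_w.insert(_async=True)`:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_insert(rows):
    # Stand-in for the server-side insert: just report how many rows arrived.
    return len(rows)

with ThreadPoolExecutor(max_workers=1) as pool:
    # Like collection_w.insert(df, _async=True), submit() returns a future.
    future = pool.submit(fake_insert, [{"id": i} for i in range(10)])
    # Like future.result() in the tests, this blocks until the work finishes.
    insert_count = future.result()

print(insert_count)  # → 10
```

The `_callback` variant in `test_insert_async_callback` maps onto `future.add_done_callback` in this sketch.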
# --- File: c__31.py | repo: fhansmann/coding-challenges | license: MIT ---
def value(n):
if n%2 == 0:
print("It is an even number")
else:
print("It is an odd number")
value(7)
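The parity check above prints its result, which makes it awkward to assert on. A small variant that returns a label instead (the `parity` name is mine, not in the original file):

```python
def parity(n):
    # Even numbers leave no remainder when divided by 2.
    return "even" if n % 2 == 0 else "odd"

print(f"It is an {parity(7)} number")  # → It is an odd number
```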
# --- File: NoteBooks/Curso de Flask/Ex_Files_Full_Stack_Dev_Flask/Exercise Files/Section 5/5.5 End/application/models.py | repo: Alejandro-sin/Learning_Notebooks | license: MIT ---
import flask
from application import db
from werkzeug.security import generate_password_hash, check_password_hash
class User(db.Document):
user_id = db.IntField( unique=True )
first_name = db.StringField( max_length=50 )
last_name = db.StringField( max_length=50 )
email = db.StringField( max_length=30, unique=True )
password = db.StringField( )
def set_password(self, password):
self.password = generate_password_hash(password)
def get_password(self, password):
return check_password_hash(self.password, password)
class Course(db.Document):
course_id = db.StringField( max_length=10, unique=True )
title = db.StringField( max_length=100 )
description = db.StringField( max_length=255 )
credits = db.IntField()
term = db.StringField( max_length=25 )
class Enrollment(db.Document):
user_id = db.IntField()
    course_id = db.StringField( max_length=10 )
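The `set_password`/`get_password` methods on `User` above delegate to werkzeug's salted hashing. A rough, dependency-free sketch of the same hash-then-verify idea using `hashlib.pbkdf2_hmac` (the `hash_password`/`check_password` names are hypothetical; werkzeug's own helpers remain the right choice inside the Flask app):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor for this sketch

def hash_password(password, salt=None):
    # A fresh random salt per password prevents rainbow-table reuse.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(stored, candidate):
    # Recompute with the stored salt and compare digests in constant time.
    salt, digest = stored
    recomputed = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    return hmac.compare_digest(recomputed, digest)

stored = hash_password("s3cret")
print(check_password(stored, "s3cret"), check_password(stored, "wrong"))  # → True False
```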
# --- File: custom-odoo-addons/demo_module/controllers/controllers.py | repo: andrelohmann/vagrant-odoo_develoment | license: MIT ---
# -*- coding: utf-8 -*-
# from odoo import http
# class DemoModule(http.Controller):
# @http.route('/demo_module/demo_module', auth='public')
# def index(self, **kw):
# return "Hello, world"
# @http.route('/demo_module/demo_module/objects', auth='public')
# def list(self, **kw):
# return http.request.render('demo_module.listing', {
# 'root': '/demo_module/demo_module',
# 'objects': http.request.env['demo_module.demo_module'].search([]),
# })
# @http.route('/demo_module/demo_module/objects/<model("demo_module.demo_module"):obj>', auth='public')
# def object(self, obj, **kw):
# return http.request.render('demo_module.object', {
# 'object': obj
# })
# --- File: third_party/com_fasterxml_jackson_datatype.bzl | repo: wix-playground/rules_maven_third_party | license: Apache-2.0 ---
load(":import_external.bzl", import_external = "import_external")
def dependencies():
import_external(
name = "com_fasterxml_jackson_datatype_jackson_datatype_jdk8",
artifact = "com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.12.4",
artifact_sha256 = "00852350fc1503344723b590f1afe9593ab732fb5b035659b503b49bbea5c9b2",
srcjar_sha256 = "480de9700a341cfb58adb26a87372c934c3b3f3f1dba5ec319796317c7524e64",
deps = [
"@com_fasterxml_jackson_core_jackson_core",
"@com_fasterxml_jackson_core_jackson_databind",
],
)
import_external(
name = "com_fasterxml_jackson_datatype_jackson_datatype_joda",
artifact = "com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4",
artifact_sha256 = "0de93d725472df2027c3e869301a3035892e607d94423c589c96964305d51051",
srcjar_sha256 = "999d17249f6d4491656e123b4ef49e326a57b1ec771c0b67834cc4b4ca692b45",
deps = [
"@com_fasterxml_jackson_core_jackson_annotations",
"@com_fasterxml_jackson_core_jackson_core",
"@com_fasterxml_jackson_core_jackson_databind",
"@joda_time_joda_time",
],
)
import_external(
name = "com_fasterxml_jackson_datatype_jackson_datatype_jsr310",
artifact = "com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.12.4",
artifact_sha256 = "af5a384d020e43f91f56d083f170d67aaf5aead71fa8fa1ad80a425b13ba13e4",
srcjar_sha256 = "5d080278c2b98374a419c6b78586072de0c888f82618106a1e27ff20249d4b20",
deps = [
"@com_fasterxml_jackson_core_jackson_annotations",
"@com_fasterxml_jackson_core_jackson_core",
"@com_fasterxml_jackson_core_jackson_databind",
],
)