hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b425c9c9f8ae82e61b1f78f1507a1a54617d3401 | 101 | py | Python | abc/abc156/abc156b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | abc/abc156/abc156b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | abc/abc156/abc156b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | N, K = map(int, input().split())
result = 0
while N != 0:
    result += 1
    N //= K
print(result)
| 12.625 | 32 | 0.514851 | 17 | 101 | 3.058824 | 0.647059 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041096 | 0.277228 | 101 | 7 | 33 | 14.428571 | 0.671233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b4286ff31fbba48bbffbe61d13b22c6feaa707ee | 1,217 | py | Python | falcon/util/time.py | RioAtHome/falcon | edd9352e630dbbb6272370281fc5fa6d792df057 | [
"Apache-2.0"
] | null | null | null | falcon/util/time.py | RioAtHome/falcon | edd9352e630dbbb6272370281fc5fa6d792df057 | [
"Apache-2.0"
] | null | null | null | falcon/util/time.py | RioAtHome/falcon | edd9352e630dbbb6272370281fc5fa6d792df057 | [
"Apache-2.0"
] | null | null | null | """Time and date utilities.
This module provides utility functions and classes for dealing with
times and dates. These functions are hoisted into the `falcon` module
for convenience::

    import falcon

    tz = falcon.TimezoneGMT()
"""
import datetime
__all__ = ['TimezoneGMT']
class TimezoneGMT(datetime.tzinfo):
"""GMT timezone class implementing the :py:class:`datetime.tzinfo` interface."""
GMT_ZERO = datetime.timedelta(hours=0)
def utcoffset(self, dt):
"""Get the offset from UTC.
Args:
dt(datetime.datetime): Ignored
Returns:
datetime.timedelta: GMT offset, which is equivalent to UTC and
so is aways 0.
"""
return self.GMT_ZERO
def tzname(self, dt):
"""Get the name of this timezone.
Args:
dt(datetime.datetime): Ignored
Returns:
str: "GMT"
"""
return 'GMT'
def dst(self, dt):
"""Return the daylight saving time (DST) adjustment.
Args:
dt(datetime.datetime): Ignored
Returns:
datetime.timedelta: DST adjustment for GMT, which is always 0.
"""
return self.GMT_ZERO
| 20.627119 | 84 | 0.600657 | 140 | 1,217 | 5.171429 | 0.464286 | 0.029006 | 0.058011 | 0.09116 | 0.245856 | 0.196133 | 0.146409 | 0.146409 | 0 | 0 | 0 | 0.003563 | 0.308135 | 1,217 | 58 | 85 | 20.982759 | 0.856295 | 0.604766 | 0 | 0.2 | 0 | 0 | 0.043344 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b43488081f782e60f929dfb3d7dec79c527c755a | 120 | py | Python | nemo_cf/__init__.py | willirath/nemo_cf | 560b7766cd22916752aeb1a656c7dfa108a290b6 | [
"MIT"
] | null | null | null | nemo_cf/__init__.py | willirath/nemo_cf | 560b7766cd22916752aeb1a656c7dfa108a290b6 | [
"MIT"
] | 4 | 2020-02-28T12:18:55.000Z | 2021-02-24T13:52:25.000Z | nemo_cf/__init__.py | willirath/nemo_cf | 560b7766cd22916752aeb1a656c7dfa108a290b6 | [
"MIT"
] | 2 | 2021-03-19T09:22:12.000Z | 2021-05-31T07:35:11.000Z | """Top-level package for NEMO CF."""
__author__ = """Willi Rath"""
__email__ = 'wrath@geomar.de'
__version__ = '0.1.0'
| 20 | 36 | 0.658333 | 17 | 120 | 3.941176 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028846 | 0.133333 | 120 | 5 | 37 | 24 | 0.615385 | 0.25 | 0 | 0 | 0 | 0 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b43863ea2282716a434818d10e3a3506fdad83aa | 348 | py | Python | convert.py | Mohitsharma44/pyfrac | 152a4cbf6e90b0f11c8594780b77705898bab197 | [
"MIT"
] | 7 | 2018-06-15T06:29:48.000Z | 2021-12-03T05:51:45.000Z | convert.py | Mohitsharma44/pyfrac | 152a4cbf6e90b0f11c8594780b77705898bab197 | [
"MIT"
] | 2 | 2018-01-09T16:08:52.000Z | 2018-07-19T11:15:15.000Z | convert.py | Mohitsharma44/pyfrac | 152a4cbf6e90b0f11c8594780b77705898bab197 | [
"MIT"
] | 2 | 2018-04-16T03:37:54.000Z | 2018-12-13T05:43:00.000Z | from pyfrac.convert import radtocsv
r = radtocsv.RadConv(basedir='./ir_images')
r._exifProcess()
#r.get_meta(base_dir='./ir_images', batch=True, tofile=True, filenames=['test020316.jpg'])
#r.tograyscale(base_dir='./ir_images', batch=True, meta=False, filenames=['test.jpg'])
r.tocsv(base_dir='./ir_images', batch=True, filenames=None)
r.cleanup()
| 38.666667 | 90 | 0.744253 | 52 | 348 | 4.807692 | 0.519231 | 0.128 | 0.108 | 0.18 | 0.288 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0.018349 | 0.060345 | 348 | 8 | 91 | 43.5 | 0.746177 | 0.5 | 0 | 0 | 0 | 0 | 0.127907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b4392b141a1925f819efd3eb7943e20bba21ccb4 | 420 | py | Python | picdown/picdown/items.py | gamegrd/discuz_spider | 06f16083f40b129e4d80478d57963ccced1c833c | [
"MIT"
] | 3 | 2017-01-11T03:26:26.000Z | 2020-07-18T11:25:18.000Z | picdown/picdown/items.py | andyzhuangyy/discuz_spider | 06f16083f40b129e4d80478d57963ccced1c833c | [
"MIT"
] | null | null | null | picdown/picdown/items.py | andyzhuangyy/discuz_spider | 06f16083f40b129e4d80478d57963ccced1c833c | [
"MIT"
] | 3 | 2016-09-03T03:44:22.000Z | 2020-07-18T11:25:20.000Z | # Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
from scrapy.item import Item, Field
class PicdownItem(Item):
    # define the fields for your item here like:
    # name = Field()
    #text = Field()
    image_urls = Field()
    image_paths = Field()
    site_url = Field()
    time = Field()
    text = Field()
    #link = Field()
    pass
| 22.105263 | 51 | 0.645238 | 57 | 420 | 4.701754 | 0.649123 | 0.052239 | 0.104478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 420 | 18 | 52 | 23.333333 | 0.8375 | 0.483333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.125 | 0.125 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
b4421d2b5e57fc6449dab964451d143c95dc9621 | 601 | py | Python | e621/posts/migrations/0005_auto_20181205_2231.py | adjspecies/explore621 | 0ec946d28ed54d11569aa237f721001f74e7f1be | [
"MIT"
] | 3 | 2019-10-12T13:32:22.000Z | 2021-11-18T19:17:16.000Z | e621/posts/migrations/0005_auto_20181205_2231.py | adjspecies/explore621 | 0ec946d28ed54d11569aa237f721001f74e7f1be | [
"MIT"
] | 6 | 2018-12-11T20:38:26.000Z | 2021-06-10T21:01:45.000Z | e621/posts/migrations/0005_auto_20181205_2231.py | adjspecies/explore621 | 0ec946d28ed54d11569aa237f721001f74e7f1be | [
"MIT"
] | 4 | 2018-12-11T06:19:59.000Z | 2022-02-17T00:29:15.000Z | # Generated by Django 2.1.3 on 2018-12-05 22:31
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('posts', '0004_tag_tag_type'),
    ]

    operations = [
        migrations.AddField(
            model_name='ingestlog',
            name='new',
            field=models.IntegerField(default=0),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='ingestlog',
            name='updated',
            field=models.IntegerField(default=0),
            preserve_default=False,
        ),
    ]
| 23.115385 | 49 | 0.567388 | 59 | 601 | 5.661017 | 0.627119 | 0.107784 | 0.137725 | 0.161677 | 0.54491 | 0.54491 | 0.305389 | 0.305389 | 0 | 0 | 0 | 0.051597 | 0.322795 | 601 | 25 | 50 | 24.04 | 0.769042 | 0.074875 | 0 | 0.526316 | 1 | 0 | 0.090253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b44c18ee037b88ef11f860a4fe764085c991c641 | 197 | py | Python | lib/ConfParser.py | jessehorne/anonchat.io | c01ec997ea1b736ed17c3071cb91f5de0307d590 | [
"MIT"
] | 1 | 2016-06-12T01:57:19.000Z | 2016-06-12T01:57:19.000Z | lib/ConfParser.py | jessehorne/anonchat.io | c01ec997ea1b736ed17c3071cb91f5de0307d590 | [
"MIT"
] | 7 | 2016-06-05T18:33:37.000Z | 2016-06-27T02:01:04.000Z | lib/ConfParser.py | jessehorne/anonchat.io | c01ec997ea1b736ed17c3071cb91f5de0307d590 | [
"MIT"
] | 1 | 2016-06-10T06:15:31.000Z | 2016-06-10T06:15:31.000Z | from ConfigParser import ConfigParser
def parse(filename):
config_dir = ""
parser = ConfigParser()
parser.read(config_dir + filename + ".conf")
return parser._sections[filename]
| 19.7 | 48 | 0.705584 | 21 | 197 | 6.47619 | 0.619048 | 0.132353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192893 | 197 | 9 | 49 | 21.888889 | 0.855346 | 0 | 0 | 0 | 0 | 0 | 0.025381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b45b7a12966f6b6599da83e90d6fb3cb245f366c | 310 | py | Python | posts/api/views.py | rkarthikdev/blog_api_django | 9d4e6f7984015518a63fb2793b8539e5636cb9ba | [
"MIT"
] | null | null | null | posts/api/views.py | rkarthikdev/blog_api_django | 9d4e6f7984015518a63fb2793b8539e5636cb9ba | [
"MIT"
] | 5 | 2021-03-19T11:21:48.000Z | 2021-11-15T17:52:14.000Z | posts/api/views.py | rkarthikdev/blog_api_django | 9d4e6f7984015518a63fb2793b8539e5636cb9ba | [
"MIT"
] | null | null | null | from rest_framework import viewsets
from .serializers import PostSerializer
from posts.models import Post
from .permissions import IsGetOrIsAdmin
class PostViewSet(viewsets.ModelViewSet):
    queryset = Post.objects.all()
    serializer_class = PostSerializer
    permission_classes = (IsGetOrIsAdmin,)
| 25.833333 | 42 | 0.796774 | 32 | 310 | 7.625 | 0.65625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148387 | 310 | 11 | 43 | 28.181818 | 0.924242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b4653ed93206024a4b84f1737222d2c8bc153179 | 235 | py | Python | src/server/config.py | diegosanchezstrange/auth-server-jwt-flask | 0f08e73c5d572b521a8ec361fad4358991258fc9 | [
"MIT"
] | 1 | 2019-10-18T07:14:56.000Z | 2019-10-18T07:14:56.000Z | src/server/config.py | diegosanchezstrange/auth-server-jwt-flask | 0f08e73c5d572b521a8ec361fad4358991258fc9 | [
"MIT"
] | 2 | 2021-12-01T11:29:55.000Z | 2021-12-01T11:29:56.000Z | src/server/config.py | diegosanchezstrange/auth-server-jwt-flask | 0f08e73c5d572b521a8ec361fad4358991258fc9 | [
"MIT"
] | null | null | null | import os
class Config(object):
    DEBUG = False
    TESTING = False
    SECRET_KEY = os.getenv('SECRET_KEY', 'SUPER_IMPORTANT_KEY')


class DevelopmentConfig(Config):
    DEBUG = True


class TestingConfig(Config):
    TESTING = True
| 18.076923 | 63 | 0.706383 | 28 | 235 | 5.785714 | 0.571429 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 235 | 12 | 64 | 19.583333 | 0.861702 | 0 | 0 | 0 | 0 | 0 | 0.123404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 1.111111 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b476df0cc2558e876a3887a124ffd21a404bb895 | 5,583 | py | Python | tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/structured_output.py | joshz123/tensorflow | 7841ca029060ab78e221e757d4b1ee6e3e0ffaa4 | [
"Apache-2.0"
] | 388 | 2020-06-27T01:38:29.000Z | 2022-03-29T14:12:01.000Z | tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/structured_output.py | sagol/tensorflow | 04f2870814d2773e09dcfa00cbe76a66a2c4de88 | [
"Apache-2.0"
] | 58 | 2021-11-22T05:41:28.000Z | 2022-01-19T01:33:40.000Z | tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/structured_output.py | sagol/tensorflow | 04f2870814d2773e09dcfa00cbe76a66a2c4de88 | [
"Apache-2.0"
] | 75 | 2021-12-24T04:48:21.000Z | 2022-03-29T10:13:39.000Z | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# RUN: %p/structured_output | FileCheck %s
# pylint: disable=missing-docstring,line-too-long
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v2 as tf
from tensorflow.compiler.mlir.tensorflow.tests.tf_saved_model import common
class TestModule(tf.Module):
  # The fNNNN name prefixes in this file are such that the sorted order of the
  # functions in the resulting MLIR output match the order in the source file,
  # allowing us to conveniently co-locate the CHECK's with the code they are
  # checking.
  #
  # Note: CHECK-DAG doesn't work with CHECK-SAME/CHECK-NEXT.

  # Check index paths for results.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = []})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0000_single_return"]
  @tf.function(input_signature=[])
  def f0000_single_return(self):
    return tf.constant(1.0, shape=[1])

  # Check index paths for results with multiple return values.
  # Note that semantically in Python, multiple return values are equivalent
  # to returning a tuple/list.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = [0]},
  # CHECK-SAME: tensor<2xf32> {tf_saved_model.index_path = [1]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0001_multiple_results_no_punctuation"]
  @tf.function(input_signature=[])
  def f0001_multiple_results_no_punctuation(self):
    return tf.constant(1.0, shape=[1]), tf.constant(1.0, shape=[2])

  # Check index paths for results written explicitly with parentheses.
  # This is semantically equivalent to the earlier test without parentheses,
  # but this test serves as documentation of this behavior for the purposes
  # of tf_saved_model users.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = [0]},
  # CHECK-SAME: tensor<2xf32> {tf_saved_model.index_path = [1]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0002_multiple_results_parentheses"]
  @tf.function(input_signature=[])
  def f0002_multiple_results_parentheses(self):
    return (tf.constant(1.0, shape=[1]), tf.constant(1.0, shape=[2]))

  # Check index paths for results written explicitly with brackets.
  # This is semantically equivalent to the earlier test without parentheses,
  # but this test serves as documentation of this behavior for the purposes
  # of tf_saved_model users.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = [0]},
  # CHECK-SAME: tensor<2xf32> {tf_saved_model.index_path = [1]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0003_multiple_results_brackets"]
  @tf.function(input_signature=[])
  def f0003_multiple_results_brackets(self):
    return [tf.constant(1.0, shape=[1]), tf.constant(1.0, shape=[2])]

  # Check index paths for lists.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = [0, 0]},
  # CHECK-SAME: tensor<2xf32> {tf_saved_model.index_path = [0, 1]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0004_list_2_elements"]
  @tf.function(input_signature=[])
  def f0004_list_2_elements(self):
    return [[tf.constant(1.0, shape=[1]), tf.constant(1.0, shape=[2])]]

  # Check index paths for dicts.
  # Keys are linearized in sorted order, matching `tf.nest.flatten`.
  # More thorough testing of this is in structured_input.py. The underlying code
  # path for linearization is shared, so no need to replicate that testing here.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}() -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = ["x"]},
  # CHECK-SAME: tensor<2xf32> {tf_saved_model.index_path = ["y"]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0005_dict_2_keys"]
  @tf.function(input_signature=[])
  def f0005_dict_2_keys(self):
    return {
        'x': tf.constant(1.0, shape=[1]),
        'y': tf.constant(1.0, shape=[2]),
    }

  # Check index paths for outputs are correctly handled in the presence of
  # multiple return statements.
  #
  # CHECK: func {{@[a-zA-Z_0-9]+}}(
  # CHECK-SAME: %arg0: tensor<f32> {tf_saved_model.index_path = [0]}
  # CHECK-SAME: ) -> (
  # CHECK-SAME: tensor<1xf32> {tf_saved_model.index_path = ["x"]})
  # CHECK-SAME: attributes {{.*}} tf_saved_model.exported_names = ["f0006_multiple_return_statements"]
  @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
  def f0006_multiple_return_statements(self, x):
    if x > 3.:
      return {'x': tf.constant(1.0, shape=[1])}
    else:
      return {'x': tf.constant(1.0, shape=[1])}


if __name__ == '__main__':
  common.do_test(TestModule)
| 44.309524 | 107 | 0.688519 | 801 | 5,583 | 4.615481 | 0.267166 | 0.043549 | 0.074655 | 0.059778 | 0.532324 | 0.457127 | 0.456857 | 0.456857 | 0.387341 | 0.360022 | 0 | 0.037409 | 0.162099 | 5,583 | 125 | 108 | 44.664 | 0.752886 | 0.695504 | 0 | 0.228571 | 0 | 0 | 0.007394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.142857 | 0.171429 | 0.6 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
b47d97a263a579e43f857f05ee46aef2b46a587e | 5,502 | py | Python | enaml/widgets/tool_bar.py | xtuzy/enaml | a1b5c0df71c665b6ef7f61d21260db92d77d9a46 | [
"BSD-3-Clause-Clear"
] | 1,080 | 2015-01-04T14:29:34.000Z | 2022-03-29T05:44:51.000Z | enaml/widgets/tool_bar.py | xtuzy/enaml | a1b5c0df71c665b6ef7f61d21260db92d77d9a46 | [
"BSD-3-Clause-Clear"
] | 308 | 2015-01-05T22:44:13.000Z | 2022-03-30T21:19:18.000Z | enaml/widgets/tool_bar.py | xtuzy/enaml | a1b5c0df71c665b6ef7f61d21260db92d77d9a46 | [
"BSD-3-Clause-Clear"
] | 123 | 2015-01-25T16:33:48.000Z | 2022-02-25T19:57:10.000Z | #------------------------------------------------------------------------------
# Copyright (c) 2013, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
#------------------------------------------------------------------------------
from atom.api import Bool, Enum, List, Typed, ForwardTyped, observe
from enaml.core.declarative import d_
from .action import Action
from .action_group import ActionGroup
from .constraints_widget import ConstraintsWidget, ProxyConstraintsWidget
class ProxyToolBar(ProxyConstraintsWidget):
""" The abstract definition of a proxy ToolBar object.
"""
#: A reference to the ToolBar declaration.
declaration = ForwardTyped(lambda: ToolBar)
def set_button_style(self, style):
raise NotImplementedError
def set_movable(self, movable):
raise NotImplementedError
def set_floatable(self, floatable):
raise NotImplementedError
def set_floating(self, floating):
raise NotImplementedError
def set_dock_area(self, area):
raise NotImplementedError
def set_allowed_dock_areas(self, areas):
raise NotImplementedError
def set_orientation(self, orientation):
raise NotImplementedError
class ToolBar(ConstraintsWidget):
""" A widget which displays a row of tool buttons.
A ToolBar is typically used as a child of a MainWindow where it can
be dragged and docked in various locations in the same fashion as a
DockPane. However, a ToolBar can also be used as the child of a
Container and layed out with constraints, though in this case it will
lose its ability to be docked.
"""
#: The button style to apply to actions added to the tool bar.
button_style = d_(Enum(
'icon_only', 'text_only', 'text_beside_icon', 'text_under_icon'
))
#: Whether or not the tool bar is movable by the user. This value
#: only has meaning if the tool bar is the child of a MainWindow.
movable = d_(Bool(True))
#: Whether or not the tool bar can be floated as a separate window.
#: This value only has meaning if the tool bar is the child of a
#: MainWindow.
floatable = d_(Bool(True))
#: A boolean indicating whether or not the tool bar is floating.
#: This value only has meaning if the tool bar is the child of a
#: MainWindow.
floating = d_(Bool(False))
#: The dock area in the MainWindow where the tool bar is docked.
#: This value only has meaning if the tool bar is the child of a
#: MainWindow.
dock_area = d_(Enum('top', 'right', 'left', 'bottom'))
#: The areas in the MainWindow where the tool bar can be docked
#: by the user. This value only has meaning if the tool bar is the
#: child of a MainWindow.
allowed_dock_areas = d_(List(
Enum('top', 'right', 'left', 'bottom', 'all'), ['all'],
))
#: The orientation of the toolbar. This only has meaning when the
#: toolbar is not a child of a MainWindow and is used as part of
#: a constraints based layout.
orientation = d_(Enum('horizontal', 'vertical'))
#: Whether or not to automatically adjust the 'hug_width' and
#: 'hug_height' values based on the value of 'orientation'.
auto_hug = d_(Bool(True))
#: A reference to the ProxyToolBar object.
proxy = Typed(ProxyToolBar)
def items(self):
""" Get the items defined on the tool bar.
"""
allowed = (Action, ActionGroup)
return [c for c in self.children if isinstance(c, allowed)]
#--------------------------------------------------------------------------
# Observers
#--------------------------------------------------------------------------
@observe('button_style', 'movable', 'floatable', 'floating', 'dock_area',
'allowed_dock_areas', 'orientation')
def _update_proxy(self, change):
""" An observer which sends state change to the proxy.
"""
# The superclass handler implementation is sufficient.
super(ToolBar, self)._update_proxy(change)
#--------------------------------------------------------------------------
# DefaultValue Handlers
#--------------------------------------------------------------------------
def _default_hug_width(self):
""" Get the default hug width for the slider.
The default hug width is computed based on the orientation.
"""
if self.orientation == 'horizontal':
return 'ignore'
return 'strong'
def _default_hug_height(self):
""" Get the default hug height for the slider.
The default hug height is computed based on the orientation.
"""
if self.orientation == 'vertical':
return 'ignore'
return 'strong'
#--------------------------------------------------------------------------
# PostSetAttr Handlers
#--------------------------------------------------------------------------
def _post_setattr_orientation(self, old, new):
""" Post setattr the orientation for the tool bar.
If auto hug is enabled, the hug values will be updated.
"""
if self.auto_hug:
if new == 'vertical':
self.hug_width = 'strong'
self.hug_height = 'ignore'
else:
self.hug_width = 'ignore'
self.hug_height = 'strong'
| 35.044586 | 79 | 0.58506 | 644 | 5,502 | 4.909938 | 0.27795 | 0.028779 | 0.041113 | 0.030361 | 0.226439 | 0.184693 | 0.161923 | 0.127767 | 0.127767 | 0.097407 | 0 | 0.00095 | 0.234824 | 5,502 | 156 | 80 | 35.269231 | 0.750119 | 0.513631 | 0 | 0.224138 | 0 | 0 | 0.102187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.206897 | false | 0 | 0.086207 | 0 | 0.586207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b483604e2df070e1dee7920716003f0c4b087099 | 2,058 | py | Python | forgot_password/handlers/__init__.py | oursky/forgot_password | 9afde8b9d39a2837676628f12c9b6f2c45da592a | [
"Apache-2.0"
] | null | null | null | forgot_password/handlers/__init__.py | oursky/forgot_password | 9afde8b9d39a2837676628f12c9b6f2c45da592a | [
"Apache-2.0"
] | null | null | null | forgot_password/handlers/__init__.py | oursky/forgot_password | 9afde8b9d39a2837676628f12c9b6f2c45da592a | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Oursky Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ..template import TemplateProvider
from .forgot_password import add_templates as add_forgot_password_templates
from .forgot_password import register_op as register_forgot_password_op
from .reset_password import add_templates as add_reset_password_templates
from .reset_password import register_op as register_reset_password_op
from .reset_password import register_handlers \
    as register_reset_password_handlers
from .welcome_email import add_templates as add_welcome_email_templates
from .welcome_email import register_hooks_and_ops \
    as register_welcome_email_hooks_and_ops
from .verify_code import register as register_verify_code


def register_handlers(**kwargs):
    settings = kwargs['settings']
    welcome_email_settings = kwargs['welcome_email_settings']
    template_provider = TemplateProvider()
    add_forgot_password_templates(template_provider, settings)
    add_reset_password_templates(template_provider, settings)
    add_welcome_email_templates(template_provider, welcome_email_settings)
    register_forgot_password_op(template_provider=template_provider, **kwargs)
    register_reset_password_op(template_provider=template_provider, **kwargs)
    register_reset_password_handlers(template_provider=template_provider,
                                     **kwargs)
    register_welcome_email_hooks_and_ops(template_provider=template_provider,
                                         **kwargs)
    register_verify_code(kwargs['verify_settings'])
| 45.733333 | 78 | 0.794947 | 268 | 2,058 | 5.779851 | 0.317164 | 0.123951 | 0.054229 | 0.082634 | 0.378309 | 0.352485 | 0.083925 | 0.083925 | 0.083925 | 0.083925 | 0 | 0.00459 | 0.153061 | 2,058 | 44 | 79 | 46.772727 | 0.884108 | 0.266278 | 0 | 0.08 | 0 | 0 | 0.03008 | 0.014706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.44 | 0.36 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
# ---- src/pymortests/algorithms/stuff.py (weslowrie/pymor) ----
# This file is part of the pyMOR project (http://www.pymor.org).
# Copyright 2013-2020 pyMOR developers and contributors. All rights reserved.
# License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)
import numpy as np
import pytest
from pymor.algorithms.newton import newton, NewtonError
from pymor.tools.floatcmp import float_cmp
from pymor.vectorarrays.numpy import NumpyVectorSpace
from pymortests.base import runmodule
from pymortests.fixtures.operator import MonomOperator

def _newton(order, **kwargs):
    mop = MonomOperator(order)
    rhs = NumpyVectorSpace.from_numpy([0.0])
    guess = NumpyVectorSpace.from_numpy([1.0])
    return newton(mop, rhs, initial_guess=guess, **kwargs)
@pytest.mark.parametrize("order", list(range(1, 8)))
def test_newton(order):
    U, _ = _newton(order, atol=1e-15)
    assert float_cmp(U.to_numpy(), 0.0)

def test_newton_fail():
    with pytest.raises(NewtonError):
        _ = _newton(0, maxiter=10, stagnation_threshold=np.inf)

if __name__ == "__main__":
    runmodule(filename=__file__)
# ---- tests/test_protocol_implements_decorator.py (rbroderi/protocol_implements_decorator) ----
from protocol_implements_decorator import implements
from typing import Protocol

def test():
    """Run some tests on the functionality of the decorators."""

    class Printable(Protocol):
        """A test protocol that requires a to_string method."""

        def to_string(self) -> str:
            return ""

    class Otherable(Protocol):
        """Another example."""

        def other(self) -> str:
            return ""

    fail: bool = False
    try:
        @implements(Printable)
        class Example:  # type: ignore
            """Test class that should implement printable but doesn't."""
    except NotImplementedError:
        fail = True
        pass
    assert fail

    @implements(Printable)
    class Example2:  # type: ignore
        """Test class that does implement Printable."""

        def to_string(self) -> str:
            return str(self)

    fail = False

    @implements(Printable, Otherable)
    class Example4:
        """Test class that uses multiple protocols."""

        def to_string(self) -> str:
            return str(self)

        def other(self) -> str:
            return str(self)

    test = Example4()
    print(test.get_protocols_implemented())
# ---- setup.py (Natgho/HMS-Search-Kit-Python-Client) ----
# Created by Sezer BOZKIR <admin@sezerbozkir.com> at 2/25/2021
from setuptools import setup
setup(
    name='HMSSearchKit',
    version='0.1.0',
    description='HMS Search Kit Python Client',
    url='https://github.com/shuds13/pyexample',
    author='Sezer Bozkir',
    author_email='admin@sezerbozkir.com',
    license='Apache License',
    packages=['HMSSearchKit'],
    install_requires=['requests>=2.25.1',
                      ],
    classifiers=[
        'Development Status :: 1 - Planning',
        'Intended Audience :: Science/Research',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
    ],
)
# ---- tests/fake_django_settings.py (gpshead/cheap_repr) ----
SECRET_KEY = 'jhfhkghg'
INSTALLED_APPS = ['django.contrib.contenttypes']
# ---- chapter_05_mathematics/section_5_2_fractions/5_14_fractions_create_strings_floats.py (wuxiaofeng8764/P3SL_Example) ----
import fractions

for s in ['0.5', '1.5', '2.0', '5e-1']:
    f = fractions.Fraction(s)
    print('{0:>4} = {1}'.format(s, f))
# ---- Tools/compiler/compiler/astgen.py (SaadBazaz/ChinesePython) ----
"""Generate ast module from specification"""
import fileinput
import getopt
import re
import sys
from StringIO import StringIO
SPEC = "ast.txt"
COMMA = ", "

def load_boilerplate(file):
    f = open(file)
    buf = f.read()
    f.close()
    i = buf.find('### ''PROLOGUE')
    j = buf.find('### ''EPILOGUE')
    pro = buf[i+12:j].strip()
    epi = buf[j+12:].strip()
    return pro, epi

def strip_default(arg):
    """Return the argname from an 'arg = default' string"""
    i = arg.find('=')
    if i == -1:
        return arg
    return arg[:i].strip()
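For illustration (not part of the original file), the helper's behavior on the two cases it handles — a `name = default` entry and a bare name — can be checked with a standalone, Python 3-compatible copy:

```python
def strip_default(arg):
    # Everything before the first '=' is the argument name; without an
    # '=' the whole string is returned unchanged.
    i = arg.find('=')
    if i == -1:
        return arg
    return arg[:i].strip()

print(strip_default("name = None"))  # name
print(strip_default("args"))         # args
```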

class NodeInfo:
    """Each instance describes a specific AST node"""

    def __init__(self, name, args):
        self.name = name
        self.args = args.strip()
        self.argnames = self.get_argnames()
        self.nargs = len(self.argnames)
        self.children = COMMA.join(["self.%s" % c
                                    for c in self.argnames])
        self.init = []

    def get_argnames(self):
        if '(' in self.args:
            i = self.args.find('(')
            j = self.args.rfind(')')
            args = self.args[i+1:j]
        else:
            args = self.args
        return [strip_default(arg.strip())
                for arg in args.split(',') if arg]

    def gen_source(self):
        buf = StringIO()
        print >> buf, "class %s(Node):" % self.name
        print >> buf, '    nodes["%s"] = "%s"' % (self.name.lower(), self.name)
        self._gen_init(buf)
        self._gen_getChildren(buf)
        self._gen_repr(buf)
        buf.seek(0, 0)
        return buf.read()

    def _gen_init(self, buf):
        print >> buf, "    def __init__(self, %s):" % self.args
        if self.argnames:
            for name in self.argnames:
                print >> buf, "        self.%s = %s" % (name, name)
        else:
            print >> buf, "        pass"
        if self.init:
            print >> buf, "".join(["        " + line for line in self.init])

    def _gen_getChildren(self, buf):
        print >> buf, "    def _getChildren(self):"
        if self.argnames:
            if self.nargs == 1:
                print >> buf, "        return %s," % self.children
            else:
                print >> buf, "        return %s" % self.children
        else:
            print >> buf, "        return ()"

    def _gen_repr(self, buf):
        print >> buf, "    def __repr__(self):"
        if self.argnames:
            fmt = COMMA.join(["%s"] * self.nargs)
            vals = ["repr(self.%s)" % name for name in self.argnames]
            vals = COMMA.join(vals)
            if self.nargs == 1:
                vals = vals + ","
            print >> buf, '        return "%s(%s)" %% (%s)' % \
                  (self.name, fmt, vals)
        else:
            print >> buf, '        return "%s()"' % self.name
rx_init = re.compile('init\((.*)\):')

def parse_spec(file):
    classes = {}
    cur = None
    for line in fileinput.input(file):
        mo = rx_init.search(line)
        if mo is None:
            if cur is None:
                # a normal entry
                try:
                    name, args = line.split(':')
                except ValueError:
                    continue
                classes[name] = NodeInfo(name, args)
                cur = None
            else:
                # some code for the __init__ method
                cur.init.append(line)
        else:
            # some extra code for a Node's __init__ method
            name = mo.group(1)
            cur = classes[name]
    return classes.values()

def main():
    prologue, epilogue = load_boilerplate(sys.argv[-1])
    print prologue
    print
    classes = parse_spec(SPEC)
    for info in classes:
        print info.gen_source()
    print epilogue

if __name__ == "__main__":
    main()
    sys.exit(0)
### PROLOGUE
"""Python abstract syntax node definitions
This file is automatically generated.
"""
from types import TupleType, ListType
from consts import CO_VARARGS, CO_VARKEYWORDS

def flatten(list):
    l = []
    for elt in list:
        t = type(elt)
        if t is TupleType or t is ListType:
            for elt2 in flatten(elt):
                l.append(elt2)
        else:
            l.append(elt)
    return l

def asList(nodes):
    l = []
    for item in nodes:
        if hasattr(item, "asList"):
            l.append(item.asList())
        else:
            t = type(item)
            if t is TupleType or t is ListType:
                l.append(tuple(asList(item)))
            else:
                l.append(item)
    return l
nodes = {}

class Node:
    lineno = None

    def getType(self):
        pass

    def getChildren(self):
        # XXX It would be better to generate flat values to begin with
        return flatten(self._getChildren())

    def asList(self):
        return tuple(asList(self.getChildren()))

class EmptyNode(Node):
    def __init__(self):
        self.lineno = None
### EPILOGUE
klasses = globals()
for k in nodes.keys():
    nodes[k] = klasses[nodes[k]]
# ---- applications/ShallowWaterApplication/tests/shallow_water_test_factory.py (clazaro/Kratos) ----
import KratosMultiphysics
import KratosMultiphysics.KratosUnittest as KratosUnittest
from KratosMultiphysics.ShallowWaterApplication.shallow_water_analysis import ShallowWaterAnalysis

try:
    import numpy
    numpy_available = True
except ImportError:
    numpy_available = False

try:
    import scipy
    scipy_available = True
except ImportError:
    scipy_available = False

try:
    import mpmath
    mpmath_available = True
except ImportError:
    mpmath_available = False

class ShallowWaterTestFactory(KratosUnittest.TestCase):
    need_numpy = False
    need_scipy = False
    need_mpmath = False

    def test_execution(self):
        if self.need_numpy and not numpy_available:
            self.skipTest("numpy not available")
        if self.need_scipy and not scipy_available:
            self.skipTest("scipy not available")
        if self.need_mpmath and not mpmath_available:
            self.skipTest("mpmath not available")
        with KratosUnittest.WorkFolderScope(self.execution_directory, __file__):
            with open(self.execution_file + "_parameters.json", 'r') as parameter_file:
                ProjectParameters = KratosMultiphysics.Parameters(parameter_file.read())
            model = KratosMultiphysics.Model()
            test = ShallowWaterAnalysis(model, ProjectParameters)
            test.Run()

class TestSemiLagrangianShallowWaterElement(ShallowWaterTestFactory):
    execution_directory = "elements_tests"
    execution_file = "semi_lagrangian_swe"

class TestShallowWaterElement(ShallowWaterTestFactory):
    execution_directory = "elements_tests"
    execution_file = "swe"

class TestShallowWater2D3NElement(ShallowWaterTestFactory):
    execution_directory = "elements_tests"
    execution_file = "shallow_water_2d_3n"

class TestMonotonicShallowWater2D3NElement(ShallowWaterTestFactory):
    execution_directory = "elements_tests"
    execution_file = "monotonic_shallow_water_2d_3n"

class TestBoussinesq2D3NElement(ShallowWaterTestFactory):
    execution_directory = "elements_tests"
    execution_file = "boussinesq_2d_3n"

class TestSetTopographyProcess(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "set_topography_process"

class TestVisualizationMeshProcess(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "visualization_mesh_process"

class TestMacDonaldShockBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "mac_donald_shock_benchmark"
    need_scipy = True

class TestMacDonaldTransitionBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "mac_donald_transition_benchmark"
    need_scipy = True

class TestDamBreakBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "dam_break_benchmark"
    need_scipy = True

class TestDryDamBreakBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "dry_dam_break_benchmark"
    need_scipy = True

class TestPlanarSurfaceInParabolaBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "planar_surface_in_parabola_benchmark"
    need_scipy = True

class TestSolitaryWaveBenchmark(ShallowWaterTestFactory):
    execution_directory = "processes_tests"
    execution_file = "solitary_wave_benchmark"
    need_mpmath = True

class TestMeshMovingStrategy(ShallowWaterTestFactory):
    execution_directory = "nightly_tests"
    execution_file = "mesh_moving_strategy"
    need_numpy = True
# ---- python_structure/code_signal_challenges/sort_by_height.py (bangyen/pascal-triangle) ----
"""
> Task
Some people are standing in a row in a park. There are trees between them which cannot be moved.
Your task is to rearrange the people by their heights in a non-descending order without moving
the trees. People can be very tall!
> Example
For a = [-1, 150, 190, 170, -1, -1, 160, 180], the output should be [-1, 150, 160, 170, -1, -1, 180, 190].
> Input/Output
- execution time limit: 4 seconds (py3)
- input: array.integer a
If a[i] = -1, then the ith position is occupied by a tree.
Otherwise a[i] is the height of a person standing in the ith position.
- guaranteed constraints:
1 ≤ a.length ≤ 1000,
-1 ≤ a[i] ≤ 1000.
- output: array.integer
Sorted array a with all the trees untouched.
"""

def sort_by_height(a):
    swap = True
    while swap:
        swap = False
        for i in range(len(a) - 1):
            j = i + 1
            while a[i] == -1:
                i += 1
                if i == len(a):
                    return a
            while a[j] == -1:
                j += 1
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]
                swap = True
    return a

def sort_by_height_v2(a):
    print("\nunsorted: {}".format(a))
    sorted_l = sorted([i for i in a if i > 0])
    print("sorted: {}".format(sorted_l))
    for n, i in enumerate(a):
        print("n: {} | i: {} | a[{}] = {}".format(n, i, n, a[n]))
        if i == -1:
            sorted_l.insert(n, i)
    print("sorted: {}".format(sorted_l))
    return sorted_l

def sort_by_height_v3(a):
    sorted_l = [n for n in a if n != -1]
    sorted_l.sort()
    for i in range(len(a)):
        a[i] = sorted_l.pop(0) if a[i] != -1 else a[i]
    return a

if __name__ == '__main__':
    test_1 = [-1, 15, 19, 17, -1, -1, 16, 18]
    test_2 = [-1, -1, -1, -1, -1]
    test_3 = [-1]
    test_4 = [4, 2, 9, 11, 2, 16]
    test_5 = [2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1]
    test_6 = [23, 54, -1, 43, 1, -1, -1, 77, -1, -1, -1, 3]

    print("\nPit version (quite wrong)")
    print(sort_by_height(test_1))
    print(sort_by_height(test_2))
    print(sort_by_height(test_3))
    print(sort_by_height(test_4))
    print(sort_by_height(test_5))
    print(sort_by_height(test_6))

    print("\n2nd version:")
    print(sort_by_height_v2(test_1))
    print(sort_by_height_v2(test_2))
    print(sort_by_height_v2(test_3))
    print(sort_by_height_v2(test_4))
    print(sort_by_height_v2(test_5))
    print(sort_by_height_v2(test_6))

    print("\n3rd version")
    print(sort_by_height_v3(test_1))
    print(sort_by_height_v3(test_2))
    print(sort_by_height_v3(test_3))
    print(sort_by_height_v3(test_4))
    print(sort_by_height_v3(test_5))
    print(sort_by_height_v3(test_6))
# ---- dglt/contrib/moses/moses/model/organ/generator.py (uta-smile/CD-MVGNN) ----
import numpy as np
from functools import wraps
from tqdm import tqdm
import torch
from dglt.contrib.moses.moses.abstract_generator import AbstractGenerator
from dglt.contrib.moses.moses.model.vae.dataset import VAEDesignDataset

class ORGANGenerator(AbstractGenerator):
    """ORGAN - SMILES generator."""

    def __init__(self, model, config, gen_config=None):
        """Constructor function."""
        super(ORGANGenerator, self).__init__(model, config, gen_config)

    def sample(self, nb_smpls, max_len):
        """Sample a list of SMILES sequences."""
        return self.model.sample(nb_smpls, max_len)

    def recon(self, smiles_list_in, max_len):
        """Reconstruct a list of SMILES sequences from reference SMILES."""
        raise NotImplementedError

    def design(self, nb_smpls, properties, max_len):
        """Design a list of SMILES sequences that satisfy given property values."""
        raise NotImplementedError

    def recon_design(self, smiles_list_in, properties, max_len):
        """Design a list of SMILES sequences that satisfy given property values."""
        raise NotImplementedError
# ---- python/exercism/gigasecond/gigasecond.py (lmregus/Portfolio) ----
##########################
#                        #
# Developer: Luis Regus  #
# Date: 11/13/2015       #
#                        #
##########################
from datetime import datetime
from datetime import timedelta

def add_gigasecond(date):
    ''' Returns 1Gs anniversary

    This function calculates 1Gs date

    Args:
        date (datetime): Date

    Returns:
        datetime: New date (1G added)
    '''
    if isinstance(date, datetime):
        return date + timedelta(seconds=10**9)
    return date
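A small usage sketch (not part of the original file). Since 10**9 seconds is 11574 days, 1 hour, 46 minutes and 40 seconds — roughly 31.7 years — the gigasecond anniversary of midnight on 13 Nov 2015 falls in July 2047:

```python
from datetime import datetime, timedelta

def add_gigasecond(date):
    # Same logic as the function above: shift a datetime by 10**9 seconds;
    # non-datetime inputs pass through unchanged.
    if isinstance(date, datetime):
        return date + timedelta(seconds=10**9)
    return date

print(add_gigasecond(datetime(2015, 11, 13)))  # 2047-07-22 01:46:40
print(add_gigasecond("not a datetime"))        # not a datetime
```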
# ---- tests/test_tokenizer.py (jonatan1609/JoLang) ----
from unittest import TestCase
from jolang.tokenizer import Tokenizer, tokens

class TestTokenizer(TestCase):
    tests_pass = {
        "+": [tokens.Add],
        "-": [tokens.Subtract],
        ">>": [tokens.RightShift],
        ">>=": [tokens.InplaceRightShift],
        "|": [tokens.BinOr],
        "||": [tokens.LogicOr],
        "abc a0 01": [tokens.Identifier, tokens.Identifier, tokens.Integer],
        "0x222 0o222 2.2": [tokens.Integer, tokens.Integer, tokens.Float],
        "func a(){return a % 2 - 1 == 2}": [tokens.Identifier, tokens.Identifier, tokens.LeftParen, tokens.RightParen, tokens.LeftBrace, tokens.Identifier, tokens.Identifier, tokens.Modulo, tokens.Integer, tokens.Subtract, tokens.Integer, tokens.IsEqual, tokens.Integer, tokens.RightBrace],
        "$ abc": [],
        "a $abc \n a": [tokens.Identifier, tokens.Identifier]
    }
    tests_fail = ["0a", "0.a", "0o8", "@"]

    def test_tokenizer_pass(self):
        for test, expect in self.tests_pass.items():
            t = list(Tokenizer(test).tokenize())
            self.assertTrue(len(t) == len(expect), f"Length of tokens isn't {len(expect)}")
            for i in range(len(expect)):
                self.assertIsInstance(t[i], expect[i])

    def test_tokenizer_fail(self):
        for test in self.tests_fail:
            self.assertRaises(SyntaxError, lambda: list(Tokenizer(test).tokenize()))
# ---- megumin/__main__.py (davitudoplugins1234/WhiterKang) ----
# Copyright (C) 2022 by fnixdev
#
from .bot import megux
from pyrogram import idle
import asyncio
from .utils.database.lang import load_language

async def main():
    load_language()
    await megux.start()
    await idle()
    await megux.stop()

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())
# ---- tests/test_get_free_timeslots_complex_case_0.py (gemerden/make_my_day_planner) ----
import pytest
from app_scripts.get_free_timeslots import get_free_timeslots
"""
Pytest file for testing.
Run with `pytest` in the root of the project.
"""
"""
If any test fails, drop to the python shell and use the following to execute
tests for a specific time_min
```
from app_scripts.get_free_timeslots import get_free_timeslots
time_min = '2019-12-14T13:00:00+01:00'
time_max = '2019-12-14T23:59:00+01:00'
scheduled_time_blocks = [['2019-12-14T09:00:00+01:00', '2019-12-14T10:00:00+01:00'],
['2019-12-14T11:00:00+01:00', '2019-12-14T12:30:00+01:00'],
['2019-12-14T11:30:00+01:00', '2019-12-14T12:00:00+01:00'],
['2019-12-14T14:00:00+01:00', '2019-12-14T15:00:00+01:00'],
['2019-12-14T14:30:00+01:00', '2019-12-14T16:00:00+01:00'],
['2019-12-14T17:00:00+01:00', '2019-12-14T18:00:00+01:00'],
['2019-12-14T18:00:00+01:00', '2019-12-14T19:00:00+01:00']]
get_free_timeslots(time_min, time_max, scheduled_time_blocks)
```
"""
time_s_p = ["2019-12-14T", ":00+01:00"]
scheduled_time_blocks_raw = [
    ["09:00", "10:00"],
    ["11:00", "12:30"],
    ["11:30", "12:00"],
    ["14:00", "15:00"],
    ["14:30", "16:00"],
    ["17:00", "18:00"],
    ["18:00", "19:00"],
]
time_max_raw = "23:59"
time_mins_raw = {
    "08:00": [
        ["08:00", "09:00"],
        ["10:00", "11:00"],
        ["12:30", "14:00"],
        ["16:00", "17:00"],
        ["19:00", "23:59"],
    ],
    "09:00": [
        ["10:00", "11:00"],
        ["12:30", "14:00"],
        ["16:00", "17:00"],
        ["19:00", "23:59"],
    ],
    "09:30": [
        ["10:00", "11:00"],
        ["12:30", "14:00"],
        ["16:00", "17:00"],
        ["19:00", "23:59"],
    ],
    "10:00": [
        ["10:00", "11:00"],
        ["12:30", "14:00"],
        ["16:00", "17:00"],
        ["19:00", "23:59"],
    ],
    "10:30": [
        ["10:30", "11:00"],
        ["12:30", "14:00"],
        ["16:00", "17:00"],
        ["19:00", "23:59"],
    ],
    "11:00": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "11:15": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "11:30": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "11:45": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "12:00": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "12:15": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "12:30": [["12:30", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "13:00": [["13:00", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]],
    "14:00": [["16:00", "17:00"], ["19:00", "23:59"]],
    "14:15": [["16:00", "17:00"], ["19:00", "23:59"]],
    "14:30": [["16:00", "17:00"], ["19:00", "23:59"]],
    "14:45": [["16:00", "17:00"], ["19:00", "23:59"]],
    "15:00": [["16:00", "17:00"], ["19:00", "23:59"]],
    "15:30": [["16:00", "17:00"], ["19:00", "23:59"]],
    "16:00": [["16:00", "17:00"], ["19:00", "23:59"]],
    "16:30": [["16:30", "17:00"], ["19:00", "23:59"]],
    "17:00": [["19:00", "23:59"]],
    "17:30": [["19:00", "23:59"]],
    "18:00": [["19:00", "23:59"]],
    "18:30": [["19:00", "23:59"]],
    "19:00": [["19:00", "23:59"]],
    "19:30": [["19:30", "23:59"]],
}
scheduled_time_blocks = [
    [f"{time_s_p[0]}{i[0]}{time_s_p[1]}", f"{time_s_p[0]}{i[1]}{time_s_p[1]}"]
    for i in scheduled_time_blocks_raw
]
time_min_free_time_dict = {}
for a_time_min, a_free_result_list in time_mins_raw.items():
    temp_free_timeblocks = []
    for i in a_free_result_list:
        temp_free_timeblocks.append(
            [f"{time_s_p[0]}{i[0]}{time_s_p[1]}", f"{time_s_p[0]}{i[1]}{time_s_p[1]}"]
        )
    time_min_free_time_dict[
        f"{time_s_p[0]}{a_time_min}{time_s_p[1]}"
    ] = temp_free_timeblocks
time_max = f"{time_s_p[0]}{time_max_raw}{time_s_p[1]}"
free_timeblock_expected_and_received_results = []
for time_min, expected_free_timeblocks in time_min_free_time_dict.items():
    received_free_timeblocks = get_free_timeslots(
        time_min, time_max, scheduled_time_blocks
    )
    free_timeblock_expected_and_received_results.append(
        [expected_free_timeblocks, received_free_timeblocks]
    )
@pytest.mark.parametrize(
"expected_free_timeblocks, received_free_timeblocks",
free_timeblock_expected_and_received_results,
)
def test_complex_case_0(expected_free_timeblocks, received_free_timeblocks):
assert received_free_timeblocks == expected_free_timeblocks
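The parametrized test above checks `get_free_timeslots` (defined elsewhere in this project) against the expectation table. As an illustration only — the function name and exact semantics here are assumptions, not the project's code — a free-slot computation of this shape can be sketched as interval subtraction over lexicographically ordered "HH:MM" strings:

```python
def free_timeslots_sketch(time_min, time_max, busy_blocks):
    """Illustrative only: return the gaps inside [time_min, time_max] that are
    not covered by any busy block; ``busy_blocks`` is a list of [start, end]
    pairs of "HH:MM" strings, which compare correctly as plain strings."""
    free = []
    cursor = time_min
    for start, end in sorted(busy_blocks):
        if start > cursor:
            # uncovered gap before this busy block, clipped to the window
            free.append([cursor, min(start, time_max)])
        cursor = max(cursor, end)
        if cursor >= time_max:
            break
    if cursor < time_max:
        # trailing gap after the last busy block
        free.append([cursor, time_max])
    # drop zero-length slots
    return [[s, e] for s, e in free if s < e]
```

With busy blocks 11:00-12:30, 14:00-16:00 and 17:00-19:00 inside a window ending at 23:59, this sketch reproduces rows of the expectation table above (for example, a window starting at "13:00" yields `[["13:00", "14:00"], ["16:00", "17:00"], ["19:00", "23:59"]]`).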


# --- file: ww/api/admin.py (repo: basilpork/watchword, license: MIT) ---

from django.contrib import admin
from ww.api.models import Watch, Ping, Flare, Launch


@admin.register(Watch)
class WatchAdmin(admin.ModelAdmin):
    list_display = ('id', 'name', 'created', 'word', 'state', 'status', 'last_ping',)


@admin.register(Ping)
class PingAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'watch_name', 'method', 'user_agent', 'remote_addr',)

    def watch_name(self, ping):
        return ping.watch.name


@admin.register(Flare)
class FlareAdmin(admin.ModelAdmin):
    list_display = ('id', 'signal', 'config',)


@admin.register(Launch)
class LaunchAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'flare', 'watch', 'trigger_state', 'message',)


# --- file: tests/reqgenerator/reqgenerator.py (repo: gholms/eucaconsole, license: BSD-2-Clause) ---

# dependencies
# pip install requests cachecontrol beautifulsoup4
from datetime import datetime
import random
import requests
import sys
import time
from threading import Thread

from bs4 import BeautifulSoup
from cachecontrol import CacheControl

# note: when using localhost and a port, the cache is never filled.
URL = 'http://dak-logging-elb-000562685431.lb.a-02.autoqa.qa1.eucalyptus-systems.com/'
NUM_USERS = 50
NUM_ITERATIONS = 100


class UserSession(object):
    """
    Class representing a user session. Meant to be extended for actual user behavior.
    """
    session_active = False

    def __init__(self, url, session, name='', iterations=10):
        self.url = url
        self.session = session
        self.name = name
        self.iterations = iterations

    def get_csrf_token_from_page(self, page):
        idx1 = page.text.find('csrf_token')
        idx2 = page.text[idx1:].find('value')
        return page.text[idx1 + idx2 + 7:idx1 + idx2 + 47]

    def get_page_completely(self, url):
        start = datetime.now()
        page = self.session.get(url, verify=False)
        if self.session_active and page.text.find("login-form") > -1:
            self.session_active = False
            self.login('ui-test-acct-00', 'admin', 'mypassword0')
        # now find img, link, script tags
        soup = BeautifulSoup(page.text, 'html.parser')
        images = [img['src'] for img in soup.findAll('img') if img.has_attr('src')]
        scripts = [script['src'] for script in soup.findAll('script') if script.has_attr('src')]
        links = [link['href'] for link in soup.findAll('link') if link.has_attr('href')]
        # fetch resources
        url = self.url
        if url.endswith('/'):
            url = url[:len(url) - 1]
        for i in images:
            self.session.get(url + i, verify=False)
        for s in scripts:
            self.session.get(url + s, verify=False)
        for l in links:
            self.session.get(url + l, verify=False)
        end = datetime.now()
        page.elapsed = end - start
        # print("cache size = " + str(len(self.session.adapters['http://'].cache.data)))
        return page

    def login(self, account, user, password):
        login_fields = {
            'account': account,
            'username': user,
            'password': password,
            'came_from': '/'
        }
        # TODO: switch to requests.auth ?
        page = self.get_page_completely(self.url)
        csrf_token = self.get_csrf_token_from_page(page)
        login_fields['csrf_token'] = csrf_token
        self.session.post(self.url + 'login?login_type=Eucalyptus', data=login_fields)
        # TODO: return login status
        self.session_active = True

    def logout(self):
        pass

    def __call__(self):
        pass


class BrowsingUser(UserSession):
    pages = [
        'images',
        'instances',
        'volumes',
        'snapshots',
        'keypairs',
        'securitygroups',
        'scalinggroups',
        'users',
        'stacks'
    ]

    def __call__(self):
        for i in range(self.iterations):
            page_name = self.pages[random.randrange(len(self.pages))]
            page = self.get_page_completely(self.url + page_name)
            load_time = page.elapsed.total_seconds()
            csrf_token = self.get_csrf_token_from_page(page)
            page = self.session.post(self.url + page_name + '/json', data={'csrf_token': csrf_token})
            load_time = load_time + page.elapsed.total_seconds()
            print("{0} loading page: {1} took {2} seconds".format(self.name, page_name, load_time))
            time.sleep(random.randrange(1, 20))


class VolumeManipulatorUser(UserSession):
    def __call__(self):
        for i in range(self.iterations):
            # create volume
            page = self.session.get(self.url + 'volumes/new')
            csrf_token = self.get_csrf_token_from_page(page)
            create_fields = {
                'name': 'testvolfor{0}'.format(self.name),
                'size': '1',
                'zone': 'one'
            }
            create_fields['csrf_token'] = csrf_token
            self.session.post(self.url + 'volumes/create', data=create_fields)


if __name__ == "__main__":
    requests.packages.urllib3.disable_warnings()
    url = URL
    num_users = NUM_USERS
    num_iterations = NUM_ITERATIONS
    if len(sys.argv) > 1:
        url = sys.argv[1]
    if len(sys.argv) > 2:
        num_users = int(sys.argv[2])
    if len(sys.argv) > 3:
        num_iterations = int(sys.argv[3])
    else:
        print("usage: reqgenerator.py <console-url> [num sessions] [num iterations/session]")
        sys.exit()
    # start a bunch of users
    for i in range(0, num_users):
        s = requests.Session()
        s = CacheControl(s)
        print("Starting user: " + str(i))
        u = BrowsingUser(url, s, 'user' + str(i), num_iterations)
        u.login('ui-test-acct-00', 'admin', 'mypassword0')
        Thread(target=u).start()
        time.sleep(2)


# --- file: apps/gridportal/base/AYS/.macros/page/ayscodeeditors/3_ayscodeeditors.py
# (repo: jumpscale7/jumpscale_portal, license: Apache-2.0) ---

def main(j, args, params, tags, tasklet):
    page = args.page
    logpath = args.requestContext.params.get('logpath')
    templatepath = args.requestContext.params.get('templatepath')
    installedpath = args.requestContext.params.get('installedpath')
    metapath = args.requestContext.params.get('metapath')
    domain = args.requestContext.params.get('domain')
    name = args.requestContext.params.get('servicename')
    instance = args.requestContext.params.get('instance')
    instancestr = ':%s' % instance if instance else ''
    page.addHeading("Code editors for %s:%s%s" % (domain, name, instancestr), 2)
    for representation, path in (('Installed', installedpath), ('Logs', logpath), ('Template', templatepath), ('Metadata', metapath)):
        if not path or not j.system.fs.exists(path):
            continue
        page.addHeading("%s" % representation, 3)
        page.addExplorer(path, readonly=False, tree=True, height=300)
    params.result = page
    return params


def match(j, args, params, tags, tasklet):
    return True


# --- file: tests/test_authentication.py (repo: da8y01/xkcd3-back, license: MIT) ---

# coding: utf-8
from flask import url_for
from xkcd.exceptions import USER_ALREADY_REGISTERED
def _register_user(testapp, **kwargs):
    return testapp.post_json(url_for("users.register_user"), {
        "user": {
            "first_name": "mo",
            "last_name": "mo",
            "email": "mo@mo.mo",
            "password": "momo"
        }
    }, **kwargs)


class TestAuthenticate:
    def test_register_user(self, testapp):
        resp = _register_user(testapp)
        assert resp.json['user']['email'] == 'mo@mo.mo'
        assert resp.json['user']['token'] != 'None'
        assert resp.json['user']['token'] != ''

    def test_user_login(self, testapp):
        _register_user(testapp)
        resp = testapp.post_json(url_for('users.login_user'), {'user': {
            'email': 'mo@mo.mo',
            'password': 'momo'
        }})
        assert resp.json['user']['email'] == 'mo@mo.mo'
        assert resp.json['user']['token'] != 'None'
        assert resp.json['user']['token'] != ''

    def test_get_user(self, testapp):
        resp = _register_user(testapp)
        token = str(resp.json['user']['token'])
        resp = testapp.get(url_for('users.get_user'), headers={
            'Authorization': 'Token {}'.format(token)
        })
        assert resp.json['user']['email'] == 'mo@mo.mo'
        assert resp.json['user']['token'] == token

    def test_register_already_registered_user(self, testapp):
        _register_user(testapp)
        resp = _register_user(testapp, expect_errors=True)
        assert resp.status_int == 422
        assert resp.json == USER_ALREADY_REGISTERED['message']

    def test_update_user(self, testapp):
        resp = _register_user(testapp)
        token = str(resp.json['user']['token'])
        resp = testapp.put_json(url_for('users.update_user'), {
            'user': {
                'email': 'meh@mo.mo',
                'password': 'hmm'
            }
        }, headers={
            'Authorization': 'Token {}'.format(token)
        })
        assert resp.json['user']['email'] == 'meh@mo.mo'


# --- file: aiotdlib/api/functions/delete_revoked_chat_invite_link.py (repo: jraylan/aiotdlib, license: MIT) ---

# =============================================================================== #
# #
# This file has been generated automatically!! Do not change this manually! #
# #
# =============================================================================== #
from __future__ import annotations
from pydantic import Field
from ..base_object import BaseObject
class DeleteRevokedChatInviteLink(BaseObject):
    """
    Deletes revoked chat invite links. Requires administrator privileges and can_invite_users right in the chat for own links and owner privileges for other links

    :param chat_id: Chat identifier
    :type chat_id: :class:`int`

    :param invite_link: Invite link to revoke
    :type invite_link: :class:`str`
    """

    ID: str = Field("deleteRevokedChatInviteLink", alias="@type")
    chat_id: int
    invite_link: str

    @staticmethod
    def read(q: dict) -> DeleteRevokedChatInviteLink:
        return DeleteRevokedChatInviteLink.construct(**q)


# --- file: inputs/fields/field.py (repo: Receiling/ENPAR, license: MIT) ---

from abc import ABC, abstractclassmethod
class Field(ABC):
    """Abstract class `Field` defines one indexing method:
    generate a counter from raw text data and index tokens in raw text data.

    Arguments:
        ABC {ABC} -- abstract base class
    """

    @abstractclassmethod
    def count_vocab_items(self, counter, sentences):
        """This function constructs a counter using each sentence's content,
        in preparation for building the vocabulary.

        Arguments:
            counter {dict} -- element count dict
            sentences {list} -- text data
        """
        raise NotImplementedError

    @abstractclassmethod
    def index(self, instance, vocab, sentences):
        """This function constructs an instance using sentences and the vocabulary;
        each namespace is a mapping method using a different type of data.

        Arguments:
            instance {dict} -- collections of various fields
            vocab {dict} -- vocabulary
            sentences {list} -- text data
        """
        raise NotImplementedError
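The abstract contract above can be made concrete. The following `TokenField` is a hypothetical illustration, not part of this code base; a small `FieldStub` stand-in mirrors the `Field` class so the sketch is self-contained:

```python
from abc import ABC, abstractclassmethod


class FieldStub(ABC):
    """Stand-in mirroring the `Field` contract above (demo only)."""

    @abstractclassmethod
    def count_vocab_items(self, counter, sentences): ...

    @abstractclassmethod
    def index(self, instance, vocab, sentences): ...


class TokenField(FieldStub):
    """Hypothetical field: counts tokens, then indexes them against a vocab."""

    def count_vocab_items(self, counter, sentences):
        # tally every token; a vocabulary is built from this counter later
        for sentence in sentences:
            for token in sentence:
                counter[token] = counter.get(token, 0) + 1

    def index(self, instance, vocab, sentences):
        # map each token to its vocabulary id (0 for out-of-vocabulary tokens)
        instance['tokens'] = [[vocab.get(t, 0) for t in s] for s in sentences]
```

The two methods cooperate in sequence: `count_vocab_items` fills a counter over a corpus, a vocabulary is derived from that counter, and `index` then turns raw sentences into id lists stored on the instance dict.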


# --- file: medcon/__main__.py (repo: FNNDSC/pl-medcon, license: MIT) ---

from medcon.medcon import Medcon
def main():
    chris_app = Medcon()
    chris_app.launch()


if __name__ == "__main__":
    main()


# --- file: pysynphot/test/test_ticket126.py (repo: dobos/pysynphot, license: BSD-3-Clause) ---

"""
Applies to both #125 and #126.
Test raises an error if the bug has not been fixed.
"""
from __future__ import absolute_import, division, print_function
import pytest
from ..spectrum import SourceSpectrum
from ..spparser import parse_spec
@pytest.mark.remote_data
def test_ticket125():
    sp = parse_spec('rn(icat(k93models,44500,0.0,5.0),band(nicmos,2,f222m),'
                    '18,vegamag)')
    assert isinstance(sp, SourceSpectrum)


# --- file: app/user/views.py (repo: chandan-urs/recipe-app-api, license: MIT) ---

from rest_framework import generics
from rest_framework.authtoken.views import ObtainAuthToken
from rest_framework.settings import api_settings
from . import serializers
class CreateUserView(generics.CreateAPIView):
    """Creates a new user in the system"""
    serializer_class = serializers.UserSerializer


class CreateTokenView(ObtainAuthToken):
    """Create a new auth token for user"""
    serializer_class = serializers.AuthTokenSerializer
    renderer_classes = api_settings.DEFAULT_RENDERER_CLASSES


# --- file: flask_news2/controlers/admin.py (repo: lpy148145/test, license: Unlicense) ---

from flask import Blueprint, render_template, url_for, flash, redirect, session, request
from werkzeug.security import generate_password_hash, check_password_hash

from forms import UserForm, NewsForm
from functools import wraps
from application import db
from modules import User, News
from datetime import datetime

admin = Blueprint('admin', __name__)


# Define a decorator, admin_login_require, that implements access control
def admin_login_require(f):
    # Decorate the inner wrapper function with functools.wraps so that the
    # attributes of the decorated function are preserved
    @wraps(f)
    def wrapper(*args, **kwargs):
        # Check whether the user is logged in
        if 'userid' not in session:
            # If the session has no 'userid' key, redirect to the login page
            return redirect(url_for('admin.login'))
        return f(*args, **kwargs)
    return wrapper


@admin.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        username = request.form['username']
        password = request.form['password']
        user = User.query.filter_by(username=username, is_valid=1).first()
        if user and check_password_hash(user.password, password):
            session['username'] = user.username
            session['userid'] = user.id
            return redirect(url_for('admin.index'))
        else:
            flash("The username or password you entered is incorrect", category='error')
    return render_template('admin/login.html')


@admin.route('/')
@admin_login_require
def index():
    return render_template('admin/index.html')


# Log out
@admin.route('/logout')
# @admin_login_require
def logout():
    session.pop('username', None)
    session.pop('userid', None)
    return redirect(url_for('admin.login'))


# Change password
@admin.route('/changePassword', methods=['GET', 'POST'])
@admin_login_require
def changePassword():
    print(session['username'], session['userid'])
    _id = session['userid']
    user1 = User.query.get(_id)
    form = UserForm(obj=user1)
    if form.validate_on_submit():
        user1.password = generate_password_hash(form.password.data)
        db.session.add(user1)
        db.session.commit()
        flash('Password changed successfully! Please log in again')
        return redirect(url_for('admin.login'))
    return render_template('admin/changePassword.html', form=form)


@admin.errorhandler(404)
def get_error(error):
    return render_template('admin/error.html'), 404


@admin.route('/user/')
@admin.route('/user/<int:page>')
@admin_login_require
def manager_user(page=None):
    # Paginated query
    if page is None:
        page = 1
    # Read the search keyword
    keyword = request.args.get('search')
    if keyword:
        try:
            user_list = User.query.filter(User.username.contains(keyword)).\
                order_by(User.id).\
                paginate(page=page, per_page=2)
            condition = "?search=" + keyword
            return render_template('admin/user_index.html', user_list=user_list, condition=condition)
        except:
            flash('Search failed!')
    else:
        user_list = User.query.paginate(page=page, per_page=5)
    return render_template('admin/user_index.html', user_list=user_list)


@admin.route('/user/add', methods=['GET', 'POST'])
@admin_login_require
def user_add():
    form = UserForm()
    # print(form.__dict__)
    # Check whether the form validates
    try:
        if form.validate_on_submit():
            user = User(form.username.data,
                        form.password.data,
                        form.is_valid.data)
            db.session.add(user)
            db.session.commit()
            flash('Added successfully!')
            return redirect(url_for('admin.manager_user'))
    except:
        flash('The username you entered already exists!', category='error')
    return render_template('admin/user_add.html', form=form)


@admin.route('/user/delete/<int:_id>')
@admin_login_require
def delete_user(_id=None):
    try:
        user = User.query.get(_id)
        db.session.delete(user)
        db.session.commit()
        flash('Deleted successfully!')
        return redirect(url_for('admin.manager_user'))
    except:
        flash('Deletion failed!', category='error')


@admin.route('/user/update/<int:_id>', methods=['GET', 'POST'])
@admin_login_require
def update_user(_id):
    user = User.query.get(_id)
    if user is None:
        return redirect(url_for('admin.manager_user'))
    form = UserForm(obj=user)
    if form.validate_on_submit():
        try:
            user.username = form.username.data
            user.password = generate_password_hash(form.password.data)
            user.is_valid = form.is_valid.data
            db.session.add(user)
            db.session.commit()
            flash('Administrator updated successfully')
        except:
            flash('The username you entered already exists!', category='error')
    return render_template('admin/user_update.html', form=form)


@admin.route('/detail')
@admin_login_require
def news_detail():
    news_list = News.query.all()
    return render_template('admin/news_detail.html', news_list=news_list)


@admin.route('/detail/add', methods=['GET', 'POST'])
@admin_login_require
def news_add():
    form = NewsForm()
    form.created_at.data = datetime.now()
    try:
        if form.validate_on_submit():
            news = News(form.title.data,
                        form.content.data,
                        form.types.data,
                        form.img_url.data,
                        form.author.data,
                        form.view_count.data,
                        form.created_at.data,
                        form.is_valid.data,
                        form.is_recommend.data)
            db.session.add(news)
            db.session.commit()
            flash('News added successfully!')
            return redirect(url_for('admin.news_detail'))
    except:
        flash('Failed to add news!', category='error')
    return render_template('admin/news_add.html', form=form)


# --- file: django_emailsupport/management/commands/download_emails.py
# (repo: rosti-cz/django-emailsupport, license: MIT) ---

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

import logging

from django.core.management.base import BaseCommand

from django_emailsupport.processor import download_and_save

logger = logging.getLogger('default')


class Command(BaseCommand):
    help = 'Download emails'

    def handle(self, *args, **options):
        try:
            download_and_save()
        except Exception as e:
            logger.error(str(e), extra={'exception': e, 'stack': True})


# --- file: services/app/src/concrete_data/__init__.py (repo: yingw787/tinydevcrm-saas-starter, license: MIT) ---

"""
Concrete data service.

This Django app manages API endpoints related to managing "concrete data", or
data that serves as the foundational sources of truth for users. This is opposed
to "derived data", which is data computed via mathematical, logical /
relational, or other types of transformations. For example, a materialized view
would be considered "derived data", while a CSV upload would be considered
"concrete data".

Making this distinction helps ensure application data flow is unitary, and that
consequently, underlying data pipelines are acyclic. This methodology may reduce
the likelihood of data corruption via concurrency / parallelism or other concerns,
and helps describe the data model more clearly.
"""


# --- file: openpype/modules/job_queue/__init__.py (repo: jonclothcat/OpenPype, license: MIT) ---

from .module import JobQueueModule

__all__ = (
    "JobQueueModule",
)


# --- file: dbt/adapters/synapse/connections.py (repo: swanjson/dbt-synapse, license: MIT) ---

from dataclasses import dataclass
from dbt.adapters.sqlserver import (SQLServerConnectionManager,
                                    SQLServerCredentials)


@dataclass
class SynapseCredentials(SQLServerCredentials):

    @property
    def type(self):
        return "synapse"


class SynapseConnectionManager(SQLServerConnectionManager):
    TYPE = "synapse"
    TOKEN = None


# --- file: docs/dc/singleton/app.py (repo: ztane/wired, license: MIT) ---

# app.py
from dataclasses import dataclass, field

from venusian import Scanner
from wired import ServiceRegistry

from . import models


@dataclass
class App:
    registry: ServiceRegistry = field(default_factory=ServiceRegistry)

    def scan(self):
        # Look for decorators
        scanner = Scanner(registry=self.registry)
        scanner.scan(models)
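The `field(default_factory=ServiceRegistry)` default matters: a mutable default object would be shared by every `App`, so the factory is called once per instance instead. A stdlib-only sketch of the behavior (`Registry` is a hypothetical stand-in for `wired.ServiceRegistry`):

```python
from dataclasses import dataclass, field


class Registry:
    # Hypothetical stand-in for wired.ServiceRegistry.
    def __init__(self):
        self.services = []


@dataclass
class App:
    # default_factory runs once per instance, so two Apps never
    # share the same mutable registry object.
    registry: Registry = field(default_factory=Registry)


a, b = App(), App()
a.registry.services.append("db")
```

After this runs, `a.registry` holds one service while `b.registry` is still empty, which would not be true with a shared class-level default.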
| 20.055556 | 70 | 0.734072 | 41 | 361 | 6.439024 | 0.536585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202216 | 361 | 17 | 71 | 21.235294 | 0.916667 | 0.072022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6f477234e17906c81082877342eabcd84e6ec08b | 451 | py | Python | setup.py | DeveloperHacker/Live-Plotter | 185cbc182bf66df52c4a4489fc4fda4ec9eaed1c | [
"MIT"
] | null | null | null | setup.py | DeveloperHacker/Live-Plotter | 185cbc182bf66df52c4a4489fc4fda4ec9eaed1c | [
"MIT"
] | null | null | null | setup.py | DeveloperHacker/Live-Plotter | 185cbc182bf66df52c4a4489fc4fda4ec9eaed1c | [
"MIT"
] | null | null | null | from setuptools import setup

setup(
name='live-plotter',
version='1.0',
packages=[
'live_plotter',
'live_plotter.base',
'live_plotter.proxy'
],
url='https://github.com/DeveloperHacker/Live-Plotter',
license='MIT',
author='HackerMadCat',
author_email='hacker.mad.cat@gmail.com',
description='Tiny live plotting library',
install_requires=[
'matplotlib',
'numpy'
],
)
| 21.47619 | 58 | 0.609756 | 47 | 451 | 5.744681 | 0.744681 | 0.203704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005865 | 0.243902 | 451 | 20 | 59 | 22.55 | 0.785924 | 0 | 0 | 0.105263 | 0 | 0 | 0.419069 | 0.053215 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6f4c31a8d9a834004260fdf94482331b5334a57b | 20,008 | py | Python | applications/ParticleMechanicsApplication/tests/test_search_mpm_particle.py | lcirrott/Kratos | 8406e73e0ad214c4f89df4e75e9b29d0eb4a47ea | [
"BSD-4-Clause"
] | 2 | 2019-10-25T09:28:10.000Z | 2019-11-21T12:51:46.000Z | applications/ParticleMechanicsApplication/tests/test_search_mpm_particle.py | lcirrott/Kratos | 8406e73e0ad214c4f89df4e75e9b29d0eb4a47ea | [
"BSD-4-Clause"
] | 13 | 2019-10-07T12:06:51.000Z | 2020-02-18T08:48:33.000Z | applications/ParticleMechanicsApplication/tests/test_search_mpm_particle.py | lcirrott/Kratos | 8406e73e0ad214c4f89df4e75e9b29d0eb4a47ea | [
"BSD-4-Clause"
] | null | null | null | from __future__ import print_function, absolute_import, division

import KratosMultiphysics
import KratosMultiphysics.ParticleMechanicsApplication as KratosParticle
import KratosMultiphysics.KratosUnittest as KratosUnittest


class TestSearchMPMParticle(KratosUnittest.TestCase):
def _generate_particle_element(self, current_model, dimension, geometry_element, is_structured, is_fine=False):
KratosMultiphysics.Logger.GetDefaultOutput().SetSeverity(KratosMultiphysics.Logger.Severity.WARNING)
# Initialize model part
## Material model part definition
material_point_model_part = current_model.CreateModelPart("dummy_name")
material_point_model_part.ProcessInfo.SetValue(KratosMultiphysics.DOMAIN_SIZE, dimension)
## Initial material model part definition
initial_mesh_model_part = current_model.CreateModelPart("Initial_dummy_name")
initial_mesh_model_part.ProcessInfo.SetValue(KratosMultiphysics.DOMAIN_SIZE, dimension)
## Grid model part definition
grid_model_part = current_model.CreateModelPart("Background_Grid")
grid_model_part.ProcessInfo.SetValue(KratosMultiphysics.DOMAIN_SIZE, dimension)
# Create Background Grid
sub_background = grid_model_part.CreateSubModelPart("test")
if is_structured:
self._create_background_nodes_structured(sub_background, dimension, geometry_element)
else:
self._create_background_nodes_unstructured(sub_background, dimension, geometry_element, is_fine)
        self._create_background_elements(sub_background, dimension, geometry_element, is_structured)
# Create element and nodes
sub_mp = initial_mesh_model_part.CreateSubModelPart("test")
sub_mp.GetProperties()[1].SetValue(KratosParticle.PARTICLES_PER_ELEMENT, 1)
if is_structured:
self._create_nodes_structured(sub_mp, dimension, geometry_element)
else:
self._create_nodes_unstructured(sub_mp, dimension, geometry_element, is_fine)
        self._create_elements(sub_mp, dimension, geometry_element)
# Set active
KratosMultiphysics.VariableUtils().SetFlag(KratosMultiphysics.ACTIVE, True, initial_mesh_model_part.Elements)
# Generate MP Elements
KratosParticle.GenerateMaterialPointElement(grid_model_part, initial_mesh_model_part, material_point_model_part, False, False)

    def _create_nodes_structured(self, model_part, dimension, geometry_element):
if geometry_element == "Triangle":
model_part.CreateNewNode(1, 0.0, 0.0, 0.0)
model_part.CreateNewNode(2, 1.0, 0.0, 0.0)
model_part.CreateNewNode(3, 0.0, 1.0, 0.0)
if (dimension == 3):
model_part.CreateNewNode(4, 0.0, 0.0, 1.0)
elif geometry_element == "Quadrilateral":
model_part.CreateNewNode(1, -0.5, -0.5, 0.0)
model_part.CreateNewNode(2, 0.5, -0.5, 0.0)
model_part.CreateNewNode(3, 0.5, 0.5, 0.0)
model_part.CreateNewNode(4, -0.5, 0.5, 0.0)
if (dimension == 3):
model_part.CreateNewNode(5, -0.5, -0.5, 1.0)
model_part.CreateNewNode(6, 0.5, -0.5, 1.0)
model_part.CreateNewNode(7, 0.5, 0.5, 1.0)
model_part.CreateNewNode(8, -0.5, 0.5, 1.0)

    def _create_background_nodes_structured(self, model_part, dimension, geometry_element):
self._create_nodes_structured(model_part, dimension, geometry_element)
if geometry_element == "Triangle":
model_part.CreateNewNode(5, 1.0, 1.0, 0.0)
if (dimension == 3):
model_part.CreateNewNode(6, 1.0, 0.0, 1.0)
model_part.CreateNewNode(7, 0.0, 1.0, 1.0)
model_part.CreateNewNode(8, 1.0, 1.0, 1.0)
elif geometry_element == "Quadrilateral":
model_part.CreateNewNode(9 , 1.5, -0.5, 0.0)
model_part.CreateNewNode(10, 1.5, 0.5, 0.0)
if (dimension == 3):
model_part.CreateNewNode(11, 1.5, -0.5, 1.0)
model_part.CreateNewNode(12, 1.5, 0.5, 1.0)

    def _create_nodes_unstructured(self, model_part, dimension, geometry_element, is_fine):
        if is_fine:
            modulus = 1.e-7
        else:
            modulus = 1
if geometry_element == "Triangle":
model_part.CreateNewNode(1, 0.9*modulus, 2.9*modulus, 0.0*modulus)
model_part.CreateNewNode(2, 0.0*modulus, 0.0*modulus, 0.0*modulus)
model_part.CreateNewNode(3, 2.9*modulus, 2.4*modulus, 0.0*modulus)
if (dimension == 3):
model_part.CreateNewNode(4, 0.5*modulus, 0.5*modulus, 1.5*modulus)
elif geometry_element == "Quadrilateral":
model_part.CreateNewNode(1, -0.734*modulus, -0.621*modulus, 0.0*modulus)
model_part.CreateNewNode(2, 0.497*modulus, -0.432*modulus, 0.0*modulus)
model_part.CreateNewNode(3, 0.587*modulus, 0.402*modulus, 0.0*modulus)
model_part.CreateNewNode(4, -0.809*modulus, 0.522*modulus, 0.0*modulus)
if (dimension == 3):
model_part.CreateNewNode(5, -0.621*modulus, -0.734*modulus, 0.93*modulus)
model_part.CreateNewNode(6, 0.432*modulus, -0.497*modulus, 1.11*modulus)
model_part.CreateNewNode(7, 0.402*modulus, 0.587*modulus, 0.89*modulus)
model_part.CreateNewNode(8, -0.522*modulus, 0.809*modulus, 1.21*modulus)

    def _create_background_nodes_unstructured(self, model_part, dimension, geometry_element, is_fine):
        self._create_nodes_unstructured(model_part, dimension, geometry_element, is_fine)
        if is_fine:
            modulus = 1.e-7
        else:
            modulus = 1
if geometry_element == "Triangle":
model_part.CreateNewNode(5, 2.1*modulus, 0.1*modulus, 0.0*modulus)
elif geometry_element == "Quadrilateral":
model_part.CreateNewNode(9, 1.343*modulus, -0.451*modulus, 0.0*modulus)
model_part.CreateNewNode(10, 1.512*modulus, 0.392*modulus, 0.0*modulus)
if (dimension == 3):
model_part.CreateNewNode(11, 1.742*modulus, -0.620*modulus, 0.999*modulus)
model_part.CreateNewNode(12, 1.520*modulus, 0.671*modulus, 1.120*modulus)

    def _create_elements(self, model_part, dimension, geometry_element):
if geometry_element == "Triangle":
if (dimension == 2):
model_part.CreateNewElement("UpdatedLagrangian2D3N", 1, [1,2,3], model_part.GetProperties()[1])
if (dimension == 3):
model_part.CreateNewElement("UpdatedLagrangian3D4N", 1, [1,2,3,4], model_part.GetProperties()[1])
elif geometry_element == "Quadrilateral":
if (dimension == 2):
model_part.CreateNewElement("UpdatedLagrangian2D4N", 1, [1,2,3,4], model_part.GetProperties()[1])
if (dimension == 3):
model_part.CreateNewElement("UpdatedLagrangian3D8N", 1, [1,2,3,4,5,6,7,8], model_part.GetProperties()[1])

    def _create_background_elements(self, model_part, dimension, geometry_element, is_structured):
self._create_elements(model_part, dimension, geometry_element)
if geometry_element == "Triangle":
if (dimension == 2):
model_part.CreateNewElement("UpdatedLagrangian2D3N", 2, [2,3,5], model_part.GetProperties()[1])
if (dimension == 3):
if (is_structured):
model_part.CreateNewElement("UpdatedLagrangian3D4N", 2, [2,8,4,6], model_part.GetProperties()[1])
model_part.CreateNewElement("UpdatedLagrangian3D4N", 3, [4,8,3,7], model_part.GetProperties()[1])
model_part.CreateNewElement("UpdatedLagrangian3D4N", 4, [2,5,3,8], model_part.GetProperties()[1])
model_part.CreateNewElement("UpdatedLagrangian3D4N", 5, [8,3,2,4], model_part.GetProperties()[1])
else:
model_part.CreateNewElement("UpdatedLagrangian3D4N", 2, [2,3,5,4], model_part.GetProperties()[1])
elif geometry_element == "Quadrilateral":
if (dimension == 2):
model_part.CreateNewElement("UpdatedLagrangian2D4N", 2, [2,9,10,3], model_part.GetProperties()[1])
if (dimension == 3):
model_part.CreateNewElement("UpdatedLagrangian3D8N", 2, [2,9,10,3,6,11,12,7], model_part.GetProperties()[1])

    def _move_and_search_element(self, current_model, new_coordinate, max_num_results=1000, specific_tolerance=1.e-5):
# Get model part
material_point_model_part = current_model.GetModelPart("dummy_name")
grid_model_part = current_model.GetModelPart("Background_Grid")
# Apply before search
for mpm in material_point_model_part.Elements:
mpm.SetValue(KratosParticle.MP_COORD, new_coordinate)
# Search element
KratosParticle.SearchElement(grid_model_part, material_point_model_part, max_num_results, specific_tolerance)

    def _check_connectivity(self, current_model, expected_connectivity_node=[]):
# Get model part
material_point_model_part = current_model.GetModelPart("dummy_name")
grid_model_part = current_model.GetModelPart("Background_Grid")
# Check the searched node as expected connectivity
if not expected_connectivity_node:
for mpm in material_point_model_part.Elements:
self.assertEqual(mpm.GetNodes(), [])
else:
for mpm in material_point_model_part.Elements:
                for i in range(len(expected_connectivity_node)):
self.assertEqual(mpm.GetNode(i).Id, grid_model_part.GetNode(expected_connectivity_node[i]).Id)
self.assertEqual(mpm.GetNode(i).X, grid_model_part.GetNode(expected_connectivity_node[i]).X)
self.assertEqual(mpm.GetNode(i).Y, grid_model_part.GetNode(expected_connectivity_node[i]).Y)
self.assertEqual(mpm.GetNode(i).Z, grid_model_part.GetNode(expected_connectivity_node[i]).Z)

    def test_SearchMPMParticleTriangle2DStructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Triangle", is_structured=True)
new_coordinate = [0.5, 0.5, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3])
new_coordinate = [0.50001, 0.50001, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,3,5])
new_coordinate = [1.00001, 1.00001, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleTriangle3DStructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Triangle", is_structured=True)
new_coordinate = [0.5, 0.25, 0.20]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [0.90, 0.55, 0.90]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,8,4,6])
new_coordinate = [0.10, 0.90, 0.55]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [4,8,3,7])
new_coordinate = [0.90, 0.90, 0.55]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,5,3,8])
new_coordinate = [0.50, 0.50, 0.50]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [8,3,2,4])
new_coordinate = [1.0001, 1.0001, 1.0001]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral2DStructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Quadrilateral", is_structured=True)
new_coordinate = [-0.11111, 0.12345, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [0.6, 0.12345, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3])
new_coordinate = [1.00001, 1.00001, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral3DStructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Quadrilateral", is_structured=True)
new_coordinate = [0.5, 0.25, 0.20]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4,5,6,7,8])
new_coordinate = [0.7, 0.35, 0.3]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3,6,11,12,7])
new_coordinate = [0.50001, 0.50001, 0.50001]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleTriangle2DUnstructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Triangle", is_structured=False)
new_coordinate = [1.31967, 1.85246, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3])
new_coordinate = [1.72951, 0.491803, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,3,5])
new_coordinate = [3.00001, 3.00001, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleTriangle3DUnstructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Triangle", is_structured=False)
new_coordinate = [1.31967, 1.85246, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [1.72951, 0.491803, 0.1]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,3,5,4])
new_coordinate = [3.00001, 3.00001, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral2DUnstructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Quadrilateral", is_structured=False)
new_coordinate = [-0.11111, 0.12345, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [0.6, 0.12345, 1.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3])
new_coordinate = [1.00001, 1.00001, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral3DUnstructured(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Quadrilateral", is_structured=False)
new_coordinate = [0.5, 0.25, 0.20]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4,5,6,7,8])
new_coordinate = [0.7, 0.35, 0.3]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3,6,11,12,7])
new_coordinate = [0.70001, 0.20001, 1.20001]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleTriangle2DUnstructuredFine(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Triangle", is_structured=False, is_fine=True)
new_coordinate = [1.31967e-7, 1.85246e-7, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3])
new_coordinate = [1.72951e-7, 0.491803e-7, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,3,5])
new_coordinate = [3.00001e-7, 3.00001e-7, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleTriangle3DUnstructuredFine(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Triangle", is_structured=False, is_fine=True)
new_coordinate = [1.31967e-7, 1.85246e-7, 1.0e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [1.72951e-7, 0.491803e-7, 0.1e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,3,5,4])
new_coordinate = [3.00001e-7, 3.00001e-7, 1.0e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral2DUnstructuredFine(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=2, geometry_element="Quadrilateral", is_structured=False, is_fine=True)
new_coordinate = [-0.11111e-7, 0.12345e-7, 1.0e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4])
new_coordinate = [0.6e-7, 0.12345e-7, 1.0e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3])
new_coordinate = [1.00001e-7, 1.00001e-7, 0.0]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)

    def test_SearchMPMParticleQuadrilateral3DUnstructuredFine(self):
current_model = KratosMultiphysics.Model()
self._generate_particle_element(current_model, dimension=3, geometry_element="Quadrilateral", is_structured=False, is_fine=True)
new_coordinate = [0.5e-7, 0.25e-7, 0.20e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [1,2,3,4,5,6,7,8])
new_coordinate = [0.7e-7, 0.35e-7, 0.3e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model, [2,9,10,3,6,11,12,7])
new_coordinate = [0.70001e-7, 0.20001e-7, 1.20001e-7]
self._move_and_search_element(current_model, new_coordinate)
self._check_connectivity(current_model)


if __name__ == '__main__':
KratosUnittest.main()
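The tests above all follow the same build-grid / move-particle / check-connectivity helper pattern. A minimal stdlib-`unittest` sketch of that structure, where `find_element` is a hypothetical 1D stand-in for `KratosParticle.SearchElement` (not the real API):

```python
import unittest


def find_element(elements, point):
    # Hypothetical stand-in for the search: return the connectivity of
    # the first 1D "element" (interval a..b) containing the point.
    for connectivity, (a, b) in elements.items():
        if a <= point <= b:
            return list(connectivity)
    return []  # particle left the background grid


class TestSearch(unittest.TestCase):
    def _generate(self):
        # Background "grid": two segments sharing node 2.
        return {(1, 2): (0.0, 1.0), (2, 3): (1.0, 2.0)}

    def _move_and_search(self, elements, point):
        return find_element(elements, point)

    def _check_connectivity(self, result, expected=[]):
        # Empty expectation means "no containing element found",
        # mirroring the Kratos tests above.
        self.assertEqual(result, expected)

    def test_inside_first_element(self):
        elements = self._generate()
        self._check_connectivity(self._move_and_search(elements, 0.5), [1, 2])

    def test_outside_grid(self):
        elements = self._generate()
        self._check_connectivity(self._move_and_search(elements, 5.0))
```

Run with `python -m unittest` to execute both cases; the last assertion in each Kratos test, which passes no expected connectivity, corresponds to `test_outside_grid` here.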
| 50.271357 | 136 | 0.692073 | 2,546 | 20,008 | 5.128437 | 0.077769 | 0.102933 | 0.074213 | 0.06127 | 0.790381 | 0.746649 | 0.717163 | 0.689362 | 0.61132 | 0.579766 | 0 | 0.070509 | 0.20042 | 20,008 | 397 | 137 | 50.397985 | 0.745656 | 0.015644 | 0 | 0.491582 | 0 | 0 | 0.031152 | 0.012806 | 0 | 0 | 0 | 0 | 0.016835 | 1 | 0.070707 | false | 0 | 0.013468 | 0 | 0.087542 | 0.003367 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6f53d23c5cd28b4022024b044588e71e78db9e88 | 20,693 | py | Python | botenv/lib/python3.9/site-packages/telegram/ext/commandhandler.py | 0xtuytuy/unit-crypto-ski-week-poap-bot | 9bab0a6013a29db9ce76311d4f6fa1d0922ac5c1 | [
"MIT"
] | null | null | null | botenv/lib/python3.9/site-packages/telegram/ext/commandhandler.py | 0xtuytuy/unit-crypto-ski-week-poap-bot | 9bab0a6013a29db9ce76311d4f6fa1d0922ac5c1 | [
"MIT"
] | null | null | null | botenv/lib/python3.9/site-packages/telegram/ext/commandhandler.py | 0xtuytuy/unit-crypto-ski-week-poap-bot | 9bab0a6013a29db9ce76311d4f6fa1d0922ac5c1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#
# A library that provides a Python interface to the Telegram Bot API
# Copyright (C) 2015-2022
# Leandro Toledo de Souza <devs@python-telegram-bot.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser Public License for more details.
#
# You should have received a copy of the GNU Lesser Public License
# along with this program. If not, see [http://www.gnu.org/licenses/].
"""This module contains the CommandHandler and PrefixHandler classes."""
import re
import warnings
from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Tuple, TypeVar, Union

from telegram import MessageEntity, Update
from telegram.ext import BaseFilter, Filters
from telegram.utils.deprecate import TelegramDeprecationWarning
from telegram.utils.types import SLT
from telegram.utils.helpers import DefaultValue, DEFAULT_FALSE

from .utils.types import CCT
from .handler import Handler

if TYPE_CHECKING:
    from telegram.ext import Dispatcher

RT = TypeVar('RT')

class CommandHandler(Handler[Update, CCT]):
"""Handler class to handle Telegram commands.
Commands are Telegram messages that start with ``/``, optionally followed by an ``@`` and the
bot's name and/or some additional text. The handler will add a ``list`` to the
:class:`CallbackContext` named :attr:`CallbackContext.args`. It will contain a list of strings,
which is the text following the command split on single or consecutive whitespace characters.
By default the handler listens to messages as well as edited messages. To change this behavior
use ``~Filters.update.edited_message`` in the filter argument.
Note:
* :class:`CommandHandler` does *not* handle (edited) channel posts.
* :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a :obj:`dict` you
can use to keep any data in will be sent to the :attr:`callback` function. Related to
either the user or the chat that the update was sent in. For each update from the same
user or in the same chat, it will be the same :obj:`dict`.
Note that this is DEPRECATED, and you should use context based callbacks. See
https://git.io/fxJuV for more info.
Warning:
When setting ``run_async`` to :obj:`True`, you cannot rely on adding custom
attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
Args:
command (:class:`telegram.utils.types.SLT[str]`):
The command or list of commands this handler should listen for.
Limitations are the same as described here https://core.telegram.org/bots#commands
callback (:obj:`callable`): The callback function for this handler. Will be called when
:attr:`check_update` has determined that an update should be processed by this handler.
Callback signature for context based API:
``def callback(update: Update, context: CallbackContext)``
The return value of the callback is usually ignored except for the special case of
:class:`telegram.ext.ConversationHandler`.
filters (:class:`telegram.ext.BaseFilter`, optional): A filter inheriting from
:class:`telegram.ext.filters.BaseFilter`. Standard filters can be found in
:class:`telegram.ext.filters.Filters`. Filters can be combined using bitwise
operators (& for and, | for or, ~ for not).
allow_edited (:obj:`bool`, optional): Determines whether the handler should also accept
edited messages. Default is :obj:`False`.
DEPRECATED: Edited is allowed by default. To change this behavior use
``~Filters.update.edited_message``.
pass_args (:obj:`bool`, optional): Determines whether the handler should be passed the
arguments passed to the command as a keyword argument called ``args``. It will contain
a list of strings, which is the text following the command split on single or
consecutive whitespace characters. Default is :obj:`False`
DEPRECATED: Please switch to context based callbacks.
pass_update_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``update_queue`` will be passed to the callback function. It will be the ``Queue``
instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`
that contains new updates which can be used to insert updates. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_job_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``job_queue`` will be passed to the callback function. It will be a
:class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`
which can be used to schedule new jobs. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_user_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``user_data`` will be passed to the callback function. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_chat_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``chat_data`` will be passed to the callback function. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
Defaults to :obj:`False`.
Raises:
ValueError: when command is too long or has illegal chars.
Attributes:
command (:class:`telegram.utils.types.SLT[str]`):
The command or list of commands this handler should listen for.
Limitations are the same as described here https://core.telegram.org/bots#commands
callback (:obj:`callable`): The callback function for this handler.
filters (:class:`telegram.ext.BaseFilter`): Optional. Only allow updates with these
Filters.
allow_edited (:obj:`bool`): Determines whether the handler should also accept
edited messages.
pass_args (:obj:`bool`): Determines whether the handler should be passed
``args``.
pass_update_queue (:obj:`bool`): Determines whether ``update_queue`` will be
passed to the callback function.
pass_job_queue (:obj:`bool`): Determines whether ``job_queue`` will be passed to
the callback function.
pass_user_data (:obj:`bool`): Determines whether ``user_data`` will be passed to
the callback function.
pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to
the callback function.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
"""
__slots__ = ('command', 'filters', 'pass_args')

    def __init__(
self,
command: SLT[str],
callback: Callable[[Update, CCT], RT],
filters: BaseFilter = None,
allow_edited: bool = None,
pass_args: bool = False,
pass_update_queue: bool = False,
pass_job_queue: bool = False,
pass_user_data: bool = False,
pass_chat_data: bool = False,
run_async: Union[bool, DefaultValue] = DEFAULT_FALSE,
):
super().__init__(
callback,
pass_update_queue=pass_update_queue,
pass_job_queue=pass_job_queue,
pass_user_data=pass_user_data,
pass_chat_data=pass_chat_data,
run_async=run_async,
)
if isinstance(command, str):
self.command = [command.lower()]
else:
self.command = [x.lower() for x in command]
for comm in self.command:
if not re.match(r'^[\da-z_]{1,32}$', comm):
raise ValueError('Command is not a valid bot command')
if filters:
self.filters = Filters.update.messages & filters
else:
self.filters = Filters.update.messages
if allow_edited is not None:
warnings.warn(
'allow_edited is deprecated. See https://git.io/fxJuV for more info',
TelegramDeprecationWarning,
stacklevel=2,
)
if not allow_edited:
self.filters &= ~Filters.update.edited_message
self.pass_args = pass_args

    def check_update(
self, update: object
) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, Dict]]]]]:
"""Determines whether an update should be passed to this handlers :attr:`callback`.
Args:
update (:class:`telegram.Update` | :obj:`object`): Incoming update.
Returns:
:obj:`list`: The list of args for the handler.
"""
if isinstance(update, Update) and update.effective_message:
message = update.effective_message
if (
message.entities
and message.entities[0].type == MessageEntity.BOT_COMMAND
and message.entities[0].offset == 0
and message.text
and message.bot
):
command = message.text[1 : message.entities[0].length]
args = message.text.split()[1:]
command_parts = command.split('@')
command_parts.append(message.bot.username)
if not (
command_parts[0].lower() in self.command
and command_parts[1].lower() == message.bot.username.lower()
):
return None
filter_result = self.filters(update)
if filter_result:
return args, filter_result
return False
return None
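# The matching performed by check_update above can be sketched as a
# standalone helper. parse_command below is illustrative only, not part
# of the python-telegram-bot API:
#
# ```python
# import re
#
# # "/cmd@bot_username arg1 arg2" -> command part plus args, as in
# # check_update: the command must be valid, and an explicit @mention
# # must name this bot.
# COMMAND_RE = re.compile(r'^[\da-z_]{1,32}$')
#
#
# def parse_command(text, bot_username, known_commands):
#     """Return the args list if text is a known /command for this bot, else None."""
#     if not text.startswith('/'):
#         return None
#     first, *args = text.split()
#     command, _, mention = first[1:].partition('@')
#     if not COMMAND_RE.match(command.lower()):
#         return None
#     if mention and mention.lower() != bot_username.lower():
#         return None  # command addressed to a different bot
#     if command.lower() not in known_commands:
#         return None
#     return args
#
#
# parse_command('/start@MyBot a b', 'MyBot', ['start'])  # -> ['a', 'b']
# ```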

    def collect_optional_args(
self,
dispatcher: 'Dispatcher',
update: Update = None,
check_result: Optional[Union[bool, Tuple[List[str], Optional[bool]]]] = None,
) -> Dict[str, object]:
"""Provide text after the command to the callback the ``args`` argument as list, split on
single whitespaces.
"""
optional_args = super().collect_optional_args(dispatcher, update)
if self.pass_args and isinstance(check_result, tuple):
optional_args['args'] = check_result[0]
return optional_args

    def collect_additional_context(
self,
context: CCT,
update: Update,
dispatcher: 'Dispatcher',
check_result: Optional[Union[bool, Tuple[List[str], Optional[bool]]]],
) -> None:
"""Add text after the command to :attr:`CallbackContext.args` as list, split on single
whitespaces and add output of data filters to :attr:`CallbackContext` as well.
"""
if isinstance(check_result, tuple):
context.args = check_result[0]
if isinstance(check_result[1], dict):
context.update(check_result[1])


class PrefixHandler(CommandHandler):
"""Handler class to handle custom prefix commands.
    This is an intermediate handler between :class:`MessageHandler` and :class:`CommandHandler`.
It supports configurable commands with the same options as CommandHandler. It will respond to
every combination of :attr:`prefix` and :attr:`command`. It will add a ``list`` to the
:class:`CallbackContext` named :attr:`CallbackContext.args`. It will contain a list of strings,
which is the text following the command split on single or consecutive whitespace characters.
    Examples:
        Single prefix and command:

        .. code:: python

            PrefixHandler('!', 'test', callback)  # will respond to '!test'.

        Multiple prefixes, single command:

        .. code:: python

            PrefixHandler(['!', '#'], 'test', callback)  # will respond to '!test' and '#test'.

        Multiple prefixes and commands:

        .. code:: python

            PrefixHandler(['!', '#'], ['test', 'help'], callback)  # will respond to '!test', \
                '#test', '!help' and '#help'.
By default the handler listens to messages as well as edited messages. To change this behavior
use ``~Filters.update.edited_message``.
Note:
* :class:`PrefixHandler` does *not* handle (edited) channel posts.
* :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a :obj:`dict` you
can use to keep any data in will be sent to the :attr:`callback` function. Related to
either the user or the chat that the update was sent in. For each update from the same
user or in the same chat, it will be the same :obj:`dict`.
Note that this is DEPRECATED, and you should use context based callbacks. See
https://git.io/fxJuV for more info.
Warning:
When setting ``run_async`` to :obj:`True`, you cannot rely on adding custom
attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
Args:
prefix (:class:`telegram.utils.types.SLT[str]`):
The prefix(es) that will precede :attr:`command`.
command (:class:`telegram.utils.types.SLT[str]`):
The command or list of commands this handler should listen for.
callback (:obj:`callable`): The callback function for this handler. Will be called when
:attr:`check_update` has determined that an update should be processed by this handler.
Callback signature for context based API:
``def callback(update: Update, context: CallbackContext)``
The return value of the callback is usually ignored except for the special case of
:class:`telegram.ext.ConversationHandler`.
filters (:class:`telegram.ext.BaseFilter`, optional): A filter inheriting from
:class:`telegram.ext.filters.BaseFilter`. Standard filters can be found in
:class:`telegram.ext.filters.Filters`. Filters can be combined using bitwise
operators (& for and, | for or, ~ for not).
pass_args (:obj:`bool`, optional): Determines whether the handler should be passed the
arguments passed to the command as a keyword argument called ``args``. It will contain
a list of strings, which is the text following the command split on single or
consecutive whitespace characters. Default is :obj:`False`
DEPRECATED: Please switch to context based callbacks.
pass_update_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``update_queue`` will be passed to the callback function. It will be the ``Queue``
instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`
that contains new updates which can be used to insert updates. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_job_queue (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``job_queue`` will be passed to the callback function. It will be a
:class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`
which can be used to schedule new jobs. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_user_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``user_data`` will be passed to the callback function. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
pass_chat_data (:obj:`bool`, optional): If set to :obj:`True`, a keyword argument called
``chat_data`` will be passed to the callback function. Default is :obj:`False`.
DEPRECATED: Please switch to context based callbacks.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
Defaults to :obj:`False`.
Attributes:
callback (:obj:`callable`): The callback function for this handler.
filters (:class:`telegram.ext.BaseFilter`): Optional. Only allow updates with these
Filters.
pass_args (:obj:`bool`): Determines whether the handler should be passed
``args``.
pass_update_queue (:obj:`bool`): Determines whether ``update_queue`` will be
passed to the callback function.
pass_job_queue (:obj:`bool`): Determines whether ``job_queue`` will be passed to
the callback function.
pass_user_data (:obj:`bool`): Determines whether ``user_data`` will be passed to
the callback function.
pass_chat_data (:obj:`bool`): Determines whether ``chat_data`` will be passed to
the callback function.
run_async (:obj:`bool`): Determines whether the callback will run asynchronously.
"""
# 'prefix' is a class property, & 'command' is included in the superclass, so they're left out.
__slots__ = ('_prefix', '_command', '_commands')
def __init__(
self,
prefix: SLT[str],
command: SLT[str],
callback: Callable[[Update, CCT], RT],
filters: BaseFilter = None,
pass_args: bool = False,
pass_update_queue: bool = False,
pass_job_queue: bool = False,
pass_user_data: bool = False,
pass_chat_data: bool = False,
run_async: Union[bool, DefaultValue] = DEFAULT_FALSE,
):
self._prefix: List[str] = []
self._command: List[str] = []
self._commands: List[str] = []
super().__init__(
'nocommand',
callback,
filters=filters,
allow_edited=None,
pass_args=pass_args,
pass_update_queue=pass_update_queue,
pass_job_queue=pass_job_queue,
pass_user_data=pass_user_data,
pass_chat_data=pass_chat_data,
run_async=run_async,
)
self.prefix = prefix # type: ignore[assignment]
self.command = command # type: ignore[assignment]
self._build_commands()
@property
def prefix(self) -> List[str]:
"""
The prefixes that will precede :attr:`command`.
Returns:
List[:obj:`str`]
"""
return self._prefix
@prefix.setter
def prefix(self, prefix: Union[str, List[str]]) -> None:
if isinstance(prefix, str):
self._prefix = [prefix.lower()]
else:
self._prefix = prefix
self._build_commands()
@property # type: ignore[override]
def command(self) -> List[str]: # type: ignore[override]
"""
The list of commands this handler should listen for.
Returns:
List[:obj:`str`]
"""
return self._command
@command.setter
def command(self, command: Union[str, List[str]]) -> None:
if isinstance(command, str):
self._command = [command.lower()]
else:
self._command = command
self._build_commands()
def _build_commands(self) -> None:
self._commands = [x.lower() + y.lower() for x in self.prefix for y in self.command]
def check_update(
self, update: object
) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, Dict]]]]]:
"""Determines whether an update should be passed to this handlers :attr:`callback`.
Args:
update (:class:`telegram.Update` | :obj:`object`): Incoming update.
Returns:
:obj:`list`: The list of args for the handler.
"""
if isinstance(update, Update) and update.effective_message:
message = update.effective_message
if message.text:
text_list = message.text.split()
if text_list[0].lower() not in self._commands:
return None
filter_result = self.filters(update)
if filter_result:
return text_list[1:], filter_result
return False
return None
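The matching rule in ``check_update`` above boils down to: build every lowercased prefix+command combination, compare the first whitespace-separated token against that set, and hand the remaining tokens over as args. A minimal stand-alone sketch of that rule (hypothetical helper names, independent of the telegram library):

```python
def build_commands(prefixes, commands):
    # Every prefix+command combination, lowercased ('!' + 'test' -> '!test')
    return [p.lower() + c.lower() for p in prefixes for c in commands]

def match_command(text, prefixes, commands):
    # Return the args following a matched command, or None on no match
    tokens = text.split()
    if tokens and tokens[0].lower() in build_commands(prefixes, commands):
        return tokens[1:]
    return None

# match_command('!test a b', ['!', '#'], ['test', 'help']) -> ['a', 'b']
# match_command('?test a b', ['!', '#'], ['test', 'help']) -> None
```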
6f56d51e84a5a8d5db4d98c2b21687473eae0dde | 1,249 | py | Python | build/lib/Tests/Utils/test_argument_loader.py | AnaTomomi/tscfat | 74a89b61dc8ebd093212f356d8a20a3589eda789 | [
"MIT"
] | 1 | 2021-11-07T14:20:21.000Z | 2021-11-07T14:20:21.000Z | build/lib/Tests/Utils/test_argument_loader.py | AnaTomomi/tscfat | 74a89b61dc8ebd093212f356d8a20a3589eda789 | [
"MIT"
] | 11 | 2021-04-13T11:46:06.000Z | 2022-03-12T01:11:55.000Z | build/lib/Tests/Utils/test_argument_loader.py | AnaTomomi/tscfat | 74a89b61dc8ebd093212f356d8a20a3589eda789 | [
"MIT"
] | 3 | 2021-01-22T11:11:58.000Z | 2021-04-13T11:26:46.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Mar 31 22:52:40 2021
@author: ikaheia1
Tests for argument loader.
1) setup_pd should return a pandas dataframe
2) setup_ps should return a pandas series
3) setup_np should return a numpy array
"""
import numpy as np
import pandas as pd
from tscfat.Utils.argument_loader import setup_pd, setup_np, setup_ps
class TestProcessDecorator(object):
def test_setup_pd(self):
"""
Test that setup_pd function returns a pandas dataframe
Returns
-------
None.
"""
test_argument = setup_pd()
assert isinstance(test_argument, pd.DataFrame)
def test_setup_ps(self):
"""
        Test that setup_ps function returns a pandas series
Returns
-------
None.
"""
test_argument = setup_ps()
assert isinstance(test_argument, pd.Series)
def test_setup_np(self):
"""
        Test that setup_np function returns a numpy array.
Returns
-------
None.
"""
test_argument = setup_np()
assert isinstance(test_argument, np.ndarray)
6f5e5f6f42858592f225d4f7cd976dca6181ef04 | 739 | py | Python | src/spaceone/core/handler/mutation_handler.py | ku524/python-core | 2004338aca00f776f68f83d73d8dc933d40d9e9c | [
"Apache-2.0"
] | null | null | null | src/spaceone/core/handler/mutation_handler.py | ku524/python-core | 2004338aca00f776f68f83d73d8dc933d40d9e9c | [
"Apache-2.0"
] | null | null | null | src/spaceone/core/handler/mutation_handler.py | ku524/python-core | 2004338aca00f776f68f83d73d8dc933d40d9e9c | [
"Apache-2.0"
] | null | null | null | import logging
from spaceone.core.error import *
from spaceone.core import pygrpc
from spaceone.core import utils
from spaceone.core.transaction import Transaction
from spaceone.core.handler import BaseMutationHandler
_LOGGER = logging.getLogger(__name__)
class SpaceONEMutationHandler(BaseMutationHandler):
def __init__(self, transaction: Transaction, config: dict):
super().__init__(transaction, config)
self.uri_info = utils.parse_grpc_uri(self.config['uri'])
def request(self, transaction: Transaction, params):
user_type = transaction.get_meta('user_type')
pass
def response(self, transaction: Transaction, result):
user_type = transaction.get_meta('user_type')
pass
6f6dd8ae7e91c253d203dccbdd4345087e922703 | 4,766 | py | Python | api/collections/nodeman.py | brookylin/bk-sops | 6c0cf78879849921c4ff6ad6bf3bb82dfdf5b973 | [
"Apache-2.0"
] | 881 | 2019-03-25T02:45:42.000Z | 2022-03-30T09:10:49.000Z | api/collections/nodeman.py | m0re-work/bk-sops | d03ba8a4ee0781c6daaf0dd38a7369dc82669f7d | [
"Apache-2.0"
] | 3,303 | 2019-03-25T04:18:03.000Z | 2022-03-31T11:52:03.000Z | api/collections/nodeman.py | m0re-work/bk-sops | d03ba8a4ee0781c6daaf0dd38a7369dc82669f7d | [
"Apache-2.0"
] | 395 | 2019-03-25T02:53:36.000Z | 2022-03-31T08:37:28.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from django.conf import settings
import env
from api.client import BKComponentClient
NODEMAN_API_ENTRY = env.BK_NODEMAN_API_ENTRY or "{}/{}".format(settings.BK_PAAS_ESB_HOST, "api/c/compapi/v2/nodeman")
NODEMAN_API_ENTRY_V2 = env.BK_NODEMAN_API_ENTRY or "{}/{}".format(
settings.BK_PAAS_ESB_HOST, "api/c/compapi/{bk_api_ver}/nodeman/api".format(bk_api_ver=settings.DEFAULT_BK_API_VER),
)
def _get_nodeman_api(api_name):
return "{}/{}/".format(NODEMAN_API_ENTRY, api_name)
def _get_nodeman_api_v2(api_name):
return "{}/{}/".format(NODEMAN_API_ENTRY_V2, api_name)
class BKNodeManClient(BKComponentClient):
def create_task(self, bk_biz_id, bk_cloud_id, node_type, op_type, creator, hosts):
return self._request(
method="post",
url=_get_nodeman_api("create_task"),
data={
"bk_biz_id": bk_biz_id,
"bk_cloud_id": bk_cloud_id,
"node_type": node_type,
"op_type": op_type,
"creator": creator,
"hosts": hosts,
},
)
def get_task_info(self, bk_biz_id, job_id):
return self._request(
method="get", url=_get_nodeman_api("get_task_info"), data={"bk_biz_id": bk_biz_id, "job_id": job_id},
)
def get_log(self, host_id, bk_biz_id):
return self._request(
method="get", url=_get_nodeman_api("get_log"), data={"host_id": host_id, "bk_biz_id": bk_biz_id},
)
def search_host_plugin(self, bk_biz_id, pagesize, conditions):
return self._request(
method="post",
url=_get_nodeman_api_v2("plugin/search"),
data={"bk_biz_id": bk_biz_id, "pagesize": pagesize, "conditions": conditions},
)
def job_install(self, job_type, hosts, **kwargs):
data = {"job_type": job_type, "hosts": hosts}
data.update(kwargs)
return self._request(method="post", url=_get_nodeman_api_v2("job/install"), data=data)
def remove_host(self, bk_biz_id, bk_host_id, is_proxy):
return self._request(
method="post",
url=_get_nodeman_api_v2("remove_host"),
data={"bk_biz_id": bk_biz_id, "bk_host_id": bk_host_id, "is_proxy": is_proxy}, # 是否移除PROXY
)
def job_operate(self, job_type, bk_biz_id, bk_host_id):
return self._request(
method="post",
url=_get_nodeman_api_v2("job/operate"),
data={"job_type": job_type, "bk_biz_id": bk_biz_id, "bk_host_id": bk_host_id},
)
def job_details(self, job_id):
return self._request(method="post", url=_get_nodeman_api_v2("job/details"), data={"job_id": job_id})
def get_job_log(self, job_id, instance_id):
return self._request(
method="post", url=_get_nodeman_api_v2("job/log"), data={"job_id": job_id, "instance_id": instance_id},
)
    def cloud_list(self):
        return self._request(method="get", url=_get_nodeman_api_v2("cloud"), data={})
def ap_list(self):
return self._request(method="get", url=_get_nodeman_api_v2("ap"), data={})
def plugin_operate(self, params: dict):
return self._request(method="post", url=_get_nodeman_api_v2("plugin/operate"), data=params)
def plugin_process(self, category):
return self._request(method="post", url=_get_nodeman_api_v2("plugin/process"), data={"category": category})
def plugin_package(self, name, os):
return self._request(method="post", url=_get_nodeman_api_v2("plugin/package"), data={"name": name, "os": os})
def get_rsa_public_key(self, executor):
return self._request(
method="post",
url=_get_nodeman_api("core/api/encrypt_rsa/fetch_public_keys"),
data={
"bk_app_code": settings.APP_CODE,
"bk_app_secret": settings.SECRET_KEY,
"bk_username": executor,
"names": ["DEFAULT"],
},
)
6f798cdc99f0e788fea4f6ab4e7e9626cad90231 | 301 | py | Python | Analyzer/__init__.py | buxiangqimingle233/NoCPerformanceModel | 5125e80d95425a98b22dcb611cf45eeda82c63f0 | [
"MIT"
] | null | null | null | Analyzer/__init__.py | buxiangqimingle233/NoCPerformanceModel | 5125e80d95425a98b22dcb611cf45eeda82c63f0 | [
"MIT"
] | null | null | null | Analyzer/__init__.py | buxiangqimingle233/NoCPerformanceModel | 5125e80d95425a98b22dcb611cf45eeda82c63f0 | [
"MIT"
] | null | null | null | import os
import sys
root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(root)
sys.path.append(root + "/Driver")
sys.path.append(root + "/Estimator")
sys.path.append(root + "/CongManager")
sys.path.append(root + "/Util")
sys.path.append(root + "/Default")
print("WUHU")
489e8d14fa4611fac2b9cfc73fffb569c8a0872a | 1,577 | py | Python | tests/test_analysis/test_outliers/test_median_outliers.py | Carlosbogo/etna | b6210f0e79ee92aa9ae8ff4fcfb267be9fb7cc94 | [
"Apache-2.0"
] | 1 | 2021-11-11T21:18:42.000Z | 2021-11-11T21:18:42.000Z | tests/test_analysis/test_outliers/test_median_outliers.py | Carlosbogo/etna | b6210f0e79ee92aa9ae8ff4fcfb267be9fb7cc94 | [
"Apache-2.0"
] | null | null | null | tests/test_analysis/test_outliers/test_median_outliers.py | Carlosbogo/etna | b6210f0e79ee92aa9ae8ff4fcfb267be9fb7cc94 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pytest
from etna.analysis.outliers import get_anomalies_median
@pytest.mark.parametrize(
"window_size, alpha, right_anomal",
(
(10, 3, {"1": [np.datetime64("2021-01-11")], "2": [np.datetime64("2021-01-09"), np.datetime64("2021-01-27")]}),
(
10,
2,
{
"1": [np.datetime64("2021-01-11")],
"2": [np.datetime64("2021-01-09"), np.datetime64("2021-01-16"), np.datetime64("2021-01-27")],
},
),
(20, 2, {"1": [np.datetime64("2021-01-11")], "2": [np.datetime64("2021-01-09"), np.datetime64("2021-01-27")]}),
),
)
def test_median_outliers(window_size, alpha, right_anomal, outliers_tsds):
assert get_anomalies_median(ts=outliers_tsds, window_size=window_size, alpha=alpha) == right_anomal
@pytest.mark.parametrize("true_params", (["1", "2"],))
def test_interface_correct_args(true_params, outliers_tsds):
d = get_anomalies_median(ts=outliers_tsds, window_size=10, alpha=2)
assert isinstance(d, dict)
assert sorted(list(d.keys())) == sorted(true_params)
for i in d.keys():
for j in d[i]:
assert isinstance(j, np.datetime64)
def test_in_column(outliers_df_with_two_columns):
outliers = get_anomalies_median(ts=outliers_df_with_two_columns, in_column="feature", window_size=10)
expected = {"1": [np.datetime64("2021-01-08")], "2": [np.datetime64("2021-01-26")]}
for key in expected:
assert key in outliers
np.testing.assert_array_equal(outliers[key], expected[key])
| 37.547619 | 119 | 0.63792 | 220 | 1,577 | 4.381818 | 0.290909 | 0.161826 | 0.19917 | 0.224066 | 0.46473 | 0.280083 | 0.280083 | 0.280083 | 0.192946 | 0.192946 | 0 | 0.11478 | 0.193405 | 1,577 | 41 | 120 | 38.463415 | 0.643082 | 0 | 0 | 0.058824 | 0 | 0 | 0.114141 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 1 | 0.088235 | false | 0 | 0.088235 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
48b37827ff85024585b4c369afe3dd80f06a75b6 | 6,167 | py | Python | Ninja/WhiteBook/search_n_sort.py | cyandterry/Python-Study | b40e6c4db10da417e72247f61146f7570621106a | [
"MIT"
] | 61 | 2015-02-03T20:25:55.000Z | 2021-05-17T19:33:40.000Z | Ninja/WhiteBook/search_n_sort.py | cyandterry/Python-Study | b40e6c4db10da417e72247f61146f7570621106a | [
"MIT"
] | null | null | null | Ninja/WhiteBook/search_n_sort.py | cyandterry/Python-Study | b40e6c4db10da417e72247f61146f7570621106a | [
"MIT"
] | 37 | 2015-02-04T07:12:52.000Z | 2020-05-16T18:47:16.000Z | #!/usr/bin/env python
def swap(data_list, i1, i2):
temp = data_list[i1]
data_list[i1] = data_list[i2]
data_list[i2] = temp
def bubble_sort(data_list):
flag = True
for i in range( 1, len(data_list)):
flag = True
for j in range( 1, len(data_list)):
if data_list[j-1] > data_list[j]:
flag = False
swap(data_list, j, j-1)
print data_list
if flag:
break
def selection_sort(data_list):
for i in range( len(data_list)-1):
for j in range( i+1, len(data_list)):
if data_list[j] < data_list[i]:
swap(data_list, i, j)
print data_list
def insertion_sort(data_list):
for i in range( len(data_list) ):
temp = data_list[i]
k = i
        while k > 0 and data_list[k-1] > temp:
data_list[k] = data_list[k-1]
k -= 1
data_list[k] = temp
print data_list
def merge_sort(data_list, low, high):
if low < high:
middle = (low + high) / 2
merge_sort(data_list, low, middle)
merge_sort(data_list, middle+1, high)
merge(data_list, low, middle, high)
def merge(data_list, low, middle, high):
helper = [None] * len(data_list)
for i in range(low, high+1):
helper[i] = data_list[i]
helper_left = low
helper_right = middle + 1
current = low
while helper_left <= middle and helper_right <= high:
if helper[helper_left] <= helper[helper_right]:
data_list[current] = helper[helper_left]
helper_left += 1
else:
data_list[current] = helper[helper_right]
helper_right += 1
current += 1
remaining = middle - helper_left
for i in range(remaining+1):
data_list[current+i] = helper[helper_left+i]
# These are all for quick sort
def quick_sort(data_list, left, right):
index = partition(data_list, left, right)
if left < index - 1:
quick_sort(data_list, left, index - 1)
if index < right:
quick_sort(data_list, index, right)
def partition(data_list, left, right):
pivot = data_list[ (left + right ) / 2]
while left < right:
while data_list[left] < pivot:
left += 1
while data_list[right] > pivot:
right -= 1
if left <= right:
swap(data_list, left, right)
left += 1
right -= 1
return left
# Maybe heapsort
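The placeholder comment above can be filled in with a straightforward heap sort; a minimal sketch using the standard-library heapq module (returns a new sorted list rather than sorting in place):

```python
import heapq

def heap_sort(data_list):
    # Heapify a copy, then pop the minimum len(data_list) times
    heap = list(data_list)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

# heap_sort([12, 3, 7, 1]) -> [1, 3, 7, 12]
```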
# Binary Search
def binary_search_recur(data_list, i, low, high):
    if low > high:
        return False
    mid = (low + high) / 2
    if i == data_list[mid]:
        return True
    elif i < data_list[mid]:
        return binary_search_recur(data_list, i, low, mid-1)
    else:
        return binary_search_recur(data_list, i, mid+1, high)
def binary_search_iter(data_list, i):
low = 0
high = len(data_list) - 1
while low <= high:
mid = (low + high) / 2
if i == data_list[mid]:
return True
elif i < data_list[mid]:
high = mid - 1
else:
low = mid + 1
print low, mid, high
return False
# Q1 len(list_a) is much larger than len(list_b)
def merge_lists(list_a, list_b):
    index_a = len(list_a) - 1
    index_b = len(list_b) - 1
    list_a.extend([None] * len(list_b))  # make room for list_b at the end
    current = index_a + index_b + 1
    while index_a >= 0 and index_b >= 0:
        if list_a[index_a] <= list_b[index_b]:
            list_a[current] = list_b[index_b]
            index_b -= 1
        else:
            list_a[current] = list_a[index_a]
            index_a -= 1
        current -= 1
    while index_b >= 0:
        list_a[current] = list_b[index_b]
        index_b -= 1
        current -= 1
# Q2
def sort_anagrams(ana_list):
    # Sort so anagrams end up next to each other: two words are anagrams
    # iff their sorted letters match, so sort on that key.
    ana_list.sort(key=lambda word: sorted(word))
# Q3
def find_rotate(data_list, i, left, right):
    if left > right:
        return False
    mid = (left + right) / 2
    if data_list[mid] == i:
        return True
    # left half is sorted
    if data_list[mid] > data_list[left]:
        if data_list[left] <= i < data_list[mid]:
            return find_rotate(data_list, i, left, mid-1)
        return find_rotate(data_list, i, mid+1, right)
    # right half is sorted
    elif data_list[mid] < data_list[left]:
        if data_list[mid] < i <= data_list[right]:
            return find_rotate(data_list, i, mid+1, right)
        return find_rotate(data_list, i, left, mid-1)
    # duplicates at the split point: need to search both halves
    return (find_rotate(data_list, i, left, mid-1)
            or find_rotate(data_list, i, mid+1, right))
# Need to complete with the index
# Q5 Almost the same, but need to check if actually the same
# Need to notice the sequence of check
# 1. Check if empty
# 2. Check if correct
def search_empty_string(data_list, string, left, right):
    if left > right:
        return False
    mid = (left + right) / 2
if data_list[mid] is None:
current_left = mid - 1
current_right = mid + 1
while True:
if current_left < left and current_right > right:
return False
elif current_right <= right and data_list[current_right] is not None:
mid = current_right
break
elif current_left >= left and data_list[current_left] is not None:
mid = current_left
break
current_right += 1
current_left -= 1
    if data_list[mid] == string:
        return True
    elif data_list[mid] < string:
        return search_empty_string(data_list, string, mid+1, right)
    else:
        return search_empty_string(data_list, string, left, mid-1)
# read Q6
# Q7 is recursion, will do it later
# Q8
def get_rank(node, number):
if node.data == number:
if __name__ == '__main__':
import random
data_list = []
while True:
rand = random.randint(0,20)
if rand not in data_list:
data_list.append(rand)
if len(data_list) == 21:
break
print data_list
#selection_sort(data_list)
#bubble_sort(data_list)
#insertion_sort(data_list)
#merge_sort(data_list, 0, len(data_list)-1)
#quick_sort(data_list, 0, len(data_list)-1)
#print binary_search_recur(data_list, 10, 0, len(data_list)-1)
print binary_search_iter(data_list, 10)
| 27.408889 | 81 | 0.576455 | 893 | 6,167 | 3.765957 | 0.134379 | 0.214095 | 0.040143 | 0.037467 | 0.377342 | 0.267023 | 0.223313 | 0.173952 | 0.129051 | 0.129051 | 0 | 0.019712 | 0.325442 | 6,167 | 224 | 82 | 27.53125 | 0.788702 | 0 | 0 | 0.295181 | 0 | 0 | 0.001446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006024 | 0.006024 | null | null | 0.036145 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
48b3bc6c8647990312c73813183574e4c1320ed9 | 345 | py | Python | app/area/serializers.py | spbso/so_rest | 2ede1ee849fa3e4ba9fc76a64a522b9aaa34e27f | [
"MIT"
] | null | null | null | app/area/serializers.py | spbso/so_rest | 2ede1ee849fa3e4ba9fc76a64a522b9aaa34e27f | [
"MIT"
] | 1 | 2022-03-11T14:25:08.000Z | 2022-03-11T14:25:08.000Z | app/area/serializers.py | spbso/so_rest | 2ede1ee849fa3e4ba9fc76a64a522b9aaa34e27f | [
"MIT"
] | null | null | null | from core.models import Area
from core.serializers import DynamicFieldsModelSerializer
from django.utils.translation import ugettext_lazy as _
class AreaSerializer(DynamicFieldsModelSerializer):
"""serializer for the area objects"""
class Meta:
model = Area
fields = ("id", "title")
read_only_fields = ("id",)
| 26.538462 | 57 | 0.718841 | 37 | 345 | 6.594595 | 0.702703 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197101 | 345 | 12 | 58 | 28.75 | 0.880866 | 0.089855 | 0 | 0 | 0 | 0 | 0.029221 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
48bf16be94aaa8d31095dcb1b471f4c5f061d305 | 1,693 | py | Python | inst/python/ee_utils.py | MartinHoldrege/rgee | 8534e547884198f9375428dfbb7f507e3c406bf0 | [
"Apache-2.0"
] | null | null | null | inst/python/ee_utils.py | MartinHoldrege/rgee | 8534e547884198f9375428dfbb7f507e3c406bf0 | [
"Apache-2.0"
] | null | null | null | inst/python/ee_utils.py | MartinHoldrege/rgee | 8534e547884198f9375428dfbb7f507e3c406bf0 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""Python utils module for rgee
>>> ee_create_json_py | utils-upload.R | ee_gcs_to_asset_image function
>>> eedate_to_rdate | ee_Date.R | eedate_to_rdate function
"""
import base64
import hashlib
import json
import os
import ee
# utils-upload.R --> ee_gcs_to_asset_image function
def ee_create_json_py(towrite, manifest):
with open(towrite, "w") as outfile:
json.dump(manifest, outfile)
return True
# ee_Date.R --> eedate_to_rdate function
def eedate_to_rdate(eedate):
return float(eedate.getInfo()["value"])
# ee_Date.R --> ee_get_date function
def eedate_to_rdate_ic(ic, var="system:time_start"):
ic_dates = list()
for img in ic.aggregate_array(var).getInfo():
if isinstance(img, int):
ic_dates.append(float(img))
elif isinstance(img, dict):
ic_dates.append(float(img["value"]))
else:
raise ValueError("img must be a int or a dictionary with a 'value' key.")
if len(ic_dates) == 0:
return None
return ic_dates
# ee_Initialize.R --> ee_create_credentials_earthengine
def _base64param(byte_string):
"""Encodes bytes for use as a URL parameter."""
return base64.urlsafe_b64encode(byte_string).rstrip(b"=")
# ee_Initialize.R --> ee_create_credentials_earthengine
def create_codes():
code_verifier = _base64param(os.urandom(32))
code_challenge = _base64param(hashlib.sha256(code_verifier).digest())
return code_verifier, code_challenge
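``create_codes`` above builds a PKCE pair: the verifier is random, and the challenge is the base64url-encoded (padding stripped) SHA-256 digest of the verifier. A self-contained check of that relationship, mirroring the helpers above:

```python
import base64
import hashlib
import os

def b64param(byte_string):
    # base64url encoding with the trailing '=' padding removed
    return base64.urlsafe_b64encode(byte_string).rstrip(b"=")

verifier = b64param(os.urandom(32))
challenge = b64param(hashlib.sha256(verifier).digest())

# The challenge is fully determined by the verifier, and never padded.
assert challenge == b64param(hashlib.sha256(verifier).digest())
assert b"=" not in challenge
```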
# Get current Earth Engine version
def ee_getversion():
return ee.__version__
def ee_path():
cred_path = os.path.expanduser("~/.config/earthengine/")
return cred_path
| 26.873016 | 85 | 0.703485 | 241 | 1,693 | 4.672199 | 0.431535 | 0.013321 | 0.057726 | 0.024867 | 0.269982 | 0.197158 | 0.197158 | 0.147425 | 0.065719 | 0 | 0 | 0.013699 | 0.180744 | 1,693 | 62 | 86 | 27.306452 | 0.798125 | 0.298287 | 0 | 0 | 0 | 0 | 0.088965 | 0.01882 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.147059 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
48c4e771ce9f459f60b85770f5fbe61525f46d98 | 246 | py | Python | build/lib.linux-i686-3.3/pywt/version.py | astromaddie/pywavelets-py3 | 9d434929cb748eb44be86a4b712d8f3009326693 | [
"MIT"
] | 1 | 2018-03-13T10:44:47.000Z | 2018-03-13T10:44:47.000Z | build/lib.linux-i686-3.3/pywt/version.py | astromaddie/pywavelets-py3 | 9d434929cb748eb44be86a4b712d8f3009326693 | [
"MIT"
] | null | null | null | build/lib.linux-i686-3.3/pywt/version.py | astromaddie/pywavelets-py3 | 9d434929cb748eb44be86a4b712d8f3009326693 | [
"MIT"
] | 1 | 2018-03-13T10:44:54.000Z | 2018-03-13T10:44:54.000Z |
# THIS FILE IS GENERATED FROM PYWAVELETS SETUP.PY
short_version = '0.3.0'
version = '0.3.0'
full_version = '0.3.0.dev-7ea3e91'
git_revision = '7ea3e919b1d7bbf4d7685c47d3d10c16c599cd06'
release = False
if not release:
version = full_version
| 22.363636 | 57 | 0.756098 | 35 | 246 | 5.2 | 0.628571 | 0.131868 | 0.148352 | 0.164835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175355 | 0.142276 | 246 | 10 | 58 | 24.6 | 0.687204 | 0.191057 | 0 | 0 | 1 | 0 | 0.341837 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
48c95563ed0ab4136de54503a385f5380cc4c770 | 395 | py | Python | bot/handlers/password_handler.py | under735/botkaca | 1f560df2f9ff83eba6469927864bc7ea5e2bc3e1 | [
"MIT"
] | 1 | 2021-04-07T10:59:49.000Z | 2021-04-07T10:59:49.000Z | bot/handlers/password_handler.py | under735/botkaca | 1f560df2f9ff83eba6469927864bc7ea5e2bc3e1 | [
"MIT"
] | null | null | null | bot/handlers/password_handler.py | under735/botkaca | 1f560df2f9ff83eba6469927864bc7ea5e2bc3e1 | [
"MIT"
] | 1 | 2021-02-23T20:21:35.000Z | 2021-02-23T20:21:35.000Z | from pyrogram import Client, Message
from bot import LOCAL, CONFIG
from bot.handlers import help_message_handler
async def func(client : Client, message: Message):
try:
await message.delete()
    except Exception:
pass
if ' '.join(message.command[1:]) == CONFIG.BOT_PASSWORD:
CONFIG.CHAT_ID.append(message.chat.id)
await help_message_handler.func(client, message)
| 30.384615 | 60 | 0.703797 | 52 | 395 | 5.230769 | 0.519231 | 0.143382 | 0.132353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003175 | 0.202532 | 395 | 12 | 61 | 32.916667 | 0.860317 | 0 | 0 | 0 | 0 | 0 | 0.002532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.181818 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
48c9741c2c9edf4b861dd179995b50003fdc8b1f | 1,394 | py | Python | autocleus/library/__init__.py | ludah65/autocleus | b7403894e4f72d7874af4d888ec63bd1e7832c02 | [
"Apache-2.0"
] | 2 | 2021-05-21T15:51:24.000Z | 2021-05-21T17:27:48.000Z | autocleus/library/__init__.py | ludah65/autocleus | b7403894e4f72d7874af4d888ec63bd1e7832c02 | [
"Apache-2.0"
] | null | null | null | autocleus/library/__init__.py | ludah65/autocleus | b7403894e4f72d7874af4d888ec63bd1e7832c02 | [
"Apache-2.0"
] | null | null | null | import os
import autocleus.cmd as cmd
import jinja2
from jinja2 import Environment, PackageLoader, select_autoescape, StrictUndefined
def find_license(license):
"""
Return base license class
Args:
license(str): name of license file (Apache2, BSD3, MIT)
"""
return cmd.get_module('licenses', modpath='autocleus.library', mod=f'{license}LicenseClass')
def open_source_license(license, env_root, author, email):
"""
Generates license file at specified path
Args:
license(object): license object
env_root(str): virtual environment root directory for project
author(str): name(s) of project author(s): "Jim Bob, Bob Jim, Kim Tim"
email(str): email(s) of project authors(s): "jim@bob.com, bob@jim.com,..."
"""
# Think about rewriting license class to have a write method
# similar to VirtualScript class.
# good for now
lic = find_license(license)
lic = getattr(lic, f'{license}License')
# renders license from jinja2 template
rendered_lic = lic.render(name=author, email=email)
with open(f"{env_root}/LICENSE", "w") as out:
out.write(rendered_lic)
class classproperty(object):
'''
Decorator for class property
'''
def __init__(self, getter):
self.getter = getter
def __get__(self, instance, owner):
return self.getter(owner) | 28.44898 | 96 | 0.66858 | 180 | 1,394 | 5.072222 | 0.472222 | 0.061336 | 0.03943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004634 | 0.225968 | 1,394 | 49 | 97 | 28.44898 | 0.84152 | 0.406743 | 0 | 0 | 0 | 0 | 0.108434 | 0.028112 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.235294 | 0.058824 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
48da07dc752cb452520d3822f5a3de69a352f34d | 289 | py | Python | data/urls.py | marcia-marques/django-dashboard | c72e97b30f1b712743f28023ec841be53ce9110c | [
"MIT"
] | 2 | 2021-07-03T04:14:19.000Z | 2021-07-27T06:13:57.000Z | data/urls.py | marcia-marques/django-dashboard | c72e97b30f1b712743f28023ec841be53ce9110c | [
"MIT"
] | 1 | 2021-07-28T17:41:45.000Z | 2021-07-28T17:41:45.000Z | data/urls.py | marcia-marques/django-dashboard | c72e97b30f1b712743f28023ec841be53ce9110c | [
"MIT"
] | null | null | null | from django.urls import path
from django.conf import settings
from django.conf.urls.static import static
from .views import CampaignListView
urlpatterns = [
path('', CampaignListView.as_view(), name='campaign_list'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 28.9 | 65 | 0.788927 | 38 | 289 | 5.868421 | 0.526316 | 0.134529 | 0.125561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110727 | 289 | 9 | 66 | 32.111111 | 0.867704 | 0 | 0 | 0 | 0 | 0 | 0.044983 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
48ed9298afd3fe0497e3113494d6ec9a5eb43552 | 70 | py | Python | src/__init__.py | lancondrej/freeconf | 8d05d358ab24a85b775ad39f0bace8475fff792f | [
"Unlicense"
] | null | null | null | src/__init__.py | lancondrej/freeconf | 8d05d358ab24a85b775ad39f0bace8475fff792f | [
"Unlicense"
] | null | null | null | src/__init__.py | lancondrej/freeconf | 8d05d358ab24a85b775ad39f0bace8475fff792f | [
"Unlicense"
] | null | null | null | #!/usr/bin/python3
# -*- coding: utf-8 -*-
__author__ = 'Ondřej Lanč'
| 17.5 | 26 | 0.614286 | 9 | 70 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.142857 | 70 | 3 | 27 | 23.333333 | 0.616667 | 0.557143 | 0 | 0 | 0 | 0 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
48f62580c32459b7b545a2a4e15ffbae26de4e77 | 3,751 | py | Python | utils/text/preprocess.py | alexander-paskal/news-bias-analysis | 9226bbcd2e5ba5cc67163f64b453ed7f602bdef9 | [
"MIT"
] | 1 | 2021-11-03T00:29:19.000Z | 2021-11-03T00:29:19.000Z | utils/text/preprocess.py | alexander-paskal/news-bias-analysis | 9226bbcd2e5ba5cc67163f64b453ed7f602bdef9 | [
"MIT"
] | null | null | null | utils/text/preprocess.py | alexander-paskal/news-bias-analysis | 9226bbcd2e5ba5cc67163f64b453ed7f602bdef9 | [
"MIT"
] | null | null | null | """
Contains utility functions for preprocessing text
"""
import nltk
import re,string
from nltk.tokenize import word_tokenize
nltk.download('punkt')
STOPWORDS = {"i", "me", "my", "myself", "we", "our", "ours", "ourselves", "you", "your", "yours", "yourself", "yourselves", "he", "him", "his", "himself", "she", "her", "hers", "herself", "it", "its", "itself", "they", "them", "their", "theirs", "themselves", "what", "which", "who", "whom", "this", "that", "these", "those", "am", "is", "are", "was", "were", "be", "been", "being", "have", "has", "had", "having", "do", "does", "did", "doing", "a", "an", "the", "and", "but", "if", "or", "because", "as", "until", "while", "of", "at", "by", "for", "with", "about", "against", "between", "into", "through", "during", "before", "after", "above", "below", "to", "from", "up", "down", "in", "out", "on", "off", "over", "under", "again", "further", "then", "once", "here", "there", "when", "where", "why", "how", "all", "any", "both", "each", "few", "more", "most", "other", "some", "such", "no", "nor", "not", "only", "own", "same", "so", "than", "too", "very", "s", "t", "can", "will", "just", "don", "should", "now"}
def strip_stopwords(text, stopwords=None,tokenized = True):
"""
Strips the stopwords from a given text
:param text: The text to be processed
:type text: supports str, list of str, ...
    :param stopwords: Optional, iterable: the stopwords to be removed;
        if not specified, the module-level STOPWORDS set is used
    :return: text without stopwords
:rtype: str, or list, ...
"""
if stopwords is None:
stopwords = STOPWORDS
if isinstance(text,str):
text_tokens = word_tokenize(text)
remove_sw = [word for word in text_tokens if not word in stopwords]
return remove_sw if tokenized else " ".join(remove_sw)
if isinstance(text,list):
res = []
for sent in text:
text_tokens = word_tokenize(sent)
remove_sw = [word for word in text_tokens if not word in stopwords]
if(not tokenized): remove_sw = " ".join(remove_sw)
res.append(remove_sw)
return res
def strip_punctuation(text, symbols=None, tokenized = True):
"""
Strips punctuation from a given text
:param text: The text to be processed
    :type text: str, or list of str
    :param symbols: Optional, iterable: the symbols to be removed;
        if not specified, string.punctuation is used
    :return: text with punctuation removed
    :rtype: list of tokens if tokenized, otherwise str (element-wise when
        the input is a list)
    """
    # TODO: honor the `symbols` argument; string.punctuation is always used for now
if(isinstance(text,str)):
stripped = re.sub('[%s]' % string.punctuation,'',text)
return word_tokenize(stripped) if tokenized else stripped
if(isinstance(text,list)):
res = []
for sent in text:
stripped = re.sub('[%s]' % string.punctuation,'',sent)
if(tokenized): stripped = word_tokenize(stripped)
res.append(stripped)
return res
def preprocess(text,tokenized = True, lower = False):
'''
Combines lower, strip punctuation, and strip stop words in one function
param: text
type: str, list of str
return preprocessed text
'''
if isinstance(text,str):
if(lower):text = text.lower()
stripped = strip_punctuation(text,tokenized=False)
return strip_stopwords(stripped,tokenized = tokenized)
if isinstance(text,list): # maybe we shouldn't bother worrying about this?
res = []
for sent in text:
if(lower): sent = sent.lower()
stripped = strip_punctuation(sent,tokenized=False)
stripped = strip_stopwords(stripped,tokenized=tokenized)
res.append(stripped)
return res
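# Self-contained illustration of the stopword-stripping idea above (uses
# str.split() instead of nltk's word_tokenize, so it is only approximate;
# the tiny stopword set below is a stand-in for the module's STOPWORDS):
_demo_stopwords = {"this", "is", "a"}
_demo_tokens = "this is a simple demo sentence".split()
_demo_kept = [w for w in _demo_tokens if w not in _demo_stopwords]
# _demo_kept == ['simple', 'demo', 'sentence']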
| 41.677778 | 1,015 | 0.603306 | 469 | 3,751 | 4.776119 | 0.407249 | 0.025 | 0.042857 | 0.025446 | 0.292857 | 0.226786 | 0.199107 | 0.199107 | 0.169643 | 0.091071 | 0 | 0 | 0.227406 | 3,751 | 89 | 1,016 | 42.146067 | 0.772947 | 0.221274 | 0 | 0.386364 | 0 | 0 | 0.182273 | 0 | 0 | 0 | 0 | 0.011236 | 0 | 1 | 0.068182 | false | 0 | 0.068182 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
48f7b81cad1cd871521896cab7e73026a27332c3 | 4,216 | py | Python | python/Pipeline/Utilities/CBASS_U_MultiChannelTemplateMatching.py | cardin-higley-lab/CBASS | 0d0b58497313027388351feffc79766f815b47b5 | [
"Apache-2.0"
] | null | null | null | python/Pipeline/Utilities/CBASS_U_MultiChannelTemplateMatching.py | cardin-higley-lab/CBASS | 0d0b58497313027388351feffc79766f815b47b5 | [
"Apache-2.0"
] | null | null | null | python/Pipeline/Utilities/CBASS_U_MultiChannelTemplateMatching.py | cardin-higley-lab/CBASS | 0d0b58497313027388351feffc79766f815b47b5 | [
"Apache-2.0"
] | null | null | null | def MultiChannelTemplateMatching(db2Signal, db2Template, blCenter, blNormalize):
'''
Synopsis: DB1SCORE = CBASS_U_MultiChannelTemplateMatching(DB2SIGNAL, DB2TEMPLATE, [BLNORM])
Returns a score DB1SCORE indicative of how well the multi-channel signal
DB2SIGNAL matches the spatio temporal template DB2TEMPLATE
Input: -DB2SIGNAL a (channel x time sample) matrix
-DB2TEMPLATE a (channel x time sample) matrix. The number of
channels must be the same as in DB2SIGNAL. The number of time
sample must be inferior to half of the duration of the signal.
-BLNORM normalizes the score by norm of db2Signal - default is true
Output:-DBSCORE a (1 x time sample) row vector. The score represents how
well DB2SIGNAL matches DB2TEMPLATE around each time sample and is
the dot product S . T where T represents is linearized template
(DB2TEMPLATE(:) and S rempresents the linearized chunk of DB2SIGNAL
matching the size of DB2TEMPLATE centered on each time points. T is
normalized so that all its elements sum to 1.
'''
    import numpy as np  # needed throughout; not imported elsewhere in this snippet
    # `sOPTION` is assumed to be a module-level options object; fall back to
    # non-verbose output when it is not defined.
    verbose = getattr(globals().get('sOPTION'), 'blVerbose', False)
inNChan, inNSamp = db2Signal.shape[0], db2Signal.shape[1]
if verbose: print('inNChan: {}, inNSamp: {}'.format(inNChan, inNSamp))
inNSmpTmp = db2Template.shape[1]
if verbose: print('inNSmpTmp: ',inNSmpTmp)
# Pads db2Signal with zeros for ease of computation if the number of sample of the template is uneven padds in an asymetric way
inPadBeg = np.floor(inNSmpTmp/2).astype(int)
inPadEnd = np.ceil(inNSmpTmp/2).astype(int)
if verbose:
print('np.zeros((inNChan, inPadBeg)).shape: ',np.zeros((inNChan, inPadBeg)).shape)
print('np.zeros((inNChan, inPadEnd)).shape: ',np.zeros((inNChan, inPadEnd)).shape)
print('db2Signal.shape: ',db2Signal.shape)
db2Signal = np.concatenate((np.zeros((inNChan, inPadBeg)), db2Signal, np.zeros((inNChan, inPadEnd))),axis=1)
if verbose:
print('inPadBeg: ',inPadBeg)
print('inPadEnd: ',inPadEnd)
print('db2Signal.shape: ',db2Signal.shape)
# Normalizes and transposes the template
if verbose:
print('blCenter: ',blCenter)
print('db2Template: ',db2Template)
print('db2Template.shape: ',db2Template.shape)
print('np.mean(db2Template): ',np.mean(db2Template))
if blCenter:
db2Template = db2Template - np.mean(db2Template)
if verbose: print('np.linalg.norm(db2Template.flatten(): ',np.linalg.norm(db2Template.flatten()))
db2Template = db2Template.T / np.linalg.norm(db2Template.flatten())
# Computes the centering offset if needed
if blCenter:
        dbXCntr = np.sum(db2Signal[:, :-inNSmpTmp], axis=0)  # axis=0 to match the per-sample sums in the loop below
if verbose: print('db2Signal[:, :-inNSmpTmp].shape: ', db2Signal[:, :-inNSmpTmp].shape)
for iSmp in range(1,inNSmpTmp): #2:inNSmpTmp
if verbose:
print('db2Signal[:, iSmp:(-inNSmpTmp + iSmp)].shape:', db2Signal[:, iSmp:(-inNSmpTmp + iSmp)].shape)
print('(np.sum(db2Signal[:, iSmp:(-inNSmpTmp + iSmp)],axis=0)).shape: ', (np.sum(db2Signal[:, iSmp:(-inNSmpTmp + iSmp)],axis=0)).shape)
dbXCntr = dbXCntr + np.sum(db2Signal[:, iSmp:(-inNSmpTmp + iSmp)],axis=0)
if verbose: print('dbXCntr.shape: ',dbXCntr.shape)
        dbXCntr = dbXCntr / db2Template.size
    else:
        dbXCntr = 0
# Computes the score
db1Score = np.matmul(db2Template[0, :].reshape(1,-1), (db2Signal[:, :-inNSmpTmp] - dbXCntr))
if verbose: print('db1Score.shape: ',db1Score.shape)
if blNormalize: db1SS = np.sum((db2Signal[:, :-inNSmpTmp] - dbXCntr)**2, axis=0)
for iSmp in range(1,inNSmpTmp):
db1Score = db1Score + (np.matmul(db2Template[iSmp, :], (db2Signal[:, iSmp:-inNSmpTmp+iSmp] - dbXCntr)))
if blNormalize:
db1SS = db1SS + np.sum((db2Signal[:, iSmp:-inNSmpTmp+iSmp] - dbXCntr)**2,axis=0)
if verbose:
print('np.max(db1Score): ',np.max(db1Score))
print('np.max( np.sqrt(db1SS)): ', np.max(np.sqrt(db1SS)))
print('db1SS.shape: ', db1SS.shape)
if blNormalize: db1Score = db1Score / np.sqrt(db1SS)
if verbose:
print('db1Score.shape: ',db1Score.shape)
return db1Score
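# Self-contained sketch of the same normalized template-matching idea (it does
# not call the function above; `template_scores` and the toy arrays below are
# illustrative names, not part of CBASS): score each alignment of a
# (channel x sample) template against a signal by the dot product of the
# signal chunk with the unit-norm template, as described in the docstring.
import numpy as np

def template_scores(signal, template):
    n_smp = template.shape[1]
    tmpl = template / np.linalg.norm(template)      # normalize the template
    n_pos = signal.shape[1] - n_smp + 1
    return np.array([np.sum(signal[:, i:i + n_smp] * tmpl) for i in range(n_pos)])

sig = np.zeros((2, 10))
tmpl = np.array([[1.0, 2.0], [0.5, 1.5]])
sig[:, 4:6] = tmpl                                  # embed the template at offset 4
scores = template_scores(sig, tmpl)
assert int(np.argmax(scores)) == 4                  # the score peaks at the embedded offset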
| 52.049383 | 151 | 0.666509 | 514 | 4,216 | 5.463035 | 0.247082 | 0.038462 | 0.059829 | 0.064815 | 0.325142 | 0.116453 | 0.070513 | 0.042023 | 0.029202 | 0 | 0 | 0.029149 | 0.202562 | 4,216 | 80 | 152 | 52.7 | 0.806068 | 0.055503 | 0 | 0.226415 | 0 | 0 | 0.17012 | 0.026738 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.415094 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
48fb5183403caedfebd51c3c178a20a2f3fbeeed | 489 | py | Python | MovieRaterBackend/API/migrations/0002_auto_20200612_0317.py | MXSH-Dev/MovieRater-Angular-DjangoREST | b89dc1227cf92991888ef72c53b2487350a3e456 | [
"MIT"
] | null | null | null | MovieRaterBackend/API/migrations/0002_auto_20200612_0317.py | MXSH-Dev/MovieRater-Angular-DjangoREST | b89dc1227cf92991888ef72c53b2487350a3e456 | [
"MIT"
] | null | null | null | MovieRaterBackend/API/migrations/0002_auto_20200612_0317.py | MXSH-Dev/MovieRater-Angular-DjangoREST | b89dc1227cf92991888ef72c53b2487350a3e456 | [
"MIT"
] | null | null | null | # Generated by Django 2.2 on 2020-06-12 03:17
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('API', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='rating',
name='stars',
field=models.IntegerField(validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(5)]),
),
]
| 24.45 | 141 | 0.648262 | 51 | 489 | 6.176471 | 0.686275 | 0.095238 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053333 | 0.233129 | 489 | 19 | 142 | 25.736842 | 0.786667 | 0.087935 | 0 | 0 | 1 | 0 | 0.058559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5b06b03536744cb1fae37dfac423defa9b29c3f4 | 983 | py | Python | em Python/Roteiro7/Roteiro7__testes_grafo.py | GuilhermeEsdras/Grafos | b6556c3d679496d576f65b798a1a584cd73e40f4 | [
"MIT"
] | null | null | null | em Python/Roteiro7/Roteiro7__testes_grafo.py | GuilhermeEsdras/Grafos | b6556c3d679496d576f65b798a1a584cd73e40f4 | [
"MIT"
] | null | null | null | em Python/Roteiro7/Roteiro7__testes_grafo.py | GuilhermeEsdras/Grafos | b6556c3d679496d576f65b798a1a584cd73e40f4 | [
"MIT"
] | null | null | null | from Roteiro7.Roteiro7__funcoes import Grafo
mapa = Grafo()
for v in ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
'11', '12', '13', '14', '15', '16', '17', '18', '19', '20',
'21', '22', '23', '24', '25', '26', '27', '28', '29', '30',
'31', '32', '33']:
mapa.adicionaVertice(v)
for a in ['1-2', '1-3', '1-4', '2-5', '2-9', '3-7', '4-3', '4-8', '5-6', '6-2', '6-10', '7-6', '7-10', '7-11',
'8-7', '8-12', '9-13', '10-9', '10-14', '11-15', '12-16', '13-17', '13-19', '14-18', '14-19', '14-20',
'15-19', '16-20', '17-21', '18-17', '18-22', '19-18', '19-24', '20-19', '21-29', '21-25', '22-25', '22-26',
'22-23', '23-19', '24-27', '24-28', '27-31', '29-30', '30-31', '31-32', '31-33', '33-32']:
mapa.adicionaAresta(a)
print("Menor caminho com Dijkstra: ", mapa.dijkstra('1', '32'))
print("Melhor caminho para o Drone no mapa do exemplo: ", mapa.dijkstra_mod('1', '32', 3, 5, ['12', '19', '21', '30']))
| 54.611111 | 119 | 0.441506 | 179 | 983 | 2.407821 | 0.324022 | 0.013921 | 0.018561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.302682 | 0.203459 | 983 | 17 | 120 | 57.823529 | 0.247765 | 0 | 0 | 0 | 0 | 0 | 0.363174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5b146fb5ea09b4d7217a535fe7d8bead2a197ae2 | 854 | py | Python | Pages/login_page.py | trandahang/herokuapp | bbc405f9b291eece0e1bd29bdbb92d6535443cb3 | [
"MIT"
] | null | null | null | Pages/login_page.py | trandahang/herokuapp | bbc405f9b291eece0e1bd29bdbb92d6535443cb3 | [
"MIT"
] | null | null | null | Pages/login_page.py | trandahang/herokuapp | bbc405f9b291eece0e1bd29bdbb92d6535443cb3 | [
"MIT"
] | null | null | null | import logging
from Locators.login_page_locators import LoginpageLocators
from Pages.base_page import BasePage
from TestData.test_data import Data
class LoginPage(BasePage):
def __init__(self, driver):
super().__init__(driver)
logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
self.navigate_to(Data.BASE_URL)
def login(self, username, password):
self.enter_text(LoginpageLocators.INPUT_USERNAME, username)
self.enter_text(LoginpageLocators.INPUT_PASSWORD, password)
self.click(LoginpageLocators.BUTTON_LOGIN)
def login_object(self, account):
self.enter_text(LoginpageLocators.INPUT_USERNAME, account.username)
self.enter_text(LoginpageLocators.INPUT_PASSWORD, account.password)
self.click(LoginpageLocators.BUTTON_LOGIN)
| 32.846154 | 83 | 0.755269 | 99 | 854 | 6.262626 | 0.393939 | 0.058065 | 0.083871 | 0.193548 | 0.448387 | 0.448387 | 0.164516 | 0 | 0 | 0 | 0 | 0 | 0.15808 | 854 | 25 | 84 | 34.16 | 0.862309 | 0 | 0 | 0.111111 | 0 | 0 | 0.029308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.277778 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5b189ca28a03539e41b9e0b68159873c136c8b17 | 2,255 | py | Python | bomb32/solve_phase2.py | moshekaplan/angr_playing | d77a4060e10c692190de6a9a5add2eda3d550afd | [
"MIT"
] | 4 | 2018-07-21T05:56:55.000Z | 2020-06-07T20:22:10.000Z | bomb32/solve_phase2.py | moshekaplan/angr_playing | d77a4060e10c692190de6a9a5add2eda3d550afd | [
"MIT"
] | null | null | null | bomb32/solve_phase2.py | moshekaplan/angr_playing | d77a4060e10c692190de6a9a5add2eda3d550afd | [
"MIT"
] | 1 | 2021-05-30T21:25:43.000Z | 2021-05-30T21:25:43.000Z | def approach1():
# Approach 1 - Replace the 6 integers with individual symbolized values
import angr
AFTER_READ_SIX_NUMBERS_ADDR = 0x8048B63
FIND_ADDR = 0x8048B8E
AVOID_ADDR = 0x8048B83
proj = angr.Project('bomb', load_options={'auto_load_libs':False})
state = proj.factory.blank_state(addr=AFTER_READ_SIX_NUMBERS_ADDR)
# Create 6 symbolic integers and store them in memory
six_int_offset = - 0x18
sym_ints = []
for i in range(6):
sym_int = state.se.BVS("int_%d" % i, 8 * 4)
state.memory.store(state.regs.ebp + six_int_offset + 4*i, sym_int, endness="Iend_LE")
sym_ints.append(sym_int)
path_group = proj.factory.path_group(state)
path_group.explore(find=FIND_ADDR, avoid=AVOID_ADDR)
solution_path = path_group.found[0]
solution_state = solution_path.state
for sym_int in sym_ints:
        print(solution_state.se.any_int(sym_int))
"""
Prints:
1
2
6
24
120
720
"""
def approach2():
# Approach 2 - Replace the 6 integers with a single symbolized array
import angr
AFTER_READ_SIX_NUMBERS_ADDR = 0x8048B63
FIND_ADDR = 0x8048B8E
AVOID_ADDR = 0x8048B83
# Load the binary
proj = angr.Project('bomb', load_options={'auto_load_libs':False})
# Start right before the data comes in:
state = proj.factory.blank_state(addr=AFTER_READ_SIX_NUMBERS_ADDR)
six_int_offset = -0x18
sym_int_array = state.se.BVS("int_arr", 8 * 4 * 6)
state.memory.store(state.regs.ebp + six_int_offset, sym_int_array, endness="Iend_LE")
path_group = proj.factory.path_group(state)
path_group.explore(find=FIND_ADDR, avoid=AVOID_ADDR)
solution_path = path_group.found[0]
solution_state = solution_path.state
# Examine EIP if we so desire
print "EIP:", hex(solution_state.se.any_int(solution_state.regs.eip))
# Print out the 6 integers in the array:
for i in range(6):
# Read 4 byte integers, fixing the endianess (since it's from memory)
sym_int = solution_state.memory.load(state.regs.ebp + six_int_offset + i*4, 4, endness="Iend_LE")
        print(solution_state.se.any_int(sym_int))
"""
Prints:
1
2
6
24
120
720
""" | 30.066667 | 105 | 0.677162 | 337 | 2,255 | 4.284866 | 0.296736 | 0.037396 | 0.041551 | 0.052632 | 0.658587 | 0.569252 | 0.552632 | 0.552632 | 0.552632 | 0.49723 | 0 | 0.051149 | 0.228381 | 2,255 | 75 | 106 | 30.066667 | 0.778736 | 0.167184 | 0 | 0.702703 | 0 | 0 | 0.042529 | 0 | 0 | 0 | 0.035632 | 0 | 0 | 0 | null | null | 0 | 0.054054 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5b19cabd6afcd7cdbc1efbc3c3d7592c769f2da5 | 851 | py | Python | sortingview/tasks/sorting_info.py | garrettmflynn/sortingview | 0bb3df40d5d031ec651c4821f928787bbee71fbb | [
"Apache-2.0"
] | 2 | 2021-11-19T04:51:42.000Z | 2022-03-12T23:36:19.000Z | sortingview/tasks/sorting_info.py | magland/sortingview | 0b1be9d55048cd4b8a0b6b6733bd7d35cb440aa7 | [
"Apache-2.0"
] | 172 | 2021-05-10T17:39:15.000Z | 2022-03-18T21:46:15.000Z | sortingview/tasks/sorting_info.py | garrettmflynn/sortingview | 0bb3df40d5d031ec651c4821f928787bbee71fbb | [
"Apache-2.0"
] | 2 | 2021-08-29T20:13:57.000Z | 2022-03-12T23:36:34.000Z | from typing import List, cast
import hither2 as hi
import numpy as np
import spikeextractors as se
import kachery_client as kc
from sortingview.config import job_cache, job_handler
from sortingview.extractors import LabboxEphysSortingExtractor
@hi.function(
'sorting_info', '0.1.3'
)
def sorting_info(sorting_uri):
sorting = LabboxEphysSortingExtractor(sorting_uri)
return dict(
unit_ids=_to_int_list(sorting.get_unit_ids()),
samplerate=sorting.get_sampling_frequency(),
sorting_object=sorting.object()
)
@kc.taskfunction('sorting_info.3', type='pure-calculation')
def task_sorting_info(sorting_uri: str):
with hi.Config(job_handler=job_handler.misc, job_cache=job_cache):
return hi.Job(sorting_info, {'sorting_uri': sorting_uri})
def _to_int_list(x):
return np.array(x).astype(int).tolist() | 31.518519 | 70 | 0.760282 | 119 | 851 | 5.184874 | 0.445378 | 0.089141 | 0.08752 | 0.102107 | 0.090762 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00684 | 0.141011 | 851 | 27 | 71 | 31.518519 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0.068075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.304348 | 0.043478 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
5b1c8b414dc84ddcbf46c3ebc4600c697923b2db | 1,408 | py | Python | pyconsem/convertor.py | vliz-be-opsci/pyConSem | 20c4f25426dad6afb15d7f5c3bc09826cbd9f71d | [
"MIT"
] | null | null | null | pyconsem/convertor.py | vliz-be-opsci/pyConSem | 20c4f25426dad6afb15d7f5c3bc09826cbd9f71d | [
"MIT"
] | null | null | null | pyconsem/convertor.py | vliz-be-opsci/pyConSem | 20c4f25426dad6afb15d7f5c3bc09826cbd9f71d | [
"MIT"
] | null | null | null | # Use this file to describe the datamodel handled by this module
# we recommend using abstract classes to achieve proper service and interface isolation
from abc import ABC # , abstractmethod
import logging
log = logging.getLogger(__name__)
class Convertor(ABC):
""" Main class performing the conversion
"""
def __init__(self, config: dict):
""" constructor
:param config: The configuration describing the conversion to be ran
"""
self._config = config
def run(self):
""" Runs the steps described in the config
"""
self.prepare()
self.input()
log.warning("TODO -- load and output")
# self.load()
# self.output()
log.info("conversion complete")
def prepare(self):
""" analyzes the config and creates internal worker-objects to actually handle the conversion
"""
# TODO - make some work-space-temp-folder
assert 'in' in self._config, "Convertor needs input to work on"
self._input_producers = list()
for num, input_config in enumerate(self._config['in']):
# out_file = num
log.warning(f"have to make InputProducer {num}, {input_config}")
# append producer
def input(self):
""" produces the inputs to load
"""
for i in self._input_producers:
i.produce(self)
| 29.333333 | 101 | 0.618608 | 168 | 1,408 | 5.077381 | 0.529762 | 0.046893 | 0.042204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.290483 | 1,408 | 47 | 102 | 29.957447 | 0.853854 | 0.40696 | 0 | 0 | 0 | 0 | 0.161538 | 0 | 0 | 0 | 0 | 0.021277 | 0.052632 | 1 | 0.210526 | false | 0 | 0.105263 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d286482546a6104e50bca01ad316a777cd2d427a | 1,080 | py | Python | tests/test_cpu.py | garybake/gbakeboy | f24d3d96c3e6c89446c2ef6645668fa8a1ccf506 | [
"MIT"
] | 1 | 2018-01-05T13:54:24.000Z | 2018-01-05T13:54:24.000Z | tests/test_cpu.py | garybake/gbakeboy | f24d3d96c3e6c89446c2ef6645668fa8a1ccf506 | [
"MIT"
] | null | null | null | tests/test_cpu.py | garybake/gbakeboy | f24d3d96c3e6c89446c2ef6645668fa8a1ccf506 | [
"MIT"
] | 1 | 2019-12-31T05:16:38.000Z | 2019-12-31T05:16:38.000Z | import unittest
from gbakeboy import Cpu
from gbakeboy import Memory
from gbakeboy import hex2int as h2i
from gbakeboy import print_bin_8
class test_cpu(unittest.TestCase):
def setUp(self):
mem = Memory()
self.cpu = Cpu(mem)
def clear_registers(self):
cpu = self.cpu
cpu.set_register_16('AF', 0)
cpu.set_register_16('BC', 0)
cpu.set_register_16('DE', 0)
cpu.set_register_16('HL', 0)
cpu.set_register_16('SP', 0)
cpu.set_register_16('PC', 0)
def test_init(self):
cpu = self.cpu
mem = self.cpu.mem
self.assertNotEqual(mem, None)
self.assertEqual(cpu.A, h2i("01"))
self.assertEqual(cpu.F, h2i("B0"))
self.assertEqual(cpu.B, h2i("00"))
self.assertEqual(cpu.C, h2i("13"))
self.assertEqual(cpu.D, h2i("00"))
self.assertEqual(cpu.E, h2i("D8"))
self.assertEqual(cpu.H, h2i("01"))
self.assertEqual(cpu.L, h2i("4D"))
self.assertEqual(cpu.SP, h2i("FFFE"))
self.assertEqual(cpu.PC, h2i("0"))
| 24.545455 | 45 | 0.598148 | 154 | 1,080 | 4.084416 | 0.324675 | 0.238474 | 0.286169 | 0.152623 | 0.281399 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05604 | 0.256481 | 1,080 | 43 | 46 | 25.116279 | 0.727273 | 0 | 0 | 0.064516 | 0 | 0 | 0.030556 | 0 | 0 | 0 | 0 | 0 | 0.354839 | 1 | 0.096774 | false | 0 | 0.16129 | 0 | 0.290323 | 0.032258 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d29b8409fa3b4f92d2371ae2f813130bdd6e163f | 78 | py | Python | django_nextjs/__init__.py | QueraTeam/django-nextjs | 21100c6262334be9f672c8c7b8eda9d377e1d43c | [
"MIT"
] | 43 | 2022-02-01T20:16:21.000Z | 2022-03-30T18:01:09.000Z | django_nextjs/__init__.py | QueraTeam/django-nextjs | 21100c6262334be9f672c8c7b8eda9d377e1d43c | [
"MIT"
] | null | null | null | django_nextjs/__init__.py | QueraTeam/django-nextjs | 21100c6262334be9f672c8c7b8eda9d377e1d43c | [
"MIT"
] | 1 | 2022-03-21T10:11:49.000Z | 2022-03-21T10:11:49.000Z | default_app_config = "django_nextjs.apps.NextJSConfig"
__version__ = "2.1.4"
| 19.5 | 54 | 0.782051 | 11 | 78 | 4.909091 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042254 | 0.089744 | 78 | 3 | 55 | 26 | 0.71831 | 0 | 0 | 0 | 0 | 0 | 0.461538 | 0.397436 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d2a0c996ef0bfb00cb2263a1445ebb2530f7e02f | 432 | py | Python | PickFirstCovEntry.py | EichlerLab/chm1_scripts | 55d1783139f4ccc6e41c79812920785b1eaea65e | [
"MIT"
] | 5 | 2016-04-21T20:00:01.000Z | 2020-03-12T16:35:55.000Z | PickFirstCovEntry.py | EichlerLab/chm1_scripts | 55d1783139f4ccc6e41c79812920785b1eaea65e | [
"MIT"
] | null | null | null | PickFirstCovEntry.py | EichlerLab/chm1_scripts | 55d1783139f4ccc6e41c79812920785b1eaea65e | [
"MIT"
] | 8 | 2015-09-15T07:08:06.000Z | 2021-07-13T02:25:12.000Z | #!/usr/bin/env python
import sys
prevVals = [None, None, None]
totalCoverage = 0
nCoverage = 0
for line in sys.stdin.readlines():
vals = line.split()
if (vals[0:3] != prevVals[0:3] and prevVals[0] is not None):
coverage = totalCoverage / nCoverage
prevVals[-1] = coverage
print "\t".join([str(i) for i in prevVals])
totalCoverage = 0
nCoverage = 0
totalCoverage += float(vals[-1])
nCoverage += 1.0
    prevVals = vals

# flush the final group, which the loop above otherwise never prints
if prevVals[0] is not None and nCoverage > 0:
    prevVals[-1] = totalCoverage / nCoverage
    print("\t".join(str(i) for i in prevVals))
| 24 | 61 | 0.673611 | 64 | 432 | 4.546875 | 0.484375 | 0.054983 | 0.158076 | 0.164948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036723 | 0.180556 | 432 | 17 | 62 | 25.411765 | 0.785311 | 0.046296 | 0 | 0.266667 | 0 | 0 | 0.004866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d2a8e6d4ac9736101fda9809a41c5e4eb5a790af | 3,072 | py | Python | release/stubs.min/Autodesk/Revit/DB/__init___parts/SchedulableField.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/Autodesk/Revit/DB/__init___parts/SchedulableField.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/Autodesk/Revit/DB/__init___parts/SchedulableField.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | class SchedulableField(object, IDisposable):
"""
A non-calculated field eligible to be included in a schedule.
SchedulableField(fieldType: ScheduleFieldType,parameterId: ElementId)
SchedulableField(fieldType: ScheduleFieldType)
SchedulableField()
"""
def Dispose(self):
""" Dispose(self: SchedulableField) """
pass
def Equals(self, obj):
"""
Equals(self: SchedulableField,obj: object) -> bool
Determines whether the specified System.Object is equal to the current
System.Object.
obj: The other object to evaluate.
"""
pass
def GetHashCode(self):
"""
GetHashCode(self: SchedulableField) -> int
Gets the integer value of the SchedulableField as hash code
"""
pass
def GetName(self, document):
"""
GetName(self: SchedulableField,document: Document) -> str
Gets the name of the field.
document: The document in which the field will be used.
Returns: The name of the field.
"""
pass
def ReleaseUnmanagedResources(self, *args):
""" ReleaseUnmanagedResources(self: SchedulableField,disposing: bool) """
pass
def __enter__(self, *args):
""" __enter__(self: IDisposable) -> object """
pass
def __eq__(self, *args):
""" x.__eq__(y) <==> x==y """
pass
def __exit__(self, *args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self, *args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self, fieldType=None, parameterId=None):
"""
__new__(cls: type,fieldType: ScheduleFieldType,parameterId: ElementId)
__new__(cls: type,fieldType: ScheduleFieldType)
__new__(cls: type)
"""
pass
def __ne__(self, *args):
pass
def __repr__(self, *args):
""" __repr__(self: object) -> str """
pass
FieldType = property(lambda self: object(), lambda self, v: None, lambda self: None)
"""The type of data displayed by the field.
Get: FieldType(self: SchedulableField) -> ScheduleFieldType
Set: FieldType(self: SchedulableField)=value
"""
IsValidObject = property(
lambda self: object(), lambda self, v: None, lambda self: None
)
"""Specifies whether the .NET object represents a valid Revit entity.
Get: IsValidObject(self: SchedulableField) -> bool
"""
ParameterId = property(
lambda self: object(), lambda self, v: None, lambda self: None
)
"""The ID of the parameter displayed by the field.
Get: ParameterId(self: SchedulableField) -> ElementId
Set: ParameterId(self: SchedulableField)=value
"""
| 21.942857 | 221 | 0.616536 | 317 | 3,072 | 5.675079 | 0.293375 | 0.111173 | 0.026681 | 0.031684 | 0.23791 | 0.15453 | 0.15453 | 0.15453 | 0.15453 | 0.15453 | 0 | 0 | 0.274414 | 3,072 | 139 | 222 | 22.100719 | 0.807088 | 0.422852 | 0 | 0.424242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.363636 | 0 | 0 | 0.484848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
d2b4b50d293aa7e0f84a68d89f7fcb13707358c1 | 6,739 | py | Python | src/gdata/exif/__init__.py | Cloudlock/gdata-python3 | a6481a13590bfa225f91a97b2185cca9aacd1403 | [
"Apache-2.0"
] | 19 | 2017-06-09T13:38:03.000Z | 2020-12-12T07:45:48.000Z | src/gdata/exif/__init__.py | AlexxIT/gdata-python3 | 5cc5a83a469d87f804d1fda8760ec76bcb6050c9 | [
"Apache-1.1"
] | 11 | 2017-07-22T07:09:54.000Z | 2020-12-02T15:08:48.000Z | src/gdata/exif/__init__.py | AlexxIT/gdata-python3 | 5cc5a83a469d87f804d1fda8760ec76bcb6050c9 | [
"Apache-1.1"
] | 25 | 2017-07-03T11:30:39.000Z | 2020-10-01T02:21:13.000Z | # -*-*- encoding: utf-8 -*-*-
#
# This is gdata.photos.exif, implementing the exif namespace in gdata
#
# $Id: __init__.py 81 2007-10-03 14:41:42Z havard.gulldahl $
#
# Copyright 2007 Håvard Gulldahl
# Portions copyright 2007 Google Inc.
#
# Licensed under the Apache License 2.0;
"""This module maps elements from the {EXIF} namespace[1] to GData objects.
These elements describe image data, using exif attributes[2].
Picasa Web Albums uses the exif namespace to represent Exif data encoded
in a photo [3].
Picasa Web Albums uses the following exif elements:
exif:distance
exif:exposure
exif:flash
exif:focallength
exif:fstop
exif:imageUniqueID
exif:iso
exif:make
exif:model
exif:tags
exif:time
[1]: http://schemas.google.com/photos/exif/2007.
[2]: http://en.wikipedia.org/wiki/Exif
[3]: http://code.google.com/apis/picasaweb/reference.html#exif_reference
"""
# __author__ = 'havard@gulldahl.no' # (Håvard Gulldahl)' #BUG: pydoc chokes on non-ascii chars in __author__
import atom
EXIF_NAMESPACE = 'http://schemas.google.com/photos/exif/2007'
class ExifBaseElement(atom.AtomBase):
"""Base class for elements in the EXIF_NAMESPACE (%s). To add new elements, you only need to add the element tag name to self._tag
""" % EXIF_NAMESPACE
_tag = ''
_namespace = EXIF_NAMESPACE
_children = atom.AtomBase._children.copy()
_attributes = atom.AtomBase._attributes.copy()
def __init__(self, name=None, extension_elements=None,
extension_attributes=None, text=None):
self.name = name
self.text = text
self.extension_elements = extension_elements or []
self.extension_attributes = extension_attributes or {}
class Distance(ExifBaseElement):
"""(float) The distance to the subject, e.g. 0.0"""
_tag = 'distance'
def DistanceFromString(xml_string):
return atom.CreateClassFromXMLString(Distance, xml_string)
class Exposure(ExifBaseElement):
"""(float) The exposure time used, e.g. 0.025 or 8.0E4"""
_tag = 'exposure'
def ExposureFromString(xml_string):
return atom.CreateClassFromXMLString(Exposure, xml_string)
class Flash(ExifBaseElement):
"""(string) Boolean value indicating whether the flash was used.
The .text attribute will either be `true' or `false'
As a convenience, this object's .bool method will return what you want,
so you can say:
flash_used = bool(Flash)
"""
_tag = 'flash'
  def __bool__(self):
    # Always return a bool: any text other than 'true' is treated as False
    # (returning None from __bool__ raises TypeError in Python 3).
    return self.text is not None and self.text.lower() == 'true'
def FlashFromString(xml_string):
return atom.CreateClassFromXMLString(Flash, xml_string)
class Focallength(ExifBaseElement):
"""(float) The focal length used, e.g. 23.7"""
_tag = 'focallength'
def FocallengthFromString(xml_string):
return atom.CreateClassFromXMLString(Focallength, xml_string)
class Fstop(ExifBaseElement):
"""(float) The fstop value used, e.g. 5.0"""
_tag = 'fstop'
def FstopFromString(xml_string):
return atom.CreateClassFromXMLString(Fstop, xml_string)
class ImageUniqueID(ExifBaseElement):
"""(string) The unique image ID for the photo. Generated by Google Photo servers"""
_tag = 'imageUniqueID'
def ImageUniqueIDFromString(xml_string):
return atom.CreateClassFromXMLString(ImageUniqueID, xml_string)
class Iso(ExifBaseElement):
"""(int) The iso equivalent value used, e.g. 200"""
_tag = 'iso'
def IsoFromString(xml_string):
return atom.CreateClassFromXMLString(Iso, xml_string)
class Make(ExifBaseElement):
"""(string) The make of the camera used, e.g. Fictitious Camera Company"""
_tag = 'make'
def MakeFromString(xml_string):
return atom.CreateClassFromXMLString(Make, xml_string)
class Model(ExifBaseElement):
"""(string) The model of the camera used,e.g AMAZING-100D"""
_tag = 'model'
def ModelFromString(xml_string):
return atom.CreateClassFromXMLString(Model, xml_string)
class Time(ExifBaseElement):
"""(int) The date/time the photo was taken, e.g. 1180294337000.
Represented as the number of milliseconds since January 1st, 1970.
The value of this element will always be identical to the value
of the <gphoto:timestamp>.
Look at this object's .isoformat() for a human friendly datetime string:
photo_epoch = Time.text # 1180294337000
photo_isostring = Time.isoformat() # '2007-05-27T19:32:17.000Z'
Alternatively:
photo_datetime = Time.datetime() # (requires python >= 2.3)
"""
_tag = 'time'
def isoformat(self):
"""(string) Return the timestamp as a ISO 8601 formatted string,
e.g. '2007-05-27T19:32:17.000Z'
"""
import time
epoch = float(self.text) / 1000
return time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime(epoch))
def datetime(self):
"""(datetime.datetime) Return the timestamp as a datetime.datetime object
Requires python 2.3
"""
import datetime
epoch = float(self.text) / 1000
return datetime.datetime.fromtimestamp(epoch)
def TimeFromString(xml_string):
return atom.CreateClassFromXMLString(Time, xml_string)
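The millisecond-epoch conversion that `Time.isoformat` performs can be sketched standalone; `epoch_ms_to_iso` is an illustrative name, not part of gdata, but the arithmetic and format string are the same:

```python
import time

def epoch_ms_to_iso(epoch_ms):
    # Same arithmetic as Time.isoformat(): milliseconds -> seconds -> UTC string
    epoch = float(epoch_ms) / 1000
    return time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime(epoch))

print(epoch_ms_to_iso(1180294337000))  # 2007-05-27T19:32:17.000Z
```

The sample value is the one from the Time docstring above, so the two renderings can be compared directly.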
class Tags(ExifBaseElement):
"""The container for all exif elements.
The <exif:tags> element can appear as a child of a photo entry.
"""
_tag = 'tags'
_children = atom.AtomBase._children.copy()
_children['{%s}fstop' % EXIF_NAMESPACE] = ('fstop', Fstop)
_children['{%s}make' % EXIF_NAMESPACE] = ('make', Make)
_children['{%s}model' % EXIF_NAMESPACE] = ('model', Model)
_children['{%s}distance' % EXIF_NAMESPACE] = ('distance', Distance)
_children['{%s}exposure' % EXIF_NAMESPACE] = ('exposure', Exposure)
_children['{%s}flash' % EXIF_NAMESPACE] = ('flash', Flash)
_children['{%s}focallength' % EXIF_NAMESPACE] = ('focallength', Focallength)
_children['{%s}iso' % EXIF_NAMESPACE] = ('iso', Iso)
_children['{%s}time' % EXIF_NAMESPACE] = ('time', Time)
_children['{%s}imageUniqueID' % EXIF_NAMESPACE] = ('imageUniqueID', ImageUniqueID)
def __init__(self, extension_elements=None, extension_attributes=None, text=None):
ExifBaseElement.__init__(self, extension_elements=extension_elements,
extension_attributes=extension_attributes,
text=text)
self.fstop = None
self.make = None
self.model = None
self.distance = None
self.exposure = None
self.flash = None
self.focallength = None
self.iso = None
self.time = None
self.imageUniqueID = None
def TagsFromString(xml_string):
return atom.CreateClassFromXMLString(Tags, xml_string)
| 28.196653 | 134 | 0.686304 | 832 | 6,739 | 5.425481 | 0.266827 | 0.043864 | 0.036553 | 0.0463 | 0.221311 | 0.06646 | 0.038104 | 0.023039 | 0 | 0 | 0 | 0.026852 | 0.198694 | 6,739 | 238 | 135 | 28.315126 | 0.809074 | 0.350942 | 0 | 0.042553 | 0 | 0 | 0.078765 | 0.005432 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170213 | false | 0 | 0.031915 | 0.117021 | 0.648936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
d2c3d5720d443c8428e3cc762241fd374d367940 | 290 | py | Python | mco_agent/exceptions.py | optiz0r/py-mco-agent | e7032999f7ffb51f48030c1041af4f45bf84b2a6 | [
"Apache-2.0"
] | null | null | null | mco_agent/exceptions.py | optiz0r/py-mco-agent | e7032999f7ffb51f48030c1041af4f45bf84b2a6 | [
"Apache-2.0"
] | null | null | null | mco_agent/exceptions.py | optiz0r/py-mco-agent | e7032999f7ffb51f48030c1041af4f45bf84b2a6 | [
"Apache-2.0"
] | null | null | null | class AgentException(Exception):
""" Base exception"""
description = 'Unknown error'
statuscode = 5
def __str__(self):
return '{0}: {1}'.format(self.description, ' '.join(self.args))
class InactiveAgent(AgentException):
description = "Agent is not activated"
| 24.166667 | 71 | 0.662069 | 30 | 290 | 6.266667 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012931 | 0.2 | 290 | 11 | 72 | 26.363636 | 0.797414 | 0.048276 | 0 | 0 | 0 | 0 | 0.163569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0.142857 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
d2c7a326c6dd76c697638c1c2111c5f69981ab8f | 88 | py | Python | learn-to-code-with-python/04-Variables/augmented-assignment-operator.py | MaciejZurek/python_practicing | 0a426f2aed151573e1f8678e0239ff596d92bbde | [
"MIT"
] | null | null | null | learn-to-code-with-python/04-Variables/augmented-assignment-operator.py | MaciejZurek/python_practicing | 0a426f2aed151573e1f8678e0239ff596d92bbde | [
"MIT"
] | null | null | null | learn-to-code-with-python/04-Variables/augmented-assignment-operator.py | MaciejZurek/python_practicing | 0a426f2aed151573e1f8678e0239ff596d92bbde | [
"MIT"
] | null | null | null | a = 1
a = a + 2
print(a)
a += 2
print(a)
word = "race"
word += " car"
print(word)
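Augmented assignment also works with other operators and with lists; a small continuation of the same exercise:

```python
total = 10
total -= 3   # subtract in place: 7
total *= 2   # multiply in place: 14
print(total)

items = [1, 2]
items += [3]  # for lists, += extends in place
print(items)
```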
| 5.866667 | 14 | 0.488636 | 17 | 88 | 2.529412 | 0.411765 | 0.093023 | 0.139535 | 0.372093 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04918 | 0.306818 | 88 | 14 | 15 | 6.285714 | 0.655738 | 0 | 0 | 0.25 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.375 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d2c8340ebc6002f2683e323f943ffab89154b485 | 134 | py | Python | gold_digger/api_server/app.py | N0081K/gold-digger | 3a68084bf65565a61009f5e01da643499a5e06e6 | [
"Apache-2.0"
] | 7 | 2016-05-04T17:13:58.000Z | 2017-11-07T07:29:16.000Z | gold_digger/api_server/app.py | N0081K/gold-digger | 3a68084bf65565a61009f5e01da643499a5e06e6 | [
"Apache-2.0"
] | 94 | 2019-07-03T15:33:29.000Z | 2022-03-28T01:17:41.000Z | gold_digger/api_server/app.py | N0081K/gold-digger | 3a68084bf65565a61009f5e01da643499a5e06e6 | [
"Apache-2.0"
] | 2 | 2017-05-30T13:55:01.000Z | 2017-08-20T19:52:45.000Z | from .api_server import API
from .helpers import ContextMiddleware
app = API(
middleware=[
ContextMiddleware(),
],
)
| 14.888889 | 38 | 0.671642 | 13 | 134 | 6.846154 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238806 | 134 | 8 | 39 | 16.75 | 0.872549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d2d955fb07d9f14d4e099a73b3ca44b5938a335c | 944 | py | Python | bspwm/.config/bspwm/bspwm.py | kevin-hanselman/dotfiles | 0d7ff5703e3dfc3a7ae08b211245c6d43eea0ca5 | [
"MIT"
] | 5 | 2020-09-14T01:28:53.000Z | 2021-08-16T17:49:09.000Z | bspwm/.config/bspwm/bspwm.py | kevin-hanselman/dotfiles | 0d7ff5703e3dfc3a7ae08b211245c6d43eea0ca5 | [
"MIT"
] | null | null | null | bspwm/.config/bspwm/bspwm.py | kevin-hanselman/dotfiles | 0d7ff5703e3dfc3a7ae08b211245c6d43eea0ca5 | [
"MIT"
] | 4 | 2019-07-13T18:10:07.000Z | 2019-10-08T16:27:48.000Z | '''A module for interacting with the bspwm X window manager'''
from subprocess import Popen, PIPE, run, TimeoutExpired
import json
def __process_stdout_generator(proc):
'''Create a generator for a Popen object's STDOUT'''
while proc.poll() is None:
yield from proc.stdout
def subscribe(*subscribe_args, timeout=0.1):
'''Returns a generator for messages from a 'bspc subscribe' call.'''
cmd = ['bspc', 'subscribe'] + list(subscribe_args)
# Check to make sure the command won't error-out immediately
try:
run(cmd, check=True, timeout=timeout)
except TimeoutExpired:
pass
return __process_stdout_generator(Popen(cmd, stdout=PIPE))
def bspc(*bspc_args):
'''Run the bspc command-line tool'''
return run(['bspc'] + list(bspc_args), stdout=PIPE)
def get_state():
'''Returns bspwm's global state JSON object'''
proc = bspc('wm', '-d')
return json.loads(proc.stdout)
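Walking the state that `get_state` returns is plain dict/list traversal; a sketch against a minimal stand-in payload (the `monitors`/`desktops`/`name` keys follow bspwm's `wm -d` schema, and the tiny sample here is invented rather than real bspc output):

```python
import json

def desktop_names(state):
    '''Collect all desktop names from a bspwm state dict'''
    return [d['name']
            for m in state.get('monitors', [])
            for d in m.get('desktops', [])]

# Stand-in for json.loads(bspc('wm', '-d').stdout):
sample = json.loads('{"monitors": [{"desktops": [{"name": "I"}, {"name": "II"}]}]}')
print(desktop_names(sample))  # ['I', 'II']
```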
| 27.764706 | 72 | 0.682203 | 131 | 944 | 4.816794 | 0.496183 | 0.041204 | 0.069731 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002642 | 0.198093 | 944 | 33 | 73 | 28.606061 | 0.830911 | 0.315678 | 0 | 0 | 0 | 0 | 0.033871 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0.058824 | 0.117647 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
d2f5d2ba85efe47a6844821a0e43b9b2046beefc | 265 | py | Python | lang/Python/dutch-national-flag-problem-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/dutch-national-flag-problem-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/dutch-national-flag-problem-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | from itertools import chain
colours_in_order = 'Red White Blue'.split()  # default colour order; defined here so the snippet runs standalone
def dutch_flag_sort2(items, order=colours_in_order):
'return summed filter of items using the given order'
return list(chain.from_iterable([c for c in items if c==colour]
for colour in order))
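An equivalent formulation sorts by each colour's index in the order list; `dutch_flag_sort_keyed` is an illustrative name, and the Rosetta task's usual three colours are assumed:

```python
colours_in_order = 'Red White Blue'.split()

def dutch_flag_sort_keyed(items, order=colours_in_order):
    'sort items by their position in the given colour order'
    return sorted(items, key=order.index)

print(dutch_flag_sort_keyed(['White', 'Blue', 'Red', 'White']))
# ['Red', 'White', 'White', 'Blue']
```

Because `sorted` is stable, items of the same colour keep their relative order.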
| 44.166667 | 67 | 0.667925 | 39 | 265 | 4.410256 | 0.641026 | 0.081395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005181 | 0.271698 | 265 | 5 | 68 | 53 | 0.88601 | 0.192453 | 0 | 0 | 0 | 0 | 0.192453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
961116f3fd8b23cda50d826f4734ff8d74ae8c80 | 116 | py | Python | celsius_to_fahrenheit.py | jjberg83/python_eksperimenter | ea26a6bd4a0cf71e69cbf5015a06db30de811b45 | [
"MIT"
] | null | null | null | celsius_to_fahrenheit.py | jjberg83/python_eksperimenter | ea26a6bd4a0cf71e69cbf5015a06db30de811b45 | [
"MIT"
] | null | null | null | celsius_to_fahrenheit.py | jjberg83/python_eksperimenter | ea26a6bd4a0cf71e69cbf5015a06db30de811b45 | [
"MIT"
] | null | null | null | degrees = float(input('What is the temperature in Celsius? \n'))  # float accepts decimal input like 21.5
fahrenheit = (degrees * 1.8) + 32
print(fahrenheit)
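The same conversion as a reusable pair of functions, with the inverse for checking (the function names are illustrative):

```python
def celsius_to_fahrenheit(celsius):
    # F = C * 9/5 + 32
    return celsius * 1.8 + 32

def fahrenheit_to_celsius(fahrenheit):
    # inverse conversion, handy for round-trip checks
    return (fahrenheit - 32) / 1.8

print(celsius_to_fahrenheit(100))  # 212.0
print(fahrenheit_to_celsius(32))   # 0.0
```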
| 29 | 62 | 0.698276 | 17 | 116 | 4.764706 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040816 | 0.155172 | 116 | 3 | 63 | 38.666667 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.327586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
961b358ba0f5a1d889ad5d3faeb96b97437ddae6 | 407 | py | Python | th_address/models/models.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | th_address/models/models.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | th_address/models/models.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | import logging
from odoo import api, models
_logger = logging.getLogger(__name__)
class geonames_th(models.Model):
_name = "geonames_th.geonames_th"
@api.model
def import_data(self):
TH = self.env.ref("base.th")
geoname_import = self.env["city.zip.geonames.import"]
parse_csv = geoname_import.get_and_parse_csv(TH)
geoname_import._process_csv(parse_csv, TH)
| 23.941176 | 61 | 0.702703 | 57 | 407 | 4.666667 | 0.473684 | 0.112782 | 0.112782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191646 | 407 | 16 | 62 | 25.4375 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 0.132678 | 0.115479 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.545455 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
961eb46d5b68a5c8a7d4576a5bfad12d27379400 | 3,544 | py | Python | DTO/r1_data_check_specific_dto.py | fudo-myo/LST_BBDD | 564db35a08978884c9a5246e509fbb3580e51db2 | [
"MIT"
] | null | null | null | DTO/r1_data_check_specific_dto.py | fudo-myo/LST_BBDD | 564db35a08978884c9a5246e509fbb3580e51db2 | [
"MIT"
] | null | null | null | DTO/r1_data_check_specific_dto.py | fudo-myo/LST_BBDD | 564db35a08978884c9a5246e509fbb3580e51db2 | [
"MIT"
] | null | null | null | class R1DataCheckSpecificDto:
def __init__(self, id_r1_data_check_specific=None, init_event=None, end_event=None, init_pixel=None, end_pixel=None,
init_sample=None, end_sample=None,
init_subrun=None, end_subrun=None, type_of_gap_calc=None, list_of_module_in_detail=None):
self.__id_r1_data_check_specific = id_r1_data_check_specific
self.__init_event = init_event
self.__end_event = end_event
self.__init_pixel = init_pixel
self.__end_pixel = end_pixel
self.__init_sample = init_sample
self.__end_sample = end_sample
self.__init_subrun = init_subrun
self.__end_subrun = end_subrun
self.__type_of_gap_calc = type_of_gap_calc
self.__list_of_module_in_detail = list_of_module_in_detail
@property
def id_r1_data_check_specific(self):
return self.__id_r1_data_check_specific
@property
def init_event(self):
return self.__init_event
@property
def end_event(self):
return self.__end_event
@property
def init_pixel(self):
return self.__init_pixel
@property
def end_pixel(self):
return self.__end_pixel
@property
def init_sample(self):
return self.__init_sample
@property
def end_sample(self):
return self.__end_sample
@property
def init_subrun(self):
return self.__init_subrun
@property
def end_subrun(self):
return self.__end_subrun
@property
def type_of_gap_calc(self):
return self.__type_of_gap_calc
@property
def list_of_module_in_detail(self):
return self.__list_of_module_in_detail
@id_r1_data_check_specific.setter
def id_r1_data_check_specific(self, value):
self.__id_r1_data_check_specific = value
@init_event.setter
def init_event(self, value):
self.__init_event = value
@end_event.setter
def end_event(self, value):
self.__end_event = value
@init_pixel.setter
def init_pixel(self, value):
self.__init_pixel = value
@end_pixel.setter
def end_pixel(self, value):
self.__end_pixel = value
@init_sample.setter
def init_sample(self, value):
self.__init_sample = value
@end_sample.setter
def end_sample(self, value):
self.__end_sample = value
@init_subrun.setter
def init_subrun(self, value):
self.__init_subrun = value
@end_subrun.setter
def end_subrun(self, value):
self.__end_subrun = value
@type_of_gap_calc.setter
def type_of_gap_calc(self, value):
self.__type_of_gap_calc = value
@list_of_module_in_detail.setter
def list_of_module_in_detail(self, value):
self.__list_of_module_in_detail = value
def create_r1_data_check_specific(id_r1_data_check_specific, init_event, end_event, init_pixel, end_pixel, init_sample,
end_sample,
init_subrun, end_subrun, type_of_gap_calc, list_of_module_in_detail):
dto = R1DataCheckSpecificDto()
dto.id_r1_data_check_specific = id_r1_data_check_specific
dto.init_event = init_event
dto.end_event = end_event
dto.init_pixel = init_pixel
dto.end_pixel = end_pixel
dto.init_sample = init_sample
dto.end_sample = end_sample
dto.init_subrun = init_subrun
dto.end_subrun = end_subrun
dto.type_of_gap_calc = type_of_gap_calc
dto.list_of_module_in_detail = list_of_module_in_detail
return dto
| 28.813008 | 120 | 0.694695 | 499 | 3,544 | 4.366733 | 0.06012 | 0.033043 | 0.060578 | 0.104635 | 0.302891 | 0.248279 | 0.167967 | 0.117485 | 0.093621 | 0.075264 | 0 | 0.005203 | 0.240688 | 3,544 | 122 | 121 | 29.04918 | 0.804534 | 0 | 0 | 0.113402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.247423 | false | 0 | 0 | 0.113402 | 0.381443 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
961f44bd18c7d0aaac666c6f2d7708f2f30a183e | 340 | py | Python | home/kwatters/harry/gestures/takethis.py | rv8flyboy/pyrobotlab | 4e04fb751614a5cb6044ea15dcfcf885db8be65a | [
"Apache-2.0"
] | 63 | 2015-02-03T18:49:43.000Z | 2022-03-29T03:52:24.000Z | home/kwatters/harry/gestures/takethis.py | hirwaHenryChristian/pyrobotlab | 2debb381fc2db4be1e7ea6e5252a50ae0de6f4a9 | [
"Apache-2.0"
] | 16 | 2016-01-26T19:13:29.000Z | 2018-11-25T21:20:51.000Z | home/kwatters/harry/gestures/takethis.py | hirwaHenryChristian/pyrobotlab | 2debb381fc2db4be1e7ea6e5252a50ae0de6f4a9 | [
"Apache-2.0"
] | 151 | 2015-01-03T18:55:54.000Z | 2022-03-04T07:04:23.000Z | def takethis():
fullspeed()
i01.moveHead(14,90)
i01.moveArm("left",13,45,95,10)
i01.moveArm("right",5,90,30,10)
i01.moveHand("left",2,2,2,2,2,60)
i01.moveHand("right",81,66,82,60,105,113)
i01.moveTorso(85,76,90)
sleep(3)
closelefthand()
i01.moveTorso(110,90,90)
sleep(2)
isitaball()
i01.mouth.speak("what is it")
| 21.25 | 43 | 0.655882 | 61 | 340 | 3.655738 | 0.606557 | 0.035874 | 0.040359 | 0.035874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.239865 | 0.129412 | 340 | 15 | 44 | 22.666667 | 0.513514 | 0 | 0 | 0 | 0 | 0 | 0.082596 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | true | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8248894ceb22eeeb8e9f958c335140b2554bdb8f | 619 | py | Python | shop/source/database/__init__.py | icYFTL/RTULAB_Service | a16d0fc2ac9ac103f0a14e90824caded7156bf11 | [
"Apache-2.0"
] | null | null | null | shop/source/database/__init__.py | icYFTL/RTULAB_Service | a16d0fc2ac9ac103f0a14e90824caded7156bf11 | [
"Apache-2.0"
] | null | null | null | shop/source/database/__init__.py | icYFTL/RTULAB_Service | a16d0fc2ac9ac103f0a14e90824caded7156bf11 | [
"Apache-2.0"
] | null | null | null | from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from core import db_config
conn_str = 'postgresql+psycopg2://{username}:{password}@{host}/{db}'.format(
username=db_config['user'],
password=db_config['password'],
host=db_config['host'],
db=db_config['db']
)
Engine = create_engine(conn_str, echo=False)
Base = declarative_base()
from .models import *
Base.metadata.create_all(Engine, checkfirst=True)
Session = sessionmaker(bind=Engine)
from .methods import *
__all__ = ['Session', 'Engine', 'models', 'methods']
| 25.791667 | 76 | 0.749596 | 79 | 619 | 5.670886 | 0.405063 | 0.089286 | 0.084821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001828 | 0.116317 | 619 | 23 | 77 | 26.913043 | 0.817185 | 0 | 0 | 0 | 0 | 0 | 0.159935 | 0.088853 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.117647 | 0.352941 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
8255fff10ac99c19922046584b5b3ffd3fa5ee29 | 19,985 | py | Python | plotcounts/src/d1state/system_state.py | vdave/d1_environment_status | 65368e21e649d060e085df2e844964f74ac0f985 | [
"Apache-2.0"
] | null | null | null | plotcounts/src/d1state/system_state.py | vdave/d1_environment_status | 65368e21e649d060e085df2e844964f74ac0f985 | [
"Apache-2.0"
] | null | null | null | plotcounts/src/d1state/system_state.py | vdave/d1_environment_status | 65368e21e649d060e085df2e844964f74ac0f985 | [
"Apache-2.0"
] | null | null | null | '''
Created on Feb 27, 2014
@author: vieglais
Generates a JSON object that provides a high level description of the state of
a DataONE environment at the time.
The resulting JSON can be processed with Javascript and HTML to provide a
state view, or can be loaded back into the Python structure for additional
processing.
'''
import logging
import pprint
import datetime
import json
import socket
import httplib
import math
import dns.resolver
import d1_common.types.exceptions
from d1_client import cnclient, cnclient_1_1
from d1_client import mnclient
def getNow(asDate=False):
  '''Return the current time in UTC as a datetime. (asDate is currently unused.)'''
  return datetime.datetime.utcnow()
def getNowString(ctime=None):
if ctime is None:
ctime = getNow()
return ctime.strftime("%Y-%m-%d %H:%M:%S.0+00:00")
def dateTimeToListObjectsTime(dt):
'''Return a string representation of a datetime that can be used in toDate or
fromDate in a listObject API call.
%Y-%m-%dT%H:%M:%S
'''
return dt.strftime("%Y-%m-%dT%H:%M:%S")
def dateTimeToSOLRTime(dt):
'''Return a string representation of a datetime that can be used in SOLR
  queries against dates such as dateUploaded.
'''
return dt.strftime("%Y-%m-%dT%H:%M:%S.000Z")
def escapeQueryTerm(term):
  '''Escape Lucene/Solr query syntax characters in term:
  + - && || ! ( ) { } [ ] ^ " ~ * ? : \
  '''
reserved = ['+','-','&','|','!','(',')','{','}','[',']','^','"','~','*','?',':',]
term = term.replace(u'\\',u'\\\\')
for c in reserved:
term = term.replace(c,u"\%s" % c)
return term
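For reference, the escaping produces strings like these; the function is repeated (with the same logic) so the example is self-contained:

```python
def escapeQueryTerm(term):
    # Backslashes first, then each reserved Solr/Lucene character
    reserved = ['+','-','&','|','!','(',')','{','}','[',']','^','"','~','*','?',':',]
    term = term.replace(u'\\', u'\\\\')
    for c in reserved:
        term = term.replace(c, u"\\%s" % c)
    return term

print(escapeQueryTerm('title:(my data)'))  # title\:\(my data\)
```

Escaping backslashes before the loop matters: otherwise the backslashes the loop inserts would themselves be doubled.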
class NodeState(object):
def __init__(self, baseURL):
self.log = logging.getLogger(str(self.__class__.__name__))
self.baseurl = baseURL
self.clientv1 = mnclient.MemberNodeClient( self.baseurl )
def count(self):
'''
Return the number of objects on the node as reported by listObjects
Exceptions.NotAuthorized – (errorCode=401, detailCode=1520)
Exceptions.InvalidRequest – (errorCode=400, detailCode=1540)
Exceptions.NotImplemented –
(errorCode=501, detailCode=1560)
Raised if some functionality requested is not implemented. In the case of an optional request parameter not being supported, the errorCode should be 400. If the requested format (through HTTP Accept headers) is not supported, then the standard HTTP 406 error code should be returned.
Exceptions.ServiceFailure – (errorCode=500, detailCode=1580)
Exceptions.InvalidToken – (errorCode=401, detailCode=1530)
exception httplib.HTTPException
exception httplib.NotConnected -10
exception httplib.InvalidURL -11
exception httplib.UnknownProtocol -12
exception httplib.UnknownTransferEncoding -13
exception httplib.UnimplementedFileMode -14
exception httplib.IncompleteRead -15
exception httplib.ImproperConnectionState -16
exception httplib.CannotSendRequest -17
exception httplib.CannotSendHeader -18
exception httplib.ResponseNotReady -19
exception httplib.BadStatusLine -20
'''
try:
res = self.clientv1.listObjects(start=0, count=0)
return res.total
except d1_common.types.exceptions.NotAuthorized as e:
self.log.error(e)
return -401
except d1_common.types.exceptions.InvalidRequest as e:
self.log.error(e)
return -400
except d1_common.types.exceptions.NotImplemented as e:
self.log.error(e)
return -501
except d1_common.types.exceptions.ServiceFailure as e:
self.log.error(e)
return -500
except d1_common.types.exceptions.InvalidToken as e:
self.log.error(e)
return -401
except httplib.NotConnected as e:
self.log.error(e)
return -10
except httplib.InvalidURL as e:
self.log.error(e)
return -11
except httplib.UnknownProtocol as e:
self.log.error(e)
return -12
except httplib.UnknownTransferEncoding as e:
self.log.error(e)
return -13
except httplib.UnimplementedFileMode as e:
self.log.error(e)
return -14
except httplib.IncompleteRead as e:
self.log.error(e)
return -15
except httplib.ImproperConnectionState as e:
self.log.error(e)
return -16
except httplib.CannotSendRequest as e:
self.log.error(e)
return -17
except httplib.CannotSendHeader as e:
self.log.error(e)
return -18
except httplib.ResponseNotReady as e:
self.log.error(e)
return -19
except httplib.BadStatusLine as e:
self.log.error(e)
return -20
except socket.error as e:
self.log.error(e)
#See notes.md for a list of error codes
if hasattr(e, 'errno'):
if e.errno is not None:
return -1000 - e.errno
return -21
except Exception as e:
'''Something else. Need to examine the client connection object
'''
self.log.error("Error not trapped by standard exception.")
self.log.error(e)
return -1
class EnvironmentState(object):
#increment the version flag if there's a change to the generated data structure
VERSION = "18"
COUNT_PUBLIC = None
COUNT_PUBLIC_CURRENT = "-obsoletedBy:[* TO *]"
TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S.0+00:00"
JS_VARIABLE_STATE = "var env_state = "
JS_VARIABLE_INDEX = "var env_state_index = "
JS_VARIABLE_NODES = "var node_state_index = "
#TODO: These IP addresses are specific to the production environment and
#include changes to UCSB and ORC
CN_IP_ADDRESSES = ['160.36.134.71',
'128.111.220.46',
'128.111.54.80',
'128.111.36.80',
'160.36.13.150',
'64.106.40.6',
#'128.219.49.14', #This is a proxy server at ORNL
'128.111.220.51', #UCSB Nagios
'128.111.84.5', ] #UCSB Nagios
LOG_EVENTS = [['create','Created using DataONE API'],
['read', 'Content downloaded'],
['read.ext', 'Content downloaded by entities other than CNs'],
['update', 'Updated'],
['delete', 'Deleted'],
['replicate', 'Content retrieved by replication process'],
['synchronization_failed', 'Attempt to synchronize failed'],
['replication_failed', 'Attempt to replicate failed'],
]
def __init__(self, baseurl, cert_path=None):
self.log = logging.getLogger(str(self.__class__.__name__))
self.log.debug("Initializing...")
self.baseurl = baseurl
self.state = {'meta':None,
'formats':None,
'nodes':None,
'counts':None,
'summary': None,
'dns': None,
'logs': None,
}
self.clientv1 = cnclient.CoordinatingNodeClient( self.baseurl,
cert_path=cert_path )
self.clientv11 = cnclient_1_1.CoordinatingNodeClient( self.baseurl,
cert_path=cert_path )
def __str__(self):
return pprint.pformat( self.state )
def populateState(self):
'''Populates self.state with current environment status
'''
self.tstamp = getNow()
meta = {'tstamp': getNowString(self.tstamp),
'baseurl': self.baseurl,
'version': EnvironmentState.VERSION,
'count_meta': {0:'ALL',
1:EnvironmentState.COUNT_PUBLIC,
2:EnvironmentState.COUNT_PUBLIC_CURRENT}
}
self.state['meta'] = meta
self.state['formats'] = self.getFormats()
self.state['nodes'] = self.getNodes()
self.state['dns'] = self.getDNSInfo()
self.state['logs'] = self.getLogSummary()
self.state['counts'] = self.getCounts()
self.state['summary'] = self.summarizeCounts()
self.state['summary']['sizes'] = self.getObjectTypeSizeHistogram()
def retrieveLogResponse(self, q, fq=None):
self.clientv1.connection.close()
url = self.clientv1._rest_url('log')
query = {'q': q}
if not fq is None:
query['fq'] = fq
#logging.info("URL = %s" % url)
response = self.clientv1.GET(url, query)
logrecs = self.clientv1._read_dataone_type_response(response)
return logrecs.total
def getLogSummary(self):
periods = [['Day', 'dateLogged:[NOW-1DAY TO NOW]', 'Past day'],
['Week', 'dateLogged:[NOW-7DAY TO NOW]', 'Past week'],
['Month', 'dateLogged:[NOW-1MONTH TO NOW]', 'Past month'],
['Year', 'dateLogged:[NOW-1YEAR TO NOW]', 'Past year'],
['All', 'dateLogged:[2012-07-01T00:00:00.000Z TO NOW]', 'Since July 1, 2012'],
]
res = {'events': EnvironmentState.LOG_EVENTS,
'periods': [[p[0], p[2]] for p in periods],
'data': {}}
exclude_cns = "-ipAddress:({0})"\
.format( " OR ".join(EnvironmentState.CN_IP_ADDRESSES))
for event in EnvironmentState.LOG_EVENTS:
res['data'][event[0]] = {}
for period in periods:
self.log.info('Log for {0} over {1}'.format(event[0], period[0]))
if event[0].endswith('.ext'):
ev = event[0].split(".")[0]
q = "event:{0} AND {1}".format(ev, exclude_cns)
else:
q = "event:{0}".format(event[0])
fq = period[1]
nrecords = self.retrieveLogResponse(q, fq=fq)
res['data'][event[0]][period[0]] = nrecords
return res
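The period filters above are plain Solr date-range clauses applied as `fq` parameters. A minimal, standalone sketch of how those filter strings are assembled (the function name is illustrative, not part of the original module):

```python
# Sketch of the Solr date-range filter queries built in getLogSummary();
# field name and period labels mirror the table above.
def build_period_filters(field="dateLogged"):
    periods = [
        ("Day", "NOW-1DAY"),
        ("Week", "NOW-7DAY"),
        ("Month", "NOW-1MONTH"),
        ("Year", "NOW-1YEAR"),
    ]
    # each entry becomes a range clause usable as an fq parameter
    return {name: "{0}:[{1} TO NOW]".format(field, start)
            for name, start in periods}
```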
def getDNSInfo(self):
#TODO: Make this responsive to the CNode specified in the constructor
res = {'cn-ucsb-1.dataone.org':{},
'cn-unm-1.dataone.org':{},
'cn-orc-1.dataone.org':{},
'cn.dataone.org':{}
}
for k in res.keys():
info = dns.resolver.query(k)
res[k]['address'] = []
for ip in info:
res[k]['address'].append(ip.to_text())
return res
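`getDNSInfo` uses dnspython's resolver; the same one-host-in, address-list-out lookup can be sketched with only the standard library (the helper name is an assumption, not the original API):

```python
import socket

# Stdlib alternative to the dns.resolver.query() lookup above.
def resolve_addresses(hostname):
    infos = socket.getaddrinfo(hostname, None)
    # getaddrinfo yields (family, type, proto, canonname, sockaddr) tuples;
    # the address string is the first element of sockaddr
    return sorted({info[4][0] for info in infos})
```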
def getCountsToDate(self, to_date, exclude_listObjects=False):
self.tstamp = getNow()
meta = {'tstamp': getNowString(self.tstamp),
'baseurl': self.baseurl,
'version': EnvironmentState.VERSION,
'count_meta': {0:'ALL',
1:EnvironmentState.COUNT_PUBLIC,
2:EnvironmentState.COUNT_PUBLIC_CURRENT}
}
self.state['meta'] = meta
self.state['formats'] = self.getFormats()
self.state['counts'] = self.getCounts(as_of_date = to_date,
exclude_listObjects=exclude_listObjects)
self.state['summary'] = self.summarizeCounts()
def getNodes(self):
'''Returns a dictionary of node information, keyed by nodeId
'''
def syncschedule_array(s):
if s is None:
return []
# returned order: year, mon, mday, wday, hour, min, sec
return [s.year, s.mon, s.mday, s.wday, s.hour, s.min, s.sec]
res = {}
nodes = self.clientv1.listNodes()
for node in nodes.node:
entry = {'name' : node.name,
'description' : node.description,
'baseurl' : node.baseURL,
'type' : node.type,
'state': node.state,
'objectcount': -1,
}
sync = node.synchronization
if sync is not None:
entry['sync.schedule'] = syncschedule_array(sync.schedule)
entry['sync.lastHarvested'] = sync.lastHarvested.strftime("%Y-%m-%d %H:%M:%S.0%z")
entry['sync.lastCompleteHarvest'] = sync.lastCompleteHarvest.strftime("%Y-%m-%d %H:%M:%S.0%z")
#Call list objects to get a count
self.log.info("Attempting node count on {0}".format(node.baseURL))
ns = NodeState(node.baseURL)
entry['objectcount'] = ns.count()
res[node.identifier.value()] = entry
return res
def getFormats(self):
res = {}
formats = self.clientv1.listFormats()
for format in formats.objectFormat:
res[format.formatId] = {'name' : format.formatName,
'type' : format.formatType}
return res
def _countAll(self, counts, as_of_date=None):
'''Returns object counts by formatId using listObjects
Requires that self.state['formats'] has been populated
'''
to_date = None
if as_of_date is not None:
to_date = dateTimeToListObjectsTime(as_of_date)
for formatId in self.state['formats'].keys():
res = self.clientv11.listObjects(count=0,
formatId=formatId,
toDate=to_date)
self.log.info("{0:s} : {1:d}".format(formatId, res.total))
self.state['counts'][formatId][0] = res.total
def _countSOLR(self, counts, col=1, fq=None, as_of_date=None):
'''Populates counts
'''
queryEngine = "solr"
query='/'
maxrecords = 0
fields = 'id'
date_restriction = ''
if as_of_date is not None:
date_restriction = " AND dateUploaded:[* TO {0:s}]".format(dateTimeToSOLRTime(as_of_date))
for formatId in self.state['formats'].keys():
q = "formatId:\"{0:s}\"".format(escapeQueryTerm(formatId))
q = q + date_restriction
ntries = 0
while ntries < 4:
try:
ntries += 1
results = eval(self.clientv1.query(queryEngine, query=query,
q=q,
fq=fq,
wt='python',
fl=fields,
rows=maxrecords).read())
break
except httplib.BadStatusLine as e:
self.log.warn(e)
nHits = results['response']['numFound']
self.state['counts'][formatId][col] = nHits
self.log.info("{0:s} : {1:d}".format(formatId, nHits))
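`_countSOLR` (and `getSOLRResponse` below) retry a flaky HTTP call up to four times. That pattern can be factored into a small helper; this is a sketch of the same idea, not part of the original module:

```python
# Generic bounded-retry helper mirroring the "try up to 4 times" loops
# in this class; names are illustrative.
def with_retries(func, attempts=4, on_error=None):
    last_exc = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as e:  # the original catches httplib.BadStatusLine
            last_exc = e
            if on_error:
                on_error(e)
    # all attempts failed: surface the last error instead of silently
    # leaving the result undefined
    raise last_exc
```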
def getCounts(self, as_of_date=None, exclude_listObjects=False):
'''return object counts, optionally as of the specified date (datetime)
'''
#initialize the storage space
counts = {}
for formatId in self.state['formats'].keys():
counts[formatId] = [0, 0, 0]
self.state['counts'] = counts
#populate the number of all objects
for k in self.state['meta']['count_meta'].keys():
if k == 0:
if not exclude_listObjects:
self._countAll(counts, as_of_date=as_of_date)
else:
self._countSOLR(counts,
col=k,
fq=self.state['meta']['count_meta'][k],
as_of_date=as_of_date)
return counts
def getObjectSizeHistogram(self, q="*:*", nbins=10):
'''Returns a list of [size_low, size_high, count] for objects that match
the specified query.
To find minimum value:
https://cn.dataone.org/cn/v1/query/solr/?fl=size&sort=size%20asc&q=*:*&rows=1
to find maximum value:
https://cn.dataone.org/cn/v1/query/solr/?fl=size&sort=size%20desc&q=*:*&rows=1
'''
def getSOLRResponse(q, maxrecords, fields, rsort, fq=None):
ntries = 0
while ntries < 4:
try:
ntries += 1
results = eval(self.clientv1.query("solr", query="/",
q=q,
fq=fq,
wt='python',
fl=fields,
sort=rsort,
rows=maxrecords).read())
return results
except httplib.BadStatusLine as e:
self.log.warn(e)
return None
minval = getSOLRResponse(q, 1, 'size', "size asc")['response']['docs'][0]['size']
maxval = getSOLRResponse(q, 1, 'size', "size desc")['response']['docs'][0]['size']
if minval < 1:
minval = 1
lminval = math.log10(minval)
lmaxval = math.log10(maxval)
binsize = (lmaxval - lminval) / (nbins*1.0)
res = []
for i in xrange(0, nbins):
row = [math.pow(10, lminval + i*binsize),
math.pow(10, lminval + (i+1)*binsize),
0]
res.append(row)
for i in xrange(0, nbins):
row = res[i]
if i == 0:
fq = "size:[{0:d} TO {1:d}]".format(math.trunc(row[0]), math.trunc(row[1]))
elif i == nbins-1:
fq = "size:[{0:d} TO {1:d}]".format(math.trunc(row[0]), math.trunc(row[1])+1)
else:
fq = "size:[{0:d} TO {1:d}]".format(math.trunc(row[0])+1, math.trunc(row[1]))
n = getSOLRResponse(q, 0, 'size', 'size asc', fq=fq)['response']['numFound']
res[i][2] = n
return {"minimum": minval,
"maximum": maxval,
"histogram": res}
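The bin boundaries above are spaced evenly in log10 space between the smallest and largest object size. The same boundary math, stripped of the Solr queries (function name is illustrative):

```python
import math

# Log10-spaced bin edges as computed in getObjectSizeHistogram().
def log_bin_edges(minval, maxval, nbins=10):
    minval = max(minval, 1)  # sizes below 1 byte are clamped, as above
    lmin, lmax = math.log10(minval), math.log10(maxval)
    binsize = (lmax - lmin) / float(nbins)
    return [(math.pow(10, lmin + i * binsize),
             math.pow(10, lmin + (i + 1) * binsize))
            for i in range(nbins)]
```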
def getObjectTypeSizeHistogram(self):
res = {'data':[],
'metadata':[],
'resource':[]}
res['data'] = self.getObjectSizeHistogram(q="formatType:DATA")
res['metadata'] = self.getObjectSizeHistogram(q="formatType:METADATA")
res['resource'] = self.getObjectSizeHistogram(q="formatType:RESOURCE")
return res
def summarizeCounts(self):
'''Computes summary totals for DATA, METADATA, and RESOURCE objects
'''
totalcols = ['data', 'meta', 'resource']
summary = {'all': {'data':0, 'meta': 0, 'resource': 0, 'total': 0},
'public': {'data':0, 'meta': 0, 'resource': 0, 'total': 0},
'public_notobsolete': {'data':0, 'meta': 0, 'resource': 0, 'total': 0}
}
for fmt in self.state['formats'].keys():
if self.state['formats'][fmt]['type'] == 'DATA':
summary['all']['data'] = summary['all']['data'] + self.state['counts'][fmt][0]
summary['public']['data'] = summary['public']['data'] + self.state['counts'][fmt][1]
summary['public_notobsolete']['data'] = summary['public_notobsolete']['data'] + self.state['counts'][fmt][2]
elif self.state['formats'][fmt]['type'] == 'METADATA':
summary['all']['meta'] = summary['all']['meta'] + self.state['counts'][fmt][0]
summary['public']['meta'] = summary['public']['meta'] + self.state['counts'][fmt][1]
summary['public_notobsolete']['meta'] = summary['public_notobsolete']['meta'] + self.state['counts'][fmt][2]
elif self.state['formats'][fmt]['type'] == 'RESOURCE':
summary['all']['resource'] = summary['all']['resource'] + self.state['counts'][fmt][0]
summary['public']['resource'] = summary['public']['resource'] + self.state['counts'][fmt][1]
summary['public_notobsolete']['resource'] = summary['public_notobsolete']['resource'] + self.state['counts'][fmt][2]
for ctype in summary.keys():
summary[ctype]['total'] = summary[ctype]['data']
summary[ctype]['total'] = summary[ctype]['total'] + summary[ctype]['meta']
summary[ctype]['total'] = summary[ctype]['total'] + summary[ctype]['resource']
self.state['summary'] = {'counts' : summary}
return summary
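`summarizeCounts` folds the per-format count columns (`[all, public, public_current]`) into per-type totals. A compact sketch of the same aggregation over plain dicts (sample data below is invented):

```python
# Aggregation pattern from summarizeCounts(): per-formatId count columns
# are summed into data/meta/resource buckets per access group.
def summarize(formats, counts):
    groups = ('all', 'public', 'public_notobsolete')
    summary = {grp: {'data': 0, 'meta': 0, 'resource': 0, 'total': 0}
               for grp in groups}
    colmap = {'DATA': 'data', 'METADATA': 'meta', 'RESOURCE': 'resource'}
    for fmt, info in formats.items():
        key = colmap.get(info['type'])
        if key is None:
            continue
        for col, grp in enumerate(groups):
            summary[grp][key] += counts[fmt][col]
    for grp in groups:
        s = summary[grp]
        s['total'] = s['data'] + s['meta'] + s['resource']
    return summary
```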
def asJSON(self, outStream):
outStream.write(EnvironmentState.JS_VARIABLE_STATE)
json.dump(self.state, outStream, indent=2)
def fromJSON(self, inStream):
jbuffer = inStream.read()
self.state = json.loads(jbuffer[len(EnvironmentState.JS_VARIABLE_STATE):])
self.tstamp = datetime.datetime.strptime(self.state['meta']['tstamp'], "%Y-%m-%d %H:%M:%S.0+00:00")
def getTStamp(self):
return self.state['meta']['tstamp']
#===============================================================================
def test1(baseurl="https://cn.dataone.org/cn"):
es = EnvironmentState(baseurl)
pprint.pprint(es.getNodes())
def test2(baseurl="https://cn.dataone.org/cn"):
es = EnvironmentState(baseurl)
pprint.pprint(es.getFormats())
def test3(baseurl="https://knb.ecoinformatics.org/knb/d1/mn"):
ns = NodeState(baseurl)
n = ns.count()
print "{0} : {1}".format(baseurl, n)
def main(baseurl="https://cn.dataone.org/cn"):
es = EnvironmentState(baseurl)
es.populateState()
print es
#===============================================================================
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG)
test3()
#test1()
#test2()
#main()
| 35.560498 | 283 | 0.587491 | 2,391 | 19,985 | 4.849017 | 0.205772 | 0.032603 | 0.011471 | 0.016388 | 0.286614 | 0.239693 | 0.230119 | 0.174832 | 0.152924 | 0.106003 | 0 | 0.027522 | 0.260045 | 19,985 | 561 | 284 | 35.623886 | 0.756154 | 0.036527 | 0 | 0.216958 | 1 | 0 | 0.155122 | 0.010136 | 0 | 0 | 0 | 0.003565 | 0 | 0 | null | null | 0 | 0.027431 | null | null | 0.014963 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
826e31959e7709d82188c71c395013b127847d6b | 386 | py | Python | hendrix/contrib/concurrency/signals.py | bliedblad/hendrix | 90cb4a0f020873acbe9a350840c61d68a10501a0 | [
"MIT"
] | 309 | 2015-08-14T18:54:57.000Z | 2021-12-20T18:53:33.000Z | hendrix/contrib/concurrency/signals.py | bliedblad/hendrix | 90cb4a0f020873acbe9a350840c61d68a10501a0 | [
"MIT"
] | 77 | 2015-08-14T21:07:09.000Z | 2021-07-19T21:50:17.000Z | hendrix/contrib/concurrency/signals.py | bliedblad/hendrix | 90cb4a0f020873acbe9a350840c61d68a10501a0 | [
"MIT"
] | 40 | 2015-08-14T20:59:38.000Z | 2021-07-18T20:16:36.000Z | """
Signals for easy use in django projects
"""
try:
from django import dispatch
short_task = dispatch.Signal(providing_args=["args", "kwargs"])
long_task = dispatch.Signal(providing_args=["args", "kwargs"])
message_signal = dispatch.Signal(providing_args=["data", "dispatcher"])
USE_DJANGO_SIGNALS = True
except ImportError:
USE_DJANGO_SIGNALS = False | 22.705882 | 75 | 0.707254 | 46 | 386 | 5.717391 | 0.543478 | 0.159696 | 0.262357 | 0.307985 | 0.311787 | 0.311787 | 0.311787 | 0 | 0 | 0 | 0 | 0 | 0.173575 | 386 | 17 | 76 | 22.705882 | 0.824451 | 0.101036 | 0 | 0 | 0 | 0 | 0.10119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
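The hendrix module above defines its Django signals only when `django` is importable, exposing a boolean feature flag either way. The same optional-dependency pattern, sketched with a deliberately nonexistent module so the fallback branch is the one that runs:

```python
# Optional-dependency flag pattern: define the feature when the import
# succeeds, otherwise fall back and record that it is unavailable.
try:
    import _no_such_optional_dependency  # placeholder name, never installed
    USE_OPTIONAL_FEATURE = True
except ImportError:
    USE_OPTIONAL_FEATURE = False
```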
827009f60ad9b200a7fda26ccc32a7a305e5914c | 3,207 | py | Python | threat_connect/komand_threat_connect/actions/create_task/schema.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | threat_connect/komand_threat_connect/actions/create_task/schema.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | threat_connect/komand_threat_connect/actions/create_task/schema.py | xhennessy-r7/insightconnect-plugins | 59268051313d67735b5dd3a30222eccb92aca8e9 | [
"MIT"
] | null | null | null | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
class Input:
ASSIGNEE = "assignee"
ATTRIBUTES = "attributes"
DUE_DATE = "due_date"
ESCALATED = "escalated"
ESCALATEE = "escalatee"
ESCALATION_DATE = "escalation_date"
NAME = "name"
OVERDUE = "overdue"
REMINDED = "reminded"
REMINDER_DATE = "reminder_date"
SECURITY_LABEL = "security_label"
STATUS = "status"
TAGS = "tags"
class Output:
ID = "id"
class CreateTaskInput(komand.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"assignee": {
"type": "string",
"title": "Assignee",
"description": "Task Assignee",
"order": 11
},
"attributes": {
"type": "array",
"title": "Attributes",
"description": "Task Attributes",
"items": {
"type": "object"
},
"order": 2
},
"due_date": {
"type": "string",
"title": "Due Date",
"displayType": "date",
"description": "Task due date",
"format": "date-time",
"order": 4
},
"escalated": {
"type": "boolean",
"title": "Escalated",
"description": "Use task escalation",
"order": 7
},
"escalatee": {
"type": "string",
"title": "Escalatee",
"description": "Task escalatee",
"order": 12
},
"escalation_date": {
"type": "string",
"title": "Escalation Date",
"displayType": "date",
"description": "Task escalation date",
"format": "date-time",
"order": 5
},
"name": {
"type": "string",
"title": "Name",
"description": "Task Name",
"order": 1
},
"overdue": {
"type": "boolean",
"title": "Overdue",
"description": "Is task overdue",
"order": 8
},
"reminded": {
"type": "boolean",
"title": "Reminded",
"description": "Use task Reminder",
"order": 9
},
"reminder_date": {
"type": "string",
"title": "Reminder Date",
"displayType": "date",
"description": "Task reminder date",
"format": "date-time",
"order": 6
},
"security_label": {
"type": "string",
"title": "Security Label",
"description": "Task security label",
"order": 13
},
"status": {
"type": "string",
"title": "Status",
"description": "Task status",
"enum": [
"In Progress",
"Completed",
"Waiting on Someone",
"Deferred"
],
"order": 10
},
"tags": {
"type": "string",
"title": "Tags",
"description": "Task tags comma delimited",
"order": 3
}
},
"required": [
"name"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
class CreateTaskOutput(komand.Output):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"id": {
"type": "integer",
"title": "Task ID",
"description": "Task ID",
"order": 1
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
| 20.824675 | 57 | 0.505145 | 281 | 3,207 | 5.637011 | 0.27758 | 0.104167 | 0.085227 | 0.035985 | 0.224116 | 0.116162 | 0.116162 | 0.116162 | 0.054293 | 0.054293 | 0 | 0.008163 | 0.312442 | 3,207 | 153 | 58 | 20.960784 | 0.710204 | 0.011537 | 0 | 0.239437 | 1 | 0 | 0.800189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0 | 0.014085 | 0 | 0.169014 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8275a1d4f50cd8c414f8021a48a1f2de4296d247 | 467 | py | Python | dashboard_api/models/timelapse.py | NASA-IMPACT/eis-pilot-dashboard-api | 18f28d1fcc8bd982b4161754f5852fb247fc77d0 | [
"MIT"
] | 1 | 2021-05-19T17:25:25.000Z | 2021-05-19T17:25:25.000Z | dashboard_api/models/timelapse.py | NASA-IMPACT/eis-pilot-dashboard-api | 18f28d1fcc8bd982b4161754f5852fb247fc77d0 | [
"MIT"
] | 9 | 2021-05-03T18:22:22.000Z | 2021-09-30T19:43:52.000Z | dashboard_api/models/timelapse.py | NASA-IMPACT/eis-pilot-dashboard-api | 18f28d1fcc8bd982b4161754f5852fb247fc77d0 | [
"MIT"
1 | 2021-06-22T17:08:46.000Z | 2021-06-22T17:08:46.000Z | """Timelapse models."""
from geojson_pydantic.features import Feature
from geojson_pydantic.geometries import Polygon
from pydantic import BaseModel
class PolygonFeature(Feature):
"""Feature model."""
geometry: Polygon
class TimelapseValue(BaseModel):
""""Timelapse values model."""
mean: float
median: float
class TimelapseRequest(BaseModel):
""""Timelapse request model."""
month: str
geojson: PolygonFeature
type: str
| 17.296296 | 47 | 0.715203 | 47 | 467 | 7.06383 | 0.553191 | 0.066265 | 0.114458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184154 | 467 | 26 | 48 | 17.961538 | 0.871391 | 0.17773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
828220f108f84bbc82195a5469ba50a2b32a9f13 | 512 | py | Python | example/table_example.py | lakdred/pyecharts | 02050acb0e94bb9453b88a25028de7a0ce23f125 | [
"MIT"
] | 1 | 2019-06-29T09:37:45.000Z | 2019-06-29T09:37:45.000Z | example/table_example.py | lakdred/pyecharts | 02050acb0e94bb9453b88a25028de7a0ce23f125 | [
"MIT"
] | null | null | null | example/table_example.py | lakdred/pyecharts | 02050acb0e94bb9453b88a25028de7a0ce23f125 | [
"MIT"
] | 1 | 2021-01-18T10:17:01.000Z | 2021-01-18T10:17:01.000Z | # coding=utf-8
from pyecharts.components import Table
def table_base():
table = Table()
headers = ["City name", "Area", "Population", "Annual Rainfall"]
rows = [
["Brisbane", 5905, 1857594, 1146.4],
["Adelaide", 1295, 1158259, 600.5],
["Darwin", 112, 120900, 1714.7],
["Hobart", 1357, 205556, 619.5],
["Sydney", 2058, 4336374, 1214.8],
["Melbourne", 1566, 3806092, 646.9],
["Perth", 5386, 1554769, 869.4],
]
table.add(headers, rows)
| 26.947368 | 68 | 0.558594 | 60 | 512 | 4.75 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.278947 | 0.257813 | 512 | 18 | 69 | 28.444444 | 0.471053 | 0.023438 | 0 | 0 | 0 | 0 | 0.172691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
828ad861f6573dcf74d6d87a19b86829e86914bd | 1,272 | py | Python | S2/S2_Modular_Code/ModelTransfer.py | pankaj90382/TSAI-2 | af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778 | [
"MIT"
] | null | null | null | S2/S2_Modular_Code/ModelTransfer.py | pankaj90382/TSAI-2 | af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778 | [
"MIT"
] | 9 | 2021-06-08T22:18:08.000Z | 2022-03-12T00:46:43.000Z | S2/S2_Modular_Code/ModelTransfer.py | pankaj90382/TSAI-2 | af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778 | [
"MIT"
] | 1 | 2020-10-12T17:13:35.000Z | 2020-10-12T17:13:35.000Z | from torchsummary import summary
import torch
import torch.nn as nn
import torch.nn.functional as F
from ModelTrainer import ModelTrainer
import Resnet as rn
class Net():
"""
Base network that defines helper functions, summary and mapping to device
"""
def __init__(self, model):
self.trainer = None
self.model = model
def summary(self, input_size): #input_size=(1, 28, 28)
summary(self.model, input_size=input_size)
def gotrain(self, optimizer, train_loader, test_loader, dataloader, epochs, statspath, scheduler=None, batch_scheduler=False, L1lambda=0, LossType='CrossEntropyLoss', tb=None):
self.trainer = ModelTrainer(self.model, optimizer, train_loader, test_loader, dataloader, statspath, scheduler, batch_scheduler, L1lambda, LossType, tb)
self.trainer.run(epochs)
def resumerun(self, epochs):
self.trainer.run(epochs)
def modelload(self, path):
self.model.load_state_dict(torch.load(path))
def stats(self):
return self.trainer.stats if self.trainer else None
def getmodel(self):
return self.model if self.model else None
def setmodel(self, model):
self.model = model | 34.378378 | 181 | 0.669811 | 159 | 1,272 | 5.257862 | 0.396226 | 0.09689 | 0.0311 | 0.043062 | 0.150718 | 0.095694 | 0 | 0 | 0 | 0 | 0 | 0.008299 | 0.242138 | 1,272 | 37 | 182 | 34.378378 | 0.858921 | 0.075472 | 0 | 0.16 | 0 | 0 | 0.014222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.32 | false | 0 | 0.24 | 0.08 | 0.68 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
828f4e2b1ccb540c7a56d21483959504b6f04e0c | 168 | py | Python | src/data_management/dto/Data.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | null | null | null | src/data_management/dto/Data.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | null | null | null | src/data_management/dto/Data.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | 1 | 2020-12-11T15:18:49.000Z | 2020-12-11T15:18:49.000Z | from dataclasses import dataclass
from time import time_ns
@dataclass
class Data:
__slots__ = ['key', 'time', 'value']
key: int
time: int
value: str
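A quick usage sketch of the `Data` DTO above; the class is repeated here so the snippet is self-contained, and the sample field values are invented. `__slots__` keeps instances lightweight, and the imported `time_ns` supplies a timestamp:

```python
from dataclasses import dataclass
from time import time_ns

@dataclass
class Data:
    __slots__ = ['key', 'time', 'value']
    key: int
    time: int
    value: str

# constructing a record; key/value semantics here are assumptions
sample = Data(key=42, time=time_ns(), value="temperature=21.5")
```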
| 14 | 40 | 0.660714 | 22 | 168 | 4.818182 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244048 | 168 | 11 | 41 | 15.272727 | 0.834646 | 0 | 0 | 0 | 0 | 0 | 0.071856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.875 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
82901893090f748e90ef49097b48af7e0cd4d993 | 681 | py | Python | tests/login/login_shib.py | giangbui/fence | 5a28b77c30ce7fb11fd05b09a023d0aec1e57e16 | [
"Apache-2.0"
] | null | null | null | tests/login/login_shib.py | giangbui/fence | 5a28b77c30ce7fb11fd05b09a023d0aec1e57e16 | [
"Apache-2.0"
] | null | null | null | tests/login/login_shib.py | giangbui/fence | 5a28b77c30ce7fb11fd05b09a023d0aec1e57e16 | [
"Apache-2.0"
] | null | null | null | def test_shib_redirect(client, app):
r = client.get("/login/shib")
assert r.status_code == 302
def test_shib_login(app, client):
r = client.get(
"/login/shib/login", headers={app.config["SHIBBOLETH_HEADER"]: "test"}
)
assert r.status_code == 200
def test_shib_login_redirect(app, client):
r = client.get("/login/shib?redirect=http://localhost")
r = client.get(
"/login/shib/login", headers={app.config["SHIBBOLETH_HEADER"]: "test"}
)
assert r.status_code == 302
assert r.headers["Location"] == "http://localhost"
def test_shib_login_fail(client):
r = client.get("/login/shib/login")
assert r.status_code == 401
| 27.24 | 78 | 0.654919 | 94 | 681 | 4.574468 | 0.244681 | 0.125581 | 0.116279 | 0.174419 | 0.611628 | 0.513953 | 0.513953 | 0.35814 | 0.35814 | 0.35814 | 0 | 0.021544 | 0.182085 | 681 | 24 | 79 | 28.375 | 0.750449 | 0 | 0 | 0.333333 | 0 | 0 | 0.242291 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8295ffbb2d18d874ceadc7ebfc0ff07ac74b254b | 27,821 | py | Python | tests/pytests/test_followhashes.py | kashish-redislabs/RediSearch | a2c5ac21bc072e6df87ece7f3329eb85d5e7618c | [
"Ruby",
"MIT"
] | null | null | null | tests/pytests/test_followhashes.py | kashish-redislabs/RediSearch | a2c5ac21bc072e6df87ece7f3329eb85d5e7618c | [
"Ruby",
"MIT"
] | null | null | null | tests/pytests/test_followhashes.py | kashish-redislabs/RediSearch | a2c5ac21bc072e6df87ece7f3329eb85d5e7618c | [
"Ruby",
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
from includes import *
from common import getConnectionByEnv, waitForIndex, sortedResults, toSortedFlatList
from time import sleep
from RLTest import Env
def testSyntax1(env):
conn = getConnectionByEnv(env)
env.expect('ft.create', 'idx',
'ONfoo*',
'SCHEMA', 'foo', 'text').equal('Unknown argument `ONfoo*`')
env.expect('ft.create', 'idx2',
'LANGUAGE', 'eng'
'SCHEMA', 'foo', 'text').equal('Invalid language')
env.expect('ft.create', 'idx2',
'SCORE', '1.0'
'SCHEMA', 'foo', 'text').equal('Unknown argument `foo`')
env.expect('ft.create', 'idx2',
'PAYLOAD_FIELD', 'awfw'
'SCHEMA', 'foo', 'text').equal('Unknown argument `foo`')
env.expect('ft.create', 'idx2',
'FILTER', 'a'
'SCHEMA', 'foo', 'text').equal("Unknown symbol 'aSCHEMA'")
def testFilter1(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things',
'ON', 'HASH',
'FILTER', 'startswith(@__key, "thing:")',
'SCHEMA', 'name', 'text')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo') \
.equal([1L, 'thing:bar', ['name', 'foo']])
def testPrefix0a(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', '',
'SCHEMA', 'name', 'text')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo').equal([1L, 'thing:bar', ['name', 'foo']])
def testPrefix0b(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH', 'SCHEMA', 'name', 'text')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo').equal([1L, 'thing:bar', ['name', 'foo']])
def testPrefix1(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'SCHEMA', 'name', 'text')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo') \
.equal([1L, 'thing:bar', ['name', 'foo']])
def testPrefix2(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '2', 'this:', 'that:',
'SCHEMA', 'name', 'text')
conn.execute_command('hset', 'this:foo', 'name', 'foo')
conn.execute_command('hset', 'that:foo', 'name', 'foo')
res = env.cmd('ft.search', 'things', 'foo')
env.assertIn('that:foo', res)
env.assertIn('this:foo', res)
def testFilter2(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'stuff', 'ON', 'HASH',
'FILTER', 'startswith(@__key, "stuff:")',
'SCHEMA', 'name', 'text', 'age', 'numeric')
env.cmd('ft.create', 'things', 'ON', 'HASH',
'FILTER', 'startswith(@__key, "thing:")',
'SCHEMA', 'name', 'text', 'age', 'numeric')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
conn.execute_command('hset', 'object:jojo', 'name', 'vivi')
conn.execute_command('hset', 'thing:bar', 'age', '42')
env.expect('ft.search', 'things', 'foo') \
.equal([1L, 'thing:bar', ['name', 'foo', 'age', '42']])
def testPrefix3(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'stuff',
'ON', 'HASH',
'PREFIX', '1', 'stuff:',
'SCHEMA', 'name', 'text', 'age', 'numeric')
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'SCHEMA', 'name', 'text', 'age', 'numeric')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
conn.execute_command('hset', 'object:jojo', 'name', 'vivi')
conn.execute_command('hset', 'thing:bar', 'age', '42')
env.expect('ft.search', 'things', 'foo') \
.equal([1L, 'thing:bar', ['name', 'foo', 'age', '42']])
def testIdxField(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'idx1',
'ON', 'HASH',
'PREFIX', 1, 'doc',
'FILTER', '@indexName=="idx1"',
'SCHEMA', 'name', 'text', 'indexName', 'text')
env.cmd('ft.create', 'idx2',
'ON', 'HASH',
'FILTER', '@indexName=="idx2"',
'SCHEMA', 'name', 'text', 'indexName', 'text')
conn.execute_command('hset', 'doc1', 'name', 'foo', 'indexName', 'idx1')
conn.execute_command('hset', 'doc2', 'name', 'bar', 'indexName', 'idx2')
env.expect('ft.search', 'idx1', '*').equal([1L, 'doc1', ['name', 'foo', 'indexName', 'idx1']])
env.expect('ft.search', 'idx2', '*').equal([1L, 'doc2', ['name', 'bar', 'indexName', 'idx2']])
def testDel(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'SCHEMA', 'name', 'text')
env.expect('ft.search', 'things', 'foo').equal([0L])
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo').equal([1L, 'thing:bar', ['name', 'foo']])
conn.execute_command('del', 'thing:bar')
env.expect('ft.search', 'things', 'foo').equal([0L])
def testSet(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things',
'PREFIX', '1', 'thing:',
'SCHEMA', 'name', 'text')
env.expect('ft.search', 'things', 'foo').equal([0L])
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo').equal([1L, 'thing:bar', ['name', 'foo']])
env.expect('set', 'thing:bar', "bye bye")
env.expect('ft.search', 'things', 'foo').equal([0L])
def testRename(env):
env.skipOnCluster()
conn = getConnectionByEnv(env)
env.cmd('ft.create things PREFIX 1 thing: SCHEMA name text')
env.expect('ft.search things foo').equal([0L])
conn.execute_command('hset thing:bar name foo')
env.expect('ft.search things foo').equal([1L, 'thing:bar', ['name', 'foo']])
env.expect('RENAME thing:bar thing:foo').ok()
env.expect('ft.search things foo').equal([1L, 'thing:foo', ['name', 'foo']])
env.cmd('ft.create otherthings PREFIX 1 otherthing: SCHEMA name text')
env.expect('RENAME thing:foo otherthing:foo').ok()
env.expect('ft.search things foo').equal([0L])
env.expect('ft.search otherthings foo').equal([1L, 'otherthing:foo', ['name', 'foo']])
def testFlush(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'FILTER', 'startswith(@__key, "thing:")',
'SCHEMA', 'name', 'text')
conn.execute_command('FLUSHALL')
conn.execute_command('hset', 'thing:bar', 'name', 'foo')
env.expect('ft.search', 'things', 'foo').equal('things: no such index')
def testNotExist(env):
conn = getConnectionByEnv(env)
env.cmd('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'FILTER', 'startswith(@__key, "thing:")',
'SCHEMA', 'txt', 'text')
conn.execute_command('hset', 'thing:bar', 'not_text', 'foo')
env.expect('ft.search', 'things', 'foo').equal([0L])
def testPayload(env):
conn = getConnectionByEnv(env)
env.expect('ft.create', 'things', 'ON', 'HASH',
'PREFIX', '1', 'thing:',
'PAYLOAD_FIELD', 'payload',
'SCHEMA', 'name', 'text').ok()
conn.execute_command('hset', 'thing:foo', 'name', 'foo', 'payload', 'stuff')
for _ in env.retry_with_rdb_reload():
waitForIndex(env, 'things')
res = env.cmd('ft.search', 'things', 'foo')
env.assertEqual(toSortedFlatList(res), toSortedFlatList([1L, 'thing:foo', ['name', 'foo', 'payload', 'stuff']]))
res = env.cmd('ft.search', 'things', 'foo', 'withpayloads')
env.assertEqual(toSortedFlatList(res), toSortedFlatList([1L, 'thing:foo', 'stuff', ['name', 'foo', 'payload', 'stuff']]))
def testDuplicateFields(env):
env.expect('FT.CREATE', 'idx', 'ON', 'HASH',
'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC', 'SORTABLE').ok()
env.cmd('FT.ADD', 'idx', 'doc', 1.0,
'FIELDS', 'txt', 'foo', 'txt', 'bar', 'txt', 'baz')
env.expect('ft.search', 'idx', 'baz').equal([1L, 'doc', ['txt', 'baz']])
env.expect('ft.search', 'idx', 'foo').equal([0L])
def testReplace(env):
conn = getConnectionByEnv(env)
r = env
r.expect('ft.create idx schema f text').ok()
res = conn.execute_command('HSET', 'doc1', 'f', 'hello world')
env.assertEqual(res, 1)
res = conn.execute_command('HSET', 'doc2', 'f', 'hello world')
env.assertEqual(res, 1)
res = r.execute_command('ft.search', 'idx', 'hello world')
r.assertEqual(2, res[0])
# now replace doc1 with a different content
res = conn.execute_command('HSET', 'doc1', 'f', 'goodbye universe')
env.assertEqual(res, 0)
for _ in r.retry_with_rdb_reload():
waitForIndex(env, 'idx')
# make sure the query for hello world does not return the replaced document
r.expect('ft.search', 'idx', 'hello world', 'nocontent').equal([1, 'doc2'])
# search for the doc's new content
r.expect('ft.search', 'idx', 'goodbye universe', 'nocontent').equal([1, 'doc1'])
def testSortable(env):
env.expect('FT.CREATE', 'idx', 'ON', 'HASH', 'FILTER', 'startswith(@__key, "")',
'SCHEMA', 'test', 'TEXT', 'SORTABLE').equal('OK')
env.expect('ft.add', 'idx', 'doc1', '1.0', 'FIELDS', 'test', 'foo1').equal('OK')

def testMissingArgs(env):
    env.expect('FT.CREATE', 'idx', 'ON', 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error()
    env.expect('FT.CREATE', 'idx', 'ON', 'HASH', 'FILTER', 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error()

def testWrongArgs(env):
    env.expect('FT.CREATE', 'idx', 'SCORE', 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error().contains('Invalid score')
    env.expect('FT.CREATE', 'idx', 'SCORE', 10, 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error().contains('Invalid score')
    env.expect('FT.CREATE', 'idx', 'LANGUAGE', 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error().contains('Invalid language')
    env.expect('FT.CREATE', 'idx', 'LANGUAGE', 'none', 'SCHEMA', 'txt', 'TEXT', 'num', 'NUMERIC').error().contains('Invalid language')

def testLanguageDefaultAndField(env):
    conn = getConnectionByEnv(env)
    env.cmd('FT.CREATE', 'idxTest1', 'LANGUAGE_FIELD', 'lang', 'SCHEMA', 'body', 'TEXT')
    env.cmd('FT.CREATE', 'idxTest2', 'LANGUAGE', 'hindi', 'SCHEMA', 'body', 'TEXT')
    conn.execute_command('HSET', 'doc1', 'lang', 'hindi', 'body', u'अँगरेजी अँगरेजों अँगरेज़')

    for _ in env.retry_with_rdb_reload():
        waitForIndex(env, 'idxTest1')
        waitForIndex(env, 'idxTest2')
        # test for language field
        res = env.cmd('FT.SEARCH', 'idxTest1', u'अँगरेज़')
        res1 = {res[2][i]: res[2][i + 1] for i in range(0, len(res[2]), 2)}
        env.assertEqual(u'अँगरेजी अँगरेजों अँगरेज़', unicode(res1['body'], 'utf-8'))
        # test for default language
        res = env.cmd('FT.SEARCH', 'idxTest2', u'अँगरेज़')
        res1 = {res[2][i]: res[2][i + 1] for i in range(0, len(res[2]), 2)}
        env.assertEqual(u'अँगरेजी अँगरेजों अँगरेज़', unicode(res1['body'], 'utf-8'))

def testScoreDecimal(env):
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE', 'idx1', 'SCORE', '0.5', 'schema', 'title', 'text').ok()
    env.expect('FT.CREATE', 'idx2', 'SCORE_FIELD', 'score', 'schema', 'title', 'text').ok()
    res = conn.execute_command('HSET', 'doc1', 'title', 'hello', 'score', '0.25')
    env.assertEqual(res, 2)

    for _ in env.retry_with_rdb_reload():
        waitForIndex(env, 'idx1')
        waitForIndex(env, 'idx2')
        res = env.cmd('ft.search', 'idx1', 'hello', 'withscores', 'nocontent')
        env.assertEqual(float(res[2]), 0.5)
        res = env.cmd('ft.search', 'idx2', 'hello', 'withscores', 'nocontent')
        env.assertEqual(float(res[2]), 0.25)

def testMultiFilters1(env):
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE', 'test', 'ON', 'HASH',
               'PREFIX', '2', 'student:', 'pupil:',
               'FILTER', 'startswith(@__key, "student:")',
               'SCHEMA', 'first', 'TEXT', 'last', 'TEXT', 'age', 'NUMERIC').ok()
    conn.execute_command('HSET', 'student:yes1', 'first', 'yes1', 'last', 'yes1', 'age', '17')
    conn.execute_command('HSET', 'student:yes2', 'first', 'yes2', 'last', 'yes2', 'age', '15')
    conn.execute_command('HSET', 'pupil:no1', 'first', 'no1', 'last', 'no1', 'age', '17')
    conn.execute_command('HSET', 'pupil:no2', 'first', 'no2', 'last', 'no2', 'age', '15')
    res1 = [2L, 'student:yes2', ['first', 'yes2', 'last', 'yes2', 'age', '15'],
            'student:yes1', ['first', 'yes1', 'last', 'yes1', 'age', '17']]
    res = env.cmd('ft.search test *')
    env.assertEqual(toSortedFlatList(res), toSortedFlatList(res1))

def testMultiFilters2(env):
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE', 'test', 'ON', 'HASH',
               'PREFIX', '2', 'student:', 'pupil:',
               'FILTER', '@age > 16',
               'SCHEMA', 'first', 'TEXT', 'last', 'TEXT', 'age', 'NUMERIC').ok()
    conn.execute_command('HSET', 'student:yes1', 'first', 'yes1', 'last', 'yes1', 'age', '17')
    conn.execute_command('HSET', 'student:no1', 'first', 'no1', 'last', 'no1', 'age', '15')
    conn.execute_command('HSET', 'pupil:yes2', 'first', 'yes2', 'last', 'yes2', 'age', '17')
    conn.execute_command('HSET', 'pupil:no2', 'first', 'no2', 'last', 'no2', 'age', '15')
    res1 = [2L, 'pupil:yes2', ['first', 'yes2', 'last', 'yes2', 'age', '17'],
            'student:yes1', ['first', 'yes1', 'last', 'yes1', 'age', '17']]
    res = env.cmd('ft.search test *')
    env.assertEqual(toSortedFlatList(res), toSortedFlatList(res1))

def testInfo(env):
    env.skipOnCluster()
    env.expect('FT.CREATE', 'test', 'ON', 'HASH',
               'PREFIX', '2', 'student:', 'pupil:',
               'FILTER', '@age > 16',
               'language', 'hindi',
               'language_field', 'lang',
               'score', '0.5',
               'score_field', 'score',
               'payload_field', 'pl',
               'SCHEMA', 't', 'TEXT').ok()
    res_actual = env.cmd('FT.INFO test')
    res_expected = ['key_type', 'HASH',
                    'prefixes', ['student:', 'pupil:'],
                    'filter', '@age > 16',
                    'default_language', 'hindi',
                    'language_field', 'lang',
                    'default_score', '0.5',
                    'score_field', 'score',
                    'payload_field', 'pl']
    env.assertEqual(res_actual[5], res_expected)
    env.expect('ft.drop test').ok()

    env.expect('FT.CREATE', 'test', 'SCHEMA', 't', 'TEXT').ok()
    res_actual = env.cmd('FT.INFO test')
    res_expected = ['key_type', 'HASH',
                    'prefixes', [''],
                    'language_field', '__language',
                    'default_score', '1',
                    'score_field', '__score',
                    'payload_field', '__payload']
    env.assertEqual(res_actual[5], res_expected)

def testCreateDropCreate(env):
    conn = getConnectionByEnv(env)
    conn.execute_command('hset', 'thing:bar', 'name', 'foo')
    env.expect('ft.create', 'things', 'ON', 'HASH',
               'PREFIX', '1', 'thing:', 'SCHEMA', 'name', 'text').ok()
    waitForIndex(conn, 'things')
    env.expect('ft.search', 'things', 'foo') \
       .equal([1L, 'thing:bar', ['name', 'foo']])
    env.expect('ft.dropindex things').ok()
    env.expect('ft.create', 'things', 'ON', 'HASH',
               'PREFIX', '1', 'thing:', 'SCHEMA', 'name', 'text').ok()
    waitForIndex(conn, 'things')
    env.expect('ft.search', 'things', 'foo') \
       .equal([1L, 'thing:bar', ['name', 'foo']])

def testPartial(env):
    if env.env == 'existing-env':
        env.skip()
    env.skipOnCluster()
    env = Env(moduleArgs='PARTIAL_INDEXED_DOCS 1')
    # HSET
    env.expect('FT.CREATE idx SCHEMA test TEXT').equal('OK')
    env.expect('HSET doc1 test foo').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(1)
    env.expect('HSET doc1 testtest foo').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(1)
    env.expect('HSET doc1 test bar').equal(0)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(2)
    env.expect('FT.SEARCH idx bar').equal([1L, 'doc1', ['test', 'bar', 'testtest', 'foo']])
    # HMSET
    env.expect('HMSET doc2 test foo').ok()
    env.expect('FT.DEBUG docidtoid idx doc2').equal(3)
    env.expect('HMSET doc2 testtest foo').ok()
    env.expect('FT.DEBUG docidtoid idx doc2').equal(3)
    env.expect('HMSET doc2 test baz').ok()
    env.expect('FT.DEBUG docidtoid idx doc2').equal(4)
    env.expect('FT.SEARCH idx baz').equal([1L, 'doc2', ['test', 'baz', 'testtest', 'foo']])
    # HSETNX
    env.expect('HSETNX doc3 test foo').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc3').equal(5)
    env.expect('HSETNX doc3 testtest foo').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc3').equal(5)
    env.expect('HSETNX doc3 test bad').equal(0)
    env.expect('FT.DEBUG docidtoid idx doc3').equal(5)
    env.expect('FT.SEARCH idx foo').equal([1L, 'doc3', ['test', 'foo', 'testtest', 'foo']])
    # HINCRBY
    env.expect('HINCRBY doc4 test 5').equal(5)
    env.expect('FT.DEBUG docidtoid idx doc4').equal(6)
    env.expect('HINCRBY doc4 testtest 5').equal(5)
    env.expect('FT.DEBUG docidtoid idx doc4').equal(6)
    env.expect('HINCRBY doc4 test 6').equal(11)
    env.expect('FT.DEBUG docidtoid idx doc4').equal(7)
    env.expect('HINCRBY doc4 test 5.5').error().contains('value is not an integer or out of range')
    env.expect('FT.DEBUG docidtoid idx doc4').equal(7)
    env.expect('FT.SEARCH idx 11').equal([1L, 'doc4', ['test', '11', 'testtest', '5']])
    # HINCRBYFLOAT
    env.expect('HINCRBYFLOAT doc5 test 5.5').equal('5.5')
    env.expect('FT.DEBUG docidtoid idx doc5').equal(8)
    env.expect('HINCRBYFLOAT doc5 testtest 5.5').equal('5.5')
    env.expect('FT.DEBUG docidtoid idx doc5').equal(8)
    env.expect('HINCRBYFLOAT doc5 test 6.6').equal('12.1')
    env.expect('FT.DEBUG docidtoid idx doc5').equal(9)
    env.expect('HINCRBYFLOAT doc5 test 5').equal('17.1')
    env.expect('FT.DEBUG docidtoid idx doc5').equal(10)
    env.expect('FT.SEARCH idx *').equal([5L, 'doc5', ['test', '17.1', 'testtest', '5.5'],
                                         'doc4', ['test', '11', 'testtest', '5'],
                                         'doc3', ['test', 'foo', 'testtest', 'foo'],
                                         'doc2', ['test', 'baz', 'testtest', 'foo'],
                                         'doc1', ['test', 'bar', 'testtest', 'foo']])

def testHDel(env):
    if env.env == 'existing-env':
        env.skip()
    env.skipOnCluster()
    env = Env(moduleArgs='PARTIAL_INDEXED_DOCS 1')
    env.expect('FT.CREATE idx SCHEMA test1 TEXT test2 TEXT').equal('OK')
    env.expect('FT.CREATE idx2 SCHEMA test1 TEXT test2 TEXT').equal('OK')
    env.expect('HSET doc1 test1 foo test2 bar test3 baz').equal(3)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(1)
    env.expect('HDEL doc1 test1').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(2)
    env.expect('HDEL doc1 test3').equal(1)
    env.expect('FT.DEBUG docidtoid idx doc1').equal(2)
    env.expect('FT.SEARCH idx bar').equal([1L, 'doc1', ['test2', 'bar']])
    env.expect('HDEL doc1 test2').equal(1)
    env.expect('FT.SEARCH idx bar').equal([0L])

def testRestore(env):
    if env.env == 'existing-env':
        env.skip()
    env.skipOnCluster()
    env.expect('FT.CREATE idx SCHEMA test TEXT').equal('OK')
    env.expect('HSET doc1 test foo').equal(1)
    env.expect('FT.SEARCH idx foo').equal([1L, 'doc1', ['test', 'foo']])
    dump = env.cmd('dump doc1')
    env.expect('DEL doc1').equal(1)
    env.expect('FT.SEARCH idx foo').equal([0L])
    env.expect('RESTORE', 'doc1', 0, dump)
    env.expect('FT.SEARCH idx foo').equal([1L, 'doc1', ['test', 'foo']])

def testExpire(env):
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE idx SCHEMA test TEXT').equal('OK')
    conn.execute_command('HSET', 'doc1', 'test', 'foo')
    env.expect('FT.SEARCH idx foo').equal([1L, 'doc1', ['test', 'foo']])
    conn.execute_command('EXPIRE', 'doc1', '1')
    env.expect('FT.SEARCH idx foo').equal([1L, 'doc1', ['test', 'foo']])
    sleep(1.1)
    env.expect('FT.SEARCH idx foo').equal([0L])

def testEvicted(env):
    env.skipOnCluster()
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE idx SCHEMA test TEXT').equal('OK')
    memory = 0
    info = conn.execute_command('INFO MEMORY')
    for line in info.splitlines():
        if 'used_memory:' in line:
            sub = line.split(':')
            memory = int(sub[1])
    conn.execute_command('CONFIG', 'SET', 'MAXMEMORY-POLICY', 'ALLKEYS-RANDOM')
    conn.execute_command('CONFIG', 'SET', 'MAXMEMORY', memory + 100000)
    for i in range(1000):
        env.expect('HSET', 'doc{}'.format(i), 'test', 'foo').equal(1)
    res = env.cmd('FT.SEARCH idx foo limit 0 0')
    env.assertLess(res[0], 1000)
    env.assertGreater(res[0], 0)

def createExpire(env, N):
    env.flush()
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE idx SCHEMA txt1 TEXT n NUMERIC').ok()
    for i in range(N):
        conn.execute_command('HSET', 'doc%d' % i, 'txt1', 'hello%i' % i, 'n', i)
        conn.execute_command('PEXPIRE', 'doc%d' % i, '100')
    conn.execute_command('HSET', 'foo', 'txt1', 'hello', 'n', 0)
    conn.execute_command('HSET', 'bar', 'txt1', 'hello', 'n', 20)
    waitForIndex(env, 'idx')
    env.expect('FT.SEARCH', 'idx', 'hello*', 'limit', '0', '0').noEqual([2L])
    res = conn.execute_command('HGETALL', 'doc99')
    if type(res) is list:
        res = {res[i]: res[i + 1] for i in range(0, len(res), 2)}
    env.assertEqual(res, {'txt1': 'hello99', 'n': '99'})
    sleep(0.1)
    res = conn.execute_command('HGETALL', 'doc99')
    if isinstance(res, list):
        res = {res[i]: res[i + 1] for i in range(0, len(res), 2)}
    env.assertEqual(res, {})
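
# The helper above (and testWrongFieldType below) repeatedly turns a flat
# [field, value, field, value, ...] reply into a dict before asserting on it.
# A standalone sketch of that conversion; `to_flat_dict` is a name introduced
# here for illustration only, not part of the test suite:

```python
# Convert a flat Redis-style reply list into a dict of field -> value pairs.
def to_flat_dict(flat):
    return {flat[i]: flat[i + 1] for i in range(0, len(flat), 2)}

print(to_flat_dict(['txt1', 'hello99', 'n', '99']))  # {'txt1': 'hello99', 'n': '99'}
```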

def testExpiredDuringSearch(env):
    N = 100
    createExpire(env, N)
    res = env.cmd('FT.SEARCH', 'idx', 'hello*', 'nocontent', 'limit', '0', '200')
    env.assertGreater(103, len(res))
    env.assertLess(1, len(res))

    createExpire(env, N)
    res = env.cmd('FT.SEARCH', 'idx', 'hello*', 'limit', '0', '200')
    env.assertEqual(toSortedFlatList(res[1:]), toSortedFlatList(['bar', ['txt1', 'hello', 'n', '20'],
                                                                 'foo', ['txt1', 'hello', 'n', '0']]))

def testExpiredDuringAggregate(env):
    N = 100
    res = [1L, ['txt1', 'hello', 'COUNT', '2']]

    createExpire(env, N)
    _res = env.cmd('FT.AGGREGATE idx hello*')
    env.assertGreater(len(_res), 2)

    createExpire(env, N)
    env.expect('FT.AGGREGATE idx hello* GROUPBY 1 @txt1 REDUCE count 0 AS COUNT').equal(res)

    createExpire(env, N)
    env.expect('FT.AGGREGATE idx hello* LOAD 1 @txt1 GROUPBY 1 @txt1 REDUCE count 0 AS COUNT').equal(res)

    createExpire(env, N)
    env.expect('FT.AGGREGATE idx @txt1:hello* LOAD 1 @txt1 GROUPBY 1 @txt1 REDUCE count 0 AS COUNT').equal(res)

def testSkipInitialScan(env):
    conn = getConnectionByEnv(env)
    conn.execute_command('HSET', 'a', 'test', 'hello', 'text', 'world')
    # Regular
    env.expect('FT.CREATE idx SCHEMA test TEXT').ok()
    waitForIndex(env, 'idx')
    env.expect('FT.SEARCH idx hello').equal([1L, 'a', ['test', 'hello', 'text', 'world']])
    # SkipInitialIndex
    env.expect('FT.CREATE idx_no_scan SKIPINITIALSCAN SCHEMA test TEXT').ok()
    waitForIndex(env, 'idx_no_scan')
    env.expect('FT.SEARCH idx_no_scan hello').equal([0L])
    # Temporary
    env.expect('FT.CREATE temp_idx TEMPORARY 10 SCHEMA test TEXT').ok()
    waitForIndex(env, 'temp_idx')
    env.expect('FT.SEARCH temp_idx hello').equal([1L, 'a', ['test', 'hello', 'text', 'world']])
    # Temporary & NoInitialIndex
    env.expect('FT.CREATE temp_idx_no_scan SKIPINITIALSCAN TEMPORARY 10 SCHEMA test TEXT').equal('OK')
    waitForIndex(env, 'temp_idx_no_scan')
    env.expect('FT.SEARCH temp_idx_no_scan hello').equal([0L])

def testWrongFieldType(env):
    conn = getConnectionByEnv(env)
    env.expect('FT.CREATE idx SCHEMA t TEXT n NUMERIC').ok()
    conn.execute_command('HSET', 'a', 't', 'hello', 'n', '42')
    conn.execute_command('HSET', 'b', 't', 'hello', 'n', 'world')
    env.expect('FT.SEARCH idx hello').equal([1L, 'a', ['t', 'hello', 'n', '42']])

    res_actual = env.cmd('FT.INFO idx')
    res_actual = {res_actual[i]: res_actual[i + 1] for i in range(0, len(res_actual), 2)}
    env.assertEqual(str(res_actual['hash_indexing_failures']), '1')

def testDocIndexedInTwoIndexes():
    env = Env(moduleArgs='MAXDOCTABLESIZE 50')
    env.skipOnCluster()
    env.expect('FT.CREATE idx1 SCHEMA t TEXT').ok()
    env.expect('FT.CREATE idx2 SCHEMA t TEXT').ok()

    for i in range(1000):
        env.expect('HSET', 'doc%d' % i, 't', 'foo').equal(1L)

    env.expect('FT.DROPINDEX idx2 DD').ok()
    env.expect('FT.SEARCH idx1 foo').equal([0L])

    env.expect('FT.DROPINDEX idx1 DD').ok()

def testCountry(env):
    conn = getConnectionByEnv(env)
    env.cmd('ft.create', 'idx1',
            'PREFIX', 1, 'address:',
            'FILTER', '@country=="usa"',
            'SCHEMA', 'business', 'text', 'country', 'text')
    conn.execute_command('hset', 'address:1', 'business', 'foo', 'country', 'usa')
    conn.execute_command('hset', 'address:2', 'business', 'bar', 'country', 'israel')
    env.expect('ft.search', 'idx1', '*').equal([1L, 'address:1', ['business', 'foo', 'country', 'usa']])

def testIssue1571(env):
    conn = getConnectionByEnv(env)
    env.cmd('ft.create', 'idx',
            'FILTER', '@index=="yes"',
            'SCHEMA', 't', 'TEXT')
    conn.execute_command('hset', 'doc1', 't', 'foo1', 'index', 'yes')
    env.expect('ft.search', 'idx', 'foo*').equal([1L, 'doc1', ['t', 'foo1', 'index', 'yes']])
    conn.execute_command('hset', 'doc1', 'index', 'no')
    env.expect('ft.search', 'idx', 'foo*').equal([0L])
    conn.execute_command('hset', 'doc1', 't', 'foo2')
    env.expect('ft.search', 'idx', 'foo*').equal([0L])
    conn.execute_command('hset', 'doc1', 'index', 'yes')
    env.expect('ft.search', 'idx', 'foo*').equal([1L, 'doc1', ['t', 'foo2', 'index', 'yes']])

def testIssue1571WithRename(env):
    conn = getConnectionByEnv(env)
    env.cmd('ft.create', 'idx1',
            'PREFIX', '1', 'idx1',
            'FILTER', '@index=="yes"',
            'SCHEMA', 't', 'TEXT')
    env.cmd('ft.create', 'idx2',
            'PREFIX', '1', 'idx2',
            'FILTER', '@index=="yes"',
            'SCHEMA', 't', 'TEXT')
    conn.execute_command('hset', 'idx1:{doc}1', 't', 'foo1', 'index', 'yes')
    env.expect('ft.search', 'idx1', 'foo*').equal([1L, 'idx1:{doc}1', ['t', 'foo1', 'index', 'yes']])
    env.expect('ft.search', 'idx2', 'foo*').equal([0L])

    conn.execute_command('rename', 'idx1:{doc}1', 'idx2:{doc}1')
    env.expect('ft.search', 'idx2', 'foo*').equal([1L, 'idx2:{doc}1', ['t', 'foo1', 'index', 'yes']])
    env.expect('ft.search', 'idx1', 'foo*').equal([0L])

    conn.execute_command('hset', 'idx2:{doc}1', 'index', 'no')
    env.expect('ft.search', 'idx1', 'foo*').equal([0L])
    env.expect('ft.search', 'idx2', 'foo*').equal([0L])

    conn.execute_command('rename', 'idx2:{doc}1', 'idx1:{doc}1')
    env.expect('ft.search', 'idx1', 'foo*').equal([0L])
    env.expect('ft.search', 'idx2', 'foo*').equal([0L])

    conn.execute_command('hset', 'idx1:{doc}1', 'index', 'yes')
    env.expect('ft.search', 'idx1', 'foo*').equal([1L, 'idx1:{doc}1', ['t', 'foo1', 'index', 'yes']])
    env.expect('ft.search', 'idx2', 'foo*').equal([0L])

# --- File: debug.py (repo: efokschaner/pbundler, license: MIT) ---
"""
A simple wrapper to invoke pbundler without needing to install it, making debugging easier in an IDE
"""
import sys
from pbundler import PBCli
def main():
    sys.exit(PBCli().run(sys.argv))


if __name__ == '__main__':
    main()

# --- File: JarekG/2_python_controlflow/exception_raise_a.py (repo: Khayn/2021-12-elearning-pythonana, license: MIT) ---
"""
* Assignment: Exception Raise One
* Required: yes
* Complexity: easy
* Lines of code: 2 lines
* Time: 3 min

English:
    1. Validate value passed to a `result` function:
        a. If `value` is less than zero, raise `ValueError`
    2. Non-functional requirements:
        a. Write solution inside `result` function
        b. Mind the indentation level
    3. Run doctests - all must succeed

Polish:
    1. Sprawdź poprawność wartości przekazanej do funkcji `result`:
        a. Jeżeli `value` jest mniejsze niż zero, podnieś wyjątek `ValueError`
    2. Wymagania niefunkcjonalne:
        a. Rozwiązanie zapisz wewnątrz funkcji `result`
        b. Zwróć uwagę na poziom wcięć
    3. Uruchom doctesty - wszystkie muszą się powieść

Hints:
    * `if`
    * `raise`

Tests:
    >>> import sys; sys.tracebacklimit = 0

    >>> result(1)
    >>> result(0)
    >>> result(-1)
    Traceback (most recent call last):
    ValueError
"""
def result(value):
    if value < 0:
        raise ValueError()
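
# A standalone sketch checking the behaviour the doctests above describe,
# with a local copy of `result` so the snippet runs on its own:

```python
# Local copy of the assignment's solution, for an out-of-doctest check.
def result(value):
    if value < 0:
        raise ValueError()

result(1)   # non-negative values return None, no exception
result(0)

caught = False
try:
    result(-1)
except ValueError:
    caught = True
print(caught)  # True
```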

# --- File: modules/getDriver.py (repo: ProzTock/RecibosAllianz, license: MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from os import remove
from zipfile import ZipFile
from urllib.request import urlopen
def downloadDriver(url: str):
    # Fetch the chromedriver archive and write it to disk.
    chrome_driver_file = urlopen(url=url).read()
    with open("chromeDriver\\chromedriver_win32.zip", 'wb') as download:
        download.write(chrome_driver_file)
        download.close()
    # Unpack the driver binary and delete the downloaded archive.
    file_zip = ZipFile("chromeDriver\\chromedriver_win32.zip")
    file_zip.extractall("chromeDriver")
    file_zip.close()
    remove("chromeDriver\\chromedriver_win32.zip")
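
# Offline sketch of the same download -> extract -> clean-up flow, with the
# network fetch replaced by a zip built in a temporary directory. The helper
# name `extract_and_cleanup` is introduced here for illustration only; it is
# not part of the module above.

```python
import os
import tempfile
from zipfile import ZipFile

def extract_and_cleanup(archive_path: str, dest_dir: str) -> None:
    # Unpack the archive into dest_dir, then delete the archive itself,
    # mirroring what downloadDriver does after the urlopen() call.
    with ZipFile(archive_path) as file_zip:
        file_zip.extractall(dest_dir)
    os.remove(archive_path)

tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "chromedriver_win32.zip")
with ZipFile(archive, "w") as zf:
    zf.writestr("chromedriver.exe", b"fake driver payload")

extract_and_cleanup(archive, tmp)
```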

# --- File: setup.py (repo: afcarl/corvid, license: Apache-2.0) ---
#!/usr/bin/python
import setuptools
setuptools.setup(
    name='corvid',
    version='0.0.1',
    url='https://github.com/allenai/corvid',
    packages=setuptools.find_packages()
)

# --- File: player_test.py (repo: KeepCoding/Connecta, license: MIT) ---
from square_board import SquareBoard
from oracle import BaseOracle
from player import Player, _is_int, _is_non_full_column, _is_within_column_range

def test_valid_column():
    board = SquareBoard.fromList([['x', None, None, None, ],
                                  ['x', 'o', 'x', 'o', ],
                                  ['o', 'o', 'x', 'x', ],
                                  ['o', None, None, None, ]])

    assert _is_within_column_range(board, 0)
    assert _is_within_column_range(board, 1)
    assert _is_within_column_range(board, 2)
    assert _is_within_column_range(board, 3)

    assert _is_within_column_range(board, 5) == False
    assert _is_within_column_range(board, -10) == False
    assert _is_within_column_range(board, 10) == False

def test_is_non_full_column():
    board = SquareBoard.fromList([['x', None, None, None, ],
                                  ['x', 'o', 'x', 'o', ],
                                  ['o', 'o', 'x', 'x', ],
                                  ['o', None, None, None, ]])

    assert _is_non_full_column(board, 0)
    assert _is_non_full_column(board, 1) == False
    assert _is_non_full_column(board, 2) == False
    assert _is_non_full_column(board, 3)

def test_is_int():
    assert _is_int('42')
    assert _is_int('0')
    assert _is_int('-32')
    assert _is_int(' 32 ')
    assert _is_int('hola') == False
    assert _is_int('') == False
    assert _is_int('3.14') == False

# --- File: 15. Iterators and Generators - Lab/01_custom_range.py (repo: elenaborisova/Python-OOP, license: MIT) ---
class custom_range:
    def __init__(self, start, end, step=1):
        self.start = start
        self.end = end
        self.step = step
        self.increment = 1
        if self.step < 0:
            self.start, self.end = self.end, self.start
            self.increment = -1

    def __iter__(self):
        return self

    def __next__(self):
        if self.increment > 0:
            if self.start > self.end:
                raise StopIteration()
        else:
            if self.start < self.end:
                raise StopIteration()
        temp = self.start
        self.start += self.step
        return temp


one_to_ten = custom_range(1, 10)
for num in one_to_ten:
    print(num)
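
# A standalone check that custom_range includes both endpoints, unlike the
# built-in range (a local copy of the class so the snippet runs on its own):

```python
# Local copy of the iterator class for an isolated behaviour check.
class custom_range:
    def __init__(self, start, end, step=1):
        self.start = start
        self.end = end
        self.step = step
        self.increment = 1
        if self.step < 0:
            self.start, self.end = self.end, self.start
            self.increment = -1

    def __iter__(self):
        return self

    def __next__(self):
        if self.increment > 0:
            if self.start > self.end:
                raise StopIteration()
        else:
            if self.start < self.end:
                raise StopIteration()
        temp = self.start
        self.start += self.step
        return temp

print(list(custom_range(1, 5)))  # [1, 2, 3, 4, 5] -- the end value is included
```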

# --- File: lavalink/events.py (repo: iWeeti/Lavalink.py, license: MIT) ---
class Event:
    """ The base for all Lavalink events. """
    pass


class QueueEndEvent(Event):
    """ This event is dispatched when there are no more songs in the queue. """
    def __init__(self, player):
        self.player = player


class TrackStuckEvent(Event):
    """ This event is dispatched when the currently playing song is stuck. """
    def __init__(self, player, track, threshold):
        self.player = player
        self.track = track
        self.threshold = threshold


class TrackExceptionEvent(Event):
    """ This event is dispatched when an exception occurs while playing a track. """
    def __init__(self, player, track, exception):
        self.exception = exception
        self.player = player
        self.track = track


class TrackEndEvent(Event):
    """ This event is dispatched when the player finished playing a track. """
    def __init__(self, player, track, reason):
        self.reason = reason
        self.player = player
        self.track = track


class TrackStartEvent(Event):
    """ This event is dispatched when the player starts to play a track. """
    def __init__(self, player, track):
        self.player = player
        self.track = track


class PlayerUpdateEvent(Event):
    """ This event is dispatched when the player's progress changes """
    def __init__(self, player, position: int, timestamp: int):
        self.player = player
        self.position = position
        self.timestamp = timestamp


class NodeDisconnectedEvent(Event):
    """ This event is dispatched when a node disconnects and becomes unavailable """
    def __init__(self, node, code: int, reason: str):
        self.node = node
        self.code = code
        self.reason = reason


class NodeConnectedEvent(Event):
    """ This event is dispatched when Lavalink.py successfully connects to a node """
    def __init__(self, node):
        self.node = node


class NodeChangedEvent(Event):
    """
    This event is dispatched when a player changes to another node.
    Keep in mind this event can be dispatched multiple times if a node
    disconnects and the load balancer moves players to a new node.

    Parameters
    ----------
    player: BasePlayer
        The player whose node was changed.
    old_node: Node
        The node the player was moved from.
    new_node: Node
        The node the player was moved to.
    """
    def __init__(self, player, old_node, new_node):
        self.player = player
        self.old_node = old_node
        self.new_node = new_node

# TODO: The above needs their parameters documented.
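
# Minimal sketch (not Lavalink.py's actual client API) of how event objects
# like the ones above are typically fanned out to registered hooks; the
# `add_event_hook`/`dispatch_event` names are introduced here for illustration:

```python
# Stand-in base class and one event subclass, mirroring the hierarchy above.
class Event:
    pass

class QueueEndEvent(Event):
    def __init__(self, player):
        self.player = player

_hooks = []

def add_event_hook(func):
    _hooks.append(func)

def dispatch_event(event):
    # Every hook receives every event; hooks filter with isinstance checks.
    for hook in _hooks:
        hook(event)

seen = []
add_event_hook(lambda e: seen.append(type(e).__name__))
dispatch_event(QueueEndEvent(player=None))
print(seen)  # ['QueueEndEvent']
```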

# --- File: iseq/io/test/test_readers.py (repo: horta/iseq, license: MIT) ---
# from nmm import tblout_reader
# def test_tblout(tblout):
#     with open(tblout) as file:
#         reader = tblout_reader(file)
#         row = next(reader)
#         assert row.target_name == "item2"
#         assert row.full_sequence.e_value == "1.2e-07"
#         assert row.best_1_domain.e_value == "1.2e-07"
#         row = next(reader)
#         assert row.target_name == "item3"
#         assert row.full_sequence.e_value == "1.2e-07"
#         assert row.best_1_domain.e_value == "1.2e-07"

# --- File: djue/management/commands/components.py (repo: brmc/django-djue, license: MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
from django.conf import settings
from django.urls import get_resolver
from djue.management.commands._actions import ModuleCommand, \
    generate_components
from djue.utils import log, get_output_path


class Command(ModuleCommand):
    def handle(self, *args, **options):
        path = get_output_path()

        for module in options.get('modules', []):
            log(f'Generating components for {module}')
            module = get_resolver(module)
            generate_components(module.url_patterns, path)

# --- File: src/gluonnlp/utils/lazy_imports.py (repo: ZheyuYe/gluon-nlp, license: Apache-2.0) ---
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Lazy import some third-party libraries."""
__all__ = ['try_import_sentencepiece',
'try_import_yttm',
'try_import_subword_nmt',
'try_import_huggingface_tokenizers',
'try_import_spacy',
'try_import_scipy',
'try_import_mwparserfromhell',
'try_import_fasttext',
'try_import_langid',
'try_import_boto3',
'try_import_jieba',
'try_import_tvm']
def try_import_sentencepiece():
try:
import sentencepiece # pylint: disable=import-outside-toplevel
except ImportError:
raise ImportError(
'sentencepiece is not installed. You must install sentencepiece '
'in order to use the Sentencepiece tokenizer. '
'You can refer to the official installation guide '
'in https://github.com/google/sentencepiece#installation')
return sentencepiece
def try_import_yttm():
try:
import youtokentome as yttm
except ImportError:
raise ImportError('YouTokenToMe is not installed. You may try to install it via '
'`pip install youtokentome`.')
return yttm
def try_import_subword_nmt():
try:
import subword_nmt
except ImportError:
raise ImportError('subword-nmt is not installed. You can run `pip install subword_nmt` '
'to install the subword-nmt BPE implementation. You may also '
'refer to the official installation guide in '
'https://github.com/rsennrich/subword-nmt.')
return subword_nmt
def try_import_huggingface_tokenizers():
try:
import tokenizers
except ImportError:
raise ImportError(
'HuggingFace tokenizers is not installed. You can run `pip install tokenizers` '
'to use the HuggingFace BPE tokenizer. You may refer to the official installation '
'guide in https://github.com/huggingface/tokenizers.')
return tokenizers
def try_import_spacy():
try:
import spacy # pylint: disable=import-outside-toplevel
from pkg_resources import parse_version # pylint: disable=import-outside-toplevel
assert parse_version(spacy.__version__) >= parse_version('2.0.0'), \
'We only support spacy>=2.0.0'
except ImportError:
raise ImportError(
'spaCy is not installed. You must install spaCy in order to use the '
'SpacyTokenizer. You can refer to the official installation guide '
'in https://spacy.io/usage/.')
return spacy
def try_import_scipy():
try:
import scipy
except ImportError:
raise ImportError('SciPy is not installed. '
'You must install SciPy >= 1.0.0 in order to use the '
'TruncNorm. You can refer to the official '
'installation guide in https://www.scipy.org/install.html .')
return scipy
def try_import_mwparserfromhell():
try:
import mwparserfromhell
except ImportError:
raise ImportError('mwparserfromhell is not installed. You must install '
'mwparserfromhell in order to run the script. You can use '
'`pip install mwparserfromhell` or refer to guide in '
'https://github.com/earwig/mwparserfromhell.')
return mwparserfromhell
def try_import_autogluon():
try:
import autogluon
except ImportError:
raise ImportError('AutoGluon is not installed. You must install autogluon in order to use '
'the functionality. You can follow the guide in '
'https://github.com/awslabs/autogluon for installation.')
return autogluon
def try_import_fasttext():
try:
import fasttext
except ImportError:
raise ImportError('FastText is not installed. You must install fasttext in order to use the'
' functionality. See https://github.com/facebookresearch/fastText for '
'more information.')
return fasttext
def try_import_langid():
try:
import langid
except ImportError:
raise ImportError('"langid" is not installed. You must install langid in order to use the'
' functionality. You may try to use `pip install langid`.')
return langid
def try_import_boto3():
try:
import boto3
except ImportError:
        raise ImportError('"boto3" is not installed. To enable fast downloading on EC2, '
                          'you should install boto3 and correctly configure S3 access. '
                          'See https://boto3.readthedocs.io/ for more information. '
                          'If you are using EC2, downloading from s3:// can be '
                          'multiple times faster than using a traditional http/https URL.')
return boto3
def try_import_jieba():
try:
import jieba
except ImportError:
raise ImportError('"jieba" is not installed. You must install jieba tokenizer. '
'You may try to use `pip install jieba`')
return jieba
def try_import_tvm():
try:
import tvm
except ImportError:
raise ImportError('"tvm" is not installed. You must install TVM to use the functionality. '
'To install TVM, you may see the documentation in '
                          'https://tvm.apache.org/ or try to use the GluonNLP docker image.')
    return tvm
| 37.982143 | 100 | 0.63595 | 744 | 6,381 | 5.361559 | 0.227151 | 0.085736 | 0.039108 | 0.107546 | 0.350714 | 0.208824 | 0.107295 | 0.080471 | 0.063926 | 0.063926 | 0 | 0.005345 | 0.296349 | 6,381 | 167 | 101 | 38.209581 | 0.883074 | 0.143081 | 0 | 0.233871 | 0 | 0 | 0.469324 | 0.019471 | 0 | 0 | 0 | 0 | 0.008065 | 1 | 0.104839 | false | 0 | 0.524194 | 0 | 0.725806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
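The `try_import_*` helpers above all repeat the same try/except shape. As a minimal sketch (not part of the original module), the pattern generalizes to a single helper built on the standard-library `importlib`; the `hint` parameter is a hypothetical name for the install guidance shown to the user:

```python
import importlib


def try_import(module_name, hint):
    """Import a module lazily, raising a helpful error when it is missing.

    Generic sketch of the try/except pattern used by the try_import_*
    helpers above; `hint` carries the install guidance for the user.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        # Suppress exception chaining so the user sees only the hint.
        raise ImportError(
            f'{module_name} is not installed. {hint}') from None


# json ships with Python, so this succeeds and returns the module.
json_mod = try_import('json', 'It is part of the standard library.')
```

The per-library functions in `lazy_imports.py` keep their hand-written messages; a helper like this would only be a refactoring option, at the cost of slightly less tailored error text.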
82e3c3960ec4676a2461a78da13f98923685e72a | 1,770 | py | Python | backend/core/models.py | andreaabellera/Ecoyou | 7b12e8a0d910549eaabf9d0e5e2da0abc293ca36 | [
"MIT"
] | null | null | null | backend/core/models.py | andreaabellera/Ecoyou | 7b12e8a0d910549eaabf9d0e5e2da0abc293ca36 | [
"MIT"
] | null | null | null | backend/core/models.py | andreaabellera/Ecoyou | 7b12e8a0d910549eaabf9d0e5e2da0abc293ca36 | [
"MIT"
] | 1 | 2021-01-14T21:12:47.000Z | 2021-01-14T21:12:47.000Z | from django.db import models
class Challenge(models.Model):
name = models.CharField(max_length=60)
city = models.CharField(max_length=60, default='Unknown')
organizer = models.CharField(max_length=60, default='Anonymous')
duration = models.PositiveIntegerField(default=30)
reward = models.PositiveIntegerField(default=100)
class Charity(models.Model):
name = models.CharField(max_length=60)
picture_link = models.CharField(max_length=1000, blank=True, null=True)
amount = models.PositiveIntegerField(default=5) # Amount given to charity
cost = models.PositiveIntegerField(default=1000) # Points needed to redeem
class Issue(models.Model):
name = models.CharField(max_length=60)
description = models.CharField(max_length=500, blank=True, null=True)
challenge = models.ManyToManyField(Challenge)
charity = models.ManyToManyField(Charity)
class Prize(models.Model):
name = models.CharField(max_length=60)
picture_link = models.CharField(max_length=1000, blank=True, null=True)
amount = models.PositiveIntegerField(default=5) # Amount you receive
cost = models.PositiveIntegerField(default=1000) # Points needed to redeem
class Badge(models.Model):
name = models.CharField(max_length=60)
description = models.CharField(max_length=60, blank=True)
picture_link = models.CharField(max_length=1000, blank=True)
class User(models.Model):
name = models.CharField(max_length=60, default='Guest')
deeds = models.PositiveIntegerField(default=0)
score = models.PositiveIntegerField(default=0)
rank = models.PositiveIntegerField(default=2109)
goals = models.ManyToManyField(Challenge)
friends = models.ManyToManyField('self')
badges = models.ManyToManyField(Badge)
| 43.170732 | 79 | 0.753672 | 212 | 1,770 | 6.216981 | 0.268868 | 0.147951 | 0.177542 | 0.236722 | 0.588012 | 0.586495 | 0.531108 | 0.531108 | 0.468892 | 0.432473 | 0 | 0.035456 | 0.139548 | 1,770 | 40 | 80 | 44.25 | 0.829941 | 0.050847 | 0 | 0.323529 | 0 | 0 | 0.014925 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
82e895cfab9612952b99f4c398d42ac51ff34cd5 | 528 | py | Python | tests/r/test_grunfeld1.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 199 | 2017-07-24T01:34:27.000Z | 2022-01-29T00:50:55.000Z | tests/r/test_grunfeld1.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 46 | 2017-09-05T19:27:20.000Z | 2019-01-07T09:47:26.000Z | tests/r/test_grunfeld1.py | hajime9652/observations | 2c8b1ac31025938cb17762e540f2f592e302d5de | [
"Apache-2.0"
] | 45 | 2017-07-26T00:10:44.000Z | 2022-03-16T20:44:59.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import shutil
import sys
import tempfile
from observations.r.grunfeld1 import grunfeld1
def test_grunfeld1():
"""Test module grunfeld1.py by downloading
grunfeld1.csv and testing shape of
extracted data has 200 rows and 5 columns
"""
test_path = tempfile.mkdtemp()
x_train, metadata = grunfeld1(test_path)
try:
assert x_train.shape == (200, 5)
  except Exception:
    shutil.rmtree(test_path)
    raise
| 22 | 46 | 0.75947 | 72 | 528 | 5.291667 | 0.569444 | 0.07874 | 0.125984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032184 | 0.176136 | 528 | 23 | 47 | 22.956522 | 0.843678 | 0.219697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.466667 | 0 | 0.533333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
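The cleanup logic in `test_grunfeld1` above — make a temp directory, run the check, and remove the directory only if the check fails before re-raising — can be factored into a small reusable helper. This is a sketch of that pattern, not code from the observations package; `run_in_tempdir` and `fn` are names introduced here for illustration:

```python
import os
import shutil
import tempfile


def run_in_tempdir(fn):
    """Run fn(path) in a fresh temporary directory.

    Mirrors the cleanup-on-failure pattern in test_grunfeld1 above:
    the scratch directory is removed only when fn raises; on success
    the caller keeps (and is responsible for) the directory.
    """
    path = tempfile.mkdtemp()
    try:
        return fn(path)
    except Exception:
        # Remove the scratch directory before re-raising the error.
        shutil.rmtree(path, ignore_errors=True)
        raise
```

Note that, like the original test, this leaves the directory in place on success so downloaded data can be inspected or cached.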