hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
53d1a4c43797542ab921deab118ffdd8f0a1daf4 | 1,697 | py | Python | tests/db/test_rows_from_chunks.py | ericych/pydruid_eric | dce17e4ecf21bf38d002694c99e39af50521aae9 | [
"Apache-2.0"
] | null | null | null | tests/db/test_rows_from_chunks.py | ericych/pydruid_eric | dce17e4ecf21bf38d002694c99e39af50521aae9 | [
"Apache-2.0"
] | 5 | 2020-03-24T22:57:05.000Z | 2020-10-09T19:17:02.000Z | tests/db/test_rows_from_chunks.py | ericych/pydruid_eric | dce17e4ecf21bf38d002694c99e39af50521aae9 | [
"Apache-2.0"
] | 1 | 2020-10-12T13:26:26.000Z | 2020-10-12T13:26:26.000Z | # -*- coding: utf-8 -*-
import unittest
from pydruid.db.api import rows_from_chunks
class RowsFromChunksTestSuite(unittest.TestCase):
def test_rows_from_chunks_empty(self):
chunks = []
expected = []
result = list(rows_from_chunks(chunks))
        self.assertEqual(result, expected)
def test_rows_from_chunks_single_chunk(self):
chunks = ['[{"name": "alice"}, {"name": "bob"}, {"name": "charlie"}]']
expected = [
{'name': 'alice'},
{'name': 'bob'},
{'name': 'charlie'},
]
result = list(rows_from_chunks(chunks))
        self.assertEqual(result, expected)
def test_rows_from_chunks_multiple_chunks(self):
chunks = [
'[{"name": "alice"}, {"name": "b',
'ob"}, {"name": "charlie"}]',
]
expected = [
{'name': 'alice'},
{'name': 'bob'},
{'name': 'charlie'},
]
result = list(rows_from_chunks(chunks))
        self.assertEqual(result, expected)
def test_rows_from_chunks_bracket_in_string(self):
chunks = ['[{"name": "ali{ce"}, {"name": "bob"}]']
expected = [
{'name': 'ali{ce'},
{'name': 'bob'},
]
result = list(rows_from_chunks(chunks))
        self.assertEqual(result, expected)
def test_rows_from_chunks_quote_in_string(self):
chunks = [r'[{"name": "ali\"ce"}, {"name": "bob"}]']
expected = [
{'name': 'ali"ce'},
{'name': 'bob'},
]
result = list(rows_from_chunks(chunks))
        self.assertEqual(result, expected)
if __name__ == '__main__':
unittest.main()
| 28.283333 | 78 | 0.529169 | 169 | 1,697 | 5.04142 | 0.236686 | 0.103286 | 0.180751 | 0.088028 | 0.734742 | 0.671362 | 0.652582 | 0.652582 | 0.652582 | 0.652582 | 0 | 0.000839 | 0.297584 | 1,697 | 59 | 79 | 28.762712 | 0.713926 | 0.012375 | 0 | 0.468085 | 0 | 0 | 0.170251 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 1 | 0.106383 | false | 0 | 0.042553 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
53d79642b0f4b093f2cd3a8ed35ae91b9bdcb207 | 306 | py | Python | graph_wrap/tastypie/graphql_view.py | OmarThinks/graph_wrap | 5ea058eb2ce7234926cf43c54b15d893f602c38c | [
"MIT"
] | 66 | 2020-03-28T17:33:51.000Z | 2022-03-01T08:35:14.000Z | graph_wrap/tastypie/graphql_view.py | OmarThinks/graph_wrap | 5ea058eb2ce7234926cf43c54b15d893f602c38c | [
"MIT"
] | 6 | 2020-11-09T21:00:33.000Z | 2021-09-30T14:16:44.000Z | graph_wrap/tastypie/graphql_view.py | OmarThinks/graph_wrap | 5ea058eb2ce7234926cf43c54b15d893f602c38c | [
"MIT"
] | 3 | 2021-03-27T21:33:15.000Z | 2022-02-09T09:08:04.000Z | from django.views.decorators.http import require_http_methods
from graphene_django.views import GraphQLView
@require_http_methods(['POST'])
def graphql_view(request):
from graph_wrap.tastypie import schema
schema = schema()
view = GraphQLView.as_view(schema=schema)
return view(request)
| 25.5 | 61 | 0.781046 | 40 | 306 | 5.775 | 0.525 | 0.155844 | 0.155844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 306 | 11 | 62 | 27.818182 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.013115 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
53e074da5cda6408ea72538c76538b732ff70213 | 981 | py | Python | apptest/admin.py | ericsz0/autotest | 6010b58a9f8d19644e2b185f24639e0caf81c538 | [
"MIT"
] | null | null | null | apptest/admin.py | ericsz0/autotest | 6010b58a9f8d19644e2b185f24639e0caf81c538 | [
"MIT"
] | null | null | null | apptest/admin.py | ericsz0/autotest | 6010b58a9f8d19644e2b185f24639e0caf81c538 | [
"MIT"
] | null | null | null | from django.contrib import admin
from apptest.models import Appcase,Appcasestep
from webtest.models import Webcase,Webcasestep
# Register your models here.
class AppcasestepAdmin(admin.TabularInline):
list_display = ['appteststep','apptestobjname','appfindmethod','appevelement','appoptmethod','appassertdata','apptestresult','create_time','id','appcase']
model = Appcasestep
extra = 1
class AppcaseAdmin(admin.ModelAdmin):
list_display = ['appcasename','apptestresult','create_time','id']
inlines = [AppcasestepAdmin]
class WebcasestepAdmin(admin.TabularInline):
list_display = ['webcasename','webteststep','webtestobjname','webfindmethod','webevelement','weboptmethod','webassertdata','webtestresult','create_time','id','webcase']
model = Webcasestep
extra = 1
class WebcaseAdmin(admin.ModelAdmin):
list_display = ['webcasename','webtestresult','create_time','id']
inlines = [WebcasestepAdmin]
admin.site.register(Appcase,AppcaseAdmin) | 42.652174 | 172 | 0.763507 | 95 | 981 | 7.8 | 0.505263 | 0.059379 | 0.064777 | 0.078273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002268 | 0.100917 | 981 | 23 | 173 | 42.652174 | 0.837868 | 0.026504 | 0 | 0.111111 | 0 | 0 | 0.315514 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0 | false | 0.055556 | 0.166667 | 0 | 0.944444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
53e6a737df03b3bafffb1324b1798cabf151cd55 | 392 | py | Python | src/shortener/management/commands/refreshcodes.py | dishantrathi/litresin.ml | ade3eb13470f34df4835134970877370ab4ea2c2 | [
"MIT"
] | 1 | 2018-02-25T07:09:54.000Z | 2018-02-25T07:09:54.000Z | src/shortener/management/commands/refreshcodes.py | dishantrathi/litresin.ml | ade3eb13470f34df4835134970877370ab4ea2c2 | [
"MIT"
] | null | null | null | src/shortener/management/commands/refreshcodes.py | dishantrathi/litresin.ml | ade3eb13470f34df4835134970877370ab4ea2c2 | [
"MIT"
] | null | null | null | from django.core.management.base import BaseCommand, CommandError
from shortener.models import LitresinURL
class Command(BaseCommand):
    help = 'Refreshes all LitresinURL shortcodes'
def add_arguments(self, parser):
parser.add_argument('--items', type=int)
def handle(self, *args, **options):
return LitresinURL.objects.refresh_shortcodes(items=options['items']) | 30.153846 | 77 | 0.742347 | 45 | 392 | 6.4 | 0.711111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153061 | 392 | 13 | 77 | 30.153846 | 0.86747 | 0 | 0 | 0 | 0 | 0 | 0.119593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.125 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
53e9a737457a97f92f0b7acda4b42454f0554961 | 398 | py | Python | Python_challege/count_substring.py | pavi-ninjaac/HackerRank | 1808c8ee8fb475e453aa1c19feb3734fa2be6325 | [
"MIT"
] | null | null | null | Python_challege/count_substring.py | pavi-ninjaac/HackerRank | 1808c8ee8fb475e453aa1c19feb3734fa2be6325 | [
"MIT"
] | null | null | null | Python_challege/count_substring.py | pavi-ninjaac/HackerRank | 1808c8ee8fb475e453aa1c19feb3734fa2be6325 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Tue Oct 13 20:04:26 2020
@author: ninjaac
"""
def count_substring(string, sub_string):
return string.count(sub_string)
#return (''.join(string)).count(''.join(sub_string))
if __name__ == '__main__':
string = input().strip()
sub_string = input().strip()
count = count_substring(string, sub_string)
print(count)
| 22.111111 | 57 | 0.61809 | 50 | 398 | 4.62 | 0.54 | 0.194805 | 0.17316 | 0.199134 | 0.251082 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041801 | 0.218593 | 398 | 17 | 58 | 23.411765 | 0.700965 | 0.319095 | 0 | 0 | 0 | 0 | 0.032653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0.142857 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
53fd6ad6f3a9859c11de1a88fe0ca83c66902766 | 100 | py | Python | Solutions/PAT/Advanced/1092.py | Kahsolt/OJ-Notes | 6623ab7d61e305ce0467d6220f49134044b67c9e | [
"WTFPL"
] | null | null | null | Solutions/PAT/Advanced/1092.py | Kahsolt/OJ-Notes | 6623ab7d61e305ce0467d6220f49134044b67c9e | [
"WTFPL"
] | null | null | null | Solutions/PAT/Advanced/1092.py | Kahsolt/OJ-Notes | 6623ab7d61e305ce0467d6220f49134044b67c9e | [
"WTFPL"
] | null | null | null | #!/usr/bin/env python3
import sys
dshop={}
shop=input()
need=input()
n = int(input())  # assumption: the item count is read from stdin; `n` was previously undefined
for i in range(n):
pass
| 9.090909 | 22 | 0.64 | 17 | 100 | 3.764706 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.19 | 100 | 10 | 23 | 10 | 0.777778 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
990504124691d1af8843275d181bf46cb8fd79a7 | 2,250 | py | Python | truffe2/notifications/models.py | GayLaurent/truffe2 | 477f9408f91c9417705dc792dd2eef7de758486b | [
"BSD-2-Clause"
] | null | null | null | truffe2/notifications/models.py | GayLaurent/truffe2 | 477f9408f91c9417705dc792dd2eef7de758486b | [
"BSD-2-Clause"
] | null | null | null | truffe2/notifications/models.py | GayLaurent/truffe2 | 477f9408f91c9417705dc792dd2eef7de758486b | [
"BSD-2-Clause"
] | null | null | null | from django.db import models
from django.db.models.deletion import PROTECT
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import fields
from django.conf import settings
from django.utils.translation import gettext_lazy as _
import json
class Notification(models.Model):
key = models.CharField(max_length=255)
species = models.CharField(max_length=255)
creation_date = models.DateTimeField(auto_now_add=True)
seen_date = models.DateTimeField(blank=True, null=True)
seen = models.BooleanField(default=False)
content_type = models.ForeignKey(ContentType, on_delete=PROTECT)
object_id = models.PositiveIntegerField()
linked_object = fields.GenericForeignKey('content_type', 'object_id')
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=PROTECT)
metadata = models.TextField(blank=True, null=True)
def set_metadata(self, data):
self.metadata = json.dumps(data)
def get_metadata(self):
return json.loads(self.metadata)
def get_template(self):
return 'notifications/species/%s.html' % (self.species,)
def get_email_template(self):
return 'notifications/species/mails/%s.html' % (self.species,)
def get_center_message_template(self):
return 'notifications/species/center/message/%s.html' % (self.species,)
def get_center_buttons_template(self):
return 'notifications/species/center/buttons/%s.html' % (self.species,)
class NotificationRestriction(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=PROTECT)
key = models.CharField(max_length=255)
no_email = models.BooleanField(default=False)
autoread = models.BooleanField(default=False)
no_email_group = models.BooleanField(default=False, help_text=_(u'Ne pas regrouper les notification en un seul mail'))
class NotificationEmail(models.Model):
date = models.DateTimeField(auto_now_add=True)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=PROTECT)
notification = models.ForeignKey(Notification, on_delete=PROTECT)
no_email_group = models.BooleanField(default=False, help_text=_(u'Ne pas regrouper les notification en un seul mail'))
| 35.15625 | 122 | 0.757333 | 285 | 2,250 | 5.814035 | 0.308772 | 0.03621 | 0.075438 | 0.090525 | 0.455643 | 0.393482 | 0.290887 | 0.212432 | 0.212432 | 0.212432 | 0 | 0.004668 | 0.143111 | 2,250 | 63 | 123 | 35.714286 | 0.854772 | 0 | 0 | 0.170732 | 0 | 0 | 0.120444 | 0.067556 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0 | 0.170732 | 0.121951 | 0.97561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
99056d08fe7ecbe6769c03e5d36a08939b25dc1e | 360 | py | Python | Round 1/2.statements/run.py | beetlesoup/udemy-python-scripting-a-car | ae41491161821a8f4e63fc86c368a71bb3d6cc15 | [
"Unlicense"
] | null | null | null | Round 1/2.statements/run.py | beetlesoup/udemy-python-scripting-a-car | ae41491161821a8f4e63fc86c368a71bb3d6cc15 | [
"Unlicense"
] | null | null | null | Round 1/2.statements/run.py | beetlesoup/udemy-python-scripting-a-car | ae41491161821a8f4e63fc86c368a71bb3d6cc15 | [
"Unlicense"
] | null | null | null | print("hello world!")
from browser import document, window, alert
env = window.env
def PrintNiceMessage(message):
window.swal(message, "", "success")
######################
# Start Learning Here
######################
env.step(0)
env.step(0)
env.step(0)
env.step(0)
#######################
## End Learning Here
####################### | 18.947368 | 44 | 0.491667 | 37 | 360 | 4.783784 | 0.567568 | 0.158192 | 0.180791 | 0.186441 | 0.180791 | 0.180791 | 0.180791 | 0.180791 | 0.180791 | 0 | 0 | 0.013201 | 0.158333 | 360 | 19 | 45 | 18.947368 | 0.570957 | 0.102778 | 0 | 0.444444 | 0 | 0 | 0.089623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
54cb3a3824a01a29171f9486e8846d780dc1ee0b | 1,618 | py | Python | pydis_site/apps/api/viewsets/bot/offensive_message.py | Transfusion/site | 6992491f8c5f074e17c34d09553a715425112652 | [
"MIT"
] | 700 | 2018-11-17T15:56:51.000Z | 2022-03-30T22:53:17.000Z | pydis_site/apps/api/viewsets/bot/offensive_message.py | Transfusion/site | 6992491f8c5f074e17c34d09553a715425112652 | [
"MIT"
] | 542 | 2018-11-17T13:39:42.000Z | 2022-03-31T11:24:00.000Z | pydis_site/apps/api/viewsets/bot/offensive_message.py | Transfusion/site | 6992491f8c5f074e17c34d09553a715425112652 | [
"MIT"
] | 178 | 2018-11-21T09:06:56.000Z | 2022-03-31T07:43:28.000Z | from rest_framework.mixins import (
CreateModelMixin,
DestroyModelMixin,
ListModelMixin
)
from rest_framework.viewsets import GenericViewSet
from pydis_site.apps.api.models.bot.offensive_message import OffensiveMessage
from pydis_site.apps.api.serializers import OffensiveMessageSerializer
class OffensiveMessageViewSet(
CreateModelMixin, ListModelMixin, DestroyModelMixin, GenericViewSet
):
"""
View providing CRUD access to offensive messages.
## Routes
### GET /bot/offensive-messages
Returns all offensive messages in the database.
#### Response format
>>> [
... {
... 'id': '631953598091100200',
... 'channel_id': '291284109232308226',
... 'delete_date': '2019-11-01T21:51:15.545000Z'
... },
... ...
... ]
#### Status codes
- 200: returned on success
### POST /bot/offensive-messages
Create a new offensive message object.
#### Request body
>>> {
... 'id': int,
... 'channel_id': int,
... 'delete_date': datetime.datetime # ISO-8601-formatted date
... }
#### Status codes
- 201: returned on success
- 400: if the body format is invalid
### DELETE /bot/offensive-messages/<id:int>
Delete the offensive message object with the given `id`.
#### Status codes
- 204: returned on success
    - 404: if an offensive message object with the given `id` does not exist
## Authentication
Requires an API token.
"""
serializer_class = OffensiveMessageSerializer
queryset = OffensiveMessage.objects.all()
| 26.096774 | 77 | 0.643387 | 163 | 1,618 | 6.325153 | 0.533742 | 0.082444 | 0.058196 | 0.032978 | 0.108632 | 0.069835 | 0.069835 | 0 | 0 | 0 | 0 | 0.061275 | 0.243511 | 1,618 | 61 | 78 | 26.52459 | 0.781046 | 0.593943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
54d17f0bb7a7260eb729e3f73b629de662c622ec | 146 | py | Python | helpers/apps.py | bluebamus/django_function_based_web_site | 5d3b453334110b6d49e5dbe09607df839bc5b649 | [
"MIT"
] | null | null | null | helpers/apps.py | bluebamus/django_function_based_web_site | 5d3b453334110b6d49e5dbe09607df839bc5b649 | [
"MIT"
] | null | null | null | helpers/apps.py | bluebamus/django_function_based_web_site | 5d3b453334110b6d49e5dbe09607df839bc5b649 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class HelpersConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'helpers'
| 20.857143 | 56 | 0.760274 | 17 | 146 | 6.411765 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 146 | 6 | 57 | 24.333333 | 0.879032 | 0 | 0 | 0 | 0 | 0 | 0.246575 | 0.19863 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
54d24f8e32dd64fca081cffc9246eee021071dcc | 649 | py | Python | products-service/tests/test_service.py | piotrb5e3/sanic-uservices-demo | 8e9ae3fd7a16fdd5f10426195d3300a98d886974 | [
"MIT"
] | null | null | null | products-service/tests/test_service.py | piotrb5e3/sanic-uservices-demo | 8e9ae3fd7a16fdd5f10426195d3300a98d886974 | [
"MIT"
] | null | null | null | products-service/tests/test_service.py | piotrb5e3/sanic-uservices-demo | 8e9ae3fd7a16fdd5f10426195d3300a98d886974 | [
"MIT"
] | 1 | 2019-03-24T18:49:55.000Z | 2019-03-24T18:49:55.000Z | from service import app
def test_all_prooducts():
request, response = app.test_client.get('/')
assert response.status == 200
response_body = json.loads(response.body)
assert len(response_body) == 4
def test_single_product():
request, response = app.test_client.get('/0')
assert response.status == 200
assert response.json['id'] == 0
assert response.json['name'] == 'spanner 3/4"'
assert response.json['description'] == 'a 3/4 tool steel spanner'
assert response.json['price'] == 200
def test_product_does_not_exist():
request, response = app.test_client.get('/10')
assert response.status == 404
| 28.217391 | 69 | 0.684129 | 88 | 649 | 4.897727 | 0.420455 | 0.227378 | 0.167053 | 0.153132 | 0.215777 | 0.215777 | 0 | 0 | 0 | 0 | 0 | 0.0394 | 0.178737 | 649 | 22 | 70 | 29.5 | 0.769231 | 0 | 0 | 0.125 | 0 | 0 | 0.098613 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.1875 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
54e30f9cec0b21f3126ae5cb900dfa2f46ca1b59 | 776 | py | Python | supporting_code/utilities/image_utilities.py | MAnfal/intro-to-ml-nd-image-classifier | b8b4f3e4b6cdee6734008dab26207bd62024b3f8 | [
"MIT"
] | null | null | null | supporting_code/utilities/image_utilities.py | MAnfal/intro-to-ml-nd-image-classifier | b8b4f3e4b6cdee6734008dab26207bd62024b3f8 | [
"MIT"
] | null | null | null | supporting_code/utilities/image_utilities.py | MAnfal/intro-to-ml-nd-image-classifier | b8b4f3e4b6cdee6734008dab26207bd62024b3f8 | [
"MIT"
] | null | null | null | import numpy as np
from PIL import Image
'''
Scales, crops, and normalizes a PIL image for a PyTorch model, and returns a Numpy array.
'''
def get_image_as_np_array(image_path, image_resize_size, center_crop_size, network_means, network_std_dev):
im = Image.open(image_path)
im = im.resize((image_resize_size, image_resize_size))
im = im.crop(
(
(image_resize_size - center_crop_size) // 2,
(image_resize_size - center_crop_size) // 2,
(image_resize_size + center_crop_size) // 2,
(image_resize_size + center_crop_size) // 2
)
)
np_image = np.array(im) / 255
np_image = (np_image - network_means) / network_std_dev
np_image = np_image.transpose((2, 0, 1))
return np_image
| 25.032258 | 107 | 0.658505 | 113 | 776 | 4.150442 | 0.327434 | 0.164179 | 0.223881 | 0.223881 | 0.424307 | 0.317697 | 0.255864 | 0.255864 | 0.255864 | 0.255864 | 0 | 0.017065 | 0.244845 | 776 | 30 | 108 | 25.866667 | 0.783276 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
54f05494e7fb6c6a5784088ca544bce3c0f96767 | 1,017 | py | Python | aiida/sphinxext/__init__.py | tomzhang/aiida_core | 949810e9f3daff0f748c5c9aa1dde4f5222bb49b | [
"BSD-2-Clause"
] | 1 | 2019-04-29T12:39:31.000Z | 2019-04-29T12:39:31.000Z | aiida/sphinxext/__init__.py | tomzhang/aiida_core | 949810e9f3daff0f748c5c9aa1dde4f5222bb49b | [
"BSD-2-Clause"
] | null | null | null | aiida/sphinxext/__init__.py | tomzhang/aiida_core | 949810e9f3daff0f748c5c9aa1dde4f5222bb49b | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
###########################################################################
# Copyright (c), The AiiDA team. All rights reserved. #
# This file is part of the AiiDA code. #
# #
# The code is hosted on GitHub at https://github.com/aiidateam/aiida_core #
# For further information on the license, see the LICENSE.txt file #
# For further information please visit http://www.aiida.net #
###########################################################################
"""
Defines reStructuredText directives to simplify documenting AiiDA and its plugins.
"""
from __future__ import absolute_import
__version__ = '0.1.0'
from . import workchain
def setup(app):
"""
Setup function to add the extension classes / nodes to Sphinx.
"""
workchain.setup_aiida_workchain(app)
return {'version': __version__, 'parallel_read_safe': True}
| 37.666667 | 82 | 0.506391 | 99 | 1,017 | 5.020202 | 0.676768 | 0.032193 | 0.084507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005398 | 0.271386 | 1,017 | 26 | 83 | 39.115385 | 0.665317 | 0.581121 | 0 | 0 | 0 | 0 | 0.132743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
54f20614d650167877bfc902c76c2be76d94c906 | 5,181 | py | Python | lib/custom_whatlies/embedding.py | Pliploop/NLP_Bulk_labelling_app | a9a7bf3ea5b48730b56a901a9b857322c6b1f75a | [
"MIT"
] | null | null | null | lib/custom_whatlies/embedding.py | Pliploop/NLP_Bulk_labelling_app | a9a7bf3ea5b48730b56a901a9b857322c6b1f75a | [
"MIT"
] | null | null | null | lib/custom_whatlies/embedding.py | Pliploop/NLP_Bulk_labelling_app | a9a7bf3ea5b48730b56a901a9b857322c6b1f75a | [
"MIT"
] | null | null | null | from typing import Union, Optional, Sequence, Callable
from copy import deepcopy
import numpy as np
import scipy.spatial.distance as scipy_distance
from sklearn.metrics import pairwise_distances
class Embedding:
"""
This object represents a word embedding. It contains a vector and a name.
Arguments:
name: the name of this embedding, includes operations
vector: the numerical representation of the embedding
orig: original name of embedding, is left alone
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo | bar
foo - bar + bar
```
"""
def __init__(self, name, vector, orig=None):
self.orig = name if not orig else orig
self.name = name
self.vector = np.array(vector)
def add_property(self, name, func):
result = self.copy()
setattr(result, name, func(result))
return result
@property
def ndim(self):
"""
        Return the dimension of the embedding vector.
"""
return self.vector.shape[0]
def copy(self):
"""
        Returns a deepcopy of the embedding.
"""
return deepcopy(self)
def __add__(self, other) -> "Embedding":
"""
Add two embeddings together.
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo + bar
```
"""
copied = deepcopy(self)
copied.name = f"({self.name} + {other.name})"
copied.vector = self.vector + other.vector
return copied
def __sub__(self, other):
"""
Subtract two embeddings.
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo - bar
```
"""
copied = deepcopy(self)
copied.name = f"({self.name} - {other.name})"
copied.vector = self.vector - other.vector
return copied
def __neg__(self):
"""
Negate an embedding.
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
        assert ((- foo).vector == - foo.vector).all()
```
"""
copied = deepcopy(self)
copied.name = f"(-{self.name})"
copied.vector = -self.vector
return copied
def __gt__(self, other):
"""
        Measures how large this embedding is relative to another one (the scalar projection coefficient).
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo > bar
```
"""
return (self.vector.dot(other.vector)) / (other.vector.dot(other.vector))
def __rshift__(self, other):
"""
        Maps an embedding onto another one.
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo >> bar
```
"""
copied = deepcopy(self)
new_vec = (
(self.vector.dot(other.vector))
/ (other.vector.dot(other.vector))
* other.vector
)
copied.name = f"({self.name} >> {other.name})"
copied.vector = new_vec
return copied
def __or__(self, other):
"""
Makes one embedding orthogonal to the other one.
Usage:
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [0.1, 0.3])
bar = Embedding("bar", [0.7, 0.2])
foo | bar
```
"""
copied = deepcopy(self)
copied.name = f"({self.name} | {other.name})"
copied.vector = self.vector - (self >> other).vector
return copied
def __repr__(self):
return f"Emb[{self.name}]"
def __str__(self):
return self.name
@property
def norm(self):
"""Gives the norm of the vector of the embedding"""
return np.linalg.norm(self.vector)
def distance(self, other, metric: str = "cosine"):
"""
Calculates the vector distance between two embeddings.
Arguments:
other: the other embedding you're comparing against
metric: the distance metric to use, the list of valid options can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html)
**Usage**
```python
from lib.custom_whatlies.embedding import Embedding
foo = Embedding("foo", [1.0, 0.0])
bar = Embedding("bar", [0.0, 0.5])
foo.distance(bar)
foo.distance(bar, metric="euclidean")
foo.distance(bar, metric="cosine")
```
"""
return pairwise_distances([self.vector], [other.vector], metric=metric)[0][0]
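The `>>` and `|` operators above implement vector projection and rejection; the formulas can be sanity-checked without the `Embedding` class at all (a dependency-free sketch of the same math):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

foo = [0.1, 0.3]
bar = [0.7, 0.2]

# foo >> bar: scalar projection coefficient times bar
coef = dot(foo, bar) / dot(bar, bar)
proj = [coef * b for b in bar]

# foo | bar: the component of foo orthogonal to bar (rejection)
ortho = [f - p for f, p in zip(foo, proj)]

# the rejection is perpendicular to bar, and the two pieces reassemble foo
assert abs(dot(ortho, bar)) < 1e-12
assert all(abs((p + o) - f) < 1e-12 for p, o, f in zip(proj, ortho, foo))
```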
# --- prol/user/views.py (onagbonoga/goals_tracking_app, MIT) ---
from flask import url_for, redirect, Blueprint, render_template, session
from werkzeug.security import generate_password_hash
from prol import db
from prol.user.forms import RegisterForm, LoginForm
from prol.user.models import User
user_app = Blueprint('User',__name__)
@user_app.route('/register', methods=('GET','POST'))
def register():
form = RegisterForm()
if form.validate_on_submit():
hashed_password = generate_password_hash(form.password.data)
user = User(
form.first_name.data,
form.last_name.data,
form.email.data,
hashed_password
)
db.session.add(user)
db.session.commit()
        user = User.query.filter_by(email=form.email.data).first()
session['id'] = user.id
session['first_name'] = user.first_name
return redirect(url_for('User.home'))
return render_template("register.html",form=form)
@user_app.route('/',methods=('GET','POST'))
def login():
form = LoginForm()
if form.validate_on_submit():
        user = User.query.filter_by(email=form.email.data).first()
        if user is not None:  # an unknown email would otherwise raise AttributeError on None
            session['id'] = user.id
            session['first_name'] = user.first_name
            return redirect(url_for('User.home'))
return render_template('login.html',form=form)
@user_app.route('/home')
def home():
return render_template("index.html",first_name=session['first_name'])
@user_app.route('/logout')
def logout():
session.pop('id')
session.pop('first_name')
return redirect(url_for('User.login'))
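The registration view stores `generate_password_hash(...)`; the corresponding verify step at login time follows the same store-then-check pattern, sketched here with only the standard library (PBKDF2 stands in for werkzeug's helpers, so names like `hash_password` are illustrative, not from this file):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    # derive a salted digest; store both salt and digest, never the password
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password('s3cret')
assert check_password('s3cret', salt, digest)
assert not check_password('wrong', salt, digest)
```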
# --- composition.py (prabal255/Data-structures-and-Algorithms, MIT) ---
class Salary:
    def __init__(self, pay, bonus):
        self.pay = pay
        self.bonus = bonus

    def annual(self):
        return (self.pay * 12) + self.bonus


class Employee:
    def __init__(self, name, age, pay, bonus):
        self.name = name
        self.age = age
        self.pay = pay
        self.bonus = bonus
        self.obj_salary = Salary(pay, bonus)  # composition: an Employee has a Salary

    def total_salary(self):
        return self.obj_salary.annual()


emp = Employee('ajay', 12, 1, 1)
print(emp.total_salary())
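The same has-a pattern can delegate through a property so the contained object stays an implementation detail (`Engine`/`Car` are illustrative names, not from the file above):

```python
class Engine:
    def __init__(self, horsepower):
        self.horsepower = horsepower


class Car:
    def __init__(self, horsepower):
        self.engine = Engine(horsepower)  # Car *has an* Engine rather than inheriting from it

    @property
    def horsepower(self):
        return self.engine.horsepower  # delegate the attribute to the composed object


car = Car(120)
assert car.horsepower == 120
```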
# --- scripts/03-predict.py (LaudateCorpus1/salgan, MIT) ---
import os
import numpy as np
from tqdm import tqdm
import cv2
import glob
from utils import *
from constants import *
from models.model_bce import ModelBCE
def test(path_to_images, path_output_maps, model_to_test=None):
list_img_files = [k.split('/')[-1].split('.')[0] for k in glob.glob(os.path.join(path_to_images, '*'))]
# Load Data
list_img_files.sort()
for curr_file in tqdm(list_img_files, ncols=20):
        print(os.path.join(path_to_images, curr_file + '.jpg'))
img = cv2.cvtColor(cv2.imread(os.path.join(path_to_images, curr_file + '.jpg'), cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
predict(model=model_to_test, image_stimuli=img, name=curr_file, path_output_maps=path_output_maps)
def main():
# Create network
model = ModelBCE(INPUT_SIZE[0], INPUT_SIZE[1], batch_size=8)
# Here need to specify the epoch of model sanpshot
load_weights(model.net['output'], path='gen_', epochtoload=90)
# Here need to specify the path to images and output path
test(path_to_images='../images/', path_output_maps='../saliency/', model_to_test=model)
if __name__ == "__main__":
main()
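The list comprehension in `test()` derives image stems by splitting on `/` and `.`, which assumes POSIX separators and drops everything after the first dot; `os.path` offers a more portable equivalent. A side-by-side sketch of the difference:

```python
import os

paths = ['images/cat.jpg', 'images/dog.photo.jpg']

# split-based stem, as in test() above: truncates names that contain extra dots
split_stems = [p.split('/')[-1].split('.')[0] for p in paths]

# splitext-based stem: keeps everything before the final extension only
ext_stems = [os.path.splitext(os.path.basename(p))[0] for p in paths]

assert split_stems == ['cat', 'dog']
assert ext_stems == ['cat', 'dog.photo']
```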
# --- src/217.py (hippieZhou/The-Way-Of-LeetCode, MIT) ---
# Given an integer array, determine whether it contains any duplicate element.
# The function returns true if any value appears at least twice in the array,
# and false if every element is distinct.
# Example 1:
# Input: [1,2,3,1]
# Output: true
class Solution:
def containsDuplicate(self, nums: list) -> bool:
return len(nums) != len(set(nums))
v = Solution().containsDuplicate([1,1,1,3,3,4,3,2,4,2])
print(v)
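The `set` comparison above is O(n) but always scans the whole list; a seen-set variant (an alternative sketch, not part of the original solution) can return as soon as the first duplicate appears:

```python
def contains_duplicate(nums: list) -> bool:
    seen = set()
    for n in nums:
        if n in seen:
            return True  # early exit on the first repeated value
        seen.add(n)
    return False

assert contains_duplicate([1, 2, 3, 1]) is True
assert contains_duplicate([1, 2, 3, 4]) is False
```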
# --- tracer/utils.py (RandySheriffH/tracer, MIT) ---
# Licensed under the MIT license.
'''utilities'''
import os
import shutil
def to_int(array):
'''convert array to ints'''
return [int(a) for a in array]
def create_temp():
'''create temp folder'''
temp = get_temp()
if not os.path.isdir(temp):
os.mkdir(temp)
def get_temp():
'''temp string'''
return './temp/'
def remove_temp():
'''remove temp folder'''
shutil.rmtree(get_temp(), ignore_errors=True)
def pwd():
'''return path to the file'''
return os.path.dirname(__file__)
class UnknownFormatError(RuntimeError):
'''raise on unsupported model format'''
def __init__(self, message):
RuntimeError.__init__(self, message)
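A quick check of the helpers above, re-declared inline so the sketch is self-contained (`to_int` and `UnknownFormatError` mirror the definitions in this file):

```python
def to_int(array):
    '''convert array to ints'''
    return [int(a) for a in array]


class UnknownFormatError(RuntimeError):
    '''raise on unsupported model format'''


assert to_int(['1', '2', '3']) == [1, 2, 3]

try:
    raise UnknownFormatError('model format .xyz is not supported')
except UnknownFormatError as err:
    # callers can also catch it as a plain RuntimeError
    assert isinstance(err, RuntimeError)
```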
# --- AnyTransHelper/settings.py (wcj3/AnyTransHelper, MIT) ---
# Location of exported xml playlist from itunes
# Update username and directory to their appropriate path, e.g. C:/users/bob/desktop/file.xml
xml_playlist_loc = 'C:/users/username/directory/_MyMusic_.xml'
# name of user directory where your Music folder is located
# example: C:/users/bob/music/itunes
music_folder_dir = 'name'
# --- learntools/python/utils.py (sfrias/learntools, Apache-2.0) ---
def backtickify(s):
return '`{}`'.format(s)
def bind_exercises(g, exercises, start=1):
for i, ex in enumerate(exercises):
qno = i + start
varname = 'q{}'.format(qno)
assert varname not in g
g[varname] = ex
yield varname
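`bind_exercises` injects each exercise into a namespace dict under `q1`, `q2`, ... and yields the names it created; a self-contained sketch of how it behaves when consumed (the function body is copied from above, the driver code is illustrative):

```python
def bind_exercises(g, exercises, start=1):
    for i, ex in enumerate(exercises):
        qno = i + start
        varname = 'q{}'.format(qno)
        assert varname not in g  # refuse to clobber an existing binding
        g[varname] = ex
        yield varname

g = {}
names = list(bind_exercises(g, ['ex_a', 'ex_b'], start=1))
assert names == ['q1', 'q2']
assert g == {'q1': 'ex_a', 'q2': 'ex_b'}
```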
# --- app/api/__init__.py (lewyuejian/flask-server, MIT) ---
#!/usr/bin/env python3
# -*- encoding: utf-8 -*-
'''
@author: yuejl
@application:
@contact: lewyuejian@163.com
@file: __init__.py.py
@time: 2021/7/5 0005 19:36
@desc:
'''
from flask import Blueprint
bp = Blueprint('api', __name__)
# Imported at the bottom on purpose to avoid a circular import: blog.py also imports bp
from app.api import blog
# --- aioworkers_sentry/__init__.py (aioworkers/aioworkers-sentry, Apache-2.0) ---
from pathlib import Path
try:
# true
from .version import __version__
except ImportError:
__version__ = 'dev'
configs = Path(__file__).parent / 'sentry.ini',
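Note the trailing comma on the `configs` assignment above: it makes the value a one-element tuple of paths rather than a bare `Path`. The same detail in isolation (the `'pkg'` path here is illustrative):

```python
from pathlib import Path

configs = Path('pkg') / 'sentry.ini',  # trailing comma: a one-element tuple, not a Path

assert isinstance(configs, tuple)
assert configs[0].name == 'sentry.ini'
```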
# --- Lib/test/test_file.py (marcosptf/cpython-2.0.1, PSF-2.0) ---
from test_support import TESTFN
from UserList import UserList
# verify writelines with instance sequence
l = UserList(['1', '2'])
f = open(TESTFN, 'wb')
f.writelines(l)
f.close()
f = open(TESTFN, 'rb')
buf = f.read()
f.close()
assert buf == '12'
# verify writelines with integers
f = open(TESTFN, 'wb')
try:
f.writelines([1, 2, 3])
except TypeError:
pass
else:
print "writelines accepted sequence of integers"
f.close()
# verify writelines with integers in UserList
f = open(TESTFN, 'wb')
l = UserList([1,2,3])
try:
f.writelines(l)
except TypeError:
pass
else:
print "writelines accepted sequence of integers"
f.close()
# verify writelines with non-string object
class NonString: pass
f = open(TESTFN, 'wb')
try:
f.writelines([NonString(), NonString()])
except TypeError:
pass
else:
print "writelines accepted sequence of non-string objects"
f.close()
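These checks target Python 2.0 file objects, hence the `print` statements. Under Python 3, the same `writelines` behaviour can be exercised on an in-memory binary file (a rough modern equivalent, not part of the original test):

```python
import io

f = io.BytesIO()
f.writelines([b'1', b'2'])
assert f.getvalue() == b'12'

# non-bytes items are rejected, matching the TypeError the original test expects
try:
    f.writelines([1, 2, 3])
except TypeError:
    pass
else:
    raise AssertionError('writelines accepted a sequence of integers')
```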
# --- azure-mgmt-cognitiveservices/azure/mgmt/cognitiveservices/models/__init__.py (JonathanGailliez/azure-sdk-for-python, MIT) ---
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
try:
from .sku_py3 import Sku
from .cognitive_services_account_create_parameters_py3 import CognitiveServicesAccountCreateParameters
from .cognitive_services_account_update_parameters_py3 import CognitiveServicesAccountUpdateParameters
from .cognitive_services_account_py3 import CognitiveServicesAccount
from .cognitive_services_account_keys_py3 import CognitiveServicesAccountKeys
from .regenerate_key_parameters_py3 import RegenerateKeyParameters
from .cognitive_services_resource_and_sku_py3 import CognitiveServicesResourceAndSku
from .cognitive_services_account_enumerate_skus_result_py3 import CognitiveServicesAccountEnumerateSkusResult
from .metric_name_py3 import MetricName
from .usage_py3 import Usage
from .usages_result_py3 import UsagesResult
from .error_body_py3 import ErrorBody
from .error_py3 import Error, ErrorException
from .operation_display_info_py3 import OperationDisplayInfo
from .operation_entity_py3 import OperationEntity
from .check_sku_availability_parameter_py3 import CheckSkuAvailabilityParameter
from .check_sku_availability_result_py3 import CheckSkuAvailabilityResult
from .check_sku_availability_result_list_py3 import CheckSkuAvailabilityResultList
from .resource_sku_restriction_info_py3 import ResourceSkuRestrictionInfo
from .resource_sku_restrictions_py3 import ResourceSkuRestrictions
from .resource_sku_py3 import ResourceSku
except (SyntaxError, ImportError):
from .sku import Sku
from .cognitive_services_account_create_parameters import CognitiveServicesAccountCreateParameters
from .cognitive_services_account_update_parameters import CognitiveServicesAccountUpdateParameters
from .cognitive_services_account import CognitiveServicesAccount
from .cognitive_services_account_keys import CognitiveServicesAccountKeys
from .regenerate_key_parameters import RegenerateKeyParameters
from .cognitive_services_resource_and_sku import CognitiveServicesResourceAndSku
from .cognitive_services_account_enumerate_skus_result import CognitiveServicesAccountEnumerateSkusResult
from .metric_name import MetricName
from .usage import Usage
from .usages_result import UsagesResult
from .error_body import ErrorBody
from .error import Error, ErrorException
from .operation_display_info import OperationDisplayInfo
from .operation_entity import OperationEntity
from .check_sku_availability_parameter import CheckSkuAvailabilityParameter
from .check_sku_availability_result import CheckSkuAvailabilityResult
from .check_sku_availability_result_list import CheckSkuAvailabilityResultList
from .resource_sku_restriction_info import ResourceSkuRestrictionInfo
from .resource_sku_restrictions import ResourceSkuRestrictions
from .resource_sku import ResourceSku
from .cognitive_services_account_paged import CognitiveServicesAccountPaged
from .resource_sku_paged import ResourceSkuPaged
from .operation_entity_paged import OperationEntityPaged
from .cognitive_services_management_client_enums import (
SkuName,
SkuTier,
Kind,
ProvisioningState,
KeyName,
UnitType,
QuotaUsageStatus,
ResourceSkuRestrictionsType,
ResourceSkuRestrictionsReasonCode,
)
__all__ = [
'Sku',
'CognitiveServicesAccountCreateParameters',
'CognitiveServicesAccountUpdateParameters',
'CognitiveServicesAccount',
'CognitiveServicesAccountKeys',
'RegenerateKeyParameters',
'CognitiveServicesResourceAndSku',
'CognitiveServicesAccountEnumerateSkusResult',
'MetricName',
'Usage',
'UsagesResult',
'ErrorBody',
'Error', 'ErrorException',
'OperationDisplayInfo',
'OperationEntity',
'CheckSkuAvailabilityParameter',
'CheckSkuAvailabilityResult',
'CheckSkuAvailabilityResultList',
'ResourceSkuRestrictionInfo',
'ResourceSkuRestrictions',
'ResourceSku',
'CognitiveServicesAccountPaged',
'ResourceSkuPaged',
'OperationEntityPaged',
'SkuName',
'SkuTier',
'Kind',
'ProvisioningState',
'KeyName',
'UnitType',
'QuotaUsageStatus',
'ResourceSkuRestrictionsType',
'ResourceSkuRestrictionsReasonCode',
]
# --- test_fallback.py (jvarho/pylibscrypt, 0BSD) ---
#!/usr/bin/env python3
# Copyright (c) 2014-2021, Jan Varho
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import ctypes.util
import hashlib
import platform
import sys
if '-p' in sys.argv:
platform.python_implementation = lambda:'PyPy'
def raises(e):
def raising(*arg, **kwarg):
raise e
return raising
def unimport(mod=None):
del sys.modules['pylibscrypt']
sys.modules.pop('pylibscrypt.common', None)
sys.modules.pop('pylibscrypt.mcf', None)
sys.modules.pop('pylibscrypt.libsodium_load', None)
if mod is not None:
sys.modules.pop(mod, None)
import pylibscrypt
sys.modules['pylibscrypt.hashlibscrypt'] = None
if '-e' in sys.argv:
unimport()
tmp1 = ctypes.util.find_library
tmp2 = ctypes.cdll.LoadLibrary
tmp3 = ctypes.CDLL
ctypes.util.find_library = lambda *args, **kw: None
ctypes.cdll.LoadLibrary = lambda *args, **kw: None
import pylibscrypt
ctypes.util.find_library = tmp1
ctypes.cdll.LoadLibrary = tmp2
unimport('pylibscrypt.pylibscrypt')
ctypes.CDLL = lambda *args, **kw: None
import pylibscrypt
unimport('pylibscrypt.pylibscrypt')
ctypes.CDLL = raises(OSError)
import pylibscrypt
ctypes.CDLL = tmp3
unimport('pylibscrypt.pylibscrypt')
ctypes.CDLL = lambda *args, **kw: None
import pylibscrypt
unimport()
import pylibscrypt
unimport('pylibscrypt.pylibscrypt')
sys.modules['pylibscrypt.pylibscrypt'] = None
import pylibscrypt
unimport('pylibscrypt.pyscrypt')
sys.modules['scrypt'] = None
import pylibscrypt
unimport()
sys.modules['pylibscrypt.pyscrypt'] = None
import pylibscrypt
unimport()
sys.modules['pylibscrypt.pylibsodium'] = None
import pylibscrypt
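The `unimport` helper drives the fallback tests by evicting entries from `sys.modules` so the next `import` re-executes the module under different monkeypatched conditions. The core trick in isolation, using a stdlib module instead of pylibscrypt:

```python
import importlib
import sys

first = importlib.import_module('json')
sys.modules.pop('json', None)             # evict the cached module object
second = importlib.import_module('json')  # re-executes the module, caching a new object

assert first is not second
assert first.dumps([1]) == second.dumps([1]) == '[1]'
```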
# --- core/reports/forms.py (henrryyanez/test2, MIT) ---
from django.forms import CharField, Form, TextInput
class ReportForm(Form):
date_range = CharField(widget=TextInput(attrs={
'class': 'form-control',
'autocomplete': 'off'
}))
# --- vmaig_blog/uwsgi-2.0.14/plugins/nagios/uwsgiplugin.py (StanYaha/Blog, BSD-3-Clause) ---
NAME='nagios'
CFLAGS = []
LDFLAGS = []
LIBS = []
GCC_LIST = ['nagios']
# --- app/forms.py (kdougla01/SENDReg, MIT) ---
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, TextAreaField, validators
class LoginForm(FlaskForm):
"""Login form to access writing and settings pages"""
username = StringField('Username', [validators.DataRequired()])
password = PasswordField('Password', [validators.DataRequired()])
class RegistrationForm(FlaskForm):
username = StringField('Username', [validators.Length(min=4, max=25)])
password = PasswordField('New Password', [validators.DataRequired()])
confirm = PasswordField('Repeat Password',
[validators.DataRequired(),validators.EqualTo('password', message='Passwords must match')])
class CreateSENDForm(FlaskForm):
send_title = StringField('SEND Name', [validators.DataRequired()])
send_acro = StringField('Acronym')
send_explanation = TextAreaField('Explanation',[validators.optional(), validators.length(max=200)])
# --- sdk/python/pulumi_google_native/cloudbuild/v1alpha1/_inputs.py (AaronFriel/pulumi-google-native, Apache-2.0) ---
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from ._enums import *
__all__ = [
'NetworkArgs',
'WorkerConfigArgs',
]
@pulumi.input_type
class NetworkArgs:
def __init__(__self__, *,
network: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
subnetwork: Optional[pulumi.Input[str]] = None):
"""
Network describes the GCP network used to create workers in.
:param pulumi.Input[str] network: Network on which the workers are created. "default" network is used if empty.
:param pulumi.Input[str] project: Project id containing the defined network and subnetwork. For a peered VPC, this will be the same as the project_id in which the workers are created. For a shared VPC, this will be the project sharing the network with the project_id project in which workers will be created. For custom workers with no VPC, this will be the same as project_id.
:param pulumi.Input[str] subnetwork: Subnetwork on which the workers are created. "default" subnetwork is used if empty.
"""
if network is not None:
pulumi.set(__self__, "network", network)
if project is not None:
pulumi.set(__self__, "project", project)
if subnetwork is not None:
pulumi.set(__self__, "subnetwork", subnetwork)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input[str]]:
"""
Network on which the workers are created. "default" network is used if empty.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "network", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
Project id containing the defined network and subnetwork. For a peered VPC, this will be the same as the project_id in which the workers are created. For a shared VPC, this will be the project sharing the network with the project_id project in which workers will be created. For custom workers with no VPC, this will be the same as project_id.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@property
@pulumi.getter
def subnetwork(self) -> Optional[pulumi.Input[str]]:
"""
Subnetwork on which the workers are created. "default" subnetwork is used if empty.
"""
return pulumi.get(self, "subnetwork")
@subnetwork.setter
def subnetwork(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "subnetwork", value)
@pulumi.input_type
class WorkerConfigArgs:
def __init__(__self__, *,
disk_size_gb: Optional[pulumi.Input[str]] = None,
machine_type: Optional[pulumi.Input[str]] = None,
network: Optional[pulumi.Input['NetworkArgs']] = None,
tag: Optional[pulumi.Input[str]] = None):
"""
WorkerConfig defines the configuration to be used for a creating workers in the pool.
:param pulumi.Input[str] disk_size_gb: Size of the disk attached to the worker, in GB. See https://cloud.google.com/compute/docs/disks/ If `0` is specified, Cloud Build will use a standard disk size. `disk_size` is overridden if you specify a different disk size in `build_options`. In this case, a VM with a disk size specified in the `build_options` will be created on demand at build time. For more information see https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds#buildoptions
:param pulumi.Input[str] machine_type: Machine Type of the worker, such as n1-standard-1. See https://cloud.google.com/compute/docs/machine-types. If left blank, Cloud Build will use a standard unspecified machine to create the worker pool. `machine_type` is overridden if you specify a different machine type in `build_options`. In this case, the VM specified in the `build_options` will be created on demand at build time. For more information see https://cloud.google.com/cloud-build/docs/speeding-up-builds#using_custom_virtual_machine_sizes
:param pulumi.Input['NetworkArgs'] network: The network definition used to create the worker. If this section is left empty, the workers will be created in WorkerPool.project_id on the default network.
:param pulumi.Input[str] tag: The tag applied to the worker, and the same tag used by the firewall rule. It is used to identify the Cloud Build workers among other VMs. The default value for tag is `worker`.
"""
if disk_size_gb is not None:
pulumi.set(__self__, "disk_size_gb", disk_size_gb)
if machine_type is not None:
pulumi.set(__self__, "machine_type", machine_type)
if network is not None:
pulumi.set(__self__, "network", network)
if tag is not None:
pulumi.set(__self__, "tag", tag)
@property
@pulumi.getter(name="diskSizeGb")
def disk_size_gb(self) -> Optional[pulumi.Input[str]]:
"""
Size of the disk attached to the worker, in GB. See https://cloud.google.com/compute/docs/disks/ If `0` is specified, Cloud Build will use a standard disk size. `disk_size` is overridden if you specify a different disk size in `build_options`. In this case, a VM with a disk size specified in the `build_options` will be created on demand at build time. For more information see https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds#buildoptions
"""
return pulumi.get(self, "disk_size_gb")
@disk_size_gb.setter
def disk_size_gb(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "disk_size_gb", value)
@property
@pulumi.getter(name="machineType")
def machine_type(self) -> Optional[pulumi.Input[str]]:
"""
Machine Type of the worker, such as n1-standard-1. See https://cloud.google.com/compute/docs/machine-types. If left blank, Cloud Build will use a standard unspecified machine to create the worker pool. `machine_type` is overridden if you specify a different machine type in `build_options`. In this case, the VM specified in the `build_options` will be created on demand at build time. For more information see https://cloud.google.com/cloud-build/docs/speeding-up-builds#using_custom_virtual_machine_sizes
"""
return pulumi.get(self, "machine_type")
@machine_type.setter
def machine_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "machine_type", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['NetworkArgs']]:
"""
The network definition used to create the worker. If this section is left empty, the workers will be created in WorkerPool.project_id on the default network.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['NetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter
def tag(self) -> Optional[pulumi.Input[str]]:
"""
The tag applied to the worker, and the same tag used by the firewall rule. It is used to identify the Cloud Build workers among other VMs. The default value for tag is `worker`.
"""
return pulumi.get(self, "tag")
@tag.setter
def tag(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tag", value)
| 54.427586 | 553 | 0.690699 | 1,126 | 7,892 | 4.743339 | 0.142984 | 0.061786 | 0.06291 | 0.074143 | 0.782251 | 0.711103 | 0.677027 | 0.66804 | 0.664482 | 0.584909 | 0 | 0.001452 | 0.214774 | 7,892 | 144 | 554 | 54.805556 | 0.860416 | 0.533198 | 0 | 0.302326 | 1 | 0 | 0.074649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.186047 | false | 0 | 0.069767 | 0 | 0.360465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ab10768a5a974c2ed4404b37b9919ac850187a3f | 304 | py | Python | tests/utils/test_classproperty.py | koskotG/ebonite | 9f9ae016b70fb24865d5edc99142afb8ab4ddc59 | [
"Apache-2.0"
] | 270 | 2019-11-14T15:46:08.000Z | 2021-09-17T16:43:03.000Z | tests/utils/test_classproperty.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | 14 | 2019-11-29T11:49:39.000Z | 2022-02-10T00:23:59.000Z | tests/utils/test_classproperty.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | 18 | 2019-11-22T13:15:14.000Z | 2021-09-01T13:36:12.000Z | from ebonite.utils.classproperty import classproperty
class MyClass:
@classproperty
def prop1(self):
return 'a'
@classproperty
@classmethod
def prop2(self):
return 'b'
def test_classproperty__get():
assert MyClass.prop1 == 'a'
assert MyClass.prop2 == 'b'
| 16.888889 | 53 | 0.654605 | 33 | 304 | 5.939394 | 0.545455 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.25 | 304 | 17 | 54 | 17.882353 | 0.842105 | 0 | 0 | 0.166667 | 0 | 0 | 0.013158 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.25 | false | 0 | 0.083333 | 0.166667 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
ab12111386c0a6ed6339358b0d9ca22409c5393e | 1,206 | py | Python | Apps/tutoriais/migrations/0002_tutorial_nome_alter_tutorial_plano_1_and_more.py | arthur-asilva/rc_plataforma | 7e6f7eb7f9a3b9089c02db98518b60d8e481ce4c | [
"BSD-2-Clause"
] | null | null | null | Apps/tutoriais/migrations/0002_tutorial_nome_alter_tutorial_plano_1_and_more.py | arthur-asilva/rc_plataforma | 7e6f7eb7f9a3b9089c02db98518b60d8e481ce4c | [
"BSD-2-Clause"
] | null | null | null | Apps/tutoriais/migrations/0002_tutorial_nome_alter_tutorial_plano_1_and_more.py | arthur-asilva/rc_plataforma | 7e6f7eb7f9a3b9089c02db98518b60d8e481ce4c | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 4.0 on 2021-12-23 18:27
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('tutoriais', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='tutorial',
name='nome',
field=models.CharField(default=1, max_length=254),
preserve_default=False,
),
migrations.AlterField(
model_name='tutorial',
name='plano_1',
field=models.CharField(max_length=254),
),
migrations.AlterField(
model_name='tutorial',
name='plano_2',
field=models.CharField(max_length=254),
),
migrations.AlterField(
model_name='tutorial',
name='programacao',
field=models.CharField(max_length=254),
),
migrations.AlterField(
model_name='tutorial',
name='turma',
field=models.CharField(max_length=254),
),
migrations.AlterField(
model_name='tutorial',
name='video',
field=models.CharField(max_length=254),
),
]
| 26.8 | 62 | 0.546434 | 111 | 1,206 | 5.792793 | 0.387387 | 0.083981 | 0.158631 | 0.195956 | 0.583204 | 0.583204 | 0.533437 | 0.454121 | 0.454121 | 0.454121 | 0 | 0.048995 | 0.339967 | 1,206 | 44 | 63 | 27.409091 | 0.758794 | 0.035655 | 0 | 0.578947 | 1 | 0 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026316 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ab2913a258a83dd6a033593a0a9a45dc2f0a8d70 | 508 | py | Python | cbdcsim/transaction.py | blackrhinoabm/CBDC-sim | 775f265802bccfe3285268022be754a8dec359cd | [
"MIT"
] | null | null | null | cbdcsim/transaction.py | blackrhinoabm/CBDC-sim | 775f265802bccfe3285268022be754a8dec359cd | [
"MIT"
] | null | null | null | cbdcsim/transaction.py | blackrhinoabm/CBDC-sim | 775f265802bccfe3285268022be754a8dec359cd | [
"MIT"
] | null | null | null | def transact(environment, t, to_agent, from_agent, settlement_type, amount, description):
"Function that ensures a correct transaction between agents"
environment.measurement['period'].append(t)
environment.measurement['to_agent'].append(to_agent)
environment.measurement['from_agent'].append(from_agent)
environment.measurement['settlement_type'].append(settlement_type)
environment.measurement['amount'].append(amount)
environment.measurement['description'].append(description)
| 56.444444 | 89 | 0.785433 | 56 | 508 | 6.964286 | 0.392857 | 0.338462 | 0.138462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098425 | 508 | 8 | 90 | 63.5 | 0.851528 | 0.114173 | 0 | 0 | 0 | 0 | 0.224409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ab2b75e268d963ff43411be8b26252b2d47078b5 | 12,680 | py | Python | forum/migrations/0001_initial.py | michaelyou/One-piece-forum | 3adc6aa9c195a653dce8bc2142cb3d017d85451a | [
"MIT"
] | 1 | 2017-09-07T07:20:51.000Z | 2017-09-07T07:20:51.000Z | forum/migrations/0001_initial.py | michaelyou/One-piece-forum | 3adc6aa9c195a653dce8bc2142cb3d017d85451a | [
"MIT"
] | null | null | null | forum/migrations/0001_initial.py | michaelyou/One-piece-forum | 3adc6aa9c195a653dce8bc2142cb3d017d85451a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.contrib.auth.models
import django.utils.timezone
from django.conf import settings
import django.core.validators
import forum.models
class Migration(migrations.Migration):
dependencies = [
('auth', '0006_require_contenttypes_0002'),
]
operations = [
migrations.CreateModel(
name='ForumUser',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('password', models.CharField(max_length=128, verbose_name='password')),
('last_login', models.DateTimeField(null=True, verbose_name='last login', blank=True)),
('is_superuser', models.BooleanField(default=False, help_text='Designates that this user has all permissions without explicitly assigning them.', verbose_name='superuser status')),
('username', models.CharField(error_messages={'unique': 'A user with that username already exists.'}, max_length=30, validators=[django.core.validators.RegexValidator('^[\\w.@+-]+$', 'Enter a valid username. This value may contain only letters, numbers and @/./+/-/_ characters.', 'invalid')], help_text='Required. 30 characters or fewer. Letters, digits and @/./+/-/_ only.', unique=True, verbose_name='username')),
('first_name', models.CharField(max_length=30, verbose_name='first name', blank=True)),
('last_name', models.CharField(max_length=30, verbose_name='last name', blank=True)),
('email', models.EmailField(max_length=254, verbose_name='email address', blank=True)),
('is_staff', models.BooleanField(default=False, help_text='Designates whether the user can log into this admin site.', verbose_name='staff status')),
('is_active', models.BooleanField(default=True, help_text='Designates whether this user should be treated as active. Unselect this instead of deleting accounts.', verbose_name='active')),
('date_joined', models.DateTimeField(default=django.utils.timezone.now, verbose_name='date joined')),
('nickname', models.CharField(max_length=200, null=True, blank=True)),
('avatar', models.CharField(max_length=200, null=True, blank=True)),
('signature', models.CharField(max_length=500, null=True, blank=True)),
('location', models.CharField(max_length=200, null=True, blank=True)),
('website', models.URLField(null=True, blank=True)),
('company', models.CharField(max_length=200, null=True, blank=True)),
('role', models.IntegerField(null=True, blank=True)),
('balance', models.IntegerField(null=True, blank=True)),
('reputation', models.IntegerField(null=True, blank=True)),
('self_intro', models.CharField(max_length=500, null=True, blank=True)),
('updated', models.DateTimeField(null=True, blank=True)),
('twitter', models.CharField(max_length=200, null=True, blank=True)),
('github', models.CharField(max_length=200, null=True, blank=True)),
('douban', models.CharField(max_length=200, null=True, blank=True)),
('groups', models.ManyToManyField(related_query_name='user', related_name='user_set', to='auth.Group', blank=True, help_text='The groups this user belongs to. A user will get all permissions granted to each of their groups.', verbose_name='groups')),
('user_permissions', models.ManyToManyField(related_query_name='user', related_name='user_set', to='auth.Permission', blank=True, help_text='Specific permissions for this user.', verbose_name='user permissions')),
],
options={
'abstract': False,
'verbose_name': 'user',
'verbose_name_plural': 'users',
},
managers=[
('objects', django.contrib.auth.models.UserManager()),
],
),
migrations.CreateModel(
name='Favorite',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('involved_type', models.IntegerField(null=True, blank=True)),
('created', models.DateTimeField(null=True, blank=True)),
],
),
migrations.CreateModel(
name='Node',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=200, null=True, blank=True)),
('slug', models.SlugField(max_length=200, null=True, blank=True)),
('thumb', models.CharField(max_length=200, null=True, blank=True)),
('introduction', models.CharField(max_length=500, null=True, blank=True)),
('created', models.DateTimeField(null=True, blank=True)),
('updated', models.DateTimeField(null=True, blank=True)),
('topic_count', models.IntegerField(null=True, blank=True)),
('custom_style', forum.models.NormalTextField(null=True, blank=True)),
('limit_reputation', models.IntegerField(null=True, blank=True)),
],
),
migrations.CreateModel(
name='Notification',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('content', forum.models.NormalTextField(null=True, blank=True)),
('status', models.IntegerField(null=True, blank=True)),
('involved_type', models.IntegerField(null=True, blank=True)),
('occurrence_time', models.DateTimeField(null=True, blank=True)),
],
),
migrations.CreateModel(
name='Plane',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=200, null=True, blank=True)),
('created', models.DateTimeField(null=True, blank=True)),
('updated', models.DateTimeField(null=True, blank=True)),
],
),
migrations.CreateModel(
name='Reply',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('content', forum.models.NormalTextField(null=True, blank=True)),
('created', models.DateTimeField(null=True, blank=True)),
('updated', models.DateTimeField(null=True, blank=True)),
('up_vote', models.IntegerField(null=True, blank=True)),
('down_vote', models.IntegerField(null=True, blank=True)),
('last_touched', models.DateTimeField(null=True, blank=True)),
('author', models.ForeignKey(related_name='reply_author', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='Topic',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=200, null=True, blank=True)),
('slug', models.SlugField(max_length=200, null=True, blank=True)),
('content', forum.models.NormalTextField(null=True, blank=True)),
('status', models.IntegerField(null=True, blank=True)),
('hits', models.IntegerField(null=True, blank=True)),
('created', models.DateTimeField(null=True, blank=True)),
('updated', models.DateTimeField(null=True, blank=True)),
('reply_count', models.IntegerField(null=True, blank=True)),
('last_replied_time', models.DateTimeField(null=True, blank=True)),
('up_vote', models.IntegerField(null=True, blank=True)),
('down_vote', models.IntegerField(null=True, blank=True)),
('last_touched', models.DateTimeField(null=True, blank=True)),
('author', models.ForeignKey(related_name='topic_author', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('last_replied_by', models.ForeignKey(related_name='topic_last', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('node', models.ForeignKey(blank=True, to='forum.Node', null=True)),
],
),
migrations.CreateModel(
name='Transaction',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('type', models.IntegerField(null=True, blank=True)),
('reward', models.IntegerField(null=True, blank=True)),
('current_balance', models.IntegerField(null=True, blank=True)),
('occurrence_time', models.DateTimeField(null=True, blank=True)),
('involved_reply', models.ForeignKey(related_name='trans_reply', blank=True, to='forum.Reply', null=True)),
('involved_topic', models.ForeignKey(related_name='trans_topic', blank=True, to='forum.Topic', null=True)),
('involved_user', models.ForeignKey(related_name='trans_involved', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('user', models.ForeignKey(related_name='trans_user', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='Vote',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('status', models.IntegerField(null=True, blank=True)),
('involved_type', models.IntegerField(null=True, blank=True)),
('occurrence_time', models.DateTimeField(null=True, blank=True)),
('involved_reply', models.ForeignKey(related_name='vote_reply', blank=True, to='forum.Reply', null=True)),
('involved_topic', models.ForeignKey(related_name='vote_topic', blank=True, to='forum.Topic', null=True)),
('involved_user', models.ForeignKey(related_name='vote_user', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('trigger_user', models.ForeignKey(related_name='vote_trigger', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.AddField(
model_name='reply',
name='topic',
field=models.ForeignKey(blank=True, to='forum.Topic', null=True),
),
migrations.AddField(
model_name='notification',
name='involved_reply',
field=models.ForeignKey(related_name='notify_reply', blank=True, to='forum.Reply', null=True),
),
migrations.AddField(
model_name='notification',
name='involved_topic',
field=models.ForeignKey(related_name='notify_topic', blank=True, to='forum.Topic', null=True),
),
migrations.AddField(
model_name='notification',
name='involved_user',
field=models.ForeignKey(related_name='notify_user', blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='notification',
name='trigger_user',
field=models.ForeignKey(related_name='notify_trigger', blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='node',
name='plane',
field=models.ForeignKey(blank=True, to='forum.Plane', null=True),
),
migrations.AddField(
model_name='favorite',
name='involved_reply',
field=models.ForeignKey(related_name='fav_reply', blank=True, to='forum.Reply', null=True),
),
migrations.AddField(
model_name='favorite',
name='involved_topic',
field=models.ForeignKey(related_name='fav_topic', blank=True, to='forum.Topic', null=True),
),
migrations.AddField(
model_name='favorite',
name='owner_user',
field=models.ForeignKey(related_name='fav_user', blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
]
| 60.961538 | 432 | 0.608438 | 1,348 | 12,680 | 5.577151 | 0.152819 | 0.100559 | 0.098563 | 0.128891 | 0.746874 | 0.735169 | 0.703379 | 0.627294 | 0.592977 | 0.497739 | 0 | 0.007428 | 0.246136 | 12,680 | 207 | 433 | 61.256039 | 0.779056 | 0.001656 | 0 | 0.532338 | 0 | 0 | 0.172079 | 0.00237 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.004975 | 0.034826 | 0 | 0.049751 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ab3947b0de20a0d63c15762a63e611300361b63a | 52,769 | py | Python | template/examples/python3.8_grammar-template.py | calra123/tree-sitter-legesher-python | 80b3e9fc1982c9179cd6bc0b48d2833034905a41 | [
"MIT"
] | 1 | 2019-10-22T18:55:10.000Z | 2019-10-22T18:55:10.000Z | template/examples/python3.8_grammar-template.py | calra123/tree-sitter-legesher-python | 80b3e9fc1982c9179cd6bc0b48d2833034905a41 | [
"MIT"
] | null | null | null | template/examples/python3.8_grammar-template.py | calra123/tree-sitter-legesher-python | 80b3e9fc1982c9179cd6bc0b48d2833034905a41 | [
"MIT"
] | null | null | null | {def} {class} # Python test set -- part 1, grammar.
# This just tests whether the parser accepts them all.
{from} test.support {import} check_syntax_error
{import} inspect
{import} unittest
{import} sys
# testing import *
{from} sys {import} *
# different import patterns to check that __annotations__ does not interfere
# with import machinery
{import} test.ann_module {as} ann_module
{import} typing
{from} collections {import} ChainMap
{from} test {import} ann_module2
{import} test
# These are shared with test_tokenize {and} other test modules.
#
# Note: since several test cases filter out floats by looking for "e" and ".",
# don't add hexadecimal literals that contain "e" {or} "E".
VALID_UNDERSCORE_LITERALS = [
'0_0_0',
'4_2',
'1_0000_0000',
'0b1001_0100',
'0xffff_ffff',
'0o5_7_7',
'1_00_00.5',
'1_00_00.5e5',
'1_00_00e5_1',
'1e1_0',
'.1_4',
'.1_4e1',
'0b_0',
'0x_f',
'0o_5',
'1_00_00j',
'1_00_00.5j',
'1_00_00e5_1j',
'.1_4j',
'(1_2.5+3_3j)',
'(.5_6j)',
]
INVALID_UNDERSCORE_LITERALS = [
# Trailing underscores:
'0_',
'42_',
'1.4j_',
'0x_',
'0b1_',
'0xf_',
'0o5_',
'0 if 1_Else 1',
# Underscores in the base selector:
'0_b0',
'0_xf',
'0_o5',
# Old-style octal, still disallowed:
'0_7',
'09_99',
# Multiple consecutive underscores:
'4_______2',
'0.1__4',
'0.1__4j',
'0b1001__0100',
'0xffff__ffff',
'0x___',
'0o5__77',
'1e1__0',
'1e1__0j',
# Underscore right before a dot:
'1_.4',
'1_.4j',
# Underscore right after a dot:
'1._4',
'1._4j',
'._5',
'._5j',
# Underscore right after a sign:
'1.0e+_1',
'1.0e+_1j',
# Underscore right before j:
'1.4_j',
'1.4e5_j',
# Underscore right before e:
'1_e1',
'1.4_e1',
'1.4_e1j',
# Underscore right after e:
'1e_1',
'1.4e_1',
'1.4e_1j',
# Complex cases with parens:
'(1+1.5_j_)',
'(1+1.5_j)',
]
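
# The two lists above are consumed by test_underscore_literals further down.
# As a quick untemplated sanity sketch (plain Python, assuming the CPython 3.6+
# PEP 515 underscore rules), entries from the valid list evaluate cleanly while
# entries from the invalid list raise SyntaxError:

```python
# Sanity check of the literal lists: valid underscore literals evaluate,
# invalid placements (trailing, doubled, adjacent to '.', 'e', or 'j') raise.
valid = ['0_0_0', '1_00_00.5e5', '0b1001_0100']
invalid = ['0_', '0.1__4', '1_.4', '1e_1']

for lit in valid:
    eval(lit)  # must parse and evaluate without error
for lit in invalid:
    try:
        eval(lit)
    except SyntaxError:
        pass  # expected: underscore in a disallowed position
    else:
        raise AssertionError('%r unexpectedly parsed' % lit)
```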
{class} TokenTests(unittest.TestCase):
{def} test_backslash(self):
# Backslash means line continuation:
x = 1 \
+ 1
self.assertEqual(x, 2, 'backslash for line continuation')
# Backslash does not means continuation in comments :\
x = 0
self.assertEqual(x, 0, 'backslash ending comment')
{def} test_plain_integers(self):
self.assertEqual(type(000), type(0))
self.assertEqual(0xff, 255)
self.assertEqual(0o377, 255)
self.assertEqual(2147483647, 0o17777777777)
self.assertEqual(0b1001, 9)
# "0x" is not a valid literal
self.assertRaises(SyntaxError, eval, "0x")
{from} sys {import} maxsize
{if} maxsize == 2147483647:
self.assertEqual(-2147483647-1, -0o20000000000)
# XXX -2147483648
self.assertTrue(0o37777777777 > 0)
self.assertTrue(0xffffffff > 0)
self.assertTrue(0b1111111111111111111111111111111 > 0)
{for} s {in} ('2147483648', '0o40000000000', '0x100000000',
'0b10000000000000000000000000000000'):
{try}:
x = eval(s)
{except} OverflowError:
self.fail("OverflowError on huge integer literal %r" % s)
{elif} maxsize == 9223372036854775807:
self.assertEqual(-9223372036854775807-1, -0o1000000000000000000000)
self.assertTrue(0o1777777777777777777777 > 0)
self.assertTrue(0xffffffffffffffff > 0)
self.assertTrue(0b11111111111111111111111111111111111111111111111111111111111111 > 0)
{for} s {in} '9223372036854775808', '0o2000000000000000000000', \
'0x10000000000000000', \
'0b100000000000000000000000000000000000000000000000000000000000000':
{try}:
x = eval(s)
{except} OverflowError:
self.fail("OverflowError on huge integer literal %r" % s)
{else}:
self.fail('Weird maxsize value %r' % maxsize)
{def} test_long_integers(self):
x = 0
x = 0xffffffffffffffff
x = 0Xffffffffffffffff
x = 0o77777777777777777
x = 0O77777777777777777
x = 123456789012345678901234567890
x = 0b100000000000000000000000000000000000000000000000000000000000000000000
x = 0B111111111111111111111111111111111111111111111111111111111111111111111
{def} test_floats(self):
x = 3.14
x = 314.
x = 0.314
# XXX x = 000.314
x = .314
x = 3e14
x = 3E14
x = 3e-14
x = 3e+14
x = 3.e14
x = .3e14
x = 3.1e4
{def} test_float_exponent_tokenization(self):
# See issue 21642.
self.assertEqual(1 {if} 1else 0, 1)
self.assertEqual(1 {if} 0else 0, 0)
self.assertRaises(SyntaxError, eval, "0 {if} 1Else 0")
{def} test_underscore_literals(self):
{for} lit {in} VALID_UNDERSCORE_LITERALS:
self.assertEqual(eval(lit), eval(lit.replace('_', '')))
{for} lit {in} INVALID_UNDERSCORE_LITERALS:
self.assertRaises(SyntaxError, eval, lit)
# Sanity check: no literal begins with an underscore
self.assertRaises(NameError, eval, "_0")
{def} test_string_literals(self):
x = ''; y = ""; self.assertTrue(len(x) == 0 {and} x == y)
x = '\''; y = "'"; self.assertTrue(len(x) == 1 {and} x == y {and} ord(x) == 39)
x = '"'; y = "\""; self.assertTrue(len(x) == 1 {and} x == y {and} ord(x) == 34)
x = "doesn't \"shrink\" does it"
y = 'doesn\'t "shrink" does it'
self.assertTrue(len(x) == 24 {and} x == y)
x = "does \"shrink\" doesn't it"
y = 'does "shrink" doesn\'t it'
self.assertTrue(len(x) == 24 {and} x == y)
x = """
The "quick"
brown fox
jumps over
the 'lazy' dog.
"""
y = '\nThe "quick"\nbrown fox\njumps over\nthe \'lazy\' dog.\n'
self.assertEqual(x, y)
y = '''
The "quick"
brown fox
jumps over
the 'lazy' dog.
'''
self.assertEqual(x, y)
y = "\n\
The \"quick\"\n\
brown fox\n\
jumps over\n\
the 'lazy' dog.\n\
"
self.assertEqual(x, y)
y = '\n\
The \"quick\"\n\
brown fox\n\
jumps over\n\
the \'lazy\' dog.\n\
'
self.assertEqual(x, y)
{def} test_ellipsis(self):
x = ...
self.assertTrue(x {is} Ellipsis)
self.assertRaises(SyntaxError, eval, ".. .")
{def} test_eof_error(self):
samples = ("{def} foo(", "\n{def} foo(", "{def} foo(\n")
{for} s {in} samples:
{with} self.assertRaises(SyntaxError) {as} cm:
compile(s, "<test>", "{exec}")
self.assertIn("unexpected EOF", str(cm.exception))
# var_annot_global: int # a global annotated is necessary for test_var_annot
# custom namespace for testing __annotations__
{class} CNS:
{def} __init__(self):
self._dct = {}
{def} __setitem__(self, item, value):
self._dct[item.lower()] = value
{def} __getitem__(self, item):
{return} self._dct[item]
{class} GrammarTests(unittest.TestCase):
check_syntax_error = check_syntax_error
# single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
# XXX can't test in a script -- this rule is only used when interactive
# file_input: (NEWLINE | stmt)* ENDMARKER
# Being tested as this very moment this very module
# expr_input: testlist NEWLINE
# XXX Hard to test -- used only in calls to input()
{def} test_eval_input(self):
# testlist ENDMARKER
x = eval('1, 0 {or} 1')
{def} test_var_annot_basics(self):
# all these should be allowed
var1: int = 5
# var2: [int, str]
my_lst = [42]
{def} one():
{return} 1
# int.new_attr: int
# [list][0]: type
my_lst[one()-1]: int = 5
self.assertEqual(my_lst, [5])
{def} test_var_annot_syntax_errors(self):
# parser pass
check_syntax_error(self, "{def} f: int")
check_syntax_error(self, "x: int: str")
check_syntax_error(self, "{def} f():\n"
" {nonlocal} x: int\n")
# AST pass
check_syntax_error(self, "[x, 0]: int\n")
check_syntax_error(self, "f(): int\n")
check_syntax_error(self, "(x,): int")
check_syntax_error(self, "{def} f():\n"
" (x, y): int = (1, 2)\n")
# symtable pass
check_syntax_error(self, "{def} f():\n"
" x: int\n"
" {global} x\n")
check_syntax_error(self, "{def} f():\n"
" {global} x\n"
" x: int\n")
{def} test_var_annot_basic_semantics(self):
# execution order
{with} self.assertRaises(ZeroDivisionError):
no_name[does_not_exist]: no_name_again = 1/0
{with} self.assertRaises(NameError):
no_name[does_not_exist]: 1/0 = 0
{global} var_annot_global
# function semantics
{def} f():
st: str = "Hello"
a.b: int = (1, 2)
{return} st
self.assertEqual(f.__annotations__, {})
{def} f_OK():
# x: 1/0
            {pass}
f_OK()
{def} fbad():
# x: int
{print}(x)
{with} self.assertRaises(UnboundLocalError):
fbad()
{def} f2bad():
# (no_such_global): int
{print}(no_such_global)
{try}:
f2bad()
{except} Exception {as} e:
self.assertIs(type(e), NameError)
# class semantics
{class} C:
# __foo: int
s: str = "attr"
z = 2
{def} __init__(self, x):
self.x: int = x
self.assertEqual(C.__annotations__, {'_C__foo': int, 's': str})
{with} self.assertRaises(NameError):
{class} CBad:
no_such_name_defined.attr: int = 0
{with} self.assertRaises(NameError):
{class} Cbad2(C):
# x: int
x.y: list = []
{def} test_var_annot_metaclass_semantics(self):
{class} CMeta(type):
@classmethod
{def} __prepare__(metacls, name, bases, **kwds):
{return} {'__annotations__': CNS()}
{class} CC(metaclass=CMeta):
# XX: 'ANNOT'
            {pass}
self.assertEqual(CC.__annotations__['xx'], 'ANNOT')
{def} test_var_annot_module_semantics(self):
{with} self.assertRaises(AttributeError):
{print}(test.__annotations__)
self.assertEqual(ann_module.__annotations__,
{1: 2, 'x': int, 'y': str, 'f': typing.Tuple[int, int]})
self.assertEqual(ann_module.M.__annotations__,
{'123': 123, 'o': type})
self.assertEqual(ann_module2.__annotations__, {})
{def} test_var_annot_in_module(self):
# check that functions fail the same way when executed
# outside of module where they were defined
{from} test.ann_module3 {import} f_bad_ann, g_bad_ann, D_bad_ann
{with} self.assertRaises(NameError):
f_bad_ann()
{with} self.assertRaises(NameError):
g_bad_ann()
{with} self.assertRaises(NameError):
D_bad_ann(5)
{def} test_var_annot_simple_exec(self):
        gns = {}; lns = {}
{exec}("'docstring'\n"
"__annotations__[1] = 2\n"
"x: int = 5\n", gns, lns)
self.assertEqual(lns["__annotations__"], {1: 2, 'x': int})
{with} self.assertRaises(KeyError):
gns['__annotations__']
{def} test_var_annot_custom_maps(self):
# tests with custom locals() and __annotations__
ns = {'__annotations__': CNS()}
{exec}('X: int; Z: str = "Z"; (w): complex = 1j', ns)
self.assertEqual(ns['__annotations__']['x'], int)
self.assertEqual(ns['__annotations__']['z'], str)
{with} self.assertRaises(KeyError):
ns['__annotations__']['w']
nonloc_ns = {}
{class} CNS2:
{def} __init__(self):
self._dct = {}
{def} __setitem__(self, item, value):
{nonlocal} nonloc_ns
self._dct[item] = value
nonloc_ns[item] = value
{def} __getitem__(self, item):
{return} self._dct[item]
{exec}('x: int = 1', {}, CNS2())
self.assertEqual(nonloc_ns['__annotations__']['x'], int)
    def test_var_annot_refleak(self):
        # complex case: custom locals plus custom __annotations__
        # this was causing refleak
        cns = CNS()
        nonloc_ns = {'__annotations__': cns}
        class CNS2:
            def __init__(self):
                self._dct = {'__annotations__': cns}
            def __setitem__(self, item, value):
                nonlocal nonloc_ns
                self._dct[item] = value
                nonloc_ns[item] = value
            def __getitem__(self, item):
                return self._dct[item]
        exec('X: str', {}, CNS2())
        self.assertEqual(nonloc_ns['__annotations__']['x'], str)
    def test_funcdef(self):
        ### [decorators] 'def' NAME parameters ['->' test] ':' suite
        ### decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
        ### decorators: decorator+
        ### parameters: '(' [typedargslist] ')'
        ### typedargslist: ((tfpdef ['=' test] ',')*
        ###                ('*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef)
        ###                | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
        ### tfpdef: NAME [':' test]
        ### varargslist: ((vfpdef ['=' test] ',')*
        ###              ('*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef)
        ###              | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
        ### vfpdef: NAME
        def f1(): pass
        f1()
        f1(*())
        f1(*(), **{})
        def f2(one_argument): pass
        def f3(two, arguments): pass
        self.assertEqual(f2.__code__.co_varnames, ('one_argument',))
        self.assertEqual(f3.__code__.co_varnames, ('two', 'arguments'))
        def a1(one_arg,): pass
        def a2(two, args,): pass
        def v0(*rest): pass
        def v1(a, *rest): pass
        def v2(a, b, *rest): pass

        f1()
        f2(1)
        f2(1,)
        f3(1, 2)
        f3(1, 2,)
        v0()
        v0(1)
        v0(1,)
        v0(1,2)
        v0(1,2,3,4,5,6,7,8,9,0)
        v1(1)
        v1(1,)
        v1(1,2)
        v1(1,2,3)
        v1(1,2,3,4,5,6,7,8,9,0)
        v2(1,2)
        v2(1,2,3)
        v2(1,2,3,4)
        v2(1,2,3,4,5,6,7,8,9,0)
        def d01(a=1): pass
        d01()
        d01(1)
        d01(*(1,))
        d01(*[] or [2])
        d01(*() or (), *{} and (), **() or {})
        d01(**{'a':2})
        d01(**{'a':2} or {})
        def d11(a, b=1): pass
        d11(1)
        d11(1, 2)
        d11(1, **{'b':2})
        def d21(a, b, c=1): pass
        d21(1, 2)
        d21(1, 2, 3)
        d21(*(1, 2, 3))
        d21(1, *(2, 3))
        d21(1, 2, *(3,))
        d21(1, 2, **{'c':3})
        def d02(a=1, b=2): pass
        d02()
        d02(1)
        d02(1, 2)
        d02(*(1, 2))
        d02(1, *(2,))
        d02(1, **{'b':2})
        d02(**{'a': 1, 'b': 2})
        def d12(a, b=1, c=2): pass
        d12(1)
        d12(1, 2)
        d12(1, 2, 3)
        def d22(a, b, c=1, d=2): pass
        d22(1, 2)
        d22(1, 2, 3)
        d22(1, 2, 3, 4)
        def d01v(a=1, *rest): pass
        d01v()
        d01v(1)
        d01v(1, 2)
        d01v(*(1, 2, 3, 4))
        d01v(*(1,))
        d01v(**{'a':2})
        def d11v(a, b=1, *rest): pass
        d11v(1)
        d11v(1, 2)
        d11v(1, 2, 3)
        def d21v(a, b, c=1, *rest): pass
        d21v(1, 2)
        d21v(1, 2, 3)
        d21v(1, 2, 3, 4)
        d21v(*(1, 2, 3, 4))
        d21v(1, 2, **{'c': 3})
        def d02v(a=1, b=2, *rest): pass
        d02v()
        d02v(1)
        d02v(1, 2)
        d02v(1, 2, 3)
        d02v(1, *(2, 3, 4))
        d02v(**{'a': 1, 'b': 2})
        def d12v(a, b=1, c=2, *rest): pass
        d12v(1)
        d12v(1, 2)
        d12v(1, 2, 3)
        d12v(1, 2, 3, 4)
        d12v(*(1, 2, 3, 4))
        d12v(1, 2, *(3, 4, 5))
        d12v(1, *(2,), **{'c': 3})
        def d22v(a, b, c=1, d=2, *rest): pass
        d22v(1, 2)
        d22v(1, 2, 3)
        d22v(1, 2, 3, 4)
        d22v(1, 2, 3, 4, 5)
        d22v(*(1, 2, 3, 4))
        d22v(1, 2, *(3, 4, 5))
        d22v(1, *(2, 3), **{'d': 4})

        # keyword argument type tests
        try:
            str('x', **{b'foo':1 })
        except TypeError:
            pass
        else:
            self.fail('Bytes should not work as keyword argument names')
        # keyword only argument tests
        def pos0key1(*, key): return key
        pos0key1(key=100)
        def pos2key2(p1, p2, *, k1, k2=100): return p1,p2,k1,k2
        pos2key2(1, 2, k1=100)
        pos2key2(1, 2, k1=100, k2=200)
        pos2key2(1, 2, k2=100, k1=200)
        def pos2key2dict(p1, p2, *, k1=100, k2, **kwarg): return p1,p2,k1,k2,kwarg
        pos2key2dict(1,2,k2=100,tokwarg1=100,tokwarg2=200)
        pos2key2dict(1,2,tokwarg1=100,tokwarg2=200, k2=100)

        self.assertRaises(SyntaxError, eval, "def f(*): pass")
        self.assertRaises(SyntaxError, eval, "def f(*,): pass")
        self.assertRaises(SyntaxError, eval, "def f(*, **kwds): pass")

        # keyword arguments after *arglist
        def f(*args, **kwargs):
            return args, kwargs
        self.assertEqual(f(1, x=2, *[3, 4], y=5), ((1, 3, 4),
                                                   {'x':2, 'y':5}))
        self.assertEqual(f(1, *(2,3), 4), ((1, 2, 3, 4), {}))
        self.assertRaises(SyntaxError, eval, "f(1, x=2, *(3,4), x=5)")
        self.assertEqual(f(**{'eggs':'scrambled', 'spam':'fried'}),
                         ((), {'eggs':'scrambled', 'spam':'fried'}))
        self.assertEqual(f(spam='fried', **{'eggs':'scrambled'}),
                         ((), {'eggs':'scrambled', 'spam':'fried'}))

        # Check ast errors in *args and *kwargs
        check_syntax_error(self, "f(*g(1=2))")
        check_syntax_error(self, "f(**g(1=2))")

        # argument annotation tests
        def f(x) -> list: pass
        self.assertEqual(f.__annotations__, {'return': list})
        def f(x: int): pass
        self.assertEqual(f.__annotations__, {'x': int})
        def f(*x: str): pass
        self.assertEqual(f.__annotations__, {'x': str})
        def f(**x: float): pass
        self.assertEqual(f.__annotations__, {'x': float})
        def f(x, y: 1+2): pass
        self.assertEqual(f.__annotations__, {'y': 3})
        def f(a, b: 1, c: 2, d): pass
        self.assertEqual(f.__annotations__, {'b': 1, 'c': 2})
        def f(a, b: 1, c: 2, d, e: 3 = 4, f=5, *g: 6): pass
        self.assertEqual(f.__annotations__,
                         {'b': 1, 'c': 2, 'e': 3, 'g': 6})
        def f(a, b: 1, c: 2, d, e: 3 = 4, f=5, *g: 6, h: 7, i=8, j: 9 = 10,
              **k: 11) -> 12: pass
        self.assertEqual(f.__annotations__,
                         {'b': 1, 'c': 2, 'e': 3, 'g': 6, 'h': 7, 'j': 9,
                          'k': 11, 'return': 12})

        # Check for issue #20625 -- annotations mangling
        class Spam:
            def f(self, *, __kw: 1):
                pass
        class Ham(Spam): pass
        self.assertEqual(Spam.f.__annotations__, {'_Spam__kw': 1})
        self.assertEqual(Ham.f.__annotations__, {'_Spam__kw': 1})

        # Check for SF Bug #1697248 - mixing decorators and a return annotation
        def null(x): return x
        @null
        def f(x) -> list: pass
        self.assertEqual(f.__annotations__, {'return': list})

        # test closures with a variety of opargs
        closure = 1
        def f(): return closure
        def f(x=1): return closure
        def f(*, k=1): return closure
        def f() -> int: return closure

        # Check trailing commas are permitted in funcdef argument list
        def f(a,): pass
        def f(*args,): pass
        def f(**kwds,): pass
        def f(a, *args,): pass
        def f(a, **kwds,): pass
        def f(*args, b,): pass
        def f(*, b,): pass
        def f(*args, **kwds,): pass
        def f(a, *args, b,): pass
        def f(a, *, b,): pass
        def f(a, *args, **kwds,): pass
        def f(*args, b, **kwds,): pass
        def f(*, b, **kwds,): pass
        def f(a, *args, b, **kwds,): pass
        def f(a, *, b, **kwds,): pass
    def test_lambdef(self):
        ### lambdef: 'lambda' [varargslist] ':' test
        l1 = lambda : 0
        self.assertEqual(l1(), 0)
        l2 = lambda : a[d] # XXX just testing the expression
        l3 = lambda : [2 < x for x in [-1, 3, 0]]
        self.assertEqual(l3(), [0, 1, 0])
        l4 = lambda x = lambda y = lambda z=1 : z : y() : x()
        self.assertEqual(l4(), 1)
        l5 = lambda x, y, z=2: x + y + z
        self.assertEqual(l5(1, 2), 5)
        self.assertEqual(l5(1, 2, 3), 6)
        check_syntax_error(self, "lambda x: x = 2")
        check_syntax_error(self, "lambda (None,): None")
        l6 = lambda x, y, *, k=20: x+y+k
        self.assertEqual(l6(1,2), 1+2+20)
        self.assertEqual(l6(1,2,k=10), 1+2+10)

        # check that trailing commas are permitted
        l10 = lambda a,: 0
        l11 = lambda *args,: 0
        l12 = lambda **kwds,: 0
        l13 = lambda a, *args,: 0
        l14 = lambda a, **kwds,: 0
        l15 = lambda *args, b,: 0
        l16 = lambda *, b,: 0
        l17 = lambda *args, **kwds,: 0
        l18 = lambda a, *args, b,: 0
        l19 = lambda a, *, b,: 0
        l20 = lambda a, *args, **kwds,: 0
        l21 = lambda *args, b, **kwds,: 0
        l22 = lambda *, b, **kwds,: 0
        l23 = lambda a, *args, b, **kwds,: 0
        l24 = lambda a, *, b, **kwds,: 0

    ### stmt: simple_stmt | compound_stmt
    # Tested below
    def test_simple_stmt(self):
        ### simple_stmt: small_stmt (';' small_stmt)* [';']
        x = 1; pass; del x
        def foo():
            # verify statements that end with semi-colons
            x = 1; pass; del x;
        foo()

    ### small_stmt: expr_stmt | pass_stmt | del_stmt | flow_stmt | import_stmt | global_stmt | access_stmt
    # Tested below
    def test_expr_stmt(self):
        # (exprlist '=')* exprlist
        1
        1, 2, 3
        x = 1
        x = 1, 2, 3
        x = y = z = 1, 2, 3
        x, y, z = 1, 2, 3
        abc = a, b, c = x, y, z = xyz = 1, 2, (3, 4)

        check_syntax_error(self, "x + 1 = 1")
        check_syntax_error(self, "a + 1 = b + 2")

    # Check the heuristic for print & exec covers significant cases
    # As well as placing some limits on false positives
    def test_former_statements_refer_to_builtins(self):
        keywords = "print", "exec"
        # Cases where we want the custom error
        cases = [
            "{} foo",
            "{} {{1:foo}}",
            "if 1: {} foo",
            "if 1: {} {{1:foo}}",
            "if 1:\n    {} foo",
            "if 1:\n    {} {{1:foo}}",
        ]
        for keyword in keywords:
            custom_msg = "call to '{}'".format(keyword)
            for case in cases:
                source = case.format(keyword)
                with self.subTest(source=source):
                    with self.assertRaisesRegex(SyntaxError, custom_msg):
                        exec(source)
                source = source.replace("foo", "(foo.)")
                with self.subTest(source=source):
                    with self.assertRaisesRegex(SyntaxError, "invalid syntax"):
                        exec(source)
    def test_del_stmt(self):
        # 'del' exprlist
        abc = [1,2,3]
        x, y, z = abc
        xyz = x, y, z
        del abc
        del x, y, (z, xyz)

    def test_pass_stmt(self):
        # 'pass'
        pass

    # flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt
    # Tested below
    def test_break_stmt(self):
        # 'break'
        while 1: break

    def test_continue_stmt(self):
        # 'continue'
        i = 1
        while i: i = 0; continue

        msg = ""
        while not msg:
            msg = "ok"
            try:
                continue
                msg = "continue failed to continue inside try"
            except:
                msg = "continue inside try called except block"
        if msg != "ok":
            self.fail(msg)

        msg = ""
        while not msg:
            msg = "finally block not called"
            try:
                continue
            finally:
                msg = "ok"
        if msg != "ok":
            self.fail(msg)
    def test_break_continue_loop(self):
        # This test warrants an explanation. It is a test specifically for SF bugs
        # #463359 and #462937. The bug is that a 'break' statement executed or
        # exception raised inside a try/except inside a loop, *after* a continue
        # statement has been executed in that loop, will cause the wrong number of
        # arguments to be popped off the stack and the instruction pointer reset to
        # a very small number (usually 0.) Because of this, the following test
        # *must* be written as a function, and the tracking vars *must* be function
        # arguments with default values. Otherwise, the test will loop and loop.

        def test_inner(extra_burning_oil = 1, count=0):
            big_hippo = 2
            while big_hippo:
                count += 1
                try:
                    if extra_burning_oil and big_hippo == 1:
                        extra_burning_oil -= 1
                        break
                    big_hippo -= 1
                    continue
                except:
                    raise
            if count > 2 or big_hippo != 1:
                self.fail("continue then break in try/except in loop broken!")
        test_inner()
    def test_return(self):
        # 'return' [testlist]
        def g1(): return
        def g2(): return 1
        g1()
        x = g2()
        check_syntax_error(self, "class foo:return 1")
    def test_break_in_finally(self):
        count = 0
        while count < 2:
            count += 1
            try:
                pass
            finally:
                break
        self.assertEqual(count, 1)

        count = 0
        while count < 2:
            count += 1
            try:
                continue
            finally:
                break
        self.assertEqual(count, 1)

        count = 0
        while count < 2:
            count += 1
            try:
                1/0
            finally:
                break
        self.assertEqual(count, 1)

        for count in [0, 1]:
            self.assertEqual(count, 0)
            try:
                pass
            finally:
                break
        self.assertEqual(count, 0)

        for count in [0, 1]:
            self.assertEqual(count, 0)
            try:
                continue
            finally:
                break
        self.assertEqual(count, 0)

        for count in [0, 1]:
            self.assertEqual(count, 0)
            try:
                1/0
            finally:
                break
        self.assertEqual(count, 0)
    def test_continue_in_finally(self):
        count = 0
        while count < 2:
            count += 1
            try:
                pass
            finally:
                continue
            break
        self.assertEqual(count, 2)

        count = 0
        while count < 2:
            count += 1
            try:
                break
            finally:
                continue
        self.assertEqual(count, 2)

        count = 0
        while count < 2:
            count += 1
            try:
                1/0
            finally:
                continue
            break
        self.assertEqual(count, 2)

        for count in [0, 1]:
            try:
                pass
            finally:
                continue
            break
        self.assertEqual(count, 1)

        for count in [0, 1]:
            try:
                break
            finally:
                continue
        self.assertEqual(count, 1)

        for count in [0, 1]:
            try:
                1/0
            finally:
                continue
            break
        self.assertEqual(count, 1)
    def test_return_in_finally(self):
        def g1():
            try:
                pass
            finally:
                return 1
        self.assertEqual(g1(), 1)

        def g2():
            try:
                return 2
            finally:
                return 3
        self.assertEqual(g2(), 3)

        def g3():
            try:
                1/0
            finally:
                return 4
        self.assertEqual(g3(), 4)
    def test_yield(self):
        # Allowed as standalone statement
        def g(): yield 1
        def g(): yield from ()
        # Allowed as RHS of assignment
        def g(): x = yield 1
        def g(): x = yield from ()
        # Ordinary yield accepts implicit tuples
        def g(): yield 1, 1
        def g(): x = yield 1, 1
        # 'yield from' does not
        check_syntax_error(self, "def g(): yield from (), 1")
        check_syntax_error(self, "def g(): x = yield from (), 1")
        # Requires parentheses as subexpression
        def g(): 1, (yield 1)
        def g(): 1, (yield from ())
        check_syntax_error(self, "def g(): 1, yield 1")
        check_syntax_error(self, "def g(): 1, yield from ()")
        # Requires parentheses as call argument
        def g(): f((yield 1))
        def g(): f((yield 1), 1)
        def g(): f((yield from ()))
        def g(): f((yield from ()), 1)
        check_syntax_error(self, "def g(): f(yield 1)")
        check_syntax_error(self, "def g(): f(yield 1, 1)")
        check_syntax_error(self, "def g(): f(yield from ())")
        check_syntax_error(self, "def g(): f(yield from (), 1)")
        # Not allowed at top level
        check_syntax_error(self, "yield")
        check_syntax_error(self, "yield from")
        # Not allowed at class scope
        check_syntax_error(self, "class foo:yield 1")
        check_syntax_error(self, "class foo:yield from ()")
        # Check annotation refleak on SyntaxError
        check_syntax_error(self, "def g(a:(yield)): pass")
    def test_yield_in_comprehensions(self):
        # Check yield in comprehensions
        def g(): [x for x in [(yield 1)]]
        def g(): [x for x in [(yield from ())]]

        check = self.check_syntax_error
        check("def g(): [(yield x) for x in ()]",
              "'yield' inside list comprehension")
        check("def g(): [x for x in () if not (yield x)]",
              "'yield' inside list comprehension")
        check("def g(): [y for x in () for y in [(yield x)]]",
              "'yield' inside list comprehension")
        check("def g(): {(yield x) for x in ()}",
              "'yield' inside set comprehension")
        check("def g(): {(yield x): x for x in ()}",
              "'yield' inside dict comprehension")
        check("def g(): {x: (yield x) for x in ()}",
              "'yield' inside dict comprehension")
        check("def g(): ((yield x) for x in ())",
              "'yield' inside generator expression")
        check("def g(): [(yield from x) for x in ()]",
              "'yield' inside list comprehension")
        check("class C: [(yield x) for x in ()]",
              "'yield' inside list comprehension")
        check("[(yield x) for x in ()]",
              "'yield' inside list comprehension")
    def test_raise(self):
        # 'raise' test [',' test]
        try: raise RuntimeError('just testing')
        except RuntimeError: pass
        try: raise KeyboardInterrupt
        except KeyboardInterrupt: pass

    def test_import(self):
        # 'import' dotted_as_names
        import sys
        import time, sys
        # 'from' dotted_name 'import' ('*' | '(' import_as_names ')' | import_as_names)
        from time import time
        from time import (time)
        # not testable inside a function, but already done at top of the module
        # from sys import *
        from sys import path, argv
        from sys import (path, argv)
        from sys import (path, argv,)
    def test_global(self):
        # 'global' NAME (',' NAME)*
        global a
        global a, b
        global one, two, three, four, five, six, seven, eight, nine, ten

    def test_nonlocal(self):
        # 'nonlocal' NAME (',' NAME)*
        x = 0
        y = 0
        def f():
            nonlocal x
            nonlocal x, y
    def test_assert(self):
        # assert_stmt: 'assert' test [',' test]
        assert 1
        assert 1, 1
        assert lambda x:x
        assert 1, lambda x:x+1

        try:
            assert True
        except AssertionError as e:
            self.fail("'assert True' should not have raised an AssertionError")

        try:
            assert True, 'this should always pass'
        except AssertionError as e:
            self.fail("'assert True, msg' should not have "
                      "raised an AssertionError")

    # these tests fail if python is run with -O, so check __debug__
    @unittest.skipUnless(__debug__, "Won't work if __debug__ is False")
    def testAssert2(self):
        try:
            assert 0, "msg"
        except AssertionError as e:
            self.assertEqual(e.args[0], "msg")
        else:
            self.fail("AssertionError not raised by assert 0")

        try:
            assert False
        except AssertionError as e:
            self.assertEqual(len(e.args), 0)
        else:
            self.fail("AssertionError not raised by 'assert False'")
    ### compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | funcdef | classdef
    # Tested below

    def test_if(self):
        # 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
        if 1: pass
        if 1: pass
        else: pass
        if 0: pass
        elif 0: pass
        if 0: pass
        elif 0: pass
        elif 0: pass
        elif 0: pass
        else: pass
    def test_while(self):
        # 'while' test ':' suite ['else' ':' suite]
        while 0: pass
        while 0: pass
        else: pass

        # Issue1920: "while 0" is optimized away,
        # ensure that the "else" clause is still present.
        x = 0
        while 0:
            x = 1
        else:
            x = 2
        self.assertEqual(x, 2)
    def test_for(self):
        # 'for' exprlist 'in' exprlist ':' suite ['else' ':' suite]
        for i in 1, 2, 3: pass
        for i, j, k in (): pass
        else: pass
        class Squares:
            def __init__(self, max):
                self.max = max
                self.sofar = []
            def __len__(self): return len(self.sofar)
            def __getitem__(self, i):
                if not 0 <= i < self.max: raise IndexError
                n = len(self.sofar)
                while n <= i:
                    self.sofar.append(n*n)
                    n = n+1
                return self.sofar[i]
        n = 0
        for x in Squares(10): n = n+x
        if n != 285:
            self.fail('for over growing sequence')

        result = []
        for x, in [(1,), (2,), (3,)]:
            result.append(x)
        self.assertEqual(result, [1, 2, 3])
    def test_try(self):
        ### try_stmt: 'try' ':' suite (except_clause ':' suite)+ ['else' ':' suite]
        ###         | 'try' ':' suite 'finally' ':' suite
        ### except_clause: 'except' [expr ['as' expr]]
        try:
            1/0
        except ZeroDivisionError:
            pass
        else:
            pass
        try: 1/0
        except EOFError: pass
        except TypeError as msg: pass
        except: pass
        else: pass
        try: 1/0
        except (EOFError, TypeError, ZeroDivisionError): pass
        try: 1/0
        except (EOFError, TypeError, ZeroDivisionError) as msg: pass
        try: pass
        finally: pass
    def test_suite(self):
        # simple_stmt | NEWLINE INDENT NEWLINE* (stmt NEWLINE*)+ DEDENT
        if 1: pass
        if 1:
            pass
        if 1:
            #
            #
            #
            pass
            pass
            #
            pass
            #
    def test_test(self):
        ### and_test ('or' and_test)*
        ### and_test: not_test ('and' not_test)*
        ### not_test: 'not' not_test | comparison
        if not 1: pass
        if 1 and 1: pass
        if 1 or 1: pass
        if not not not 1: pass
        if not 1 and 1 and 1: pass
        if 1 and 1 or 1 and 1 and 1 or not 1 and 1: pass

    def test_comparison(self):
        ### comparison: expr (comp_op expr)*
        ### comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'in'|'not' 'in'|'is'|'is' 'not'
        if 1: pass
        x = (1 == 1)
        if 1 == 1: pass
        if 1 != 1: pass
        if 1 < 1: pass
        if 1 > 1: pass
        if 1 <= 1: pass
        if 1 >= 1: pass
        if 1 is 1: pass
        if 1 is not 1: pass
        if 1 in (): pass
        if 1 not in (): pass
        if 1 < 1 > 1 == 1 >= 1 <= 1 != 1 in 1 not in 1 is 1 is not 1: pass
    def test_binary_mask_ops(self):
        x = 1 & 1
        x = 1 ^ 1
        x = 1 | 1

    def test_shift_ops(self):
        x = 1 << 1
        x = 1 >> 1
        x = 1 << 1 >> 1

    def test_additive_ops(self):
        x = 1
        x = 1 + 1
        x = 1 - 1 - 1
        x = 1 - 1 + 1 - 1 + 1

    def test_multiplicative_ops(self):
        x = 1 * 1
        x = 1 / 1
        x = 1 % 1
        x = 1 / 1 * 1 % 1

    def test_unary_ops(self):
        x = +1
        x = -1
        x = ~1
        x = ~1 ^ 1 & 1 | 1 & 1 ^ -1
        x = -1*1/1 + 1*1 - ---1*1
    def test_selectors(self):
        ### trailer: '(' [testlist] ')' | '[' subscript ']' | '.' NAME
        ### subscript: expr | [expr] ':' [expr]

        import sys, time
        c = sys.path[0]
        x = time.time()
        x = sys.modules['time'].time()
        a = '01234'
        c = a[0]
        c = a[-1]
        s = a[0:5]
        s = a[:5]
        s = a[0:]
        s = a[:]
        s = a[-5:]
        s = a[:-1]
        s = a[-4:-3]
        # A rough test of SF bug 1333982.  http://python.org/sf/1333982
        # The testing here is fairly incomplete.
        # Test cases should include: commas with 1 and 2 colons
        d = {}
        d[1] = 1
        d[1,] = 2
        d[1,2] = 3
        d[1,2,3] = 4
        L = list(d)
        L.sort(key=lambda x: (type(x).__name__, x))
        self.assertEqual(str(L), '[1, (1,), (1, 2), (1, 2, 3)]')
    def test_atoms(self):
        ### atom: '(' [testlist] ')' | '[' [testlist] ']' | '{' [dictsetmaker] '}' | NAME | NUMBER | STRING
        ### dictsetmaker: (test ':' test (',' test ':' test)* [',']) | (test (',' test)* [','])

        x = (1)
        x = (1 or 2 or 3)
        x = (1 or 2 or 3, 2, 3)

        x = []
        x = [1]
        x = [1 or 2 or 3]
        x = [1 or 2 or 3, 2, 3]
        x = []

        x = {}
        x = {'one': 1}
        x = {'one': 1,}
        x = {'one' or 'two': 1 or 2}
        x = {'one': 1, 'two': 2}
        x = {'one': 1, 'two': 2,}
        x = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6}

        x = {'one'}
        x = {'one', 1,}
        x = {'one', 'two', 'three'}
        x = {2, 3, 4,}

        x = x
        x = 'x'
        x = 123

    ### exprlist: expr (',' expr)* [',']
    ### testlist: test (',' test)* [',']
    # These have been exercised enough above
    def test_classdef(self):
        # 'class' NAME ['(' [testlist] ')'] ':' suite
        class B: pass
        class B2(): pass
        class C1(B): pass
        class C2(B): pass
        class D(C1, C2, B): pass
        class C:
            def meth1(self): pass
            def meth2(self, arg): pass
            def meth3(self, a1, a2): pass

        # decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
        # decorators: decorator+
        # decorated: decorators (classdef | funcdef)
        def class_decorator(x): return x
        @class_decorator
        class G: pass
    def test_dictcomps(self):
        # dictorsetmaker: ( (test ':' test (comp_for |
        #                   (',' test ':' test)* [','])) |
        #                   (test (comp_for | (',' test)* [','])) )
        nums = [1, 2, 3]
        self.assertEqual({i:i+1 for i in nums}, {1: 2, 2: 3, 3: 4})
    def test_listcomps(self):
        # list comprehension tests
        nums = [1, 2, 3, 4, 5]
        strs = ["Apple", "Banana", "Coconut"]
        spcs = [" Apple", " Banana ", "Coco nut "]

        self.assertEqual([s.strip() for s in spcs], ['Apple', 'Banana', 'Coco nut'])
        self.assertEqual([3 * x for x in nums], [3, 6, 9, 12, 15])
        self.assertEqual([x for x in nums if x > 2], [3, 4, 5])
        self.assertEqual([(i, s) for i in nums for s in strs],
                         [(1, 'Apple'), (1, 'Banana'), (1, 'Coconut'),
                          (2, 'Apple'), (2, 'Banana'), (2, 'Coconut'),
                          (3, 'Apple'), (3, 'Banana'), (3, 'Coconut'),
                          (4, 'Apple'), (4, 'Banana'), (4, 'Coconut'),
                          (5, 'Apple'), (5, 'Banana'), (5, 'Coconut')])
        self.assertEqual([(i, s) for i in nums for s in [f for f in strs if "n" in f]],
                         [(1, 'Banana'), (1, 'Coconut'), (2, 'Banana'), (2, 'Coconut'),
                          (3, 'Banana'), (3, 'Coconut'), (4, 'Banana'), (4, 'Coconut'),
                          (5, 'Banana'), (5, 'Coconut')])
        self.assertEqual([(lambda a:[a**i for i in range(a+1)])(j) for j in range(5)],
                         [[1], [1, 1], [1, 2, 4], [1, 3, 9, 27], [1, 4, 16, 64, 256]])

        def test_in_func(l):
            return [0 < x < 3 for x in l if x > 2]

        self.assertEqual(test_in_func(nums), [False, False, False])

        def test_nested_front():
            self.assertEqual([[y for y in [x, x + 1]] for x in [1,3,5]],
                             [[1, 2], [3, 4], [5, 6]])

        test_nested_front()

        check_syntax_error(self, "[i, s for i in nums for s in strs]")
        check_syntax_error(self, "[x if y]")

        suppliers = [
            (1, "Boeing"),
            (2, "Ford"),
            (3, "Macdonalds")
        ]

        parts = [
            (10, "Airliner"),
            (20, "Engine"),
            (30, "Cheeseburger")
        ]

        suppart = [
            (1, 10), (1, 20), (2, 20), (3, 30)
        ]

        x = [
            (sname, pname)
            for (sno, sname) in suppliers
            for (pno, pname) in parts
            for (sp_sno, sp_pno) in suppart
            if sno == sp_sno and pno == sp_pno
        ]

        self.assertEqual(x, [('Boeing', 'Airliner'), ('Boeing', 'Engine'), ('Ford', 'Engine'),
                             ('Macdonalds', 'Cheeseburger')])
    def test_genexps(self):
        # generator expression tests
        g = ([x for x in range(10)] for x in range(1))
        self.assertEqual(next(g), [x for x in range(10)])
        try:
            next(g)
            self.fail('should produce StopIteration exception')
        except StopIteration:
            pass

        a = 1
        try:
            g = (a for d in a)
            next(g)
            self.fail('should produce TypeError')
        except TypeError:
            pass

        self.assertEqual(list((x, y) for x in 'abcd' for y in 'abcd'), [(x, y) for x in 'abcd' for y in 'abcd'])
        self.assertEqual(list((x, y) for x in 'ab' for y in 'xy'), [(x, y) for x in 'ab' for y in 'xy'])

        a = [x for x in range(10)]
        b = (x for x in (y for y in a))
        self.assertEqual(sum(b), sum([x for x in range(10)]))

        self.assertEqual(sum(x**2 for x in range(10)), sum([x**2 for x in range(10)]))
        self.assertEqual(sum(x*x for x in range(10) if x%2), sum([x*x for x in range(10) if x%2]))
        self.assertEqual(sum(x for x in (y for y in range(10))), sum([x for x in range(10)]))
        self.assertEqual(sum(x for x in (y for y in (z for z in range(10)))), sum([x for x in range(10)]))
        self.assertEqual(sum(x for x in [y for y in (z for z in range(10))]), sum([x for x in range(10)]))
        self.assertEqual(sum(x for x in (y for y in (z for z in range(10) if True)) if True), sum([x for x in range(10)]))
        self.assertEqual(sum(x for x in (y for y in (z for z in range(10) if True) if False) if True), 0)
        check_syntax_error(self, "foo(x for x in range(10), 100)")
        check_syntax_error(self, "foo(100, x for x in range(10))")
    def test_comprehension_specials(self):
        # test for outmost iterable precomputation
        x = 10; g = (i for i in range(x)); x = 5
        self.assertEqual(len(list(g)), 10)

        # This should hold, since we're only precomputing outmost iterable.
        x = 10; t = False; g = ((i,j) for i in range(x) if t for j in range(x))
        x = 5; t = True;
        self.assertEqual([(i,j) for i in range(10) for j in range(5)], list(g))

        # Grammar allows multiple adjacent 'if's in listcomps and genexps,
        # even though it's silly. Make sure it works (ifelse broke this.)
        self.assertEqual([ x for x in range(10) if x % 2 if x % 3 ], [1, 5, 7])
        self.assertEqual(list(x for x in range(10) if x % 2 if x % 3), [1, 5, 7])

        # verify unpacking single element tuples in listcomp/genexp.
        self.assertEqual([x for x, in [(4,), (5,), (6,)]], [4, 5, 6])
        self.assertEqual(list(x for x, in [(7,), (8,), (9,)]), [7, 8, 9])
{def} test_with_statement(self):
{class} manager(object):
{def} __enter__(self):
{return} (1, 2)
{def} __exit__(self, *args):
{pass}
{with} manager():
{pass}
{with} manager() {as} x:
{pass}
{with} manager() {as} (x, y):
{pass}
{with} manager(), manager():
{pass}
{with} manager() {as} x, manager() {as} y:
{pass}
{with} manager() {as} x, manager():
{pass}
    def test_if_else_expr(self):
        # Test ifelse expressions in various cases
        def _checkeval(msg, ret):
            "helper to check that evaluation of expressions is done correctly"
            print(msg)
            return ret

        # the next line is not allowed anymore
        #self.assertEqual([ x() for x in lambda: True, lambda: False if x() ], [True])
        self.assertEqual([ x() for x in (lambda: True, lambda: False) if x() ], [True])
        self.assertEqual([ x(False) for x in (lambda x: False if x else True, lambda x: True if x else False) if x(False) ], [True])
        self.assertEqual((5 if 1 else _checkeval("check 1", 0)), 5)
        self.assertEqual((_checkeval("check 2", 0) if 0 else 5), 5)
        self.assertEqual((5 and 6 if 0 else 1), 1)
        self.assertEqual(((5 and 6) if 0 else 1), 1)
        self.assertEqual((5 and (6 if 1 else 1)), 6)
        self.assertEqual((0 or _checkeval("check 3", 2) if 0 else 3), 3)
        self.assertEqual((1 or _checkeval("check 4", 2) if 1 else _checkeval("check 5", 3)), 1)
        self.assertEqual((0 or 5 if 1 else _checkeval("check 6", 3)), 5)
        self.assertEqual((not 5 if 1 else 1), False)
        self.assertEqual((not 5 if 0 else 1), 1)
        self.assertEqual((6 + 1 if 1 else 2), 7)
        self.assertEqual((6 - 1 if 1 else 2), 5)
        self.assertEqual((6 * 2 if 1 else 4), 12)
        self.assertEqual((6 / 2 if 1 else 3), 3)
        self.assertEqual((6 < 4 if 0 else 2), 2)
    def test_paren_evaluation(self):
        self.assertEqual(16 // (4 // 2), 8)
        self.assertEqual((16 // 4) // 2, 2)
        self.assertEqual(16 // 4 // 2, 2)
        self.assertTrue(False is (2 is 3))
        self.assertFalse((False is 2) is 3)
        self.assertFalse(False is 2 is 3)
    def test_matrix_mul(self):
        # This is not intended to be a comprehensive test, rather just to be few
        # samples of the @ operator in test_grammar.py.
        class M:
            def __matmul__(self, o):
                return 4
            def __imatmul__(self, o):
                self.other = o
                return self
        m = M()
        self.assertEqual(m @ m, 4)
        m @= 42
        self.assertEqual(m.other, 42)
    def test_async_await(self):
        async def test():
            def sum():
                pass
            if 1:
                await someobj()

        self.assertEqual(test.__name__, 'test')
        self.assertTrue(bool(test.__code__.co_flags & inspect.CO_COROUTINE))

        def decorator(func):
            setattr(func, '_marked', True)
            return func

        @decorator
        async def test2():
            return 22

        self.assertTrue(test2._marked)
        self.assertEqual(test2.__name__, 'test2')
        self.assertTrue(bool(test2.__code__.co_flags & inspect.CO_COROUTINE))
{def} test_async_for(self):
{class} Done(Exception): {pass}
{class} AIter:
{def} __aiter__(self):
{return} self
{async} {def} __anext__(self):
{raise} StopAsyncIteration
{async} {def} foo():
{async} {for} i {in} AIter():
{pass}
{async} {for} i, j {in} AIter():
{pass}
{async} {for} i {in} AIter():
{pass}
{else}:
{pass}
{raise} Done
{with} self.assertRaises(Done):
foo().send(None)
{def} test_async_with(self):
{class} Done(Exception): {pass}
{class} manager:
{async} {def} __aenter__(self):
{return} (1, 2)
{async} {def} __aexit__(self, *exc):
{return} {False}
{async} def foo():
{async} {with} manager():
{pass}
{async} {with} manager() {as} x:
{pass}
{async} {with} manager() {as} (x, y):
{pass}
{async} {with} manager(), manager():
{pass}
{async} {with} manager() {as} x, manager() {as} y:
{pass}
{async} {with} manager() {as} x, manager():
{pass}
{raise} Done
{with} self.assertRaises(Done):
foo().send(None)
{if} __name__ == '__main__':
unittest.main()
# main/context.py (edazpotato/usefull-discord-bot, MIT License)

from discord.ext import commands
class UserPrefrences():
"""A class that takes a discord.User object and returns an object with their preferences"""
def __init__(self, bot, user):
self.user = user
self.bot = bot
self._language = "en" # TEMPORARY until I get the DB going
self._color = 0xff0000 # TEMPORARY until I get the DB going
@property
def language(self):
return self._language
@property
def color(self):
return self._color
class GuildPrefrences():
"""A class that takes a discord.Guild object and returns an object with their preferences"""
def __init__(self, bot, guild):
self.guild = guild
self.bot = bot
self._language = "en" # TEMPORARY until I get the DB going
self._prefix = self.bot.config.prefixes[0] # TEMPORARY until I get the DB going
@property
def language(self):
return self._language
@property
def prefix(self):
return self._prefix
class CtxConfig():
def __init__(self, context):
self.ctx = context
self._author = UserPrefrences(self.ctx.bot, self.ctx.author)
self._guild = GuildPrefrences(self.ctx.bot, self.ctx.guild)
@property
def author(self):
return self._author
@property
def guild(self):
return self._guild
class Context(commands.Context):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.emoji = self.bot.config.emoji
# Preferences
self.config = CtxConfig(self)
self._language = "en"
if self.config.guild.language is not None:
self._language = self.config.guild.language
if self.config.author.language is not None:
self._language = self.config.author.language
async def error(self, msg: str, **kwargs):
await self.send(f"{self.emoji.no} {msg}", **kwargs)
async def warning(self, msg: str, **kwargs):
await self.send(f"{self.emoji.maybe} {msg}", **kwargs)
async def success(self, msg: str, **kwargs):
await self.send(f"{self.emoji.yes} {msg}", **kwargs)
@property
def language(self):
return self._language
@property
def strings(self):
return self.bot.strings[self._language] | 27.256757 | 93 | 0.718889 | 292 | 2,017 | 4.839041 | 0.222603 | 0.076433 | 0.079264 | 0.050955 | 0.506016 | 0.481953 | 0.449398 | 0.449398 | 0.394197 | 0.357396 | 0 | 0.003538 | 0.159147 | 2,017 | 74 | 94 | 27.256757 | 0.829599 | 0.16113 | 0 | 0.327586 | 0 | 0 | 0.043504 | 0 | 0 | 0 | 0.004768 | 0 | 0 | 1 | 0.206897 | false | 0 | 0.017241 | 0.137931 | 0.431034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
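The `Context` initializer in the row above resolves the effective language by layering guild preferences under author preferences. A minimal stand-alone sketch of that fallback order (the function name and arguments here are illustrative, not part of the bot):

```python
def resolve_language(default, guild_language, author_language):
    """Pick the most specific language: author overrides guild overrides default."""
    language = default
    if guild_language is not None:
        language = guild_language
    if author_language is not None:
        language = author_language
    return language

print(resolve_language("en", "shn", None))   # guild preference wins over the default
```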
ab55544ff5bda54983517c95e917cd0fe9fe6b9e | 1,412 | py | Python | data_structures/deque.py | miguelgfierro/pybase | de8e4f11ed5c655e748178e65195c7e70a9c98af | [
"BSD-3-Clause"
] | 14 | 2020-02-07T21:36:39.000Z | 2022-03-12T22:37:04.000Z | data_structures/deque.py | miguelgfierro/pybase | de8e4f11ed5c655e748178e65195c7e70a9c98af | [
"BSD-3-Clause"
] | 19 | 2019-05-18T23:58:30.000Z | 2022-01-09T16:45:35.000Z | data_structures/deque.py | miguelgfierro/pybase | de8e4f11ed5c655e748178e65195c7e70a9c98af | [
"BSD-3-Clause"
] | 5 | 2020-10-06T06:10:27.000Z | 2021-07-08T12:58:46.000Z | class Deque(object):
"""A deque (double-ended queue) is a linear structure of
ordered items where the addition and removal of items can
take place on any end.
Thus deques can work as FIFO (First In, First Out) or
LIFO (Last In, First Out)
Examples:
>>> d = Deque()
>>> d.is_empty()
True
>>> d.add_front(4)
>>> d.add_front('dog')
>>> print(d)
[4, 'dog']
>>> d.size()
2
>>> d.remove_front()
'dog'
>>> d.add_rear(True)
>>> print(d)
[True, 4]
>>> d.remove_rear()
True
"""
def __init__(self):
self.items = []
def __str__(self):
"""Return the string method of the deque"""
return str(list(self.items))
def is_empty(self):
"""See whether the deque is empty"""
return self.items == []
def add_front(self, item):
"""Add an item in the front"""
self.items.append(item)
def add_rear(self, item):
"""Add an item in the rear"""
self.items.insert(0, item)
def remove_front(self):
"""Remove an item in the front"""
return self.items.pop()
def remove_rear(self):
"""Remove an item in the rear"""
return self.items.pop(0)
def size(self):
"""Return the number of items on the deque"""
return len(self.items)
| 24.77193 | 61 | 0.526204 | 190 | 1,412 | 3.810526 | 0.347368 | 0.099448 | 0.044199 | 0.060773 | 0.143646 | 0.118785 | 0.060773 | 0 | 0 | 0 | 0 | 0.006417 | 0.337819 | 1,412 | 56 | 62 | 25.214286 | 0.767914 | 0.508499 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.470588 | false | 0 | 0 | 0 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
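The doctests in the `Deque` class above already exercise its API; as a quick illustration, the same shape can act as a FIFO queue or a LIFO stack depending on which ends are paired. This is a minimal re-implementation so the demo is self-contained, not the repository class itself:

```python
class Deque:
    """Minimal re-implementation of the Deque above, for a self-contained demo."""
    def __init__(self):
        self.items = []
    def add_front(self, item):
        self.items.append(item)
    def add_rear(self, item):
        self.items.insert(0, item)
    def remove_front(self):
        return self.items.pop()
    def remove_rear(self):
        return self.items.pop(0)

# FIFO: add at the rear, remove at the front
fifo = Deque()
for n in (1, 2, 3):
    fifo.add_rear(n)
fifo_order = [fifo.remove_front() for _ in range(3)]   # [1, 2, 3]

# LIFO: add and remove at the same end
lifo = Deque()
for n in (1, 2, 3):
    lifo.add_front(n)
lifo_order = [lifo.remove_front() for _ in range(3)]   # [3, 2, 1]
print(fifo_order, lifo_order)
```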
ab5659c429daf341f55598a6384e61b1e99df07c | 1,273 | py | Python | floodsystem/flood.py | Kaaasttuk/flood-warning-project-60 | d49b11709446e11ecf08ee5921ee8694a9642abe | [
"MIT"
] | null | null | null | floodsystem/flood.py | Kaaasttuk/flood-warning-project-60 | d49b11709446e11ecf08ee5921ee8694a9642abe | [
"MIT"
] | null | null | null | floodsystem/flood.py | Kaaasttuk/flood-warning-project-60 | d49b11709446e11ecf08ee5921ee8694a9642abe | [
"MIT"
] | 1 | 2022-01-30T21:23:04.000Z | 2022-01-30T21:23:04.000Z | from .utils import sorted_by_key
from floodsystem.station import MonitoringStation
def stations_level_over_threshold(stations, tol):
    """Return (station, relative level) pairs for every station whose
    relative water level exceeds tol, highest level first."""
    stations_over = []
    for station in stations:
        level = station.relative_water_level()
        if level is not None and level > tol:
            stations_over.append((station, level))
    stations_over = sorted_by_key(stations_over, 1)
    stations_over.reverse()
    return stations_over
def stations_highest_rel_level(stations, N):
    """Return the N stations with the highest relative water level,
    skipping stations with no data or an implausible level (> 100)."""
    stations_by_level = []
    for station in stations:
        level = station.relative_water_level()
        if level is not None and level <= 100:
            stations_by_level.append((station, level))
    stations_by_level = sorted_by_key(stations_by_level, 1)
    stations_by_level.reverse()
    list_stations = []
    for station, _ in stations_by_level[0:N]:
        list_stations.append(station)
return list_stations | 32.641026 | 95 | 0.70542 | 155 | 1,273 | 5.374194 | 0.23871 | 0.109244 | 0.142857 | 0.218487 | 0.310924 | 0.132053 | 0.132053 | 0.132053 | 0.132053 | 0.132053 | 0 | 0.008105 | 0.224666 | 1,273 | 39 | 96 | 32.641026 | 0.835866 | 0 | 0 | 0.354839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.129032 | 0.064516 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
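A self-contained sketch of the filter-sort-reverse logic in `stations_level_over_threshold`, using a hypothetical stub in place of `MonitoringStation` and an inline stand-in for `sorted_by_key` (both are assumptions, not the real floodsystem classes):

```python
class StubStation:
    """Hypothetical stand-in for floodsystem's MonitoringStation."""
    def __init__(self, name, level):
        self.name = name
        self._level = level
    def relative_water_level(self):
        return self._level

def sorted_by_key(items, i):
    # stand-in for floodsystem.utils.sorted_by_key: sort tuples by element i
    return sorted(items, key=lambda t: t[i])

def stations_level_over_threshold(stations, tol):
    over = [(s, s.relative_water_level())
            for s in stations
            if s.relative_water_level() is not None
            and s.relative_water_level() > tol]
    over = sorted_by_key(over, 1)
    over.reverse()
    return over

stations = [StubStation("A", 0.5), StubStation("B", None),
            StubStation("C", 1.5), StubStation("D", 0.9)]
result = stations_level_over_threshold(stations, 0.8)
print([(s.name, lvl) for s, lvl in result])   # [('C', 1.5), ('D', 0.9)]
```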
db3d88ae7abc7561350876dd7e87ee2f4912ccd4 | 330 | py | Python | setup.py | Benjscho/mdptetris-experiments | 743113bfdcb309c7b9904d6bc5cc5cc65dc4d2e4 | [
"MIT"
] | null | null | null | setup.py | Benjscho/mdptetris-experiments | 743113bfdcb309c7b9904d6bc5cc5cc65dc4d2e4 | [
"MIT"
] | null | null | null | setup.py | Benjscho/mdptetris-experiments | 743113bfdcb309c7b9904d6bc5cc5cc65dc4d2e4 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(name='mdptetris_experiments',
version='0.2.0',
install_requires=['gym', 'gym_mdptetris'],
author="Ben Schofield",
license='MIT',
packages=['mdptetris_experiments',
'mdptetris_experiments.agents'],
zip_safe=False)
| 23.571429 | 46 | 0.7 | 38 | 330 | 5.921053 | 0.684211 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011029 | 0.175758 | 330 | 13 | 47 | 25.384615 | 0.816176 | 0 | 0 | 0 | 0 | 0 | 0.324242 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
db641d4e51babb6a6c8b7f6dcb1d47156088db75 | 3,884 | py | Python | src/cogs/Spam.py | RED-M0CKING-LINE/Discord-Bottobulous-Maximus | db4e24e9b4403cdf8fbb20b1d62bf4b7f499ff4c | [
"Unlicense"
] | null | null | null | src/cogs/Spam.py | RED-M0CKING-LINE/Discord-Bottobulous-Maximus | db4e24e9b4403cdf8fbb20b1d62bf4b7f499ff4c | [
"Unlicense"
] | null | null | null | src/cogs/Spam.py | RED-M0CKING-LINE/Discord-Bottobulous-Maximus | db4e24e9b4403cdf8fbb20b1d62bf4b7f499ff4c | [
"Unlicense"
] | null | null | null | import utils
from discord.ext import commands
class cogSpam(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
async def pog(self, ctx):
await ctx.send('pog')
@commands.Cog.listener("on_message")
    async def on_message(self, message):  # TODO: move these triggers into a config file so they are not hard-coded and can be changed easily; use keyword overrides, with a default for every value defined at the top of the file
if not utils.on_message_check(self.bot, message): # stops the process if the check does not pass
return
msg_content = message.content.lower()
if 'padoru' in msg_content:
await message.channel.send('https://cdn.discordapp.com/attachments/897024945949913100/916812926281719818/9convert.com_-_padoru_padoru_1080pFHR.mp4.webm.mp4 ')
elif 'pog' in msg_content:
if 'pogger' in msg_content:
if utils.rng(40):
await message.channel.send('https://cdn.discordapp.com/attachments/896254987200516147/909798898124595200/Poggers.mp4 ')
else:
await message.channel.send('Pog')
return
elif 'rick' in msg_content: # Rick roll
if utils.rng(7):
                await message.channel.send('We\'re no strangers to love\nYou know the rules and so do I\nA full commitment\'s what I\'m thinking of\nYou wouldn\'t get this from any other guy\n\nI just wanna tell you how I\'m feeling\nGotta make you understand\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\n\nWe\'ve known each other for so long\nYour heart\'s been aching, but\nYou\'re too shy to say it\nInside, we both know what\'s been going on\nWe know the game and we\'re gonna play it\n\nAnd if you ask me how I\'m feeling\nDon\'t tell me you\'re too blind to see\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\n\n(Ooh, give you up)\n(Ooh, give you up)\nNever gonna give, never gonna give\n(Give you up)\nNever gonna give, never gonna give\n(Give you up)\n\nWe\'ve known each other for so long\nYour heart\'s been aching, but\nYou\'re too shy to say it\nInside, we both know what\'s been going on\nWe know the game and we\'re gonna play it\n\nI just wanna tell you how I\'m feeling\nGotta make you understand\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\n\nNever gonna give you up\nNever gonna let you down\nNever gonna run around and desert you\nNever gonna make you cry\nNever gonna say goodbye\nNever gonna tell a lie and hurt you\nhttps://www.youtube.com/watch?v=dQw4w9WgXcQ @everyone')
elif 'owo' in msg_content:
await message.channel.send('OwO')
elif 'uwu' in msg_content:
await message.channel.send('UwU')
else:
words = ['bussy', 'futa', 'lgbt', 'vore', 'loli', 'shota', 'sus', 'trap']
for x in words:
if ' ' + x.lower() in (' ' + message.content).lower():
await message.channel.send('stfu')
return
def setup(bot): # call this in main.py: bot.load_extension("cogs.Spam")
bot.add_cog(cogSpam(bot))
| 88.272727 | 2,058 | 0.701854 | 649 | 3,884 | 4.167951 | 0.298921 | 0.154529 | 0.033272 | 0.044362 | 0.573013 | 0.566728 | 0.566728 | 0.536414 | 0.536414 | 0.495749 | 0 | 0.027714 | 0.21035 | 3,884 | 43 | 2,059 | 90.325581 | 0.854255 | 0.078785 | 0 | 0.135135 | 0 | 0.027027 | 0.186346 | 0 | 0 | 0 | 0 | 0.023256 | 0 | 1 | 0.054054 | false | 0 | 0.054054 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
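The final `else` branch in the cog above prepends a space to both the keyword and the message so a keyword only matches at the start of a word ('sus' matches ' sus…' but not 'census'). A minimal, stand-alone demonstration of that trick (function name is illustrative):

```python
def contains_word_start(message, words):
    """True if any keyword appears at the start of a word in message."""
    padded = (" " + message).lower()
    return any(" " + w.lower() in padded for w in words)

words = ["sus", "trap"]
print(contains_word_start("that is sus", words))      # True: ' sus' found
print(contains_word_start("the census data", words))  # False: 'sus' not at a word start
```

Note the limitation: this only anchors the start of a word, so "suspicious" would still match "sus"; full word-boundary matching would need a regex.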
db67955c60b989c6ae99bddc69be11b3b805f824 | 1,186 | py | Python | asyncpraw/models/listing/listing.py | Lordshinjo/asyncpraw | d94d89a992d9b5b300439ea6884a9df1d95511e2 | [
"BSD-2-Clause"
] | 72 | 2020-07-14T02:02:21.000Z | 2022-03-15T13:10:02.000Z | asyncpraw/models/listing/listing.py | Lordshinjo/asyncpraw | d94d89a992d9b5b300439ea6884a9df1d95511e2 | [
"BSD-2-Clause"
] | 68 | 2020-07-16T05:29:43.000Z | 2022-03-14T12:04:56.000Z | asyncpraw/models/listing/listing.py | Lordshinjo/asyncpraw | d94d89a992d9b5b300439ea6884a9df1d95511e2 | [
"BSD-2-Clause"
] | 19 | 2020-07-22T15:34:30.000Z | 2022-03-27T20:28:46.000Z | """Provide the Listing class."""
from typing import Any, Optional
from ..base import AsyncPRAWBase
class Listing(AsyncPRAWBase):
"""A listing is a collection of RedditBase instances."""
CHILD_ATTRIBUTE = "children"
def __len__(self) -> int:
"""Return the number of items in the Listing."""
return len(getattr(self, self.CHILD_ATTRIBUTE))
def __getitem__(self, index: int) -> Any:
"""Return the item at position index in the list."""
return getattr(self, self.CHILD_ATTRIBUTE)[index]
def __setattr__(self, attribute: str, value: Any):
"""Objectify the CHILD_ATTRIBUTE attribute."""
if attribute == self.CHILD_ATTRIBUTE:
value = self._reddit._objector.objectify(value)
super().__setattr__(attribute, value)
class FlairListing(Listing):
"""Special Listing for handling flair lists."""
CHILD_ATTRIBUTE = "users"
@property
def after(self) -> Optional[Any]:
"""Return the next attribute or None."""
return getattr(self, "next", None)
class ModeratorListing(Listing):
"""Special Listing for handling moderator lists."""
CHILD_ATTRIBUTE = "moderators"
| 28.238095 | 60 | 0.666948 | 136 | 1,186 | 5.632353 | 0.426471 | 0.127937 | 0.070496 | 0.052219 | 0.159269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216695 | 1,186 | 41 | 61 | 28.926829 | 0.824543 | 0.279089 | 0 | 0 | 0 | 0 | 0.03317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.105263 | 0 | 0.789474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
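The `CHILD_ATTRIBUTE` pattern in the listing classes above lets each subclass choose which attribute backs the sequence protocol. A dependency-free sketch of the same idea, simplified and without Async PRAW's objectification step:

```python
class Listing:
    """Delegates len() and indexing to the attribute named by CHILD_ATTRIBUTE."""
    CHILD_ATTRIBUTE = "children"
    def __init__(self, **attrs):
        self.__dict__.update(attrs)
    def __len__(self):
        return len(getattr(self, self.CHILD_ATTRIBUTE))
    def __getitem__(self, index):
        return getattr(self, self.CHILD_ATTRIBUTE)[index]

class FlairListing(Listing):
    CHILD_ATTRIBUTE = "users"    # same protocol, different backing attribute

posts = Listing(children=["a", "b", "c"])
flairs = FlairListing(users=["mod1", "mod2"])
print(len(posts), posts[0], len(flairs), flairs[-1])  # 3 a 2 mod2
```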
db8be30ff70554edb179109037665e51c04510ec | 874 | py | Python | espnet/nets/pytorch_backend/transformer/layer_norm.py | Syzygianinfern0/espnet | 3ea59a0050e8a6a40138ac2365c258825b02f9cd | [
"Apache-2.0"
] | 252 | 2020-05-15T14:50:14.000Z | 2022-03-17T08:38:16.000Z | espnet/nets/pytorch_backend/transformer/layer_norm.py | Syzygianinfern0/espnet | 3ea59a0050e8a6a40138ac2365c258825b02f9cd | [
"Apache-2.0"
] | 20 | 2021-06-14T20:15:49.000Z | 2022-02-18T09:05:00.000Z | espnet/nets/pytorch_backend/transformer/layer_norm.py | Syzygianinfern0/espnet | 3ea59a0050e8a6a40138ac2365c258825b02f9cd | [
"Apache-2.0"
] | 41 | 2020-05-15T14:33:35.000Z | 2021-12-22T08:41:30.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2019 Shigeki Karita
# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""Layer normalization module."""
import torch
class LayerNorm(torch.nn.LayerNorm):
"""Layer normalization module.
:param int nout: output dim size
:param int dim: dimension to be normalized
"""
def __init__(self, nout, dim=-1):
"""Construct an LayerNorm object."""
super(LayerNorm, self).__init__(nout, eps=1e-12)
self.dim = dim
def forward(self, x):
"""Apply layer normalization.
:param torch.Tensor x: input tensor
:return: layer normalized tensor
:rtype torch.Tensor
"""
if self.dim == -1:
return super(LayerNorm, self).forward(x)
return super(LayerNorm, self).forward(x.transpose(1, -1)).transpose(1, -1)
| 25.705882 | 82 | 0.620137 | 111 | 874 | 4.810811 | 0.513514 | 0.101124 | 0.101124 | 0.089888 | 0.11985 | 0.11985 | 0 | 0 | 0 | 0 | 0 | 0.028658 | 0.241419 | 874 | 33 | 83 | 26.484848 | 0.776772 | 0.471396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
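The class above delegates the actual work to `torch.nn.LayerNorm` with `eps=1e-12`. For reference, the normalization it applies per vector (before the learned affine transform) is `(x - mean) / sqrt(var + eps)`; a dependency-free numeric sketch of that formula:

```python
import math

def layer_norm(x, eps=1e-12):
    """Normalize a 1-D list to zero mean and unit variance (no affine part)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)   # biased variance, as LayerNorm uses
    return [(v - mean) / math.sqrt(var + eps) for v in x]

y = layer_norm([1.0, 2.0, 3.0])
print([round(v, 4) for v in y])   # [-1.2247, 0.0, 1.2247]
```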
db9623e0925722b6e142673af1ad85bac6da5f4b | 9,996 | py | Python | shan_convert_web/ShanConvert.py | yathit/waitzar | be0d1f2fe48ebc6094dbe2312ab18f4f05cf9136 | [
"Apache-2.0"
] | 1 | 2017-09-10T03:09:46.000Z | 2017-09-10T03:09:46.000Z | shan_convert_web/ShanConvert.py | yathit/waitzar | be0d1f2fe48ebc6094dbe2312ab18f4f05cf9136 | [
"Apache-2.0"
] | null | null | null | shan_convert_web/ShanConvert.py | yathit/waitzar | be0d1f2fe48ebc6094dbe2312ab18f4f05cf9136 | [
"Apache-2.0"
] | 3 | 2016-07-26T17:42:30.000Z | 2019-11-11T14:18:20.000Z | #
# This is a SIL encoding converter for the old Shan formats.
# Copyright 2011 Seth N. Hetu
# Released under the terms of the Apache License 2.0
# See end of file for license terms
#
# Main conversion function
dirConvert = {
# Normal letters
u'\u0021' : u'\u101E',
u'\u0023' : u'\uAA66',
u'\u0024' : u'\uAA67',
u'\u0025' : u'\uAA68',
u'\u0026' : u'\u102D\u1036',
u'\u0027' : u'\u107E',
u'\u0040' : u'\u101A',
u'\u003F' : u'\u104A',
u'\u003E' : u'\u1036',
u'\u003C' : u'\u108A',
u'\u003B' : u'\u1088',
u'\u003A' : u'\u1038',
u'\u002F' : u'\u104B',
u'\u002C' : u'\u1087',
u'\u002E' : u'\u1089',
u'\u0042' : u'\u103B',
u'\u0043' : u'\u1076',
u'\u0044' : u'\u102E',
u'\u0045' : u'\u107C',
u'\u0046' : u'\u103C',
u'\u0047' : u'\u1082',
u'\u0048' : u'\u0021',
u'\u0041' : u'\u1035',
u'\u004A' : u'\u108C\u108B',
u'\u004F' : u'\u101E',
u'\u0050' : u'\u1081\u1082\u103A',
u'\u0051' : u'\u1022',
u'\u0052' : u'\u103B\u103D',
#u'\u0053' : u'\u1004\u103A\u1039',
u'\u0053' : u'\uAA7F', #NOTE: This is a temp. hack, since kinzi is the only multi-code-point letter here
u'\u0054' : u'\u103C',
u'\u0055' : u'\u1075',
u'\u0056' : u'\uAA6E',
u'\u0057' : u'\u107B',
u'\u0058' : u'\uAA6A',
u'\u0059' : u'\u107F',
u'\u005A' : u'\u107D',
u'\u005B' : u'\u1082\u103A',
u'\u005E' : u'\uAA69',
u'\u0061' : u'\u1031',
u'\u0062' : u'\u101A',
u'\u0063' : u'\u1076',
u'\u0064' : u'\u102D',
u'\u0065' : u'\u107C',
u'\u0066' : u'\u103A',
u'\u0067' : u'\u103D',
u'\u0068' : u'\u1086',
u'\u0069' : u'\u1004',
u'\u006A' : u'\u1083',
u'\u006B' : u'\u102F',
u'\u006C' : u'\u1030',
u'\u006D' : u'\u1064',
u'\u006E' : u'\u107A',
u'\u00A8' : u'\u1080',
u'\u00A9' : u'\u109F',
u'\u00AB' : u'\u107D\u1030',
u'\u00AC' : u'\u1081\u102F',
u'\u007D' : u'\u2019',
u'\u007A' : u'\u107D',
u'\u0079' : u'\u1015',
u'\u004B' : u'\u102F',
u'\u004C' : u'\u1030',
u'\u004D' : u'\u1081\u103D',
u'\u004E' : u'\u1081\u1030',
u'\u006F' : u'\u101D',
u'\u0070' : u'\u1081',
u'\u0076' : u'\u101C',
u'\u0077' : u'\u1010',
u'\u0071' : u'\u1078',
u'\u0072' : u'\u1019',
u'\u0073' : u'\u1084',
u'\u0074' : u'\u1022',
u'\u0075' : u'\u1075',
u'\u0078' : u'\u1011',
u'\u0049' : u'\u101B',
u'\u005D' : u'\u2018',
u'\u00B5' : u'\u1091',
u'\u00B6' : u'\u1092',
u'\u2219' : u'\u1093',
u'\u00B8' : u'\u1094',
u'\u00B9' : u'\u1095',
u'\u00BA' : u'\u1096',
u'\u00BB' : u'\u1097',
u'\u00BC' : u'\u1098',
u'\u00BD' : u'\u1099',
u'\u2022' : u'\u1099',
u'\u201D' : u'\u1098',
u'\u201C' : u'\u1097',
u'\u2019' : u'\u1096',
u'\u2018' : u'\u1095',
u'\u00F6' : u'\u101B\u102F',
u'\u00F7' : u'\u101B\u1030',
u'\u00F8' : u'\u101B\u103D',
u'\u00C7' : u'\u1049',
u'\u00C6' : u'\u1048',
u'\u00C5' : u'\u1047',
u'\u00C4' : u'\u1046',
u'\u00C3' : u'\u1045',
u'\u00C2' : u'\u1044',
u'\u00C1' : u'\u1043',
u'\u00C0' : u'\u1042',
u'\u00BF' : u'\u1041',
u'\u00BE' : u'\u1040',
#Other letters
u'\u00A0' : u'\u2740',
u'\u00A1' : u'\u2638',
u'\u00A2' : u'\u2729',
u'\u00A3' : u'\u263A',
u'\u00A4' : u'\u27A9',
u'\u00A5' : u'\u270D',
u'\u00A6' : u'\u260F',
u'\u00A7' : u'\u231A',
u'\u007E' : u'\u007E',
u'\u005C' : u'\u00F7',
u'\u003D' : u'\u003D',
u'\u002D' : u'\u002D',
u'\u002B' : u'\u002B',
u'\u002A' : u'\u273D',
u'\u0029' : u'\u0029',
u'\u0028' : u'\u0028',
u'\u0022' : u'\u0022',
u'\u007B' : u'\u00D7',
u'\u007C' : u'\u0025',
u'\u005F' : u'\u002F',
u'\u0060' : u'\u003F',
#Numbers (Arabic)
u'\u0030' : u'\u0030',
u'\u0031' : u'\u0031',
u'\u0032' : u'\u0032',
u'\u0033' : u'\u0033',
u'\u0034' : u'\u0034',
u'\u0035' : u'\u0035',
u'\u0036' : u'\u0036',
u'\u0037' : u'\u0037',
u'\u0038' : u'\u0038',
u'\u0039' : u'\u0039',
#Whitespace
u'\t' : u'\t',
u'\n' : u'\n',
u'\u0020' : u'\u0020'
}
# Helper
CONS = u'\u1000\u1001\u1002\u1003\u1004\u1005\u1006\u1007\u1008\u1009\u100A\u100B\u100C\u100D\u100E\u100F' + \
u'\u1010\u1011\u1012\u1013\u1014\u1015\u1016\u1017\u1018\u1019\u101A\u101B\u101C\u101D\u101E\u101F' + \
u'\u1020\u1021\u1022\u1023\u1024\u1025\u1026\u1027\u1028\u1029\u102A' + \
u'\u103F' + \
u'\u1041\u1042\u1043\u1044\u1045\u1046\u1047\u1048\u1049' + \
u'\u104E' + \
u'\u105A\u105B\u105C\u105D' + \
u'\u1061\u1065\u1066' + \
u'\u106E\u106F\u1070' + \
u'\u1075\u1076\u1077\u1078\u1079\u107A\u107B\u107C\u107D\u107E\u107F\u1080\u1081' + \
u'\u108E' + \
u'\uAA60\uAA61\uAA62\uAA63\uAA64\uAA65\uAA66\uAA67\uAA68\uAA69\uAA6A\uAA6B\uAA6C\uAA6D\uAA6E\uAA6F' + \
u'\uAA71\uAA72\uAA73\uAA74\uAA75\uAA76'
#Input normalization order. Cannot go backwards
input_norm = [
#E vowel
u'\u1031\u1084',
#Medial R
u'\u103C',
#Consonant
CONS,
#Everything else
u'\uAA7F' +
u'\u103B\u105E\u105F' +
u'\u103D\u1082' +
u'\u103E\u1060' +
u'\u102D\u102E\u1032\u1033\u1034\u1035\u1036\u1071\u1072\u1073\u1074\u1085\u109D' +
u'\u102F\u1030' +
u'\u1086' +
u'\u102B\u102C\u1062\u1063\u1067\u1068\u1083' +
u'\u1036\u1032' +
u'\u1037' +
u'\u103A' +
u'\u1038\u1087\u1088\u1089\u108A\u108B\u108C\u108D\u108F\u109A\u109B\u109C'
]
#Output normalization order
output_norm = [
#Kinzi
u'\uAA7F',
#Consonant
CONS,
#(No stacked1, stacked2, asat)
#Medial Y, R, W, H
u'\u103B\u105E\u105F',
u'\u103C',
u'\u103D\u1082',
u'\u103E\u1060',
#(No mon asat)
#E vowel
u'\u1031\u1084',
#(No shan E vowel)
#Upper vowel, lower vowel
u'\u102D\u102E\u1032\u1033\u1034\u1035\u1036\u1071\u1072\u1073\u1074\u1085\u109D',
u'\u102F\u1030',
#(No Karen vowel)
#Shan vowel
u'\u1086',
#A vowel
u'\u102B\u102C\u1062\u1063\u1067\u1068\u1083',
#Anusvara
u'\u1036\u1032',
#(No Pwo tone)
#Dot below
u'\u1037',
#(No mon H)
#Visible virama
u'\u103A',
#Visarga
u'\u1038\u1087\u1088\u1089\u108A\u108B\u108C\u108D\u108F\u109A\u109B\u109C'
#(No reduplication)
]
def find_letter(str, letter):
for i in xrange(len(str)):
if str[i]==letter:
return i
return -1
# Match one of these arrays, return an id:
def match_arr(arr, letter):
for id in xrange(len(arr)):
entry = arr[id]
if find_letter(entry, letter) != -1:
return id
return -1
# Flush an array of strings
def flush(arr):
res = u''
for id in xrange(len(arr)):
elem = arr[id]
if elem==u'\uAA7F':
elem = u'\u1004\u103A\u1039'
res += elem
arr[id] = u''
return res
def esc(str):
res = u''
    for c in str:
        res += '\\u' + hex(ord(c))[2:].upper()
return res
def fixQuotes(letter, prevStr, single, double):
if letter!=single or len(prevStr)==0 or prevStr[-1]!=single:
return u''
return double
unknown = u'';
def convert(source):
#First, direct convert
res = u''
unknown = u''
for letter in source:
if dirConvert.has_key(letter):
curr = dirConvert[letter]
#Special case, quotes
fix = fixQuotes(curr, res, u'\u2018', u'\u201C')
fix += fixQuotes(curr, res, u'\u2019', u'\u201D')
if fix:
res = res[:len(res)-1]
curr = fix
res += curr
else:
unknown += u':'*(len(unknown)>0) + letter
#Now, normalize it
final_result = u''
norm_in_id = 0
    norm_out = []
    # one empty accumulator per output_norm slot
    for i in output_norm:
        norm_out.append(u'')
for letter in res:
#Get its corresponding entry IDs in input_norm and output_norm
in_id = match_arr(input_norm, letter)
out_id = match_arr(output_norm, letter)
if in_id==-1 and out_id==-1:
#No match; flush the input
final_result += flush(norm_out)
norm_in_id = 0
#Append it
final_result += letter
#Report unknown unicode letters
ordVal = ord(letter[0])
if (ordVal>=0x1000 and ordVal<=0x109F) or (ordVal>=0xAA60 and ordVal<=0xAA7F):
unknown += u':'*(len(unknown)>0) + u'(' + str(ordVal) + u')'
elif in_id==-1 or out_id==-1:
#Error!
unknown = u'*** ' + letter + u' ' + str(in_id) + u' ' + str(out_id)
raise UnicodeError(u'Bad string: %s' % unknown)
else:
#A match exists; add it to output_norm. But first, check if this will make us 'go backwards'
if in_id < norm_in_id:
final_result += flush(norm_out)
norm_in_id = 0
#Slot it
norm_out[out_id] += letter
norm_in_id = in_id
#Special case! Consonant will always advance the ID by 1
if norm_in_id==2:
norm_in_id += 1
#Append any remaining letters
final_result += flush(norm_out)
return final_result
# Main conversion function
# Specify this in the "Function Name" box of the SIL converter.
def ShanConvertString(str):
if not isinstance(str, unicode):
raise UnicodeError(u'Invalid (non-unicode) input string: %s' % str)
return convert(str)
if __name__ == "__main__":
src = u']]Twj:qgrf:vlnf;pof:]cj;} b[,erfaOurf:}}'
expected = '“တြႃးၸွမ်းလူၺ်ႈႁဝ်း‘ၶႃႈ’ ယႂ်ႇၼမ်သေၵမ်း”'
res = ShanConvertString(src)
print src
print esc(res)
print esc(expected)
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 25.178841 | 111 | 0.565626 | 1,503 | 9,996 | 3.735196 | 0.369927 | 0.009263 | 0.0114 | 0.004809 | 0.117919 | 0.087994 | 0.073388 | 0.073388 | 0.060563 | 0.049163 | 0 | 0.226892 | 0.235894 | 9,996 | 396 | 112 | 25.242424 | 0.505368 | 0.180172 | 0 | 0.093284 | 0 | 0.037313 | 0.400285 | 0.126617 | 0 | 0 | 0.003104 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.011194 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
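The normalization pass in `convert` above works by slotting each character into an ordered bucket (`output_norm`) and flushing the buckets when a cluster ends. The core slot-and-flush idea, sketched with ASCII stand-ins instead of Myanmar/Shan code points; this is simplified (it flushes only at unknown characters, whereas the original also flushes when input order regresses):

```python
# buckets define the canonical output order; input may arrive shuffled within a cluster
output_order = ["C", "M", "V"]   # consonant, medial, vowel (illustrative classes only)
classify = {"k": "C", "y": "M", "i": "V"}

def normalize(text):
    out = []
    buckets = {cls: "" for cls in output_order}
    def flush():
        for cls in output_order:
            out.append(buckets[cls])
            buckets[cls] = ""
    for ch in text:
        cls = classify.get(ch)
        if cls is None:          # unknown character: flush the cluster, pass it through
            flush()
            out.append(ch)
        else:
            buckets[cls] += ch
    flush()
    return "".join(out)

print(normalize("iyk "))   # "kyi " — cluster reordered to canonical C-M-V order
```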
db96f943666f258c29cb0fb7a18e58a7860b38a5 | 171 | py | Python | Algorithms/Implementation/encryption.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | 9 | 2017-03-19T16:27:31.000Z | 2022-02-17T11:42:21.000Z | Algorithms/Implementation/encryption.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | null | null | null | Algorithms/Implementation/encryption.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | 6 | 2019-02-18T11:26:24.000Z | 2022-03-21T14:13:15.000Z | # Python 2
import sys
import math
s = raw_input().replace(" ", "")
sq = math.sqrt(len(s))
col = int(math.ceil(sq))
for j in range(col):
print s[j::col], | 14.25 | 33 | 0.567251 | 29 | 171 | 3.310345 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.245614 | 171 | 12 | 34 | 14.25 | 0.736434 | 0.046784 | 0 | 0 | 0 | 0 | 0.006623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.285714 | null | null | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
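The Python 2 snippet above implements the classic grid-encryption scheme: strip spaces, lay the text in rows of width ceil(sqrt(len(s))), and read it off column by column (which is exactly what the `s[j::col]` slices do). A Python 3 equivalent with a fixed input instead of stdin:

```python
import math

def encrypt(text):
    """Grid encryption: read the space-stripped text column-wise."""
    s = text.replace(" ", "")
    cols = math.ceil(math.sqrt(len(s)))
    return " ".join(s[j::cols] for j in range(cols))

print(encrypt("have a nice day"))   # hae and via ecy
```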
db993a6dd77857693ff2c1af0d6b5f25ad596f2f | 3,388 | py | Python | code/default/python27/1.0/lib/noarch/front_base/config.py | xeddmc/XX-Net | d58e7b4258a21dce35b5f5477264d342a7346cdc | [
"BSD-2-Clause"
] | null | null | null | code/default/python27/1.0/lib/noarch/front_base/config.py | xeddmc/XX-Net | d58e7b4258a21dce35b5f5477264d342a7346cdc | [
"BSD-2-Clause"
] | null | null | null | code/default/python27/1.0/lib/noarch/front_base/config.py | xeddmc/XX-Net | d58e7b4258a21dce35b5f5477264d342a7346cdc | [
"BSD-2-Clause"
] | null | null | null |
import xconfig
class ConfigBase(xconfig.Config):
def set_default(self):
# proxy
self.set_var("PROXY_ENABLE", 0)
self.set_var("PROXY_TYPE", "HTTP")
self.set_var("PROXY_HOST", "")
self.set_var("PROXY_PORT", 0)
self.set_var("PROXY_USER", "")
self.set_var("PROXY_PASSWD", "")
# http_dispatcher
self.set_var("dispather_min_idle_workers", 0)
self.set_var("dispather_work_min_idle_time", 0)
self.set_var("dispather_work_max_score", 20000)
self.set_var("dispather_min_workers", 0)
self.set_var("dispather_max_workers", 60)
self.set_var("dispather_score_factor", 1)
self.set_var("dispather_max_idle_workers", 30)
self.set_var("max_task_num", 100)
# http 1.1 worker
self.set_var("http1_first_ping_wait", 300)
self.set_var("http1_ping_interval", 300)
self.set_var("http1_idle_time", 360)
self.set_var("http1_max_process_tasks", 99999999)
# http 2 worker
self.set_var("http2_max_concurrent", 60)
self.set_var("http2_target_concurrent", 60)
self.set_var("http2_max_timeout_tasks", 5)
self.set_var("http2_timeout_active", 15)
self.set_var("http2_status_to_close", [])
self.set_var("http2_show_debug", 0)
self.set_var("http2_ping_min_interval", 5)
# worker_base
self.set_var("show_state_debug", 0)
# connect manager
self.set_var("https_max_connect_thread", 1)
self.set_var("max_connect_thread", 1)
self.set_var("ssl_first_use_timeout", 10)
self.set_var("connection_pool_min", 1)
self.set_var("https_keep_alive", 15)
self.set_var("https_connection_pool_min", 1)
self.set_var("https_connection_pool_max", 2)
self.set_var("https_new_connect_num", 1)
self.set_var("http1_new_connect_num", 1)
# check_ip
self.set_var("check_ip_host", "")
self.set_var("check_ip_path", "/")
self.set_var("check_ip_accept_status", [200])
self.set_var("check_ip_content", "OK")
# connect_creator
self.set_var("connect_receive_buffer", 1024 * 128)
self.set_var("connect_force_http1", 0)
self.set_var("connect_force_http2", 0)
self.set_var("check_pkp", [])
self.set_var("check_commonname", "")
self.set_var("check_sni", 0) # 0, 1, string
self.set_var("min_intermediate_CA", 0)
# ip manager
self.set_var("check_exist_ip_on_startup", 0)
self.set_var("auto_adjust_scan_ip_thread_num", 1)
self.set_var("max_scan_ip_thread_num", 0)
self.set_var("max_good_ip_num", 100)
self.set_var("target_handshake_time", 300)
self.set_var("max_links_per_ip", 1)
self.set_var("ip_connect_interval", 5)
self.set_var("record_ip_history", 0)
self.set_var("down_fail_connect_interval", 60)
self.set_var("long_fail_threshold", 300)
self.set_var("long_fail_connect_interval", 180)
self.set_var("short_fail_connect_interval", 10)
# ip source
self.set_var("use_ipv6", "auto") #force_ipv4/force_ipv6
self.set_var("ipv6_scan_ratio", 50) # 0 - 100
def load(self):
super(ConfigBase, self).load()
if self.check_pkp:
self.CHECK_PKP = set(self.check_pkp) | 36.826087 | 63 | 0.642857 | 491 | 3,388 | 4.010183 | 0.242363 | 0.213306 | 0.304723 | 0.061453 | 0.315896 | 0.144744 | 0.060945 | 0.03352 | 0 | 0 | 0 | 0.046449 | 0.23111 | 3,388 | 92 | 64 | 36.826087 | 0.709405 | 0.049292 | 0 | 0 | 0 | 0 | 0.357967 | 0.2058 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0.014925 | 0.014925 | 0 | 0.059701 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
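`ConfigBase` above registers every tunable through `set_var`. Assuming xconfig's `set_var(name, default)` installs the default only when the user's config file did not supply a value (an assumption; the xconfig source is not shown here), the pattern can be sketched without the library:

```python
class Config:
    """Hypothetical minimal stand-in for xconfig.Config."""
    def __init__(self, user_settings=None):
        self._user = dict(user_settings or {})
        self.set_default()
    def set_default(self):
        pass  # subclasses register their defaults here
    def set_var(self, name, default):
        # user-provided values win; otherwise fall back to the default
        setattr(self, name, self._user.get(name, default))

class MyConfig(Config):
    def set_default(self):
        self.set_var("max_task_num", 100)
        self.set_var("use_ipv6", "auto")

cfg = MyConfig({"use_ipv6": "force_ipv4"})
print(cfg.max_task_num, cfg.use_ipv6)   # 100 force_ipv4
```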
dbad9ea50e185c2d3fe5514570bf24f322e34ca6 | 3,198 | py | Python | Scripts/GridSearch/ModelBuilderRNN.py | bio-hpc/sibila | 337ea84692d6ea4f4d3e4de9da51f5ee53cff6d7 | [
"Apache-2.0"
] | 1 | 2022-03-07T11:05:31.000Z | 2022-03-07T11:05:31.000Z | Scripts/GridSearch/ModelBuilderRNN.py | bio-hpc/sibila | 337ea84692d6ea4f4d3e4de9da51f5ee53cff6d7 | [
"Apache-2.0"
] | null | null | null | Scripts/GridSearch/ModelBuilderRNN.py | bio-hpc/sibila | 337ea84692d6ea4f4d3e4de9da51f5ee53cff6d7 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""ModelBuilderRNN.py:
"""
__author__ = "Antonio Jesús Banegas-Luna"
__version__ = "1.0"
__maintainer__ = "Antonio"
__email__ = "ajbanegas@ucam.edu"
__status__ = "Development"
from BaseModelBuilder import BaseModelBuilder
class ModelBuilderRNN(BaseModelBuilder):
def get_default_model(self):
p = {}
p['model'] = self.model_name
p['type_ml'] = 'classification'
p['n_job'] = 1
p['params'] = {}
p['params']['draw_model'] = False
p['params']['random_state'] = 500
p['params']['batch_size'] = 1024
p['params']['epochs'] = 200
p['params']['loss_function'] = 'binary_crossentropy'
p['params']['cv_splits'] = 3
p['params']['optimizer'] = {}
p['params']['optimizer']['type'] = 'tensorflow.keras.optimizers.Adam'
p['params']['optimizer']['properties'] = {}
p['params']['optimizer']['properties']['learning_rate'] = 0.01
p['params']['optimizer']['properties']['beta_1'] = 0.99
p['params']['optimizer']['properties']['beta_2'] = 0.999
p['params']['optimizer']['properties']['epsilon'] = 1e-8
p['params']['layers'] = {}
p['params']['layers']['rnn_0'] = {}
p['params']['layers']['rnn_0']['type'] = 'tensorflow.keras.layers.SimpleRNN'
p['params']['layers']['rnn_0']['properties'] = {}
p['params']['layers']['rnn_0']['properties']['units'] = 16
p['params']['layers']['rnn_0']['properties']['kernel_initializer'] = 'ones'
p['params']['layers']['rnn_0']['properties']['name'] = 'rnn_0'
p['params']['layers']['dense_0'] = {}
p['params']['layers']['dense_0']['type'] = 'tensorflow.keras.layers.Dense'
p['params']['layers']['dense_0']['properties'] = {}
p['params']['layers']['dense_0']['properties']['units'] = 16
p['params']['layers']['dense_0']['properties']['activation'] = 'relu'
p['params']['layers']['dense_0']['properties']['name'] = 'dense_0'
p['params']['layers']['dense_1'] = {}
p['params']['layers']['dense_1']['type'] = 'tensorflow.keras.layers.Dense'
p['params']['layers']['dense_1']['properties'] = {}
p['params']['layers']['dense_1']['properties']['units'] = 8
p['params']['layers']['dense_1']['properties']['activation'] = 'relu'
p['params']['layers']['dense_1']['properties']['name'] = 'dense_1'
p['params']['layers']['dense_2'] = {}
p['params']['layers']['dense_2']['type'] = 'tensorflow.keras.layers.Dense'
p['params']['layers']['dense_2']['properties'] = {}
p['params']['layers']['dense_2']['properties']['units'] = 1
p['params']['layers']['dense_2']['properties']['activation'] = 'sigmoid'
p['params']['layers']['dense_2']['properties']['name'] = 'dense_2'
p['params']['metrics'] = [
'tensorflow.keras.metrics.MeanAbsoluteError',
'tensorflow.keras.metrics.MeanSquaredError',
'tensorflow.keras.metrics.MeanSquaredLogarithmicError',
'tensorflow.keras.metrics.RootMeanSquaredError'
]
return p
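The long run of nested assignments above can be expressed as a single dict literal, which is easier to review and diff. A sketch of an equivalent builder (the `default_rnn_config` helper name is hypothetical; the layer and optimizer class paths are kept as strings, exactly as `get_default_model` stores them):

```python
# Equivalent configuration built as one literal instead of ~40 assignments.
def default_rnn_config(model_name):
    return {
        'model': model_name,
        'type_ml': 'classification',
        'n_job': 1,
        'params': {
            'draw_model': False,
            'random_state': 500,
            'batch_size': 1024,
            'epochs': 200,
            'loss_function': 'binary_crossentropy',
            'cv_splits': 3,
            'optimizer': {
                'type': 'tensorflow.keras.optimizers.Adam',
                'properties': {'learning_rate': 0.01, 'beta_1': 0.99,
                               'beta_2': 0.999, 'epsilon': 1e-8},
            },
            'layers': {
                'rnn_0': {'type': 'tensorflow.keras.layers.SimpleRNN',
                          'properties': {'units': 16,
                                         'kernel_initializer': 'ones',
                                         'name': 'rnn_0'}},
                'dense_0': {'type': 'tensorflow.keras.layers.Dense',
                            'properties': {'units': 16, 'activation': 'relu',
                                           'name': 'dense_0'}},
                'dense_1': {'type': 'tensorflow.keras.layers.Dense',
                            'properties': {'units': 8, 'activation': 'relu',
                                           'name': 'dense_1'}},
                'dense_2': {'type': 'tensorflow.keras.layers.Dense',
                            'properties': {'units': 1, 'activation': 'sigmoid',
                                           'name': 'dense_2'}},
            },
            'metrics': [
                'tensorflow.keras.metrics.MeanAbsoluteError',
                'tensorflow.keras.metrics.MeanSquaredError',
                'tensorflow.keras.metrics.MeanSquaredLogarithmicError',
                'tensorflow.keras.metrics.RootMeanSquaredError',
            ],
        },
    }
```

A literal like this also makes it straightforward to move the defaults into a JSON or YAML file later, since the structure is already pure data.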
| 42.64 | 84 | 0.556598 | 338 | 3,198 | 5.079882 | 0.269231 | 0.163075 | 0.189284 | 0.188701 | 0.488061 | 0.405358 | 0.168899 | 0.083867 | 0.083867 | 0 | 0 | 0.024571 | 0.198249 | 3,198 | 74 | 85 | 43.216216 | 0.645086 | 0.019387 | 0 | 0 | 0 | 0 | 0.48417 | 0.106172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0 | 0.017241 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
dbae924e797361192e9bf8921c5ce2377174f27f | 396 | py | Python | Part_3_advanced/m14_metaclass/register_cls/example_1/example_system/human.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m14_metaclass/register_cls/example_1/example_system/human.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_3_advanced/m14_metaclass/register_cls/example_1/example_system/human.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | from example_system.serializable import Serializable
from example_system.serializable_registry import SerializableRegistry
class Human(Serializable):
def __init__(self, name: str, age: int) -> None:
super().__init__(name, age)
self.name = name
self.age = age
def __str__(self) -> str:
return f"Human: {self.name}"
SerializableRegistry.register(Human)
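The module-level `register` call suggests a simple class-registry pattern: each serializable class announces itself at import time. A minimal self-contained sketch of what such a registry might look like (hypothetical — the real `SerializableRegistry` lives in `example_system` and is not shown here):

```python
class Registry:
    """Minimal class registry: collects classes by name at import time."""
    _classes = {}

    @classmethod
    def register(cls, klass):
        cls._classes[klass.__name__] = klass
        return klass  # returning the class lets register() double as a decorator


class Human:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"Human: {self.name}"


Registry.register(Human)
```

Because `register` returns the class, the same registry could also be applied as `@Registry.register` directly above the class definition.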
| 24.75 | 69 | 0.704545 | 46 | 396 | 5.73913 | 0.456522 | 0.090909 | 0.128788 | 0.219697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199495 | 396 | 15 | 70 | 26.4 | 0.832808 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.1 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
dbb3e5ebf0996b53a0b9dd9d89f62313bf5fba86 | 2,472 | py | Python | py-scripts/ssa_example2.py | RiccardoAiolfi/bbt | c5450ae3f1cbf43e5e09b3761cbcb70acacb6729 | [
"MIT"
] | 1 | 2020-05-14T07:53:56.000Z | 2020-05-14T07:53:56.000Z | py-scripts/ssa_example2.py | RiccardoAiolfi/bbt | c5450ae3f1cbf43e5e09b3761cbcb70acacb6729 | [
"MIT"
] | null | null | null | py-scripts/ssa_example2.py | RiccardoAiolfi/bbt | c5450ae3f1cbf43e5e09b3761cbcb70acacb6729 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright (C) 2017-2020 The btclib developers
#
# This file is part of btclib. It is subject to the license terms in the
# LICENSE file found in the top-level directory of this distribution.
#
# No part of btclib including this file, may be copied, modified, propagated,
# or distributed except according to the terms contained in the LICENSE file.
from hashlib import sha256
from btclib.curvemult import mult, double_mult
from btclib.curves import secp256k1 as ec
from btclib.numbertheory import mod_inv
from btclib.utils import int_from_bits
from btclib.ssa import _challenge
# TODO remove any import from ssa
print("\n*** EC:")
print(ec)
print("1. Key generation")
q = 0x18E14A7B6A307F426A94F8114701E7C8E774E7F9A47E2C2035DB29A206321725
q = q % ec.n
Q = mult(q, ec.G)
if not ec.has_square_y(Q):
q = ec.n - q
Q = (Q[0], ec.p - Q[1])
print(f"prvkey: {hex(q).upper()}")
print(f"PubKey: {hex(Q[0]).upper()}")
print("\n0. Message to be signed")
orig_msg1 = "Paolo is afraid of ephemeral random numbers"
msg1 = sha256(orig_msg1.encode()).digest()
print(msg1.hex().upper())
print("\n*** Ephemeral key and challenge")
# ephemeral key k must be kept secret and never reused !!!!!
# good choice: k = hf(q||msg)
# different for each msg, private because of q
temp = q.to_bytes(32, 'big') + msg1
k1_bytes = sha256(temp).digest()
k1 = int.from_bytes(k1_bytes, 'big') % ec.n   # naive reduction of the hash bytes
k1 = int_from_bits(k1_bytes, ec.nlen) % ec.n  # bit-length-aware reduction, overrides the line above
assert 0 < k1 < ec.n, "Invalid ephemeral key"
print(f"eph k: {hex(k1).upper()}")
K1 = mult(k1, ec.G)
c1 = _challenge(K1[0], Q[0], msg1, ec, sha256)
print(f" c1: {hex(c1).upper()}")
print("2. Sign message")
r1 = K1[0]
s1 = (k1 + c1*q) % ec.n
print(f" r1: {hex(r1).upper()}")
print(f" s1: {hex(s1).upper()}")
print("3. Verify signature")
K = double_mult(-c1, Q, s1, ec.G)
print(K[0] == r1)
print("\n0. Another message to sign")
orig_msg2 = "and Paolo is right to be afraid"
msg2 = sha256(orig_msg2.encode()).digest()
print(msg2.hex().upper())
print("\n*** Ephemeral key and challenge")
# ephemeral key k must be kept secret and never reused !!!!!
k2 = k1
print(f"eph k: {hex(k2).upper()}")
K2 = mult(k2, ec.G)
c2 = _challenge(K2[0], Q[0], msg2, ec, sha256)
print(f" c2: {hex(c2).upper()}")
print("2. Sign message")
r2 = K2[0]
s2 = (k2 + c2*q) % ec.n
print(f" r2: {hex(r2).upper()}")
print(f" s2: {hex(s2).upper()}")
print("3. Verify signature")
K = double_mult(-c2, Q, s2, ec.G)
print(K[0] == r2)
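The script stops just short of the punchline: reusing the same ephemeral key `k` for two messages leaks the private key. From s1 = k + c1·q and s2 = k + c2·q (mod n) it follows that q = (s1 − s2)·(c1 − c2)⁻¹ mod n. A toy-number sketch of the recovery, using a small prime group order instead of secp256k1 (the values here are illustrative, not from the script above):

```python
# Toy demonstration of Schnorr key recovery from nonce reuse.
n = 101          # toy group order (prime), stand-in for ec.n
q = 37           # "private key"
k = 59           # ephemeral key, wrongly reused for both signatures
c1, c2 = 11, 73  # two different challenges

s1 = (k + c1 * q) % n
s2 = (k + c2 * q) % n

# The attacker sees only (s1, c1) and (s2, c2), yet recovers q:
recovered = ((s1 - s2) * pow(c1 - c2, -1, n)) % n
```

`pow(x, -1, n)` (Python 3.8+) computes the modular inverse, playing the role of btclib's `mod_inv`.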
| 26.021053 | 77 | 0.671521 | 427 | 2,472 | 3.836066 | 0.311475 | 0.03663 | 0.009768 | 0.019536 | 0.218559 | 0.144078 | 0.144078 | 0.144078 | 0.098901 | 0.098901 | 0 | 0.075 | 0.158576 | 2,472 | 94 | 78 | 26.297872 | 0.7125 | 0.235032 | 0 | 0.105263 | 0 | 0 | 0.296592 | 0 | 0 | 0 | 0.035144 | 0.010638 | 0.017544 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.438596 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
dbbed0d6be4eec307789dea4891f569c57c71e45 | 1,172 | py | Python | Libraries/Python/wxPython/v4.0.7/Windows/python3/wx/media.py | davidbrownell/Common_EnvironmentEx | 9e20b79b4de0cb472f65ac08b3de83f9ed8e2ca3 | [
"BSL-1.0"
] | null | null | null | Libraries/Python/wxPython/v4.0.7/Windows/python3/wx/media.py | davidbrownell/Common_EnvironmentEx | 9e20b79b4de0cb472f65ac08b3de83f9ed8e2ca3 | [
"BSL-1.0"
] | null | null | null | Libraries/Python/wxPython/v4.0.7/Windows/python3/wx/media.py | davidbrownell/Common_EnvironmentEx | 9e20b79b4de0cb472f65ac08b3de83f9ed8e2ca3 | [
"BSL-1.0"
] | null | null | null | # This file is generated by wxPython's SIP generator. Do not edit by hand.
#
# Copyright: (c) 2018 by Total Control Software
# License: wxWindows License
"""
The ``wx.media`` module provides a widget class that allows displaying various
types of media, such as video and audio files and streaming, using native
system components. The wxWidgets media classes are an optional part of the
build so it may not always be available on your build of wxPython.
"""
from ._media import *
import wx
EVT_MEDIA_LOADED = wx.PyEventBinder( wxEVT_MEDIA_LOADED )
EVT_MEDIA_STOP = wx.PyEventBinder( wxEVT_MEDIA_STOP )
EVT_MEDIA_FINISHED = wx.PyEventBinder( wxEVT_MEDIA_FINISHED )
EVT_MEDIA_STATECHANGED = wx.PyEventBinder( wxEVT_MEDIA_STATECHANGED )
EVT_MEDIA_PLAY = wx.PyEventBinder( wxEVT_MEDIA_PLAY )
EVT_MEDIA_PAUSE = wx.PyEventBinder( wxEVT_MEDIA_PAUSE )
MEDIABACKEND_DIRECTSHOW = "wxAMMediaBackend"
MEDIABACKEND_MCI = "wxMCIMediaBackend"
MEDIABACKEND_QUICKTIME = "wxQTMediaBackend"
MEDIABACKEND_GSTREAMER = "wxGStreamerMediaBackend"
MEDIABACKEND_REALPLAYER = "wxRealPlayerMediaBackend"
MEDIABACKEND_WMP10 = "wxWMP10MediaBackend"
| 37.806452 | 79 | 0.78157 | 145 | 1,172 | 6.103448 | 0.586207 | 0.054237 | 0.135593 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008114 | 0.158703 | 1,172 | 30 | 80 | 39.066667 | 0.889452 | 0.379693 | 0 | 0 | 1 | 0 | 0.167883 | 0.068613 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
dbda4b7492fe12326ae018ea4ebc1ec71736e4b1 | 3,012 | py | Python | juggle/io/HDF5DatasetGenerator.py | laygond/Deep-Tools | bc575d25c7c87f1230cf61df89c23f3370bb0187 | [
"MIT"
] | null | null | null | juggle/io/HDF5DatasetGenerator.py | laygond/Deep-Tools | bc575d25c7c87f1230cf61df89c23f3370bb0187 | [
"MIT"
] | null | null | null | juggle/io/HDF5DatasetGenerator.py | laygond/Deep-Tools | bc575d25c7c87f1230cf61df89c23f3370bb0187 | [
"MIT"
] | 2 | 2021-03-14T22:38:56.000Z | 2021-05-14T08:42:02.000Z | # HDF5DatasetGenerator.py
import h5py
import numpy as np
from tensorflow.keras.utils import to_categorical
class HDF5DatasetGenerator:
'''
    Generates batches of data from an HDF5 file for use within the
    Keras framework.
'''
def __init__(self, dbPath, batchSize, preprocessors = None,
aug = None, binarize = True, classes = 2):
        '''Configure the generator and open the HDF5 database.'''
# store the batch size, preprocessors, and data augmentor,
# whether or not the labels should be binarized, along with
# the total number of classes
self.batchSize = batchSize
self.preprocessors = preprocessors
self.aug = aug
self.binarize = binarize
self.classes = classes
# open the HDF5 database for reading and determine the total
# number of entries in the database
        self.db = h5py.File(dbPath, "r")
self.numImages = self.db["labels"].shape[0]
def generator(self, passes = np.inf):
# initialize the epoch count
epochs = 0
# keep looping infinitely -- the model will stop once we have
# reached the desired number of epochs
while epochs < passes:
# loop over the HDF5 dataset
for i in np.arange(0, self.numImages, self.batchSize):
                # extract the images and labels from the HDF5 dataset
images = self.db["images"][i: i + self.batchSize]
labels = self.db["labels"][i: i + self.batchSize]
# check to see if the labels should be binarized
if self.binarize:
labels = to_categorical(labels, self.classes)
# check to see if our preprocessors are not None
if self.preprocessors is not None:
# initialize the list of processed images
procImages = []
# loop over the images
for image in images:
# loop over the preprocessors and apply each
# to the image
for p in self.preprocessors:
image = p.preprocess(image)
# update the list of processed images
procImages.append(image)
# update the images array to be the processed
# images
images = np.array(procImages)
# if the data augmentor exists, apply it
if self.aug is not None:
(images, labels) = next(self.aug.flow(images, labels, batch_size = self.batchSize))
# yield a tuple of images and labels
yield (images, labels)
# increment the total number of epochs processed
epochs += 1
print(epochs)
def close(self):
        '''Close the HDF5 database file.'''
        # close the database
self.db.close() | 35.857143 | 103 | 0.533865 | 326 | 3,012 | 4.911043 | 0.371166 | 0.0406 | 0.026234 | 0.029981 | 0.074953 | 0.042473 | 0 | 0 | 0 | 0 | 0 | 0.007238 | 0.403718 | 3,012 | 84 | 104 | 35.857143 | 0.884187 | 0.331673 | 0 | 0 | 1 | 0 | 0.009188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0.057143 | 0.085714 | 0 | 0.2 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
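The batching loop at the heart of `generator` is independent of HDF5: it is a sliding window of size `batchSize` over the dataset. A stdlib-only sketch of the same pattern, with plain lists standing in for the `db["images"]` and `db["labels"]` datasets:

```python
def batches(images, labels, batch_size):
    """Yield (images, labels) slices of at most batch_size items,
    mirroring the np.arange(0, numImages, batchSize) loop above."""
    n = len(labels)
    for i in range(0, n, batch_size):
        yield images[i:i + batch_size], labels[i:i + batch_size]


imgs = list(range(10))        # stand-in for db["images"]
labs = [x % 2 for x in imgs]  # stand-in for db["labels"]
chunks = list(batches(imgs, labs, 4))
```

Note that the last chunk is allowed to be short (here 2 items), just as the HDF5 slice `self.db["images"][i: i + self.batchSize]` silently truncates at the end of the dataset.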
dbdcde0e392f6fc34e3c12bd00c2113762c8933b | 849 | py | Python | athumb/upload_handlers/gunicorn_eventlet.py | tahy/django-athumb | b0b5737e1afa9c6dfe1b27132e38b24e093f36cf | [
"BSD-3-Clause"
] | 10 | 2015-03-18T20:37:25.000Z | 2019-04-23T06:59:48.000Z | athumb/upload_handlers/gunicorn_eventlet.py | tahy/django-athumb | b0b5737e1afa9c6dfe1b27132e38b24e093f36cf | [
"BSD-3-Clause"
] | 8 | 2015-02-09T01:33:09.000Z | 2016-08-01T07:01:49.000Z | athumb/upload_handlers/gunicorn_eventlet.py | tahy/django-athumb | b0b5737e1afa9c6dfe1b27132e38b24e093f36cf | [
"BSD-3-Clause"
] | 15 | 2015-02-09T01:15:42.000Z | 2021-01-17T16:24:46.000Z | """
Upload handlers with small tweaks to work with gunicorn + eventlet async
workers. These should eventually become unnecessary as the supporting libraries
continue to improve.
"""
from django.core.files.uploadhandler import TemporaryFileUploadHandler
import eventlet
class EventletTmpFileUploadHandler(TemporaryFileUploadHandler):
"""
Uploading large files can cause a worker thread to hang long enough to
hit the timeout before the upload can be completed. Sleep long enough
to hand things back to the other threads to avoid a timeout.
"""
def receive_data_chunk(self, raw_data, start):
"""
Over-ridden method to circumvent the worker timeouts on large uploads.
"""
self.file.write(raw_data)
# CHANGED: This un-hangs us long enough to keep things rolling.
eventlet.sleep(0) | 40.428571 | 79 | 0.738516 | 111 | 849 | 5.612613 | 0.693694 | 0.048154 | 0.057785 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001484 | 0.206125 | 849 | 21 | 80 | 40.428571 | 0.922849 | 0.599529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
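The essential trick above is calling `sleep(0)` inside the per-chunk callback so the eventlet hub can schedule other green threads between chunks. A stdlib-only sketch of the same shape, with `time.sleep` standing in for `eventlet.sleep` and a hypothetical in-memory handler in place of Django's upload machinery:

```python
import time


class CooperativeHandler:
    """Accumulates chunks in a buffer, yielding control after each one."""

    def __init__(self):
        self.buffer = bytearray()

    def receive_data_chunk(self, raw_data, start):
        self.buffer.extend(raw_data)
        time.sleep(0)  # eventlet.sleep(0) in the real handler: hand control back


h = CooperativeHandler()
for chunk in (b"abc", b"def"):
    h.receive_data_chunk(chunk, 0)
```

With `eventlet`, `sleep(0)` is a cooperative yield point rather than a real pause, which is why it un-hangs the worker without adding meaningful latency.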
915c1f98e505e2da5ab9e1ec370aa49473148523 | 1,098 | py | Python | configuration.py | bioinfoUQAM/CASTOR_KRFE | 05ecf690b88b54d78a1b95183b94e265f7e51944 | [
"MIT"
] | 4 | 2020-01-13T09:32:45.000Z | 2022-03-19T07:21:56.000Z | configuration.py | DylanLebatteux/CASTOR_KRFE | c5da5ac189ec759fe5d4d6c234c479f5dd81fd3d | [
"MIT"
] | null | null | null | configuration.py | DylanLebatteux/CASTOR_KRFE | c5da5ac189ec759fe5d4d6c234c479f5dd81fd3d | [
"MIT"
] | 2 | 2019-06-12T18:12:07.000Z | 2020-08-08T02:28:15.000Z | # Import
import configparser
# Function to get the parameters from the configuration file
def getParameters(configuration_file):
# Initialize the parser
configurationParser = configparser.ConfigParser()
# Read the configuration file
configurationParser.read(configuration_file)
# Get the parameters
parameters = dict()
parameters["T"] = configurationParser.get("parameters", "T")
parameters["k_min"] = configurationParser.get("parameters", "k_min")
parameters["k_max"] = configurationParser.get("parameters", "k_max")
parameters["model_path"] = configurationParser.get("parameters", "model_path")
parameters["k_mers_path"] = configurationParser.get("parameters", "k_mers_path")
parameters["testing_fasta"] = configurationParser.get("parameters", "testing_fasta")
parameters["training_fasta"] = configurationParser.get("parameters", "training_fasta")
parameters["prediction_path"] = configurationParser.get("parameters", "prediction_path")
parameters["evaluation_mode"] = configurationParser.get("parameters", "evaluation_mode")
# Return the parameter dictionary
return parameters
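The same parser can be exercised without a file on disk via `read_string`, which is handy for testing `getParameters`-style code. A sketch using a small hypothetical config (note that `configparser` returns every value as a string, so numeric parameters such as `T` must be cast by the caller):

```python
import configparser

CONFIG_TEXT = """
[parameters]
T = 0.999
k_min = 1
k_max = 5
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)  # no file on disk needed

# Same access pattern as getParameters(): one get() call per named key
parameters = {
    name: parser.get("parameters", name)
    for name in ("T", "k_min", "k_max")
}
```

Also worth knowing: `configparser` lowercases option names by default (`optionxform`), which `get("parameters", "T")` handles transparently but raw dictionary access over the section would not.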
| 47.73913 | 89 | 0.787796 | 113 | 1,098 | 7.477876 | 0.292035 | 0.23432 | 0.340828 | 0.11716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088342 | 1,098 | 22 | 90 | 49.909091 | 0.844156 | 0.151184 | 0 | 0 | 0 | 0 | 0.28973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
916151917b41324a62a7396a67533a2f37db4902 | 671 | py | Python | ds/stack/stack.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | null | null | null | ds/stack/stack.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | null | null | null | ds/stack/stack.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | 1 | 2021-11-19T13:45:59.000Z | 2021-11-19T13:45:59.000Z | class Stack(object):
def __init__(self):
self.stack = []
def is_empty(self):
return not self.stack
def push(self, v):
self.stack.append(v)
def pop(self):
data = self.stack[-1]
del self.stack[-1]
return data
def peek(self):
return self.stack[-1]
def size_stack(self):
return len(self.stack)
if __name__ == "__main__":
s = Stack()
s.push(1)
s.push(2)
s.push(3)
print(s.size_stack())
print("Popped: ", s.pop())
print("Popped: ", s.pop())
print(s.size_stack())
print("Peek: ", s.peek())
print("Popped: ", s.pop())
print(s.size_stack()) | 19.171429 | 30 | 0.535022 | 94 | 671 | 3.638298 | 0.297872 | 0.184211 | 0.087719 | 0.131579 | 0.277778 | 0.175439 | 0.175439 | 0.175439 | 0 | 0 | 0 | 0.012658 | 0.293592 | 671 | 35 | 31 | 19.171429 | 0.708861 | 0 | 0 | 0.222222 | 0 | 0 | 0.056548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0.111111 | 0.407407 | 0.259259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
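`pop()` and `peek()` above raise `IndexError` when the stack is empty. A small guarded variant (a hypothetical extension, not part of the original class) that returns a default instead:

```python
class SafeStack:
    def __init__(self):
        self.stack = []

    def push(self, v):
        self.stack.append(v)

    def pop(self, default=None):
        # Return default instead of raising IndexError on an empty stack.
        return self.stack.pop() if self.stack else default

    def peek(self, default=None):
        return self.stack[-1] if self.stack else default


s = SafeStack()
s.push(10)
```

`list.pop()` already removes and returns the last element in one call, so the two-step `data = self.stack[-1]; del self.stack[-1]` of the original can be collapsed as shown here.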
9168ffa76bcff6577bca8997329e9616ad243b44 | 7,021 | py | Python | src/sage/combinat/species/sum_species.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 1,742 | 2015-01-04T07:06:13.000Z | 2022-03-30T11:32:52.000Z | src/sage/combinat/species/sum_species.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 66 | 2015-03-19T19:17:24.000Z | 2022-03-16T11:59:30.000Z | src/sage/combinat/species/sum_species.py | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 495 | 2015-01-10T10:23:18.000Z | 2022-03-24T22:06:11.000Z | """
Sum species
"""
#*****************************************************************************
# Copyright (C) 2008 Mike Hansen <mhansen@gmail.com>,
#
# Distributed under the terms of the GNU General Public License (GPL)
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# The full text of the GPL is available at:
#
# http://www.gnu.org/licenses/
#*****************************************************************************
from .species import GenericCombinatorialSpecies
from .structure import SpeciesStructureWrapper
from sage.structure.unique_representation import UniqueRepresentation
class SumSpeciesStructure(SpeciesStructureWrapper):
pass
class SumSpecies(GenericCombinatorialSpecies, UniqueRepresentation):
def __init__(self, F, G, min=None, max=None, weight=None):
"""
Returns the sum of two species.
EXAMPLES::
sage: S = species.PermutationSpecies()
sage: A = S+S
sage: A.generating_series().coefficients(5)
[2, 2, 2, 2, 2]
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F._check()
True
sage: F == loads(dumps(F))
True
TESTS::
sage: A = species.SingletonSpecies() + species.SingletonSpecies()
sage: B = species.SingletonSpecies() + species.SingletonSpecies()
sage: C = species.SingletonSpecies() + species.SingletonSpecies(min=2)
sage: A is B
True
sage: (A is C) or (A == C)
False
"""
self._F = F
self._G = G
self._state_info = [F, G]
GenericCombinatorialSpecies.__init__(self, min=None, max=None, weight=None)
_default_structure_class = SumSpeciesStructure
def left_summand(self):
"""
Returns the left summand of this species.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P*P
sage: F.left_summand()
Permutation species
"""
return self._F
def right_summand(self):
"""
Returns the right summand of this species.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P*P
sage: F.right_summand()
Product of (Permutation species) and (Permutation species)
"""
return self._G
def _name(self):
"""
Note that we use a function to return the name of this species
because we can't do it in the __init__ method due to it
requiring that self.left_summand() and self.right_summand()
already be unpickled.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F._name()
'Sum of (Permutation species) and (Permutation species)'
"""
return "Sum of (%s) and (%s)"%(self.left_summand(), self.right_summand())
def _structures(self, structure_class, labels):
"""
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F.structures([1,2]).list()
[[1, 2], [2, 1], [1, 2], [2, 1]]
"""
for res in self.left_summand().structures(labels):
yield structure_class(self, res, tag="left")
for res in self.right_summand().structures(labels):
yield structure_class(self, res, tag="right")
def _isotypes(self, structure_class, labels):
"""
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F.isotypes([1,2]).list()
[[2, 1], [1, 2], [2, 1], [1, 2]]
"""
for res in self._F.isotypes(labels):
yield structure_class(self, res, tag="left")
for res in self._G.isotypes(labels):
yield structure_class(self, res, tag="right")
def _gs(self, series_ring, base_ring):
"""
Returns the cycle index series of this species.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F.generating_series().coefficients(5)
[2, 2, 2, 2, 2]
"""
return (self.left_summand().generating_series(base_ring) +
self.right_summand().generating_series(base_ring))
def _itgs(self, series_ring, base_ring):
"""
Returns the isomorphism type generating series of this species.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F.isotype_generating_series().coefficients(5)
[2, 2, 4, 6, 10]
"""
return (self.left_summand().isotype_generating_series(base_ring) +
self.right_summand().isotype_generating_series(base_ring))
def _cis(self, series_ring, base_ring):
"""
Returns the generating series of this species.
EXAMPLES::
sage: P = species.PermutationSpecies()
sage: F = P + P
sage: F.cycle_index_series().coefficients(5)
[2*p[],
2*p[1],
2*p[1, 1] + 2*p[2],
2*p[1, 1, 1] + 2*p[2, 1] + 2*p[3],
2*p[1, 1, 1, 1] + 2*p[2, 1, 1] + 2*p[2, 2] + 2*p[3, 1] + 2*p[4]]
"""
return (self.left_summand().cycle_index_series(base_ring) +
self.right_summand().cycle_index_series(base_ring))
def weight_ring(self):
"""
Returns the weight ring for this species. This is determined by
asking Sage's coercion model what the result is when you add
elements of the weight rings for each of the operands.
EXAMPLES::
sage: S = species.SetSpecies()
sage: C = S+S
sage: C.weight_ring()
Rational Field
::
sage: S = species.SetSpecies(weight=QQ['t'].gen())
sage: C = S + S
sage: C.weight_ring()
Univariate Polynomial Ring in t over Rational Field
"""
return self._common_parent([self.left_summand().weight_ring(),
self.right_summand().weight_ring()])
def _equation(self, var_mapping):
"""
Returns the right hand side of an algebraic equation satisfied by
this species. This is a utility function called by the
algebraic_equation_system method.
EXAMPLES::
sage: X = species.SingletonSpecies()
sage: S = X + X
sage: S.algebraic_equation_system()
[node1 + (-2*z)]
"""
return sum(var_mapping[operand] for operand in self._state_info)
#Backward compatibility
SumSpecies_class = SumSpecies
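The `_structures` and `_isotypes` generators above share one idea: the structures of F + G are the structures of F tagged "left" followed by the structures of G tagged "right". A plain-Python sketch of that pattern, with no Sage dependency (the `sum_structures` name is hypothetical):

```python
def sum_structures(left_structs, right_structs):
    """Yield ('left', s) for each left-summand structure, then
    ('right', s) for each right one — the tagging performed by
    SumSpecies._structures."""
    for s in left_structs:
        yield ("left", s)
    for s in right_structs:
        yield ("right", s)


# For P + P on labels [1, 2], each summand contributes both permutations,
# which is why F.structures([1,2]).list() above has four entries.
combined = list(sum_structures([[1, 2], [2, 1]], [[1, 2], [2, 1]]))
```

This also explains why the generating-series coefficients of P + P are simply doubled: every count is the sum of the two summands' counts.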
| 31.484305 | 83 | 0.551204 | 812 | 7,021 | 4.641626 | 0.225369 | 0.025206 | 0.076943 | 0.071637 | 0.439904 | 0.388697 | 0.329796 | 0.254444 | 0.238525 | 0.188644 | 0 | 0.01701 | 0.321749 | 7,021 | 222 | 84 | 31.626126 | 0.774465 | 0.536676 | 0 | 0.093023 | 0 | 0 | 0.016422 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.255814 | false | 0.023256 | 0.069767 | 0 | 0.581395 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
91711548834603d786cd09e4824c7fb26eeaec86 | 197 | py | Python | Old/Simulations/old/py_tests.py | mitgobla/Traffic-Imrovement | c94bd9f9925d22b50b0295866a8b57fb3edb2c0d | [
"MIT"
] | null | null | null | Old/Simulations/old/py_tests.py | mitgobla/Traffic-Imrovement | c94bd9f9925d22b50b0295866a8b57fb3edb2c0d | [
"MIT"
] | 17 | 2019-02-08T21:31:18.000Z | 2019-05-02T08:07:56.000Z | Old/Simulations/old/py_tests.py | mitgobla/Traffic-Imrovement | c94bd9f9925d22b50b0295866a8b57fb3edb2c0d | [
"MIT"
] | 2 | 2019-12-11T15:44:04.000Z | 2020-03-15T23:16:11.000Z | array = [0,0,1,1,1,1,2,2,2,2,3,3]
indexEqualCurrentUsage = []
for index in range(len(array)):
if array[index] == 1:
indexEqualCurrentUsage.append(index)
print(indexEqualCurrentUsage) | 21.888889 | 44 | 0.685279 | 29 | 197 | 4.655172 | 0.482759 | 0.044444 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077844 | 0.152284 | 197 | 9 | 45 | 21.888889 | 0.730539 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
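The `range(len(...))` loop above can be written as a list comprehension over `enumerate`, which pairs each value with its index directly:

```python
array = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3]
# enumerate yields (index, value) pairs, so no range(len(...)) is needed
matching = [i for i, v in enumerate(array) if v == 1]
```

The comprehension produces the same index list as the original loop.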
91806f8d341660da6ea0b3ed7c3216829ddebd2d | 328 | py | Python | exercises/CursoemVideo/ex018.py | arthurguerra/cursoemvideo-python | 37f45ec25f422673fa9bbeee682e098f14d8ceab | [
"MIT"
] | null | null | null | exercises/CursoemVideo/ex018.py | arthurguerra/cursoemvideo-python | 37f45ec25f422673fa9bbeee682e098f14d8ceab | [
"MIT"
] | null | null | null | exercises/CursoemVideo/ex018.py | arthurguerra/cursoemvideo-python | 37f45ec25f422673fa9bbeee682e098f14d8ceab | [
"MIT"
] | null | null | null | import math
a = float(input('Digite um ângulo: '))
seno = math.sin(math.radians(a))
print('O ângulo de {} possui as seguintes propriedades:\nSeno: {:.2f}'.format(a, seno))
cosseno = math.cos(math.radians(a))
print('Cosseno: {:.2f}'.format(cosseno))
tangente = math.tan(math.radians(a))
print('Tangente: {:.2f}'.format(tangente)) | 41 | 87 | 0.695122 | 49 | 328 | 4.653061 | 0.510204 | 0.144737 | 0.157895 | 0.223684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010067 | 0.091463 | 328 | 8 | 88 | 41 | 0.755034 | 0 | 0 | 0 | 0 | 0 | 0.337386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
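The degree-to-radian conversion above works for any angle; for 30° the three results can be checked against the known closed forms (sin 30° = 1/2, cos 30° = √3/2, tan 30° = 1/√3). A sketch with a fixed angle instead of `input`:

```python
import math

angle = 30.0
rad = math.radians(angle)  # same conversion used in the exercise
seno = math.sin(rad)
cosseno = math.cos(rad)
tangente = math.tan(rad)
```

Passing degrees straight to `math.sin` is the classic bug this exercise guards against — the trig functions in `math` always expect radians.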
9186beab40e07d5e1ed38a72c2f3c64d74f5183f | 255 | py | Python | etl/config.py | mkoeppel/Tweet_Pipe | 2c2d8be51a566b074ac969ca28a64eb144e7e940 | [
"MIT"
] | null | null | null | etl/config.py | mkoeppel/Tweet_Pipe | 2c2d8be51a566b074ac969ca28a64eb144e7e940 | [
"MIT"
] | null | null | null | etl/config.py | mkoeppel/Tweet_Pipe | 2c2d8be51a566b074ac969ca28a64eb144e7e940 | [
"MIT"
] | 1 | 2021-05-25T10:12:24.000Z | 2021-05-25T10:12:24.000Z | ### do not use these settings and passwords for production!
# these settings are required to connect the postgres-db to metabase
POSTGRES_USER='postgres'
POSTGRES_PASSWORD='1234'
POSTGRES_HOST='postgresdb'
POSTGRES_PORT='5432'
POSTGRES_DB_NAME='postgres'
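For production, settings like these should come from the environment rather than a committed module, as the warning above notes. A sketch of one common approach — `os.environ` lookups with the development values above as fallbacks (the `pg_settings` helper name is hypothetical):

```python
import os


def pg_settings():
    """Environment variables override the committed development defaults."""
    return {
        "user": os.environ.get("POSTGRES_USER", "postgres"),
        "password": os.environ.get("POSTGRES_PASSWORD", "1234"),
        "host": os.environ.get("POSTGRES_HOST", "postgresdb"),
        "port": os.environ.get("POSTGRES_PORT", "5432"),
        "dbname": os.environ.get("POSTGRES_DB_NAME", "postgres"),
    }


cfg = pg_settings()
```

This keeps local development friction-free while ensuring real credentials never land in version control.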
| 31.875 | 68 | 0.815686 | 36 | 255 | 5.611111 | 0.694444 | 0.128713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034935 | 0.101961 | 255 | 7 | 69 | 36.428571 | 0.847162 | 0.478431 | 0 | 0 | 0 | 0 | 0.265625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
91ad018b2a413658d61956a96f83f1c803e64d63 | 281 | py | Python | applications/mupiopolitico/migrations/0001_initial.py | PEM-Humboldt/visor-geografico-I2d-backend | 5b5f4e0eee07e14bd8124cec624b5a5004c4d168 | [
"MIT"
] | null | null | null | applications/mupiopolitico/migrations/0001_initial.py | PEM-Humboldt/visor-geografico-I2d-backend | 5b5f4e0eee07e14bd8124cec624b5a5004c4d168 | [
"MIT"
] | null | null | null | applications/mupiopolitico/migrations/0001_initial.py | PEM-Humboldt/visor-geografico-I2d-backend | 5b5f4e0eee07e14bd8124cec624b5a5004c4d168 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-04-08 19:01
from django.db import migrations
from django.contrib.postgres.operations import CreateExtension
class Migration(migrations.Migration):
dependencies = [
]
operations = [
CreateExtension(name='unaccent'),
]
| 20.071429 | 62 | 0.711744 | 33 | 281 | 6.060606 | 0.757576 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066372 | 0.19573 | 281 | 13 | 63 | 21.615385 | 0.818584 | 0.160142 | 0 | 0 | 1 | 0 | 0.034188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
91b49ad57039c4a41f1dfd86192c85613d26a33f | 522 | py | Python | delay.py | GregGrigorop/Decorators | 7e7a026e21afe944e20af983e3f36b4977e11a9c | [
"MIT"
] | null | null | null | delay.py | GregGrigorop/Decorators | 7e7a026e21afe944e20af983e3f36b4977e11a9c | [
"MIT"
] | null | null | null | delay.py | GregGrigorop/Decorators | 7e7a026e21afe944e20af983e3f36b4977e11a9c | [
"MIT"
] | null | null | null | from functools import wraps
from time import sleep # the sleep function suspends execution of the current thread for a given
def delay(time): # number of seconds (it pauses code execution for a certain amount of time)
def inner(fn):
@wraps(fn)
def wrapper(*args,**kwargs):
print(fn.__doc__)
print(f"Waiting {time}s before running {fn.__name__}")
sleep(time)
return fn(*args,**kwargs)
return wrapper
return inner
@delay(3)
def say_hi():
"""this function will greet you"""
return "hi"
print(say_hi()) | 27.473684 | 96 | 0.718391 | 82 | 522 | 4.45122 | 0.560976 | 0.021918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002309 | 0.170498 | 522 | 19 | 97 | 27.473684 | 0.840647 | 0.335249 | 0 | 0 | 0 | 0 | 0.134897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.625 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
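The `@wraps(fn)` line in the decorator above is what keeps `fn.__name__` and `fn.__doc__` intact — which is why `print(fn.__doc__)` inside `wrapper` shows the real docstring. A sketch contrasting a wrapped and an unwrapped decorator:

```python
from functools import wraps


def with_wraps(fn):
    @wraps(fn)  # copies __name__, __doc__, etc. from fn onto wrapper
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper


def without_wraps(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper


@with_wraps
def greet():
    """say hello"""
    return "hi"


@without_wraps
def greet2():
    """say hello"""
    return "hi"
```

Without `@wraps`, introspection (and the f-string `{fn.__name__}` in the delay decorator) would report the generic name `wrapper` instead of the decorated function's own name.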
91b69811715573f26b6ba7e1da971f7da0031871 | 100 | py | Python | tokenfile.py | Vexs/DivisionTalentBot | 3211e55b3267bada874355dbbbf41c4fedb9a57d | [
"MIT"
] | null | null | null | tokenfile.py | Vexs/DivisionTalentBot | 3211e55b3267bada874355dbbbf41c4fedb9a57d | [
"MIT"
] | null | null | null | tokenfile.py | Vexs/DivisionTalentBot | 3211e55b3267bada874355dbbbf41c4fedb9a57d | [
"MIT"
] | null | null | null | token = "some_valid_token"
#Looks like MjM4NDk0NzU2NTIxMzc3Nzky.CunGFQ.wUILz7z6HoJzVeq6pyHPmVgQgV4
| 25 | 71 | 0.87 | 9 | 100 | 9.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086022 | 0.07 | 100 | 3 | 72 | 33.333333 | 0.827957 | 0.7 | 0 | 0 | 0 | 0 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
91b7a90f853567cb6b2d07a2a60f3fcf09468e00 | 2,536 | py | Python | mountaintools/cairioserver/examples/basic_usage.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | mountaintools/cairioserver/examples/basic_usage.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | mountaintools/cairioserver/examples/basic_usage.py | tjd2002/spikeforest2 | 2e393564b858b2995aa2ccccd9bd73065681b5de | [
"Apache-2.0"
] | null | null | null | from cairio import client as ca
print('------------------------------------------------')
# Local key/value store for associating relatively short strings (<=80 characters) with arbitrary keys (strings or dicts)
# Setting values (these should be short strings, <=80 characters)
ca.setValue(key='some-key1', value='hello 1')
ca.setValue(key=dict(name='some_name', number=2), value='hello 2')
# Getting values
val1 = ca.getValue(key='some-key1')
val2 = ca.getValue(key=dict(name='some_name', number=2))
print(val1)
print(val2)
print('------------------------------------------------')
# Setting password-protected values
ca.setValue(key='some_key2', password='my_password', value='the-secret-*y$#a')
# Retrieving password-protected values
print(ca.getValue(key='some_key2', password='my_password'))
print('------------------------------------------------')
# Local storage of data and files, retrievable by SHA-1 hash
path = ca.saveText('This is some text', basename='test.txt')
print(path)
# Output: sha1://482cb0cfcbed6740a2bcb659c9ccc22a4d27b369/test.txt
# Later we can use this to retrieve the text
txt = ca.loadText(path=path)
print(txt)
# ... or retrieve the path to a local file containing the text
fname = ca.realizeFile(path=path)
print(fname)
# Output: /tmp/sha1-cache/4/82/482cb0cfcbed6740a2bcb659c9ccc22a4d27b369
# Or we can store some large text by key and retrieve it later
ca.saveText(key=dict(name='key-for-repeating-text'),
text='some large repeating text'*100)
txt = ca.loadText(key=dict(name='key-for-repeating-text'))
print(len(txt)) # Output: 2500
print('------------------------------------------------')
# Similarly we can store python dicts via json content
path = ca.saveObject(dict(some='object'), basename='object.json')
print(path)
# Output: sha1://b77fdda467b03d7a0c3e06f6f441f689ac46e817/object.json
retrieved_object = ca.loadObject(path=path)
print(retrieved_object)
# Or store objects by key
ca.saveObject(object=dict(some_other='object'), key=dict(some='key'))
obj = ca.loadObject(key=dict(some='key'))
print(obj)
print('------------------------------------------------')
# You can do the same with files
with open('test___.txt', 'w') as f:
f.write('some file content')
path = ca.saveFile('test___.txt')
print(path)
# Output: sha1://ee025361a15e3e8074e9c0b44b4f98aabc829b3d/test___.txt
# Then load the text of the file at a later time
txt = ca.loadText(path=path)
print(txt)
# REMOTE DATABASE
# The interesting part comes when we connect to a remote cairio database
| 32.101266 | 121 | 0.673502 | 343 | 2,536 | 4.927114 | 0.361516 | 0.024852 | 0.026036 | 0.033728 | 0.16568 | 0.16568 | 0.100592 | 0 | 0 | 0 | 0 | 0.053635 | 0.11041 | 2,536 | 78 | 122 | 32.512821 | 0.695479 | 0.401814 | 0 | 0.324324 | 0 | 0 | 0.340241 | 0.18984 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.054054 | 0.027027 | 0 | 0.027027 | 0.459459 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 2 |
91da62abdf58a681160af62db0a656fc84797afd | 3,940 | py | Python | gale/databases/mongoConnect.py | adamrpah/GALE | 94ab2613c5d53ea471f664a75c7d780a2689302f | [
"WTFPL"
] | null | null | null | gale/databases/mongoConnect.py | adamrpah/GALE | 94ab2613c5d53ea471f664a75c7d780a2689302f | [
"WTFPL"
] | null | null | null | gale/databases/mongoConnect.py | adamrpah/GALE | 94ab2613c5d53ea471f664a75c7d780a2689302f | [
"WTFPL"
] | null | null | null | '''
File: mongoConnect.py
Author: Adam Pah
Description:
Handle the mongo connections
'''
import sys
import random
from pymongo import MongoClient
from bson.objectid import ObjectId
import bson
class MongoConnection(object):
def __init__(self, cxnSettings):
self.settings = cxnSettings
self.mongoURI = self._constructURI()
self.connect()
def _constructURI(self):
'''
Construct the mongo URI
'''
mongoURI = 'mongodb://'
#User/password handling
if 'user' in self.settings and 'password' in self.settings:
mongoURI += self.settings['user'] + ':' + self.settings['password'] + '@'
elif 'user' in self.settings:
print('Missing password for given user, proceeding without either')
elif 'password' in self.settings:
print('Missing user for given password, proceeding without either')
#Host and port
try:
mongoURI += self.settings['host'] + ':'
except KeyError:
print('Missing the hostname. Cannot connect without host')
sys.exit()
try:
mongoURI += str(self.settings['port'])
except KeyError:
print('Missing the port. Substituting default port of 27017')
mongoURI += str('27017')
return mongoURI
def _formatId(self, bid):
'''
Checks an Id to see if it is in ObjectId type.
If not, returns as ObjectId type
input:
bson_id
output:
bson_id (ObjectId)
'''
##Check the type
if not isinstance(bid, ObjectId):
bid = ObjectId(bid)
return bid
def connect(self):
'''
Establish the connection, database, and collection
'''
self.connection = MongoClient(self.mongoURI)
#########
try:
self.db = self.connection[self.settings['db']]
except KeyError:
print('Must specify a database as a "db" key in the settings file')
sys.exit()
#########
try:
self.collection = self.db[self.settings['collection']]
except KeyError:
print("Should have a collection. Starting a collection in database for current connection as test")
self.collection = self.db['test']
def tearDown(self):
'''
Closes the connection
'''
self.connection.close()
def pullIds(self, query={}):
'''
Pulls all document Ids and returns as a shuffled list
'''
ids = [tdoc['_id'] for tdoc in self.collection.find(query, {'_id':1})]
random.shuffle(ids)
return ids
def checkIdExistence(self, bid, field='_id'):
'''
Checks for the Existence of an Id
input:
Document Id (as ObjectId or string)
output:
True for existence
False for non-existence
'''
##Check the type
bid = self._formatId(bid)
##Perform the check
return self.collection.find_one({field: bid}) is not None
def checkFieldExistence(self, key, value):
'''
Checks for the existence of a document with a specific key, value pair
input:
Document Key
Specific Value
output:
True for existence
False for non-existence
'''
return self.collection.find_one({key: value}) is not None
def pullDocument(self, bid, field='_id'):
'''
Pulls a document and returns a dictionary, if no document
returns none
input:
bson_id
output:
document
'''
bid = self._formatId(bid)
tdoc = self.collection.find_one({field: bid})
return tdoc
| 29.185185 | 110 | 0.557107 | 423 | 3,940 | 5.144208 | 0.300236 | 0.060662 | 0.025735 | 0.028952 | 0.177849 | 0.094669 | 0.068015 | 0.038603 | 0 | 0 | 0 | 0.00429 | 0.349239 | 3,940 | 134 | 111 | 29.402985 | 0.844384 | 0.020305 | 0 | 0.272727 | 0 | 0 | 0.169271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.075758 | 0.075758 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
37d4ca5a60c1fa5cd5d8faf75c0cce64c91f163a | 109 | py | Python | ttt.py | maxsheer/xprs | 4776efad681c5d262e7611ec0b47180e8c68d8b3 | [
"MIT"
] | null | null | null | ttt.py | maxsheer/xprs | 4776efad681c5d262e7611ec0b47180e8c68d8b3 | [
"MIT"
] | null | null | null | ttt.py | maxsheer/xprs | 4776efad681c5d262e7611ec0b47180e8c68d8b3 | [
"MIT"
] | null | null | null | from __future__ import division
t = 0
while True:
t += 1
if t*(t+1)/2 >= 6002:
break
print(t)
| 10.9 | 31 | 0.568807 | 19 | 109 | 3.052632 | 0.736842 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.302752 | 109 | 9 | 32 | 12.111111 | 0.657895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37da1dcfa50656e068c8015ab5df7515385c41b6 | 11,449 | py | Python | pysnmp-with-texts/RBN-SYS-SECURITY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/RBN-SYS-SECURITY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/RBN-SYS-SECURITY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module RBN-SYS-SECURITY-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/RBN-SYS-SECURITY-MIB
# Produced by pysmi-0.3.4 at Wed May 1 14:53:26 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, ObjectIdentifier, OctetString = mibBuilder.importSymbols("ASN1", "Integer", "ObjectIdentifier", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ConstraintsUnion, SingleValueConstraint, ValueSizeConstraint, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ConstraintsUnion", "SingleValueConstraint", "ValueSizeConstraint", "ValueRangeConstraint")
CounterBasedGauge64, = mibBuilder.importSymbols("HCNUM-TC", "CounterBasedGauge64")
rbnModules, = mibBuilder.importSymbols("RBN-SMI", "rbnModules")
RbnUnsigned64, = mibBuilder.importSymbols("RBN-TC", "RbnUnsigned64")
ModuleCompliance, NotificationGroup, ObjectGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup", "ObjectGroup")
MibIdentifier, ModuleIdentity, Bits, Gauge32, NotificationType, TimeTicks, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter32, Counter64, iso, Unsigned32, IpAddress, Integer32, ObjectIdentity = mibBuilder.importSymbols("SNMPv2-SMI", "MibIdentifier", "ModuleIdentity", "Bits", "Gauge32", "NotificationType", "TimeTicks", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter32", "Counter64", "iso", "Unsigned32", "IpAddress", "Integer32", "ObjectIdentity")
DisplayString, TextualConvention, DateAndTime = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention", "DateAndTime")
rbnSysSecurityMib = ModuleIdentity((1, 3, 6, 1, 4, 1, 2352, 5, 54))
rbnSysSecurityMib.setRevisions(('2009-11-09 18:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: rbnSysSecurityMib.setRevisionsDescriptions(('Initial version',))
if mibBuilder.loadTexts: rbnSysSecurityMib.setLastUpdated('200911091800Z')
if mibBuilder.loadTexts: rbnSysSecurityMib.setOrganization('Ericsson Inc.')
if mibBuilder.loadTexts: rbnSysSecurityMib.setContactInfo(' Ericsson Inc. 100 Headquarters Drive San Jose, CA 95134 USA Phone: +1 408 750 5000 Fax: +1 408 750 5599 ')
if mibBuilder.loadTexts: rbnSysSecurityMib.setDescription('This MIB module defines attributes and notifications related to system and network level security issues. All mib objects defined in the module are viewed within the context identified in the SNMP protocol (i.e. the community string in v1/v2c or the contextName in v3). ')
rbnSysSecNotifications = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 0))
rbnSysSecObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1))
rbnSysSecConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2))
rbnSysSecThresholdObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 1))
rbnSysSecNotifyEnable = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 1, 1), Bits().clone(namedValues=NamedValues(("maliciousPkt", 0)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: rbnSysSecNotifyEnable.setStatus('current')
if mibBuilder.loadTexts: rbnSysSecNotifyEnable.setDescription('The bit mask to enable/disable notifications for crossing specific threshold.')
rbnMeasurementInterval = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 1, 2), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(1, 3600)).clone(60)).setUnits('seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: rbnMeasurementInterval.setStatus('current')
if mibBuilder.loadTexts: rbnMeasurementInterval.setDescription('Data is sampled at the start and end of a specified interval. The difference between the start and end values |end - start| is called the delta value. When setting the interval, care should be taken that the interval should be short enough that the sampled variable is very unlikely to increase or decrease by more than range of the variable. ')
rbnMaliciousPktsThresholdHi = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 1, 3), RbnUnsigned64()).setUnits('Packets').setMaxAccess("readwrite")
if mibBuilder.loadTexts: rbnMaliciousPktsThresholdHi.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktsThresholdHi.setDescription('When the current sampling interval delta value of the malicious packets counter is greater than or equal to this threshold, and the delta value at the last sampling interval was less than this threshold, a single high threshold exceeded event will be generated. A single high threshold exceeded event will also be generated if the first sampling interval delta value of the malicious IP packets counter is greater than or equal to this threshold. After a high threshold exceeded event is generated, another such event will not be generated until the delta value falls below this threshold and reaches the rbnMaliciousPktsThresholdLow, generating a low threshold exceeded event. In other words there cannot be successive high threshold events without an intervening low threshold event. ')
rbnMaliciousPktsThresholdLow = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 1, 4), RbnUnsigned64()).setUnits('Packets').setMaxAccess("readwrite")
if mibBuilder.loadTexts: rbnMaliciousPktsThresholdLow.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktsThresholdLow.setDescription('When the current sampling interval delta value of the malicious packets counter is less than or equal to this threshold, and the delta value at the last sampling interval was greater than this threshold, a single low threshold exceeded event will be generated. In addition, a high threshold exceeded event must occur before a low threshold exceeded event can be generated. ')
rbnSysSecStatsObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 2))
rbnMaliciousPktsCounter = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 2, 1), Counter64()).setUnits('Packets').setMaxAccess("readonly")
if mibBuilder.loadTexts: rbnMaliciousPktsCounter.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktsCounter.setDescription('A count of all malicious pkts. This includes but is not limited to malformed IP packets, malformed layer 4 IP, packets filtered by ACLs for specific faults, IP packets identified as attempting to spoof a system, and IP packets which failed reassembly.')
rbnMaliciousPktsDelta = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 2, 2), CounterBasedGauge64()).setUnits('packets').setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: rbnMaliciousPktsDelta.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktsDelta.setDescription('The delta value of rbnMaliciousPktsCounter at the most recently completed measurement interval.')
rbnSysSecNotifyObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 4))
rbnThresholdNotifyTime = MibScalar((1, 3, 6, 1, 4, 1, 2352, 5, 54, 1, 4, 1), DateAndTime()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: rbnThresholdNotifyTime.setStatus('current')
if mibBuilder.loadTexts: rbnThresholdNotifyTime.setDescription('The DateAndTime of the notification.')
rbnMaliciousPktThresholdHiExceeded = NotificationType((1, 3, 6, 1, 4, 1, 2352, 5, 54, 0, 1))
if mibBuilder.loadTexts: rbnMaliciousPktThresholdHiExceeded.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktThresholdHiExceeded.setDescription('This notification signifies that one of the delta values is equal to or greater than the corresponding high threshold value. The specific delta value is the last object in the notification varbind list. ')
rbnMaliciousPktThresholdLowExceeded = NotificationType((1, 3, 6, 1, 4, 1, 2352, 5, 54, 0, 2)).setObjects(("RBN-SYS-SECURITY-MIB", "rbnThresholdNotifyTime"))
if mibBuilder.loadTexts: rbnMaliciousPktThresholdLowExceeded.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktThresholdLowExceeded.setDescription('This notification signifies that one of the delta values is less than or equal to the corresponding low threshold value. The specific delta value is the last object in the notification varbind list. ')
rbnSysSecCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 1))
rbnSysSecGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 2))
rbnMaliciousPktGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 2, 1)).setObjects(("RBN-SYS-SECURITY-MIB", "rbnSysSecNotifyEnable"), ("RBN-SYS-SECURITY-MIB", "rbnMeasurementInterval"), ("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktsThresholdHi"), ("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktsThresholdLow"), ("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktsCounter"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
rbnMaliciousPktGroup = rbnMaliciousPktGroup.setStatus('current')
if mibBuilder.loadTexts: rbnMaliciousPktGroup.setDescription('Set of objects for the group.')
rbnSysSecNotifyObjectsGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 2, 4)).setObjects(("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktsDelta"), ("RBN-SYS-SECURITY-MIB", "rbnThresholdNotifyTime"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
rbnSysSecNotifyObjectsGroup = rbnSysSecNotifyObjectsGroup.setStatus('current')
if mibBuilder.loadTexts: rbnSysSecNotifyObjectsGroup.setDescription('Set of objects for the group.')
rbnSysSecNotificationGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 2, 5)).setObjects(("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktThresholdHiExceeded"), ("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktThresholdLowExceeded"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
rbnSysSecNotificationGroup = rbnSysSecNotificationGroup.setStatus('current')
if mibBuilder.loadTexts: rbnSysSecNotificationGroup.setDescription('Set of notifications for the group.')
rbnSysSecCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 2352, 5, 54, 2, 1, 1)).setObjects(("RBN-SYS-SECURITY-MIB", "rbnMaliciousPktGroup"), ("RBN-SYS-SECURITY-MIB", "rbnSysSecNotifyObjectsGroup"), ("RBN-SYS-SECURITY-MIB", "rbnSysSecNotificationGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
rbnSysSecCompliance = rbnSysSecCompliance.setStatus('current')
if mibBuilder.loadTexts: rbnSysSecCompliance.setDescription('The compliance statement for support of this mib module.')
mibBuilder.exportSymbols("RBN-SYS-SECURITY-MIB", rbnMeasurementInterval=rbnMeasurementInterval, rbnSysSecConformance=rbnSysSecConformance, rbnMaliciousPktThresholdHiExceeded=rbnMaliciousPktThresholdHiExceeded, rbnSysSecNotifications=rbnSysSecNotifications, rbnSysSecCompliances=rbnSysSecCompliances, rbnSysSecGroups=rbnSysSecGroups, rbnSysSecNotifyObjectsGroup=rbnSysSecNotifyObjectsGroup, rbnMaliciousPktGroup=rbnMaliciousPktGroup, rbnSysSecObjects=rbnSysSecObjects, rbnMaliciousPktsThresholdHi=rbnMaliciousPktsThresholdHi, rbnSysSecCompliance=rbnSysSecCompliance, rbnSysSecNotifyObjects=rbnSysSecNotifyObjects, rbnSysSecThresholdObjects=rbnSysSecThresholdObjects, rbnSysSecNotificationGroup=rbnSysSecNotificationGroup, PYSNMP_MODULE_ID=rbnSysSecurityMib, rbnSysSecNotifyEnable=rbnSysSecNotifyEnable, rbnMaliciousPktsCounter=rbnMaliciousPktsCounter, rbnMaliciousPktsThresholdLow=rbnMaliciousPktsThresholdLow, rbnSysSecStatsObjects=rbnSysSecStatsObjects, rbnMaliciousPktThresholdLowExceeded=rbnMaliciousPktThresholdLowExceeded, rbnThresholdNotifyTime=rbnThresholdNotifyTime, rbnSysSecurityMib=rbnSysSecurityMib, rbnMaliciousPktsDelta=rbnMaliciousPktsDelta)
| 144.924051 | 1,156 | 0.793781 | 1,325 | 11,449 | 6.857358 | 0.236226 | 0.035659 | 0.062404 | 0.009685 | 0.371671 | 0.244442 | 0.22188 | 0.201189 | 0.18578 | 0.18435 | 0 | 0.050106 | 0.097039 | 11,449 | 78 | 1,157 | 146.782051 | 0.828787 | 0.029173 | 0 | 0.072464 | 0 | 0.115942 | 0.390059 | 0.036377 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37ed4cbc583f39e843cd0cad04203be8b22514d2 | 1,816 | py | Python | IBMAlchemyAnalysis.py | rPortelas/sentimentAnalysis | 4c843edafd6b8311009ad3ee425624ee7ed6273a | [
"Apache-2.0"
] | null | null | null | IBMAlchemyAnalysis.py | rPortelas/sentimentAnalysis | 4c843edafd6b8311009ad3ee425624ee7ed6273a | [
"Apache-2.0"
] | null | null | null | IBMAlchemyAnalysis.py | rPortelas/sentimentAnalysis | 4c843edafd6b8311009ad3ee425624ee7ed6273a | [
"Apache-2.0"
] | null | null | null | from __future__ import division
import sys
import json
from os.path import join, dirname
from watson_developer_cloud import AlchemyLanguageV1
from time import sleep
import csv
#return emotion guesses for paragraphs
def alchemyAnalysis(paragraphs):
alchemy_language = AlchemyLanguageV1(api_key='7ee2d55604ee59df13c9a315603405291131cf6e')
guesses = []
alchemyOutputs = []
#load alchemyOutputs
with open('alchemyRecords.txt', 'r') as f:
alchemyOutputs = json.load(f)
#sentiment analysis on samples
i = 0
for paragraph in paragraphs:
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#WARNING Right now only looking at 50 first paragraphs, TODO finish labelling !!!!!!!!!!!!!!!!!!!!!!!!!!
if (i >= 50):
break
##print(json.dumps(alchemy_language.emotion(text=paragraph), indent=2))
##sentimentDict = alchemy_language.emotion(text=paragraph).get("docEmotions")
##alchemyOutputs.append(alchemy_language.emotion(text=paragraph).get("docEmotions"))
guess = max(alchemyOutputs[i].iterkeys(), key=(lambda key: alchemyOutputs[i][key]))
#disgust is ignored
if (guess == "disgust"):
alchemyOutputs[i][guess] = 0.0
guess = max(alchemyOutputs[i].iterkeys(), key=(lambda key: alchemyOutputs[i][key]))
#if the greatest detected emotion is < 0.5 sample is seen as neutral
if (float(alchemyOutputs[i][guess]) < .55):
guesses.append("Neutral")
elif (guess == "joy"):
guesses.append("Happiness")
elif (guess == "fear"):
guesses.append("Fear")
elif (guess == "sadness"):
guesses.append("Sadness")
elif (guess == "anger"):
guesses.append("Anger")
else:
print("WTF")
print(guess)
print(i)
i = i +1
#writing alchemy output in file
#with open('alchemyRecords.txt', 'w') as f:
# json.dump(alchemyOutputs, f)
return guesses
| 31.859649 | 106 | 0.676762 | 215 | 1,816 | 5.665116 | 0.455814 | 0.073892 | 0.054187 | 0.064039 | 0.20936 | 0.180624 | 0.180624 | 0.100164 | 0.100164 | 0.100164 | 0 | 0.027671 | 0.144273 | 1,816 | 56 | 107 | 32.428571 | 0.756113 | 0.376652 | 0 | 0.054054 | 0 | 0 | 0.107527 | 0.035842 | 0 | 0 | 0 | 0.017857 | 0 | 0 | null | null | 0 | 0.189189 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37f228ed86a640c1fc21cc2eb431854b7728d861 | 3,324 | py | Python | test/integration/test_index.py | aerospike/aerospike-tools-backup | 62a266e7525f6088171dbee36eeecfc71c47d2bf | [
"Apache-2.0",
"MIT"
] | 18 | 2015-09-25T21:24:33.000Z | 2022-03-09T16:01:40.000Z | test/integration/test_index.py | aerospike/aerospike-tools-backup | 62a266e7525f6088171dbee36eeecfc71c47d2bf | [
"Apache-2.0",
"MIT"
] | 7 | 2015-11-10T16:12:01.000Z | 2019-04-12T05:19:28.000Z | test/integration/test_index.py | aerospike/aerospike-tools-backup | 62a266e7525f6088171dbee36eeecfc71c47d2bf | [
"Apache-2.0",
"MIT"
] | 20 | 2015-10-21T20:05:50.000Z | 2021-11-23T01:16:37.000Z | # coding=UTF-8
"""
Tests the representation of indexes in backup files.
"""
import lib
SET_NAMES = [None] + lib.index_variations(63)
INDEX_NAMES = ["normal"] + lib.index_variations(63)
INDEX_PATHS = ["normal"] + lib.index_variations(14)
def create_indexes(create_func):
"""
Invokes the given index creation function for all set names, index
names, and index paths.
"""
for set_name, index_path, index_name in zip(SET_NAMES, INDEX_PATHS, INDEX_NAMES):
create_func(set_name, index_path, index_name)
def check_indexes(check_func, value):
"""
Invokes the given index check function for all set names, index names,
and index paths.
"""
for set_name, index_path in zip(SET_NAMES, INDEX_PATHS):
check_func(set_name, index_path, value)
def test_integer_index():
"""
Tests integer indexes across all set names, index names, and index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_integer_index),
None,
lambda context: check_indexes(lib.check_simple_index, 12345)
)
def test_string_index():
"""
Tests string indexes across all set names, index names, and index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_string_index),
None,
lambda context: check_indexes(lib.check_simple_index, "foobar")
)
def test_geo_index():
"""
Tests geo indexes across all set names, index names, and index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_geo_index),
None,
lambda context: check_indexes(lib.check_geo_index, (0.0, 0.0))
)
def test_integer_list_index():
"""
Tests integer list indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_integer_list_index),
None,
lambda context: check_indexes(lib.check_list_index, 12345)
)
def test_string_list_index():
"""
Tests string list indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_string_list_index),
None,
lambda context: check_indexes(lib.check_list_index, "foobar")
)
def test_integer_map_key_index():
"""
Tests integer map key indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_integer_map_key_index),
None,
lambda context: check_indexes(lib.check_map_key_index, 12345)
)
def test_string_map_key_index():
"""
Tests string map key indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_string_map_key_index),
None,
lambda context: check_indexes(lib.check_map_key_index, "foobar")
)
def test_integer_map_value_index():
"""
Tests integer map value indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_integer_map_value_index),
None,
lambda context: check_indexes(lib.check_map_value_index, 12345)
)
def test_string_map_value_index():
"""
Tests string map value indexes across all set names, index names, and
index paths.
"""
lib.backup_and_restore(
lambda context: create_indexes(lib.create_string_map_value_index),
None,
lambda context: check_indexes(lib.check_map_value_index, "foobar")
)
| 26.806452 | 82 | 0.761432 | 495 | 3,324 | 4.828283 | 0.105051 | 0.097908 | 0.070711 | 0.07364 | 0.797071 | 0.751046 | 0.680753 | 0.680753 | 0.66318 | 0.66318 | 0 | 0.010847 | 0.140193 | 3,324 | 123 | 83 | 27.02439 | 0.825402 | 0.285499 | 0 | 0.28125 | 0 | 0 | 0.015929 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.171875 | false | 0 | 0.015625 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
53036be567e64c7bf8078e08439b4f801f44598e | 517 | py | Python | solutions/01x_verbose_concise.py | lsteffenel/dask-tutorial | 686a9914b908c0b9f6ffc8ff73a9386944ffe426 | [
"BSD-3-Clause"
] | null | null | null | solutions/01x_verbose_concise.py | lsteffenel/dask-tutorial | 686a9914b908c0b9f6ffc8ff73a9386944ffe426 | [
"BSD-3-Clause"
] | null | null | null | solutions/01x_verbose_concise.py | lsteffenel/dask-tutorial | 686a9914b908c0b9f6ffc8ff73a9386944ffe426 | [
"BSD-3-Clause"
] | null | null | null | ## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute()) | 23.5 | 53 | 0.729207 | 85 | 517 | 4.247059 | 0.329412 | 0.116343 | 0.155125 | 0.191136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006608 | 0.121857 | 517 | 22 | 54 | 23.5 | 0.788546 | 0.059961 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5314f1f546c51ad6d87279d255164c369b4e1d0a | 1,394 | py | Python | projthetravelling/users/migrations/0002_auto_20170209_2118.py | chadgates/thetravelling2 | 3646d64acb0fbf5106066700f482c9013f5fb7d0 | [
"MIT"
] | null | null | null | projthetravelling/users/migrations/0002_auto_20170209_2118.py | chadgates/thetravelling2 | 3646d64acb0fbf5106066700f482c9013f5fb7d0 | [
"MIT"
] | null | null | null | projthetravelling/users/migrations/0002_auto_20170209_2118.py | chadgates/thetravelling2 | 3646d64acb0fbf5106066700f482c9013f5fb7d0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2017-02-09 21:18
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='user',
name='city',
field=models.CharField(blank=True, max_length=100, verbose_name='City'),
),
migrations.AddField(
model_name='user',
name='country',
field=models.CharField(blank=True, max_length=100, verbose_name='Country'),
),
migrations.AddField(
model_name='user',
name='phone',
field=models.CharField(blank=True, max_length=100, verbose_name='Phone'),
),
migrations.AddField(
model_name='user',
name='street1',
field=models.CharField(blank=True, max_length=100, verbose_name='Street 1'),
),
migrations.AddField(
model_name='user',
name='street2',
field=models.CharField(blank=True, max_length=100, verbose_name='Street 2'),
),
migrations.AddField(
model_name='user',
name='zipcode',
field=models.CharField(blank=True, max_length=10, verbose_name='ZIP'),
),
]
| 30.304348 | 88 | 0.571019 | 145 | 1,394 | 5.324138 | 0.351724 | 0.139896 | 0.178756 | 0.209845 | 0.673575 | 0.673575 | 0.401554 | 0.352332 | 0.352332 | 0.352332 | 0 | 0.043077 | 0.300574 | 1,394 | 45 | 89 | 30.977778 | 0.748718 | 0.04878 | 0 | 0.473684 | 1 | 0 | 0.085412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
533b714d75027989d54e189c54d163b5d39506a2 | 429 | py | Python | Darlington/phase2/Data Structure/day 60 solution/qtn1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase2/Data Structure/day 60 solution/qtn1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase2/Data Structure/day 60 solution/qtn1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #program to push an item on the heap, then pop and return the smallest item from the heap.
import heapq
heap = []
heapq.heappush(heap, ('V', 3))
heapq.heappush(heap, ('V', 2))
heapq.heappush(heap, ('V', 1))
print("Items in the heap:")
for a in heap:
print(a)
print("----------------------")
print("Using heappushpop push item on the heap and return the smallest item.")
heapq.heappushpop(heap, ('V', 6))
for a in heap:
print(a) | 30.642857 | 90 | 0.657343 | 72 | 429 | 3.916667 | 0.402778 | 0.099291 | 0.180851 | 0.191489 | 0.283688 | 0.113475 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0.146853 | 429 | 14 | 91 | 30.642857 | 0.759563 | 0.207459 | 0 | 0.307692 | 0 | 0 | 0.332353 | 0.064706 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.384615 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
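A hedged aside on the snippet above: `heapq` keeps the smallest tuple at `heap[0]`, and `heappushpop` pushes and pops in one efficient call. A self-contained sketch (names are illustrative):

```python
import heapq

heap = []
for item in [('V', 3), ('V', 2), ('V', 1)]:
    heapq.heappush(heap, item)  # heap[0] is always the smallest tuple

# Push ('V', 6) and pop the current smallest element in a single call
smallest = heapq.heappushpop(heap, ('V', 6))
print(smallest)  # ('V', 1)
```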
534530255c722de05c63f65268b5ace5fe39e09c | 261 | py | Python | arrayAndString/urlify/urlify_solution.py | slowy07/crackingPythonInterview | c317634593e00b2289adccb31254ec73cce37f32 | [
"MIT"
] | null | null | null | arrayAndString/urlify/urlify_solution.py | slowy07/crackingPythonInterview | c317634593e00b2289adccb31254ec73cce37f32 | [
"MIT"
] | null | null | null | arrayAndString/urlify/urlify_solution.py | slowy07/crackingPythonInterview | c317634593e00b2289adccb31254ec73cce37f32 | [
"MIT"
] | null | null | null | def urlify(s, i):
    # s: mutable char list with trailing room; i: length of the real string
    p1, p2 = len(s) - 1, i - 1
    while p2 >= 0:
        if s[p2] != " ":
            s[p1] = s[p2]
            p1 -= 1
        else:
            for ch in "02%":  # "%20" written backwards
                s[p1] = ch
                p1 -= 1
        p2 -= 1
| 21.75 | 37 | 0.302682 | 38 | 261 | 2.078947 | 0.447368 | 0.075949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155738 | 0.532567 | 261 | 11 | 38 | 23.727273 | 0.491803 | 0 | 0 | 0.181818 | 0 | 0 | 0.015326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
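For reference, a hedged, self-contained sketch of the same two-pointer URLify technique on a mutable character list (the function name and sample buffer are illustrative, not taken from the file above):

```python
def urlify_chars(chars, true_length):
    """Replace spaces with '%20' in place; chars has trailing spare slots."""
    p1 = len(chars) - 1        # write pointer: end of the buffer
    p2 = true_length - 1       # read pointer: end of the real content
    while p2 >= 0:
        if chars[p2] != " ":
            chars[p1] = chars[p2]
            p1 -= 1
        else:
            for ch in "02%":   # write "%20" backwards
                chars[p1] = ch
                p1 -= 1
        p2 -= 1
    return chars

buf = list("Mr John Smith    ")  # 13 real chars + 4 spare slots (2 spaces x 2)
print("".join(urlify_chars(buf, 13)))  # Mr%20John%20Smith
```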
53472392360b6dacdc1a22f37ee6e587f7b0c36a | 1,323 | py | Python | pymusicterm/music/__init__.py | EGAMAGZ/Terminal-Music-Player | fc5ce7d9cef6ee98f2dde1784d363a8002584d96 | [
"MIT"
] | 2 | 2020-07-12T20:43:06.000Z | 2020-08-07T06:16:12.000Z | pymusicterm/music/__init__.py | EGAMAGZ/Terminal-Music-Player | fc5ce7d9cef6ee98f2dde1784d363a8002584d96 | [
"MIT"
] | null | null | null | pymusicterm/music/__init__.py | EGAMAGZ/Terminal-Music-Player | fc5ce7d9cef6ee98f2dde1784d363a8002584d96 | [
"MIT"
] | null | null | null | import os
MUSIC_EXTENSIONS = (".mp3", ".wav")  # Valid music extensions
def is_valid_extension(file_name:str):
if file_name.endswith(MUSIC_EXTENSIONS):
return True
return False
class SongFile:
""" Class that stv ores information of a song file
This class with store the path where is found the song and the name
of the file. And with the functions will return some requested information
store.
"""
_path:str
_file_name:str
def __init__(self,path:str,file_name:str):
self._path=path
self._file_name=file_name
def get_name(self) -> str:
""" Gets the name of the file
Return
------
name : str
Name of the file (without extension)
"""
return os.path.splitext(self._file_name)[0]
def get_file_path(self) -> str:
""" Gets the complete file path where is the song
Return
------
file_path : str
Join of the path and file_name, for the location of the song file
"""
return os.path.join(self._path,self._file_name)
def get_path(self) -> str:
""" Gets the path where is found the song file
Return
------
path : str
Path where you can find the song
"""
return self._path
| 24.962264 | 78 | 0.598639 | 182 | 1,323 | 4.186813 | 0.291209 | 0.094488 | 0.043307 | 0.051181 | 0.200787 | 0.068241 | 0.068241 | 0 | 0 | 0 | 0 | 0.00221 | 0.315949 | 1,323 | 52 | 79 | 25.442308 | 0.839779 | 0.429327 | 0 | 0 | 0 | 0 | 0.013559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.055556 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
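As a usage illustration, a minimal standalone re-sketch of the `SongFile` idea (a hypothetical copy of the two path helpers, since `pymusicterm` itself may not be importable here):

```python
import os

# Hypothetical standalone mirror of the SongFile path helpers above
class SongFile:
    def __init__(self, path, file_name):
        self._path = path
        self._file_name = file_name

    def get_name(self):
        # File name without its extension
        return os.path.splitext(self._file_name)[0]

    def get_file_path(self):
        # Full location of the song file
        return os.path.join(self._path, self._file_name)

song = SongFile(os.path.join("music", "rock"), "track01.mp3")
print(song.get_name())  # track01
```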
536994c2ce6127cc80e44a400fb0442a636d3fa9 | 402 | py | Python | hydra_plugins/bigpipe_response/bigpipe_response_searchpath.py | shay-t/bigpipe-response | 2a68dde1d7cfb3f837e1c108d45df1465607cd25 | [
"MIT"
] | 13 | 2020-01-23T18:30:37.000Z | 2020-02-14T19:05:28.000Z | hydra_plugins/bigpipe_response/bigpipe_response_searchpath.py | shay-t/bigpipe-response | 2a68dde1d7cfb3f837e1c108d45df1465607cd25 | [
"MIT"
] | 3 | 2022-02-14T19:39:36.000Z | 2022-02-27T20:26:05.000Z | hydra_plugins/bigpipe_response/bigpipe_response_searchpath.py | shay-t/bigpipe-response | 2a68dde1d7cfb3f837e1c108d45df1465607cd25 | [
"MIT"
] | 1 | 2021-12-20T14:47:18.000Z | 2021-12-20T14:47:18.000Z | from hydra.core.config_search_path import ConfigSearchPath
from hydra.plugins.search_path_plugin import SearchPathPlugin
class BigpipeResponseSearchPathPlugin(SearchPathPlugin):
def manipulate_search_path(self, search_path: ConfigSearchPath) -> None:
assert isinstance(search_path, ConfigSearchPath)
search_path.append("bigpipe-response", "pkg://bigpipe_response.config")
| 44.666667 | 80 | 0.798507 | 42 | 402 | 7.404762 | 0.547619 | 0.192926 | 0.167203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126866 | 402 | 8 | 81 | 50.25 | 0.88604 | 0 | 0 | 0 | 0 | 0 | 0.114213 | 0.073604 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7260c34553c618eb3d22544e88bfcdd98370612c | 1,224 | py | Python | synapse/nn/activations.py | brymer-meneses/sparknet | b2978e4926522719c73ee0c15c136d5693a72bcb | [
"Apache-2.0"
] | 1 | 2021-05-04T23:05:22.000Z | 2021-05-04T23:05:22.000Z | synapse/nn/activations.py | brymer-meneses/sparknet | b2978e4926522719c73ee0c15c136d5693a72bcb | [
"Apache-2.0"
] | null | null | null | synapse/nn/activations.py | brymer-meneses/sparknet | b2978e4926522719c73ee0c15c136d5693a72bcb | [
"Apache-2.0"
] | null | null | null |
from synapse.nn.layers import Layer
from synapse.core.tensor import Tensor
from synapse.core.differentiable import Differentiable
import numpy as np
from typing import Callable
def tanhBackward(grad: Tensor, t1: Tensor) -> Tensor:
data = grad.data * (1 - np.tanh(t1.data) ** 2)
return Tensor(data)
@Differentiable(tanhBackward)
def Tanh(t1: Tensor) -> Tensor:
data = np.tanh(t1.data)
requires_grad = t1.requires_grad
return Tensor(data, requires_grad)
def reluBackward(grad: Tensor, t1: Tensor) -> Tensor:
data = grad.data * np.where(t1.data > 0, 1, 0)
return Tensor(data)
@Differentiable(reluBackward)
def ReLU(t1: Tensor) -> Tensor:
data = np.maximum(0, t1.data, t1.data) # Use in place operation
return Tensor(data, t1.requires_grad)
class Softmax():
def forward(self, t1: Tensor) -> Tensor:
expData = np.exp(t1.data)
data = expData / np.sum(expData, axis=0)
requires_grad = t1.requires_grad
        return Tensor(data, requires_grad)
def gradFn(self, t1: Tensor) -> Callable[[np.ndarray], Tensor]:
def SoftmaxBackward(grad: np.ndarray) -> Tensor:
"""TODO"""
pass
        return SoftmaxBackward
| 23.09434 | 67 | 0.65768 | 161 | 1,224 | 4.956522 | 0.285714 | 0.100251 | 0.087719 | 0.090226 | 0.235589 | 0.185464 | 0.185464 | 0.090226 | 0 | 0 | 0 | 0.023231 | 0.226307 | 1,224 | 52 | 68 | 23.538462 | 0.81943 | 0.022876 | 0 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0 | 1 | 0.225806 | false | 0.032258 | 0.16129 | 0 | 0.612903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
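A hedged numerical check of the `tanhBackward` rule used above, d/dx tanh(x) = 1 - tanh(x)**2, against central finite differences:

```python
import numpy as np

x = np.array([-1.0, 0.0, 2.0])
analytic = 1 - np.tanh(x) ** 2                            # the backward rule
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)  # finite difference
print(np.max(np.abs(analytic - numeric)))  # tiny: the rule matches
```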
72679c1b705094e5ac32034c34e78d2f0e614722 | 5,321 | py | Python | mopidy_pandora/listener.py | jcass77/mopidy-pandora | 234acc3ca0f09e733a77fbea56fc38a13b2e4f72 | [
"Apache-2.0"
] | 1 | 2015-03-09T19:07:54.000Z | 2015-03-09T19:07:54.000Z | mopidy_pandora/listener.py | jcass77/mopidy-pandora | 234acc3ca0f09e733a77fbea56fc38a13b2e4f72 | [
"Apache-2.0"
] | null | null | null | mopidy_pandora/listener.py | jcass77/mopidy-pandora | 234acc3ca0f09e733a77fbea56fc38a13b2e4f72 | [
"Apache-2.0"
] | 2 | 2020-06-25T00:43:58.000Z | 2020-09-20T15:32:59.000Z | from __future__ import absolute_import, division, print_function, unicode_literals
from mopidy import backend, listener
class EventMonitorListener(listener.Listener):
"""
Marker interface for recipients of events sent by the event monitor.
"""
@staticmethod
def send(event, **kwargs):
listener.send(EventMonitorListener, event, **kwargs)
def event_triggered(self, track_uri, pandora_event):
"""
Called when one of the Pandora events have been triggered (e.g. thumbs_up, thumbs_down, sleep, etc.).
:param track_uri: the URI of the track that the event should be applied to.
:type track_uri: string
:param pandora_event: the Pandora event that should be called. Needs to correspond with the name of one of
the event handling methods defined in `:class:mopidy_pandora.backend.PandoraBackend`
:type pandora_event: string
"""
pass
def track_changed_previous(self, old_uri, new_uri):
"""
Called when a 'previous' track change has been completed.
:param old_uri: the URI of the Pandora track that was changed from.
:type old_uri: string
:param new_uri: the URI of the Pandora track that was changed to.
:type new_uri: string
"""
pass
def track_changed_next(self, old_uri, new_uri):
"""
        Called when a 'next' track change has been completed. Lets the frontend know that it should probably expand
the tracklist by fetching and adding another track to the tracklist, and removing tracks that do not belong to
the currently selected station.
:param old_uri: the URI of the Pandora track that was changed from.
:type old_uri: string
:param new_uri: the URI of the Pandora track that was changed to.
:type new_uri: string
"""
pass
class PandoraFrontendListener(listener.Listener):
"""
Marker interface for recipients of events sent by the frontend actor.
"""
@staticmethod
def send(event, **kwargs):
listener.send(PandoraFrontendListener, event, **kwargs)
def end_of_tracklist_reached(self, station_id, auto_play=False):
"""
Called whenever the tracklist contains only one track, or the last track in the tracklist is being played.
:param station_id: the ID of the station that is currently being played in the tracklist
:type station_id: string
:param auto_play: specifies if the next track should be played as soon as it is added to the tracklist.
:type auto_play: boolean
"""
pass
class PandoraBackendListener(backend.BackendListener):
"""
Marker interface for recipients of events sent by the backend actor.
"""
@staticmethod
def send(event, **kwargs):
listener.send(PandoraBackendListener, event, **kwargs)
def next_track_available(self, track, auto_play=False):
"""
Called when the backend has the next Pandora track available to be added to the tracklist.
:param track: the Pandora track that was fetched
:type track: :class:`mopidy.models.Ref`
:param auto_play: specifies if the track should be played as soon as it is added to the tracklist.
:type auto_play: boolean
"""
pass
def event_processed(self, track_uri, pandora_event):
"""
Called when the backend has successfully processed the event for the given URI.
:param track_uri: the URI of the track that the event was applied to.
:type track_uri: string
:param pandora_event: the Pandora event that was called. Needs to correspond with the name of one of
the event handling methods defined in `:class:mopidy_pandora.backend.PandoraBackend`
:type pandora_event: string
"""
pass
class PandoraPlaybackListener(listener.Listener):
"""
Marker interface for recipients of events sent by the playback provider.
"""
@staticmethod
def send(event, **kwargs):
listener.send(PandoraPlaybackListener, event, **kwargs)
def track_changing(self, track):
"""
Called when a track is being changed to.
:param track: the Pandora track that is being changed to.
:type track: :class:`mopidy.models.Ref`
"""
pass
def track_unplayable(self, track):
"""
        Called when the track is not playable. Lets the frontend know that it should probably remove this track
from the tracklist and try to replace it with the next track that Pandora provides.
:param track: the unplayable Pandora track.
:type track: :class:`mopidy.models.Ref`
"""
pass
def skip_limit_exceeded(self):
"""
Called when the playback provider has skipped over the maximum number of permissible unplayable tracks using
:func:`~mopidy_pandora.pandora.PandoraPlaybackProvider.change_track`. This lets the frontend know that the
player should probably be stopped in order to avoid an infinite loop on the tracklist, or to avoid exceeding
the maximum number of station playlist requests as determined by the Pandora server.
"""
pass
| 35.473333 | 118 | 0.670551 | 694 | 5,321 | 5.053314 | 0.231988 | 0.014257 | 0.015398 | 0.01882 | 0.526946 | 0.491873 | 0.451668 | 0.408326 | 0.345595 | 0.310522 | 0 | 0 | 0.266303 | 5,321 | 149 | 119 | 35.711409 | 0.898309 | 0.616989 | 0 | 0.472222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.361111 | false | 0.25 | 0.055556 | 0 | 0.527778 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
726b160f55b0ed90555e64fa2cad7c08ea58cb0e | 317 | py | Python | autocomplete_light/example_apps/dependant_autocomplete/admin.py | bburan/django-autocomplete-light | 064676061b101d5d47655e8598b21cbaf7716ae8 | [
"MIT"
] | 1 | 2015-10-12T21:42:05.000Z | 2015-10-12T21:42:05.000Z | autocomplete_light/example_apps/dependant_autocomplete/admin.py | bburan/django-autocomplete-light | 064676061b101d5d47655e8598b21cbaf7716ae8 | [
"MIT"
] | null | null | null | autocomplete_light/example_apps/dependant_autocomplete/admin.py | bburan/django-autocomplete-light | 064676061b101d5d47655e8598b21cbaf7716ae8 | [
"MIT"
] | null | null | null | from django.contrib import admin
import autocomplete_light
from .models import Dummy
from .forms import DummyForm
class DummyInline(admin.TabularInline):
model = Dummy
form = DummyForm
class DummyAdmin(admin.ModelAdmin):
form = DummyForm
inlines = [DummyInline]
admin.site.register(Dummy, DummyAdmin)
| 17.611111 | 39 | 0.77918 | 37 | 317 | 6.648649 | 0.567568 | 0.113821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15142 | 317 | 17 | 40 | 18.647059 | 0.914498 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.909091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
726ec9b79f09e2ee927e7e142d55ad6268a5e232 | 1,900 | py | Python | hash.py | YasinBlackhat/read-file-csv--finde-password-0-10000 | 4a2ecd23d63dff117cd14df9b7f4ce0d65b15bae | [
"Unlicense"
] | 2 | 2019-11-30T21:35:28.000Z | 2021-07-07T15:59:05.000Z | hash.py | YasinBlackhat/read-file-csv--finde-password-0-10000 | 4a2ecd23d63dff117cd14df9b7f4ce0d65b15bae | [
"Unlicense"
] | null | null | null | hash.py | YasinBlackhat/read-file-csv--finde-password-0-10000 | 4a2ecd23d63dff117cd14df9b7f4ce0d65b15bae | [
"Unlicense"
] | null | null | null | # my lib an valid and dict and list
import csv
lihash_filecsv = dict()
li =[]
lihash=[]
countname = 0
count_csv_hash = 0
d = 1
# read file csv for crack
print('Example Type location : E:\\Land program\\new folder\\2.csv')
locat = str(input('Enter your location file csv : '))
with open(locat) as f:
reader = csv.reader(f)
for row in reader:
name = row[0]
d += 1
for code in row[1:]:
hashcode = code
lihash_filecsv[name]=hashcode
a = list(lihash_filecsv.keys())
b = list(lihash_filecsv.values())
# create candidate passwords 0..9999 and hash each one
for pass_list in range(0,10000):
count = pass_list
hshing = str(pass_list).encode('utf-8')
from hashlib import sha256
t = sha256(hshing).hexdigest()
    # compare the candidate hash against each stored hash
try :
        with open(locat) as s:  # reuse the csv path supplied above instead of a machine-specific one
reade_csv = csv.reader(s)
for line_csv in reade_csv:
if t == b[count_csv_hash]:
lihash.append(t)
lihash.append(count)
print('Number =>=> %i' % (countname+1))
print('Name =>=> %s' % a[countname])
print('Password =>=> %i' % count)
print('hash code =>=> %s' % t)
print('********************')
countname += 1
count_csv_hash += 1
except :
print('""WooooW"" These passwords for you :) ')
break
print('')
print('My Working End**')
| 29.6875 | 108 | 0.512632 | 213 | 1,900 | 4.497653 | 0.42723 | 0.05428 | 0.037578 | 0.039666 | 0.048017 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066347 | 0.341579 | 1,900 | 63 | 109 | 30.15873 | 0.69944 | 0.051579 | 0 | 0.117647 | 0 | 0 | 0.197232 | 0.031142 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.098039 | 0.039216 | null | null | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
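A hedged, self-contained sketch of the same 0-9999 SHA-256 lookup idea: precompute the digests once into a dict instead of re-reading the csv for every candidate (names here are illustrative):

```python
from hashlib import sha256

# Precompute sha256 digests of "0".."9999" once: hex digest -> password
rainbow = {sha256(str(n).encode("utf-8")).hexdigest(): n for n in range(10000)}

def crack(hash_hex):
    """Return the numeric password for a digest, or None if not in 0..9999."""
    return rainbow.get(hash_hex)

target = sha256(b"1234").hexdigest()
print(crack(target))  # 1234
```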
726f5f74367129dc1c0ffd0f78d98de265106c89 | 426 | py | Python | kattis/Broken Calculator.py | jaredliw/python-question-bank | 9c8c246623d8d171f875700b57772df0afcbdcdf | [
"MIT"
] | 1 | 2021-04-08T07:49:15.000Z | 2021-04-08T07:49:15.000Z | kattis/Broken Calculator.py | jaredliw/leetcode-solutions | 9c8c246623d8d171f875700b57772df0afcbdcdf | [
"MIT"
] | null | null | null | kattis/Broken Calculator.py | jaredliw/leetcode-solutions | 9c8c246623d8d171f875700b57772df0afcbdcdf | [
"MIT"
] | 1 | 2022-01-23T02:12:24.000Z | 2022-01-23T02:12:24.000Z | # CPU: 0.08 s
from math import ceil
prev_ans = 1
for _ in range(int(input())):
operand1, operator, operand2 = input().split()
operand1 = int(operand1)
operand2 = int(operand2)
if operator == "+":
ans = operand1 + operand2 - prev_ans
elif operator == "-":
ans = (operand1 - operand2) * prev_ans
elif operator == "*":
ans = (operand1 * operand2) ** 2
else:
ans = ceil(operand1 / 2)
print(ans)
prev_ans = ans
| 21.3 | 47 | 0.638498 | 57 | 426 | 4.684211 | 0.438596 | 0.104869 | 0.213483 | 0.303371 | 0.385768 | 0.385768 | 0.385768 | 0.385768 | 0.385768 | 0.385768 | 0 | 0.05638 | 0.20892 | 426 | 19 | 48 | 22.421053 | 0.735905 | 0.025822 | 0 | 0 | 0 | 0 | 0.007264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0.0625 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
728628c57e80bd0056ba4df57b6bc496e6193d62 | 1,317 | py | Python | packs/alertlogic/actions/scan_list_scan_executions.py | userlocalhost2000/st2contrib | 1a5f759e76401743ed9023d298a3d767e3885db1 | [
"Apache-2.0"
] | 164 | 2015-01-17T16:08:33.000Z | 2021-08-03T02:34:07.000Z | packs/alertlogic/actions/scan_list_scan_executions.py | userlocalhost2000/st2contrib | 1a5f759e76401743ed9023d298a3d767e3885db1 | [
"Apache-2.0"
] | 442 | 2015-01-01T11:19:01.000Z | 2017-09-06T23:26:17.000Z | packs/alertlogic/actions/scan_list_scan_executions.py | userlocalhost2000/st2contrib | 1a5f759e76401743ed9023d298a3d767e3885db1 | [
"Apache-2.0"
] | 202 | 2015-01-13T00:37:40.000Z | 2020-11-07T11:30:10.000Z | #!/usr/bin/env python
# Licensed to the StackStorm, Inc ('StackStorm') under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from st2actions.runners.pythonrunner import Action
from lib.get_scan_list import GetScanList
from lib.get_scan_executions import GetScanExecutions
class ListScanExecutions(Action):
def run(self, scan_title, customer_id=None):
"""
        Lists the executions of the scan matching the given title.

        Returns: The scan executions for the matching scan.
Raises:
ValueError: On lack of key in config.
"""
scans = GetScanList(self.config, customer_id)
return GetScanExecutions(self.config, scans[scan_title]['id'])
| 34.657895 | 74 | 0.738041 | 184 | 1,317 | 5.23913 | 0.597826 | 0.062241 | 0.026971 | 0.033195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004713 | 0.194381 | 1,317 | 37 | 75 | 35.594595 | 0.903864 | 0.659833 | 0 | 0 | 0 | 0 | 0.005181 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7288a49b74524a3987e89e115748a506f07aca77 | 496 | py | Python | server/models/checkup.py | sidgan/unnamedtoolforgithub | 486707a78d1ce9e10ad14e136e4f51716a0f58d3 | [
"MIT"
] | 1 | 2021-01-02T13:56:59.000Z | 2021-01-02T13:56:59.000Z | server/models/checkup.py | sidgan/unnamedtoolforgithub | 486707a78d1ce9e10ad14e136e4f51716a0f58d3 | [
"MIT"
] | null | null | null | server/models/checkup.py | sidgan/unnamedtoolforgithub | 486707a78d1ce9e10ad14e136e4f51716a0f58d3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from datetime import datetime
from app import db
class Checkup(db.Model):
__tablename__ = 'checkup'
id = db.Column(db.Integer, primary_key=True)
created = db.Column(db.DateTime, default=datetime.utcnow)
# TODO: add one unique constraint on the column group of owner and repo
owner = db.Column(db.String)
repo = db.Column(db.String)
criteria = db.relationship('Criterion', backref='criterion',
lazy='dynamic')
| 31 | 75 | 0.655242 | 64 | 496 | 5 | 0.625 | 0.1 | 0.125 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002618 | 0.229839 | 496 | 15 | 76 | 33.066667 | 0.835079 | 0.183468 | 0 | 0 | 0 | 0 | 0.079602 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7288ba8189c38da81bca13e50a60178b7df691af | 1,511 | py | Python | tests/commands/create_character/test_create_werewolf.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | 13 | 2016-02-23T08:15:22.000Z | 2021-07-17T20:54:57.000Z | tests/commands/create_character/test_create_werewolf.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | 1 | 2017-03-30T08:11:40.000Z | 2017-09-07T15:01:08.000Z | tests/commands/create_character/test_create_werewolf.py | Arent128/npc | c8a1e227a1d4d7c540c4f4427b611ffc290535ee | [
"MIT"
] | 1 | 2020-02-21T09:44:40.000Z | 2020-02-21T09:44:40.000Z | import npc
import pytest
def test_creates_character(campaign):
result = npc.commands.create_character.werewolf('werewolf mann', 'cahalith')
character = campaign.get_character('werewolf mann.nwod')
assert result.success
assert character.exists()
assert campaign.get_absolute(result.openable[0]) == str(character)
def test_adds_group_tags(campaign):
result = npc.commands.create_character.werewolf('werewolf mann', 'cahalith', groups=['fork', 'spoon'])
data = campaign.get_character_data('werewolf mann.nwod')
assert 'fork' in data.tags('group')
assert 'spoon' in data.tags('group')
def test_duplicate_character(campaign):
npc.commands.create_character.werewolf('werewolf mann', 'cahalith')
result = npc.commands.create_character.werewolf('werewolf mann', 'cahalith')
assert not result.success
def test_adds_auspice(campaign):
npc.commands.create_character.werewolf('werewolf mann', 'cahalith')
data = campaign.get_character_data('werewolf mann.nwod')
assert 'Cahalith' in data.tags['auspice']
def test_adds_tribe(campaign):
npc.commands.create_character.werewolf('werewolf mann', 'cahalith', tribe='Bone Talons')
data = campaign.get_character_data('werewolf mann.nwod')
assert 'Bone Talons' in data.tags['tribe']
def test_adds_pack(campaign):
npc.commands.create_character.werewolf('werewolf mann', 'cahalith', pack='Foobars')
data = campaign.get_character_data('werewolf mann.nwod')
assert 'Foobars' in data.tags['pack']
| 41.972222 | 106 | 0.746525 | 191 | 1,511 | 5.748691 | 0.204188 | 0.131148 | 0.108379 | 0.165756 | 0.586521 | 0.586521 | 0.586521 | 0.586521 | 0.586521 | 0.123862 | 0 | 0.000757 | 0.125745 | 1,511 | 35 | 107 | 43.171429 | 0.830431 | 0 | 0 | 0.275862 | 0 | 0 | 0.215089 | 0 | 0 | 0 | 0 | 0 | 0.310345 | 1 | 0.206897 | false | 0 | 0.068966 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
728b63137c72126e427747e405ccb4d57fec6d09 | 232 | py | Python | models/ir_model.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 84 | 2019-06-11T08:14:52.000Z | 2022-02-17T13:58:20.000Z | models/ir_model.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 16 | 2019-06-15T14:30:14.000Z | 2020-07-26T04:21:42.000Z | models/ir_model.py | OSevangelist/vsf-odoo | 2133dcaf0ce7d50573d43387fd343be6dfe7c9d7 | [
"MIT"
] | 38 | 2019-06-11T11:44:12.000Z | 2021-11-20T20:55:17.000Z | from odoo import fields, models
class IrModel(models.Model):
_inherit = 'ir.model'
rest_api = fields.Boolean('REST API', default=True,
help="Allow this model to be fetched through REST API")
| 29 | 85 | 0.62931 | 30 | 232 | 4.8 | 0.733333 | 0.145833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.280172 | 232 | 7 | 86 | 33.142857 | 0.862275 | 0 | 0 | 0 | 0 | 0 | 0.271552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
729e9836fbb040d417fd55deb4e2bcefc449cdb1 | 484 | py | Python | nando/read_pickle.py | idsc-frazzoli/SMARTS | bae0a6ea160330921edc94a7161a4e8cf72a1974 | [
"MIT"
] | null | null | null | nando/read_pickle.py | idsc-frazzoli/SMARTS | bae0a6ea160330921edc94a7161a4e8cf72a1974 | [
"MIT"
] | null | null | null | nando/read_pickle.py | idsc-frazzoli/SMARTS | bae0a6ea160330921edc94a7161a4e8cf72a1974 | [
"MIT"
] | null | null | null | import pandas as pd
import os
scenario = 'cross-4'
name = 'PPO_FrameStack_9c0e9_00000_0_2021-12-09_00-21-45'
checkpoint_nr = 10
pickle_path = os.path.join('baselines', 'marl_benchmark', 'log', 'results', 'run',
scenario,
name,
'checkpoint_' + str(checkpoint_nr),
'checkpoint_' + str(checkpoint_nr)
)
df_checkpoint = pd.read_pickle(pickle_path)
| 26.888889 | 82 | 0.539256 | 52 | 484 | 4.711538 | 0.673077 | 0.146939 | 0.187755 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083871 | 0.359504 | 484 | 17 | 83 | 28.470588 | 0.706452 | 0 | 0 | 0 | 0 | 0 | 0.233954 | 0.099379 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
72a52b681f192b6f1785bcfe1b55078d0b9820fa | 1,966 | py | Python | rllib/agents/dqn/__init__.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | rllib/agents/dqn/__init__.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | rllib/agents/dqn/__init__.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | import ray.rllib.agents.dqn.apex as apex # noqa
import ray.rllib.agents.dqn.simple_q as simple_q # noqa
from ray.rllib.algorithms.apex_dqn.apex_dqn import APEX_DEFAULT_CONFIG
from ray.rllib.algorithms.apex_dqn.apex_dqn import ApexDQN as ApexTrainer
from ray.rllib.algorithms.apex_dqn.apex_dqn import ApexDQNConfig
from ray.rllib.algorithms.dqn.dqn import DEFAULT_CONFIG
from ray.rllib.algorithms.dqn.dqn import DQN as DQNTrainer
from ray.rllib.algorithms.dqn.dqn import DQNConfig
from ray.rllib.algorithms.dqn.dqn_tf_policy import DQNTFPolicy
from ray.rllib.algorithms.dqn.dqn_torch_policy import DQNTorchPolicy
from ray.rllib.algorithms.r2d2.r2d2 import R2D2 as R2D2Trainer
from ray.rllib.algorithms.r2d2.r2d2 import R2D2_DEFAULT_CONFIG, R2D2Config
from ray.rllib.algorithms.r2d2.r2d2_tf_policy import R2D2TFPolicy
from ray.rllib.algorithms.r2d2.r2d2_torch_policy import R2D2TorchPolicy
from ray.rllib.algorithms.simple_q.simple_q import (
DEFAULT_CONFIG as SIMPLE_Q_DEFAULT_CONFIG,
)
from ray.rllib.algorithms.simple_q.simple_q import SimpleQ as SimpleQTrainer
from ray.rllib.algorithms.simple_q.simple_q import SimpleQConfig
from ray.rllib.algorithms.simple_q.simple_q_tf_policy import (
SimpleQTF1Policy,
SimpleQTF2Policy,
)
from ray.rllib.algorithms.simple_q.simple_q_torch_policy import SimpleQTorchPolicy
from ray.rllib.utils.deprecation import deprecation_warning
__all__ = [
"ApexDQNConfig",
"ApexTrainer",
"DQNConfig",
"DQNTFPolicy",
"DQNTorchPolicy",
"DQNTrainer",
"R2D2Config",
"R2D2TFPolicy",
"R2D2TorchPolicy",
"R2D2Trainer",
"SimpleQConfig",
"SimpleQTF1Policy",
"SimpleQTF2Policy",
"SimpleQTorchPolicy",
"SimpleQTrainer",
# Deprecated.
"APEX_DEFAULT_CONFIG",
"DEFAULT_CONFIG",
"R2D2_DEFAULT_CONFIG",
"SIMPLE_Q_DEFAULT_CONFIG",
]
deprecation_warning(
"ray.rllib.agents.dqn",
"ray.rllib.algorithms.[dqn|simple_q|r2d2|apex_dqn]",
error=False,
)
| 35.107143 | 82 | 0.790946 | 259 | 1,966 | 5.791506 | 0.162162 | 0.117333 | 0.144 | 0.249333 | 0.471333 | 0.440667 | 0.337333 | 0.269333 | 0.168 | 0 | 0 | 0.025478 | 0.121567 | 1,966 | 55 | 83 | 35.745455 | 0.84308 | 0.010682 | 0 | 0 | 0 | 0 | 0.173622 | 0.037094 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.392157 | 0 | 0.392157 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
72c1dd902608b72d14f8b58f4429e97a401992f5 | 251 | py | Python | WebMirror/management/rss_parser_funcs/feed_parse_extractMyFirstTimeTranslating.py | fake-name/ReadableWebProxy | ed5c7abe38706acc2684a1e6cd80242a03c5f010 | [
"BSD-3-Clause"
] | 193 | 2016-08-02T22:04:35.000Z | 2022-03-09T20:45:41.000Z | WebMirror/management/rss_parser_funcs/feed_parse_extractMyFirstTimeTranslating.py | fake-name/ReadableWebProxy | ed5c7abe38706acc2684a1e6cd80242a03c5f010 | [
"BSD-3-Clause"
] | 533 | 2016-08-23T20:48:23.000Z | 2022-03-28T15:55:13.000Z | WebMirror/management/rss_parser_funcs/feed_parse_extractMyFirstTimeTranslating.py | rrosajp/ReadableWebProxy | ed5c7abe38706acc2684a1e6cd80242a03c5f010 | [
"BSD-3-Clause"
] | 19 | 2015-08-13T18:01:08.000Z | 2021-07-12T17:13:09.000Z | def extractMyFirstTimeTranslating(item):
"""
'My First Time Translating'
"""
vol, chp, frag, postfix = extractVolChapterFragmentPostfix(item['title'])
if not (chp or vol or frag) or 'preview' in item['title'].lower():
return None
return False
| 27.888889 | 74 | 0.721116 | 31 | 251 | 5.83871 | 0.709677 | 0.099448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14741 | 251 | 8 | 75 | 31.375 | 0.845794 | 0.10757 | 0 | 0 | 0 | 0 | 0.079439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
72d988951477d008555baa8b645574c9d756354d | 448 | py | Python | hi-ml/testhiml/testhiml/utils_testhiml.py | maxilse/hi-ml | 7902a6e1387a74ef0d861109450a5c93674de764 | [
"MIT"
] | 34 | 2021-08-18T13:27:36.000Z | 2022-03-26T01:25:36.000Z | hi-ml/testhiml/testhiml/utils_testhiml.py | maxilse/hi-ml | 7902a6e1387a74ef0d861109450a5c93674de764 | [
"MIT"
] | 111 | 2021-08-18T13:19:46.000Z | 2022-03-30T05:57:01.000Z | hi-ml/testhiml/testhiml/utils_testhiml.py | maxilse/hi-ml | 7902a6e1387a74ef0d861109450a5c93674de764 | [
"MIT"
] | 6 | 2021-09-13T12:07:58.000Z | 2022-03-24T16:31:06.000Z | # ------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
# ------------------------------------------------------------------------------------------
from health_azure.utils import UnitTestWorkspaceWrapper
DEFAULT_WORKSPACE = UnitTestWorkspaceWrapper()
| 56 | 94 | 0.473214 | 31 | 448 | 6.774194 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089286 | 448 | 7 | 95 | 64 | 0.514706 | 0.745536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
72db843aa2afac8805d15b79ccd276eda45de98d | 469 | py | Python | src/test/pythonFiles/testFiles/specificTest/tests/test_unittest_one.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 615 | 2020-11-11T22:55:28.000Z | 2022-03-30T21:48:08.000Z | src/test/pythonFiles/testFiles/specificTest/tests/test_unittest_one.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 8,428 | 2020-11-11T22:06:43.000Z | 2022-03-31T23:42:34.000Z | src/test/pythonFiles/testFiles/specificTest/tests/test_unittest_one.py | vasili8m/vscode-python | 846eee870e8b7bab38172600836faedb5fb80166 | [
"MIT"
] | 158 | 2020-11-12T07:49:02.000Z | 2022-03-27T20:50:20.000Z | import unittest
class Test_test_one_1(unittest.TestCase):
    def test_1_1_1(self):
        self.assertEqual(1, 1, 'Not equal')

    def test_1_1_2(self):
        self.assertEqual(1, 2, 'Not equal')

    @unittest.skip("demonstrating skipping")
    def test_1_1_3(self):
        self.assertEqual(1, 2, 'Not equal')

class Test_test_one_2(unittest.TestCase):
    def test_1_2_1(self):
        self.assertEqual(1, 1, 'Not equal')

if __name__ == '__main__':
    unittest.main()
| 23.45 | 44 | 0.675906 | 73 | 469 | 3.986301 | 0.273973 | 0.041237 | 0.109966 | 0.274914 | 0.570447 | 0.405498 | 0.405498 | 0.206186 | 0 | 0 | 0 | 0.058047 | 0.191898 | 469 | 19 | 45 | 24.684211 | 0.709763 | 0 | 0 | 0.285714 | 0 | 0 | 0.140725 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.071429 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
72dc75da02a6aae3302fcd05904996d284520b05 | 21,170 | py | Python | hoomd/hpmc/test-py/test_hpmc_util.py | kmoskovtsev/HOOMD-Blue-fork | 99560563a5ba9e082b513764bae51a84f48fdc70 | [
"BSD-3-Clause"
] | null | null | null | hoomd/hpmc/test-py/test_hpmc_util.py | kmoskovtsev/HOOMD-Blue-fork | 99560563a5ba9e082b513764bae51a84f48fdc70 | [
"BSD-3-Clause"
] | null | null | null | hoomd/hpmc/test-py/test_hpmc_util.py | kmoskovtsev/HOOMD-Blue-fork | 99560563a5ba9e082b513764bae51a84f48fdc70 | [
"BSD-3-Clause"
] | null | null | null | # Test the helper functions in hpmc.util
from __future__ import division, print_function
from hoomd import *
from hoomd import hpmc
import unittest
import tempfile
import os
import numpy as np
context.initialize()
def create_empty(**kwargs):
    snap = data.make_snapshot(**kwargs)
    return init.read_snapshot(snap)
Zr4Al3_pos = """boxMatrix 1.18455 0 4.3983 0.018416 4.5582 0.018416 -4.3983 0 -1.18455
def A1 "sphere 0.8 ff0000"
def A2 "sphere 0.8 0000ff"
A2 -2.791425 0 2.791425
A2 -2.791425 -2.297516 2.791425
A1 0.603183 1.148758 -2.10529
A1 -2.10529 1.111926 0.603183
A1 4.4408921E-16 1.130342 -0
A1 -0.891623 -1.148758 0.891623
A1 0.891623 -1.148758 -0.891623
eof
"""
sc2d_pos = """box 1.1 1.1 0
def A "sphere 1 ff0000"
A 0 0 0
"""
# Test latticeToHoomd() function
class latticeToHoomd (unittest.TestCase):
def test_x_with_z_component(self):
a1 = np.asarray([1.,0,0])
a2 = np.asarray([0.,2.,0.])
a3 = np.asarray([0.,0.,3.])
a1 = a1+a3
box, q = hpmc.util.latticeToHoomd(a1,a2,a3)
vecs = np.asarray(hpmc.util.matFromBox(box)).transpose()
v1 = hpmc.util.quatRot(q,a1)
v2 = vecs[0]
v = v2 - v1
self.assertAlmostEqual(np.dot(v,v), 0, places=6)
volume = np.dot(a1,np.cross(a2,a3))
volume -= np.dot(vecs[0], np.cross(vecs[1],vecs[2]))
self.assertAlmostEqual(volume, 0, places=6)
def test_x_with_y_component(self):
a1 = np.asarray([1.,0,0])
a2 = np.asarray([0.,2.,0.])
a3 = np.asarray([0.,0.,3.])
a1 = a1+a2
box, q = hpmc.util.latticeToHoomd(a1,a2,a3)
vecs = np.asarray(hpmc.util.matFromBox(box)).transpose()
v1 = hpmc.util.quatRot(q,a1)
v2 = vecs[0]
v = v2 - v1
self.assertAlmostEqual(np.dot(v,v), 0, places=6)
volume = np.dot(a1,np.cross(a2,a3))
volume -= np.dot(vecs[0], np.cross(vecs[1],vecs[2]))
self.assertAlmostEqual(volume, 0, places=6)
def test_y_with_z_component(self):
a1 = np.asarray([1.,0,0])
a2 = np.asarray([0.,2.,0.])
a3 = np.asarray([0.,0.,3.])
a2 = a2+2*a3
box, q = hpmc.util.latticeToHoomd(a1,a2,a3)
vecs = np.asarray(hpmc.util.matFromBox(box)).transpose()
v1 = hpmc.util.quatRot(q,a2)
v2 = vecs[1]
v = v2 - v1
self.assertAlmostEqual(np.dot(v,v), 0, places=6)
volume = np.dot(a1,np.cross(a2,a3))
volume -= np.dot(vecs[0], np.cross(vecs[1],vecs[2]))
self.assertAlmostEqual(volume, 0, places=6)
def test_z_with_y_component(self):
a1 = np.asarray([1.,1,0])
a2 = np.asarray([-1.,2.,0.])
a3 = np.asarray([0.,0.,3.])
a3 = a2-a3
box, q = hpmc.util.latticeToHoomd(a1,a2,a3)
vecs = np.asarray(hpmc.util.matFromBox(box)).transpose()
v1 = hpmc.util.quatRot(q,a3)
v2 = vecs[2]
v = v2 - v1
self.assertAlmostEqual(np.dot(v,v), 0, places=6)
volume = np.dot(a1,np.cross(a2,a3))
volume -= np.dot(vecs[0], np.cross(vecs[1],vecs[2]))
self.assertAlmostEqual(volume, 0, places=6)
def test_handedness(self):
for i in range(1000):
a1, a2, a3 = np.random.random((3,3))
box, q = hpmc.util.latticeToHoomd(a1,a2,a3)
b1 = box.get_lattice_vector(0)
b2 = box.get_lattice_vector(1)
b3 = box.get_lattice_vector(2)
self.assertAlmostEqual(np.dot(np.cross(a1,a2),a3), np.dot(np.cross(b1,b2),b3), places=5)
class read_pos (unittest.TestCase):
def setUp(self):
# create temporary pos file
fd, self.fname = tempfile.mkstemp(suffix='read_pos_test.pos')
# read a simple 2d simple cubic unit cell
def test_trivial_2d(self):
fh = open(self.fname, 'w')
fh.write(sc2d_pos)
fh.close()
input = hpmc.util.read_pos(self.fname, ndim=2)
self.assertEqual((input['positions'][0] == np.array([0,0,0])).all(), True)
self.assertEqual(input['param_dict']['A']['shape'], 'sphere')
self.assertEqual(input['param_dict']['A']['diameter'], 1.0)
self.assertEqual(set(input['types']), set(['A']))
self.assertEqual(input['box'].Ly, 1.1)
def test_read_triclinic(self):
fh = open(self.fname, 'w')
fh.write(Zr4Al3_pos)
fh.close()
input = hpmc.util.read_pos(self.fname)
self.assertEqual(input['param_dict']['A2']['shape'], 'sphere')
self.assertEqual(input['param_dict']['A1']['diameter'], 0.8)
self.assertEqual(set(input['types']), set(['A1','A2']))
# compare volumes to see that the box hasn't been grossly distorted...
bmatrix = Zr4Al3_pos.split('\n')[0].split()[1:]
bmatrix = np.array([float(n) for n in bmatrix])
bmatrix.resize((3,3))
a1, a2, a3 = bmatrix.transpose()
b1 = input['box'].get_lattice_vector(0)
b2 = input['box'].get_lattice_vector(1)
b3 = input['box'].get_lattice_vector(2)
self.assertAlmostEqual(np.dot(a1,np.cross(a2,a3)), np.dot(b1, np.cross(b2,b3)), places=5)
# check that q rotates a1 to b1
q = input['q']
v1 = hpmc.util.quatRot(q,a1)
v2 = b1
v12 = v2-v1
self.assertAlmostEqual(np.dot(v12,v12), 0.0, places=5)
# Check the first A1 particle position in original frame against A1 particle in new frame
with open(self.fname, 'r') as fh:
for line in fh:
if line.startswith('A1'):
r1 = np.array([float(n) for n in line.rstrip().split()[-3:]])
break
#print("len squared r1 = {}".format(np.dot(r1,r1)))
i = input['types'].index('A1')
r = input['positions'][i]
#print("len squared new r = {}".format(np.dot(r,r)))
q = input['q']
qconj = q * [1,-1,-1,-1]
r2 = hpmc.util.quatRot(qconj, r)
r12 = r2-r1
self.assertAlmostEqual(np.dot(r12,r12), 0.0, places=5)
def tearDown(self):
# remove pos file
os.remove(self.fname)
# modeled on dense_pack example
class compressor (unittest.TestCase):
def setUp(self):
# create temporary log file
fd, self.fname = tempfile.mkstemp()
self.args = {'ptypes':['A'],
'pnums':[1],
'pvolumes':[4./3. * np.pi * 0.5**3],
'pverts':[],
'num_comp_steps':5e4,
'log_file':self.fname,
'pf_tol':0.01,
'relax':1e4}
# one sphere should compress easily
def test_1sphere(self):
system = create_empty(N=1, box=data.boxdim(L=3), particle_types=['A'])
mc = hpmc.integrate.sphere(seed=1)
mc.set_params(d=0.1)
mc.shape_param.set('A', diameter=1.0)
npt = hpmc.update.boxmc(mc, betaP=5.0, seed=1)
npt.length(delta=0.1, weight=1)
npt.shear(delta=0.1, weight=1, reduce=0.6)
compressor = hpmc.util.compress(mc=mc,
npt_updater=npt,
**self.args)
etas, snaps = compressor.run(1)
etas = np.array(etas)
self.assertGreater(etas.max(), 0.7)
del snaps
del compressor
del npt
del mc
del system
context.initialize()
# Two spheres require all box and particle move types to compress.
# Larger unit cell requires more sensitivity for convergence.
# Initialize with overlaps to test overlap resolution.
def test_2sphere(self):
self.args['ptypes'] = ['A']
self.args['pnums'] = [2]
self.args['pvolumes'] = [4./3. * np.pi * 0.5**3]
self.args['num_comp_steps'] = 4e5
self.args['pf_tol'] = 1e-4
self.args['pmin'] = 5
self.args['relax'] = 1e3
system = create_empty(N=2, box=data.boxdim(L=3), particle_types=['A'])
system.particles[1].position = (0.9,0,0)
mc = hpmc.integrate.sphere(seed=1, nselect=1)
mc.set_params(d=0.1)
mc.shape_param.set('A', diameter=1.0)
npt = hpmc.update.boxmc(mc, betaP=5.0, seed=1)
npt.length(delta=0.1, weight=1)
npt.shear(delta=0.1, weight=1, reduce=0.6)
compressor = hpmc.util.compress(mc=mc,
npt_updater=npt,
**self.args)
etas, snaps = compressor.run(2)
etas = np.array(etas)
self.assertGreater(etas.max(), 0.7)
del snaps
del compressor
del npt
del mc
del system
context.initialize()
# Test two orientable shapes
def test_2cube(self):
self.args['ptypes'] = ['A']
self.args['pnums'] = [2]
self.args['pvolumes'] = [8.]
self.args['num_comp_steps'] = 2e5
self.args['pf_tol'] = 1e-5
self.args['pmin'] = 5
self.args['relax'] = 1e3
system = create_empty(N=2, box=data.boxdim(L=3), particle_types=['A'])
system.particles[1].position = (2.0,0,0)
mc = hpmc.integrate.convex_polyhedron(seed=1,max_verts=8)
mc.set_params(d=0.1, a=0.1)
mc.shape_param.set('A', vertices=[ (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1),
(1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) ])
npt = hpmc.update.boxmc(mc, betaP=5.0, seed=1)
npt.length(delta=0.1, weight=1)
npt.shear(delta=0.1, weight=1, reduce=0.6)
compressor = hpmc.util.compress(mc=mc,
npt_updater=npt,
**self.args)
etas, snaps = compressor.run(2)
etas = np.array(etas)
self.assertGreater(etas.max(), 0.9)
del snaps
del compressor
del npt
del mc
del system
context.initialize()
def tearDown(self):
os.remove(self.fname)
# TO DO
class snapshot (unittest.TestCase):
# show that the snapshot class can properly extract data from a simulation
def test_2_poly_types(self):
pass
class tune (unittest.TestCase):
def setUp(self):
self.system = create_empty(N=2, box=data.boxdim(L=4.5), particle_types=['A'])
self.system.particles[1].position = (2.0,0,0)
self.mc = hpmc.integrate.convex_polyhedron(seed=1)
self.mc.set_params(d=0.1, a=0.1)
self.mc.shape_param.set('A', vertices=[ (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1),
(1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) ])
# show that the tuner will adjust d to achieve a reasonable acceptance ratio
def test_d(self):
# Set up
self.mc.set_params(d=1, a=1, move_ratio=0.5)
target = 0.8
old_acceptance = self.mc.get_translate_acceptance()
old_d = self.mc.get_d()
# Create and run the tuner
tuner = hpmc.util.tune(self.mc, tunables=['d'], max_val=[2], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_acceptance = self.mc.get_translate_acceptance()
self.assertLess(abs(new_acceptance - target), abs(old_acceptance - target))
self.assertNotEqual(old_d, self.mc.get_d())
del tuner
# show that the tuner can reduce a to achieve a reasonable acceptance ratio
def test_a(self):
# Set up
self.mc.set_params(d=0.4, a=1, move_ratio=0.5)
target = 0.8
old_acceptance = self.mc.get_rotate_acceptance()
old_a = self.mc.get_a()
# Create and run the tuner
tuner = hpmc.util.tune(self.mc, tunables=['a'], max_val=[2], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_acceptance = self.mc.get_rotate_acceptance()
self.assertLess(abs(new_acceptance - target), abs(old_acceptance - target))
self.assertNotEqual(old_a, self.mc.get_a())
del tuner
# show that the tuner can tune both d and a simultaneously
def test_multiple_tunables(self):
# Set up
self.mc.set_params(d=1, a=1, move_ratio=0.5)
target = 0.8
old_translate_acceptance = self.mc.get_translate_acceptance()
old_rotate_acceptance = self.mc.get_rotate_acceptance()
old_a = self.mc.get_a()
old_d = self.mc.get_d()
# Create and run the tuner
tuner = hpmc.util.tune(self.mc, tunables=['d', 'a'], max_val=[1, 1], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_translate_acceptance = self.mc.get_translate_acceptance()
new_rotate_acceptance = self.mc.get_rotate_acceptance()
self.assertLess(abs(new_translate_acceptance - target), abs(old_translate_acceptance - target))
self.assertLess(abs(new_rotate_acceptance - target), abs(old_rotate_acceptance - target))
self.assertNotEqual(old_a, self.mc.get_a())
self.assertNotEqual(old_d, self.mc.get_d())
del tuner
# show that the npt tuner can reasonably handle volume changes
def test_npt_noshear(self):
target = 0.5
self.mc.set_params(d=0.1, a=0.01, move_ratio=0.5)
updater = hpmc.update.boxmc(self.mc, betaP=10.0, seed=1)
updater.length(delta=(0.01,0.01,0.01), weight=1)
tuner = hpmc.util.tune_npt(updater, tunables=['dLx', 'dLy', 'dLz'], target=target, gamma=0.0)
for i in range(5):
run(1e2)
tuner.update()
print("npt_noshear: ", *updater.length()['delta'])
acceptance = updater.get_volume_acceptance()
self.assertGreater(acceptance, 0.)
self.assertLess(acceptance, 1.0)
del tuner
del updater
# show that the npt tuner can properly handle shear
def test_npt_shear(self):
target = 0.5
self.mc.set_params(d=0.02, a=0.01, move_ratio=0.5)
updater = hpmc.update.boxmc(self.mc, seed=1, betaP=10)
updater.length(delta=(0.1, 0.1, 0.1), weight=1)
updater.shear(delta=(0.1, 0.1, 0.1), weight=1)
tuner = hpmc.util.tune_npt(updater, tunables=['dxy', 'dyz', 'dxz'], target=target, gamma=0.5)
for i in range(5):
run(1e2)
tuner.update()
acceptance = updater.get_shear_acceptance()
self.assertGreater(acceptance, 0.)
self.assertLess(acceptance, 1.0)
del tuner
del updater
# check the tuner for isotropic mode
def test_npt_isotropic(self):
target = 0.5
self.mc.set_params(d=0.1, a=0.01, move_ratio=0.5)
updater = hpmc.update.boxmc(self.mc, seed=1, betaP=10)
updater.volume(delta=0.1, weight=1)
tuner = hpmc.util.tune_npt(updater, tunables=['dV'], target=target, gamma=0.0)
for i in range(5):
run(1e2)
tuner.update()
print("npt_isotropic: ", updater.volume()['delta'])
acceptance = updater.get_volume_acceptance()
self.assertGreater(acceptance, 0.)
self.assertLess(acceptance, 1.0)
del tuner
del updater
def tearDown(self):
del self.mc
del self.system
context.initialize()
# Test tuning of systems where we specify the type
class tune_by_type(unittest.TestCase):
def setUp(self):
self.system = create_empty(N=2, box=data.boxdim(L=4.5), particle_types=['A', 'B'])
self.system.particles[0].position = (1.0,0,0)
self.system.particles[1].position = (-1.0,0,0)
self.system.particles.types = ['A', 'B']
self.mc = hpmc.integrate.convex_polyhedron(seed=1)
self.mc.set_params(d=0.5, a=0.5)
self.mc.shape_param.set('A', vertices=[ (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1),
(1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) ])
self.mc.shape_param.set('B', vertices=[ (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1),
(1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) ])
# show that the tuner will adjust d to achieve a reasonable acceptance ratio
def test_d(self):
# Set up
self.mc.set_params(d=0.5, a=0.5, move_ratio=0.5)
target = 0.8
old_translate_acceptance = self.mc.get_translate_acceptance()
old_d = self.mc.get_d("A")
old_d_fixed = self.mc.get_d("B")
# Create and run the tuner. Make sure to ignore statistics for the unused type
self.mc.shape_param["B"].ignore_statistics = True
tuner = hpmc.util.tune(self.mc, type='A', tunables=['d'], max_val=[1], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_translate_acceptance = self.mc.get_translate_acceptance()
self.assertLess(abs(new_translate_acceptance - target), abs(old_translate_acceptance - target))
self.assertNotEqual(old_d, self.mc.get_d("A"))
self.assertEqual(old_d_fixed, self.mc.get_d("B"))
del tuner
# Test per-type tuning
def test_a(self):
# Set up
self.mc.set_params(d=0.5, a=0.5, move_ratio=0.5)
target = 0.8
old_rotate_acceptance = self.mc.get_rotate_acceptance()
old_a = self.mc.get_a("A")
old_a_fixed = self.mc.get_a("B")
# Create and run the tuner. Make sure to ignore statistics for the unused type
self.mc.shape_param["B"].ignore_statistics = True
tuner = hpmc.util.tune(self.mc, type='A', tunables=['a'], max_val=[1], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_rotate_acceptance = self.mc.get_rotate_acceptance()
self.assertLess(abs(new_rotate_acceptance - target), abs(old_rotate_acceptance - target))
self.assertNotEqual(old_a, self.mc.get_a("A"))
self.assertEqual(old_a_fixed, self.mc.get_a("B"))
del tuner
# Test per-type tuning
def test_multiple_tunables(self):
# Set up
self.mc.set_params(d=0.5, a=0.5, move_ratio=0.5)
target = 0.8
old_translate_acceptance = self.mc.get_translate_acceptance()
old_rotate_acceptance = self.mc.get_rotate_acceptance()
old_a = self.mc.get_a("A")
old_d = self.mc.get_d("A")
old_a_fixed = self.mc.get_a("B")
old_d_fixed = self.mc.get_d("B")
# Create and run the tuner. Make sure to ignore statistics for the unused type
self.mc.shape_param["B"].ignore_statistics = True
tuner = hpmc.util.tune(self.mc, type='A', tunables=['d', 'a'], max_val=[1, 1], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
# Check that the new acceptance has improved
new_translate_acceptance = self.mc.get_translate_acceptance()
new_rotate_acceptance = self.mc.get_rotate_acceptance()
self.assertLess(abs(new_translate_acceptance - target), abs(old_translate_acceptance - target))
self.assertLess(abs(new_rotate_acceptance - target), abs(old_rotate_acceptance - target))
self.assertNotEqual(old_a, self.mc.get_a("A"))
self.assertNotEqual(old_d, self.mc.get_d("A"))
self.assertEqual(old_a_fixed, self.mc.get_a("B"))
self.assertEqual(old_d_fixed, self.mc.get_d("B"))
del tuner
def tearDown(self):
del self.mc
del self.system
context.initialize()
# Test handling of extreme values
class tune_extreme (unittest.TestCase):
# show that the tuner can reduce d to a small value without making it too small
def test_d_small(self):
return # This test will fail until hoomd.util.tune has better minimum value checking.
minimum = 1e-6 # minimum should come from the tuner ultimately
target = 0.99
N = 27
snapshot = data.make_snapshot(N=N, box=data.boxdim(6.0001,6.0001,6.0001), particle_types=['A'])
positions = np.array([ (i,j,k) for i in range(3) for j in range(3) for k in range(3) ], dtype=np.float32)
positions -= (1,1,1)
positions *= 2.00001
snapshot.particles.position[:] = positions
self.system = init.read_snapshot(snapshot)
self.mc = hpmc.integrate.convex_polyhedron(seed=1)
self.mc.set_params(d=0.1, a=0.1)
self.mc.shape_param.set('A', vertices=[ (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1),
(1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) ])
#tuner = hpmc.util.tune(mc, tunables=['d', 'a'], target=0.2, gamma=0.0)
tuner = hpmc.util.tune(self.mc, tunables=['d'], target=target, gamma=0.0)
for i in range(5):
run(2e2)
tuner.update()
d = self.mc.get_d()
self.assertGreater(d, minimum)
del tuner
del self.mc
del self.system
context.initialize()
if __name__ == '__main__':
unittest.main(argv = ['test.py', '-v'])
| 40.094697 | 113 | 0.582428 | 3,141 | 21,170 | 3.823941 | 0.111748 | 0.021147 | 0.028724 | 0.035634 | 0.751728 | 0.733494 | 0.699775 | 0.670136 | 0.6365 | 0.612688 | 0 | 0.061203 | 0.269107 | 21,170 | 527 | 114 | 40.170778 | 0.715052 | 0.10581 | 0 | 0.6097 | 0 | 0.002309 | 0.046777 | 0 | 0 | 0 | 0 | 0 | 0.115473 | 1 | 0.069284 | false | 0.002309 | 0.016166 | 0 | 0.106236 | 0.006928 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
72faaabdfa45f533f9d5c7c287931f3778de7c1d | 1,078 | py | Python | src/backend/swagger/model/schema/x_ms_parameter_grouping.py | kairu-ms/aaz-dev-tools | 233a70253487ebbc8347bdd1851e07c2a745104f | [
"MIT"
] | null | null | null | src/backend/swagger/model/schema/x_ms_parameter_grouping.py | kairu-ms/aaz-dev-tools | 233a70253487ebbc8347bdd1851e07c2a745104f | [
"MIT"
] | 2 | 2021-12-21T03:49:53.000Z | 2021-12-29T07:32:31.000Z | src/backend/swagger/model/schema/x_ms_parameter_grouping.py | kairu-ms/aaz-dev-tools | 233a70253487ebbc8347bdd1851e07c2a745104f | [
"MIT"
] | 1 | 2021-11-18T09:07:11.000Z | 2021-11-18T09:07:11.000Z | from schematics.models import Model
from schematics.types import StringType, ModelType
class XmsParameterGrouping(Model):
"""
By default operation parameters are generated in the client as method arguments. This behavior can sometimes be undesirable when the number of parameters is high. x-ms-parameter-grouping extension is used to group multiple primitive parameters into a composite type to improve the API.
https://github.com/Azure/autorest/tree/main/docs/extensions#x-ms-parameter-grouping
"""
name = StringType() # When set, specifies the name for the composite type.
postfix = StringType() # Alternative to name parameter. If specified the name of the composite type will be generated as follows {MethodGroup}{Method}{Postfix}
class XmsParameterGroupingField(ModelType):
def __init__(self, **kwargs):
super(XmsParameterGroupingField, self).__init__(
XmsParameterGrouping,
serialized_name="x-ms-parameter-grouping",
deserialize_from="x-ms-parameter-grouping",
**kwargs
)
| 44.916667 | 289 | 0.736549 | 130 | 1,078 | 6.030769 | 0.592308 | 0.015306 | 0.061224 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188312 | 1,078 | 23 | 290 | 46.869565 | 0.896 | 0.517625 | 0 | 0 | 1 | 0 | 0.092184 | 0.092184 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f40e3147d90ea8346e2fc05799922c321a646690 | 9,098 | py | Python | roscraco/response/wireless.py | spantaleev/roscraco | 87a5a7c54931d5586fd7d30c8c67a699bef69c1f | [
"BSD-3-Clause"
] | 13 | 2015-03-01T00:39:43.000Z | 2020-09-06T09:32:52.000Z | roscraco/response/wireless.py | cyroxx/roscraco | fc0279928d0241f4eb475ac40b24ab22cb1d900a | [
"BSD-3-Clause"
] | 3 | 2015-08-08T01:34:35.000Z | 2017-05-14T11:07:50.000Z | roscraco/response/wireless.py | cyroxx/roscraco | fc0279928d0241f4eb475ac40b24ab22cb1d900a | [
"BSD-3-Clause"
] | 11 | 2015-01-29T03:21:08.000Z | 2020-06-30T17:05:19.000Z | from roscraco.helper import validator
from roscraco.exception import RouterSettingsError
class WirelessSettings(object):
"""Represents all available Wireless settings for a router."""
SECURITY_TYPE_NONE = 'none'
SECURITY_TYPE_WEP64 = 'wep64'
SECURITY_TYPE_WEP128 = 'wep128'
SECURITY_TYPE_WPA = 'wpa'
SECURITY_TYPE_WPA2 = 'wpa2'
#: List of properties to export using export()
PROPERTIES = (
'security_type', 'ssid', 'is_enabled', 'is_broadcasting_ssid',
'channel', 'password'
)
def __init__(self):
self._supports_wireless = True
self._ssid = None
self._enabled_status = True
self._ssid_broadcast_status = True
self._channel = None
self._password = None
self._internal_params = {}
self._supported_security_types = set([self.__class__.SECURITY_TYPE_NONE])
self._security_type = None
self._supports_ascii_wep_passwords = True
self._supports_auto_channel = True
self._changes_require_reboot = True
def set_auto_channel_support(self, value):
self._supports_auto_channel = bool(value)
@property
def supports_auto_channel(self):
"""Tells whether auto channel is supported.
Channel 0 is considered the auto channel, because that's
how most routers represent the ``Auto`` value.
Some devices, however, do not support Auto channel at all.
"""
return self._supports_auto_channel
def add_security_support(self, security_type):
"""Adds a new security type to the list of supported
security types.
"""
self._supported_security_types.add(security_type)
@property
def supported_security_types(self):
return self._supported_security_types
def set_security_type(self, security_type):
self._security_type = security_type
@property
def security_type_is_wep(self):
"""Tells whether the current security type is WEP.
Returns true for both WEP64 and WEP128.
"""
return self._security_type in (self.__class__.SECURITY_TYPE_WEP64, self.__class__.SECURITY_TYPE_WEP128)
@property
def security_type_is_wpa(self):
"""Tells whether the current security type is WPA.
Returns true for both WPA and WPA2.
"""
return self._security_type in (self.__class__.SECURITY_TYPE_WPA, self.__class__.SECURITY_TYPE_WPA2)
@property
def security_type(self):
return self._security_type
def set_reboot_requirement_status(self, value):
self._changes_require_reboot = bool(value)
@property
def changes_require_reboot(self):
"""Tells whether the router needs rebooting
for changes to take effect.
"""
return self._changes_require_reboot
def set_support_status(self, value):
self._supports_wireless = bool(value)
@property
def is_supported(self):
"""Tells whether the router supports wireless (most of them do)."""
return self._supports_wireless
def set_ssid(self, value):
self._ssid = value
@property
def ssid(self):
"""The current SSID (wireless network name)."""
return self._ssid
def set_enabled_status(self, value):
self._enabled_status = bool(value)
@property
def is_enabled(self):
return self._enabled_status
def set_ssid_broadcast_status(self, value):
self._ssid_broadcast_status = bool(value)
@property
def is_broadcasting_ssid(self):
"""Tells whether the SSID status is being broadcasted publicly.
If it is, than the network is publicly visible by anyone.
"""
return self._ssid_broadcast_status
def set_channel(self, value):
self._channel = int(value)
@property
def channel(self):
"""The transmission channel for wireless communications."""
return self._channel
def set_password(self, value):
self._password = value
@property
def password(self):
"""The current password for the given security type.
The password is sometimes None for some routers, to indicate
that the password cannot be determined.
Some routers hide the current password from their web-interface,
so we can't detect it (but that doesn't mean that we can't change it
with a new one).
"""
return self._password
@property
def is_wep_password_in_hex(self):
"""Tells whether the current WEP password is in HEX or in ASCII.
Detecting this allows us to set the ASCII/HEX
field in the management interface automatically.
"""
if not self.security_type_is_wep:
raise RouterSettingsError('Not using WEP, but trying to inspect password!')
bit_length = 128 if self.security_type == self.__class__.SECURITY_TYPE_WEP128 else 64
return validator.is_wep_password_in_hex(self.password, bit_length)
def set_ascii_wep_password_support_status(self, value):
self._supports_ascii_wep_passwords = bool(value)
@property
def supports_ascii_wep_passwords(self):
"""Tells whether the current router supports ASCII passwords
for WEP security.
Some devices only support HEX passwords.
"""
return self._supports_ascii_wep_passwords
def set_internal_param(self, key, value):
self._internal_params[key] = value
def get_internal_param(self, key):
return self._internal_params[key] if key in self._internal_params else None
def validate(self):
errors = {}
if not validator.is_valid_ssid(self.ssid):
errors['ssid'] = 'Invalid SSID: %s' % self.ssid
# most routers use channel 0 as the 'Auto' channel
channel_min = 0 if self.supports_auto_channel else 1
if not (channel_min <= self.channel <= 13):
errors['channel'] = 'Invalid channel %d' % self.channel
if self.security_type not in self._supported_security_types:
errors['security_type'] = 'Invalid security type: %s' % self.security_type
else:
result = self.__validate_password()
if result is not None:
errors['password'] = result
return errors
def ensure_valid(self):
errors = self.validate()
if len(errors) != 0:
raise RouterSettingsError(str(errors))
def __validate_password(self):
if self.security_type in (self.__class__.SECURITY_TYPE_WPA, self.__class__.SECURITY_TYPE_WPA2):
if not validator.is_valid_wpa_psk_password(self.password):
return 'Invalid WPA PSK password: %s' % self.password
if self.security_type in (self.__class__.SECURITY_TYPE_WEP64, self.__class__.SECURITY_TYPE_WEP128):
bit_length = 128 if self.security_type == self.__class__.SECURITY_TYPE_WEP128 else 64
if not validator.is_valid_wep_password(self.password, bit_length):
return 'Invalid WEP password for bit length %d: %s' % (bit_length, self.password)
# Some devices only support HEX values for the WEP password field
if not self.supports_ascii_wep_passwords and not self.is_wep_password_in_hex:
return 'ASCII WEP passwords are not supported!'
return None
def eq(self, other, skip_attrs=()):
# WEP passwords that use HEX are not case-sensitive, so we want
# to validate them separately
if self.security_type_is_wep and other.security_type_is_wep and \
self.is_wep_password_in_hex and other.is_wep_password_in_hex:
skip_attrs = skip_attrs + ('password',)
try:
if self.password.lower() != other.password.lower():
return False
except AttributeError:
return False
# Don't try to compare passwords when there's no security type
if self.security_type == self.__class__.SECURITY_TYPE_NONE and \
other.security_type == self.__class__.SECURITY_TYPE_NONE:
skip_attrs = skip_attrs + ('password',)
for attr in self.__class__.PROPERTIES:
if attr in skip_attrs:
continue
if getattr(self, attr, None) != getattr(other, attr, None):
#print('[%s] %s != %s' % (
# attr,
# getattr(self, attr, None),
# getattr(other, attr, None)
#))
return False
return True
def __eq__(self, other):
return self.eq(other)
def __ne__(self, other):
return not self == other
def __hash__(self):
return id(self)
def export(self):
"""Exports the most important settings attributes,
omitting any internal attributes.
"""
export = {}
for attr in self.__class__.PROPERTIES:
export[attr] = getattr(self, attr, None)
return export
| 34.074906 | 111 | 0.647835 | 1,119 | 9,098 | 4.971403 | 0.175156 | 0.107855 | 0.046018 | 0.049074 | 0.287255 | 0.166097 | 0.117922 | 0.110552 | 0.075139 | 0.072263 | 0 | 0.008044 | 0.275775 | 9,098 | 266 | 112 | 34.203008 | 0.836242 | 0.214223 | 0 | 0.149351 | 0 | 0 | 0.05049 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233766 | false | 0.175325 | 0.012987 | 0.045455 | 0.474026 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f413bf4b1d3c8512f134ed08956caf4205134216 | 1,216 | py | Python | models/vgg_skip/vgg_skip.py | po0ya/tensorflow_vgg_face | b95163fe4596844ab61755b8c9c55e4fc0cd5944 | [
"MIT"
] | 3 | 2017-08-18T07:24:25.000Z | 2019-02-06T16:36:35.000Z | models/vgg_skip/vgg_skip.py | po0ya/tensorflow_vgg_face | b95163fe4596844ab61755b8c9c55e4fc0cd5944 | [
"MIT"
] | null | null | null | models/vgg_skip/vgg_skip.py | po0ya/tensorflow_vgg_face | b95163fe4596844ab61755b8c9c55e4fc0cd5944 | [
"MIT"
] | null | null | null | from kaffe.tensorflow import Network

class VGG16_skip(Network):
    def setup(self):
        (self.feed('data')
             .conv(3, 3, 64, 1, 1, name='conv1_1')
             .conv(3, 3, 64, 1, 1, name='conv1_2')
             .max_pool(2, 2, 2, 2, name='pool1')
             .conv(3, 3, 128, 1, 1, name='conv2_1')
             .conv(3, 3, 128, 1, 1, name='conv2_2')
             .max_pool(2, 2, 2, 2, name='pool2')
             .conv(3, 3, 256, 1, 1, name='conv3_1')
             .conv(3, 3, 256, 1, 1, name='conv3_2')
             .conv(3, 3, 256, 1, 1, name='conv3_3')
             .max_pool(2, 2, 2, 2, name='pool3')
             .conv(3, 3, 512, 1, 1, name='conv4_1')
             .conv(3, 3, 512, 1, 1, name='conv4_2')
             .conv(3, 3, 512, 1, 1, name='conv4_3')
             .max_pool(2, 2, 2, 2, name='pool4')
             .conv(3, 3, 512, 1, 1, name='conv5_1')
             .conv(3, 3, 512, 1, 1, name='conv5_2')
             .conv(3, 3, 512, 1, 1, name='conv5_3')
             .max_pool(2, 2, 2, 2, name='pool5')
             .skip_conv(['pool3', 'pool4'], 1000)
             .conv(1, 1, 512, 1, 1, name='skip_dim_reduction')
             .fc(4096, name='fc6')
             .fc(4096, name='fc7'))
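Each `max_pool(2, 2, 2, 2)` above halves the spatial resolution, while the stride-1 3x3 convolutions (padded, as in Caffe-converted VGG) preserve it. A small sketch of the feature-map sizes through the five pools, assuming the usual 224x224 VGG-Face input (the input size is not stated in this file):

```python
def pooled_sizes(h, w, n_pools):
    # Track (height, width) after each 2x2 stride-2 max pool;
    # convolutions with stride 1 and same-padding do not change them.
    sizes = [(h, w)]
    for _ in range(n_pools):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

# Assumed 224x224 input: 224 -> 112 -> 56 -> 28 -> 14 -> 7
print(pooled_sizes(224, 224, 5))  # ends at (7, 7)
```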

# ==== OverFitting Simulation.py (repo: escha2019/python_codes, license: BSD-3-Clause) ====
#!/usr/bin/env python
# coding: utf-8

# In[1]:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd


class overfitting:
    def __init__(self):
        pass

    def TrueTargetFunct(self):
        pass

    def AddNoiseTargetFunct(self):
        pass

    def plot(self, trueTarget, proposedTarget):
        pass

    def TrainTestSplit(self, X, y, degree, test_size=0.20):
        """Expand X with polynomial features of the given degree,
        drop cross-interaction terms, and split into train/test sets.
        """
        poly = PolynomialFeatures(degree)
        data = poly.fit_transform(X)
        data = pd.DataFrame(data, columns=poly.get_feature_names())
        temp = poly.get_feature_names()
        # Remove cross interactions (feature names with two factors)
        [temp.remove(i) for i in poly.get_feature_names() if len(i.split()) == 2]
        data = data[temp]
        X_train, X_test, y_train, y_test = train_test_split(
            data, y, test_size=test_size, random_state=42)
        return X_train, X_test, y_train, y_test

    def removeDataPoints(self, data, size):
        """data -- a list of tuples
        E.g. [(x, y), (x, y), ...]
        """
        # Drop `size` randomly chosen points (without replacement).
        drop = set(np.random.default_rng().choice(len(data), size=size, replace=False))
        return [point for i, point in enumerate(data) if i not in drop]

    def exerciseFourTwo(self, order, sigma, sampleSize):
        """Generate target function data with random noise using a Legendre polynomial.

        PARAMS:
            order - order of polynomial
            sigma - noise sigma for yn data
            sampleSize - # of data points to generate
        """
        # generate polynomial coefficients
        c = np.random.normal(0, 1, order).reshape(-1, 1)
        c = normalize(c, axis=0).flatten()
        # generate uniform input/feature vector
        x = np.random.uniform(-1, 1, sampleSize)
        # generate yi's using the Legendre polynomial
        y = np.polynomial.legendre.legval(x, c=c)
        # add noise to label/target
        y = y + sigma * np.random.normal(0, 1, len(y))
        return x.reshape(-1, 1), y.reshape(-1, 1)

    def fitExercisefourTwo(self, X, y):
        """Fit the data from exercise 4.2 using linear regression."""
        reg = LinearRegression().fit(X, y)
        return reg

    def predict(self, model, X, y):
        error = np.mean(np.power(model.predict(X) - y, 2))
        return error

    def legendrePoly2(self, order, domain):
        terms = [1, 1 + domain]
        for i in range(2, order + 1):
            terms.append(((2*i - 1)/i)*domain*terms[i-1] - ((i - 1)/i)*terms[i-2])
        return sum(terms), terms


# In[2]:

Qf = range(1, 51)
N = range(20, 125, 5)
sigma = np.arange(0, 2.05, 0.05)

# In[3]:

v = overfitting()
error = []
for leg in Qf:
    for n in N:
        for sig in sigma:
            # generate data
            X, y = v.exerciseFourTwo(order=leg, sigma=sig, sampleSize=n)
            # train test split
            X_train2, X_test2, y_train2, y_test2 = v.TrainTestSplit(X, y, 2)
            X_train10, X_test10, y_train10, y_test10 = v.TrainTestSplit(X, y, 10)
            # run two models
            model2, model10 = (v.fitExercisefourTwo(X_train2, y_train2),
                               v.fitExercisefourTwo(X_train10, y_train10))
            # get in-sample error
            errorTrainDeg2, errorTrainDeg10 = (v.predict(model2, X_train2, y_train2),
                                               v.predict(model10, X_train10, y_train10))
            # get out-of-sample error
            errorTestDeg2, errorTestDeg10 = (v.predict(model2, X_test2, y_test2),
                                             v.predict(model10, X_test10, y_test10))
            # collect errors
            error.append([errorTrainDeg2, errorTrainDeg10, errorTestDeg2,
                          errorTestDeg10, leg, n, sig])

pd.DataFrame(error, columns=['g2Train', 'g10Train', 'g2Test', 'g10Test',
                             'degLegredre', 'size', 'sigma']).to_excel(
    "overfittingSimulation.xlsx", index=False)
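The simulation above relies on `np.polynomial.legendre.legval` to evaluate the noisy target. As a sanity check (a sketch, independent of the simulation itself): calling `legval` with a one-hot coefficient vector evaluates a single Legendre polynomial, which must satisfy the same Bonnet recurrence that the `legendrePoly2` loop is built around:

```python
import numpy as np

def legendre_P(n, x):
    # Evaluate the Legendre polynomial P_n at x via Bonnet's recurrence:
    # i * P_i(x) = (2i - 1) * x * P_{i-1}(x) - (i - 1) * P_{i-2}(x)
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for i in range(2, n + 1):
        p_prev, p = p, ((2*i - 1)*x*p - (i - 1)*p_prev) / i
    return p

x = np.linspace(-1, 1, 11)
for n in range(6):
    c = np.zeros(n + 1)
    c[n] = 1.0  # one-hot coefficients select P_n
    assert np.allclose(np.polynomial.legendre.legval(x, c), legendre_P(n, x))
print("recurrence matches legval")
```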

# ==== bolero/representation/baseline.py (repo: dettmann/bolero, license: BSD-3-Clause) ====
# Author: Alexander Fabisch <afabisch@informatik.uni-bremen.de>
import numpy as np

from .behavior import BlackBoxBehavior
from ..utils.validation import check_random_state


class DummyBehavior(BlackBoxBehavior):
    """Dummy behavior allows using environments which do not require behaviors.

    Some environments (e.g. the catapult environment) do not require behavior
    search to learn actual behaviors but rather only to learn parameters
    (velocity and angle of a shot in the case of the catapult). This behavior
    encapsulates the parameters learned by the optimizer and returns them via
    get_outputs() to the environment whenever required. It thus connects
    environment and optimizer directly.
    """
    def __init__(self, **kwargs):
        self.params = None
        for k, v in kwargs.items():
            setattr(self, k, v)

    def init(self, n_inputs, n_outputs):
        """Initialize the behavior.

        Parameters
        ----------
        n_inputs : int
            number of inputs

        n_outputs : int
            number of outputs
        """
        self.n_outputs = n_outputs
        self.params = np.ndarray(self.n_outputs, dtype=np.float64)
        if hasattr(self, "initial_params"):
            self.params[:] = self.initial_params
            self.initialized = True
        else:
            self.initialized = False

    def get_n_params(self):
        """Get number of parameters.

        Returns
        -------
        n_params : int
            Number of parameters that will be optimized.
        """
        return self.n_outputs

    def set_meta_parameters(self, keys, meta_parameters):
        """Set meta parameters (none defined for dummy behavior)."""
        if len(keys) > 0:
            raise NotImplementedError("DummyBehavior does not accept any meta "
                                      "parameters")

    def get_params(self):
        """Get current parameters.

        Returns
        -------
        params : array-like, shape = (n_params,)
            Current parameters.
        """
        if not self.initialized:
            raise ValueError("Initial parameters have not been set")
        return self.params

    def set_params(self, params):
        """Set new parameter values.

        Parameters
        ----------
        params : array-like, shape = (n_params,)
            New parameters.
        """
        self.params[:] = params
        self.initialized = True

    def set_inputs(self, inputs):
        """Set input for the next step.

        Parameters
        ----------
        inputs : array-like, shape = (0,)
            inputs, e.g. current state of the system
        """

    def get_outputs(self, outputs):
        """Get outputs of the last step.

        Parameters
        ----------
        outputs : array-like, shape = (n_outputs,)
            outputs, e.g. next action, will be updated
        """
        outputs[:] = self.params

    def step(self):
        """Does nothing in DummyBehavior."""

    def reset(self):
        """Reset behavior.

        Does nothing.
        """


class ConstantBehavior(BlackBoxBehavior):
    """Generates constant outputs.

    Parameters
    ----------
    outputs : array-like, shape (n_outputs,), optional (default: zeros)
        Values of constant outputs.
    """
    def __init__(self, outputs=None):
        self.outputs = outputs

    def init(self, n_inputs, n_outputs):
        """Initialize the behavior.

        Parameters
        ----------
        n_inputs : int
            number of inputs

        n_outputs : int
            number of outputs
        """
        self.n_outputs = n_outputs
        if self.outputs is None:
            self.outputs = np.zeros(self.n_outputs)

    def set_meta_parameters(self, keys, meta_parameters):
        """Set meta parameters (none defined for constant behavior)."""
        if len(keys) > 0:
            raise NotImplementedError("ConstantBehavior does not accept any "
                                      "meta parameters")

    def set_inputs(self, inputs):
        """Set input for the next step.

        Parameters
        ----------
        inputs : array-like, shape = (n_inputs,)
            inputs, e.g. current state of the system
        """

    def get_outputs(self, outputs):
        """Get outputs of the last step.

        Parameters
        ----------
        outputs : array-like, shape = (n_outputs,)
            outputs, e.g. next action, will be updated
        """
        outputs[:] = self.outputs

    def step(self):
        """Compute output for the received input.

        Use the inputs and meta-parameters to compute the outputs.
        """

    def get_n_params(self):
        """Get number of parameters.

        Returns
        -------
        n_params : int
            Number of parameters that will be optimized.
        """
        return 0

    def get_params(self):
        """Get current parameters.

        Returns
        -------
        params : array-like, shape = (n_params,)
            Current parameters.
        """
        return np.array([])

    def set_params(self, params):
        """Set new parameter values.

        Parameters
        ----------
        params : array-like, shape = (n_params,)
            New parameters.
        """
        if len(params) > 0:
            raise ValueError("Length of parameter vector must be 0")

    def reset(self):
        """Reset behavior.

        Does nothing.
        """


class RandomBehavior(BlackBoxBehavior):
    """Generates random outputs."""
    def __init__(self, random_state=None):
        self.random_state = random_state

    def init(self, n_inputs, n_outputs):
        """Initialize the behavior.

        Parameters
        ----------
        n_inputs : int
            number of inputs

        n_outputs : int
            number of outputs
        """
        self.n_outputs = n_outputs
        self.random_state = check_random_state(self.random_state)

    def set_meta_parameters(self, keys, meta_parameters):
        """Set meta parameters (none defined for random behavior)."""
        if len(keys) > 0:
            raise NotImplementedError("RandomBehavior does not accept any meta "
                                      "parameters")

    def set_inputs(self, inputs):
        """Set input for the next step.

        Parameters
        ----------
        inputs : array-like, shape = (n_inputs,)
            inputs, e.g. current state of the system
        """

    def get_outputs(self, outputs):
        """Get outputs of the last step.

        Parameters
        ----------
        outputs : array-like, shape = (n_outputs,)
            outputs, e.g. next action, will be updated
        """
        outputs[:] = self.random_state.randn(self.n_outputs)

    def step(self):
        """Compute output for the received input.

        Use the inputs and meta-parameters to compute the outputs.
        """

    def get_n_params(self):
        """Get number of parameters.

        Returns
        -------
        n_params : int
            Number of parameters that will be optimized.
        """
        return 0

    def get_params(self):
        """Get current parameters.

        Returns
        -------
        params : array-like, shape = (n_params,)
            Current parameters.
        """
        return np.array([])

    def set_params(self, params):
        """Set new parameter values.

        Parameters
        ----------
        params : array-like, shape = (n_params,)
            New parameters.
        """
        if len(params) > 0:
            raise ValueError("Length of parameter vector must be 0")

    def reset(self):
        """Reset behavior.

        Does nothing.
        """

# ==== pylint/tests/functional/b/bad_reversed_sequence.py (vendored pylint functional test, license: MIT) ====
""" Checks that reversed() receives a proper argument """
# pylint: disable=missing-docstring, useless-object-inheritance
# pylint: disable=too-few-public-methods,no-self-use,no-absolute-import
from collections import deque, OrderedDict
from enum import IntEnum

class GoodReversed(object):
    """ Implements __reversed__ """

    def __reversed__(self):
        return [1, 2, 3]

class SecondGoodReversed(object):
    """ Implements __len__ and __getitem__ """

    def __len__(self):
        return 3

    def __getitem__(self, index):
        return index

class BadReversed(object):
    """ implements only len() """

    def __len__(self):
        return 3

class SecondBadReversed(object):
    """ implements only __getitem__ """

    def __getitem__(self, index):
        return index

class ThirdBadReversed(dict):
    """ dict subclass """

def uninferable(seq):
    """ This can't be inferred at this moment,
    make sure we don't have a false positive.
    """
    return reversed(seq)

def test(path):
    """ test function """
    seq = reversed()  # No argument given
    seq = reversed(None)  # [bad-reversed-sequence]
    seq = reversed([1, 2, 3])
    seq = reversed((1, 2, 3))
    seq = reversed(set())  # [bad-reversed-sequence]
    seq = reversed({'a': 1, 'b': 2})  # [bad-reversed-sequence]
    seq = reversed(iter([1, 2, 3]))  # [bad-reversed-sequence]
    seq = reversed(GoodReversed())
    seq = reversed(SecondGoodReversed())
    seq = reversed(BadReversed())  # [bad-reversed-sequence]
    seq = reversed(SecondBadReversed())  # [bad-reversed-sequence]
    seq = reversed(range(100))
    seq = reversed(ThirdBadReversed())  # [bad-reversed-sequence]
    seq = reversed(lambda: None)  # [bad-reversed-sequence]
    seq = reversed(deque([]))
    seq = reversed("123")
    seq = uninferable([1, 2, 3])
    seq = reversed(path.split("/"))
    return seq

def test_dict_ancestor_and_reversed():
    """Don't emit for subclasses of dict, with __reversed__ implemented."""
    class Child(dict):
        def __reversed__(self):
            return reversed(range(10))

    seq = reversed(OrderedDict())
    return reversed(Child()), seq

def test_dont_emit_for_reversing_enums():
    """Don't emit when reversing enum classes"""
    class Color(IntEnum):
        RED = 1
        GREEN = 2
        BLUE = 3

    for color in reversed(Color):
        yield color
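The rule this fixture encodes is CPython's own contract: `reversed()` accepts objects that define `__reversed__`, or the sequence protocol (`__len__` plus `__getitem__`). A quick runtime illustration (outside the linted fixture, with a hypothetical class name):

```python
class Countdown:
    # The sequence protocol alone (__len__ plus __getitem__) is enough
    # for reversed(), exactly as SecondGoodReversed above relies on.
    def __len__(self):
        return 3

    def __getitem__(self, index):
        if index >= 3:
            raise IndexError(index)
        return index + 1

print(list(reversed(Countdown())))  # [3, 2, 1]

try:
    reversed({1, 2, 3})  # sets define __len__ but not __getitem__
except TypeError as exc:
    print("TypeError:", exc)
```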

# ==== tests/st/ops/gpu/test_lstm_op.py (repo: unseenme/mindspore, license: Apache-2.0) ====
# Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import mindspore.nn as nn
from mindspore.common.api import ms_function
import numpy as np
import mindspore.context as context
from mindspore.common.initializer import initializer
from mindspore.ops import functional as F
from mindspore.ops import composite as C
from mindspore.ops import operations as P
from mindspore.common.tensor import Tensor
from mindspore.common.parameter import ParameterTuple, Parameter
context.set_context(device_target='GPU')

class LstmNet(nn.Cell):
    def __init__(self, seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout):
        super(LstmNet, self).__init__()

        num_directions = 1
        if bidirectional:
            num_directions = 2

        self.lstm = P.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)

        input_np = np.array([[[0.6755, -1.6607, 0.1367, -0.9209, -1.7088, 0.3953, 2.7120, 0.1103, 0.1504, -0.3611],
                              [0.4276, -0.7850, -0.3758, 0.8604, -0.1361, -1.3618, -0.6251, -0.8391, 0.8142, 0.4068]],

                             [[-0.6424, -0.6095, 0.6639, -0.7253, 2.1190, -0.2840, 0.3858, 0.1691, 0.6764, 1.2903],
                              [0.7918, 0.4147, -0.5089, -0.3582, -1.4279, -0.7975, -0.0390, -0.4718, 0.4322, -0.7995]],

                             [[-1.5612, 0.0120, -0.7289, -1.2479, -0.6197, -0.6099, 0.9543, 0.4362, -1.3141, 0.4273],
                              [-0.6656, -0.6626, -0.5883, -0.6922, 0.5512, 1.7031, -1.2812, -0.2004, -0.9224, 0.4106]],

                             [[-0.9667, -0.6296, -0.7310, 1.2503, -0.1650, 1.2050, -0.1704, -0.5215, 0.1595, 0.3904],
                              [0.1026, -0.6821, -0.4387, -1.1637, -0.5000, 0.0590, 0.5219, -0.6835, 2.4406, 0.7135]],

                             [[-0.4710, 0.6558, -0.3144, -1.2213, 0.1556, -0.3836, -0.1081, -0.1440, -1.1231, 0.6279],
                              [-0.8449, -0.2184, -0.1806, -0.0615, -0.5660, -0.3556, 1.6891, -1.0286, 1.3361,
                               -0.4313]]]).astype(np.float32)

        self.x = Parameter(initializer(Tensor(input_np), [seq_len, batch_size, input_size]), name='x')

        self.h = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='h')

        self.c = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='c')

        wih = np.array([[3.4021e-01, -4.6622e-01, 4.5117e-01, 2.3627e-01, 3.7844e-01,
                         2.8770e-01, 4.1631e-01, -6.2628e-01, -4.8008e-01, -4.9148e-01],
                        [-6.4257e-02, -2.4807e-01, 1.3550e-02, 6.8946e-01, -1.2608e-02,
                         -7.1719e-02, -1.3566e-01, -4.9215e-01, 2.8509e-01, -6.3540e-01],
                        [-6.9863e-01, 5.9773e-01, -3.9062e-01, -7.6151e-02, 5.6803e-04,
                         -7.0420e-01, -6.1822e-01, 4.1854e-01, 4.0596e-01, 6.4867e-01],
                        [-3.0253e-01, -1.9464e-01, 7.0591e-01, 4.9368e-01, -5.9758e-01,
                         1.3251e-02, 3.5685e-01, -3.7640e-01, -4.4612e-01, 5.1794e-01],
                        [-3.2140e-01, 5.5578e-01, 6.3589e-01, -6.4249e-01, 5.7258e-01,
                         2.4256e-01, -2.7954e-01, 2.5202e-01, 2.9235e-01, -3.9979e-01],
                        [1.6547e-01, -7.9030e-02, -2.0045e-01, 6.2484e-01, -1.0727e-01,
                         -5.0010e-01, -2.9165e-01, -1.7620e-01, 1.5939e-01, -2.2744e-01],
                        [-4.0835e-01, 3.6751e-01, 4.7989e-01, 5.8886e-01, 5.3598e-01,
                         -2.9055e-01, -2.8129e-01, 6.0219e-01, 4.9193e-01, 3.3115e-01],
                        [-5.6894e-01, -5.0359e-01, 4.7491e-01, 5.8110e-01, -5.4921e-01,
                         -6.1343e-01, -5.8236e-02, -3.7682e-01, 4.8338e-01, -2.1551e-01]]).astype(np.float32).reshape(
                            [1, -1])

        whh = np.array([[-0.4820, -0.2350],
                        [-0.1195, 0.0519],
                        [0.4511, -0.3961],
                        [-0.5962, 0.0906],
                        [0.2162, -0.1178],
                        [0.6237, 0.0711],
                        [0.1867, -0.1225],
                        [0.1831, 0.0850]]).astype(np.float32).reshape([1, -1])

        bih = np.array([-0.2862, 0.0034, 0.2059, -0.6544, 0.3244, -0.2472, 0.0852, -0.3050]).astype(np.float32).reshape(
            [1, -1])
        bhh = np.array([-0.6575, 0.1562, -0.6434, 0.0212, -0.2493, -0.5626, 0.1530, -0.5235]).astype(
            np.float32).reshape([1, -1])

        w_np = np.concatenate((wih, whh, bih, bhh), axis=1).reshape([-1, 1, 1])
        self.w = Parameter(initializer(Tensor(w_np), w_np.shape), name='w')

    @ms_function
    def construct(self):
        return self.lstm(self.x, self.h, self.c, self.w)

@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
def test_lstm():
    seq_len = 5
    batch_size = 2

    input_size = 10
    hidden_size = 2
    num_layers = 1
    has_bias = True
    bidirectional = False
    dropout = 0.0

    num_directions = 1
    if bidirectional:
        num_directions = 2

    net = LstmNet(seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)
    y, h, c, _, _ = net()
    expect_y = np.array([[[-2.1429e-02, 1.1760e-01],
                          [3.1144e-01, 6.3090e-01]],

                         [[-5.0190e-04, -4.5812e-02],
                          [2.0324e-02, 2.0392e-01]],

                         [[-1.0370e-02, -6.0141e-02],
                          [6.0931e-02, -1.8913e-02]],

                         [[-1.6031e-01, -2.3428e-01],
                          [4.1886e-02, -2.2162e-01]],

                         [[-3.9243e-02, -3.2950e-02],
                          [-4.1257e-02, -4.5276e-01]]])
    error = np.ones([num_layers, batch_size, hidden_size]) * 1.0e-4
    diff = y.asnumpy() - expect_y
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_h = np.array([[[-0.0392, -0.0329],
                          [-0.0413, -0.4528]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-4
    diff = h.asnumpy() - expect_h
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_c = np.array([[[-0.0984, -0.3665],
                          [-0.1010, -0.6792]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-4
    diff = c.asnumpy() - expect_c
    assert np.all(diff < error)
    assert np.all(-diff < error)
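The flat weight vector assembled in `LstmNet.__init__` concatenates `wih`, `whh`, `bih`, and `bhh`, so its length must equal the standard single-layer unidirectional LSTM parameter count (four gates, each with an input and a recurrent weight matrix, plus two bias vectors). A quick arithmetic check for the shapes used above:

```python
def lstm_layer_param_count(input_size, hidden_size, has_bias=True):
    # 4 gates (i, f, g, o): each has an input-to-hidden and a
    # hidden-to-hidden weight matrix, plus b_ih and b_hh if biased.
    n = 4 * hidden_size * input_size + 4 * hidden_size * hidden_size
    if has_bias:
        n += 2 * 4 * hidden_size
    return n

# input_size=10, hidden_size=2 as in test_lstm: 80 + 16 + 8 + 8
print(lstm_layer_param_count(10, 2))  # 112
```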

class BiLstmNet(nn.Cell):
    def __init__(self, seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout):
        super(BiLstmNet, self).__init__()

        num_directions = 1
        if bidirectional:
            num_directions = 2

        self.lstm = P.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)

        input_np = np.array([[[-1.7322, 1.6642, -1.1861, 0.2955, -0.7907, 0.2982, -1.3413, 1.0665, -0.0436, -0.1883],
                              [0.2195, 0.5917, -0.6739, 0.2388, -0.5364, -1.3309, -0.6018, -0.3081, -0.9648, -1.1627]],

                             [[-0.5094, -2.6025, -0.9302, -1.1937, 0.6501, -0.1903, -0.0661, 0.1080, 0.9829, -0.2280],
                              [1.3961, 0.2239, -0.1947, -0.3206, 0.5791, 0.3396, 0.1728, -1.2007, -1.0994, -1.3278]],

                             [[0.1870, -1.1090, -0.9705, 0.2207, 0.3743, 0.1158, -0.5443, -0.5559, 0.1538, -0.3975],
                              [-0.2347, -0.1245, -0.2335, 0.3164, 1.0997, -0.3928, -1.8517, 1.1136, -1.5051, -0.0071]],

                             [[1.2739, 2.5438, -0.4289, -0.7981, -1.3682, -2.2509, 0.2028, 1.3410, 2.9502, -1.1650],
                              [0.1254, 0.2726, 0.0251, 0.9323, 0.7315, 0.8231, -0.2123, -0.6885, 0.9893, -0.2047]],

                             [[0.1870, -0.9066, 0.7155, 0.5438, -0.9757, -0.5828, -0.3417, 1.5681, 1.0326, -0.0179],
                              [-0.7746, -1.0695, -0.5278, 2.5307, -0.1002, -1.5773, 0.7717, 1.0266, -0.0798,
                               1.2333]]]).astype(np.float32)

        self.x = Parameter(initializer(Tensor(input_np), [seq_len, batch_size, input_size]), name='x')

        self.h = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='h')

        self.c = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='c')

        wih = np.array([[-0.2959, -0.1142, 0.3662, 0.5406, 0.1738, 0.2697, -0.6960, -0.0464, 0.3486, 0.1888],
                        [0.3043, 0.1505, -0.1207, -0.2456, 0.2735, 0.6673, -0.3352, -0.6153, -0.5731, -0.2726],
                        [-0.2657, -0.5570, 0.6785, -0.1861, -0.0652, 0.5757, 0.6442, -0.4068, -0.3260, 0.7054],
                        [0.6607, 0.6927, -0.1354, 0.2484, 0.2053, 0.5743, -0.0212, 0.3340, -0.5685, -0.5668],
                        [0.6701, -0.3013, -0.1202, -0.4200, -0.4280, -0.6329, -0.6074, -0.4997, -0.6215, -0.6259],
                        [0.0299, -0.6071, -0.4683, -0.3363, -0.0044, -0.0007, 0.2700, 0.0202, -0.2880, -0.6869],
                        [0.3025, -0.2461, -0.5128, 0.6327, -0.1438, -0.5100, 0.1924, 0.2023, 0.3129, 0.2271],
                        [0.3777, 0.0546, 0.4790, -0.1895, 0.3588, 0.4490, 0.6850, 0.6240, -0.2739, -0.4474]]).astype(
                            np.float32).reshape([1, -1])

        whh = np.array([[0.6346, -0.6366],
                        [-0.0248, -0.6156],
                        [-0.3821, 0.6327],
                        [-0.6132, -0.5071],
                        [0.4029, 0.0906],
                        [-0.5671, 0.2556],
                        [0.0268, -0.4347],
                        [0.1152, -0.3124]]).astype(np.float32).reshape([1, -1])

        bih = np.array([-0.3839, -0.5365, -0.6691, 0.1697, -0.1564, -0.0451, -0.5921, -0.5367]).astype(
            np.float32).reshape([1, -1])
        bhh = np.array([0.5952, -0.4905, 0.0423, -0.0293, -0.6638, 0.4348, -0.4291, -0.5541]).astype(
            np.float32).reshape([1, -1])

        wih_reverse = np.array([[-0.2938, 0.0048, 0.2704, -0.3387, -0.4529, -0.2586, 0.1352, -0.1208, -0.1423, -0.0220],
                                [-0.3701, 0.0201, -0.0255, 0.1340, -0.1938, -0.7056, -0.2303, 0.4814, 0.3636, -0.5018],
                                [-0.0284, -0.0108, -0.5788, 0.2389, 0.2604, 0.6774, -0.5525, 0.6265, -0.6126, 0.3197],
                                [-0.6906, 0.6991, -0.6138, 0.0044, 0.5714, 0.4176, 0.5451, -0.5114, -0.2286, 0.1105],
                                [0.3547, 0.6233, -0.4543, -0.6799, 0.1109, 0.5601, 0.0212, 0.6926, 0.0597, -0.4383],
                                [-0.1370, -0.5852, 0.0596, 0.5494, 0.5789, -0.0534, 0.1092, 0.3544, -0.1571, 0.4444],
                                [-0.5886, -0.4765, -0.3837, -0.6634, 0.0963, -0.1385, -0.0837, -0.1354, 0.0547,
                                 -0.2870],
                                [0.2049, -0.7057, -0.1736, 0.4724, 0.1957, -0.3037, 0.4626, -0.6465, 0.4575,
                                 0.4230]]).astype(np.float32).reshape([1, -1])

        whh_reverse = np.array([[0.2339, -0.0307],
                                [-0.5850, 0.6328],
                                [0.5856, -0.5601],
                                [0.4875, -0.6929],
                                [0.0314, 0.2531],
                                [-0.2523, 0.3244],
                                [0.5199, 0.5146],
                                [0.3968, 0.4511]]).astype(np.float32).reshape([1, -1])

        bih_reverse = np.array([-0.1760, 0.2828, 0.2450, -0.4016, -0.4664, 0.4031, -0.1945, -0.1509]).astype(
            np.float32).reshape([1, -1])
        bhh_reverse = np.array([0.6427, 0.4806, 0.6278, 0.1596, 0.0038, -0.3418, 0.0549, -0.3900]).astype(
            np.float32).reshape([1, -1])

        w_np = np.concatenate((wih, whh, wih_reverse, whh_reverse, bih, bhh, bih_reverse, bhh_reverse), axis=1).reshape(
            [-1, 1, 1])
        self.w = Parameter(initializer(Tensor(w_np), w_np.shape), name='w')

    @ms_function
    def construct(self):
        return self.lstm(self.x, self.h, self.c, self.w)

@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
def test_bilstm():
    seq_len = 5
    batch_size = 2

    input_size = 10
    hidden_size = 2
    num_layers = 1
    has_bias = True
    bidirectional = True
    dropout = 0.0

    num_directions = 1
    if bidirectional:
        num_directions = 2

    net = BiLstmNet(seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)
    y, h, c, _, _ = net()
    expect_y = np.array([[[-0.0826, 0.0209, 0.1715, -0.0072],
                          [0.1035, 0.0594, -0.0867, -0.1077]],

                         [[-0.1647, 0.0293, -0.2189, 0.3809],
                          [0.0466, 0.4461, 0.0784, 0.0905]],

                         [[-0.0182, 0.0512, 0.1758, -0.1147],
                          [0.0460, 0.1588, -0.0314, 0.0886]],

                         [[-0.0330, 0.0551, 0.2084, -0.1154],
                          [-0.1641, 0.1118, -0.0122, 0.4916]],

                         [[-0.2997, 0.0223, 0.1328, 0.3377],
                          [-0.6669, 0.0089, 0.1138, 0.7786]]])
    error = np.ones([num_layers, batch_size, hidden_size * num_directions]) * 1.0e-4
    diff = y.asnumpy() - expect_y
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_h = np.array([[[-0.2997, 0.0223],
                          [-0.6669, 0.0089]],

                         [[0.1715, -0.0072],
                          [-0.0867, -0.1077]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-4
    diff = h.asnumpy() - expect_h
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_c = np.array([[[-0.6049, 0.0825],
                          [-0.9433, 0.1006]],

                         [[0.3037, -0.2036],
                          [-0.1633, -0.5663]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-3
    diff = c.asnumpy() - expect_c
    assert np.all(diff < error)
    assert np.all(-diff < error)

class MultiLayerBiLstmNet(nn.Cell):
    def __init__(self, seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout):
        super(MultiLayerBiLstmNet, self).__init__()

        num_directions = 1
        if bidirectional:
            num_directions = 2

        self.lstm = P.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)

        input_np = np.array([[[-0.1887, -0.4144, -0.0235, 0.7489, 0.7522, 0.5969, 0.3342, 1.2198, 0.6786, -0.9404],
                              [-0.8643, -1.6835, -2.4965, 2.8093, 0.1741, 0.2707, 0.7387, -0.0939, -1.7990, 0.4765]],

                             [[-0.5963, -1.2598, -0.7226, 1.1365, -1.7320, -0.7302, 0.1221, -0.2111, -1.6173, -0.0706],
                              [0.8964, 0.1737, -1.0077, -0.1389, 0.4889, 0.4391, 0.7911, 0.3614, -1.9533, -0.9936]],

                             [[0.3260, -1.3312, 0.0601, 1.0726, -1.6010, -1.8733, -1.5775, 1.1579, -0.8801, -0.5742],
                              [-2.2998, -0.6344, -0.5409, -0.9221, -0.6500, 0.1206, 1.5215, 0.7517, 1.3691, 2.0021]],

                             [[-0.1245, -0.3690, 2.1193, 1.3852, -0.1841, -0.8899, -0.3646, -0.8575, -0.3131, 0.2026],
                              [1.0218, -1.4331, 0.1744, 0.5442, -0.7808, 0.2527, 0.1566, 1.1484, -0.7766, -0.6747]],

                             [[-0.6752, 0.9906, -0.4973, 0.3471, -0.1202, -0.4213, 2.0213, 0.0441, 0.9016, 1.0365],
                              [1.2223, -1.3248, 0.1207, -0.8256, 0.1816, 0.7057, -0.3105, 0.5713, 0.2804,
                               -1.0685]]]).astype(np.float32)

        self.x = Parameter(initializer(Tensor(input_np), [seq_len, batch_size, input_size]), name='x')

        self.h = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='h')

        self.c = Parameter(initializer(
            Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
            [num_layers * num_directions, batch_size, hidden_size]), name='c')

        wih_l0 = np.array([[0.3715, -0.0723, 0.6017, 0.5115, -0.5357, 0.3794, -0.3752, -0.6205, -0.0370, -0.2904],
                           [0.7055, -0.4156, -0.3650, -0.0964, 0.4141, -0.2584, -0.4765, -0.0045, 0.2943, -0.2648],
                           [0.1355, 0.1697, 0.1883, 0.3754, 0.3744, -0.6128, 0.2328, -0.1275, 0.6604, 0.6498],
                           [-0.0266, 0.5805, -0.5358, -0.0929, 0.0797, 0.3744, 0.3299, -0.3825, 0.5804, -0.0855],
                           [0.1141, 0.2587, -0.4370, 0.6430, -0.0017, 0.4865, 0.2814, 0.6213, -0.6415, 0.4574],
                           [-0.3958, -0.5827, -0.1056, 0.6987, -0.6591, -0.1326, 0.5237, 0.4667, -0.7001, -0.2326],
                           [0.3074, -0.3118, -0.4591, 0.2481, -0.2978, -0.1850, 0.4770, -0.0126, 0.3655, -0.4306],
                           [0.3033, -0.6264, -0.6551, 0.0069, -0.5238, -0.3950, 0.5681, -0.4931, -0.6258,
                            0.4079]]).astype(np.float32).reshape([1, -1])

        whh_l0 = np.array([[-0.3870, 0.0238],
                           [-0.3758, 0.2490],
                           [0.5437, -0.4117],
                           [0.1181, -0.2043],
                           [-0.5335, 0.1188],
                           [-0.0822, 0.2154],
                           [0.5844, -0.3239],
                           [-0.6537, 0.0278]]).astype(np.float32).reshape([1, -1])

        bih_l0 = np.array([0.5440, 0.5995, 0.0155, -0.6254, 0.5114, 0.3364, -0.1824, -0.6262]).astype(
            np.float32).reshape([1, -1])
        bhh_l0 = np.array([0.4139, -0.2513, -0.4023, 0.4222, 0.6387, -0.6147, 0.0677, 0.5355]).astype(
            np.float32).reshape([1, -1])

        wih_reverse_l0 = np.array([[6.5219e-01, 5.6162e-01, -1.8653e-01, 6.8789e-01, 1.3240e-01, 1.7699e-01, 1.2940e-01,
                                    -1.8520e-01, -5.5439e-01, -3.4946e-01],
                                   [3.7645e-01, 6.5475e-01, 3.5964e-01, 2.2433e-01, -1.7869e-01, -2.9047e-01,
                                    1.7615e-01, -5.3353e-01, -7.4204e-02, -2.5270e-01],
                                   [5.8095e-01, -4.6426e-04, 1.9262e-01, -5.1306e-01, -3.6811e-01, 4.4858e-01,
                                    6.2580e-01, 9.5494e-02, -6.9505e-01, 4.9500e-01],
                                   [-3.7810e-01, 1.5485e-01, -1.4735e-01, -1.5327e-01, -4.5702e-01, 3.0816e-01,
                                    -3.4280e-01, 2.1604e-01, 1.4087e-01, -5.7707e-01],
                                   [-3.8700e-01, -6.4653e-01, 6.0653e-01, -4.7297e-01, 6.8413e-02, -1.2681e-01,
                                    6.8464e-02, 6.7011e-01, 3.9950e-01, -2.0577e-01],
                                   [-1.8648e-01, -6.7198e-01, 3.8017e-01, -3.3147e-01, 5.3193e-01, -5.4952e-01,
                                    2.1774e-01, -4.6271e-01, 3.2611e-01, 6.3554e-02],
                                   [-4.5403e-01, -1.5910e-01, -7.5886e-02, 2.6313e-01, 6.8093e-01, -3.9960e-01,
                                    5.5428e-01, 1.0429e-01, 5.1322e-01, 1.9406e-01],
                                   [3.9698e-01, -5.2101e-01, 5.1372e-01, -3.9866e-01, 1.0115e-01, -4.1290e-02,
                                    -3.0980e-01, 2.1607e-01, 4.8420e-01, -1.9267e-01]]).astype(np.float32).reshape(
                                       [1, -1])

        whh_reverse_l0 = np.array([[-0.3231, -0.3960],
                                   [-0.1625, -0.3032],
                                   [0.3892, -0.0666],
                                   [0.0159, -0.4870],
                                   [-0.4953, 0.2278],
                                   [-0.5380, -0.5250],
                                   [0.0371, -0.4534],
                                   [-0.5452, 0.5012]]).astype(np.float32).reshape([1, -1])

        bih_reverse_l0 = np.array([0.0469, -0.0107, 0.3783, -0.2657, -0.0089, 0.5032, -0.0757, -0.2022]).astype(
            np.float32).reshape([1, -1])
        bhh_reverse_l0 = np.array([-0.6584, 0.3977, 0.5597, -0.4784, 0.5360, -0.2532, 0.5362, -0.1063]).astype(
            np.float32).reshape([1, -1])

        wih_l1 = np.array([[0.0602, 0.6977, -0.3882, 0.3734],
                           [-0.6896, -0.6014, -0.2311, 0.6433],
                           [-0.6778, -0.5100, -0.1496, 0.5774],
                           [-0.5824, 0.4656, -0.2835, -0.5688],
                           [0.5623, 0.3599, 0.1731, 0.3124],
                           [0.1492, -0.6663, -0.1099, -0.5282],
                           [0.4696, -0.1795, -0.6712, -0.3903],
                           [0.4995, 0.0709, -0.1738, 0.2822]]).astype(np.float32).reshape([1, -1])

        whh_l1 = np.array([[0.3770, 0.4139],
                           [0.5351, 0.6394],
                           [0.3901, -0.1072],
                           [0.1106, 0.1331],
                           [0.3970, 0.4693],
                           [0.2958, -0.3813],
                           [-0.3064, 0.5519],
                           [-0.2827, 0.5844]]).astype(np.float32).reshape([1, -1])

        bih_l1 = np.array([0.5242, 0.5896, 0.3709, 0.6202, 0.5008, 0.2674, 0.4356, -0.3261]).astype(np.float32).reshape(
            [1, -1])
        bhh_l1 = np.array([-0.6648, 0.6680, 0.2510, -0.1245, -0.0524, 0.5439, -0.1650, 0.5303]).astype(
            np.float32).reshape([1, -1])

        wih_reverse_l1 = np.array([[0.6477, 0.4416, 0.3803, -0.4708],
                                   [0.4497, 0.2833, -0.4739, -0.6361],
                                   [-0.5573, -0.3867, -0.0349, -0.4128],
                                   [-0.1545, 0.3720, 0.2354, -0.6090],
                                   [0.5965, 0.6301, -0.4591, -0.0120],
                                   [-0.1253, -0.1881, -0.4388, 0.4335],
                                   [0.1944, -0.1230, -0.6170, 0.1043],
                                   [-0.6700, 0.4343, 0.6474, 0.0113]]).astype(np.float32).reshape([1, -1])
whh_reverse_l1 = np.array([[0.6576, 0.5573],
[0.2318, 0.0187],
[-0.6365, 0.5744],
[-0.6494, -0.1820],
[0.6461, -0.3344],
[0.0906, -0.5405],
[-0.5999, 0.5571],
[-0.0488, 0.5345]]).astype(np.float32).reshape([1, -1])
bih_reverse_l1 = np.array([-0.6058, -0.2812, -0.4449, -0.0802, 0.4931, 0.4066, 0.5960, 0.1968]).astype(
np.float32).reshape([1, -1])
bhh_reverse_l1 = np.array([-0.2490, -0.3402, -0.5089, -0.3875, 0.4852, -0.0402, -0.0072, -0.1017]).astype(
np.float32).reshape([1, -1])
'''
weight
layer0
forward
wih
whh
reverse
wih
whh
layer1
forward
wih
whh
reverse
wih
whh
... ...
bias:
layer0
forward
bih
bhh
reverse
bih
bhh
layer1
forward
bih
bhh
reverse
bih
bhh
... ...
'''
        w_np = np.concatenate(
            (wih_l0, whh_l0, wih_reverse_l0, whh_reverse_l0, wih_l1, whh_l1, wih_reverse_l1, whh_reverse_l1,
             bih_l0, bhh_l0, bih_reverse_l0, bhh_reverse_l0, bih_l1, bhh_l1, bih_reverse_l1, bhh_reverse_l1),
            axis=1).reshape([-1, 1, 1])
        self.w = Parameter(initializer(Tensor(w_np), w_np.shape), name='w')

    @ms_function
    def construct(self):
        return self.lstm(self.x, self.h, self.c, self.w)


@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
def test_multi_layer_bilstm():
    seq_len = 5
    batch_size = 2
    input_size = 10
    hidden_size = 2
    num_layers = 2
    has_bias = True
    bidirectional = True
    dropout = 0.0

    num_directions = 1
    if bidirectional:
        num_directions = 2

    net = MultiLayerBiLstmNet(seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional,
                              dropout)
    y, h, c, _, _ = net()

    expect_y = np.array([[[0.5186, 0.5419, 0.2710, 0.0384],
                          [0.6196, 0.5539, 0.3266, 0.0866]],
                         [[0.5244, 0.5276, 0.3042, 0.0510],
                          [0.5143, 0.4937, 0.2828, 0.0387]],
                         [[0.5124, 0.5079, 0.2951, 0.0548],
                          [0.4051, 0.4493, 0.2369, 0.0077]],
                         [[0.4532, 0.4749, 0.2557, 0.0611],
                          [0.4879, 0.4812, 0.3160, 0.0368]],
                         [[0.4535, 0.4806, 0.3880, 0.0462],
                          [0.4674, 0.4849, 0.3890, 0.1008]]])
    error = np.ones([seq_len, batch_size, hidden_size * num_directions]) * 1.0e-4
    diff = y.asnumpy() - expect_y
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_h = np.array([[[0.4730, 0.1638],
                          [0.1406, -0.0697]],
                         [[0.3887, -0.0518],
                          [-0.3988, -0.0071]],
                         [[0.4535, 0.4806],
                          [0.4674, 0.4849]],
                         [[0.2710, 0.0384],
                          [0.3266, 0.0866]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-4
    diff = h.asnumpy() - expect_h
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_c = np.array([[[0.8713, 0.2694],
                          [0.2075, -0.2201]],
                         [[0.5084, -0.0964],
                          [-0.5155, -0.2452]],
                         [[1.1724, 1.0334],
                          [1.2003, 1.1058]],
                         [[0.5179, 0.0750],
                          [0.5309, 0.2012]]])
    error = np.ones((num_layers * num_directions, batch_size, hidden_size)) * 1.0e-3
    diff = c.asnumpy() - expect_c
    assert np.all(diff < error)
    assert np.all(-diff < error)
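The expected-vs-actual comparisons in these tests use a paired-assert pattern: `diff < error` together with `-diff < error` bounds the elementwise absolute difference below the tolerance. A minimal NumPy sketch (the `actual`/`expected` values here are made up for illustration, not taken from the test) showing the equivalence:

```python
import numpy as np

# Hypothetical values, within 1e-5 of each other.
actual = np.array([0.51861, 0.54189])
expected = np.array([0.5186, 0.5419])
tol = 1.0e-4

diff = actual - expected
# The paired pattern used by the tests above:
assert np.all(diff < tol) and np.all(-diff < tol)
# Equivalent, and usually clearer:
assert np.all(np.abs(diff) < tol)
print("within tolerance")
```

`np.testing.assert_allclose(actual, expected, rtol=0, atol=tol)` would express the same check with a better failure message.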


class Grad(nn.Cell):
    def __init__(self, network):
        super(Grad, self).__init__()
        self.network = network
        self.weights = ParameterTuple(network.trainable_params())
        self.grad = C.GradOperation('grad',
                                    get_by_list=True,
                                    sens_param=True)

    @ms_function
    def construct(self, output_grad):
        weights = self.weights
        grads = self.grad(self.network, weights)(output_grad)
        return grads


class Net(nn.Cell):
    def __init__(self, seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout):
        super(Net, self).__init__()

        num_directions = 1
        if bidirectional:
            num_directions = 2

        self.lstm = P.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)
input_np = np.array([[[-0.5907, 1.0557, 1.7283, 0.6706, -1.2550, -0.5298, -0.2290, -0.6735, 0.8555, 1.4836],
[-1.7070, -0.5347, -0.9105, -0.2598, 0.0588, 1.5496, 1.0757, 0.3760, -1.2020, -0.2868]],
[[0.0151, 0.2126, 0.8090, -0.5292, -2.5590, 0.4279, -0.3081, -1.4706, -0.0498, 1.2301],
[0.4165, -0.5391, -0.0996, 0.1928, -0.4909, -0.1255, 0.4444, -1.3687, 1.3096, 0.6553]],
[[-0.7802, -0.2083, -0.6388, 1.3757, 0.4293, 0.5363, 0.3202, -0.6687, -1.3864, -0.2953],
[1.0799, -0.7204, 0.1130, -0.5857, -0.4855, -1.1068, 1.0126, 0.8716, 1.5460, -0.7392]],
[[2.2645, -0.6586, -0.2227, 1.4290, -0.5006, -1.6576, -0.1793, 0.5319, 0.1360, 0.2707],
[-0.4071, 0.1575, 1.4199, -0.9156, 0.1855, 0.4947, 1.0460, -0.6365, 0.1191, -0.6374]],
[[0.2468, 1.0815, -0.4893, 0.0664, 0.6405, -2.2967, 0.7612, 0.8759, 0.5685, -1.0999],
[-0.7272, -1.7750, -0.1164, -0.7159, 0.0061, -0.7839, -1.8329, 0.3434, -0.5634,
0.5384]]]).astype(np.float32)
self.x = Parameter(initializer(Tensor(input_np), [seq_len, batch_size, input_size]), name='x')
self.h = Parameter(initializer(
Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
[num_layers * num_directions, batch_size, hidden_size]), name='h')
self.c = Parameter(initializer(
Tensor(np.ones((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32)),
[num_layers * num_directions, batch_size, hidden_size]), name='c')
wih_l0 = np.array([[0.2300, 0.6668, 0.4703, 0.0425, 0.0464, 0.6825, 0.2249, -0.4315, -0.2449, 0.2964],
[-0.2811, -0.3444, 0.2557, -0.5137, -0.5518, 0.1652, -0.6720, 0.1066, 0.3586, 0.6299],
[0.5728, -0.1784, 0.5661, 0.4012, 0.3856, -0.1899, 0.3102, 0.3717, -0.5651, 0.1952],
[0.1026, -0.0527, 0.1198, -0.3080, 0.2292, 0.5757, -0.3567, -0.2731, -0.0586, -0.2849],
[0.2194, -0.1622, 0.3219, -0.3008, -0.3713, -0.3034, -0.2385, 0.0412, -0.5205, 0.0280],
[-0.5499, -0.0733, -0.5236, -0.6753, -0.7045, -0.1839, -0.1037, -0.5026, -0.4055, -0.3416],
[0.1573, -0.1301, -0.2882, -0.3464, 0.6643, 0.1980, -0.6804, 0.5359, 0.5996, 0.0124],
[-0.6436, 0.0587, -0.6520, -0.0471, 0.1667, 0.6042, 0.5752, -0.6296, -0.2976,
-0.3757]]).astype(np.float32).reshape([1, -1])
whh_l0 = np.array([[0.3358, 0.2790],
[-0.5355, 0.0989],
[-0.1402, 0.5120],
[0.1335, 0.1653],
[0.3533, -0.3531],
[0.4166, -0.4420],
[-0.5454, -0.1720],
[0.0041, -0.0799]]).astype(np.float32).reshape([1, -1])
bih_l0 = np.array([0.5518, 0.1083, 0.4829, 0.0607, -0.1770, -0.6944, 0.3059, 0.5354]).astype(
np.float32).reshape([1, -1])
bhh_l0 = np.array([0.5025, -0.1261, -0.5405, 0.3220, -0.3441, 0.6488, -0.0284, -0.2334]).astype(
np.float32).reshape([1, -1])
wih_reverse_l0 = np.array(
[[-0.7048, -0.1768, 0.2288, -0.0760, -0.1319, 0.0820, -0.4132, 0.3644, 0.3919, 0.2449],
[0.0551, -0.0530, -0.5883, 0.0799, -0.5025, 0.1500, -0.4067, -0.3764, -0.3018, 0.2467],
[-0.2279, 0.3144, 0.5705, 0.4617, 0.1729, 0.6539, -0.2086, 0.5355, 0.4439, 0.0122],
[0.6967, -0.5245, 0.3527, 0.3386, 0.0429, -0.3803, -0.4328, -0.4767, 0.4481, -0.2405],
[0.6744, -0.2776, 0.0798, 0.1543, 0.6421, 0.6102, 0.3591, -0.4431, -0.6327, -0.0075],
[-0.4520, 0.4201, -0.2374, -0.1556, -0.4175, -0.6834, 0.3096, -0.1581, 0.0127, 0.6872],
[0.1788, -0.5442, -0.3675, -0.2887, -0.3004, 0.5813, 0.1618, 0.6875, -0.4678, 0.0071],
[-0.6453, -0.2528, 0.5675, -0.5154, -0.4129, -0.0214, 0.5539, 0.0343, 0.1712, 0.5644]]).astype(
np.float32).reshape([1, -1])
whh_reverse_l0 = np.array([[-0.6657, 0.6330],
[-0.2290, 0.6556],
[0.4808, -0.2712],
[0.0407, -0.2587],
[0.3837, 0.0382],
[0.2268, 0.1217],
[-0.6404, -0.3336],
[0.5461, -0.0764]]).astype(np.float32).reshape([1, -1])
bih_reverse_l0 = np.array([0.0314, 0.1009, 0.3664, -0.6732, -0.6944, 0.5098, -0.1251, 0.2644]).astype(
np.float32).reshape([1, -1])
bhh_reverse_l0 = np.array([-0.1961, -0.3836, 0.1191, -0.7022, -0.0961, 0.5493, -0.6979, 0.0017]).astype(
np.float32).reshape([1, -1])
wih_l1 = np.array([[1.2746e-01, -3.3346e-01, 1.5589e-01, -4.7986e-01],
[6.5835e-01, 3.8135e-01, -3.8409e-01, -3.6499e-01],
[-6.0374e-04, -1.2227e-01, -1.5955e-01, 4.2772e-01],
[-1.8281e-01, -5.0484e-01, 7.0204e-01, 6.5872e-01],
[3.7765e-01, -4.3494e-01, 3.1503e-01, -4.2504e-02],
[6.3506e-01, -4.3049e-02, -5.7413e-01, -2.5134e-01],
[8.7181e-02, -5.5216e-01, 5.5436e-01, -3.9599e-01],
[4.4611e-01, -4.2690e-01, 6.6142e-01, 6.3882e-01]]).astype(np.float32).reshape([1, -1])
whh_l1 = np.array([[-0.0049, -0.3267],
[0.0863, -0.6277],
[0.4815, -0.2236],
[0.5996, -0.3441],
[0.3959, -0.0249],
[0.3986, -0.0922],
[-0.5321, 0.0877],
[0.2811, -0.0483]]).astype(np.float32).reshape([1, -1])
bih_l1 = np.array([0.0032, -0.0893, 0.5706, 0.3712, 0.0590, 0.0044, 0.2417, 0.1291]).astype(np.float32).reshape(
[1, -1])
bhh_l1 = np.array([-0.0704, 0.3908, -0.1121, 0.6970, -0.6216, 0.6340, -0.2945, 0.5224]).astype(
np.float32).reshape([1, -1])
wih_reverse_l1 = np.array([[-0.2693, 0.3487, 0.0692, 0.0047],
[0.6187, 0.5649, 0.0680, 0.5110],
[-0.5262, -0.3307, -0.3892, 0.5382],
[-0.2925, 0.5185, -0.1385, 0.3431],
[-0.3252, 0.3809, -0.4680, 0.3379],
[0.4763, -0.5465, 0.0033, -0.5144],
[0.3826, -0.3879, -0.2439, 0.2571],
[-0.0422, -0.0359, -0.4197, -0.2209]]).astype(np.float32).reshape([1, -1])
whh_reverse_l1 = np.array([[-0.4691, 0.5944],
[-0.6885, 0.1708],
[0.6391, -0.3690],
[-0.5919, 0.1805],
[-0.6853, -0.6215],
[-0.4635, -0.6714],
[-0.2050, 0.0513],
[0.3411, -0.2833]]).astype(np.float32).reshape([1, -1])
bih_reverse_l1 = np.array([0.5764, -0.7010, -0.0831, -0.3779, -0.2743, 0.0480, -0.2707, -0.5583]).astype(
np.float32).reshape([1, -1])
bhh_reverse_l1 = np.array([0.3379, -0.2671, -0.2789, -0.6611, -0.5542, -0.0188, 0.1831, 0.3612]).astype(
np.float32).reshape([1, -1])
'''
weight
layer0
forward
wih
whh
reverse
wih
whh
layer1
forward
wih
whh
reverse
wih
whh
... ...
bias:
layer0
forward
bih
bhh
reverse
bih
bhh
layer1
forward
bih
bhh
reverse
bih
bhh
... ...
'''
        w_np = np.concatenate(
            (wih_l0, whh_l0, wih_reverse_l0, whh_reverse_l0, wih_l1, whh_l1, wih_reverse_l1, whh_reverse_l1,
             bih_l0, bhh_l0, bih_reverse_l0, bhh_reverse_l0, bih_l1, bhh_l1, bih_reverse_l1, bhh_reverse_l1),
            axis=1).reshape([-1, 1, 1])
        self.w = Parameter(initializer(Tensor(w_np), w_np.shape), name='w')

    @ms_function
    def construct(self):
        return self.lstm(self.x, self.h, self.c, self.w)[0]


@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
def test_grad():
    seq_len = 5
    batch_size = 2
    input_size = 10
    hidden_size = 2
    num_layers = 2
    has_bias = True
    bidirectional = True
    dropout = 0.0

    num_directions = 1
    if bidirectional:
        num_directions = 2

    net = Grad(Net(seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout))

    dy = np.array([[[-3.5471e-01, 7.0540e-01, -7.5945e-01, -1.2322e+00],
                    [2.7161e-01, 1.0865e+00, -2.1827e-03, 8.8031e-01]],
                   [[-4.2431e-01, 1.4955e+00, 4.6576e-01, -2.7230e+00],
                    [-4.0418e-01, -2.3282e-01, 9.1253e-01, -2.7379e-01]],
                   [[-1.3654e+00, 1.9251e+00, -1.6808e+00, -3.2642e-02],
                    [-4.6481e-01, 1.3138e+00, 1.2956e-02, 1.0198e+00]],
                   [[1.2914e+00, -2.3753e-01, 9.4763e-01, 1.7930e-02],
                    [5.3589e-01, -1.0981e-01, 1.5377e+00, 6.2709e-01]],
                   [[-1.6032e+00, -1.8818e-01, 7.0441e-01, -2.8765e+00],
                    [1.0065e-01, 9.2045e-01, 2.7426e-01, 2.6196e-01]]]).astype(np.float32)

    dx, dh, dc, dw = net(Tensor(dy))

    expect_dx = np.array([[[0.01697153, -0.0096909, 0.01306139, 0.00863109, -0.00122794, -0.00746152, -0.00879683,
                            0.00643571, 0.0015958, 0.01480642],
                           [0.05794962, -0.02326604, 0.01862703, 0.02053947, 0.02607713, -0.01278067, 0.04250786,
                            -0.02686035, -0.07441005, 0.00806021]],
                          [[-0.026675, -0.01024149, -0.02492021, -0.00457492, -0.0085863, 0.02341479, 0.02188834,
                            -0.04139283, -0.01367766, -0.00305065],
                           [-0.00762213, -0.01914341, -0.03233681, -0.03580827, -0.02201782, -0.00153102, -0.00097455,
                            -0.02708411, -0.03711082, -0.02804472]],
                          [[-0.0040581, -0.00116989, 0.01652471, 0.02182668, -0.02547193, -0.04171437, 0.04185125,
                            0.01589275, -0.00517019, 0.06554792],
                           [-0.02294365, -0.00589715, -0.01425684, -0.01499153, -0.05327821, -0.03133425, 0.00755623,
                            -0.04192506, -0.02122675, -0.01214214]],
                          [[-0.00041491, 0.00240709, -0.00942589, 0.00719656, 0.01438523, 0.00931082, 0.00534746,
                            -0.0004002, 0.01299422, 0.00181135],
                           [-0.01704482, -0.00887032, -0.01746774, -0.03289891, -0.04259495, -0.01928082, -0.01570587,
                            -0.01242383, -0.01799918, -0.00610236]],
                          [[0.00207505, -0.0008109, 0.00114241, 0.00251349, -0.00065676, 0.00151333, -0.00077485,
                            -0.00034354, -0.00028289, -0.0006986],
                           [-0.00240827, -0.0001309, 0.01401818, -0.01272261, -0.02665948, -0.01095799, -0.007761,
                            -0.0087831, 0.01038029, 0.02021475]]]).astype(np.float32)
    error = np.ones(dx.asnumpy().shape) * 1.0e-4
    diff = dx.asnumpy() - expect_dx
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_dh = np.array([[[-0.00696833, 0.00212885],
                           [0.01416209, 0.0002706]],
                          [[0.00297393, -0.0021012],
                           [0.00458834, 0.00400078]],
                          [[0.08658642, -0.10590762],
                           [0.1516603, -0.10525411]],
                          [[0.11888178, -0.04759264],
                           [0.05898442, -0.08082277]]]).astype(np.float32)
    error = np.ones(dh.asnumpy().shape) * 1.0e-4
    diff = dh.asnumpy() - expect_dh
    assert np.all(diff < error)
    assert np.all(-diff < error)

    expect_dc = np.array([[[0.00887521, -0.01391486],
                           [0.03858164, -0.04941981]],
                          [[0.00665188, 0.00184223],
                           [-0.00541833, 0.01410913]],
                          [[-0.2068854, 0.5585638],
                           [0.01735374, 0.3537254]],
                          [[0.20350647, -0.2792883],
                           [0.18456826, 0.02278761]]]).astype(np.float32)
    error = np.ones(dc.asnumpy().shape) * 1.0e-4
    diff = dc.asnumpy() - expect_dc
    assert np.all(diff < error)
    assert np.all(-diff < error)


class LstmNetWithDropout(nn.Cell):
    def __init__(self, seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout):
        super(LstmNetWithDropout, self).__init__()

        num_directions = 1
        if bidirectional:
            num_directions = 2

        self.lstm = P.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)
input_np = np.array([[[-2.48789445e-01, -2.18991071e-01, -8.41492534e-01, -5.73351622e-01, 8.20644796e-02,
4.14313585e-01, -1.30143976e+00, -4.43366140e-01, -1.21003680e-01, -2.11284861e-01],
[9.94045794e-01, 3.18840504e-01, 4.81898338e-01, -4.83986028e-02, -9.26419497e-02,
-2.57977694e-01, 1.82191110e+00, 5.95121741e-01, 6.30752742e-01, -6.01903737e-01]],
[[7.67166913e-01, 5.41202351e-02, -1.24094069e+00, 1.38814664e+00, 2.05845284e+00,
7.29744852e-01, -1.12405574e+00, 3.78702253e-01, 2.28524983e-01, 2.02445173e+00],
[-1.85264975e-01, -4.55119252e-01, 1.23624969e+00, 1.24347043e+00, -1.68316591e+00,
-3.55918944e-01, 3.07149738e-01, -3.44966322e-01, -1.08978853e-01, 1.80912763e-01]],
[[-6.47622466e-01, 1.31204927e+00, 6.47477210e-01, -7.93370783e-01, 3.08402872e-04,
-5.12097359e-01, -1.69133916e-01, 8.57838035e-01, -3.63963723e-01, 6.35978997e-01],
[-3.92911851e-01, 8.27334300e-02, -1.11347124e-01, 8.79961967e-01, 6.02812059e-02,
-3.76448452e-01, -1.48800862e+00, -9.48699772e-01, -1.24202335e+00, 1.65264118e+00]],
[[4.05404866e-01, 5.67396320e-02, -2.05705926e-01, -8.70196745e-02, -7.34854519e-01,
-1.07580565e-01, 1.33716142e+00, -1.18140256e+00, 2.66074872e+00, -3.26788813e-01],
[6.97183967e-01, -2.32625628e+00, 1.20393467e+00, -2.32532692e+00, 2.03347206e+00,
-7.58083522e-01, 1.35564697e+00, -2.32149422e-01, 9.85125721e-01, 1.00944638e+00]],
[[9.89606023e-01, -5.30669808e-01, -2.66087383e-01, 8.14819038e-01, 1.07067376e-01,
-1.76214290e+00, -5.04977465e-01, 1.94490123e+00, 5.10450959e-01, -2.29238123e-01],
[-1.32928836e+00, -1.18175328e-01, -5.17818272e-01, -1.45089477e-01, 7.13987231e-01,
-7.41293788e-01, -3.67817104e-01, 1.18039274e+00, -6.03745162e-01,
-5.83392143e-01]]]).astype(np.float32)
self.x = Parameter(initializer(Tensor(input_np), [seq_len, batch_size, input_size]), name='x')
self.h = Parameter(initializer(
Tensor(np.array([[[-0.47240502, 1.6824378],
[-0.00978304, 0.8179632]]]).astype(np.float32)),
[num_layers * num_directions, batch_size, hidden_size]), name='h')
self.c = Parameter(initializer(
Tensor(np.array([[[-0.85975164, -0.3198615],
[-0.9821871, 0.26311848]]]).astype(np.float32)),
[num_layers * num_directions, batch_size, hidden_size]), name='c')
wih = np.array([[0.4473, -0.5509, -0.1585, -0.6215, 0.6228, 0.3462, 0.3015, -0.3714, 0.3119, -0.1151],
[-0.6923, 0.1373, 0.2214, 0.2280, 0.6960, -0.6368, 0.5725, -0.1359, 0.0742, -0.6777],
[-0.4432, 0.6162, -0.1066, -0.6138, -0.2529, -0.5638, -0.0603, 0.3039, 0.1068, -0.5300],
[0.4337, -0.1215, -0.5088, -0.0045, 0.2828, 0.1411, 0.0741, 0.6936, -0.4603, 0.6986],
[-0.2079, -0.5518, 0.5375, -0.2168, 0.3662, 0.0948, -0.0564, -0.1808, -0.6672, -0.2410],
[0.5142, 0.0790, -0.1123, -0.2351, 0.3982, -0.6351, 0.5906, 0.3917, -0.0850, -0.5397],
[-0.4795, -0.6576, 0.5693, 0.0047, -0.6626, 0.1013, -0.4015, -0.4040, -0.2817, 0.4430],
[0.0251, -0.3035, -0.6026, 0.2693, -0.2749, 0.1501, -0.5778, 0.5570, -0.7065, -0.6196]]).astype(
np.float32).reshape([1, -1])
whh = np.array([[-0.4344, -0.2529],
[0.0377, 0.7046],
[-0.0579, -0.5240],
[-0.4801, -0.1149],
[-0.4010, -0.5614],
[0.4721, 0.4366],
[-0.4282, 0.0816],
[0.1574, -0.3359]]).astype(np.float32).reshape([1, -1])
bih = np.array([0.2431, 0.5967, -0.2417, -0.4169, -0.5326, 0.5685, -0.2971, -0.4326]).astype(
np.float32).reshape([1, -1])
bhh = np.array([-0.1751, -0.2270, -0.3980, -0.4983, -0.3527, -0.2774, 0.6371, -0.3330]).astype(
np.float32).reshape([1, -1])
        w_np = np.concatenate((wih, whh, bih, bhh), axis=1).reshape([-1, 1, 1])
        self.w = Parameter(initializer(Tensor(w_np), w_np.shape), name='w')

    def construct(self):
        return self.lstm(self.x, self.h, self.c, self.w)


@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
def test_lstm_dropout():
    seq_len = 5
    batch_size = 2
    input_size = 10
    hidden_size = 2
    num_layers = 1
    has_bias = True
    bidirectional = False
    dropout = 1.0

    num_directions = 1
    if bidirectional:
        num_directions = 2

    net = LstmNetWithDropout(seq_len, batch_size, input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)
    y, h, c, _, _ = net()
    expect_y = np.array([[[-0.45210335, -0.0844336],
                          [-0.14677924, 0.07140275]],
                         [[-0.18895914, -0.11084185],
                          [-0.26356253, -0.06367199]],
                         [[-0.33480304, 0.00812318],
                          [-0.0887147, -0.1564593]],
                         [[-0.33231455, 0.00743252],
                          [0.428218, 0.00723737]],
                         [[-0.20026046, 0.43491203],
                          [0.17739448, 0.5313992]]])
    # y has shape [seq_len, batch_size, hidden_size * num_directions]
    error = np.ones([seq_len, batch_size, hidden_size * num_directions]) * 1.0e-4
    diff = y.asnumpy() - expect_y
    assert np.all(diff < error)
    assert np.all(-diff < error)
# --- File: tern/classes/templates/spdx.py (repo: ManishaTripathy/tern, license: BSD-2-Clause) ---
#
# Copyright (c) 2019 VMware, Inc. All Rights Reserved.
# SPDX-License-Identifier: BSD-2-Clause
#

from tern.classes.template import Template


class SPDXTagValue(Template):
    '''This is the SPDX Template class
    It provides mappings for the SPDX tag-value document format'''

    def package(self):
        return {'name': 'PackageName',
                'version': 'PackageVersion',
                'license': 'PackageLicenseDeclared'}

    def image_layer(self):
        return {'diff_id': 'PackageName',
                'fs_hash': 'PackageChecksum'}

    def image(self):
        return {'name': 'PackageName',
                'tag': 'PackageVersion',
                'id': 'PackageChecksum'}
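A small usage sketch of how a tag-value formatter could consume the attribute-to-tag mapping that `package()` returns. The `bash`/`5.0`/`GPL-3.0` values are hypothetical package attributes, not part of the template class:

```python
# Mapping mirrors SPDXTagValue.package(): package attribute -> SPDX tag name.
mapping = {'name': 'PackageName',
           'version': 'PackageVersion',
           'license': 'PackageLicenseDeclared'}

# Hypothetical package record for illustration.
package = {'name': 'bash', 'version': '5.0', 'license': 'GPL-3.0'}

# Emit one "Tag: value" line per mapped attribute.
lines = ["{}: {}".format(tag, package[attr]) for attr, tag in mapping.items()]
print("\n".join(lines))
# PackageName: bash
# PackageVersion: 5.0
# PackageLicenseDeclared: GPL-3.0
```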
# --- File: get_html.py (repo: Marcel5751/garbage-collection-calendar-bremen, license: MIT) ---
# -*- coding: utf-8 -*-
import codecs
import os
import uuid
from datetime import datetime

import requests

import main

PATH_TO_HTML_FOLDER = "./html-data"


def get_important_part_of_html(filename, html_as_string, start_year, end_year):
    """Removes the useless beginning and end of an HTML page, using HTML
    comments as markers.

    Args:
        filename (str): Path to where the reduced file should be saved
        html_as_string (str): Content of the file
        start_year (int): First year of the calendar range
        end_year (int): Last year of the calendar range

    Returns:
        str: path of the reduced HTML file, or the error message if the
        page does not describe a valid address
    """
    if "Kontroll-Abschnitt" not in html_as_string:
        # raise ValueError("Not a valid address in Bremen!")
        return main.NOT_A_VALID_ADDRESS_ERROR_MESSAGE
    # <!-- Start Inhalt Termine Jahr 2018 -->
    substring_to_split_at = "Start Inhalt Termine Jahr {} -->".format(start_year)
    split_one = html_as_string.split(substring_to_split_at, 1)
    important_part_one = split_one[1]
    # <!-- End Inhalt Termine Jahr 2020 -->
    substring_to_split_at_end = "<!-- End Inhalt Termine Jahr {}".format(end_year)
    split_cut_off_end = important_part_one.split(substring_to_split_at_end, 1)
    important_part_two = split_cut_off_end[0]
    write_string_to_html_file(important_part_two, filename)
    return filename


def write_string_to_html_file(string_to_write, filename):
    """Writes a string to a UTF-8 HTML file.

    Args:
        string_to_write (str): String to be saved
        filename (str): Path to the file

    Returns:
        int: number of characters written
    """
    text_file = codecs.open(filename, "w", "utf-8-sig")
    n = text_file.write(string_to_write)
    text_file.close()
    return n


def download_html_file_from_url(url_to_download, start_year, end_year):
    """Downloads the content of a url and saves the relevant part to an HTML
    file with a unique filename.

    Args:
        url_to_download (str): Url as string
        start_year (int): First year of the calendar range
        end_year (int): Last year of the calendar range

    Returns:
        str: filename of the reduced HTML file
    """
    html_download = requests.get(url_to_download).text
    filename = PATH_TO_HTML_FOLDER + "/" + get_unique_name_for_html_file()
    if not os.path.exists(PATH_TO_HTML_FOLDER):
        os.makedirs(PATH_TO_HTML_FOLDER)
    return get_important_part_of_html(filename, html_download, start_year, end_year)


def get_unique_name_for_html_file():
    unique_string = uuid.uuid1()
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
    return "{}_{}.html".format(unique_string, timestamp)
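A minimal, self-contained sketch of the comment-marker trimming that `get_important_part_of_html` performs. The sample HTML below is made up; only the German marker strings ("Start/End Inhalt Termine Jahr ...") match the real pages:

```python
# Fake page: navigation, the calendar table between year markers, then a footer.
html = ("<html>navigation junk <!-- Start Inhalt Termine Jahr 2018 -->"
        "<table>the actual calendar</table>"
        "<!-- End Inhalt Termine Jahr 2020 --> footer junk</html>")

# Cut off everything before the start-year marker...
after_start = html.split("Start Inhalt Termine Jahr {} -->".format(2018), 1)[1]
# ...then everything from the end-year marker onwards.
important = after_start.split("<!-- End Inhalt Termine Jahr {}".format(2020), 1)[0]
print(important)  # <table>the actual calendar</table>
```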
# --- File: libs/configs_old/DOTA/gwd/cfgs_res50_dota_v20.py (repo: Artcs1/RotationDetection, license: Apache-2.0) ---
# -*- coding: utf-8 -*-
from __future__ import division, print_function, absolute_import
import os
import tensorflow as tf
import math
from dataloader.pretrained_weights.pretrain_zoo import PretrainModelZoo
"""
RetinaNet-H + gwd fix bug + sqrt + tau=2 + train set
FLOPs: 860451163; Trainable params: 33002916
iou threshold: 0.5
classname: plane
npos num: 2450
ap: 0.8948394008103565
classname: baseball-diamond
npos num: 209
ap: 0.6580467157774382
classname: bridge
npos num: 424
ap: 0.388917639526009
classname: ground-track-field
npos num: 131
ap: 0.582799082808811
classname: small-vehicle
npos num: 5090
ap: 0.6058372268499183
classname: large-vehicle
npos num: 4293
ap: 0.6297220646782561
classname: ship
npos num: 8861
ap: 0.8143495259256781
classname: tennis-court
npos num: 739
ap: 0.897082428301694
classname: basketball-court
npos num: 124
ap: 0.6194974348503025
classname: storage-tank
npos num: 1869
ap: 0.7888520103937031
classname: soccer-ball-field
npos num: 87
ap: 0.6721727619016967
classname: roundabout
npos num: 164
ap: 0.6740140076462648
classname: harbor
npos num: 2065
ap: 0.6030928319524497
classname: swimming-pool
npos num: 366
ap: 0.532690992577956
classname: helicopter
npos num: 72
ap: 0.45393048522054874
map: 0.6543896406147388
{'0.65': {'mAP': 0.5531255908346647, 'ground-track-field': 0.46874541967164557, 'small-vehicle': 0.5254805842312422, 'soccer-ball-field': 0.49674069740653076, 'harbor': 0.3325998985859663, 'large-vehicle': 0.49237446722103323, 'swimming-pool': 0.3786694115862947, 'roundabout': 0.6127737951332743, 'tennis-court': 0.8955950695702153, 'basketball-court': 0.5642336574393851, 'helicopter': 0.4095234559651532, 'storage-tank': 0.768350569402555, 'bridge': 0.229887299838382, 'baseball-diamond': 0.5172297968073052, 'ship': 0.718831628735693, 'plane': 0.885848110925295},
'0.5': {'mAP': 0.6543896406147388, 'ground-track-field': 0.582799082808811, 'small-vehicle': 0.6058372268499183, 'soccer-ball-field': 0.6721727619016967, 'harbor': 0.6030928319524497, 'large-vehicle': 0.6297220646782561, 'swimming-pool': 0.532690992577956, 'roundabout': 0.6740140076462648, 'tennis-court': 0.897082428301694, 'basketball-court': 0.6194974348503025, 'helicopter': 0.45393048522054874, 'storage-tank': 0.7888520103937031, 'bridge': 0.388917639526009, 'baseball-diamond': 0.6580467157774382, 'ship': 0.8143495259256781, 'plane': 0.8948394008103565},
'0.8': {'mAP': 0.28292248169049333, 'ground-track-field': 0.2325775080634852, 'small-vehicle': 0.1979511661753693, 'soccer-ball-field': 0.29786281543794524, 'harbor': 0.11494252873563218, 'large-vehicle': 0.16034195972421744, 'swimming-pool': 0.10212121212121213, 'roundabout': 0.29187883858274505, 'tennis-court': 0.8003975003061949, 'basketball-court': 0.47053242084058733, 'helicopter': 0.08282828282828283, 'storage-tank': 0.4630236938472425, 'bridge': 0.045454545454545456, 'baseball-diamond': 0.0980392156862745, 'ship': 0.3419243781838527, 'plane': 0.5439611593698137},
'0.85': {'mAP': 0.17732891599288997, 'ground-track-field': 0.13084951639168507, 'small-vehicle': 0.06282073067119796, 'soccer-ball-field': 0.18311688311688312, 'harbor': 0.09090909090909091, 'large-vehicle': 0.05997549072961212, 'swimming-pool': 0.01515151515151515, 'roundabout': 0.1523809523809524, 'tennis-court': 0.777850986366134, 'basketball-court': 0.27146743865010114, 'helicopter': 0.025974025974025972, 'storage-tank': 0.3194857000235097, 'bridge': 0.025974025974025972, 'baseball-diamond': 0.07032306536438768, 'ship': 0.09238611869237975, 'plane': 0.38126819949784874},
'0.9': {'mAP': 0.09261312239028942, 'ground-track-field': 0.045454545454545456, 'small-vehicle': 0.007575757575757575, 'soccer-ball-field': 0.08787878787878788, 'harbor': 0.09090909090909091, 'large-vehicle': 0.006888231631382316, 'swimming-pool': 0.01515151515151515, 'roundabout': 0.05694896083698572, 'tennis-court': 0.6190068314484273, 'basketball-court': 0.1277056277056277, 'helicopter': 0.018181818181818184, 'storage-tank': 0.10310064772905649, 'bridge': 0.012987012987012986, 'baseball-diamond': 0.05454545454545454, 'ship': 0.00899621212121212, 'plane': 0.133866341697667},
'0.6': {'mAP': 0.602003225559061, 'ground-track-field': 0.5117731722941454, 'small-vehicle': 0.5692796674261347, 'soccer-ball-field': 0.591601532425069, 'harbor': 0.42439117183385383, 'large-vehicle': 0.5379528999441402, 'swimming-pool': 0.4552774282858074, 'roundabout': 0.6590275695186874, 'tennis-court': 0.8967502975397331, 'basketball-court': 0.6163602294422292, 'helicopter': 0.42175379721391987, 'storage-tank': 0.7814590420239126, 'bridge': 0.30900189391187255, 'baseball-diamond': 0.6270284107602824, 'ship': 0.7357085211727478, 'plane': 0.892682749593379},
'0.7': {'mAP': 0.47209699491529994, 'ground-track-field': 0.37315990473910204, 'small-vehicle': 0.4462857945106512, 'soccer-ball-field': 0.43301958208470137, 'harbor': 0.24212265985665615, 'large-vehicle': 0.41707228898274396, 'swimming-pool': 0.2672845272755605, 'roundabout': 0.4752231061636024, 'tennis-court': 0.8954629342636613, 'basketball-court': 0.5565887540061711, 'helicopter': 0.3137137929820856, 'storage-tank': 0.6891634802537836, 'bridge': 0.16824841824841824, 'baseball-diamond': 0.3967626112242669, 'ship': 0.6233882592021442, 'plane': 0.7839588099359523},
'0.75': {'mAP': 0.38682933856456475, 'ground-track-field': 0.3505001362890805, 'small-vehicle': 0.32936925454926796, 'soccer-ball-field': 0.35644113950565565, 'harbor': 0.16082435022158342, 'large-vehicle': 0.312014321085313, 'swimming-pool': 0.15053744756715054, 'roundabout': 0.421342806894755, 'tennis-court': 0.8933998458347037, 'basketball-court': 0.5018426096266209, 'helicopter': 0.17586580086580086, 'storage-tank': 0.6481067305855587, 'bridge': 0.11431682090364725, 'baseball-diamond': 0.21312574893137554, 'ship': 0.5086325250920672, 'plane': 0.6661205405158923},
'mmAP': 0.38707336824937255,
'0.95': {'mAP': 0.020635306242343165, 'ground-track-field': 0.045454545454545456, 'small-vehicle': 0.0005790387955993052, 'soccer-ball-field': 0.0, 'harbor': 0.0004434589800443459, 'large-vehicle': 0.00036638424547744445, 'swimming-pool': 0.0, 'roundabout': 0.0053475935828877, 'tennis-court': 0.2304241077310939, 'basketball-court': 0.003189792663476874, 'helicopter': 0.0, 'storage-tank': 0.012987012987012986, 'bridge': 0.0, 'baseball-diamond': 0.0, 'ship': 0.0009404388714733542, 'plane': 0.009797220323536112},
'0.55': {'mAP': 0.6287890656893798, 'ground-track-field': 0.5643322633863954, 'small-vehicle': 0.5913067741856398, 'soccer-ball-field': 0.6335613572261539, 'harbor': 0.5190220297608497, 'large-vehicle': 0.5649195362143626, 'swimming-pool': 0.49227487366542605, 'roundabout': 0.667984152802187, 'tennis-court': 0.897082428301694, 'basketball-court': 0.6163602294422292, 'helicopter': 0.44399239228256077, 'storage-tank': 0.7862921590716214, 'bridge': 0.35810648582284893, 'baseball-diamond': 0.6568440654367499, 'ship': 0.7454706366368675, 'plane': 0.8942866011051104}}
"""
# ------------------------------------------------
VERSION = 'RetinaNet_DOTA_2x_20210124'
NET_NAME = 'resnet50_v1d' # 'MobilenetV2'
# ---------------------------------------- System
ROOT_PATH = os.path.abspath('../../')
print(20*"++--")
print(ROOT_PATH)
GPU_GROUP = "0,1,3"
NUM_GPU = len(GPU_GROUP.strip().split(','))
SHOW_TRAIN_INFO_INTE = 20
SMRY_ITER = 200
SAVE_WEIGHTS_INTE = 20673 * 2
SUMMARY_PATH = os.path.join(ROOT_PATH, 'output/summary')
TEST_SAVE_PATH = os.path.join(ROOT_PATH, 'tools/test_result')
pretrain_zoo = PretrainModelZoo()
PRETRAINED_CKPT = pretrain_zoo.pretrain_weight_path(NET_NAME, ROOT_PATH)
TRAINED_CKPT = os.path.join(ROOT_PATH, 'output/trained_weights')
EVALUATE_R_DIR = os.path.join(ROOT_PATH, 'output/evaluate_result_pickle/')
# ------------------------------------------ Train and Test
RESTORE_FROM_RPN = False
FIXED_BLOCKS = 1 # allow 0~3
FREEZE_BLOCKS = [True, False, False, False, False] # for gluoncv backbone
USE_07_METRIC = True
ADD_BOX_IN_TENSORBOARD = True
MUTILPY_BIAS_GRADIENT = 2.0 # if None, will not multiply
GRADIENT_CLIPPING_BY_NORM = 10.0 # if None, will not clip
CLS_WEIGHT = 1.0
REG_WEIGHT = 2.0
REG_LOSS_MODE = 2
ALPHA = 1.0
BETA = 1.0
BATCH_SIZE = 1
EPSILON = 1e-5
MOMENTUM = 0.9
LR = 1e-3
DECAY_STEP = [SAVE_WEIGHTS_INTE*12, SAVE_WEIGHTS_INTE*16, SAVE_WEIGHTS_INTE*20]
MAX_ITERATION = SAVE_WEIGHTS_INTE*20
WARM_SETP = int(1.0 / 8.0 * SAVE_WEIGHTS_INTE)
# -------------------------------------------- Dataset
DATASET_NAME = 'DOTATrain' # 'pascal', 'coco'
PIXEL_MEAN = [123.68, 116.779, 103.939] # R, G, B. In TF the channel order is RGB; in OpenCV it is BGR
PIXEL_MEAN_ = [0.485, 0.456, 0.406]
PIXEL_STD = [0.229, 0.224, 0.225] # R, G, B. In TF the channel order is RGB; in OpenCV it is BGR
IMG_SHORT_SIDE_LEN = 800
IMG_MAX_LENGTH = 800
CLASS_NUM = 15
IMG_ROTATE = False
RGB2GRAY = False
VERTICAL_FLIP = False
HORIZONTAL_FLIP = True
IMAGE_PYRAMID = False
# --------------------------------------------- Network
SUBNETS_WEIGHTS_INITIALIZER = tf.random_normal_initializer(mean=0.0, stddev=0.01, seed=None)
SUBNETS_BIAS_INITIALIZER = tf.constant_initializer(value=0.0)
PROBABILITY = 0.01
FINAL_CONV_BIAS_INITIALIZER = tf.constant_initializer(value=-math.log((1.0 - PROBABILITY) / PROBABILITY))
WEIGHT_DECAY = 1e-4
USE_GN = False
FPN_CHANNEL = 256
NUM_SUBNET_CONV = 4
FPN_MODE = 'fpn'
# --------------------------------------------- Anchor
LEVEL = ['P3', 'P4', 'P5', 'P6', 'P7']
BASE_ANCHOR_SIZE_LIST = [32, 64, 128, 256, 512]
ANCHOR_STRIDE = [8, 16, 32, 64, 128]
ANCHOR_SCALES = [2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]
ANCHOR_RATIOS = [1, 1 / 2, 2., 1 / 3., 3., 5., 1 / 5.]
ANCHOR_ANGLES = [-90, -75, -60, -45, -30, -15]
ANCHOR_SCALE_FACTORS = None
USE_CENTER_OFFSET = True
METHOD = 'H'
USE_ANGLE_COND = False
ANGLE_RANGE = 90 # or 180
# -------------------------------------------- Head
SHARE_NET = True
USE_P5 = True
IOU_POSITIVE_THRESHOLD = 0.5
IOU_NEGATIVE_THRESHOLD = 0.4
NMS = True
NMS_IOU_THRESHOLD = 0.1
MAXIMUM_DETECTIONS = 100
FILTERED_SCORE = 0.05
VIS_SCORE = 0.4
# -------------------------------------------- GWD
GWD_TAU = 2.0
GWD_FUNC = tf.sqrt
| 58.137931 | 583 | 0.732305 | 1,269 | 10,116 | 5.730496 | 0.345154 | 0.016502 | 0.024202 | 0.023377 | 0.095435 | 0.091584 | 0.035754 | 0.023927 | 0.010726 | 0.010726 | 0 | 0.360526 | 0.089957 | 10,116 | 173 | 584 | 58.473988 | 0.429394 | 0.066133 | 0 | 0 | 0 | 0 | 0.060377 | 0.029434 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060241 | 0 | 0.060241 | 0.036145 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f46b3b75aeaa07c38964612beb7c50451d4b0fa7 | 534 | py | Python | hata/discord/user/__init__.py | ToxicKidz/hata | f834c3cee3920d3095254815582325c5232022d7 | [
"0BSD"
] | 1 | 2022-02-04T10:20:34.000Z | 2022-02-04T10:20:34.000Z | hata/discord/user/__init__.py | Tari-dev/hata | a5c3199c845858f997af3b0b2c18770fdc691897 | [
"0BSD"
] | null | null | null | hata/discord/user/__init__.py | Tari-dev/hata | a5c3199c845858f997af3b0b2c18770fdc691897 | [
"0BSD"
] | null | null | null | from .activity_change import *
from .client_user_base import *
from .flags import *
from .guild_profile import *
from .preinstanced import *
from .thread_profile import *
from .user import *
from .user_base import *
from .utils import *
from .voice_state import *
__all__ = (
*activity_change.__all__,
*client_user_base.__all__,
*flags.__all__,
*guild_profile.__all__,
*preinstanced.__all__,
*thread_profile.__all__,
*user.__all__,
*user_base.__all__,
*utils.__all__,
*voice_state.__all__,
)
| 21.36 | 31 | 0.720974 | 65 | 534 | 5.030769 | 0.246154 | 0.275229 | 0.085627 | 0.110092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179775 | 534 | 24 | 32 | 22.25 | 0.746575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.454545 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f47074d5676798383abb4b84d381af00df5b5e34 | 7,054 | py | Python | app/models/users.py | jchen0506/bse-staging | 9170c732f437cadb15cec95a059448b6354ce49e | [
"BSD-3-Clause"
] | null | null | null | app/models/users.py | jchen0506/bse-staging | 9170c732f437cadb15cec95a059448b6354ce49e | [
"BSD-3-Clause"
] | null | null | null | app/models/users.py | jchen0506/bse-staging | 9170c732f437cadb15cec95a059448b6354ce49e | [
"BSD-3-Clause"
] | null | null | null | from .. import db, login_manager
from datetime import datetime
import hashlib
from werkzeug.security import generate_password_hash, check_password_hash
from itsdangerous import TimedJSONWebSignatureSerializer as Serializer
from flask_login import UserMixin, AnonymousUserMixin
from flask import current_app, url_for
import logging
logger = logging.getLogger(__name__)
class Permission:
READ = 1
WRITE = 2
MODERATE = 4
ADMIN = 8
class Role(db.Document):
"""Users's Roles"""
name = db.StringField(max_length=64, unique=True)
default = db.BooleanField(default=False)
permissions = db.IntField()
meta = {
'indexes': [
'default',
]
}
def __init__(self, **kwargs):
super(Role, self).__init__(**kwargs)
if self.permissions is None:
self.permissions = 0
@staticmethod
def insert_roles():
roles = {
'No access': [Permission.READ, Permission.WRITE],
'Moderator': [Permission.READ, Permission.WRITE,
Permission.MODERATE],
'Administrator': [Permission.READ, Permission.WRITE,
Permission.MODERATE, Permission.ADMIN],
}
default_role = 'No access'
for r in roles:
role = Role.objects(name=r).first()
if role is None:
role = Role(name=r)
role.reset_permissions()
for perm in roles[r]:
role.add_permission(perm)
role.default = (role.name == default_role)
role.save()
def add_permission(self, perm):
if not self.has_permission(perm):
self.permissions += perm
def remove_permission(self, perm):
if self.has_permission(perm):
self.permissions -= perm
def reset_permissions(self):
self.permissions = 0
def has_permission(self, perm):
return self.permissions & perm == perm
def __str__(self):
return '%s' % self.name
class User(UserMixin, db.Document):
"""Users with different access roles"""
email = db.EmailField(max_length=120, unique=True)
username = db.StringField(max_length=64)
    role = db.ReferenceField(Role)
password_hash = db.StringField(max_length=128)
confirmed = db.BooleanField(default=False)
location = db.StringField(max_length=64)
member_since = db.DateTimeField(default=datetime.utcnow)
avatar_hash = db.StringField(max_length=32)
meta = {
'allow_inheritance': True,
'indexes': [
'email',
]
}
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
if self.role is None:
if self.email == current_app.config['APP_ADMIN']:
self.role = Role.objects(name='Administrator').first()
if self.role is None:
self.role = Role.objects(default=True).first()
if self.email is not None and self.avatar_hash is None:
self.avatar_hash = self.gravatar_hash()
@property
def password(self):
raise AttributeError('password is not a readable attribute')
@password.setter
def password(self, password):
self.password_hash = generate_password_hash(password)
def verify_password(self, password):
return check_password_hash(self.password_hash, password)
def generate_confirmation_token(self, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps({'confirm': str(self.id)}).decode('utf-8')
def confirm(self, token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token.encode('utf-8'))
        except Exception:
return False
if data.get('confirm') != str(self.id):
return False
self.confirmed = True
self.save()
return True
def generate_reset_token(self, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps({'reset': str(self.id)}).decode('utf-8')
@staticmethod
def reset_password(token, new_password):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token.encode('utf-8'))
        except Exception:
return False
user = User.objects(id=data.get('reset')).first()
if user is None:
return False
user.password = new_password
user.save()
return True
def generate_email_change_token(self, new_email, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'], expiration)
return s.dumps(
{'change_email': str(self.id), 'new_email': new_email}).decode('utf-8')
def change_email(self, token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token.encode('utf-8'))
        except Exception:
return False
if data.get('change_email') != str(self.id):
return False
self.email = data.get('new_email')
self.avatar_hash = self.gravatar_hash()
self.save()
return True
def can(self, perm):
return self.role is not None and self.role.has_permission(perm)
def is_administrator(self):
return self.can(Permission.ADMIN)
def gravatar_hash(self):
return hashlib.md5(self.email.lower().encode('utf-8')).hexdigest()
def gravatar(self, size=100, default='identicon', rating='g'):
url = 'https://secure.gravatar.com/avatar'
hash = self.avatar_hash or self.gravatar_hash()
return '{url}/{hash}?s={size}&d={default}&r={rating}'.format(
url=url, hash=hash, size=size, default=default, rating=rating)
def to_json(self):
json_user = {
'id': str(self.id),
'email': self.email,
'member_since': self.member_since,
}
return json_user
def generate_auth_token(self, expiration=3600):
s = Serializer(current_app.config['SECRET_KEY'],
expires_in=expiration)
return s.dumps({'id': str(self.id)}).decode('utf-8')
@staticmethod
def verify_auth_token(token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
        except Exception:
return None
return User.objects(id=data['id']).first()
def __repr__(self):
return '<User %r>' % self.email
def __str__(self):
return 'username=%s' % self.email
class AnonymousUser(AnonymousUserMixin):
def can(self, permissions):
return False
def is_administrator(self):
return False
login_manager.anonymous_user = AnonymousUser
@login_manager.user_loader
def load_user(user_id):
"""User loader func is needed by flask-login to load users
which DB engine dependent"""
return User.objects(id=user_id).first()
def update_roles():
"""create or update user roles"""
Role.insert_roles()
| 29.763713 | 83 | 0.613411 | 835 | 7,054 | 5.022754 | 0.197605 | 0.023844 | 0.034335 | 0.040057 | 0.341917 | 0.239866 | 0.187172 | 0.187172 | 0.150453 | 0.150453 | 0 | 0.009333 | 0.27091 | 7,054 | 236 | 84 | 29.889831 | 0.806144 | 0.022257 | 0 | 0.276243 | 1 | 0 | 0.067036 | 0.006412 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165746 | false | 0.060773 | 0.044199 | 0.055249 | 0.480663 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f477d5cbff0b6252fffa83628dd5bea75034281c | 1,117 | py | Python | src/init/azext_init/commands.py | HuangYT2000/azure-cli-extensions | 7cceaabc342a3284376fe24b895226599a51ce65 | [
"MIT"
] | null | null | null | src/init/azext_init/commands.py | HuangYT2000/azure-cli-extensions | 7cceaabc342a3284376fe24b895226599a51ce65 | [
"MIT"
] | null | null | null | src/init/azext_init/commands.py | HuangYT2000/azure-cli-extensions | 7cceaabc342a3284376fe24b895226599a51ce65 | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=line-too-long
from azure.cli.core.commands import CliCommandType
from azext_init._client_factory import cf_init
def load_command_table(self, _):
# TODO: Add command type here
# init_sdk = CliCommandType(
# operations_tmpl='<PATH>.operations#None.{}',
# client_factory=cf_init)
with self.command_group('init') as g:
g.custom_command('create', 'create_init')
# g.command('delete', 'delete')
g.custom_command('list', 'list_init')
g.custom_command('print-config', 'print_config_init')
# g.show_command('show', 'get')
# g.generic_update_command('update', setter_name='update', custom_func_name='update_init')
with self.command_group('init', is_preview=True):
pass
| 36.032258 | 98 | 0.576544 | 122 | 1,117 | 5.04918 | 0.581967 | 0.034091 | 0.068182 | 0.061688 | 0.090909 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155774 | 1,117 | 30 | 99 | 37.233333 | 0.653234 | 0.576544 | 0 | 0 | 0 | 0 | 0.146288 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 1 | 0.111111 | false | 0.111111 | 0.222222 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |