hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
62c147cfe08df6d900ee8334859a4c5e4115112a | 465 | py | Python | start.py | andreasots/smit-raamatukogu | c7b02e0aa21f6ca7aa0c0212945d21e1c1aab477 | [
"Apache-2.0"
] | null | null | null | start.py | andreasots/smit-raamatukogu | c7b02e0aa21f6ca7aa0c0212945d21e1c1aab477 | [
"Apache-2.0"
] | null | null | null | start.py | andreasots/smit-raamatukogu | c7b02e0aa21f6ca7aa0c0212945d21e1c1aab477 | [
"Apache-2.0"
] | null | null | null | from raamatukogu import app
import raamatukogu.ui
import raamatukogu.api
import raamatukogu.healthcheck
__all__ = ['app']
app.register_blueprint(raamatukogu.ui.blueprint)
app.register_blueprint(raamatukogu.api.blueprint, url_prefix="/api/v1")
app.register_blueprint(raamatukogu.healthcheck.blueprint, url_prefix="/api")
if __name__ == '__main__':
    app.run(debug=True, threaded=True)
else:
    import logging
    app.logger.addHandler(logging.StreamHandler())
| 27.352941 | 76 | 0.793548 | 57 | 465 | 6.175439 | 0.438596 | 0.144886 | 0.170455 | 0.264205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00237 | 0.092473 | 465 | 16 | 77 | 29.0625 | 0.831754 | 0 | 0 | 0 | 0 | 0 | 0.047312 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.384615 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
62cc5404d7f54b82c65f88c013938bd75fb1eefc | 1,013 | py | Python | queue-in-python/queue1.py | frediness/cl | 642359bf4818dc4a010761d2a06e810fa8c543fd | [
"BSD-3-Clause"
] | null | null | null | queue-in-python/queue1.py | frediness/cl | 642359bf4818dc4a010761d2a06e810fa8c543fd | [
"BSD-3-Clause"
] | null | null | null | queue-in-python/queue1.py | frediness/cl | 642359bf4818dc4a010761d2a06e810fa8c543fd | [
"BSD-3-Clause"
] | null | null | null | class PriorityQueue(object):
    def __init__(self):
        self.qlist = []
    def isEmpty(self):
        return len(self) == 0
    def __len__(self):
        return len(self.qlist)
    def enqueue(self, data, priority):
        entry = _PriorityQEntry(data, priority)
        self.qlist.append(entry)
    def dequeue(self):
        # find the index of the entry with the smallest priority value
        # (smaller value = higher priority)
        best = 0
        for i in range(len(self.qlist)):
            if self.qlist[i].priority < self.qlist[best].priority:
                best = i
        item = self.qlist[best]
        del self.qlist[best]
        return item
    def getFrontMost(self):
        return self.qlist[0]
    def getRearMost(self):
        return self.qlist[-1]

class _PriorityQEntry(object):
    def __init__(self, data, priority):
        self.data = data
        self.priority = priority
S = PriorityQueue()
S.enqueue("Jeruk", 4)
S.enqueue("Tomat", 2)
S.enqueue("Mangga", 0)
S.enqueue("Duku", 5)
S.enqueue("Pepaya", 2)
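For comparison, Python's standard library provides the same min-priority behaviour through `heapq`, replacing the linear scan in `dequeue` with an O(log n) heap operation. This is a minimal sketch; the `HeapPriorityQueue` name and the FIFO tie-breaker counter are illustrative choices, not part of the original file:

```python
import heapq

class HeapPriorityQueue:
    """Min-priority queue: entries with smaller priority values dequeue first."""

    def __init__(self):
        self._heap = []
        self._count = 0  # insertion counter: breaks ties FIFO and avoids comparing data

    def enqueue(self, data, priority):
        heapq.heappush(self._heap, (priority, self._count, data))
        self._count += 1

    def dequeue(self):
        _priority, _, data = heapq.heappop(self._heap)
        return data

q = HeapPriorityQueue()
for name, p in [("Jeruk", 4), ("Tomat", 2), ("Mangga", 0), ("Duku", 5), ("Pepaya", 2)]:
    q.enqueue(name, p)
print(q.dequeue())  # Mangga (priority 0 is the smallest)
```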
| 28.138889 | 48 | 0.564659 | 123 | 1,013 | 4.536585 | 0.317073 | 0.177419 | 0.064516 | 0.060932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012876 | 0.30997 | 1,013 | 35 | 49 | 28.942857 | 0.785408 | 0 | 0 | 0 | 0 | 0 | 0.026585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242424 | false | 0 | 0 | 0.121212 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
62d32e10aa37e6c350924130334a5eb6c54d99d2 | 9,137 | py | Python | wagtail/admin/tests/pages/test_unpublish_page.py | samgans/wagtail | 48a8af71e5333fb701476702bd784fa407567e25 | [
"BSD-3-Clause"
] | 2 | 2019-05-23T01:31:18.000Z | 2020-06-27T21:19:10.000Z | wagtail/admin/tests/pages/test_unpublish_page.py | samgans/wagtail | 48a8af71e5333fb701476702bd784fa407567e25 | [
"BSD-3-Clause"
] | 6 | 2020-08-26T03:00:03.000Z | 2020-09-24T02:59:14.000Z | wagtail/admin/tests/pages/test_unpublish_page.py | samgans/wagtail | 48a8af71e5333fb701476702bd784fa407567e25 | [
"BSD-3-Clause"
] | 1 | 2021-02-15T18:59:53.000Z | 2021-02-15T18:59:53.000Z | from unittest import mock
from django.contrib.auth.models import Permission
from django.http import HttpRequest, HttpResponse
from django.test import TestCase
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
from wagtail.core.models import Page
from wagtail.core.signals import page_unpublished
from wagtail.tests.testapp.models import SimplePage
from wagtail.tests.utils import WagtailTestUtils
class TestPageUnpublish(TestCase, WagtailTestUtils):
    def setUp(self):
        self.user = self.login()

        # Create a page to unpublish
        self.root_page = Page.objects.get(id=2)
        self.page = SimplePage(
            title="Hello world!",
            slug='hello-world',
            content="hello",
            live=True,
        )
        self.root_page.add_child(instance=self.page)

    def test_unpublish_view(self):
        """
        This tests that the unpublish view responds with an unpublish confirm page
        """
        # Get unpublish page
        response = self.client.get(reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )))

        # Check that the user received an unpublish confirm page
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'wagtailadmin/pages/confirm_unpublish.html')

    def test_unpublish_view_invalid_page_id(self):
        """
        This tests that the unpublish view returns an error if the page id is invalid
        """
        # Get unpublish page
        response = self.client.get(reverse('wagtailadmin_pages:unpublish', args=(12345, )))

        # Check that the user received a 404 response
        self.assertEqual(response.status_code, 404)

    def test_unpublish_view_bad_permissions(self):
        """
        This tests that the unpublish view doesn't allow users without unpublish permissions
        """
        # Remove privileges from user
        self.user.is_superuser = False
        self.user.user_permissions.add(
            Permission.objects.get(content_type__app_label='wagtailadmin', codename='access_admin')
        )
        self.user.save()

        # Get unpublish page
        response = self.client.get(reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )))

        # Check that the user received a 403 response
        self.assertEqual(response.status_code, 403)

    def test_unpublish_view_post(self):
        """
        This posts to the unpublish view and checks that the page was unpublished
        """
        # Connect a mock signal handler to page_unpublished signal
        mock_handler = mock.MagicMock()
        page_unpublished.connect(mock_handler)

        # Post to the unpublish page
        response = self.client.post(reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )))

        # Should be redirected to explorer page
        self.assertRedirects(response, reverse('wagtailadmin_explore', args=(self.root_page.id, )))

        # Check that the page was unpublished
        self.assertFalse(SimplePage.objects.get(id=self.page.id).live)

        # Check that the page_unpublished signal was fired
        self.assertEqual(mock_handler.call_count, 1)

        mock_call = mock_handler.mock_calls[0][2]
        self.assertEqual(mock_call['sender'], self.page.specific_class)
        self.assertEqual(mock_call['instance'], self.page)
        self.assertIsInstance(mock_call['instance'], self.page.specific_class)

    def test_after_unpublish_page(self):
        def hook_func(request, page):
            self.assertIsInstance(request, HttpRequest)
            self.assertEqual(page.id, self.page.id)
            return HttpResponse("Overridden!")

        with self.register_hook('after_unpublish_page', hook_func):
            post_data = {}
            response = self.client.post(
                reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )), post_data
            )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.content, b"Overridden!")

        self.page.refresh_from_db()
        self.assertEqual(self.page.status_string, _("draft"))

    def test_before_unpublish_page(self):
        def hook_func(request, page):
            self.assertIsInstance(request, HttpRequest)
            self.assertEqual(page.id, self.page.id)
            return HttpResponse("Overridden!")

        with self.register_hook('before_unpublish_page', hook_func):
            post_data = {}
            response = self.client.post(
                reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )), post_data
            )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.content, b"Overridden!")

        # The hook response is served before unpublish is called.
        self.page.refresh_from_db()
        self.assertEqual(self.page.status_string, _("live"))

    def test_unpublish_descendants_view(self):
        """
        This tests that the unpublish view responds with an unpublish confirm page that does not contain the form field 'include_descendants'
        """
        # Get unpublish page
        response = self.client.get(reverse('wagtailadmin_pages:unpublish', args=(self.page.id, )))

        # Check that the user received an unpublish confirm page
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'wagtailadmin/pages/confirm_unpublish.html')
        # Check the form does not contain the checkbox field include_descendants
        self.assertNotContains(response, '<input id="id_include_descendants" name="include_descendants" type="checkbox">')


class TestPageUnpublishIncludingDescendants(TestCase, WagtailTestUtils):
    def setUp(self):
        self.user = self.login()

        # Find root page
        self.root_page = Page.objects.get(id=2)

        # Create a page to unpublish
        self.test_page = self.root_page.add_child(instance=SimplePage(
            title="Hello world!",
            slug='hello-world',
            content="hello",
            live=True,
            has_unpublished_changes=False,
        ))

        # Create a couple of child pages
        self.test_child_page = self.test_page.add_child(instance=SimplePage(
            title="Child page",
            slug='child-page',
            content="hello",
            live=True,
            has_unpublished_changes=True,
        ))

        self.test_another_child_page = self.test_page.add_child(instance=SimplePage(
            title="Another Child page",
            slug='another-child-page',
            content="hello",
            live=True,
            has_unpublished_changes=True,
        ))

    def test_unpublish_descendants_view(self):
        """
        This tests that the unpublish view responds with an unpublish confirm page that contains the form field 'include_descendants'
        """
        # Get unpublish page
        response = self.client.get(reverse('wagtailadmin_pages:unpublish', args=(self.test_page.id, )))

        # Check that the user received an unpublish confirm page
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'wagtailadmin/pages/confirm_unpublish.html')
        # Check the form contains the checkbox field include_descendants
        self.assertContains(response, '<input id="id_include_descendants" name="include_descendants" type="checkbox">')

    def test_unpublish_include_children_view_post(self):
        """
        This posts to the unpublish view and checks that the page and its descendants were unpublished
        """
        # Post to the unpublish page
        response = self.client.post(reverse('wagtailadmin_pages:unpublish', args=(self.test_page.id, )), {'include_descendants': 'on'})

        # Should be redirected to explorer page
        self.assertRedirects(response, reverse('wagtailadmin_explore', args=(self.root_page.id, )))

        # Check that the page was unpublished
        self.assertFalse(SimplePage.objects.get(id=self.test_page.id).live)

        # Check that the descendant pages were unpublished as well
        self.assertFalse(SimplePage.objects.get(id=self.test_child_page.id).live)
        self.assertFalse(SimplePage.objects.get(id=self.test_another_child_page.id).live)

    def test_unpublish_not_include_children_view_post(self):
        """
        This posts to the unpublish view and checks that the page was unpublished but its descendants were not
        """
        # Post to the unpublish page
        response = self.client.post(reverse('wagtailadmin_pages:unpublish', args=(self.test_page.id, )), {})

        # Should be redirected to explorer page
        self.assertRedirects(response, reverse('wagtailadmin_explore', args=(self.root_page.id, )))

        # Check that the page was unpublished
        self.assertFalse(SimplePage.objects.get(id=self.test_page.id).live)

        # Check that the descendant pages were not unpublished
        self.assertTrue(SimplePage.objects.get(id=self.test_child_page.id).live)
        self.assertTrue(SimplePage.objects.get(id=self.test_another_child_page.id).live)
| 40.973094 | 141 | 0.674948 | 1,090 | 9,137 | 5.512844 | 0.157798 | 0.024963 | 0.021967 | 0.054918 | 0.749875 | 0.739391 | 0.687802 | 0.673323 | 0.65535 | 0.637044 | 0 | 0.005273 | 0.232024 | 9,137 | 222 | 142 | 41.157658 | 0.851076 | 0.213965 | 0 | 0.483607 | 0 | 0 | 0.129467 | 0.076073 | 0 | 0 | 0 | 0 | 0.278689 | 1 | 0.114754 | false | 0 | 0.081967 | 0 | 0.229508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
62dd6b074361ce37f4ba7af32927d5936c3d6b16 | 483 | py | Python | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/APPLE/element_array.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/APPLE/element_array.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/lib/Python26/Lib/site-packages/OpenGL/GL/APPLE/element_array.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | '''OpenGL extension APPLE.element_array
This module customises the behaviour of the
OpenGL.raw.GL.APPLE.element_array to provide a more
Python-friendly API
'''
from OpenGL import platform, constants, constant, arrays
from OpenGL import extensions, wrapper
from OpenGL.GL import glget
import ctypes
from OpenGL.raw.GL.APPLE.element_array import *
### END AUTOGENERATED SECTION
from OpenGL.GL import glget
glget.addGLGetConstant( GL_ELEMENT_ARRAY_TYPE_APPLE, (1,) ) # check size...
| 32.2 | 75 | 0.803313 | 70 | 483 | 5.442857 | 0.542857 | 0.131234 | 0.133858 | 0.08399 | 0.267717 | 0.146982 | 0 | 0 | 0 | 0 | 0 | 0.002353 | 0.120083 | 483 | 14 | 76 | 34.5 | 0.894118 | 0.405797 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
62f2ddaa8776e14cb943c273213b5f867f216414 | 1,402 | py | Python | rdr_service/alembic/versions/0cc3557c43d0_cascade_metrics_deletes.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 39 | 2017-10-13T19:16:27.000Z | 2021-09-24T16:58:21.000Z | rdr_service/alembic/versions/0cc3557c43d0_cascade_metrics_deletes.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 312 | 2017-09-08T15:42:13.000Z | 2022-03-23T18:21:40.000Z | rdr_service/alembic/versions/0cc3557c43d0_cascade_metrics_deletes.py | all-of-us/raw-data-repository | d28ad957557587b03ff9c63d55dd55e0508f91d8 | [
"BSD-3-Clause"
] | 19 | 2017-09-15T13:58:00.000Z | 2022-02-07T18:33:20.000Z | """Cascade metrics deletes
Revision ID: 0cc3557c43d0
Revises: 3d4a6433e26d
Create Date: 2017-05-04 11:28:25.951742
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "0cc3557c43d0"
down_revision = "3d4a6433e26d"
branch_labels = None
depends_on = None
def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()


def upgrade_metrics():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


def downgrade_metrics():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


def upgrade_rdr():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_constraint("metrics_bucket_ibfk_1", "metrics_bucket", type_="foreignkey")
    op.create_foreign_key(
        None, "metrics_bucket", "metrics_version", ["metrics_version_id"], ["metrics_version_id"], ondelete="CASCADE"
    )
    # ### end Alembic commands ###


def downgrade_rdr():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_constraint(None, "metrics_bucket", type_="foreignkey")
    op.create_foreign_key(
        "metrics_bucket_ibfk_1", "metrics_bucket", "metrics_version", ["metrics_version_id"], ["metrics_version_id"]
    )
    # ### end Alembic commands ###
| 26.45283 | 117 | 0.686163 | 163 | 1,402 | 5.650307 | 0.343558 | 0.084691 | 0.091205 | 0.099891 | 0.600434 | 0.558089 | 0.519001 | 0.519001 | 0.421281 | 0.421281 | 0 | 0.046753 | 0.176177 | 1,402 | 52 | 118 | 26.961538 | 0.750649 | 0.327389 | 0 | 0.173913 | 0 | 0 | 0.310934 | 0.047836 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0.086957 | 0.043478 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
62f70252a2e517c356a8523d42bc66267f666c8f | 1,302 | py | Python | agents/PPOAgent.py | decoderkurt/osim-rl-helper | 511c7478853305b7c74260c793fead194574b9e6 | [
"MIT"
] | 1 | 2018-08-06T10:54:25.000Z | 2018-08-06T10:54:25.000Z | agents/PPOAgent.py | decoderkurt/osim-rl-helper | 511c7478853305b7c74260c793fead194574b9e6 | [
"MIT"
] | null | null | null | agents/PPOAgent.py | decoderkurt/osim-rl-helper | 511c7478853305b7c74260c793fead194574b9e6 | [
"MIT"
] | 1 | 2018-09-04T18:44:01.000Z | 2018-09-04T18:44:01.000Z | from helper.templates import Agent
import numpy as np
from baselines import bench, logger
from baselines.common import tf_util as U
import os
from mpi4py import MPI
import gym
from gym.wrappers import FlattenDictWrapper
from baselines import logger
from baselines.bench import Monitor
from baselines.common import set_global_seeds
from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv
class PPOAgent(Agent):
"""
An agent that chooses NOOP action at every timestep.
"""
def __init__(self, observation_space, action_space):
self.observation_space = observation_space
def train(self, env, nb_steps):
from baselines.ppo1 import mlp_policy, pposgd_simple
U.make_session(num_cpu=1).__enter__()
def policy_fn(name, ob_space, ac_space):
return mlp_policy.MlpPolicy(name=name, ob_space=ob_space, ac_space=ac_space,
hid_size=64, num_hid_layers=2)
pposgd_simple.learn(env, policy_fn,
max_timesteps=int(1e6),
timesteps_per_actorbatch=2048,
clip_param=0.2, entcoeff=0.0,
optim_epochs=10, optim_stepsize=3e-4, optim_batchsize=64,
gamma=0.99, lam=0.95, schedule='linear',
)
| 38.294118 | 92 | 0.676651 | 174 | 1,302 | 4.816092 | 0.54023 | 0.108592 | 0.068019 | 0.059666 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028807 | 0.253456 | 1,302 | 33 | 93 | 39.454545 | 0.833333 | 0.039939 | 0 | 0 | 0 | 0 | 0.004862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.464286 | 0.035714 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1a0002fe9d6b19813b0e0f0c8613e77833221ada | 521 | py | Python | Todoism/todoism/settings.py | authetic-x/Flask_Practice | 90ca69467cd6de8eb41669d2a87ab072fc11e1c8 | [
"MIT"
] | 1 | 2020-07-29T16:46:32.000Z | 2020-07-29T16:46:32.000Z | Todoism/todoism/settings.py | authetic-x/Flask_Practice | 90ca69467cd6de8eb41669d2a87ab072fc11e1c8 | [
"MIT"
] | null | null | null | Todoism/todoism/settings.py | authetic-x/Flask_Practice | 90ca69467cd6de8eb41669d2a87ab072fc11e1c8 | [
"MIT"
] | null | null | null | import os
basedir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
class Baseconfig:
    TODOISM_LOCALES = ['en_US', 'zh_Hans_CN']
    TODOISM_ITEM_PER_PAGE = 20
    BABEL_DEFAULT_LOCALE = TODOISM_LOCALES[0]
    SECRET_KEY = os.getenv('SECRET_KEY', 'a secret string')
    SQLALCHEMY_DATABASE_URI = os.getenv('DATABASE_URI', 'sqlite:///' + os.path.join(basedir, 'data.db'))
    SQLALCHEMY_TRACK_MODIFICATIONS = False


class Development(Baseconfig):
    pass


config = {
    'development': Development
} | 22.652174 | 104 | 0.714012 | 67 | 521 | 5.238806 | 0.626866 | 0.068376 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006834 | 0.15739 | 521 | 23 | 105 | 22.652174 | 0.792711 | 0 | 0 | 0 | 0 | 0 | 0.153257 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.071429 | 0.071429 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
1a13e16b4142ce95c3d4b43a20000f8a3b91c741 | 6,949 | py | Python | housecanary/response.py | SpainTrain/hc-api-python | 2bb9e2208b34e8617575de45934357ee33b8531c | [
"MIT"
] | 18 | 2017-04-12T20:34:37.000Z | 2021-09-12T01:21:19.000Z | housecanary/response.py | SpainTrain/hc-api-python | 2bb9e2208b34e8617575de45934357ee33b8531c | [
"MIT"
] | 13 | 2016-12-02T01:38:52.000Z | 2021-06-07T15:10:51.000Z | housecanary/response.py | SpainTrain/hc-api-python | 2bb9e2208b34e8617575de45934357ee33b8531c | [
"MIT"
] | 8 | 2017-07-06T15:36:23.000Z | 2021-10-10T17:31:29.000Z | """
Provides Response to encapsulate API responses.
"""
from builtins import next
from builtins import str
from builtins import object
from housecanary.object import Property
from housecanary.object import Block
from housecanary.object import ZipCode
from housecanary.object import Msa
from . import utilities
class Response(object):
"""Encapsulate an API reponse."""
def __init__(self, endpoint_name, json_body, original_response):
"""
Args:
endpoint_name (str) - The endpoint of the request, such as "property/value"
json_body - The response body in json format.
original_response (response object) - server response returned from an http request.
"""
self._endpoint_name = endpoint_name
self._json_body = json_body
self._response = original_response
self._objects = []
self._has_object_error = None
self._object_errors = None
self._rate_limits = None
@classmethod
def create(cls, endpoint_name, json_body, original_response):
"""Factory for creating the correct type of Response based on the data.
Args:
endpoint_name (str) - The endpoint of the request, such as "property/value"
json_body - The response body in json format.
original_response (response object) - server response returned from an http request.
"""
if endpoint_name == "property/value_report":
return ValueReportResponse(endpoint_name, json_body, original_response)
if endpoint_name == "property/rental_report":
return RentalReportResponse(endpoint_name, json_body, original_response)
prefix = endpoint_name.split("/")[0]
if prefix == "block":
return BlockResponse(endpoint_name, json_body, original_response)
if prefix == "zip":
return ZipCodeResponse(endpoint_name, json_body, original_response)
if prefix == "msa":
return MsaResponse(endpoint_name, json_body, original_response)
return PropertyResponse(endpoint_name, json_body, original_response)
@property
def endpoint_name(self):
"""Get the component name of the original request.
Returns:
Component name as a string.
"""
return self._endpoint_name
@property
def response(self):
"""Gets the original response
Returns:
response object passed in during instantiation.
"""
return self._response
def json(self):
"""Gets the response body as json
Returns:
Json of the response body
"""
return self._json_body
def get_object_errors(self):
"""Gets a list of business error message strings
for each of the requested objects that had a business error.
If there was no error, returns an empty list
Returns:
List of strings
"""
if self._object_errors is None:
self._object_errors = [{str(o): o.get_errors()}
for o in self.objects()
if o.has_error()]
return self._object_errors
def has_object_error(self):
"""Returns true if any requested object had a business logic error,
otherwise returns false
Returns:
boolean
"""
if self._has_object_error is None:
# scan the objects for any business error codes
self._has_object_error = next(
(True for o in self.objects()
if o.has_error()),
False)
return self._has_object_error
def objects(self):
"""Override in subclasses"""
raise NotImplementedError()
def _get_objects(self, obj_type):
if not self._objects:
body = self.json()
self._objects = []
if not isinstance(body, list):
# The endpoints return a list in the body.
# This could maybe raise an exception.
return []
for item in body:
prop = obj_type.create_from_json(item)
self._objects.append(prop)
return self._objects
@property
def rate_limits(self):
"""Returns a list of rate limit details."""
if not self._rate_limits:
self._rate_limits = utilities.get_rate_limits(self.response)
return self._rate_limits
class PropertyResponse(Response):
"""Represents the data returned from an Analytics API property endpoint."""
def objects(self):
"""Gets a list of Property objects for the requested properties,
each containing the property's returned json data from the API.
Returns an empty list if the request format was PDF.
Returns:
List of Property objects
"""
return self._get_objects(Property)
def properties(self):
"""Alias method for objects."""
return self.objects()
class BlockResponse(Response):
"""Represents the data returned from an Analytics API block endpoint."""
def objects(self):
"""Gets a list of Block objects for the requested blocks,
each containing the block's returned json data from the API.
Returns:
List of Block objects
"""
return self._get_objects(Block)
def blocks(self):
"""Alias method for objects."""
return self.objects()
class ZipCodeResponse(Response):
"""Represents the data returned from an Analytics API zip endpoint."""
def objects(self):
"""Gets a list of ZipCode objects for the requested zipcodes,
each containing the zipcodes's returned json data from the API.
Returns:
List of ZipCode objects
"""
return self._get_objects(ZipCode)
def zipcodes(self):
"""Alias method for objects."""
return self.objects()
class MsaResponse(Response):
"""Represents the data returned from an Analytics API msa endpoint."""
def objects(self):
"""Gets a list of Msa objects for the requested msas,
each containing the msa's returned json data from the API.
Returns:
List of Msa objects
"""
return self._get_objects(Msa)
def msas(self):
"""Alias method for objects."""
return self.objects()
class ValueReportResponse(Response):
"""The response from a value_report request."""
def objects(self):
"""The value_report endpoint returns a json dict
instead of a list of address results."""
return []
class RentalReportResponse(Response):
"""The response from a rental_report request."""
def objects(self):
"""The rental_report endpoint returns a json dict
instead of a list of address results."""
return []
| 29.824034 | 96 | 0.623111 | 814 | 6,949 | 5.179361 | 0.167076 | 0.048387 | 0.030361 | 0.037951 | 0.411053 | 0.370493 | 0.313567 | 0.304554 | 0.244307 | 0.138046 | 0 | 0.000206 | 0.30249 | 6,949 | 232 | 97 | 29.952586 | 0.86961 | 0.377608 | 0 | 0.2 | 0 | 0 | 0.014508 | 0.011343 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.084211 | 0 | 0.621053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
# File: openprescribing/pipeline/tests/test_import_regional_teams.py
# Repo: annapowellsmith/openpresc (MIT)

import os
from django.conf import settings
from django.core.management import call_command
from django.test import TestCase

from frontend.models import RegionalTeam


class TestImportRegionalTeams(TestCase):
    def test_import_stps(self):
        path = os.path.join(settings.APPS_ROOT, "pipeline", "test-data", "eauth.csv")
        call_command("import_regional_teams", "--filename", path)

        self.assertEqual(RegionalTeam.objects.count(), 6)

        rt = RegionalTeam.objects.get(code="Y54")
        self.assertEqual(rt.name, "NORTH OF ENGLAND COMMISSIONING REGION")
        self.assertEqual(str(rt.open_date), "2012-10-01")
        self.assertEqual(rt.close_date, None)

        rt = RegionalTeam.objects.get(code="Y57")
        self.assertEqual(str(rt.open_date), "2012-10-01")
        self.assertEqual(str(rt.close_date), "2018-03-31")
# File: src/zope/securitypolicy/principalpermission.py
# Repo: zopefoundation/zope.securitypolicy (ZPL-2.1)

##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Mappings between principals and permissions, stored in an object locally.
"""
from zope.interface import implementer
from zope.security.permission import allPermissions
from zope.authentication.principal import checkPrincipal
from zope.securitypolicy.interfaces import Allow, Deny, Unset
from zope.securitypolicy.interfaces import IPrincipalPermissionManager
from zope.securitypolicy.securitymap import SecurityMap
from zope.securitypolicy.securitymap import AnnotationSecurityMap
@implementer(IPrincipalPermissionManager)
class AnnotationPrincipalPermissionManager(AnnotationSecurityMap):
    """Mappings between principals and permissions."""

    # The annotation key is a holdover from this module's old
    # location, but cannot change without breaking existing databases.
    # It is also misspelled, but that's OK. It just has to be unique.
    # We'll keep it as is, to prevent breaking old data:
    key = 'zopel.app.security.AnnotationPrincipalPermissionManager'

    def grantPermissionToPrincipal(self, permission_id, principal_id):
        AnnotationSecurityMap.addCell(self, permission_id, principal_id, Allow)

    def denyPermissionToPrincipal(self, permission_id, principal_id):
        AnnotationSecurityMap.addCell(self, permission_id, principal_id, Deny)

    unsetPermissionForPrincipal = AnnotationSecurityMap.delCell
    getPrincipalsForPermission = AnnotationSecurityMap.getRow
    getPermissionsForPrincipal = AnnotationSecurityMap.getCol

    def getSetting(self, permission_id, principal_id, default=Unset):
        return AnnotationSecurityMap.queryCell(
            self, permission_id, principal_id, default)

    getPrincipalsAndPermissions = AnnotationSecurityMap.getAllCells
@implementer(IPrincipalPermissionManager)
class PrincipalPermissionManager(SecurityMap):
    """Mappings between principals and permissions."""

    def grantPermissionToPrincipal(self, permission_id, principal_id,
                                   check=True):
        ''' See the interface IPrincipalPermissionManager '''
        if check:
            checkPrincipal(None, principal_id)
        self.addCell(permission_id, principal_id, Allow)

    def grantAllPermissionsToPrincipal(self, principal_id):
        ''' See the interface IPrincipalPermissionManager '''
        for permission_id in allPermissions(None):
            self.grantPermissionToPrincipal(permission_id, principal_id, False)

    def denyPermissionToPrincipal(self, permission_id, principal_id,
                                  check=True):
        ''' See the interface IPrincipalPermissionManager '''
        if check:
            checkPrincipal(None, principal_id)
        self.addCell(permission_id, principal_id, Deny)

    def unsetPermissionForPrincipal(self, permission_id, principal_id):
        ''' See the interface IPrincipalPermissionManager '''
        # Don't check validity intentionally.
        # After all, we certainly want to unset invalid ids.
        self.delCell(permission_id, principal_id)

    def getPrincipalsForPermission(self, permission_id):
        ''' See the interface IPrincipalPermissionManager '''
        return self.getRow(permission_id)

    def getPermissionsForPrincipal(self, principal_id):
        ''' See the interface IPrincipalPermissionManager '''
        return self.getCol(principal_id)

    def getSetting(self, permission_id, principal_id, default=Unset):
        ''' See the interface IPrincipalPermissionManager '''
        return self.queryCell(permission_id, principal_id, default)

    def getPrincipalsAndPermissions(self):
        ''' See the interface IPrincipalPermissionManager '''
        return self.getAllCells()
# Permissions are our rows, and principals are our columns
principalPermissionManager = PrincipalPermissionManager()
# Register our cleanup with Testing.CleanUp to make writing unit tests
# simpler.
try:
    from zope.testing.cleanup import addCleanUp
except ImportError:  # pragma: no cover
    pass
else:
    addCleanUp(principalPermissionManager._clear)
    del addCleanUp
# File: pucadmin/questions/migrations/0001_initial.py
# Repo: JobDoesburg/PUC-admin (MIT)

# Generated by Django 3.2.5 on 2021-07-08 13:20
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ("schools", "0001_initial"),
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ("organisations", "0001_initial"),
    ]

    operations = [
        migrations.CreateModel(
            name="Question",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("research_question", models.TextField(blank=True, null=True)),
                ("sub_questions", models.TextField(blank=True, null=True)),
                ("message", models.TextField()),
                ("completed", models.BooleanField(default=False)),
                (
                    "assignee",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
                (
                    "course",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.PROTECT,
                        to="organisations.course",
                    ),
                ),
                (
                    "school",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.PROTECT,
                        related_name="questions",
                        to="schools.school",
                    ),
                ),
            ],
            options={
                "verbose_name": "question",
                "verbose_name_plural": "questions",
            },
        ),
        migrations.CreateModel(
            name="Student",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("first_name", models.CharField(max_length=20)),
                ("last_name", models.CharField(max_length=20)),
                ("email", models.EmailField(blank=True, max_length=254, null=True)),
                (
                    "question",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="students",
                        to="questions.question",
                    ),
                ),
            ],
            options={
                "verbose_name": "student",
                "verbose_name_plural": "students",
            },
        ),
        migrations.CreateModel(
            name="CourseAssignee",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                (
                    "alternative_email",
                    models.EmailField(blank=True, max_length=254, null=True),
                ),
                (
                    "assignee",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.PROTECT,
                        related_name="assigned_courses",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
                (
                    "course",
                    models.OneToOneField(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="assignee",
                        related_query_name="assignee",
                        to="organisations.course",
                    ),
                ),
            ],
            options={
                "verbose_name": "course assignee",
                "verbose_name_plural": "course assignees",
            },
        ),
    ]
# File: fintec/version.py
# Repo: bhenk/fintec (Apache-2.0)

# -*- coding: utf-8 -*-
""" Module that keeps track of library management like version control, release dates etc. """
__version__ = '0.0.2'
__release_date__ = '2019-02-10'
# File: src/data/action/TemplateActionReader.py
# Repo: DanielWieczorek/FancyReadmeBuilder (MIT)

__author__ = 'DWI'

class TemplateActionReader(object):
    def __init__(self, template_action_builder):
        self.action_builder = template_action_builder

    def read(self, file_name):
        """
        Reads the given file and returns a template object
        :param file_name: name of the template file
        :return: the template that was created
        """
        with open(file_name, 'r') as file:
            data = self._load_data(file)
            return self._build_action_list(data)

    def _build_action_list(self, data):
        pass

    def _load_data(self, file):
        pass
# File: python/src/nnabla/ext_utils.py
# Repo: kodavatimahendra/nnabla (Apache-2.0)

# Copyright (c) 2017 Sony Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utilities for NNabla extensions.
"""
def import_extension_module(ext_name):
    '''
    Import an extension module by name.

    The extension modules are installed under the `nnabla_ext` package as
    namespace packages. All extension modules provide a unified set of APIs.

    Args:
        ext_name(str): Extension name. e.g. 'cpu', 'cuda', 'cudnn' etc.

    Returns: module
        A Python module of a particular NNabla extension.

    Example:

        .. code-block:: python

            ext = import_extension_module('cudnn')
            available_devices = ext.get_devices()
            print(available_devices)
            ext.device_synchronize(available_devices[0])
            ext.clear_memory_cache()

    '''
    import importlib
    try:
        return importlib.import_module('.' + ext_name, 'nnabla_ext')
    except ImportError as e:
        from nnabla import logger
        logger.error('Extension `{}` does not exist.'.format(ext_name))
        raise e
def list_extensions():
    '''
    List up available extensions.

    Note:
        It may not work on some platforms/environments since it depends
        on the directory structure of the namespace packages.

    Returns: list of str
        Names of available extensions.

    '''
    import nnabla_ext.cpu
    from os.path import dirname, join, realpath
    from os import listdir
    ext_dir = realpath((join(dirname(nnabla_ext.cpu.__file__), '..')))
    return listdir(ext_dir)
def get_extension_context(ext_name, **kw):
    """Get the context of the specified extension.

    Every extension module must provide a `context(**kw)` function.

    Args:
        ext_name (str) : Module path relative to `nnabla_ext`.
        kw (dict) : Additional keyword arguments for the context function in an extension module.

    Returns:
        :class:`nnabla.Context`: The current extension context.

    Example:

        .. code-block:: python

            ctx = get_extension_context('cudnn', device_id='0')
            nn.set_default_context(ctx)

    """
    if ext_name == 'cuda.cudnn':
        from nnabla import logger
        logger.warn('Deprecated extension name "cuda.cudnn" passed.')
        ext_name = 'cudnn'
    mod = import_extension_module(ext_name)
    return mod.context(**kw)
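The dynamic-import pattern used by `import_extension_module` above — `importlib.import_module('.' + name, package)` — can be exercised against any package with submodules. A minimal sketch, where the `load_plugin` name and the stdlib `json` package are illustrative assumptions standing in for `nnabla_ext` (this is not nnabla API):

```python
import importlib


def load_plugin(name, package='json'):
    # Same shape as import_extension_module: resolve a submodule of a
    # parent package by name at runtime via a relative import path.
    # The stdlib 'json' package stands in for 'nnabla_ext' purely for demo.
    try:
        return importlib.import_module('.' + name, package)
    except ImportError:
        raise ValueError('Plugin `{}` does not exist.'.format(name))


mod = load_plugin('decoder')  # resolves to the stdlib json.decoder module
print(mod.__name__)           # -> 'json.decoder'
```

Because `ModuleNotFoundError` subclasses `ImportError`, a missing name is caught and re-raised with a friendlier message, mirroring the logging-then-raise flow above.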
# File: pacu/settings.py
# Repo: RyanJarv/Pacu2 (MIT)

import os
from pathlib import Path
import typer
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
app_dir = Path(typer.get_app_dir('pacu'))
profile_path = app_dir/'profile'
# File: flask_unchained/bundles/security/extensions/__init__.py
# Repo: achiang/flask-unchained (MIT)

from .security import Security
security = Security()

EXTENSIONS = {
    'security': (security, ['csrf', 'db'])
}

__all__ = [
    'security',
    'Security',
]
# File: pystratis/core/types/tests/test_int32.py
# Repo: TjadenFroyda/pyStratis (MIT)

import pytest
from random import choice

from pystratis.core.types import *


def test_int32(generate_int32):
    hex_int = generate_int32
    int32(hex_int)


def test_int32_invalid_hexstring():
    letters = 'qwerty0'
    badvalue = ''.join(choice(letters) for _ in range(64))
    with pytest.raises(ValueError):
        int32(badvalue)


def test_int32_long_hex_overflow():
    letters = '0123456789abcdef'
    long_value = ''.join(choice(letters) for _ in range(128))
    with pytest.raises(ValueError):
        int32(long_value)


def test_int32_short_hex_ok():
    letters = '0123456789abcdef'
    short_value = ''.join(choice(letters) for _ in range(6))
    int32(short_value)


def test_int32_try_init_nonstr(generate_int32):
    with pytest.raises(ValueError):
        int32(1.5)
    with pytest.raises(ValueError):
        int32(True)
    with pytest.raises(ValueError):
        int32([generate_int32])
    with pytest.raises(ValueError):
        int32({'hash': generate_int32})
    with pytest.raises(ValueError):
        int32(bytes(generate_int32, 'utf-8'))


def test_int32_add():
    a = int32(2)
    b = int32(5)
    assert a + b == int32(7)
    a += b
    assert a == int32(7)


def test_int32_sub():
    a = int32(9)
    b = int32(5)
    assert a - b == int32(4)
    a -= b
    assert a == int32(4)


def test_int32_mul():
    a = int32(2)
    b = int32(5)
    assert a * b == int32(2 * 5)
    a *= b
    assert a == int32(2 * 5)


def test_int32_floordiv():
    a = int32(20)
    b = int32(5)
    assert a // b == int32(20 // 5)
    a //= b
    assert a == int32(20 // 5)


def test_int32_pow():
    a = int32(20)
    b = 5
    assert a ** b == int32(20 ** 5)
    a //= b
    assert a == int32(20 // 5)


def test_int32_test_underflow():
    numbits = 32
    a = int32(2**numbits // -2)
    b = int32(5)
    with pytest.raises(ValueError):
        a - b
    with pytest.raises(ValueError):
        int32(2**numbits // -2 - 1)


def test_int32_test_overflow():
    numbits = 32
    with pytest.raises(ValueError):
        int32(2**numbits)
    with pytest.raises(ValueError):
        int32(2**numbits - 1) + int32(1)
    with pytest.raises(ValueError):
        int32(2**numbits) * int32(2)


def test_addition_with_other_customints():
    a = int32(2**32 // 2 - 1)
    b = int64(2**64 // 2 - 1)
    c = uint32(2**32 - 1)
    d = uint64(2**64 - 1)
    e = uint128(2**128 - 1)
    f = uint160(2**160 - 1)
    g = uint256(2**256 - 1)
    with pytest.raises(ValueError):
        a + b
    with pytest.raises(ValueError):
        a + c
    with pytest.raises(ValueError):
        a + d
    with pytest.raises(ValueError):
        a + e
    with pytest.raises(ValueError):
        a + f
    with pytest.raises(ValueError):
        a + g


def test_subtracting_from_larger_customints_is_ok():
    a = int32(2**32 // 2 - 1)
    b = int64(2**64 // 2 - 1)
    c = uint32(2**32 - 1)
    d = uint64(2**64 - 1)
    e = uint128(2**128 - 1)
    f = uint160(2**160 - 1)
    g = uint256(2**256 - 1)
    b - a
    c - a
    d - a
    e - a
    f - a
    g - a
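The overflow and underflow behaviour these tests exercise can be summarised with a small self-contained sketch. `BoundedInt32` below is a hypothetical stand-in, not the pystratis `int32` implementation — it models only the range check on construction and on arithmetic, not the hex-string parsing or cross-type rules:

```python
class BoundedInt32(int):
    """Hypothetical signed 32-bit type: any value outside
    [-2**31, 2**31 - 1] raises ValueError, as the tests above expect."""
    MIN, MAX = -2**31, 2**31 - 1

    def __new__(cls, value):
        value = int(value)
        if not cls.MIN <= value <= cls.MAX:
            raise ValueError('value out of int32 range: {}'.format(value))
        return super().__new__(cls, value)

    def __add__(self, other):
        # Re-validating through the constructor makes overflow explicit:
        # the max value plus one raises instead of silently wrapping.
        return BoundedInt32(int(self) + int(other))

    def __sub__(self, other):
        return BoundedInt32(int(self) - int(other))
```

With this sketch, `BoundedInt32(2**31 - 1) + BoundedInt32(1)` raises `ValueError`, mirroring `test_int32_test_overflow`, and subtracting past the minimum raises as in `test_int32_test_underflow`.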
# File: code/backups/matchedFilter-almost-done.py
# Repo: geovanecomp/Masters-Degree (MIT)

import numpy as np
from numpy import linalg as LA
from sklearn.decomposition import PCA
from utils import pulse_helper
np.set_printoptions(suppress=True)
if __name__ == '__main__':
    TEN_BITS_ADC_VALUE = 1023
    pedestal = 0
    dimension = 7
    number_of_data = 20
    qtd_for_training = 10
    qtd_for_testing = number_of_data - qtd_for_training

    noise = pedestal + np.random.randn(number_of_data, dimension)

    # Getting data from boundaries
    noise_training = noise[:qtd_for_training, :]
    noise_testing = noise[qtd_for_testing:, :]

    noise_training = np.matrix([
        [-0.8756796, -0.9904594, 0.0763564, 0.2866689, -0.8491597, -2.6331943, 0.4875299],
        [0.5691059, 1.0500695, 0.2050344, 0.5682747, 0.9148819, 0.9840557, 0.0631626],
        [-0.8314480, -1.2014296, 0.0640685, 1.0649667, 0.3001502, 0.7921802, 0.0182123],
        [-1.6749044, 0.9083217, 0.4855328, -0.8777319, -0.4533996, 0.4545650, 0.4935619],
        [-2.0737103, -0.3444838, 2.6205070, -2.3721530, 0.3851043, 0.2318063, 0.6319781],
        [-0.1389740, -0.7860030, -0.0916941, 2.4410314, 1.6296622, -0.3318915, 0.1720283],
        [1.5643478, -0.4838364, -1.3568845, -0.0097646, 0.3173777, 1.0363776, 1.1373946],
        [1.2958982, -0.4076244, 0.0509913, -0.3809160, -0.0055843, -0.5429556, -1.2209100],
        [-0.4553984, 0.2447099, -0.0911029, 1.5816856, -0.1473872, -0.2368131, 0.7251548],
        [0.8811581, 0.2740093, 0.2105274, 0.0017615, -0.1050384, 0.5999477, -0.4440812]
    ])

    noise_testing = np.matrix([
        [0.9431525, 0.4929578, 0.8702650, -0.2384978, 1.1779368, 0.9613488, 0.7198087],
        [-0.1507800, -0.8556539, -0.3505294, 0.4385733, 0.2599012, 0.6289688, 1.1821911],
        [-1.1579471, -0.7761330, 0.5048862, -0.7994381, 0.5862172, -1.0139671, 0.0973860],
        [-0.8073489, 0.6542282, 0.2723901, -0.5065714, 1.4904722, 0.4038073, -0.9281103],
        [0.7330344, -0.5733226, 0.2751025, 2.3656258, -0.1680237, -0.6095974, -0.9048640],
        [-1.0540677, 0.2591932, 0.5585528, 0.5630835, 0.3494722, 0.3133169, 0.1195288],
        [1.0160440, 1.0072891, 0.2649281, 0.0984453, 0.8794560, 0.7006237, -0.4429727],
        [-0.9385259, 1.1363805, 0.3755675, -0.3434197, -0.2836825, 0.3928002, -0.5461819],
        [-0.0046605, 1.2195139, 0.0973187, 1.4662556, -1.0468390, 0.7239156, 0.5695964],
        [0.7419353, -0.1169670, 0.8664425, 0.4642078, 0.2907896, -1.3489066, 0.3598890]
    ])
    amplitude = np.zeros(qtd_for_testing)
    signal_testing = np.zeros((qtd_for_testing, dimension))
    for i in range(0, qtd_for_testing):
        amplitude[i] = TEN_BITS_ADC_VALUE * np.random.random(1)
        signal_testing[i, :] = pedestal + np.random.randn(1, dimension) + \
            np.multiply(amplitude[i], pulse_helper.get_jitter_pulse())

    amplitude = [775.902, 10.351, 771.456, 1019.714, 512.243, 396.846, 65.751, 68.398, 439.107, 913.696]
    signal_testing = np.matrix([
        [0.236776, 11.798491, 352.442790, 777.724908, 436.837438, 115.284537, 31.993758],
        [-0.868811, -0.044646, 3.415924, 9.414231, 6.257210, 2.728463, -0.782330],
        [0.047594, 14.065240, 348.083577, 772.605168, 432.601323, 114.304198, 33.564886],
        [0.696711, 19.768088, 461.249975, 1019.274032, 575.525264, 152.047669, 44.799316],
        [-0.508484, 7.670365, 233.040694, 512.310657, 288.821356, 74.425243, 21.451117],
        [-0.970319, 5.826021, 181.056663, 398.103175, 223.515092, 59.347880, 18.228919],
        [0.413107, 1.553945, 29.551838, 64.758933, 35.236955, 10.107259, 3.132132],
        [0.444259, 2.005030, 30.434815, 68.274549, 37.801127, 10.440632, 2.696392],
        [-0.177446, 8.735986, 197.462098, 437.059435, 247.049569, 63.410463, 17.028772],
        [-1.036739, 15.283146, 413.917933, 912.159329, 513.726378, 135.862961, 38.262066]
    ])
    # Whitening (branqueamento)
    # noise_train_cov = np.cov(noise_training)
    noise_train_cov = np.matrix([
        [1.5238484, 0.0343165, -0.8543985, 0.4900083, 0.1731590, 0.2918901, -0.3036230],
        [0.0343165, 0.5913561, 0.0801816, -0.2130887, -0.0283764, 0.3523750, 0.0020615],
        [-0.8543985, 0.0801816, 0.9541332, -0.8122446, -0.0099779, -0.0503286, -0.0313772],
        [0.4900083, -0.2130887, -0.8122446, 1.7790657, 0.3740738, -0.1521554, -0.0214180],
        [0.1731590, -0.0283764, -0.0099779, 0.3740738, 0.4885224, 0.3277632, -0.0170296],
        [0.2918901, 0.3523750, -0.0503286, -0.1521554, 0.3277632, 1.1858336, 0.0485573],
        [-0.3036230, 0.0020615, -0.0313772, -0.0214180, -0.0170296, 0.0485573, 0.4439914]
    ])

    [D, V] = LA.eig(noise_train_cov)

    # TODO: discover why I can't do this after np.diag for the whitening filter.
    # If I do, I get a diag matrix with the other elements as inf.
    D = D**(-.5)

    # eig returns D as an array; we need to transform it into a diagonal matrix
    D = np.diag(D)
    W = D * np.transpose(V)
    W = np.matrix([
        [-1.6748373, 0.5175131, -2.9246219, -1.3834463, 2.2066308, -0.6199292, -1.4226427],
        [0.6572618, 0.6355380, 0.3928698, -0.2736311, 1.1956474, -0.9112701, 1.1674937],
        [-0.0684082, 1.2086504, 0.2564293, 0.3959205, -0.2316473, -0.3087873, -0.6167130],
        [0.2274185, -0.4449974, 0.6545361, 0.0488708, 0.6400391, -0.0656841, -0.8013342],
        [-0.4987594, 0.0996750, 0.1633758, 0.4668367, 0.3917242, 0.4615647, 0.2400612],
        [0.3093050, 0.2629946, 0.0019590, -0.3352125, 0.0916674, 0.5640587, -0.0501276],
        [0.3268204, -0.0336678, -0.2916859, 0.3680302, 0.0858922, 0.0387934, -0.0389346]
    ])

    W_t = np.transpose(W)
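The whitening step above builds W = D^(-1/2) V^T from the eigendecomposition of the noise covariance; by construction W C W^T = I. A standalone sketch, independent of the script's variables, verifying that property:

```python
import numpy as np

# Generate correlated noise so the covariance is non-trivial.
rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 4)) @ rng.normal(size=(4, 4))
C = np.cov(samples, rowvar=False)

# eigh is appropriate here: a covariance matrix is symmetric.
D, V = np.linalg.eigh(C)
W = np.diag(D ** -0.5) @ V.T  # whitening matrix, same form as above

# W C W^T collapses to the identity, i.e. the whitened noise is
# uncorrelated with unit variance in every direction.
assert np.allclose(W @ C @ W.T, np.eye(4))
```

This is why the script multiplies the test data by `W_t` before the PCA step: the matched filter derivation assumes white noise.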
    # PCA part
    pure_signal = np.zeros((qtd_for_testing, dimension))
    for i in range(0, qtd_for_testing):
        pure_signal[i, :] = np.multiply(
            TEN_BITS_ADC_VALUE * np.random.random(1),
            pulse_helper.get_jitter_pulse()
        )

    pure_signal = np.matrix([
        [0.00089477, 0.66899891, 17.57100806, 38.83567740, 21.87640893, 5.79952588, 1.64507153],
        [0.00463039, 3.46201728, 90.92859833, 200.97160612, 113.20871253, 30.01209480, 8.51311704],
        [0.00371731, 2.77933154, 72.99811097, 161.34140276, 90.88474157, 24.09391838, 6.83438955],
        [0.01695149, 12.67418518, 332.88276802, 735.74195320, 414.44859243, 109.87202458, 31.16588199],
        [0.00950488, 7.10654984, 186.65089286, 412.53830380, 232.38571430, 61.60640760, 17.47504004],
        [0.01038399, 7.76383156, 203.91415341, 450.69379353, 253.87896875, 67.30435766, 19.09129896],
        [0.00079089, 0.59132464, 15.53092201, 34.32665187, 19.33644329, 5.12617056, 1.45407011],
        [0.02063659, 15.42943668, 405.24842555, 895.68549890, 504.54591133, 133.75719398, 37.94105860],
        [0.00129928, 0.97144042, 25.51452205, 56.39253843, 31.76631165, 8.42137973, 2.38877665],
        [0.01686500, 12.60951796, 331.18430740, 731.98799279, 412.33396025, 109.31142690, 31.00686498]
    ])

    n_pca_components = dimension
    pca = PCA(n_components=n_pca_components)
    coeff = pca.fit(pure_signal * W_t).components_
    Y = pca.explained_variance_
    # Stochastic filter params.
    # ddof=1 to use the sampled data variance -> N-1
    variance = np.var(noise_training[:, 3], ddof=1)
    reference_pulse = [0.0000, 0.0172, 0.4524, 1.0000, 0.5633, 0.1493, 0.0424]
    bleached_reference_pulse = reference_pulse * W_t
    optimal_reference_pulse = bleached_reference_pulse * \
        np.transpose(coeff[:, :n_pca_components])

    optimal_noise = ((noise_testing - pedestal) * W_t) * np.transpose(coeff[:, :n_pca_components])
    optimal_signal = ((signal_testing - pedestal) * W_t) * np.transpose(coeff[:, :n_pca_components])

    No = variance * 2

    h1 = np.zeros((dimension, dimension))
    h2 = np.zeros((dimension, dimension))
    for i in range(0, n_pca_components):
        h1 = h1 + (Y[i] / (Y[i] + variance)) * (np.transpose(np.asmatrix(coeff[i, :])) * coeff[i, :])
        h2 = h2 + (1.0 / (Y[i] + variance)) * (np.transpose(np.asmatrix(coeff[i, :])) * coeff[i, :])

    IR_noise = np.zeros((len(noise_testing), 1))
    IR_signal = np.zeros((len(signal_testing), 1))
    for ev in range(0, len(noise_testing)):
        IR_noise[ev] = (1.0 / No) * np.transpose((
            (optimal_noise[ev, :] * (coeff[:, :n_pca_components])) * h1 *
            np.transpose(optimal_noise[ev, :] * (coeff[:, :n_pca_components]))
        ))

    for ev in range(0, len(signal_testing)):
        IR_signal[ev] = (1.0 / No) * np.transpose((
            (optimal_signal[ev, :] * (coeff[:, :n_pca_components])) * h1 *
            np.transpose(optimal_signal[ev, :] * (coeff[:, :n_pca_components]))
        ))

    ID_noise = np.zeros((len(noise_testing), 1))
    ID_signal = np.zeros((len(signal_testing), 1))
    for ev in range(0, len(noise_testing)):
        ID_noise[ev] = ((optimal_reference_pulse * coeff[:, :n_pca_components]) * h2 *
                        np.transpose(optimal_noise[ev, :] * coeff[:, :n_pca_components]))

    for ev in range(0, len(signal_testing)):
        ID_signal[ev] = ((optimal_reference_pulse * coeff[:, :n_pca_components]) * h2 *
                         np.transpose(optimal_signal[ev, :] * coeff[:, :n_pca_components]))

    # Matched filter estimates
    estimated_noise = ID_noise + IR_noise
    estimated_signal = ID_signal + IR_signal
    test = np.matrix([
        [-0.784071, 0.597765, -0.063745, 0.110153, 0.061466, -0.082475, 0.033644],
        [0.244821, 0.182834, 0.514858, 0.442823, 0.432092, -0.506405, -0.048100],
        [0.160964, 0.424736, 0.634737, -0.539170, -0.226289, 0.203796, 0.085756],
        [0.319035, 0.417696, -0.203085, 0.233144, 0.383461, 0.652085, -0.236409],
        [0.410766, 0.486923, -0.435696, -0.011337, -0.437815, -0.438568, -0.142103],
        [-0.095613, -0.081021, 0.309344, 0.568404, -0.625782, 0.234851, -0.344613],
        [0.140470, 0.103537, -0.034607, 0.351428, -0.167281, 0.150022, 0.891269]
    ]).T

    mEstimacao = np.matrix([
        2.050257740, 0.000019448, -0.000026811, 0.000017090, 0.000031557, -0.000072463, 0.000027790
    ])

    # Amplitude estimate
    b1 = coeff[:, :n_pca_components].T.dot(coeff[:, :n_pca_components])
    b2 = (1.0 / No) * (
        coeff[:, :n_pca_components].T * h1 *
        coeff[:, :n_pca_components]
    )
    b3 = (optimal_reference_pulse * coeff[:, :n_pca_components]) * h2 * coeff[:, :n_pca_components]
# ampRuido = zeros(size(ruidoTes,1),1);
# ampSinal = zeros(size(sinalTes,1),1);
# a = (1/No)*((mEstimacao*COEFF(:,1:N)')*h1*(mEstimacao*COEFF(:,1:N)')');
# b = (mEstimacao*COEFF(:,1:N)')*h2*(mEstimacao*COEFF(:,1:N)')';
# cs=0;
# cr=0;
# for i=1:size(sinalTes,1)
# ra = b*b+4*a*FCestSinal(i);
# if ra<0
# ra=0;
# cs=cs+1;
# end
# ampSinal(i) = (-b+sqrt(ra))/(2*a); % amplitude do sinal usando a saida do filtro casado
# end
# for i=1:size(ruidoTes,1)
# ra = b*b+4*a*FCestRuido(i);
# if ra<0
# ra=0;
# cr=cr+1;
# end
# ampRuido(i) = (-b+sqrt(ra))/(2*a); % amplitude do ruido usando a saida do filtro casado
# end
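The commented MATLAB block above recovers the pulse amplitude from the matched-filter output by solving the quadratic `a*amp**2 + b*amp - FCest = 0`, clamping a negative discriminant to zero. A Python sketch of the same inversion (function name is illustrative; this is not code the script runs):

```python
import numpy as np

def amplitude_from_mf(fc_est, a, b):
    """Amplitude estimate from the matched-filter output.

    Solves a*amp**2 + b*amp - fc_est = 0 for amp, clamping a negative
    discriminant to zero exactly as the MATLAB loops do.
    """
    ra = np.maximum(b * b + 4.0 * a * np.asarray(fc_est, dtype=float), 0.0)
    return (-b + np.sqrt(ra)) / (2.0 * a)

# a=1, b=0: a filter output of 4 corresponds to amplitude 2
assert np.allclose(amplitude_from_mf([4.0], 1.0, 0.0), [2.0])
```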
print('==================== b2')
print(b2)
print('====================')

# ---- apman/config.py (willycs40/apman, Apache-2.0) ----
import os
basedir = os.path.abspath(os.path.dirname(__file__))
class Config:
    DATABASE_URI = os.environ.get('DATABASE_URI') or 'sqlite:///' + os.path.join(basedir, 'data-dev.sqlite')
    SMTP_ADDRESS = os.environ.get('EMAIL_SMTP_ADDRESS') or 'localhost'
    SMTP_USERNAME = os.environ.get('SMTP_USERNAME') or ''
    SMTP_PASSWORD = os.environ.get('SMTP_PASSWORD') or ''
    LOG_RUN_TO_DB = True
    SEND_NOTIFICATION_EMAILS = True
    NOTIFY_SUCCESS = False
    NOTIFICATION_EMAILS_FROM = 'a@b.co.uk'
    NOTIFICATION_EMAILS_TO = ['a@b.co.uk']
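One subtlety of the `get(...) or default` idiom used in `Config`: it falls back on *empty* environment variables too, which a plain default argument does not. A quick check (variable name is hypothetical):

```python
import os

# `get(...) or fallback` treats '' as unset; a default argument keeps ''.
os.environ['DEMO_SMTP_ADDRESS'] = ''  # hypothetical variable name
assert (os.environ.get('DEMO_SMTP_ADDRESS') or 'localhost') == 'localhost'
assert os.environ.get('DEMO_SMTP_ADDRESS', 'localhost') == ''
```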
class DevelopmentConfig(Config):
    DEBUG = True


config = {
    'development': DevelopmentConfig,
    'default': DevelopmentConfig,
    'production': Config
}

# ---- Codewars/7kyu/sum-of-angles/Python/solution1.py (RevansChen/online-judge, MIT) ----
# Python - 3.6.0
angle = lambda n: (n - 2) * 180
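For context, the one-liner is the interior-angle sum of an n-sided polygon in degrees; two quick checks:

```python
angle = lambda n: (n - 2) * 180  # interior-angle sum of an n-gon, in degrees

assert angle(3) == 180  # triangle
assert angle(4) == 360  # quadrilateral
```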

# ---- src/discolight/writers/image/factory.py (denzel-datature/discolight, MIT) ----
"""A factory for image writers."""
from pathlib import Path
from discolight.objectset.loader import ObjectSetLoader
from discolight.util.decorators import singleton
from .types import ImageWriter
@singleton
class ImageWriterLoader:
    """A loader for all image writer objects."""

    def __init__(self):
        """Construct the image writer loader.

        Image writers are loaded from the modules in this directory.
        """
        image_writers_directory = Path(__file__).resolve().parent

        self.loader = ObjectSetLoader(image_writers_directory, __name__,
                                      ImageWriter)


def get_image_writer_set():
    """Return the set of installed image writers.

    The set is returned as a dictionary where names of the image writers are
    the keys, and the image writer class objects are the values.
    """
    return ImageWriterLoader().loader.get_object_set()


def make_image_writer_factory():
    """Generate a factory function for constructing image writers.

    Invoke the returned factory function by passing the name of the image
    writer class you want to construct, followed by the parameters for
    the constructor as named arguments
    (e.g., factory('Directory', directory=...)).
    """
    return ImageWriterLoader().loader.make_object_factory()
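`@singleton` comes from `discolight.util.decorators`; a minimal decorator with the caching behaviour the loader relies on might look like this (an illustrative sketch, not the library's actual implementation):

```python
def singleton(cls):
    """Return a callable that creates `cls` once and caches the instance."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance


@singleton
class DemoLoader:  # hypothetical stand-in for ImageWriterLoader
    def __init__(self):
        self.loaded = True


assert DemoLoader() is DemoLoader()  # every call returns the same cached instance
```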

# ---- demoDay10-NLPAndBayes/mr/test.py (mozhumz/machine_learning_py, MulanPSL-1.0) ----
# a = 'a b c'
# # print(a.strip())
# print(a)
# # print(a.strip())
# print(a.split(' '))
import re
word = '--?' # note2
p = re.compile(r'\w+')
word = p.findall(word) # ['grandfather']
print(word)
print(word[0])
print(word[0].lower())
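Note that with `word = '--?'` the pattern finds no word characters, so `findall` returns `[]` and `word[0]` above raises IndexError; the `['grandfather']` comment matches an input that actually contains a word:

```python
import re

p = re.compile(r'\w+')

assert p.findall('--grandfather?') == ['grandfather']  # punctuation is skipped
assert p.findall('--?') == []  # no word characters: indexing [0] would raise IndexError
```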
# # list index out of range (array index out of bounds)
# a = [1,2,3,4]
# print(a[4])

# ---- greatsoft/contact/views.py (YakhyaevRasul/greatsoft-new, MIT) ----
from rest_framework import generics
from .models import ContactTo, ContactUs
from .serializers import ContactToSerializer, ContactUsSerializer
from rest_framework.permissions import AllowAny, IsAdminUser, IsAuthenticated
class ContactToAPIView(generics.ListAPIView):
    queryset = ContactTo.objects.all()
    serializer_class = ContactToSerializer
    permission_classes = [IsAdminUser]


class ContactUsAPIView(generics.ListAPIView):
    queryset = ContactUs.objects.all()
    serializer_class = ContactUsSerializer
    permission_classes = [IsAdminUser]

# ---- ooobuild/lo/sheet/range_selection_arguments.py (Amourspirit/ooo_uno_tmpl, Apache-2.0) ----
# coding: utf-8
#
# Copyright 2022 :Barry-Thomas-Paul: Moss
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http: // www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Service Class
# this is a auto generated file generated by Cheetah
# Libre Office Version: 7.3
# Namespace: com.sun.star.sheet
from abc import abstractproperty, ABC
class RangeSelectionArguments(ABC):
    """
    Service Class

    contains the arguments for starting the range selection.

    **since**

    OOo 2.0.3

    See Also:
        `API RangeSelectionArguments <https://api.libreoffice.org/docs/idl/ref/servicecom_1_1sun_1_1star_1_1sheet_1_1RangeSelectionArguments.html>`_
    """

    __ooo_ns__: str = 'com.sun.star.sheet'
    __ooo_full_ns__: str = 'com.sun.star.sheet.RangeSelectionArguments'
    __ooo_type_name__: str = 'service'

    @abstractproperty
    def CloseOnMouseRelease(self) -> bool:
        """
        specifies if the range selection is finished when the mouse button is released, after selecting cells.
        """

    @abstractproperty
    def InitialValue(self) -> str:
        """
        contains the initial value for the range descriptor.
        """

    @abstractproperty
    def SingleCellMode(self) -> bool:
        """
        specifies if the range selection is limited to a single cell only.

        If TRUE, the selection is restricted to a single cell. If FALSE, multiple adjoining cells can be selected. The default value is FALSE.

        **since**

        OOo 2.0.3
        """

    @abstractproperty
    def Title(self) -> str:
        """
        contains a title for the operation.
        """
__all__ = ['RangeSelectionArguments']
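Being a service description, the class above only declares abstract properties; a consumer must supply concrete ones before the class can be instantiated. A stand-alone sketch of that pattern (class names and values here are hypothetical):

```python
from abc import ABC, abstractproperty


class ArgumentsSketch(ABC):
    @abstractproperty
    def Title(self) -> str:
        """contains a title for the operation."""


class MyArguments(ArgumentsSketch):
    @property
    def Title(self) -> str:
        return "Select a source range"


assert MyArguments().Title == "Select a source range"
```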

# ---- Temperature/anomaly_HTM.py (CarlosLing/Algoritms, MIT) ----
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import nupic
from nupic.encoders import RandomDistributedScalarEncoder
from nupic.encoders.date import DateEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory
from scipy.stats import norm
class Encoder:
    def __init__(self, variable, encoders):
        self.variable = variable
        self.encoders = encoders


def multiencode(encoders, Data, iter):
    res = [0]
    for x in encoders:
        for y in x.encoders:
            # y holds the encoder's name as a string; eval() it to get the
            # encoder object. (exec() cannot rebind a local variable in
            # Python 3, so eval replaces the original exec-based assignment.)
            if x.variable != '_index':
                enc = eval(y).encode(Data[x.variable][iter])
            else:
                enc = eval(y).encode(Data.index[iter])
            res = np.concatenate([res, enc])
    return res[1:]

# ---- data/external/repositories_2to3/168569/kaggle-Otto-master/exp_XGB_CF_tc_mb_mf_ntree.py (Keesiu/meta-kaggle, MIT) ----
"""
Experiment for XGBoost + CF
Aim: To find the best tc(max_depth), mb(min_child_weight), mf(colsample_bytree * 93), ntree
tc: [13, 15, 17]
mb: [5, 7, 9]
mf: [40, 45, 50, 55, 60]
ntree: [160, 180, 200, 220, 240, 260, 280, 300, 320, 340, 360]
Averaging 20 models
Summary
Best
loss ntree
mf 40 45 50 55 60 40 45 50 55 60
tc mb
13 5 0.4471 0.4471 0.4473 0.4471 0.4476 300 300 280 280 260
7 0.4477 0.4475 0.4469 0.4472 0.4481 340 320 300 300 300
9 0.4485 0.4484 0.4487 0.4488 0.4487 360 360 340 340 340
15 5 0.4471 *0.4465* 0.4471 0.4476 0.4478 260 *260* 240 240 240
7 0.4473 0.4468 0.4473 0.4474 0.4478 300 280 260 260 260
9 0.4483 0.4480 0.4483 0.4484 0.4492 340 320 300 300 280
17 5 0.4471 0.4472 0.4474 0.4476 0.4478 240 240 220 220 200
7 0.4474 0.4470 0.4468 0.4475 0.4473 280 260 260 240 240
9 0.4481 0.4480 0.4476 0.4480 0.4486 320 300 280 260 260
Time: 1 day, 7:37:21 on i7-4790k 32G MEM GTX660
"""
import numpy as np
import scipy as sp
import pandas as pd
from sklearn.cross_validation import StratifiedKFold
from sklearn.metrics import log_loss
from datetime import datetime
import os
from sklearn.grid_search import ParameterGrid
import xgboost as xgb
from utility import *
path = os.getcwd() + '/'
path_log = path + 'logs/'
file_train = path + 'train.csv'
training = pd.read_csv(file_train, index_col = 0)
num_train = training.shape[0]
y = training['target'].values
yMat = pd.get_dummies(training['target']).values
X = training.iloc[:,:93].values
kf = StratifiedKFold(y, n_folds=5, shuffle = True, random_state = 345)
for train_idx, valid_idx in kf:
    break
y_train_1 = yMat[train_idx].argmax(1)
y_train = yMat[train_idx]
y_valid = yMat[valid_idx]
X2, ignore = count_feature(X)
dtrain , dvalid= xgb.DMatrix(X2[train_idx], label = y_train_1), xgb.DMatrix(X2[valid_idx])
#
nIter = 20
nt = 360
nt_lst = list(range(160, 370, 20))
nt_len = len(nt_lst)
bf = .8 # subsample
sh = .1 # eta
# tc:max_depth, mb:min_child_weight, mf(max features):colsample_bytree * 93
param_grid = {'tc':[13, 15, 17], 'mb':[5, 7, 9], 'mf':[40, 45, 50, 55, 60]}
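`ParameterGrid` expands the dict into its Cartesian product of settings; the same enumeration with only the standard library (an illustrative aside, not the code the experiment runs):

```python
from itertools import product

demo_grid = {'tc': [13, 15, 17], 'mb': [5, 7, 9], 'mf': [40, 45, 50, 55, 60]}
demo_keys = sorted(demo_grid)
combos = [dict(zip(demo_keys, vals))
          for vals in product(*(demo_grid[k] for k in demo_keys))]

assert len(combos) == 3 * 3 * 5  # 45 settings, matching ParameterGrid's expansion
assert {'tc': 13, 'mb': 5, 'mf': 40} in combos
```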
scores = []
t0 = datetime.now()
for params in ParameterGrid(param_grid):
    tc = params['tc']
    mb = params['mb']
    mf = params['mf']
    cs = float(mf) / X.shape[1]
    print(tc, mb, mf)
    predAll = [np.zeros(y_valid.shape) for k in range(nt_len)]
    for i in range(nIter):
        seed = 112233 + i
        param = {'bst:max_depth':tc, 'bst:eta':sh,'objective':'multi:softprob','num_class':9,
                 'min_child_weight':mb, 'subsample':bf, 'colsample_bytree':cs,
                 'silent':1, 'nthread':8, 'seed':seed}
        plst = list(param.items())
        bst = xgb.train(plst, dtrain, nt)
        for s in range(nt_len):
            ntree = nt_lst[s]
            pred = bst.predict(dvalid, ntree_limit = ntree).reshape(y_valid.shape)
            predAll[s] += pred
            scores.append({'tc':tc, 'mb':mb, 'mf':mf, 'ntree':ntree, 'nModels':i+1, 'seed':seed,
                           'valid':log_loss(y_valid, pred),
                           'valid_avg':log_loss(y_valid, predAll[s] / (i+1))})
        print(scores[-4], datetime.now() - t0)
df = pd.DataFrame(scores)
if os.path.exists(path_log) is False:
    print('mkdir', path_log)
    os.mkdir(path_log)
df.to_csv(path_log + 'exp_XGB_CF_tc_mb_mf_ntree.csv')
keys = ['tc', 'mb', 'mf', 'ntree']
grouped = df.groupby(keys)
pd.set_option('display.precision', 5)
print(pd.DataFrame({'loss':grouped['valid_avg'].last().unstack().min(1),
                    'ntree':grouped['valid_avg'].last().unstack().idxmin(1)}).unstack())
# loss ntree
# mf 40 45 50 55 60 40 45 50 55 60
# tc mb
# 13 5 0.4471 0.4471 0.4473 0.4471 0.4476 300 300 280 280 260
# 7 0.4477 0.4475 0.4469 0.4472 0.4481 340 320 300 300 300
# 9 0.4485 0.4484 0.4487 0.4488 0.4487 360 360 340 340 340
# 15 5 0.4471 0.4465 0.4471 0.4476 0.4478 260 260 240 240 240
# 7 0.4473 0.4468 0.4473 0.4474 0.4478 300 280 260 260 260
# 9 0.4483 0.4480 0.4483 0.4484 0.4492 340 320 300 300 280
# 17 5 0.4471 0.4472 0.4474 0.4476 0.4478 240 240 220 220 200
# 7 0.4474 0.4470 0.4468 0.4475 0.4473 280 260 260 240 240
# 9 0.4481 0.4480 0.4476 0.4480 0.4486 320 300 280 260 260
print(pd.DataFrame({'loss':grouped['valid'].mean().unstack().min(1),
                    'ntree':grouped['valid'].mean().unstack().idxmin(1)}).unstack())
# loss ntree
# mf 40 45 50 55 60 40 45 50 55 60
# tc mb
# 13 5 0.4563 0.4564 0.4564 0.4561 0.4566 280 260 260 260 240
# 7 0.4565 0.4563 0.4557 0.4561 0.4569 320 300 300 300 280
# 9 0.4571 0.4569 0.4571 0.4573 0.4570 340 340 320 300 300
# 15 5 0.4567 0.4559 0.4565 0.4571 0.4571 260 240 240 220 220
# 7 0.4565 0.4558 0.4562 0.4564 0.4568 280 260 260 260 240
# 9 0.4570 0.4567 0.4570 0.4570 0.4577 300 300 280 280 260
# 17 5 0.4568 0.4569 0.4570 0.4572 0.4574 220 220 200 200 200
# 7 0.4567 0.4563 0.4559 0.4567 0.4564 260 240 240 220 220
# 9 0.4571 0.4569 0.4565 0.4567 0.4573 280 280 260 260 240
#
criterion = df.apply(lambda x: x['tc']==15 and x['mb']==5 and x['mf']==45, axis = 1)
grouped = df[criterion].groupby('ntree')
g = grouped[['valid']].mean()
g['valid_avg'] = grouped['valid_avg'].last()
print(g)
# valid valid_avg
# ntree
# 160 0.461023 0.452912
# 180 0.458513 0.450111
# 200 0.456939 0.448232
# 220 0.456147 0.447141
# 240 0.455870 0.446598
# 260 0.456097 0.446525
# 280 0.456657 0.446827
# 300 0.457434 0.447327
# 320 0.458462 0.448101
# 340 0.459635 0.449036
# 360 0.460977 0.450160
ax = g.plot()
ax.set_title('XGB+CF max_depth=15\n min_child_weight=5, colsample_bytree=45/93.')
ax.set_ylabel('Logloss')
fig = ax.get_figure()
fig.savefig(path_log + 'exp_XGB_CF_tc_mb_mf_ntree.png')

# ---- django.wsgi (dotKom/onlinepetition, Apache-2.0) ----
#
# mod_wsgi handler for the onlineweb django project
#
import site
site.addsitedir('/usr/local/pythonenv/onlinepetition/lib/python2.6/site-packages')
import os
import sys
#sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../..')
#sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/..')
sys.path.append('/var/websites/prod/onlinepetition')
sys.path.append('/var/websites/prod/onlinepetition/onlinepetition')
os.environ['DJANGO_SETTINGS_MODULE'] = 'onlinepetition.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
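mod_wsgi only requires that the looked-up `application` name be a WSGI callable; a minimal stand-alone equivalent of the interface (illustrative, not the Django handler itself, hence the hypothetical name):

```python
# A bare WSGI callable: takes the environ dict and a start_response
# callback, returns an iterable of bytes.
def minimal_application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```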

# ---- src/commercetools/platform/models/product_discount.py (labd/commercetools-python-sdk, MIT) ----
# This file is automatically generated by the rmf-codegen project.
#
# The Python code generator is maintained by Lab Digital. If you want to
# contribute to this project then please do not edit this file directly
# but send a pull request to the Lab Digital fork of rmf-codegen at
# https://github.com/labd/rmf-codegen
import datetime
import enum
import typing
from ._abstract import _BaseType
from .common import BaseResource, Reference, ReferenceTypeId, ResourceIdentifier
if typing.TYPE_CHECKING:
    from .common import (
        CreatedBy,
        LastModifiedBy,
        LocalizedString,
        Money,
        QueryPrice,
        Reference,
        ReferenceTypeId,
        TypedMoney,
    )
__all__ = [
    "ProductDiscount",
    "ProductDiscountChangeIsActiveAction",
    "ProductDiscountChangeNameAction",
    "ProductDiscountChangePredicateAction",
    "ProductDiscountChangeSortOrderAction",
    "ProductDiscountChangeValueAction",
    "ProductDiscountDraft",
    "ProductDiscountMatchQuery",
    "ProductDiscountPagedQueryResponse",
    "ProductDiscountReference",
    "ProductDiscountResourceIdentifier",
    "ProductDiscountSetDescriptionAction",
    "ProductDiscountSetKeyAction",
    "ProductDiscountSetValidFromAction",
    "ProductDiscountSetValidFromAndUntilAction",
    "ProductDiscountSetValidUntilAction",
    "ProductDiscountUpdate",
    "ProductDiscountUpdateAction",
    "ProductDiscountValue",
    "ProductDiscountValueAbsolute",
    "ProductDiscountValueAbsoluteDraft",
    "ProductDiscountValueDraft",
    "ProductDiscountValueExternal",
    "ProductDiscountValueExternalDraft",
    "ProductDiscountValueRelative",
    "ProductDiscountValueRelativeDraft",
]
class ProductDiscount(BaseResource):
    #: Present on resources updated after 1/02/2019 except for events not tracked.
    last_modified_by: typing.Optional["LastModifiedBy"]
    #: Present on resources created after 1/02/2019 except for events not tracked.
    created_by: typing.Optional["CreatedBy"]
    name: "LocalizedString"
    #: User-specific unique identifier for a product discount.
    #: Must be unique across a project.
    key: typing.Optional[str]
    description: typing.Optional["LocalizedString"]
    value: "ProductDiscountValue"
    #: A valid ProductDiscount Predicate.
    predicate: str
    #: The string contains a number between 0 and 1.
    #: A discount with greater sortOrder is prioritized higher than a discount with lower sortOrder.
    #: A sortOrder must be unambiguous.
    sort_order: str
    #: Only active discount will be applied to product prices.
    is_active: bool
    #: The platform will generate this array from the predicate.
    #: It contains the references of all the resources that are addressed in the predicate.
    references: typing.List["Reference"]
    #: The time from which the discount should be effective.
    #: Please take Eventual Consistency into account for calculated product discount values.
    valid_from: typing.Optional[datetime.datetime]
    #: The time from which the discount should be ineffective.
    #: Please take Eventual Consistency into account for calculated undiscounted values.
    valid_until: typing.Optional[datetime.datetime]

    def __init__(
        self,
        *,
        id: str,
        version: int,
        created_at: datetime.datetime,
        last_modified_at: datetime.datetime,
        last_modified_by: typing.Optional["LastModifiedBy"] = None,
        created_by: typing.Optional["CreatedBy"] = None,
        name: "LocalizedString",
        key: typing.Optional[str] = None,
        description: typing.Optional["LocalizedString"] = None,
        value: "ProductDiscountValue",
        predicate: str,
        sort_order: str,
        is_active: bool,
        references: typing.List["Reference"],
        valid_from: typing.Optional[datetime.datetime] = None,
        valid_until: typing.Optional[datetime.datetime] = None
    ):
        self.last_modified_by = last_modified_by
        self.created_by = created_by
        self.name = name
        self.key = key
        self.description = description
        self.value = value
        self.predicate = predicate
        self.sort_order = sort_order
        self.is_active = is_active
        self.references = references
        self.valid_from = valid_from
        self.valid_until = valid_until

        super().__init__(
            id=id,
            version=version,
            created_at=created_at,
            last_modified_at=last_modified_at,
        )

    @classmethod
    def deserialize(cls, data: typing.Dict[str, typing.Any]) -> "ProductDiscount":
        from ._schemas.product_discount import ProductDiscountSchema

        return ProductDiscountSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountSchema

        return ProductDiscountSchema().dump(self)
class ProductDiscountDraft(_BaseType):
    name: "LocalizedString"
    #: User-specific unique identifier for a product discount.
    #: Must be unique across a project.
    #: The field can be reset using the Set Key UpdateAction
    key: typing.Optional[str]
    description: typing.Optional["LocalizedString"]
    value: "ProductDiscountValueDraft"
    #: A valid ProductDiscount Predicate.
    predicate: str
    #: The string must contain a decimal number between 0 and 1.
    #: A discount with greater sortOrder is prioritized higher than a discount with lower sortOrder.
    sort_order: str
    #: If set to `true` the discount will be applied to product prices.
    is_active: bool
    #: The time from which the discount should be effective.
    #: Please take Eventual Consistency into account for calculated product discount values.
    valid_from: typing.Optional[datetime.datetime]
    #: The time from which the discount should be effective.
    #: Please take Eventual Consistency into account for calculated undiscounted values.
    valid_until: typing.Optional[datetime.datetime]

    def __init__(
        self,
        *,
        name: "LocalizedString",
        key: typing.Optional[str] = None,
        description: typing.Optional["LocalizedString"] = None,
        value: "ProductDiscountValueDraft",
        predicate: str,
        sort_order: str,
        is_active: bool,
        valid_from: typing.Optional[datetime.datetime] = None,
        valid_until: typing.Optional[datetime.datetime] = None
    ):
        self.name = name
        self.key = key
        self.description = description
        self.value = value
        self.predicate = predicate
        self.sort_order = sort_order
        self.is_active = is_active
        self.valid_from = valid_from
        self.valid_until = valid_until

        super().__init__()

    @classmethod
    def deserialize(cls, data: typing.Dict[str, typing.Any]) -> "ProductDiscountDraft":
        from ._schemas.product_discount import ProductDiscountDraftSchema

        return ProductDiscountDraftSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountDraftSchema

        return ProductDiscountDraftSchema().dump(self)
class ProductDiscountMatchQuery(_BaseType):
    product_id: str
    variant_id: int
    staged: bool
    price: "QueryPrice"

    def __init__(
        self, *, product_id: str, variant_id: int, staged: bool, price: "QueryPrice"
    ):
        self.product_id = product_id
        self.variant_id = variant_id
        self.staged = staged
        self.price = price

        super().__init__()

    @classmethod
    def deserialize(
        cls, data: typing.Dict[str, typing.Any]
    ) -> "ProductDiscountMatchQuery":
        from ._schemas.product_discount import ProductDiscountMatchQuerySchema

        return ProductDiscountMatchQuerySchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountMatchQuerySchema

        return ProductDiscountMatchQuerySchema().dump(self)
class ProductDiscountPagedQueryResponse(_BaseType):
    limit: int
    count: int
    total: typing.Optional[int]
    offset: int
    results: typing.List["ProductDiscount"]

    def __init__(
        self,
        *,
        limit: int,
        count: int,
        total: typing.Optional[int] = None,
        offset: int,
        results: typing.List["ProductDiscount"]
    ):
        self.limit = limit
        self.count = count
        self.total = total
        self.offset = offset
        self.results = results

        super().__init__()

    @classmethod
    def deserialize(
        cls, data: typing.Dict[str, typing.Any]
    ) -> "ProductDiscountPagedQueryResponse":
        from ._schemas.product_discount import ProductDiscountPagedQueryResponseSchema

        return ProductDiscountPagedQueryResponseSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountPagedQueryResponseSchema

        return ProductDiscountPagedQueryResponseSchema().dump(self)
class ProductDiscountReference(Reference):
    obj: typing.Optional["ProductDiscount"]

    def __init__(self, *, id: str, obj: typing.Optional["ProductDiscount"] = None):
        self.obj = obj

        super().__init__(id=id, type_id=ReferenceTypeId.PRODUCT_DISCOUNT)

    @classmethod
    def deserialize(
        cls, data: typing.Dict[str, typing.Any]
    ) -> "ProductDiscountReference":
        from ._schemas.product_discount import ProductDiscountReferenceSchema

        return ProductDiscountReferenceSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountReferenceSchema

        return ProductDiscountReferenceSchema().dump(self)
class ProductDiscountResourceIdentifier(ResourceIdentifier):
    def __init__(
        self, *, id: typing.Optional[str] = None, key: typing.Optional[str] = None
    ):
        super().__init__(id=id, key=key, type_id=ReferenceTypeId.PRODUCT_DISCOUNT)

    @classmethod
    def deserialize(
        cls, data: typing.Dict[str, typing.Any]
    ) -> "ProductDiscountResourceIdentifier":
        from ._schemas.product_discount import ProductDiscountResourceIdentifierSchema

        return ProductDiscountResourceIdentifierSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountResourceIdentifierSchema

        return ProductDiscountResourceIdentifierSchema().dump(self)
class ProductDiscountUpdate(_BaseType):
    version: int
    actions: typing.List["ProductDiscountUpdateAction"]

    def __init__(
        self, *, version: int, actions: typing.List["ProductDiscountUpdateAction"]
    ):
        self.version = version
        self.actions = actions

        super().__init__()

    @classmethod
    def deserialize(cls, data: typing.Dict[str, typing.Any]) -> "ProductDiscountUpdate":
        from ._schemas.product_discount import ProductDiscountUpdateSchema

        return ProductDiscountUpdateSchema().load(data)

    def serialize(self) -> typing.Dict[str, typing.Any]:
        from ._schemas.product_discount import ProductDiscountUpdateSchema

        return ProductDiscountUpdateSchema().dump(self)
class ProductDiscountUpdateAction(_BaseType):
action: str
def __init__(self, *, action: str):
self.action = action
super().__init__()
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountUpdateAction":
if data["action"] == "changeIsActive":
from ._schemas.product_discount import (
ProductDiscountChangeIsActiveActionSchema,
)
return ProductDiscountChangeIsActiveActionSchema().load(data)
if data["action"] == "changeName":
from ._schemas.product_discount import ProductDiscountChangeNameActionSchema
return ProductDiscountChangeNameActionSchema().load(data)
if data["action"] == "changePredicate":
from ._schemas.product_discount import (
ProductDiscountChangePredicateActionSchema,
)
return ProductDiscountChangePredicateActionSchema().load(data)
if data["action"] == "changeSortOrder":
from ._schemas.product_discount import (
ProductDiscountChangeSortOrderActionSchema,
)
return ProductDiscountChangeSortOrderActionSchema().load(data)
if data["action"] == "changeValue":
from ._schemas.product_discount import (
ProductDiscountChangeValueActionSchema,
)
return ProductDiscountChangeValueActionSchema().load(data)
if data["action"] == "setDescription":
from ._schemas.product_discount import (
ProductDiscountSetDescriptionActionSchema,
)
return ProductDiscountSetDescriptionActionSchema().load(data)
if data["action"] == "setKey":
from ._schemas.product_discount import ProductDiscountSetKeyActionSchema
return ProductDiscountSetKeyActionSchema().load(data)
if data["action"] == "setValidFrom":
from ._schemas.product_discount import (
ProductDiscountSetValidFromActionSchema,
)
return ProductDiscountSetValidFromActionSchema().load(data)
if data["action"] == "setValidFromAndUntil":
from ._schemas.product_discount import (
ProductDiscountSetValidFromAndUntilActionSchema,
)
return ProductDiscountSetValidFromAndUntilActionSchema().load(data)
if data["action"] == "setValidUntil":
from ._schemas.product_discount import (
ProductDiscountSetValidUntilActionSchema,
)
return ProductDiscountSetValidUntilActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountUpdateActionSchema
return ProductDiscountUpdateActionSchema().dump(self)
class ProductDiscountValue(_BaseType):
type: str
def __init__(self, *, type: str):
self.type = type
super().__init__()
@classmethod
def deserialize(cls, data: typing.Dict[str, typing.Any]) -> "ProductDiscountValue":
if data["type"] == "absolute":
from ._schemas.product_discount import ProductDiscountValueAbsoluteSchema
return ProductDiscountValueAbsoluteSchema().load(data)
if data["type"] == "external":
from ._schemas.product_discount import ProductDiscountValueExternalSchema
return ProductDiscountValueExternalSchema().load(data)
if data["type"] == "relative":
from ._schemas.product_discount import ProductDiscountValueRelativeSchema
return ProductDiscountValueRelativeSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueSchema
return ProductDiscountValueSchema().dump(self)
class ProductDiscountValueAbsolute(ProductDiscountValue):
money: typing.List["TypedMoney"]
def __init__(self, *, money: typing.List["TypedMoney"]):
self.money = money
super().__init__(type="absolute")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueAbsolute":
from ._schemas.product_discount import ProductDiscountValueAbsoluteSchema
return ProductDiscountValueAbsoluteSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueAbsoluteSchema
return ProductDiscountValueAbsoluteSchema().dump(self)
class ProductDiscountValueDraft(_BaseType):
type: str
def __init__(self, *, type: str):
self.type = type
super().__init__()
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueDraft":
if data["type"] == "absolute":
from ._schemas.product_discount import (
ProductDiscountValueAbsoluteDraftSchema,
)
return ProductDiscountValueAbsoluteDraftSchema().load(data)
if data["type"] == "external":
from ._schemas.product_discount import (
ProductDiscountValueExternalDraftSchema,
)
return ProductDiscountValueExternalDraftSchema().load(data)
if data["type"] == "relative":
from ._schemas.product_discount import (
ProductDiscountValueRelativeDraftSchema,
)
return ProductDiscountValueRelativeDraftSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueDraftSchema
return ProductDiscountValueDraftSchema().dump(self)
class ProductDiscountValueAbsoluteDraft(ProductDiscountValueDraft):
money: typing.List["Money"]
def __init__(self, *, money: typing.List["Money"]):
self.money = money
super().__init__(type="absolute")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueAbsoluteDraft":
from ._schemas.product_discount import ProductDiscountValueAbsoluteDraftSchema
return ProductDiscountValueAbsoluteDraftSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueAbsoluteDraftSchema
return ProductDiscountValueAbsoluteDraftSchema().dump(self)
class ProductDiscountValueExternal(ProductDiscountValue):
def __init__(self):
super().__init__(type="external")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueExternal":
from ._schemas.product_discount import ProductDiscountValueExternalSchema
return ProductDiscountValueExternalSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueExternalSchema
return ProductDiscountValueExternalSchema().dump(self)
class ProductDiscountValueExternalDraft(ProductDiscountValueDraft):
def __init__(self):
super().__init__(type="external")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueExternalDraft":
from ._schemas.product_discount import ProductDiscountValueExternalDraftSchema
return ProductDiscountValueExternalDraftSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueExternalDraftSchema
return ProductDiscountValueExternalDraftSchema().dump(self)
class ProductDiscountValueRelative(ProductDiscountValue):
permyriad: int
def __init__(self, *, permyriad: int):
self.permyriad = permyriad
super().__init__(type="relative")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueRelative":
from ._schemas.product_discount import ProductDiscountValueRelativeSchema
return ProductDiscountValueRelativeSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueRelativeSchema
return ProductDiscountValueRelativeSchema().dump(self)
class ProductDiscountValueRelativeDraft(ProductDiscountValueDraft):
permyriad: int
def __init__(self, *, permyriad: int):
self.permyriad = permyriad
super().__init__(type="relative")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountValueRelativeDraft":
from ._schemas.product_discount import ProductDiscountValueRelativeDraftSchema
return ProductDiscountValueRelativeDraftSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountValueRelativeDraftSchema
return ProductDiscountValueRelativeDraftSchema().dump(self)
class ProductDiscountChangeIsActiveAction(ProductDiscountUpdateAction):
is_active: bool
def __init__(self, *, is_active: bool):
self.is_active = is_active
super().__init__(action="changeIsActive")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountChangeIsActiveAction":
from ._schemas.product_discount import ProductDiscountChangeIsActiveActionSchema
return ProductDiscountChangeIsActiveActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountChangeIsActiveActionSchema
return ProductDiscountChangeIsActiveActionSchema().dump(self)
class ProductDiscountChangeNameAction(ProductDiscountUpdateAction):
name: "LocalizedString"
def __init__(self, *, name: "LocalizedString"):
self.name = name
super().__init__(action="changeName")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountChangeNameAction":
from ._schemas.product_discount import ProductDiscountChangeNameActionSchema
return ProductDiscountChangeNameActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountChangeNameActionSchema
return ProductDiscountChangeNameActionSchema().dump(self)
class ProductDiscountChangePredicateAction(ProductDiscountUpdateAction):
#: A valid ProductDiscount Predicate.
predicate: str
def __init__(self, *, predicate: str):
self.predicate = predicate
super().__init__(action="changePredicate")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountChangePredicateAction":
from ._schemas.product_discount import (
ProductDiscountChangePredicateActionSchema,
)
return ProductDiscountChangePredicateActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import (
ProductDiscountChangePredicateActionSchema,
)
return ProductDiscountChangePredicateActionSchema().dump(self)
class ProductDiscountChangeSortOrderAction(ProductDiscountUpdateAction):
#: The string must contain a number between 0 and 1.
#: A discount with greater sortOrder is prioritized higher than a discount with lower sortOrder.
sort_order: str
def __init__(self, *, sort_order: str):
self.sort_order = sort_order
super().__init__(action="changeSortOrder")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountChangeSortOrderAction":
from ._schemas.product_discount import (
ProductDiscountChangeSortOrderActionSchema,
)
return ProductDiscountChangeSortOrderActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import (
ProductDiscountChangeSortOrderActionSchema,
)
return ProductDiscountChangeSortOrderActionSchema().dump(self)
class ProductDiscountChangeValueAction(ProductDiscountUpdateAction):
value: "ProductDiscountValueDraft"
def __init__(self, *, value: "ProductDiscountValueDraft"):
self.value = value
super().__init__(action="changeValue")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountChangeValueAction":
from ._schemas.product_discount import ProductDiscountChangeValueActionSchema
return ProductDiscountChangeValueActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountChangeValueActionSchema
return ProductDiscountChangeValueActionSchema().dump(self)
class ProductDiscountSetDescriptionAction(ProductDiscountUpdateAction):
description: typing.Optional["LocalizedString"]
def __init__(self, *, description: typing.Optional["LocalizedString"] = None):
self.description = description
super().__init__(action="setDescription")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountSetDescriptionAction":
from ._schemas.product_discount import ProductDiscountSetDescriptionActionSchema
return ProductDiscountSetDescriptionActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountSetDescriptionActionSchema
return ProductDiscountSetDescriptionActionSchema().dump(self)
class ProductDiscountSetKeyAction(ProductDiscountUpdateAction):
#: The key to set.
#: If you provide a `null` value or do not set this field at all, the existing `key` field is removed.
key: typing.Optional[str]
def __init__(self, *, key: typing.Optional[str] = None):
self.key = key
super().__init__(action="setKey")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountSetKeyAction":
from ._schemas.product_discount import ProductDiscountSetKeyActionSchema
return ProductDiscountSetKeyActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountSetKeyActionSchema
return ProductDiscountSetKeyActionSchema().dump(self)
class ProductDiscountSetValidFromAction(ProductDiscountUpdateAction):
#: The time from which the discount should be effective.
#: Please take Eventual Consistency into account for calculated product discount values.
valid_from: typing.Optional[datetime.datetime]
def __init__(self, *, valid_from: typing.Optional[datetime.datetime] = None):
self.valid_from = valid_from
super().__init__(action="setValidFrom")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountSetValidFromAction":
from ._schemas.product_discount import ProductDiscountSetValidFromActionSchema
return ProductDiscountSetValidFromActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountSetValidFromActionSchema
return ProductDiscountSetValidFromActionSchema().dump(self)
class ProductDiscountSetValidFromAndUntilAction(ProductDiscountUpdateAction):
valid_from: typing.Optional[datetime.datetime]
#: The timeframe for which the discount should be effective.
#: Please take Eventual Consistency into account for calculated undiscounted values.
valid_until: typing.Optional[datetime.datetime]
def __init__(
self,
*,
valid_from: typing.Optional[datetime.datetime] = None,
valid_until: typing.Optional[datetime.datetime] = None
):
self.valid_from = valid_from
self.valid_until = valid_until
super().__init__(action="setValidFromAndUntil")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountSetValidFromAndUntilAction":
from ._schemas.product_discount import (
ProductDiscountSetValidFromAndUntilActionSchema,
)
return ProductDiscountSetValidFromAndUntilActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import (
ProductDiscountSetValidFromAndUntilActionSchema,
)
return ProductDiscountSetValidFromAndUntilActionSchema().dump(self)
class ProductDiscountSetValidUntilAction(ProductDiscountUpdateAction):
#: The time from which the discount should be ineffective.
#: Please take Eventual Consistency into account for calculated undiscounted values.
valid_until: typing.Optional[datetime.datetime]
def __init__(self, *, valid_until: typing.Optional[datetime.datetime] = None):
self.valid_until = valid_until
super().__init__(action="setValidUntil")
@classmethod
def deserialize(
cls, data: typing.Dict[str, typing.Any]
) -> "ProductDiscountSetValidUntilAction":
from ._schemas.product_discount import ProductDiscountSetValidUntilActionSchema
return ProductDiscountSetValidUntilActionSchema().load(data)
def serialize(self) -> typing.Dict[str, typing.Any]:
from ._schemas.product_discount import ProductDiscountSetValidUntilActionSchema
return ProductDiscountSetValidUntilActionSchema().dump(self)
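The classes above follow one pattern throughout: each concrete type's `deserialize` is a plain schema load, while the abstract bases (`ProductDiscountUpdateAction`, `ProductDiscountValue`, `ProductDiscountValueDraft`) dispatch on a discriminator field (`action` or `type`). A minimal self-contained sketch of that dispatch, with illustrative class names that are not part of the SDK:

```python
import typing


class Value:
    """Base type: dispatches deserialization on the 'type' discriminator."""

    def __init__(self, *, type: str):
        self.type = type

    @classmethod
    def deserialize(cls, data: typing.Dict[str, typing.Any]) -> "Value":
        # Mirrors ProductDiscountValue.deserialize: pick the concrete
        # class based on the discriminator field.
        if data["type"] == "relative":
            return Relative(permyriad=data["permyriad"])
        if data["type"] == "external":
            return External()
        raise ValueError("unknown value type: %r" % data["type"])


class Relative(Value):
    def __init__(self, *, permyriad: int):
        self.permyriad = permyriad
        super().__init__(type="relative")


class External(Value):
    def __init__(self):
        super().__init__(type="external")


value = Value.deserialize({"type": "relative", "permyriad": 1000})
```

One deliberate difference: the sketch raises `ValueError` on an unknown discriminator, whereas the generated code above falls through and implicitly returns `None`.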
# === hivs_cd/__init__.py (tehamalab/hivs @ db7dfa7f, MIT) ===
__version__ = '0.1.0'
default_app_config = 'hivs_cd.apps.HivsCdConfig'
3d860e707720e0be2eda0fee56f554fbbd271d62 | 756 | py | Python | test/zold/test_time.py | DronMDF/zold-lazy-node | 835a4c505dbfccb800af69146487f18d10e3c39c | [
"MIT"
] | 1 | 2018-06-13T20:39:45.000Z | 2018-06-13T20:39:45.000Z | test/zold/test_time.py | DronMDF/zold-lazy-node | 835a4c505dbfccb800af69146487f18d10e3c39c | [
"MIT"
] | 200 | 2018-06-13T17:18:20.000Z | 2018-11-04T09:14:09.000Z | test/zold/test_time.py | DronMDF/zold-lazy-node | 835a4c505dbfccb800af69146487f18d10e3c39c | [
"MIT"
] | null | null | null | # Copyright (c) 2018 Andrey Valyaev <dron.valyaev@gmail.com>
# Copyright (c) 2018 Alexey Tsurkan
#
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.
''' Time '''
from datetime import datetime, timezone
from zold.time import NowTime, DatetimeTime
class TestDateTime:
	''' Tests for DatetimeTime '''
def test_representation(self):
		''' Check that the timezone is set in the representation '''
time = datetime(2018, 6, 19, 14, 17, 22, tzinfo=timezone.utc)
assert str(DatetimeTime(time)) == '2018-06-19T14:17:22Z'
class TestNowTime:
	''' Tests for the current time '''
def test_zone(self):
		''' Check how the time zone is rendered '''
assert str(NowTime()).endswith('Z')
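Both tests pin the ISO-8601 'Z' (UTC) representation. With the standard library alone, the expected string can be produced as below; note the literal 'Z' is only correct because the datetime is timezone-aware UTC:

```python
from datetime import datetime, timezone

# Format an aware UTC datetime in the ISO-8601 'Z' form asserted above.
moment = datetime(2018, 6, 19, 14, 17, 22, tzinfo=timezone.utc)
formatted = moment.strftime('%Y-%m-%dT%H:%M:%SZ')
```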
# === kratos/python_scripts/response_functions/response_function_interface.py (lkusch/Kratos @ e8072d8e, BSD-4-Clause) ===
class ResponseFunctionInterface(object):
"""
This response function interface provides a unique interface for all possible ways
to calculate the value and gradient of a response.
The interface is designed to be used in e.g. optimization, where the value and gradient
of a response is required, however the exact method of gradient calculation is of
secondary importance.
This might be done using e.g. adjoint sensitivity analysis capabilities of Kratos,
or even a simple finite differencing method.
(Do not confuse this class with the kratos/response_functions/adjoint_response_function.h,
which is an implementation detail for the adjoint sensitivity analysis in Kratos)
"""
def RunCalculation(self, calculate_gradient):
self.Initialize()
self.InitializeSolutionStep()
self.CalculateValue()
if calculate_gradient:
self.CalculateGradient()
self.FinalizeSolutionStep()
self.Finalize()
def Initialize(self):
pass
def UpdateDesign(self, updated_model_part, variable):
pass
def InitializeSolutionStep(self):
pass
def CalculateValue(self):
raise NotImplementedError("CalculateValue needs to be implemented by the derived class")
def CalculateGradient(self):
raise NotImplementedError("CalculateGradient needs to be implemented by the derived class")
def FinalizeSolutionStep(self):
pass
def Finalize(self):
pass
def GetValue(self):
raise NotImplementedError("GetValue needs to be implemented by the derived class")
def GetNodalGradient(self, variable):
raise NotImplementedError("GetNodalGradient needs to be implemented by the derived class")
def GetElementalGradient(self, variable):
raise NotImplementedError("GetElementalGradient needs to be implemented by the derived class")
def GetConditionalGradient(self, variable):
raise NotImplementedError("GetConditionalGradient needs to be implemented by the derived class")
def IsEvaluatedInFolder(self):
return False
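The docstring above stresses that how the gradient is obtained is secondary; finite differencing is one option it names. A self-contained sketch of that idea for a toy response f(x) = x**2, following the interface's method names (`FiniteDifferenceResponse` is hypothetical, and all Kratos model-part plumbing is omitted):

```python
class FiniteDifferenceResponse:
    """Toy response f(x) = x**2 with a central-difference gradient."""

    def __init__(self, x, step=1e-6):
        self.x = x
        self.step = step
        self._value = None
        self._gradient = None

    def CalculateValue(self):
        self._value = self.x ** 2

    def CalculateGradient(self):
        # Central difference: (f(x + h) - f(x - h)) / (2h).
        h = self.step
        self._gradient = ((self.x + h) ** 2 - (self.x - h) ** 2) / (2 * h)

    def GetValue(self):
        return self._value

    def GetGradient(self):
        return self._gradient


response = FiniteDifferenceResponse(3.0)
response.CalculateValue()
response.CalculateGradient()
```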
# === convoy/api/application.py (frain-dev/convoy-python @ 7607a6b6, MIT) ===
from convoy.client import Client
class Application():
"""Initializes an Application object to make calls to the /application endpoint.
Parameters
----------
config : dict of config values
"""
def __init__(self, config):
self.client = Client(config)
def all(self, query):
'''
        Get all applications.
'''
response = self.client.httpGet("/applications", query)
return response
def create(self, query, data):
'''
Create a new application.
Parameters
----------
data = {
"name": "",
"support_email": ""
}
'''
response = self.client.httpPost("/applications", query, data)
return response
def find(self, id, query):
'''
Find a particular application.
'''
response = self.client.httpGet("/applications/%s" % (id), query)
return response
def update(self, id, query, data):
'''
Update an application.
Parameters
----------
data = {
"name": "",
"support_email": ""
}
'''
response = self.client.httpPut("/applications/%s" % id, query, data)
return response
def delete(self, id, query, data):
'''
Delete an application.
'''
response = self.client.httpDelete("/applications/%s" % id, query, data)
return response
# === catkin_ws/bbinstance/src/bbinstance/monitor.py (fontysrobotics/Blackboard_based_distributed_fleet_manager @ a6b44738, BSD-3-Clause) ===
#!/usr/bin/env python
###############################################################################################
from python_qt_binding import QtWidgets
from python_qt_binding import QtCore
from python_qt_binding import QtGui
# from PyQt5 import QtCore, QtGui, QtWidgets
import rospy
from blackboard.msg import TaskMsg
from blackboard.Task import Task,TaskType,TaskStep,TaskState
from std_msgs.msg import String
from std_msgs.msg import Float32
from blackboard.msg import bbBackup
from geometry_msgs.msg import Pose
from threading import Lock
import cv2
class Cbot:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius
        self.color = (0, 0, 255)  # red in OpenCV's BGR order

    def draw(self, img):
        # Draw the robot as a filled circle on the given image
        # (the original returned this from __init__, where `img` was undefined).
        return cv2.circle(img, (self.x, self.y), self.radius, self.color, -1)
if __name__ == "__main__":
    robot = Cbot(250,250,50)
# === client/tracebin/aborts.py (alex/tracebin @ cd9717e7, BSD-3-Clause) ===
class BaseAbort(object):
def __init__(self, reason):
self.reason = reason
class PythonAbort(BaseAbort):
def __init__(self, reason, pycode, lineno):
super(PythonAbort, self).__init__(reason)
self.pycode = pycode
self.lineno = lineno
def visit(self, visitor):
        return visitor.visit_python_abort(self)
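`visit` is a double-dispatch hook: the abort hands itself to a visitor that implements one method per abort kind. A self-contained sketch reusing the two classes above plus an illustrative visitor (`DescribeVisitor` is not part of tracebin):

```python
class BaseAbort(object):
    def __init__(self, reason):
        self.reason = reason


class PythonAbort(BaseAbort):
    def __init__(self, reason, pycode, lineno):
        super(PythonAbort, self).__init__(reason)
        self.pycode = pycode
        self.lineno = lineno

    def visit(self, visitor):
        return visitor.visit_python_abort(self)


class DescribeVisitor(object):
    """Illustrative visitor; only visit_python_abort is required by PythonAbort."""

    def visit_python_abort(self, abort):
        return "python abort at line %d: %s" % (abort.lineno, abort.reason)


message = PythonAbort("trace too long", "x = 1", 42).visit(DescribeVisitor())
```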
# === democelery/network/urls.py (alexdzul/democelery @ 4627d155, MIT) ===
from django.urls import path
from . import views
app_name = 'network'
urlpatterns = [
path('', views.Index.as_view(), name='index'),
path('create-subscriber/', views.CreateSubscriber.as_view(), name='create-subscriber'),
]
# === fabfile.py (esenti/ld-poznan @ e3b5e5e8, MIT) ===
from fabric.contrib.project import rsync_project
from fabric.api import run
def sync_files():
rsync_project(remote_dir='/var/www/vhosts/ludumdare.pl/ldpoznan',
local_dir='ldpoznan/',
exclude=['local_settings.py', '*.pyc', '*.wsgi', '*/static/*'],
delete=True,
extra_opts='')
def reload_wsgi():
run('touch /var/www/vhosts/ludumdare.pl/ldpoznan/ldpoznan.wsgi')
def deploy():
sync_files()
    reload_wsgi()
# === scripts/todo.py (blanham/ChickenOS @ 47e8d65e, NCSA) ===
#!/usr/bin/python
import os
import re
def grep(path, regex):
rg = re.compile(regex)
res = []
tag = 0
comment = ""
linenum = 0
for root, dir, fnames in os.walk(path):
for fname in fnames:
filename = os.path.join(root,fname)
if(filename.find("cscope.out") != -1):
continue
if(filename.find("tags") != -1):
continue
if(filename.find("types_c.taghl") != -1):
continue
if(filename.find("font") != -1):
continue
if(filename.find("img") != -1):
continue
if(filename[-4:-1] == ".sw"):
continue
if(filename[-4:] == ".vim"):
continue
if(filename[-2:] == ".o"):
continue
if(filename[-3:] == ".py"):
continue
if os.path.isfile(filename):
with open(filename) as f:
linenum = 0
for line in f:
linenum += 1
if rg.search(line):
tag = 6
res.append(filename + "\n")
line = line.strip()
if(line.find("/*") != -1):
comment = "*/"
res.append(str(linenum) + ": " + line + "\n")
elif tag > 1:
line = line.strip()
if(line.find("/*") != -1):
comment = "*/"
elif(line.find("*/") != -1):
comment = ""
res.append(str(linenum) + ": " + line + "\n")
tag-=1
elif tag == 1:
tag = 0
res.append(comment + "\n")
comment = ""
return res
def main():
result = grep('.',"TODO")
result += grep('.',"XXX")
result += grep('.',"FIXME")
#for i in result:
print "".join(result)
print "/* ex: set syntax=c: */"
if __name__ == "__main__":
	main()
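`grep()` interleaves directory walking, file filtering, and comment-context tracking. Its core step, reporting matching lines with their line numbers, can be sketched in isolation (`find_tagged_lines` is illustrative, not part of the script):

```python
import re


def find_tagged_lines(text, pattern="TODO"):
    """Simplified sketch of grep() above: report matching lines with numbers."""
    rg = re.compile(pattern)
    hits = []
    for num, line in enumerate(text.splitlines(), 1):
        if rg.search(line):
            hits.append("%d: %s" % (num, line.strip()))
    return hits


sample = "x = 1\n# TODO fix this\ny = 2"
hits = find_tagged_lines(sample)
```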
3db8c664e62e9f350beda966cd1f40a59d711e6c | 1,252 | py | Python | wrapp_class.py | 5lunk/psec | a6bec5102759057a15d39aed3de7a20aa878e55a | [
"CC0-1.0"
] | null | null | null | wrapp_class.py | 5lunk/psec | a6bec5102759057a15d39aed3de7a20aa878e55a | [
"CC0-1.0"
] | null | null | null | wrapp_class.py | 5lunk/psec | a6bec5102759057a15d39aed3de7a20aa878e55a | [
"CC0-1.0"
] | null | null | null | #! /usr/bin/env python3
from service_funcs import end_task


class Wrapp:
    """
    Class for handling connection methods
    """
    @staticmethod
    def failed_check(method):
        """
        Decorator
        If passed, go to the next
        """
        def wrapp_failed_check(self):
            if method(self) is False:
                task_result = 'Task failed'
                end_task(self.log_file_name, self.mac, task_result, self.config)
        return wrapp_failed_check

    @staticmethod
    def next_check(method):
        """
        Decorator
        If not passed, go to the next
        """
        def wrapp_next_check(self):
            if method(self) is True:
                task_result = 'Task completed'
                end_task(self.log_file_name, self.mac, task_result, self.config)
        return wrapp_next_check

    @staticmethod
    def pass_check(method):
        """
        Decorator
        Last check
        """
        def wrapp_pass_check(self):
            if method(self) is True:
                task_result = 'Task completed'
            else:
                task_result = 'Task failed'
            end_task(self.log_file_name, self.mac, task_result, self.config)
        return wrapp_pass_check
| 26.083333 | 80 | 0.561502 | 142 | 1,252 | 4.71831 | 0.302817 | 0.104478 | 0.083582 | 0.076119 | 0.577612 | 0.546269 | 0.546269 | 0.471642 | 0.471642 | 0.471642 | 0 | 0.001238 | 0.354633 | 1,252 | 47 | 81 | 26.638298 | 0.82797 | 0.125399 | 0 | 0.48 | 0 | 0 | 0.050761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0.12 | 0.04 | 0 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
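The decorator pattern in wrapp_class.py (wrap a check method, then finish the task based on its boolean result) can be exercised with a self-contained sketch. The `Task` class, its attributes, and the stubbed `end_task` below are hypothetical stand-ins for the real `service_funcs` helper:

```python
results = []

def end_task(log_file_name, mac, task_result, config):
    # stand-in for service_funcs.end_task: just record the outcome
    results.append(task_result)

def failed_check(method):
    # same shape as Wrapp.failed_check: finish the task when the check fails
    def wrapper(self):
        if method(self) is False:
            end_task(self.log_file_name, self.mac, 'Task failed', self.config)
    return wrapper

class Task:
    log_file_name = 'task.log'    # hypothetical attributes the real
    mac = 'aa:bb:cc:dd:ee:ff'     # decorated objects are expected to carry
    config = {}

    @failed_check
    def check_port(self):
        return False  # simulate a failed check

Task().check_port()
print(results)  # ['Task failed']
```

The closure over `method` is what lets one decorator body serve every check method on the class.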
3dc39e07a7614061daa6e3a5ebed1db463f5c098 | 197 | py | Python | honcho/compat.py | laobaox/honcho | 0d40eb82152a96d1d0d21c0f69be0beb8fa0e923 | [
"MIT"
] | 1,079 | 2015-01-02T14:32:37.000Z | 2022-03-24T15:53:52.000Z | honcho/compat.py | laobaox/honcho | 0d40eb82152a96d1d0d21c0f69be0beb8fa0e923 | [
"MIT"
] | 123 | 2015-01-10T11:22:17.000Z | 2022-03-27T04:22:02.000Z | honcho/compat.py | laobaox/honcho | 0d40eb82152a96d1d0d21c0f69be0beb8fa0e923 | [
"MIT"
] | 100 | 2015-01-10T08:49:51.000Z | 2022-02-10T10:34:18.000Z | """
Compatibility layer and utilities, mostly for proper Windows and Python 3
support.
"""
import sys
# This works for both 32 and 64 bit Windows
ON_WINDOWS = 'win32' in str(sys.platform).lower()
| 21.888889 | 73 | 0.746193 | 31 | 197 | 4.709677 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042424 | 0.162437 | 197 | 8 | 74 | 24.625 | 0.842424 | 0.634518 | 0 | 0 | 0 | 0 | 0.078125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3ddaab00d279359042f5e614e164a276a0a6cccc | 223 | py | Python | Exercicios/ex057.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | Exercicios/ex057.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | Exercicios/ex057.py | Dobravoski/Exercicios-Python | e7169e1ee6954a7bc9216063845611107a13759f | [
"MIT"
] | null | null | null | s = str(input('Digite seu sexo [M/F]: ')).strip().upper()[0]
while s != 'M' and s != 'F':
    # keep only the first character, as in the initial prompt
    s = str(input('Digite seu sexo [M/F]: ')).strip().upper()[0]
if s == 'M':
    print('Olá senhor.')
else:
    print('Olá, senhora.')
| 27.875 | 61 | 0.529148 | 37 | 223 | 3.189189 | 0.513514 | 0.067797 | 0.152542 | 0.254237 | 0.576271 | 0.576271 | 0.576271 | 0.576271 | 0.576271 | 0.576271 | 0 | 0.005556 | 0.192825 | 223 | 7 | 62 | 31.857143 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0.327354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
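The re-prompt loop in ex057.py can be demonstrated non-interactively by scripting the answers; `fake_input` and the answer sequence below are invented for the demo:

```python
answers = iter(['x', 'f'])  # first an invalid answer, then a valid one

def fake_input(prompt):
    # stand-in for input() so the example runs without a terminal
    return next(answers)

s = fake_input('Digite seu sexo [M/F]: ').strip().upper()[0]
while s != 'M' and s != 'F':
    s = fake_input('Digite seu sexo [M/F]: ').strip().upper()[0]

print(s)  # F
```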
3df01c8dd3832615b50729c5f7cefa125287ad00 | 3,310 | py | Python | PiCN/Playground/AssistedSharing/FetchLayer.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | null | null | null | PiCN/Playground/AssistedSharing/FetchLayer.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | 5 | 2020-07-15T09:01:42.000Z | 2020-09-28T08:45:21.000Z | PiCN/Playground/AssistedSharing/FetchLayer.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | null | null | null | """Fetch Layer: fetch a high-level object"""
import multiprocessing

from PiCN.Packets import Content, Interest, Packet, Nack, Name
from PiCN.Processes import LayerProcess
from PiCN.Playground.AssistedSharing.WrapperDescription import WrapperDescription


class FetchLayer(LayerProcess):

    def __init__(self, log_level=255):
        super().__init__(logger_name="FetchLayer", log_level=log_level)
        self.high_level_name = None
        self.wrapper_description = None

    def trigger_fetching(self, high_level_name: Name, face_id: int):
        """
        Trigger fetching of high level object
        :param high_level_name: Name of high level object
        :param face_id: Face id to send interest
        :return: None
        """
        assert self.high_level_name is None
        self.high_level_name = high_level_name
        self.queue_to_lower.put([face_id, Interest(high_level_name)])
        self.logger.info("Send interest for wrapper description")

    def data_from_higher(self, to_lower: multiprocessing.Queue, to_higher: multiprocessing.Queue, data):
        pass  # this is the highest layer in the stack

    def data_from_lower(self, to_lower: multiprocessing.Queue, to_higher: multiprocessing.Queue, data):
        if len(data) != 2:
            self.logger.info("ICN Layer expects to receive [face id, packet] from lower layer")
            return
        if not isinstance(data[0], int):
            self.logger.info("ICN Layer expects to receive [face id, packet] from lower layer")
            return
        if not isinstance(data[1], Packet):
            self.logger.info("ICN Layer expects to receive [face id, packet] from lower layer")
            return
        face_id = data[0]
        packet = data[1]
        if isinstance(packet, Interest):
            self.logger.info("Received Interest Packet, do nothing")
        elif isinstance(packet, Content):
            self.logger.info("Received Data Packet: " + str(packet.name))
            self.handle_content(packet, face_id, to_lower)
            return
        elif isinstance(packet, Nack):
            self.logger.info("Received NACK, do nothing")
            return
        else:
            self.logger.info("Received Unknown Packet, do nothing")
            return

    def handle_content(self, content: Content, face_id: int, to_lower: multiprocessing.Queue):
        """
        Handle received content object
        :param content: Received data packet
        :param face_id: Incoming face id
        :param to_lower: Queue to lower layer
        :return: None
        """
        if self.wrapper_description is None:
            if content.name == self.high_level_name:  # compare by value, not identity ("is")
                self.wrapper_description = WrapperDescription(content.content)
                self.do_encapsulation()
            else:
                self.logger.info("Received data but wrapper description not ready. Skip.")
                return
        else:
            pass  # todo -- hand over to unwrapping procedure: do_encapsulation(..)

    def do_encapsulation(self):
        """
        Start encapsulation (self.wrapper_description must be initialized)
        :return: None
        """
        assert self.wrapper_description is not None
        pass  # TODO

    def ageing(self):
        pass  # no ageing in this layer
| 38.488372 | 104 | 0.641088 | 400 | 3,310 | 5.1575 | 0.23 | 0.034901 | 0.061076 | 0.041202 | 0.225885 | 0.164809 | 0.164809 | 0.164809 | 0.164809 | 0.164809 | 0 | 0.003338 | 0.275831 | 3,310 | 85 | 105 | 38.941176 | 0.857322 | 0.165559 | 0 | 0.309091 | 0 | 0 | 0.155725 | 0 | 0 | 0 | 0 | 0.023529 | 0.036364 | 1 | 0.127273 | false | 0.072727 | 0.072727 | 0 | 0.345455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
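The isinstance-based dispatch in FetchLayer.data_from_lower can be isolated into a stdlib-only sketch; the stand-in packet classes below are minimal stubs, not the real PiCN types:

```python
class Packet: pass
class Interest(Packet): pass
class Nack(Packet): pass

class Content(Packet):
    def __init__(self, name):
        self.name = name

def dispatch(packet):
    # mirrors the if/elif isinstance chain in data_from_lower
    if isinstance(packet, Interest):
        return "interest"
    elif isinstance(packet, Content):
        return "content:" + packet.name
    elif isinstance(packet, Nack):
        return "nack"
    return "unknown"

print(dispatch(Content("/data/obj")))  # content:/data/obj
```

Because `Interest`, `Content`, and `Nack` all subclass `Packet`, the single `isinstance(data[1], Packet)` guard in the real layer admits every packet kind before this per-type branching runs.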
3df53aaf93b0bf21f38c1570d1225126f3188682 | 687 | py | Python | wetterbot_dbmodel.py | Hofei90/Wetterbot | c57ce8e2ce193904e9d2053fc975f7b95757c932 | [
"MIT"
] | null | null | null | wetterbot_dbmodel.py | Hofei90/Wetterbot | c57ce8e2ce193904e9d2053fc975f7b95757c932 | [
"MIT"
] | null | null | null | wetterbot_dbmodel.py | Hofei90/Wetterbot | c57ce8e2ce193904e9d2053fc975f7b95757c932 | [
"MIT"
] | null | null | null | from peewee import *
database = SqliteDatabase('wetterbot.sqlite', **{})


class UnknownField(object):
    def __init__(self, *_, **__): pass


class BaseModel(Model):
    class Meta:
        database = database


class Benachrichtigung(BaseModel):
    inhalt = TextField(null=True)
    next = TextField(null=True)
    typ = TextField(null=True)
    uhrzeit = TextField(null=True)
    userid = IntegerField(null=True)

    class Meta:
        table_name = 'benachrichtigung'


class User(BaseModel):
    nachname = TextField(null=True)
    userid = AutoField(null=True)
    username = TextField(null=True)
    vorname = TextField(null=True)

    class Meta:
        table_name = 'user'
| 20.205882 | 51 | 0.668122 | 73 | 687 | 6.164384 | 0.465753 | 0.16 | 0.264444 | 0.102222 | 0.115556 | 0.115556 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218341 | 687 | 33 | 52 | 20.818182 | 0.837989 | 0 | 0 | 0.136364 | 0 | 0 | 0.052402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.045455 | 0.045455 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3dfd254259bbb290f494f3aff485e408a0d9a350 | 7,164 | py | Python | sdk/python/pulumi_yandex/get_iot_core_device.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2021-04-20T15:39:41.000Z | 2022-02-20T09:14:39.000Z | sdk/python/pulumi_yandex/get_iot_core_device.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | 56 | 2021-04-20T11:31:03.000Z | 2022-03-31T15:53:06.000Z | sdk/python/pulumi_yandex/get_iot_core_device.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities

__all__ = [
    'GetIotCoreDeviceResult',
    'AwaitableGetIotCoreDeviceResult',
    'get_iot_core_device',
    'get_iot_core_device_output',
]

@pulumi.output_type
class GetIotCoreDeviceResult:
    """
    A collection of values returned by getIotCoreDevice.
    """
    def __init__(__self__, aliases=None, certificates=None, created_at=None, description=None, device_id=None, id=None, name=None, passwords=None, registry_id=None):
        if aliases and not isinstance(aliases, dict):
            raise TypeError("Expected argument 'aliases' to be a dict")
        pulumi.set(__self__, "aliases", aliases)
        if certificates and not isinstance(certificates, list):
            raise TypeError("Expected argument 'certificates' to be a list")
        pulumi.set(__self__, "certificates", certificates)
        if created_at and not isinstance(created_at, str):
            raise TypeError("Expected argument 'created_at' to be a str")
        pulumi.set(__self__, "created_at", created_at)
        if description and not isinstance(description, str):
            raise TypeError("Expected argument 'description' to be a str")
        pulumi.set(__self__, "description", description)
        if device_id and not isinstance(device_id, str):
            raise TypeError("Expected argument 'device_id' to be a str")
        pulumi.set(__self__, "device_id", device_id)
        if id and not isinstance(id, str):
            raise TypeError("Expected argument 'id' to be a str")
        pulumi.set(__self__, "id", id)
        if name and not isinstance(name, str):
            raise TypeError("Expected argument 'name' to be a str")
        pulumi.set(__self__, "name", name)
        if passwords and not isinstance(passwords, list):
            raise TypeError("Expected argument 'passwords' to be a list")
        pulumi.set(__self__, "passwords", passwords)
        if registry_id and not isinstance(registry_id, str):
            raise TypeError("Expected argument 'registry_id' to be a str")
        pulumi.set(__self__, "registry_id", registry_id)

    @property
    @pulumi.getter
    def aliases(self) -> Mapping[str, str]:
        """
        A set of key/value aliases pairs to assign to the IoT Core Device
        """
        return pulumi.get(self, "aliases")

    @property
    @pulumi.getter
    def certificates(self) -> Sequence[str]:
        """
        A set of certificate's fingerprints for the IoT Core Device
        """
        return pulumi.get(self, "certificates")

    @property
    @pulumi.getter(name="createdAt")
    def created_at(self) -> str:
        """
        Creation timestamp of the IoT Core Device
        """
        return pulumi.get(self, "created_at")

    @property
    @pulumi.getter
    def description(self) -> str:
        """
        Description of the IoT Core Device
        """
        return pulumi.get(self, "description")

    @property
    @pulumi.getter(name="deviceId")
    def device_id(self) -> Optional[str]:
        return pulumi.get(self, "device_id")

    @property
    @pulumi.getter
    def id(self) -> str:
        """
        The provider-assigned unique ID for this managed resource.
        """
        return pulumi.get(self, "id")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def passwords(self) -> Sequence[str]:
        """
        A set of passwords's id for the IoT Core Device
        """
        return pulumi.get(self, "passwords")

    @property
    @pulumi.getter(name="registryId")
    def registry_id(self) -> str:
        """
        IoT Core Registry ID for the IoT Core Device
        """
        return pulumi.get(self, "registry_id")


class AwaitableGetIotCoreDeviceResult(GetIotCoreDeviceResult):
    # pylint: disable=using-constant-test
    def __await__(self):
        if False:
            yield self
        return GetIotCoreDeviceResult(
            aliases=self.aliases,
            certificates=self.certificates,
            created_at=self.created_at,
            description=self.description,
            device_id=self.device_id,
            id=self.id,
            name=self.name,
            passwords=self.passwords,
            registry_id=self.registry_id)


def get_iot_core_device(device_id: Optional[str] = None,
                        name: Optional[str] = None,
                        opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetIotCoreDeviceResult:
    """
    Get information about a Yandex IoT Core device. For more information about IoT Core, see
    [Yandex.Cloud IoT Device](https://cloud.yandex.com/docs/iot-core/quickstart).

    ```python
    import pulumi
    import pulumi_yandex as yandex

    my_device = yandex.get_iot_core_device(device_id="are1sampleregistry11")
    ```

    This data source is used to define [Yandex.Cloud IoT Device](https://cloud.yandex.com/docs/iot-core/quickstart) that can be used by other resources.

    :param str device_id: IoT Core Device id used to define device
    :param str name: IoT Core Device name used to define device
    """
    __args__ = dict()
    __args__['deviceId'] = device_id
    __args__['name'] = name
    if opts is None:
        opts = pulumi.InvokeOptions()
    if opts.version is None:
        opts.version = _utilities.get_version()
    __ret__ = pulumi.runtime.invoke('yandex:index/getIotCoreDevice:getIotCoreDevice', __args__, opts=opts, typ=GetIotCoreDeviceResult).value

    return AwaitableGetIotCoreDeviceResult(
        aliases=__ret__.aliases,
        certificates=__ret__.certificates,
        created_at=__ret__.created_at,
        description=__ret__.description,
        device_id=__ret__.device_id,
        id=__ret__.id,
        name=__ret__.name,
        passwords=__ret__.passwords,
        registry_id=__ret__.registry_id)


@_utilities.lift_output_func(get_iot_core_device)
def get_iot_core_device_output(device_id: Optional[pulumi.Input[Optional[str]]] = None,
                               name: Optional[pulumi.Input[Optional[str]]] = None,
                               opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetIotCoreDeviceResult]:
    """
    Get information about a Yandex IoT Core device. For more information about IoT Core, see
    [Yandex.Cloud IoT Device](https://cloud.yandex.com/docs/iot-core/quickstart).

    ```python
    import pulumi
    import pulumi_yandex as yandex

    my_device = yandex.get_iot_core_device(device_id="are1sampleregistry11")
    ```

    This data source is used to define [Yandex.Cloud IoT Device](https://cloud.yandex.com/docs/iot-core/quickstart) that can be used by other resources.

    :param str device_id: IoT Core Device id used to define device
    :param str name: IoT Core Device name used to define device
    """
    ...
| 36 | 165 | 0.6555 | 864 | 7,164 | 5.221065 | 0.167824 | 0.040346 | 0.054755 | 0.059854 | 0.444247 | 0.374418 | 0.324762 | 0.285968 | 0.240745 | 0.215917 | 0 | 0.001296 | 0.245812 | 7,164 | 198 | 166 | 36.181818 | 0.833611 | 0.247627 | 0 | 0.128205 | 1 | 0 | 0.136844 | 0.024471 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.068376 | 0.042735 | 0.017094 | 0.264957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
9a71f894759e135ca8ad8f9a0d8a0924ddee0165 | 302 | py | Python | src/freeSession/api/api.py | inpritamkundu/python-webinar | 0e30ac53ff076fc38315dbb507983cdceaa32455 | [
"MIT"
] | null | null | null | src/freeSession/api/api.py | inpritamkundu/python-webinar | 0e30ac53ff076fc38315dbb507983cdceaa32455 | [
"MIT"
] | null | null | null | src/freeSession/api/api.py | inpritamkundu/python-webinar | 0e30ac53ff076fc38315dbb507983cdceaa32455 | [
"MIT"
] | null | null | null | from flask import Flask
# Create app object
app = Flask(__name__)


@app.route('/')
def hello():
    return "hello"


@app.route('/user')
def user():
    return "Teckat"


@app.route('/name/<name>')
def name(name):
    print(name)
    return name


if __name__ == "__main__":
    app.run(debug=True)
| 11.615385 | 26 | 0.622517 | 41 | 302 | 4.292683 | 0.463415 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205298 | 302 | 25 | 27 | 12.08 | 0.733333 | 0.056291 | 0 | 0 | 0 | 0 | 0.130742 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0.142857 | 0.5 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
9a7b1f55e71d0a5f057f91a2508f05b60e4d7129 | 447 | py | Python | main/admin.py | getmobilehq/mykontainer-backend | 20ee12f056ce7be75a71b02bcf2b055fe6643a36 | [
"Unlicense",
"MIT"
] | null | null | null | main/admin.py | getmobilehq/mykontainer-backend | 20ee12f056ce7be75a71b02bcf2b055fe6643a36 | [
"Unlicense",
"MIT"
] | null | null | null | main/admin.py | getmobilehq/mykontainer-backend | 20ee12f056ce7be75a71b02bcf2b055fe6643a36 | [
"Unlicense",
"MIT"
] | null | null | null | from django.contrib import admin
from .models import *

# Register your models here.
# admin.site.register([ShippingCompany, BayArea])


@admin.register(ShippingCompany)
class ShippingCompanyAdmin(admin.ModelAdmin):
    list_display = ['id', "name", "is_active"]
    list_editable = ("is_active",)


@admin.register(BayArea)
class BayAreaAdmin(admin.ModelAdmin):
    list_display = ['id', "name", "is_active"]
    list_editable = ("is_active",) | 27.9375 | 49 | 0.722595 | 51 | 447 | 6.176471 | 0.45098 | 0.101587 | 0.120635 | 0.165079 | 0.380952 | 0.380952 | 0.380952 | 0.380952 | 0.380952 | 0.380952 | 0 | 0 | 0.136465 | 447 | 16 | 50 | 27.9375 | 0.816062 | 0.165548 | 0 | 0.4 | 0 | 0 | 0.12938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9a81f9702d0e398fac9c97df7535d7c9bf8cc498 | 383 | py | Python | TWLight/users/migrations/0056_remove_authorization_partner.py | nicole331/TWLight | fab9002e76868f8a2ef36f9279c777de34243b2c | [
"MIT"
] | 67 | 2017-12-14T22:27:48.000Z | 2022-03-13T18:21:31.000Z | TWLight/users/migrations/0056_remove_authorization_partner.py | nicole331/TWLight | fab9002e76868f8a2ef36f9279c777de34243b2c | [
"MIT"
] | 433 | 2017-03-24T22:51:23.000Z | 2022-03-31T19:36:22.000Z | TWLight/users/migrations/0056_remove_authorization_partner.py | Mahuton/TWLight | 90b299d07b0479f21dc90e17b8d05f5a221b0de1 | [
"MIT"
] | 105 | 2017-06-23T03:53:41.000Z | 2022-03-30T17:24:29.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2020-05-08 17:26
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
    dependencies = [
        ("users", "0055_authorization_data_partners_foreignkey_to_manytomany")
    ]

    operations = [migrations.RemoveField(model_name="authorization", name="partner")]
| 25.533333 | 85 | 0.736292 | 46 | 383 | 5.869565 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067692 | 0.151436 | 383 | 14 | 86 | 27.357143 | 0.763077 | 0.180157 | 0 | 0 | 1 | 0 | 0.263666 | 0.18328 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9a8958e1ff4b068f8b762332c266c91b74ba994e | 311 | py | Python | io_mesh_pd2model/export_pd2model.py | ModWorkshop/payday-model-blender-addon | fc2b3040056baa3f9f61c01d4ebd046be51e667f | [
"MIT"
] | 1 | 2016-02-10T17:43:40.000Z | 2016-02-10T17:43:40.000Z | io_mesh_pd2model/export_pd2model.py | Last-Bullet-Development/payday-model-blender-addon | fc2b3040056baa3f9f61c01d4ebd046be51e667f | [
"MIT"
] | null | null | null | io_mesh_pd2model/export_pd2model.py | Last-Bullet-Development/payday-model-blender-addon | fc2b3040056baa3f9f61c01d4ebd046be51e667f | [
"MIT"
] | 1 | 2019-05-13T18:50:33.000Z | 2019-05-13T18:50:33.000Z | """
This script exports Blender meshes to the PayDay 2 Model File format.

Usage:

Execute this script from the "File->Export" menu and choose where to
save the file.
"""
import bpy
def write(filepath):
    # TODO: implement export code
    # write to a file
    file = open(filepath, "w")
    file.close()
| 14.809524 | 68 | 0.672026 | 46 | 311 | 4.543478 | 0.717391 | 0.095694 | 0.124402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004219 | 0.237942 | 311 | 20 | 69 | 15.55 | 0.877637 | 0.643087 | 0 | 0 | 0 | 0 | 0.009901 | 0 | 0 | 0 | 0 | 0.05 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9a92e0ca5d8bd4a65703818ab749be25409d351e | 72 | py | Python | private/messages/__init__.py | shimataro/brocadefw | c49e6a448a83d47d3b7046194748a97c19a3e1d7 | [
"MIT"
] | 1 | 2015-11-22T06:27:56.000Z | 2015-11-22T06:27:56.000Z | private/messages/__init__.py | shimataro/brocadefw | c49e6a448a83d47d3b7046194748a97c19a3e1d7 | [
"MIT"
] | null | null | null | private/messages/__init__.py | shimataro/brocadefw | c49e6a448a83d47d3b7046194748a97c19a3e1d7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""" ラベルを定義 """
NAME = "name"
MYNAME = "myname"
| 12 | 23 | 0.5 | 8 | 72 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.208333 | 72 | 5 | 24 | 14.4 | 0.614035 | 0.416667 | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9ab4d29aa6845d370b8d7d60096f720129be0e66 | 227 | py | Python | Lesson07/random_normal.py | TrainingByPackt/Beginning-Python-AI | b1e68d892e65b1f7b347330ef2a90a1b546bdd25 | [
"MIT"
] | 14 | 2018-12-26T23:07:28.000Z | 2021-06-30T19:51:57.000Z | Lesson07/random_normal.py | TrainingByPackt/Beginning-Python-AI | b1e68d892e65b1f7b347330ef2a90a1b546bdd25 | [
"MIT"
] | 16 | 2018-10-20T13:37:51.000Z | 2018-10-22T22:21:52.000Z | Lesson07/random_normal.py | TrainingByPackt/Beginning-AI-Machine-Learning-and-Python | b1e68d892e65b1f7b347330ef2a90a1b546bdd25 | [
"MIT"
] | 13 | 2018-12-21T07:07:31.000Z | 2022-02-05T12:34:27.000Z | import tensorflow as tf
randomMatrix = tf.Variable(tf.random_normal([3, 4]))
with tf.Session() as session:
    initializer = tf.global_variables_initializer()
    session.run(initializer)
    print(session.run(randomMatrix))
| 25.222222 | 52 | 0.748899 | 29 | 227 | 5.758621 | 0.586207 | 0.11976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.136564 | 227 | 8 | 53 | 28.375 | 0.841837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
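The same sampling can be done without TF1's session machinery; a NumPy sketch (the seed is chosen arbitrarily for reproducibility):

```python
import numpy as np

# sample a 3x4 matrix of standard-normal values, like tf.random_normal([3, 4])
rng = np.random.default_rng(0)
random_matrix = rng.standard_normal((3, 4))
print(random_matrix.shape)  # (3, 4)
```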
9ab9dde5ff068e3fbee8407c86a3718d396063cf | 2,372 | py | Python | tests/correctness/targets/CppCompilationWithTargetDeps/Input/root.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 9 | 2017-02-06T16:45:46.000Z | 2021-12-05T09:42:58.000Z | tests/correctness/targets/CppCompilationWithTargetDeps/Input/root.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 15 | 2019-01-11T19:39:34.000Z | 2022-01-08T11:11:35.000Z | tests/correctness/targets/CppCompilationWithTargetDeps/Input/root.xpybuild.py | xpybuild/xpybuild | c71a73e47414871c8192381d0356ab62f5a58127 | [
"Apache-2.0"
] | 5 | 2017-02-06T16:51:17.000Z | 2020-12-02T17:36:30.000Z | #
# Copyright (c) 2013 - 2017, 2019 Software AG, Darmstadt, Germany and/or its licensors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
from xpybuild.propertysupport import *
from xpybuild.buildcommon import *
from xpybuild.pathsets import *
from xpybuild.targets.native import *
from xpybuild.targets.copy import Copy
from xpybuild.utils.compilers import GCC, VisualStudio
include(os.environ['PYSYS_TEST_ROOT_DIR']+'/build_utilities/native_config.xpybuild.py')
setGlobalOption('native.include.upToDateCheckIgnoreRegex', '(c:/program files.*|.*/tESt4.h)' if IS_WINDOWS else '.*/test4.h')
setGlobalOption('native.include.upToDateCheckIgnoreSystemHeaders', True) # only works on linux/gcc currently
Copy('${OUTPUT_DIR}/my-generated-include-files/', FindPaths('./include-src/'))
Copy('${OUTPUT_DIR}/my-generated-include-files2/generatedpath/test3.h', FindPaths('./include-src/generatedpath/'))
Copy('${OUTPUT_DIR}/test-generated.cpp', './test.cpp')
Cpp(objectname('${OUTPUT_DIR}/no-target-deps'), './test.cpp',
    includes=[
        "./include/",
        './include-src/',
    ]
)

Cpp(objectname('${OUTPUT_DIR}/target-cpp-and-include-dir'), '${OUTPUT_DIR}/test-generated.cpp',
    includes=[
        "./include/",
        '${OUTPUT_DIR}/my-generated-include-files/',  # a target
    ]
)

Cpp(objectname('${OUTPUT_DIR}/target-cpp'), '${OUTPUT_DIR}/test-generated.cpp',
    includes=[
        "./include/",
        './include-src/',
    ]
)

Cpp(objectname('${OUTPUT_DIR}/target-include-dir'), './test.cpp',
    includes=[
        "./include/",
        '${OUTPUT_DIR}/my-generated-include-files/',  # a target
    ]
)

# generated include files in non-target directories are no longer supported
Cpp(objectname('${OUTPUT_DIR}/target-include-file'), './test.cpp',
    includes=[
        "./include/",
        TargetsWithinDir('${OUTPUT_DIR}/my-generated-include-files2/'),  # NOT a target, but contains one
    ]
)
| 33.885714 | 125 | 0.718381 | 308 | 2,372 | 5.470779 | 0.422078 | 0.069436 | 0.032641 | 0.059347 | 0.283086 | 0.268249 | 0.160237 | 0.134125 | 0.134125 | 0.134125 | 0 | 0.010116 | 0.124789 | 2,372 | 69 | 126 | 34.376812 | 0.801541 | 0.329258 | 0 | 0.325581 | 0 | 0 | 0.528025 | 0.405732 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.162791 | 0 | 0.162791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9ac832435867b6cdb2355d3b87e34e012aa5b8f4 | 181 | py | Python | ci/paths.py | dkistner/gardenlinux | 3b6d94af0c32a68f487007ada1d82d9b2f1bedc6 | [
"MIT"
] | null | null | null | ci/paths.py | dkistner/gardenlinux | 3b6d94af0c32a68f487007ada1d82d9b2f1bedc6 | [
"MIT"
] | null | null | null | ci/paths.py | dkistner/gardenlinux | 3b6d94af0c32a68f487007ada1d82d9b2f1bedc6 | [
"MIT"
] | null | null | null | import os
own_dir = os.path.abspath(os.path.dirname(__file__))
flavour_cfg_path = os.path.join(own_dir, os.pardir, 'build.yaml')
cicd_cfg_path = os.path.join(own_dir, 'cicd.yaml')
| 30.166667 | 65 | 0.756906 | 33 | 181 | 3.818182 | 0.454545 | 0.190476 | 0.126984 | 0.206349 | 0.365079 | 0.365079 | 0.365079 | 0 | 0 | 0 | 0 | 0 | 0.082873 | 181 | 5 | 66 | 36.2 | 0.759036 | 0 | 0 | 0 | 0 | 0 | 0.104972 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9acc513c7e496a29068229baf05cf2098e944c72 | 1,682 | py | Python | office365/calendar/event_collection.py | andrewcchoi/Office365-REST-Python-Client | 43db12ae532c804c75a3a34f7b0d7d79e30fdac3 | [
"MIT"
] | null | null | null | office365/calendar/event_collection.py | andrewcchoi/Office365-REST-Python-Client | 43db12ae532c804c75a3a34f7b0d7d79e30fdac3 | [
"MIT"
] | null | null | null | office365/calendar/event_collection.py | andrewcchoi/Office365-REST-Python-Client | 43db12ae532c804c75a3a34f7b0d7d79e30fdac3 | [
"MIT"
] | null | null | null | from office365.calendar.attendee import Attendee
from office365.calendar.dateTimeTimeZone import DateTimeTimeZone
from office365.calendar.emailAddress import EmailAddress
from office365.calendar.event import Event
from office365.entity_collection import EntityCollection
from office365.mail.itemBody import ItemBody
from office365.runtime.client_value_collection import ClientValueCollection
class EventCollection(EntityCollection):
"""Event's collection"""
def __init__(self, context, resource_path=None):
super(EventCollection, self).__init__(context, Event, resource_path)
def add(self, subject, body, start, end, attendees_emails):
"""
:param list[str] attendees_emails: The collection of attendees emails for the event.
:param datetime.datetime end: The date, time, and time zone that the event ends.
By default, the end time is in UTC.
:param datetime.datetime start: The date, time, and time zone that the event starts.
By default, the start time is in UTC.
:param str body: The body of the message associated with the event. It can be in HTML format.
:param str subject: The text of the event's subject line.
:rtype: Event
"""
attendees_list = [Attendee(EmailAddress(email), attendee_type="required") for email in attendees_emails]
payload = {
"subject": subject,
"body": ItemBody(body, "HTML"),
"start": DateTimeTimeZone.parse(start),
"end": DateTimeTimeZone.parse(end),
"attendees": ClientValueCollection(Attendee, attendees_list)
}
return self.add_from_json(payload)
# --- release/stubs.min/System/__init___parts/IAsyncResult.py (htlcnn/ironpython-stubs, MIT license) ---
class IAsyncResult:
    """ Represents the status of an asynchronous operation. """
    def __init__(self, *args):
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    AsyncState = property(lambda self: object(), lambda self, v: None, lambda self: None)
    """Gets a user-defined object that qualifies or contains information about an asynchronous operation.

    Get: AsyncState(self: IAsyncResult) -> object
    """

    AsyncWaitHandle = property(lambda self: object(), lambda self, v: None, lambda self: None)
    """Gets a System.Threading.WaitHandle that is used to wait for an asynchronous operation to complete.

    Get: AsyncWaitHandle(self: IAsyncResult) -> WaitHandle
    """

    CompletedSynchronously = property(lambda self: object(), lambda self, v: None, lambda self: None)
    """Gets a value that indicates whether the asynchronous operation completed synchronously.

    Get: CompletedSynchronously(self: IAsyncResult) -> bool
    """

    IsCompleted = property(lambda self: object(), lambda self, v: None, lambda self: None)
    """Gets a value that indicates whether the asynchronous operation has completed.

    Get: IsCompleted(self: IAsyncResult) -> bool
    """
# --- pb_model/fields.py (platonfloria/django-pb-model, MIT license) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import logging
import json
import uuid
from django.db import models
from django.conf import settings
from django.utils import timezone
from google.protobuf.descriptor import FieldDescriptor
logging.basicConfig()
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.WARNING)
if settings.DEBUG:
LOGGER.setLevel(logging.DEBUG)
PB_FIELD_TYPE_TIMESTAMP = FieldDescriptor.MAX_TYPE + 1
PB_FIELD_TYPE_REPEATED = FieldDescriptor.MAX_TYPE + 2
PB_FIELD_TYPE_MAP = FieldDescriptor.MAX_TYPE + 3
PB_FIELD_TYPE_MESSAGE = FieldDescriptor.MAX_TYPE + 4
PB_FIELD_TYPE_REPEATED_MESSAGE = FieldDescriptor.MAX_TYPE + 5
PB_FIELD_TYPE_MESSAGE_MAP = FieldDescriptor.MAX_TYPE + 6
def _defaultfield_to_pb(pb_obj, pb_field, dj_field_value):
""" handling any fields conversion to protobuf
"""
LOGGER.debug("Django Value field, assign proto msg field: {} = {}".format(pb_field.name, dj_field_value))
if sys.version_info < (3,) and type(dj_field_value) is buffer:
dj_field_value = bytes(dj_field_value)
setattr(pb_obj, pb_field.name, dj_field_value)
def _defaultfield_from_pb(instance, dj_field_name, pb_field, pb_value):
""" handling any fields setting from protobuf
"""
LOGGER.debug("Django Value Field, set dj field: {} = {}".format(dj_field_name, pb_value))
setattr(instance, dj_field_name, pb_value)
def _datetimefield_to_pb(pb_obj, pb_field, dj_field_value):
"""handling Django DateTimeField field
:param pb_obj: protobuf message obj which is return value of to_pb()
:param pb_field: protobuf message field which is current processing field
:param dj_field_value: Currently proecessing django field value
:returns: None
"""
if getattr(getattr(pb_obj, pb_field.name), 'FromDatetime', False):
if settings.USE_TZ:
dj_field_value = timezone.make_naive(dj_field_value, timezone=timezone.utc)
getattr(pb_obj, pb_field.name).FromDatetime(dj_field_value)
def _datetimefield_from_pb(instance, dj_field_name, pb_field, pb_value):
"""handling datetime field (Timestamp) object to dj field
:param dj_field_name: Currently target django field's name
:param pb_value: Currently processing protobuf message value
:returns: None
"""
dt = pb_value.ToDatetime()
if settings.USE_TZ:
dt = timezone.localtime(timezone.make_aware(dt, timezone.utc))
# FIXME: not datetime field
setattr(instance, dj_field_name, dt)
def _uuid_to_pb(pb_obj, pb_field, dj_field_value):
"""handling Django UUIDField field
:param pb_obj: protobuf message obj which is return value of to_pb()
:param pb_field: protobuf message field which is current processing field
:param dj_field_value: Currently proecessing django field value
:returns: None
"""
setattr(pb_obj, pb_field.name, str(dj_field_value))
def _uuid_from_pb(instance, dj_field_name, pb_field, pb_value):
"""handling string object to dj UUIDField
:param dj_field_name: Currently target django field's name
:param pb_value: Currently processing protobuf message value
:returns: None
"""
setattr(instance, dj_field_name, uuid.UUID(pb_value))
class ProtoBufFieldMixin(object):
@staticmethod
def to_pb(pb_obj, pb_field, dj_field_value):
raise NotImplementedError()
@staticmethod
def from_pb(instance, dj_field_name, pb_field, pb_value):
raise NotImplementedError()
class JSONField(models.TextField):
def from_db_value(self, value, expression, connection, context=None):
return self._deserialize(value)
def to_python(self, value):
if isinstance(value, str):
return self._deserialize(value)
return value
def get_prep_value(self, value):
return json.dumps(value)
def _deserialize(self, value):
if value is None:
return None
return json.loads(value)
class ArrayField(JSONField, ProtoBufFieldMixin):
@staticmethod
def to_pb(pb_obj, pb_field, dj_field_value):
getattr(pb_obj, pb_field.name).extend(dj_field_value)
@staticmethod
def from_pb(instance, dj_field_name, pb_field, pb_value):
setattr(instance, dj_field_name, list(pb_value))
class MapField(JSONField, ProtoBufFieldMixin):
@staticmethod
def to_pb(pb_obj, pb_field, dj_field_value):
getattr(pb_obj, pb_field.name).update(dj_field_value)
@staticmethod
def from_pb(instance, dj_field_name, pb_field, pb_value):
setattr(instance, dj_field_name, dict(pb_value))
class RepeatedMessageField(models.ManyToManyField, ProtoBufFieldMixin):
class Descriptor(models.fields.related_descriptors.ManyToManyDescriptor):
def __init__(self, field_name, index_field_name, rel, reverse=False):
super(RepeatedMessageField.Descriptor, self).__init__(rel, reverse)
self._field_name = field_name
self._index_field_name = index_field_name
def __get__(self, instance, cls=None):
if instance is None:
raise AttributeError('Can only be accessed via an instance.')
if self._field_name not in instance.__dict__:
instance.__dict__[self._field_name] = [self.related_manager_cls(instance).get(id=id_) for id_ in getattr(instance, self._index_field_name)]
return instance.__dict__[self._field_name]
def __set__(self, instance, value):
instance.__dict__[self._field_name] = value
def __init__(self, *args, **kwargs):
super(RepeatedMessageField, self).__init__(default=[], *args, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super(RepeatedMessageField, self).deconstruct()
kwargs.pop('default')
return name, path, args, kwargs
def contribute_to_class(self, cls, name):
index_field_name = '%s_index' % name
index_field = JSONField(default=[], editable=False, blank=True)
index_field.creation_counter = self.creation_counter
cls.add_to_class(index_field_name, index_field)
super(RepeatedMessageField, self).contribute_to_class(cls, name)
setattr(cls, self.attname, RepeatedMessageField.Descriptor(name, index_field_name, self.remote_field, reverse=False))
def save(self, instance):
for message in getattr(instance, self.attname):
type(instance).__dict__[self.attname].related_manager_cls(instance).add(message)
setattr(instance, '%s_index' % self.attname, [q.id for q in instance.__dict__[self.attname]])
def load(self, instance):
getattr(instance, self.attname)
@staticmethod
def to_pb(pb_obj, pb_field, dj_field_value):
getattr(pb_obj, pb_field.name).extend([m.to_pb() for m in dj_field_value])
@staticmethod
def from_pb(instance, dj_field_name, pb_field, pb_value):
related_model = instance._meta.get_field(dj_field_name).related_model
setattr(instance, dj_field_name, [related_model().from_pb(pb_message) for pb_message in pb_value])
class MessageMapField(models.ManyToManyField, ProtoBufFieldMixin):
class Descriptor(models.fields.related_descriptors.ManyToManyDescriptor):
def __init__(self, field_name, index_field_name, rel, reverse=False):
super(MessageMapField.Descriptor, self).__init__(rel, reverse)
self._field_name = field_name
self._index_field_name = index_field_name
def __get__(self, instance, cls=None):
if instance is None:
raise AttributeError('Can only be accessed via an instance.')
if self._field_name not in instance.__dict__:
instance.__dict__[self._field_name] = {key: self.related_manager_cls(instance).get(id=id_) for key, id_ in getattr(instance, self._index_field_name).items()}
return instance.__dict__[self._field_name]
def __set__(self, instance, value):
instance.__dict__[self._field_name] = value
def __init__(self, *args, **kwargs):
super(MessageMapField, self).__init__(default={}, *args, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super(MessageMapField, self).deconstruct()
kwargs.pop('default')
return name, path, args, kwargs
def contribute_to_class(self, cls, name):
index_field_name = '%s_index' % name
index_field = JSONField(default={}, editable=False, blank=True)
index_field.creation_counter = self.creation_counter
cls.add_to_class(index_field_name, index_field)
super(MessageMapField, self).contribute_to_class(cls, name)
setattr(cls, self.attname, MessageMapField.Descriptor(name, index_field_name, self.remote_field, reverse=False))
def save(self, instance):
for message in getattr(instance, self.attname).values():
type(instance).__dict__[self.attname].related_manager_cls(instance).add(message)
setattr(instance, '%s_index' % self.attname, {key: message.id for key, message in instance.__dict__[self.attname].items()})
def load(self, instance):
getattr(instance, self.attname)
@staticmethod
def to_pb(pb_obj, pb_field, dj_field_value):
for key in dj_field_value:
getattr(pb_obj, pb_field.name)[key].CopyFrom(dj_field_value[key].to_pb())
@staticmethod
def from_pb(instance, dj_field_name, pb_field, pb_value):
related_model = instance._meta.get_field(dj_field_name).related_model
setattr(instance, dj_field_name, {key: related_model().from_pb(pb_message) for key, pb_message in pb_value.items()})
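The storage round-trip that `JSONField` relies on can be checked in isolation. These are standalone mirrors of `get_prep_value` and `_deserialize`, re-stated here rather than imported from the module:

```python
import json

def get_prep_value(value):
    # Mirrors JSONField.get_prep_value: serialize to a TEXT column.
    return json.dumps(value)

def deserialize(value):
    # Mirrors JSONField._deserialize: None stays None, strings are parsed.
    if value is None:
        return None
    return json.loads(value)

stored = get_prep_value({"keys": ["a", "b"], "count": 2})
assert deserialize(stored) == {"keys": ["a", "b"], "count": 2}
assert deserialize(None) is None
```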
# --- hydras/utils.py (Gilnaa/Hydras, MIT license) ---
"""
Contains various utility methods.
:file: utils.py
:date: 27/08/2015
:authors:
- Gilad Naaman <gilad@naaman.io>
"""
from typing import Any, Type, Union
import inspect
import enum
import sys
class Endianness(enum.Enum):
BIG = '>'
LITTLE = '<'
HOST = '='
TARGET = None
def is_equivalent_to_little_endian(self):
return self == Endianness.LITTLE or (self == Endianness.HOST and sys.byteorder == 'little')
def is_equivalent_to_big_endian(self):
return self == Endianness.LITTLE or (self == Endianness.HOST and sys.byteorder == 'little')
def create_array(size: Union[int, slice], underlying_type):
# Importing locally in order to avoid weird import-cycle issues
from .array import Array
return Array[size, underlying_type]
def fit_bytes_to_size(byte_string, length):
"""
Ensure the given byte_string is in the correct length
A long byte_string will be truncated, while a short one will be padded.
:param byte_string: The string to fit.
:param length: The required string size.
"""
if length is None:
return byte_string
if len(byte_string) < length:
return padto(byte_string, length)
return byte_string[:length]
def get_as_type(t):
return t if inspect.isclass(t) else type(t)
def get_as_value(v):
return v() if inspect.isclass(v) else v
def mask(length, offset=0):
"""
Generate a bitmask with the given parameter.
:param length: The bit length of the mask.
:param offset: The offset of the mask from the LSB bit. [default: 0]
:return: An integer representing the bit mask.
"""
return ((1 << length) - 1) << offset
def get_type_name(t: Union[Type, Any]) -> str:
return get_as_type(t).__name__
def padto(data, size, pad_val=b'\x00', leftpad=False):
assert type(pad_val) == bytes and len(pad_val) == 1, 'Padding value must be 1 byte'
if len(data) < size:
padding = pad_val * (size - len(data))
if not leftpad:
data += padding
else:
data = padding + data
return data
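Quick checks of the two bit/byte helpers above, re-stated standalone so they run without the package:

```python
def mask(length, offset=0):
    # `length` one-bits shifted left by `offset`, as in hydras.utils.mask.
    return ((1 << length) - 1) << offset

def padto(data, size, pad_val=b'\x00', leftpad=False):
    # Same behaviour as hydras.utils.padto: pad short input, leave long input alone.
    assert type(pad_val) == bytes and len(pad_val) == 1, 'Padding value must be 1 byte'
    if len(data) < size:
        padding = pad_val * (size - len(data))
        data = data + padding if not leftpad else padding + data
    return data

assert mask(4) == 0b1111
assert mask(3, offset=2) == 0b11100
assert padto(b'ab', 4) == b'ab\x00\x00'
assert padto(b'ab', 4, leftpad=True) == b'\x00\x00ab'
```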
# --- hazelcast/protocol/codec/atomic_ref_set_codec.py (tonytheonlypony/hazelcast-python-client, Apache-2.0 license) ---
from hazelcast.serialization.bits import *
from hazelcast.protocol.builtin import FixSizedTypesCodec
from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer
from hazelcast.protocol.codec.custom.raft_group_id_codec import RaftGroupIdCodec
from hazelcast.protocol.builtin import StringCodec
from hazelcast.protocol.builtin import DataCodec
from hazelcast.protocol.builtin import CodecUtil
# hex: 0x0A0500
_REQUEST_MESSAGE_TYPE = 656640
# hex: 0x0A0501
_RESPONSE_MESSAGE_TYPE = 656641
_REQUEST_RETURN_OLD_VALUE_OFFSET = REQUEST_HEADER_SIZE
_REQUEST_INITIAL_FRAME_SIZE = _REQUEST_RETURN_OLD_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES
def encode_request(group_id, name, new_value, return_old_value):
buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
FixSizedTypesCodec.encode_boolean(buf, _REQUEST_RETURN_OLD_VALUE_OFFSET, return_old_value)
RaftGroupIdCodec.encode(buf, group_id)
StringCodec.encode(buf, name)
CodecUtil.encode_nullable(buf, new_value, DataCodec.encode, True)
return OutboundMessage(buf, False)
def decode_response(msg):
msg.next_frame()
return CodecUtil.decode_nullable(msg, DataCodec.decode)
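The fixed-offset frame layout used by `encode_request` can be illustrated without the Hazelcast runtime. The header size below is a stand-in value for illustration, not the real protocol constant:

```python
import struct

REQUEST_HEADER_SIZE = 22            # stand-in; the real constant comes from client_message
BOOLEAN_SIZE_IN_BYTES = 1
RETURN_OLD_VALUE_OFFSET = REQUEST_HEADER_SIZE
INITIAL_FRAME_SIZE = RETURN_OLD_VALUE_OFFSET + BOOLEAN_SIZE_IN_BYTES

buf = bytearray(INITIAL_FRAME_SIZE)
# Fixed-size fields are packed at known offsets right after the header.
struct.pack_into("?", buf, RETURN_OLD_VALUE_OFFSET, True)
assert buf[RETURN_OLD_VALUE_OFFSET] == 1
assert len(buf) == REQUEST_HEADER_SIZE + BOOLEAN_SIZE_IN_BYTES
```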
# --- common/djangoapps/xblock_django/management/commands/ensure_indexes.py (osoco/better-ways-of-thinking-about-software, MIT license) ---
"""
Creates Indexes on contentstore and modulestore databases.
"""
from django.core.management.base import BaseCommand
from xmodule.contentstore.django import contentstore
from xmodule.modulestore.django import modulestore
class Command(BaseCommand):
"""
This command will create indexes on the stores used for both contentstore and modulestore.
"""
args = ''
help = 'Creates the indexes for ContentStore and ModuleStore databases'
def handle(self, *args, **options):
contentstore().ensure_indexes()
modulestore().ensure_indexes()
print('contentstore and modulestore indexes created!')
# --- record/views.py (mhendricks96/record_tracker, MIT license) ---
from django.shortcuts import render
from django.views.generic import TemplateView, ListView, DetailView, CreateView, DeleteView, UpdateView
from .models import Record
from django.urls import reverse_lazy
from .permissions import IsOwnerOrReadOnly
# Create your views here.
class HomeView(TemplateView):
template_name = 'pages/home.html'
class RecordListView(ListView):
permission_classes = (IsOwnerOrReadOnly,)
template_name = 'pages/record_list.html'
model = Record
context_object_name = 'list_of_records'
class RecordDetailView(DetailView):
permission_classes = (IsOwnerOrReadOnly,)
template_name = 'pages/record_detail.html'
model = Record
class RecordCreateView(CreateView):
permission_classes = (IsOwnerOrReadOnly,)
template_name = 'pages/create_record.html'
model = Record
    fields = ['title', 'artist', 'genre', 'added_by']
#success_url = reverse_lazy('record_list')
class RecordUpdateView(UpdateView):
permission_classes = (IsOwnerOrReadOnly,)
template_name = 'pages/update_record.html'
model = Record
    fields = ['title', 'artist', 'genre', 'added_by']
#success_url = reverse_lazy('record_list')
class RecordDeleteView(DeleteView):
permission_classes = (IsOwnerOrReadOnly,)
template_name = 'pages/delete_record.html'
model = Record
    success_url = reverse_lazy('record_list')
# --- ref_task/datasets.py (asappresearch/texrel, MIT license) ---
all_datasets = {
'macdebug': 'macdebug',
'128leftonly': 'ords062',
'128w': 'ords064sc9',
'128valsame': 'ords064sc9',
}
# --- manage_chat/migrations/0007_auto_20170424_0015.py (dduk-ddak/coding-night-live, MIT license) ---
# -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-04-24 00:15
from __future__ import unicode_literals
import django.contrib.postgres.fields.jsonb
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('manage_chat', '0006_poll_hash_value'),
]
operations = [
migrations.AlterField(
model_name='poll',
name='answer',
field=django.contrib.postgres.fields.jsonb.JSONField(),
),
migrations.AlterField(
model_name='poll',
name='answer_count',
field=django.contrib.postgres.fields.jsonb.JSONField(),
),
]
# --- assessment/migrations/0001_initial.py (kenware/Assessment, MIT license) ---
# Generated by Django 2.1.4 on 2018-12-12 19:39
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Answer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(default=datetime.date(2018, 12, 12))),
('updated_at', models.DateField(blank=True, null=True)),
('created_by', models.CharField(blank=True, max_length=250)),
('deleted_at', models.DateField(blank=True, null=True)),
('choice_text', models.TextField()),
('image_url', models.CharField(blank=True, max_length=250)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Assessment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(default=datetime.date(2018, 12, 12))),
('updated_at', models.DateField(blank=True, null=True)),
('created_by', models.CharField(blank=True, max_length=250)),
('deleted_at', models.DateField(blank=True, null=True)),
('title', models.CharField(max_length=50)),
('total_mark', models.DecimalField(decimal_places=10, max_digits=19)),
('max_time', models.TimeField(default=datetime.time(0, 59))),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Question',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(default=datetime.date(2018, 12, 12))),
('updated_at', models.DateField(blank=True, null=True)),
('created_by', models.CharField(blank=True, max_length=250)),
('deleted_at', models.DateField(blank=True, null=True)),
('mark', models.DecimalField(decimal_places=10, max_digits=19)),
('number', models.IntegerField()),
('question_text', models.TextField()),
('multi_choice', models.BooleanField(default=False)),
('image_url', models.CharField(blank=True, max_length=250, null=True)),
('assessments', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='assessment.Assessment')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Score',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(default=datetime.date(2018, 12, 12))),
('updated_at', models.DateField(blank=True, null=True)),
('created_by', models.CharField(blank=True, max_length=250)),
('deleted_at', models.DateField(blank=True, null=True)),
('assessment_score', models.DecimalField(decimal_places=10, max_digits=19)),
('start_time', models.DateField(blank=True, null=True)),
('end_time', models.DateField(blank=True, null=True)),
('assessments', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='assessment.Assessment')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.AddField(
model_name='answer',
name='questions',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='assessment.Question'),
),
]
# --- app/admin/controllers.py (RobinRheem/mememime, MIT license) ---
from flask import Blueprint
admin_page = Blueprint('admin', __name__)
@admin_page.route('/admin/objects', methods=['GET'])
def get():
"""
FIXME: Create debug purpose file input UI
"""
return '''
<!doctype html>
<title>Upload new File</title>
<h1>Upload new File</h1>
<form method=post enctype=multipart/form-data action=/api/v1/objects>
<input type=file name=graphic_model>
<input type=submit value=Upload>
</form>
'''
# --- mouse_burrows/hpc/templates/pass3_single.py (david-zwicker/cv-mouse-burrows, BSD-3-Clause license) ---
#!/usr/bin/env python2
from __future__ import division
import sys
import logging
{ADD_PYTHON_PATHS} # @UndefinedVariable
from numpy import array # @UnusedImport
from mouse_burrows.algorithm.pass3 import ThirdPass
from mouse_burrows.hpc.project import process_trials
from video.io.backend_ffmpeg import FFmpegError
# configure basic logging, which will be overwritten later
logging.basicConfig()
# set specific parameters for this job
parameters = {SPECIFIC_PARAMETERS} # @UndefinedVariable
# set job parameters
job_id = sys.argv[1]
parameters.update({{
'base_folder': "{JOB_DIRECTORY}",
'logging/folder': ".",
'output/folder': ".",
'resources/pass3/job_id': job_id,
}})
# do the second pass scan
for trial in process_trials("{LOG_FILE}" % job_id, 10):
try:
pass3 = ThirdPass("{NAME}", parameters=parameters, read_data=True)
pass3.process()
except FFmpegError:
print('FFmpeg error occurred. Continue.')
| 26 | 74 | 0.731809 | 120 | 962 | 5.7 | 0.616667 | 0.02924 | 0.046784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009926 | 0.162162 | 962 | 36 | 75 | 26.722222 | 0.83871 | 0.218295 | 0 | 0 | 0 | 0 | 0.168011 | 0.02957 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.173913 | 0.304348 | 0 | 0.304348 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
b14d7b20e822073f7a6ab1f49179d686fd6801cc | 214 | py | Python | ternary.py | meme-squad/ternary-boolean | 74088012db6f884b16173570d243c7a90fc07b73 | [
"MIT"
] | null | null | null | ternary.py | meme-squad/ternary-boolean | 74088012db6f884b16173570d243c7a90fc07b73 | [
"MIT"
] | null | null | null | ternary.py | meme-squad/ternary-boolean | 74088012db6f884b16173570d243c7a90fc07b73 | [
"MIT"
] | null | null | null | ternary = [0, 1, 2]
ternary[0] = "true"
ternary[1] = "maybe"
ternary[2] = "false"
x = 34
y = 34
if x > y:
print(ternary[0])
elif x < y:
print(ternary[2])
else:
    print(ternary[1])
# --- run.py (newinh/blinker-redis-event-example, MIT license) ---
import atexit
broker_url = os.environ.get('CELERY_BROKER_URL', 'redis://localhost:6379/0')
result_backend = os.environ.get('CELERY_RESULT_BACKEND', 'redis://localhost:6379/0')
accept_content = ['pickle']
task_serializer = 'pickle'
result_serializer = 'pickle'
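One pitfall worth noting for settings modules like this: a stray trailing comma after an assignment silently turns the value into a 1-tuple instead of a string, which Celery would then fail to parse as a broker URL. A quick generic-Python illustration (not BioPhi code):

```python
# A trailing comma makes the right-hand side a 1-tuple, not a string.
broker_url_bad = "redis://localhost:6379/0",   # note the comma
broker_url_good = "redis://localhost:6379/0"

print(type(broker_url_bad).__name__)   # tuple
print(type(broker_url_good).__name__)  # str
```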


# --- run.py (newinh/blinker-redis-event-example, MIT) ---
import atexit
from threading import Thread
from app.event_broker import event_broker
from app import app
from app.dispatcher import run_event_handling_loop
event_thread = run_event_handling_loop()
def close_running_thread(t: Thread):
    event_broker.rpush('dbtool:evet', 'quit')
    t.join()
atexit.register(close_running_thread, event_thread)
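The shutdown pattern above, push a sentinel message so the background loop exits, then `join()` the thread from an `atexit` hook, can be sketched without Redis using a plain in-process queue (all names below are illustrative, not from the original app):

```python
import atexit
import queue
import threading

events = queue.Queue()

def event_loop():
    # consume events until the 'quit' sentinel arrives
    while events.get() != 'quit':
        pass

t = threading.Thread(target=event_loop)
t.start()

def close_running_thread(thread: threading.Thread):
    events.put('quit')  # sentinel, mirroring rpush(..., 'quit') above
    thread.join()

# ensure the worker is shut down cleanly when the interpreter exits
atexit.register(close_running_thread, t)
```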
app.run(host='0.0.0.0', port=5000)


# --- gooey/gui/components/widgets/core/text_input.py (Qteal/Gooey, MIT) ---
import wx
from gooey.gui.util.filedrop import FileDrop
from gooey.util.functional import merge
from gooey.gui.components.mouse import notifyMouseEvent
class TextInput(wx.Panel):

    def __init__(self, parent, *args, **kwargs):
        super(TextInput, self).__init__(parent)
        self.widget = wx.TextCtrl(self, *args, **kwargs)
        dt = FileDrop(self.widget)
        self.widget.SetDropTarget(dt)
        self.widget.SetMinSize((0, -1))
        self.widget.SetDoubleBuffered(True)
        self.widget.AppendText('')
        self.layout()
        self.widget.Bind(wx.EVT_LEFT_DOWN, notifyMouseEvent)

    def layout(self):
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.widget, 0, wx.EXPAND)
        self.SetSizer(sizer)

    def setValue(self, value):
        self.widget.Clear()
        self.widget.AppendText(str(value))
        self.widget.SetInsertionPoint(0)

    def getValue(self):
        return self.widget.GetValue()

    def SetDropTarget(self, target):
        self.widget.SetDropTarget(target)


def PasswordInput(_, parent, *args, **kwargs):
    style = {'style': wx.TE_PASSWORD}
    return TextInput(parent, *args, **merge(kwargs, style))


def MultilineTextInput(_, parent, *args, **kwargs):
    style = {'style': wx.TE_MULTILINE}
    return TextInput(parent, *args, **merge(kwargs, style))
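`PasswordInput` and `MultilineTextInput` rely on `gooey.util.functional.merge` to overlay a forced style onto the caller's kwargs. A plausible re-implementation of that helper (an assumption; the real Gooey function may differ) behaves like a left-to-right dict merge where later dicts win:

```python
def merge(*dicts):
    # Later dicts win on key conflicts, mirroring how the forced 'style'
    # entry overrides anything the caller passed in kwargs.
    out = {}
    for d in dicts:
        out.update(d)
    return out

kwargs = {'style': 0, 'value': 'secret'}
style = {'style': 'wx.TE_PASSWORD'}  # string stand-in for the wx flag
print(merge(kwargs, style))  # {'style': 'wx.TE_PASSWORD', 'value': 'secret'}
```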


# --- liaoxuefeng.com/0322-RESTfulApi.py (snownothing/Python, MIT) ---
#!/usr/bin/env python
# coding: utf-8
class Chain(object):
    def __init__(self, path=''):
        self._path = path

    def __getattr__(self, path):
        return Chain('%s/%s' % (self._path, path))

    def __str__(self):
        return self._path

    __repr__ = __str__

    # def __call__(self, path=''):
    #     return Chain('%s/%s' % (self._path, path))
    def __call__(self, *args, **kw):
        return Chain('%s/%s' % (self._path, args[0]))


print(Chain().status.user.timeline.list)
print(Chain().users('michael').repos)


# --- Blog_Website/BlogApp/views.py (MetinIlgar/BlogWebsite, MIT) ---
from django.shortcuts import render
from .models import *
# Create your views here.
def index(request):
    blogPosts = BlogPost.objects.all()
    aboutMe = AboutMe.objects.all().first()
    return render(request, "index.html", {"blogPosts": blogPosts, "aboutMe": aboutMe})


# --- jpylib/jtests/_tests.py (JiniousChoi/encyclopedia-in-code, MIT) ---
#!/usr/bin/python3
## author: jinchoiseoul@gmail.com
def parse_io(inp, out):
    ''' io means input/output for testcases;
    It splitlines them and strip the elements
    @param inp: multi-lined str
    @param out: multi-lined str
    @return (inp::[str], out::[str]) '''
    inp = [i.strip() for i in inp.splitlines() if i.strip()]
    out = [o.strip() for o in out.splitlines() if o.strip()]
    return inp, out


def joiner(iterable, sep=' '):
    ''' @return e.g. [1, 2] -> "1 2" '''
    return sep.join(map(str, iterable))


def strips(doc):
    ''' @return strip each line of doc '''
    return '\n'.join(line.strip() for line in doc.splitlines())


def lstrips(doc):
    ''' @return lstrip each line of doc '''
    return '\n'.join(line.lstrip() for line in doc.splitlines())


def rstrips(doc):
    ''' @return rstrip each line of doc '''
    return '\n'.join(line.rstrip() for line in doc.splitlines())
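A quick usage sketch of the helpers above (`parse_io` and `joiner` are restated verbatim so the snippet runs standalone):

```python
def parse_io(inp, out):
    # split multi-line strings into lists of stripped, non-empty lines
    inp = [i.strip() for i in inp.splitlines() if i.strip()]
    out = [o.strip() for o in out.splitlines() if o.strip()]
    return inp, out

def joiner(iterable, sep=' '):
    return sep.join(map(str, iterable))

inp, out = parse_io("  1 2 \n\n 3 4 ", " 3 \n 7 ")
print(inp)                # ['1 2', '3 4']
print(out)                # ['3', '7']
print(joiner([1, 2, 3]))  # 1 2 3
```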


# --- docs/assets/greeting.py (pablodevs/StarwarsBlog, OML) ---
def blue(_str):
    return f"\033[0;33m{_str}\033[0m"


print(f"""
Hello 😁 ! Use the terminal to code!
1. Start the dev server by running {blue("$ npm run start")}
2. You can find a video tutorial and explanation on the README.md file.
3. Always read the terminal output, it's your best tool for debugging!
""") | 31.2 | 71 | 0.698718 | 58 | 312 | 3.741379 | 0.844828 | 0.101382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050781 | 0.179487 | 312 | 10 | 72 | 31.2 | 0.792969 | 0 | 0 | 0 | 0 | 0 | 0.84984 | 0.073482 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0.125 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b1a8385b905d16e794b096b0b31827b9d8084ca1 | 180 | py | Python | File Handling/WriteToFile.py | UgainJain/LearnPythonByDoing | 4784c334d7f485223a29592ab47c6c017ec67145 | [
"MIT"
] | 5 | 2018-11-06T11:15:35.000Z | 2020-07-29T21:54:28.000Z | File Handling/WriteToFile.py | UgainJain/LearnPythonByDoing | 4784c334d7f485223a29592ab47c6c017ec67145 | [
"MIT"
] | 1 | 2018-11-13T13:22:11.000Z | 2018-11-13T13:22:11.000Z | File Handling/WriteToFile.py | UgainJain/LearnPythonByDoing | 4784c334d7f485223a29592ab47c6c017ec67145 | [
"MIT"
] | 11 | 2018-11-06T11:12:21.000Z | 2019-07-12T11:43:05.000Z | f = open("Writetofile.txt", "a")
f.write("Lipika\n")
f.write("Ugain\n")
f.write("Shivam\n")
f.write("Sanjeev\n")
print("Data written to the file using append mode")
f.close()


# --- src/electionguard_tools/scripts/__init__.py (thommignot/electionguard-python, MIT) ---
from electionguard_tools.scripts import sample_generator
from electionguard_tools.scripts.sample_generator import (
    DEFAULT_NUMBER_OF_BALLOTS,
    DEFAULT_SPOIL_RATE,
    DEFAULT_USE_ALL_GUARDIANS,
    DEFAULT_USE_PRIVATE_DATA,
    ElectionSampleDataGenerator,
)

__all__ = [
    "DEFAULT_NUMBER_OF_BALLOTS",
    "DEFAULT_SPOIL_RATE",
    "DEFAULT_USE_ALL_GUARDIANS",
    "DEFAULT_USE_PRIVATE_DATA",
    "ElectionSampleDataGenerator",
    "sample_generator",
]


# --- _GTW/_OMP/_PAP/Legal_Entity.py (Tapyr/tapyr, BSD-3-Clause) ---
# -*- coding: utf-8 -*-
# Copyright (C) 2013 Mag. Christian Tanzer All rights reserved
# Glasauergasse 32, A--1130 Wien, Austria. tanzer@swing.co.at
# #*** <License> ************************************************************#
# This module is part of the package GTW.OMP.PAP.
#
# This module is licensed under the terms of the BSD 3-Clause License
# <http://www.c-tanzer.at/license/bsd_3c.html>.
# #*** </License> ***********************************************************#
#
#++
# Name
# GTW.OMP.PAP.Legal_Entity
#
# Purpose
# Model a legal entity that isn't a natural person
#
# Revision Dates
# 4-Mar-2013 (CT) Creation
# 13-Jun-2014 (RS) `_Ancestor_Essence` is now `_PAP.Group`
# remove attributes inherited from ancestor
# ««revision-date»»···
#--
from _MOM.import_MOM import *
from _GTW._OMP._PAP.Attr_Type import *
from _GTW import GTW
from _GTW._OMP._PAP import PAP
from _TFL.I18N import _
import _GTW._OMP._PAP.Group
_Ancestor_Essence = PAP.Group
class _PAP_Legal_Entity_ (_Ancestor_Essence) :
    """Model a legal entity that isn't a natural person."""

    _real_name = "Legal_Entity"

    is_partial = True

    class _Attributes (_Ancestor_Essence._Attributes) :

        _Ancestor = _Ancestor_Essence._Attributes

    # end class _Attributes

Legal_Entity = _PAP_Legal_Entity_ # end class

if __name__ != "__main__" :
    GTW.OMP.PAP._Export ("*")
### __END__ GTW.OMP.PAP.Legal_Entity


# --- lampy/tests/test_typing_math.py (pyccel/lampy, MIT) ---
# TODO
#g = lambda xs,ys,z: [[x + y*z for x in xs] for y in ys]
#g = lambda xs,y,z: [x + y*z for x in xs]
import numpy as np
import time

from pyccel.decorators import types, pure
from pyccel.ast.datatypes import NativeInteger, NativeReal, NativeComplex, NativeBool
from lampy.lambdify import _lambdify
from lampy import TypeVariable, TypeTuple, TypeList
from lampy import add, mul
from sympy import sin  # assumed source of `sin` used in the lambda below (sympy is already used in teardown)
#=========================================================
def test_map_sin_list(**settings):
    settings['semantic_only'] = True

    L = lambda xs: map(sin, xs)

    type_L = _lambdify( L, **settings )
    assert( isinstance( type_L, TypeList ) )

    parent = type_L.parent
    assert( isinstance( parent.dtype, NativeReal ) )
    assert( parent.rank == 0 )
    assert( parent.precision == 8 )
    assert( not parent.is_stack_array )

    print('DONE.')

#==============================================================================
# CLEAN UP SYMPY NAMESPACE
#==============================================================================
def teardown_module():
    from sympy import cache
    cache.clear_cache()

def teardown_function():
    from sympy import cache
    cache.clear_cache()

###########################################
#if __name__ == '__main__':
#    settings = {'semantic_only' : True}
#
#    test_map_sin_list(**settings)


# --- carberretta/utils/checks.py (namanyt/Carberretta, BSD-3-Clause) ---
from discord.ext import commands
from carberretta import Config
class CustomCheckFailure(commands.CheckAnyFailure):
    def __init__(self, message):
        self.msg = message


class CanNotVerifyQt(CustomCheckFailure):
    def __init__(self):
        super().__init__("You can not verify QTs.")


def can_verify_qts():
    async def predicate(ctx):
        if ctx.message.author.id != Config.QT_ID:
            raise CanNotVerifyQt()
        return True

    return commands.check(predicate)


# --- src/apps/datatables/models/draw_request_column_filter.py (RogerTangos/datahub-stub, MIT) ---
'''
Represents a single filter on a column.
'''
class DrawRequestColumnFilter:
    '''
    Initialize the filter with the column name, filter text,
    and operation (must be "=", "<=", ">=", "<", ">", or "!=").
    '''
    def __init__(self, column_name, filter_text, operation):
        self.name = column_name
        self.text = filter_text
        self.operation = operation

    def __repr__(self):
        return "ColFilter(name=%s, text=%s, op=%s)" % (self.name, self.text, self.operation)

    __str__ = __repr__


# --- brepo/__init__.py (jonmoore/brepo, BSD-3-Clause) ---
# -*- coding: utf-8 -*-
__author__ = 'Jonathan Moore'
__email__ = 'firstnamelastnamephd@gmail.com'
__version__ = '0.1.0'


# --- core_admin/old_apps/praia/admin.py (linea-it/tno, MIT) ---
# from django.contrib import admin
# from .models import (AstrometryAsteroid, AstrometryInput, AstrometryJob,
#                      AstrometryOutput, Configuration, Run)


# @admin.register(Run)
# class PraiaRunsAdmin(admin.ModelAdmin):
#     list_display = ('id', 'status', 'owner', 'proccess', 'catalog', 'configuration',
#                     'start_time', 'finish_time', 'count_objects', 'count_success',
#                     'count_failed', 'count_warning', 'count_not_executed', 'step')


# @admin.register(Configuration)
# class PraiaConfigAdmin(admin.ModelAdmin):
#     list_display = ('id', 'displayname', 'owner', 'creation_date', )


# @admin.register(AstrometryAsteroid)
# class AstrometryAsteroidAdmin(admin.ModelAdmin):
#     list_display = ('id', 'astrometry_run', 'name', 'number', 'status',
#                     'ccd_images', 'catalog_rows', 'execution_time', 'error_msg',)
#     search_fields = ('name',)


# @admin.register(AstrometryInput)
# class AstrometryInputAdmin(admin.ModelAdmin):
#     list_display = ('id', 'asteroid', 'input_type', 'filename',
#                     'file_size', 'file_type', 'file_path', 'execution_time', 'error_msg',)


# @admin.register(AstrometryOutput)
# class AstrometryOutputAdmin(admin.ModelAdmin):
#     list_display = ('id', 'asteroid', 'type', 'catalog', 'ccd_image', 'filename',
#                     'file_size', 'file_type', 'file_path',)


# @admin.register(AstrometryJob)
# class AstrometryJobs(admin.ModelAdmin):
#     list_display = ('id', 'astrometry_run', 'asteroid', 'job_status', 'clusterid',
#                     'procid', 'submit_time', 'start_time', 'finish_time', 'execution_time')


# --- sim/model/environmentmetric.py (bennyboer/py_epi_sim, MIT) ---
from enum import Enum
"""
AUTHOR: Benjamin Eder
"""


class EnvironmentMetric(Enum):
    """Enumeration of possible environment metrics in the cell matrix"""

    EUCLIDEAN = 'Euclidean'
    MANHATTAN = 'Manhattan'
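The two enum members correspond to the two standard grid distance metrics; a small standalone sketch (the `distance` helper is hypothetical, not part of the simulation code, and the enum is restated so the snippet runs on its own) shows how they differ:

```python
import math
from enum import Enum

class EnvironmentMetric(Enum):
    """Restated here so the sketch is self-contained."""
    EUCLIDEAN = 'Euclidean'
    MANHATTAN = 'Manhattan'

def distance(metric: EnvironmentMetric, a, b):
    # hypothetical helper: straight-line vs. city-block distance on a grid
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    if metric is EnvironmentMetric.EUCLIDEAN:
        return math.hypot(dx, dy)
    return dx + dy

print(distance(EnvironmentMetric.EUCLIDEAN, (0, 0), (3, 4)))  # 5.0
print(distance(EnvironmentMetric.MANHATTAN, (0, 0), (3, 4)))  # 7
```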


# --- djangocms_articles/cms_plugins.py (divio/djangocms-articles, BSD-3-Clause) ---
# -*- coding: utf-8 -*-
# from cms.plugin_base import CMSPluginBase
# from cms.plugin_pool import plugin_pool

# from . import models


# class ...(CMSPluginBase):
#     model = models.YourModel
#     name = 'YourModel Plugin'
#     render_template = 'templates/plugin.html'
#     allow_children = True


# plugin_pool.register_plugin(...)


# --- classification/views/views_autocomplete.py (SACGF/variantgrid, RSA-MD) ---
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from classification.models import EvidenceKey
from library.constants import MINUTE_SECS
from library.django_utils.autocomplete_utils import AutocompleteView
@method_decorator(cache_page(MINUTE_SECS), name='dispatch')
class EvidenceKeyAutocompleteView(AutocompleteView):
    fields = ['key']

    def get_user_queryset(self, user):
        return EvidenceKey.objects.all()


# --- src/origin/models/auth.py (RasmusGodske/eo-platform-utils, Apache-2.0) ---
from typing import List
from dataclasses import dataclass
from datetime import datetime, timezone
from origin.serialize import Serializable
@dataclass
class InternalToken(Serializable):
    """
    TODO
    """
    issued: datetime
    expires: datetime

    # The user performing action(s)
    actor: str

    # The subject we're working with data on behalf of
    subject: str

    # Scopes granted on subject's data
    # meteringpoints.read, measurements.read, emissions.read, etc.
    scope: List[str]

    @property
    def is_valid(self) -> bool:
        """
        A token is valid only if it's issued before now, and expires
        after now.
        """
        return self.issued <= datetime.now(tz=timezone.utc) < self.expires
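The validity check boils down to `issued <= now < expires`. A standalone sketch of just that logic (dropping the `Serializable` base class, which isn't available outside the project; `TokenWindow` is an illustrative stand-in):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TokenWindow:
    # minimal stand-in for InternalToken's time-window logic
    issued: datetime
    expires: datetime

    @property
    def is_valid(self) -> bool:
        return self.issued <= datetime.now(tz=timezone.utc) < self.expires

now = datetime.now(tz=timezone.utc)
live = TokenWindow(issued=now - timedelta(minutes=1), expires=now + timedelta(minutes=5))
stale = TokenWindow(issued=now - timedelta(hours=2), expires=now - timedelta(hours=1))
print(live.is_valid)   # True
print(stale.is_valid)  # False
```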


# --- lambda/say_hello/lambda_function.py (aws-samples/strong-typing-practices-with-cdk, MIT-0) ---
def lambda_handler(event, context):
    name = event.get("name")
    if not name:
        name = "person who does not want to give their name"
    return {"hello": f"hello {name}"}


# --- WEB-INF/mvc/root/dienste/doabfrage/weitereabfragen/WeitereAbfragenController.py (wnagy/pymframe, Apache-2.0) ---
# -*- coding: iso-8859-15 -*-
"""
This controller is called by the framework.
It is optimized for a GRID layout.
"""
from controller import Controller
from helper.utility import Utility


class WeitereAbfragenController(Controller):

    def get(self):
        self.view('info.tpl')
| 17.277778 | 48 | 0.684887 | 36 | 311 | 5.916667 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02459 | 0.215434 | 311 | 17 | 49 | 18.294118 | 0.848361 | 0.376206 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
4990113cee3b77a0ddeae3e7b0da2be2d4d4dbf0 | 426 | py | Python | 01-first-python-project/while_exercises.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | 64 | 2018-05-25T01:26:31.000Z | 2022-03-03T20:42:20.000Z | 01-first-python-project/while_exercises.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | null | null | null | 01-first-python-project/while_exercises.py | mkumar-552/learn-programming-with-python- | cb0aa11c741019959ce3db84552a7e012486092e | [
"MIT"
] | 72 | 2018-05-24T15:04:46.000Z | 2022-03-08T04:19:18.000Z | # print_squares_upto_limit(30)
# //For limit = 30, output would be 1 4 9 16 25
#
# print_cubes_upto_limit(30)
# //For limit = 30, output would be 1 8 27
def print_squares_upto_limit(limit):
    i = 1
    while i * i < limit:
        print(i * i, end=" ")
        i = i + 1


def print_cubes_upto_limit(limit):
    i = 1
    while i * i * i < limit:
        print(i * i * i, end=" ")
        i = i + 1


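A side-effect-free variant (an illustration, not part of the original exercise) returns the squares as a list instead of printing them, which makes the loop's behavior easy to verify:

```python
def squares_upto_limit(limit):
    # Collect the squares strictly below the limit rather than printing,
    # mirroring the while-loop structure of print_squares_upto_limit.
    squares = []
    i = 1
    while i * i < limit:
        squares.append(i * i)
        i += 1
    return squares

print(squares_upto_limit(30))  # [1, 4, 9, 16, 25]
```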
print_cubes_upto_limit(80) | 22.421053 | 47 | 0.584507 | 75 | 426 | 3.12 | 0.293333 | 0.068376 | 0.179487 | 0.24359 | 0.65812 | 0.65812 | 0.495727 | 0.495727 | 0.299145 | 0.299145 | 0 | 0.082781 | 0.29108 | 426 | 19 | 48 | 22.421053 | 0.692053 | 0.333333 | 0 | 0.363636 | 0 | 0 | 0.007168 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.181818 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
4994e2101a309bde18a4634a9dd6c24e839f789f | 1,432 | py | Python | servo/servo2.py | dambergn/rpi-py-test-code | 944e479dc478d63a27964270cf0efbf6cb60f8ec | [
"MIT"
] | null | null | null | servo/servo2.py | dambergn/rpi-py-test-code | 944e479dc478d63a27964270cf0efbf6cb60f8ec | [
"MIT"
] | null | null | null | servo/servo2.py | dambergn/rpi-py-test-code | 944e479dc478d63a27964270cf0efbf6cb60f8ec | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BOARD)
# Connect to pin 7 not GPIO 7
GPIO.setup(7, GPIO.OUT)
p = GPIO.PWM(7, 50)
p.start(7.5)
try:
    while True:
        print("Full Range Movement")
        print("Move to Neutral")
        p.ChangeDutyCycle(7.5)
        time.sleep(1)
        print("Move to 180")
        p.ChangeDutyCycle(12.5)
        time.sleep(1)
        print("Move to 0")
        p.ChangeDutyCycle(2.5)
        time.sleep(1)

        print("Precision Movements")
        # Step through the range in 18-degree increments; duty cycle maps
        # linearly from 2.5 (0 degrees) to 12.5 (180 degrees), so each
        # 18-degree step adds 1.0 to the duty cycle.
        for angle in range(18, 181, 18):
            label = "Neutral" if angle == 90 else str(angle)
            print("Move to " + label)
            p.ChangeDutyCycle(2.5 + angle / 18)
            time.sleep(1)
except KeyboardInterrupt:
    print("stopping program")
    p.stop()
    GPIO.cleanup()
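Every hardcoded duty cycle in the script follows the same linear mapping. A small helper (not in the original script) makes that relationship explicit and lets the conversion be checked without any hardware attached:

```python
def angle_to_duty(angle):
    """Map a servo angle in degrees (0-180) to a PWM duty cycle percent.

    Assumes the common 50 Hz hobby-servo convention the script uses:
    2.5% duty at 0 degrees, 12.5% at 180 degrees.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    return 2.5 + angle * 10.0 / 180.0

print(angle_to_duty(90))  # 7.5, the script's "Neutral" position
```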
| 21.69697 | 36 | 0.549581 | 197 | 1,432 | 3.994924 | 0.309645 | 0.148666 | 0.181703 | 0.181703 | 0.47014 | 0.449809 | 0.449809 | 0.25413 | 0.25413 | 0.25413 | 0 | 0.080943 | 0.318436 | 1,432 | 65 | 37 | 22.030769 | 0.72541 | 0.031425 | 0 | 0.396226 | 0 | 0 | 0.143682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037736 | 0 | 0.037736 | 0.301887 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
499729d4385b060cdf0db6860bc31c626076e973 | 24,846 | py | Python | aguaclara_research/ProCoDA_Parser.py | AguaClara/aguaclara_research | 27dcab01b3f53a5221988552fa45d1715da35978 | [
"MIT"
] | 1 | 2018-04-17T02:50:09.000Z | 2018-04-17T02:50:09.000Z | aguaclara_research/ProCoDA_Parser.py | AguaClara/aguaclara_research | 27dcab01b3f53a5221988552fa45d1715da35978 | [
"MIT"
] | 18 | 2018-04-16T19:03:58.000Z | 2018-09-05T19:28:30.000Z | aguaclara_research/ProCoDA_Parser.py | AguaClara/aguaclara_research | 27dcab01b3f53a5221988552fa45d1715da35978 | [
"MIT"
] | 3 | 2018-06-14T18:11:25.000Z | 2018-06-28T14:01:25.000Z | from aide_design.shared.units import unit_registry as u
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
from pathlib import Path
def ftime(data_file_path, start, end=-1):
"""This function extracts the column of times from a ProCoDA data file.
Parameters
----------
data_file_path : string
File path. If the file is in the working directory, then the file name
is sufficient.
start : int or float
Index of first row of data to extract from the data file
end : int or float, optional
Index of last row of data to extract from the data
Defaults to -1, which extracts all the data in the file
Returns
-------
numpy array
Experimental times starting at 0 day with units of days.
Examples
--------
ftime(Reactor_data.txt, 0)
"""
if not isinstance(start, int):
start = int(start)
if not isinstance(end, int):
end = int(end)
df = pd.read_csv(data_file_path, delimiter='\t')
start_time = pd.to_numeric(df.iloc[start, 0])*u.day
day_times = pd.to_numeric(df.iloc[start:end, 0])
time_data = np.subtract((np.array(day_times)*u.day), start_time)
return time_data
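The core of `ftime` is simply "take the time column and subtract the timestamp at the start index so the series begins at zero". That arithmetic can be sketched on an in-memory list with plain Python (file I/O, NumPy, and the `pint` day units are omitted here; this is an illustration, not the library function):

```python
def elapsed_days(day_times, start=0, end=None):
    # Mirror ftime's arithmetic: slice the requested rows, then subtract
    # the time at the start index so the returned series begins at 0 days.
    times = day_times[start:end]
    return [t - day_times[start] for t in times]

print(elapsed_days([0.25, 0.50, 1.00]))  # [0.0, 0.25, 0.75]
```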
def column_of_data(data_file_path, start, column, end="-1", units=""):
"""This function extracts a column of data from a ProCoDA data file.
Parameters
----------
data_file_path : string
File path. If the file is in the working directory, then the file name
is sufficient.
start : int
Index of first row of data to extract from the data file
end : int, optional
Index of last row of data to extract from the data
Defaults to -1, which extracts all the data in the file
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
units : string, optional
The units you want to apply to the data, e.g. 'mg/L'.
Defaults to "" which indicates no units
Returns
-------
numpy array
Experimental data with the units applied.
Examples
--------
column_of_data(Reactor_data.txt, 0, 1, -1, "mg/L")
"""
if not isinstance(start, int):
start = int(start)
if not isinstance(end, int):
end = int(end)
df = pd.read_csv(data_file_path, delimiter='\t')
if units == "":
if isinstance(column, int):
data = np.array(pd.to_numeric(df.iloc[start:end, column]))
else:
df[column][0:len(df)]
else:
if isinstance(column, int):
data = np.array(pd.to_numeric(df.iloc[start:end, column]))*u(units)
else:
df[column][0:len(df)]*u(units)
return data
def notes(data_file_path):
"""This function extracts any experimental notes from a ProCoDA data file.
Parameters
----------
data_file_path : string
File path. If the file is in the working directory, then the file name
is sufficient.
Returns
-------
dataframe
The rows of the data file that contain text notes inserted during the
experiment. Use this to identify the section of the data file that you
want to extract.
Examples
--------
"""
df = pd.read_csv(data_file_path, delimiter='\t')
text_row = df.iloc[0:-1, 0].str.contains('[a-z]', '[A-Z]')
text_row_index = text_row.index[text_row].tolist()
notes = df.loc[text_row_index]
return notes
def read_state(dates, state, column, units="", path="", extension=".xls"):
"""Reads a ProCoDA file and outputs the data column and time vector for
each iteration of the given state.
Parameters
----------
dates : string (list)
A list of dates or single date for which data was recorded, in
the form "M-D-Y"
state : int
The state ID number for which data should be extracted
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
units : string, optional
The units you want to apply to the data, e.g. 'mg/L'.
Defaults to "" which indicates no units
path : string, optional
Optional argument of the path to the folder containing your ProCoDA
files. Defaults to the current directory if no argument is passed in
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
Returns
-------
time : numpy array
Times corresponding to the data (with units)
data : numpy array
Data in the given column during the given state with units
Examples
--------
time, data = read_state(["6-19-2013", "6-20-2013"], 1, 28, "mL/s")
"""
data_agg = []
day = 0
first_day = True
overnight = False
if not isinstance(dates, list):
dates = [dates]
for d in dates:
state_file = path + "statelog " + d + extension
data_file = path + "datalog " + d + extension
states = pd.read_csv(state_file, delimiter='\t')
data = pd.read_csv(data_file, delimiter='\t')
states = np.array(states)
data = np.array(data)
# get the start and end times for the state
state_start_idx = states[:, 1] == state
state_start = states[state_start_idx, 0]
state_end_idx = np.append([False], state_start_idx[0:(np.size(state_start_idx)-1)])
state_end = states[state_end_idx, 0]
if overnight:
state_start = np.insert(state_start, 0, 0)
state_end = np.insert(state_end, 0, states[0, 0])
if state_start_idx[-1]:
state_end.append(data[0, -1])
# get the corresponding indices in the data array
data_start = []
data_end = []
for i in range(np.size(state_start)):
add_start = True
for j in range(np.size(data[:, 0])):
if (data[j, 0] > state_start[i]) and add_start:
data_start.append(j)
add_start = False
if (data[j, 0] > state_end[i]):
data_end.append(j-1)
break
if first_day:
start_time = data[1, 0]
# extract data at those times
for i in range(np.size(data_start)):
t = data[data_start[i]:data_end[i], 0] + day - start_time
if isinstance(column, int):
c = data[data_start[i]:data_end[i], column]
else:
c = data[column][data_start[i]:data_end[i]]
if overnight and i == 0:
data_agg = np.insert(data_agg[-1], np.size(data_agg[-1][:, 0]),
np.vstack((t, c)).T)
else:
data_agg.append(np.vstack((t, c)).T)
day += 1
if first_day:
first_day = False
if state_start_idx[-1]:
overnight = True
data_agg = np.vstack(data_agg)
if units != "":
return data_agg[:, 0]*u.day, data_agg[:, 1]*u(units)
else:
return data_agg[:, 0]*u.day, data_agg[:, 1]
def average_state(dates, state, column, units="", path="", extension=".xls"):
"""Outputs the average value of the data for each instance of a state in
the given ProCoDA files
Parameters
----------
dates : string (list)
A list of dates or single date for which data was recorded, in
the form "M-D-Y"
state : int
The state ID number for which data should be extracted
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
units : string, optional
The units you want to apply to the data, e.g. 'mg/L'.
Defaults to "" which indicates no units
path : string, optional
Optional argument of the path to the folder containing your ProCoDA
files. Defaults to the current directory if no argument is passed in
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
Returns
-------
float list
A list of averages for each instance of the given state
Examples
--------
data_avgs = average_state(["6-19-2013", "6-20-2013"], 1, 28, "mL/s")
"""
data_agg = []
day = 0
first_day = True
overnight = False
if not isinstance(dates, list):
dates = [dates]
for d in dates:
state_file = path + "statelog " + d + extension
data_file = path + "datalog " + d + extension
states = pd.read_csv(state_file, delimiter='\t')
data = pd.read_csv(data_file, delimiter='\t')
states = np.array(states)
data = np.array(data)
# get the start and end times for the state
state_start_idx = states[:, 1] == state
state_start = states[state_start_idx, 0]
state_end_idx = np.append([False], state_start_idx[0:(np.size(state_start_idx)-1)])
state_end = states[state_end_idx, 0]
if overnight:
state_start = np.insert(state_start, 0, 0)
state_end = np.insert(state_end, 0, states[0, 0])
if state_start_idx[-1]:
state_end.append(data[0, -1])
# get the corresponding indices in the data array
data_start = []
data_end = []
for i in range(np.size(state_start)):
add_start = True
for j in range(np.size(data[:, 0])):
if (data[j, 0] > state_start[i]) and add_start:
data_start.append(j)
add_start = False
if (data[j, 0] > state_end[i]):
data_end.append(j-1)
break
if first_day:
start_time = data[1, 0]
# extract data at those times
for i in range(np.size(data_start)):
if isinstance(column, int):
c = data[data_start[i]:data_end[i], column]
else:
c = data[column][data_start[i]:data_end[i]]
if overnight and i == 0:
data_agg = np.insert(data_agg[-1], np.size(data_agg[-1][:]), c)
else:
data_agg.append(c)
day += 1
if first_day:
first_day = False
if state_start_idx[-1]:
overnight = True
averages = np.zeros(np.size(data_agg))
for i in range(np.size(data_agg)):
averages[i] = np.average(data_agg[i])
if units != "":
return averages*u(units)
else:
return averages
def perform_function_on_state(func, dates, state, column, units="", path="", extension=".xls"):
"""Performs the function given on each state of the data for the given state
in the given column and outputs the result for each instance of the state
Parameters
----------
func : function
A function which will be applied to data from each instance of the state
dates : string (list)
A list of dates or single date for which data was recorded, in
the form "M-D-Y"
state : int
The state ID number for which data should be extracted
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
units : string, optional
The units you want to apply to the data, e.g. 'mg/L'.
Defaults to "" which indicates no units
path : string, optional
Optional argument of the path to the folder containing your ProCoDA
files. Defaults to the current directory if no argument is passed in
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
Returns
-------
list
The outputs of the given function for each instance of the given state
Requires
--------
func takes in a list of data with units and outputs the correct units
Examples
--------
def avg_with_units(lst):
num = np.size(lst)
acc = 0
for i in lst:
acc = i + acc
return acc / num
data_avgs = perform_function_on_state(avg_with_units, ["6-19-2013", "6-20-2013"], 1, 28, "mL/s")
"""
data_agg = []
day = 0
first_day = True
overnight = False
if not isinstance(dates, list):
dates = [dates]
for d in dates:
state_file = path + "statelog " + d + extension
data_file = path + "datalog " + d + extension
states = pd.read_csv(state_file, delimiter='\t')
data = pd.read_csv(data_file, delimiter='\t')
states = np.array(states)
data = np.array(data)
# get the start and end times for the state
state_start_idx = states[:, 1] == state
state_start = states[state_start_idx, 0]
state_end_idx = np.append([False], state_start_idx[0:(np.size(state_start_idx)-1)])
state_end = states[state_end_idx, 0]
if overnight:
state_start = np.insert(state_start, 0, 0)
state_end = np.insert(state_end, 0, states[0, 0])
if state_start_idx[-1]:
state_end.append(data[0, -1])
# get the corresponding indices in the data array
data_start = []
data_end = []
for i in range(np.size(state_start)):
add_start = True
for j in range(np.size(data[:, 0])):
if (data[j, 0] > state_start[i]) and add_start:
data_start.append(j)
add_start = False
if (data[j, 0] > state_end[i]):
data_end.append(j-1)
break
if first_day:
start_time = data[1, 0]
# extract data at those times
for i in range(np.size(data_start)):
if isinstance(column, int):
c = data[data_start[i]:data_end[i], column]
else:
c = data[column][data_start[i]:data_end[i]]
if overnight and i == 0:
data_agg = np.insert(data_agg[-1], np.size(data_agg[-1][:]), c)
else:
data_agg.append(c)
day += 1
if first_day:
first_day = False
if state_start_idx[-1]:
overnight = True
output = np.zeros(np.size(data_agg))
for i in range(np.size(data_agg)):
if units != "":
output[i] = func(data_agg[i]*u(units)).magnitude
else:
output[i] = func(data_agg[i])
if units != "":
return output*func(data_agg[i]*u(units)).units
else:
return output
def plot_state(dates, state, column, path="", extension=".xls"):
"""Reads a ProCoDA file and plots the data column for each iteration of
the given state.
Parameters
----------
dates : string (list)
A list of dates or single date for which data was recorded, in
the form "M-D-Y"
state : int
The state ID number for which data should be plotted
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
path : string, optional
Optional argument of the path to the folder containing your ProCoDA
files. Defaults to the current directory if no argument is passed in
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
Returns
-------
None
Examples
--------
plot_state(["6-19-2013", "6-20-2013"], 1, 28)
"""
data_agg = []
day = 0
first_day = True
overnight = False
if not isinstance(dates, list):
dates = [dates]
for d in dates:
state_file = path + "statelog " + d + extension
data_file = path + "datalog " + d + extension
states = pd.read_csv(state_file, delimiter='\t')
data = pd.read_csv(data_file, delimiter='\t')
states = np.array(states)
data = np.array(data)
# get the start and end times for the state
state_start_idx = states[:, 1] == state
state_start = states[state_start_idx, 0]
state_end_idx = np.append([False], state_start_idx[0:(np.size(state_start_idx)-1)])
state_end = states[state_end_idx, 0]
if overnight:
state_start = np.insert(state_start, 0, 0)
state_end = np.insert(state_end, 0, states[0, 0])
if state_start_idx[-1]:
state_end.append(data[0, -1])
# get the corresponding indices in the data array
data_start = []
data_end = []
for i in range(np.size(state_start)):
add_start = True
for j in range(np.size(data[:, 0])):
if (data[j, 0] > state_start[i]) and add_start:
data_start.append(j)
add_start = False
if (data[j, 0] > state_end[i]):
data_end.append(j-1)
break
if first_day:
start_time = data[1, 0]
# extract data at those times
for i in range(np.size(data_start)):
t = data[data_start[i]:data_end[i], 0] + day - start_time
if isinstance(column, int):
c = data[data_start[i]:data_end[i], column]
else:
c = data[column][data_start[i]:data_end[i]]
if overnight and i == 0:
data_agg = np.insert(data_agg[-1], np.size(data_agg[-1][:, 0]),
np.vstack((t, c)).T)
else:
data_agg.append(np.vstack((t, c)).T)
day += 1
if first_day:
first_day = False
if state_start_idx[-1]:
overnight = True
plt.figure()
for i in data_agg:
t = i[:, 0] - i[0, 0]
plt.plot(t, i[:, 1])
plt.show()
def read_state_with_metafile(func, state, column, path, metaids=[],
extension=".xls", units=""):
"""Takes in a ProCoDA meta file and performs a function for all data of a
certain state in each of the experiments (denoted by file paths in then
metafile)
Parameters
----------
func : function
A function which will be applied to data from each instance of the state
state : int
The state ID number for which data should be extracted
column : int or string
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
path : string
Path to your ProCoDA metafile (must be tab-delimited)
metaids : string list, optional
a list of the experiment IDs you'd like to analyze from the metafile
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
units : string, optional
The units you want to apply to the data, e.g. 'mg/L'.
Defaults to "" which indicates no units
Returns
-------
ids : string list
The list of experiment ids given in the metafile
outputs : list
The outputs of the given function for each experiment
Examples
--------
def avg_with_units(lst):
num = np.size(lst)
acc = 0
for i in lst:
acc = i + acc
return acc / num
path = "../tests/data/Test Meta File.txt"
ids, answer = read_state_with_metafile(avg_with_units, 1, 28, path, [], ".xls", "mg/L")
"""
outputs = []
metafile = pd.read_csv(path, delimiter='\t', header=None)
metafile = np.array(metafile)
ids = metafile[1:, 0]
if not isinstance(ids[0], str):
ids = list(map(str, ids))
if metaids:
paths = []
for i in range(len(ids)):
if ids[i] in metaids:
paths.append(metafile[i, 4])
else:
paths = metafile[1:, 4]
basepath = os.path.join(os.path.split(path)[0], metafile[0, 4])
# use a loop to evaluate each experiment in the metafile
for i in range(len(paths)):
# get the range of dates for experiment i
day1 = metafile[i+1, 1]
# modify the metafile date so that it works with datetime format
if not (day1[2] == "-" or day1[2] == "/"):
day1 = "0" + day1
if not (day1[5] == "-" or day1[5] == "/"):
day1 = day1[:3] + "0" + day1[3:]
if day1[2] == "-":
dt = datetime.strptime(day1, "%m-%d-%Y")
else:
dt = datetime.strptime(day1, "%m/%d/%y")
duration = metafile[i+1, 3]
if not isinstance(duration, int):
duration = int(duration)
date_list = []
for j in range(duration):
curr_day = dt.strftime("%m-%d-%Y")
if curr_day[3] == "0":
curr_day = curr_day[:3] + curr_day[4:]
if curr_day[0] == "0":
curr_day = curr_day[1:]
date_list.append(curr_day)
dt = dt + timedelta(days=1)
path = str(Path(os.path.join(basepath, paths[i]))) + os.sep
_, data = read_state(date_list, state, column, units, path, extension)
outputs.append(func(data))
return ids, outputs
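The date munging inside `read_state_with_metafile` does two inverse jobs: zero-pad the metafile date so `strptime` can parse it, then strip the padding off again to match the datalog file names. Pulled out as standalone helpers (these are illustrative extractions, not functions the module exposes), the logic is easy to test on its own:

```python
def pad_date(day):
    # Zero-pad month and day so "%m-%d-%Y" / "%m/%d/%y" parsing works,
    # e.g. "6-9-2013" -> "06-09-2013". Mirrors the checks on day1 above.
    if not (day[2] == "-" or day[2] == "/"):
        day = "0" + day
    if not (day[5] == "-" or day[5] == "/"):
        day = day[:3] + "0" + day[3:]
    return day


def unpad_date(day):
    # Strip the padding back off to match ProCoDA's file-name convention,
    # e.g. "06-09-2013" -> "6-9-2013". Mirrors the checks on curr_day above.
    if day[3] == "0":
        day = day[:3] + day[4:]
    if day[0] == "0":
        day = day[1:]
    return day


d = pad_date("6-9-2013")
print(d)             # 06-09-2013
print(unpad_date(d)) # 6-9-2013
```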
def write_calculations_to_csv(funcs, states, columns, path, headers, out_name,
metaids=[], extension=".xls"):
"""Writes each output of the given functions on the given states and data
columns to a new column in a
Parameters
----------
funcs : function (list)
A function or list of functions which will be applied in order to the
data. If only one function is given it is applied to all the
states/columns
states : string (list)
The state ID numbers for which data should be extracted. List should be
in order of calculation or if only one state is given then it will be
used for all the calculations
columns : int or string (list)
If only one column is given it is used for all the calculations
int:
Index of the column that you want to extract. Column 0 is time.
The first data column is column 1.
string:
Name of the column header that you want to extract
path : string
Path to your ProCoDA metafile (must be tab-delimited)
headers : string list
List of the desired header for each calculation, in order
out_name : string
Desired name for the output file. Can include a relative path
metaids : string list, optional
a list of the experiment IDs you'd like to analyze from the metafile
extension : string, optional
The file extension of the tab delimited file. Defaults to ".xls" if
no argument is passed in
Returns
-------
out_name.csv
A CSV file with the each column being a new calcuation and each row
being a new experiment on which the calcuations were performed
output : DataFrame
Pandas dataframe which is the same data that was written to CSV
Requires
--------
funcs, states, columns, and headers are all of the same length if they are
lists. Some being lists and some single values are okay.
Examples
--------
"""
if not isinstance(funcs, list):
funcs = [funcs] * len(headers)
if not isinstance(states, list):
states = [states] * len(headers)
if not isinstance(columns, list):
columns = [columns] * len(headers)
data_agg = []
for i in range(len(headers)):
ids, data = read_state_with_metafile(funcs[i], states[i], columns[i],
path, metaids, extension)
data_agg = np.append(data_agg, [data])
output = pd.DataFrame(data=np.vstack((ids, data_agg)).T,
columns=["ID"]+headers)
output.to_csv(out_name, sep='\t')
return output
| 30.788104 | 100 | 0.575988 | 3,493 | 24,846 | 3.999427 | 0.079301 | 0.031496 | 0.022334 | 0.013958 | 0.721618 | 0.700429 | 0.686042 | 0.67423 | 0.664209 | 0.650966 | 0 | 0.015371 | 0.324439 | 24,846 | 806 | 101 | 30.826303 | 0.81692 | 0.420953 | 0 | 0.7 | 0 | 0 | 0.012496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027273 | false | 0 | 0.021212 | 0 | 0.081818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8da764089e996f379b2de9042b73e7c49bf8c41 | 8,023 | py | Python | ipythonblocks/test/test_ipborg.py | just4jc/ipythonblocks | 18414d79e83c9309273796f540e9d63381c0fc06 | [
"MIT"
] | null | null | null | ipythonblocks/test/test_ipborg.py | just4jc/ipythonblocks | 18414d79e83c9309273796f540e9d63381c0fc06 | [
"MIT"
] | null | null | null | ipythonblocks/test/test_ipborg.py | just4jc/ipythonblocks | 18414d79e83c9309273796f540e9d63381c0fc06 | [
"MIT"
] | null | null | null | """
Tests for ipythonblocks communication with ipythonblocks.org.
"""
import json
import string
import sys
import mock
import pytest
import responses
from .. import ipythonblocks as ipb
A10 = [a for a in string.ascii_lowercase[:10]]
def setup_module(module):
"""
mock out the get_ipython() function for the tests.
"""
def get_ipython():
class ip(object):
user_ns = {'In': A10}
return ip()
ipb.get_ipython = get_ipython
def teardown_module(module):
    del ipb.get_ipython
@pytest.fixture
def data_2x2():
    return [[(1, 2, 3, 4), (5, 6, 7, 8)],
            [(9, 10, 11, 12), (13, 14, 15, 16)]]
@pytest.fixture
def block_grid(data_2x2):
    bg = ipb.BlockGrid(2, 2)

    bg[0, 0].rgb = data_2x2[0][0][:3]
    bg[0, 0].size = data_2x2[0][0][3]
    bg[0, 1].rgb = data_2x2[0][1][:3]
    bg[0, 1].size = data_2x2[0][1][3]
    bg[1, 0].rgb = data_2x2[1][0][:3]
    bg[1, 0].size = data_2x2[1][0][3]
    bg[1, 1].rgb = data_2x2[1][1][:3]
    bg[1, 1].size = data_2x2[1][1][3]

    return bg
@pytest.fixture
def image_grid_ll(data_2x2):
    ig = ipb.ImageGrid(2, 2, origin='lower-left')

    ig[0, 0].rgb = data_2x2[1][0][:3]
    ig[0, 0].size = data_2x2[1][0][3]
    ig[0, 1].rgb = data_2x2[0][0][:3]
    ig[0, 1].size = data_2x2[0][0][3]
    ig[1, 0].rgb = data_2x2[1][1][:3]
    ig[1, 0].size = data_2x2[1][1][3]
    ig[1, 1].rgb = data_2x2[0][1][:3]
    ig[1, 1].size = data_2x2[0][1][3]

    return ig
@pytest.fixture
def image_grid_ul(data_2x2):
    ig = ipb.ImageGrid(2, 2, origin='upper-left')

    ig[0, 0].rgb = data_2x2[0][0][:3]
    ig[0, 0].size = data_2x2[0][0][3]
    ig[0, 1].rgb = data_2x2[1][0][:3]
    ig[0, 1].size = data_2x2[1][0][3]
    ig[1, 0].rgb = data_2x2[0][1][:3]
    ig[1, 0].size = data_2x2[0][1][3]
    ig[1, 1].rgb = data_2x2[1][1][:3]
    ig[1, 1].size = data_2x2[1][1][3]

    return ig
class Test_parse_cells_spec(object):
    def test_single_int(self):
        assert ipb._parse_cells_spec(5, 100) == [5]

    def test_single_int_str(self):
        assert ipb._parse_cells_spec('5', 100) == [5]

    def test_multi_int_str(self):
        assert ipb._parse_cells_spec('2,9,4', 100) == [2, 4, 9]

    def test_slice(self):
        assert ipb._parse_cells_spec(slice(2, 5), 100) == [2, 3, 4]

    def test_slice_str(self):
        assert ipb._parse_cells_spec('2:5', 100) == [2, 3, 4]

    def test_slice_and_int(self):
        assert ipb._parse_cells_spec('4,9:12', 100) == [4, 9, 10, 11]
        assert ipb._parse_cells_spec('9:12,4', 100) == [4, 9, 10, 11]
        assert ipb._parse_cells_spec('4,9:12,16', 100) == [4, 9, 10, 11, 16]
        assert ipb._parse_cells_spec('10,9:12', 100) == [9, 10, 11]
class Test_get_code_cells(object):
    def test_single_int(self):
        assert ipb._get_code_cells(5) == [A10[5]]

    def test_single_int_str(self):
        assert ipb._get_code_cells('5') == [A10[5]]

    def test_multi_int_str(self):
        assert ipb._get_code_cells('2,9,4') == [A10[x] for x in [2, 4, 9]]

    def test_slice(self):
        assert ipb._get_code_cells(slice(2, 5)) == [A10[x] for x in [2, 3, 4]]

    def test_slice_str(self):
        assert ipb._get_code_cells('2:5') == [A10[x] for x in [2, 3, 4]]

    def test_slice_and_int(self):
        assert ipb._get_code_cells('1,3:6') == [A10[x] for x in [1, 3, 4, 5]]
        assert ipb._get_code_cells('3:6,1') == [A10[x] for x in [1, 3, 4, 5]]
        assert ipb._get_code_cells('1,3:6,8') == [A10[x] for x in [1, 3, 4, 5, 8]]
        assert ipb._get_code_cells('4,3:6') == [A10[x] for x in [3, 4, 5]]
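The tests above pin down `_parse_cells_spec`'s contract: it accepts an int, a slice, or a string mixing comma-separated indices and `start:stop` ranges, and returns sorted, de-duplicated indices. A simplified standalone sketch of that parsing (an illustration of the expected behavior, not the library's actual implementation, and it ignores some of the real helper's bound handling):

```python
def parse_cells_spec(spec, limit):
    # Accept an int, a slice, or a string like "4,9:12" and return the
    # sorted, de-duplicated list of cell indices, bounded above by limit.
    if isinstance(spec, int):
        return [spec]
    if isinstance(spec, slice):
        return list(range(*spec.indices(limit)))
    cells = set()
    for part in spec.split(","):
        if ":" in part:
            lo, hi = (int(x) for x in part.split(":"))
            cells.update(range(lo, min(hi, limit)))
        else:
            cells.add(int(part))
    return sorted(cells)

print(parse_cells_spec("4,9:12", 100))  # [4, 9, 10, 11]
```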
@pytest.mark.parametrize('fixture',
                         [block_grid, image_grid_ll, image_grid_ul])
def test_to_simple_grid(fixture, data_2x2):
    grid = fixture(data_2x2)
    assert grid._to_simple_grid() == data_2x2
@pytest.mark.parametrize('test_grid, ref_grid',
                         [(ipb.BlockGrid(2, 2), block_grid),
                          (ipb.ImageGrid(2, 2, origin='upper-left'), image_grid_ul),
                          (ipb.ImageGrid(2, 2, origin='lower-left'), image_grid_ll)])
def test_load_simple_grid(test_grid, ref_grid, data_2x2):
    ref_grid = ref_grid(data_2x2)
    test_grid._load_simple_grid(data_2x2)
    assert test_grid == ref_grid
@responses.activate
@mock.patch('sys.version_info', ('python', 'version'))
@mock.patch.object(ipb, '__version__', 'ipb_version')
@mock.patch.object(ipb, '_POST_URL', 'http://www.ipythonblocks.org/post_url')
def test_BlockGrid_post_to_web():
    data = data_2x2()
    grid = block_grid(data)

    expected = {
        'python_version': tuple(sys.version_info),
        'ipb_version': ipb.__version__,
        'ipb_class': 'BlockGrid',
        'code_cells': None,
        'secret': False,
        'grid_data': {
            'lines_on': grid.lines_on,
            'width': grid.width,
            'height': grid.height,
            'blocks': data
        }
    }
    expected = json.dumps(expected)

    responses.add(responses.POST, ipb._POST_URL,
                  body=json.dumps({'url': 'url'}).encode('utf-8'),
                  status=200, content_type='application/json')

    url = grid.post_to_web()

    assert url == 'url'
    assert len(responses.calls) == 1
    req = responses.calls[0].request
    assert req.url == ipb._POST_URL
    assert req.body == expected
@responses.activate
@mock.patch('sys.version_info', ('python', 'version'))
@mock.patch.object(ipb, '__version__', 'ipb_version')
@mock.patch.object(ipb, '_POST_URL', 'http://www.ipythonblocks.org/post_url')
def test_ImageGrid_ul_post_to_web():
    data = data_2x2()
    grid = image_grid_ul(data)

    expected = {
        'python_version': tuple(sys.version_info),
        'ipb_version': ipb.__version__,
        'ipb_class': 'ImageGrid',
        'code_cells': None,
        'secret': False,
        'grid_data': {
            'lines_on': grid.lines_on,
            'width': grid.width,
            'height': grid.height,
            'blocks': data
        }
    }
    expected = json.dumps(expected)

    responses.add(responses.POST, ipb._POST_URL,
                  body=json.dumps({'url': 'url'}).encode('utf-8'),
                  status=200, content_type='application/json')

    url = grid.post_to_web()

    assert url == 'url'
    assert len(responses.calls) == 1
    req = responses.calls[0].request
    assert req.url == ipb._POST_URL
    assert req.body == expected
@responses.activate
@mock.patch.object(ipb, '_GET_URL_PUBLIC', 'http://www.ipythonblocks.org/get_url/{0}')
def test_BlockGrid_from_web():
    data = data_2x2()
    grid_id = 'abc'
    get_url = ipb._GET_URL_PUBLIC.format(grid_id)

    resp = {
        'lines_on': True,
        'width': 2,
        'height': 2,
        'blocks': data
    }
    responses.add(responses.GET, get_url,
                  body=json.dumps(resp).encode('utf-8'), status=200,
                  content_type='application/json')

    grid = ipb.BlockGrid.from_web(grid_id)

    assert grid.height == resp['height']
    assert grid.width == resp['width']
    assert grid.lines_on == resp['lines_on']
    assert grid._to_simple_grid() == data

    assert len(responses.calls) == 1
    assert responses.calls[0].request.url == get_url
@responses.activate
@mock.patch.object(ipb, '_GET_URL_SECRET', 'http://www.ipythonblocks.org/get_url/{0}')
def test_ImageGrid_ul_from_web():
    data = data_2x2()
    grid_id = 'abc'
    get_url = ipb._GET_URL_SECRET.format(grid_id)

    resp = {
        'lines_on': True,
        'width': 2,
        'height': 2,
        'blocks': data
    }
    responses.add(responses.GET, get_url,
                  body=json.dumps(resp).encode('utf-8'), status=200,
                  content_type='application/json')

    origin = 'upper-left'
    grid = ipb.ImageGrid.from_web(grid_id, secret=True, origin=origin)

    assert grid.height == resp['height']
    assert grid.width == resp['width']
    assert grid.lines_on == resp['lines_on']
    assert grid._to_simple_grid() == data
    assert grid.origin == origin

    assert len(responses.calls) == 1
    assert responses.calls[0].request.url == get_url
| 29.068841 | 86 | 0.605385 | 1,250 | 8,023 | 3.6552 | 0.1064 | 0.058218 | 0.026264 | 0.037426 | 0.771504 | 0.741081 | 0.726198 | 0.679142 | 0.558547 | 0.499453 | 0 | 0.068295 | 0.224355 | 8,023 | 275 | 87 | 29.174545 | 0.665917 | 0.01396 | 0 | 0.492754 | 0 | 0 | 0.105503 | 0 | 0 | 0 | 0 | 0 | 0.198068 | 1 | 0.120773 | false | 0 | 0.033816 | 0.004831 | 0.198068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8dbd023407f37bc4d6832c36d681e3f7942aeff | 5,080 | py | Python | src/simmate/toolkit/base_data_types/_to_do/lattice.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 9 | 2021-12-21T02:58:21.000Z | 2022-01-25T14:00:06.000Z | src/simmate/toolkit/base_data_types/_to_do/lattice.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 51 | 2022-01-01T15:59:58.000Z | 2022-03-26T21:25:42.000Z | src/simmate/toolkit/base_data_types/_to_do/lattice.py | laurenmm/simmate-1 | c06b94c46919b01cda50f78221ad14f75c100a14 | [
"BSD-3-Clause"
] | 7 | 2022-01-01T03:44:32.000Z | 2022-03-29T19:59:27.000Z | # -*- coding: utf-8 -*-
# OPTIMIZE: using caching improves speed significantly at the cost of memory. I
# need to see which is preferable in higher-level tests.
from functools import cached_property

import numpy
from numba import njit as numba_speedup


class Lattice:
    """
    This class is for 3D crystal lattices, which are represented by a 3x3 matrix.
    This class provides a bunch of tools to pull out common values like vector
    lengths and angles, and also allows for lattice conversions.

    For some basic formulas and introduction, see here:
        http://gisaxs.com/index.php/Unit_cell
        http://gisaxs.com/index.php/Lattices
    """

    def __init__(self, matrix):
        """
        Create a lattice from a 3x3 matrix. Each vector is defined via xyz on
        one row of the matrix. For example, a cubic lattice where all vectors
        are 1 Angstrom and 90deg from each other would be defined as
            matrix = [[1, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1]]
        """
        # Convert the matrix to a numpy array and store it
        self.matrix = numpy.array(matrix)

    @cached_property
    def lengths(self):
        """
        Gives the lengths of the lattice as (a, b, c).
        """
        # The easiest way to do this is with numpy via...
        #   lengths = numpy.linalg.norm(matrix, axis=1)
        # However, we instead write a super-optimized numba function to make this
        # method really fast. This is defined below as a static method.
        # The length of a 3D vector (xyz) is defined by...
        #   length = sqrt(a**2 + b**2 + c**2)
        # and we are doing this for all three vectors at once.
        return self._lengths_fast(self.matrix)

    @staticmethod
    @numba_speedup(cache=True)
    def _lengths_fast(matrix):
        # NOTE: users should not call this method! Use lattice.lengths instead
        # OPTIMIZE: is there an alternative way to write this that numba will
        # run faster?
        #   numpy.linalg.norm(matrix, axis=1) --> doesn't work with numba
        return numpy.sqrt(numpy.sum(matrix**2, axis=1))

    @property
    def a(self):
        """
        The length of the lattice vector "a" (in Angstroms)
        """
        return self.lengths[0]

    @property
    def b(self):
        """
        The length of the lattice vector "b" (in Angstroms)
        """
        return self.lengths[1]

    @property
    def c(self):
        """
        The length of the lattice vector "c" (in Angstroms)
        """
        return self.lengths[2]

    @cached_property
    def angles(self):
        """
        The angles of the lattice as (alpha, beta, gamma).
        """
        # The angle between two 3D vectors (xyz) is defined by...
        #   angle_radians = arccos(dot(vector1, vector2)/(vector1.length*vector2.length))
        # And alpha, beta, gamma are just the angles between the b&c, a&c, and a&b
        # vectors, respectively. We calculate all of these at once here. We also
        # write a super-optimized numba function to make this method really fast.
        # This is defined below as a static method.
        return self._angles_fast(self.matrix, self.lengths)

    @staticmethod
    @numba_speedup(cache=True)
    def _angles_fast(matrix, lengths):
        # NOTE: users should not call this method! Use lattice.angles instead
        # OPTIMIZE: is there an alternative way to write this that numba will
        # run faster?

        # keep a list of the angles
        angles = []

        # this establishes the three angles we want to calculate
        #   alpha --> angle between b and c
        #   beta --> angle between a and c
        #   gamma --> angle between a and b
        # So the numbers below are indexes of the matrix!
        # (for example, vector b has index 1)
        vector_pairs = [(1, 2), (0, 2), (0, 1)]

        # The angle between two 3D vectors (xyz) is defined by...
        #   angle_radians = arccos(dot(vector1, vector2)/(vector1.length*vector2.length))
        for vector_index_1, vector_index_2 in vector_pairs:
            # calculate single angle using formula
            unit_vector_1 = matrix[vector_index_1] / lengths[vector_index_1]
            unit_vector_2 = matrix[vector_index_2] / lengths[vector_index_2]
            dot_product = numpy.dot(unit_vector_1, unit_vector_2)
            angle = numpy.arccos(dot_product)
            # convert to degrees before saving
            angles.append(numpy.degrees(angle))

        # return the list as a numpy array
        return numpy.array(angles)

    @property
    def alpha(self):
        """
        The angle between the lattice vectors b and c (in degrees).
        """
        return self.angles[0]

    @property
    def beta(self):
        """
        The angle between the lattice vectors a and c (in degrees).
        """
        return self.angles[1]

    @property
    def gamma(self):
        """
        The angle between the lattice vectors a and b (in degrees).
        """
        return self.angles[2]
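As a quick sanity check of the length and angle formulas above, here is a minimal plain-Python sketch (hypothetical helper names `vector_length` and `angle_degrees`; the real class vectorizes this with numpy and numba), applied to the cubic lattice from the docstring:

```python
import math

def vector_length(vector):
    # length = sqrt(x**2 + y**2 + z**2)
    return math.sqrt(sum(x * x for x in vector))

def angle_degrees(vector_1, vector_2):
    # angle = arccos(dot(v1, v2) / (|v1| * |v2|))
    dot = sum(p * q for p, q in zip(vector_1, vector_2))
    cos_angle = dot / (vector_length(vector_1) * vector_length(vector_2))
    return math.degrees(math.acos(cos_angle))

# A cubic lattice: all vectors 1 Angstrom, all angles 90 degrees
cubic = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
a, b, c = (vector_length(v) for v in cubic)
alpha = angle_degrees(cubic[1], cubic[2])  # angle between b and c
```

For the identity matrix this reproduces lengths of 1 and a 90-degree alpha, matching what `Lattice.lengths` and `Lattice.angles` would return.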
| 34.09396 | 89 | 0.61063 | 695 | 5,080 | 4.398561 | 0.266187 | 0.013085 | 0.019627 | 0.013739 | 0.39385 | 0.327772 | 0.28721 | 0.22702 | 0.22702 | 0.173373 | 0 | 0.016676 | 0.303543 | 5,080 | 148 | 90 | 34.324324 | 0.847371 | 0.568504 | 0 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.23913 | false | 0 | 0.065217 | 0.021739 | 0.543478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b8de270367996a98c43a86f6f1d2d873af0414bc | 14,436 | py | Python | swagger_client/models/get_corporations_corporation_id_blueprints_200_ok.py | rseichter/bootini-star | a80258f01a05e4df38748b8cb47dfadabd42c20d | [
"MIT"
] | null | null | null | swagger_client/models/get_corporations_corporation_id_blueprints_200_ok.py | rseichter/bootini-star | a80258f01a05e4df38748b8cb47dfadabd42c20d | [
"MIT"
] | null | null | null | swagger_client/models/get_corporations_corporation_id_blueprints_200_ok.py | rseichter/bootini-star | a80258f01a05e4df38748b8cb47dfadabd42c20d | [
"MIT"
] | null | null | null | # coding: utf-8
"""
EVE Swagger Interface
An OpenAPI for EVE Online # noqa: E501
OpenAPI spec version: 0.8.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class GetCorporationsCorporationIdBlueprints200Ok(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'item_id': 'int',
'type_id': 'int',
'location_id': 'int',
'quantity': 'int',
'time_efficiency': 'int',
'material_efficiency': 'int',
'runs': 'int',
'location_flag': 'str'
}
attribute_map = {
'item_id': 'item_id',
'type_id': 'type_id',
'location_id': 'location_id',
'quantity': 'quantity',
'time_efficiency': 'time_efficiency',
'material_efficiency': 'material_efficiency',
'runs': 'runs',
'location_flag': 'location_flag'
}
def __init__(self, item_id=None, type_id=None, location_id=None, quantity=None, time_efficiency=None, material_efficiency=None, runs=None, location_flag=None): # noqa: E501
"""GetCorporationsCorporationIdBlueprints200Ok - a model defined in Swagger""" # noqa: E501
self._item_id = None
self._type_id = None
self._location_id = None
self._quantity = None
self._time_efficiency = None
self._material_efficiency = None
self._runs = None
self._location_flag = None
self.discriminator = None
self.item_id = item_id
self.type_id = type_id
self.location_id = location_id
self.quantity = quantity
self.time_efficiency = time_efficiency
self.material_efficiency = material_efficiency
self.runs = runs
self.location_flag = location_flag
@property
def item_id(self):
"""Gets the item_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
Unique ID for this item. # noqa: E501
:return: The item_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._item_id
@item_id.setter
def item_id(self, item_id):
"""Sets the item_id of this GetCorporationsCorporationIdBlueprints200Ok.
Unique ID for this item. # noqa: E501
:param item_id: The item_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if item_id is None:
raise ValueError("Invalid value for `item_id`, must not be `None`") # noqa: E501
self._item_id = item_id
@property
def type_id(self):
"""Gets the type_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
type_id integer # noqa: E501
:return: The type_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._type_id
@type_id.setter
def type_id(self, type_id):
"""Sets the type_id of this GetCorporationsCorporationIdBlueprints200Ok.
type_id integer # noqa: E501
:param type_id: The type_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if type_id is None:
raise ValueError("Invalid value for `type_id`, must not be `None`") # noqa: E501
self._type_id = type_id
@property
def location_id(self):
"""Gets the location_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
References a solar system, station or item_id if this blueprint is located within a container. # noqa: E501
:return: The location_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._location_id
@location_id.setter
def location_id(self, location_id):
"""Sets the location_id of this GetCorporationsCorporationIdBlueprints200Ok.
References a solar system, station or item_id if this blueprint is located within a container. # noqa: E501
:param location_id: The location_id of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if location_id is None:
raise ValueError("Invalid value for `location_id`, must not be `None`") # noqa: E501
self._location_id = location_id
@property
def quantity(self):
"""Gets the quantity of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
A range of numbers with a minimum of -2 and no maximum value where -1 is an original and -2 is a copy. It can be a positive integer if it is a stack of blueprint originals fresh from the market (e.g. no activities performed on them yet). # noqa: E501
:return: The quantity of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._quantity
@quantity.setter
def quantity(self, quantity):
"""Sets the quantity of this GetCorporationsCorporationIdBlueprints200Ok.
A range of numbers with a minimum of -2 and no maximum value where -1 is an original and -2 is a copy. It can be a positive integer if it is a stack of blueprint originals fresh from the market (e.g. no activities performed on them yet). # noqa: E501
:param quantity: The quantity of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if quantity is None:
raise ValueError("Invalid value for `quantity`, must not be `None`") # noqa: E501
if quantity is not None and quantity < -2: # noqa: E501
raise ValueError("Invalid value for `quantity`, must be a value greater than or equal to `-2`") # noqa: E501
self._quantity = quantity
@property
def time_efficiency(self):
"""Gets the time_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
Time Efficiency Level of the blueprint. # noqa: E501
:return: The time_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._time_efficiency
@time_efficiency.setter
def time_efficiency(self, time_efficiency):
"""Sets the time_efficiency of this GetCorporationsCorporationIdBlueprints200Ok.
Time Efficiency Level of the blueprint. # noqa: E501
:param time_efficiency: The time_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if time_efficiency is None:
raise ValueError("Invalid value for `time_efficiency`, must not be `None`") # noqa: E501
if time_efficiency is not None and time_efficiency > 20: # noqa: E501
raise ValueError("Invalid value for `time_efficiency`, must be a value less than or equal to `20`") # noqa: E501
if time_efficiency is not None and time_efficiency < 0: # noqa: E501
raise ValueError("Invalid value for `time_efficiency`, must be a value greater than or equal to `0`") # noqa: E501
self._time_efficiency = time_efficiency
@property
def material_efficiency(self):
"""Gets the material_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
Material Efficiency Level of the blueprint. # noqa: E501
:return: The material_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._material_efficiency
@material_efficiency.setter
def material_efficiency(self, material_efficiency):
"""Sets the material_efficiency of this GetCorporationsCorporationIdBlueprints200Ok.
Material Efficiency Level of the blueprint. # noqa: E501
:param material_efficiency: The material_efficiency of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if material_efficiency is None:
raise ValueError("Invalid value for `material_efficiency`, must not be `None`") # noqa: E501
if material_efficiency is not None and material_efficiency > 25: # noqa: E501
raise ValueError("Invalid value for `material_efficiency`, must be a value less than or equal to `25`") # noqa: E501
if material_efficiency is not None and material_efficiency < 0: # noqa: E501
raise ValueError("Invalid value for `material_efficiency`, must be a value greater than or equal to `0`") # noqa: E501
self._material_efficiency = material_efficiency
@property
def runs(self):
"""Gets the runs of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
Number of runs remaining if the blueprint is a copy, -1 if it is an original. # noqa: E501
:return: The runs of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: int
"""
return self._runs
@runs.setter
def runs(self, runs):
"""Sets the runs of this GetCorporationsCorporationIdBlueprints200Ok.
Number of runs remaining if the blueprint is a copy, -1 if it is an original. # noqa: E501
:param runs: The runs of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: int
"""
if runs is None:
raise ValueError("Invalid value for `runs`, must not be `None`") # noqa: E501
if runs is not None and runs < -1: # noqa: E501
raise ValueError("Invalid value for `runs`, must be a value greater than or equal to `-1`") # noqa: E501
self._runs = runs
@property
def location_flag(self):
"""Gets the location_flag of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
Type of the location_id # noqa: E501
:return: The location_flag of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:rtype: str
"""
return self._location_flag
@location_flag.setter
def location_flag(self, location_flag):
"""Sets the location_flag of this GetCorporationsCorporationIdBlueprints200Ok.
Type of the location_id # noqa: E501
:param location_flag: The location_flag of this GetCorporationsCorporationIdBlueprints200Ok. # noqa: E501
:type: str
"""
if location_flag is None:
raise ValueError("Invalid value for `location_flag`, must not be `None`") # noqa: E501
allowed_values = ["AssetSafety", "AutoFit", "Bonus", "Booster", "BoosterBay", "Capsule", "Cargo", "CorpDeliveries", "CorpSAG1", "CorpSAG2", "CorpSAG3", "CorpSAG4", "CorpSAG5", "CorpSAG6", "CorpSAG7", "CrateLoot", "Deliveries", "DroneBay", "DustBattle", "DustDatabank", "FighterBay", "FighterTube0", "FighterTube1", "FighterTube2", "FighterTube3", "FighterTube4", "FleetHangar", "Hangar", "HangarAll", "HiSlot0", "HiSlot1", "HiSlot2", "HiSlot3", "HiSlot4", "HiSlot5", "HiSlot6", "HiSlot7", "HiddenModifiers", "Implant", "Impounded", "JunkyardReprocessed", "JunkyardTrashed", "LoSlot0", "LoSlot1", "LoSlot2", "LoSlot3", "LoSlot4", "LoSlot5", "LoSlot6", "LoSlot7", "Locked", "MedSlot0", "MedSlot1", "MedSlot2", "MedSlot3", "MedSlot4", "MedSlot5", "MedSlot6", "MedSlot7", "OfficeFolder", "Pilot", "PlanetSurface", "QuafeBay", "Reward", "RigSlot0", "RigSlot1", "RigSlot2", "RigSlot3", "RigSlot4", "RigSlot5", "RigSlot6", "RigSlot7", "SecondaryStorage", "ServiceSlot0", "ServiceSlot1", "ServiceSlot2", "ServiceSlot3", "ServiceSlot4", "ServiceSlot5", "ServiceSlot6", "ServiceSlot7", "ShipHangar", "ShipOffline", "Skill", "SkillInTraining", "SpecializedAmmoHold", "SpecializedCommandCenterHold", "SpecializedFuelBay", "SpecializedGasHold", "SpecializedIndustrialShipHold", "SpecializedLargeShipHold", "SpecializedMaterialBay", "SpecializedMediumShipHold", "SpecializedMineralHold", "SpecializedOreHold", "SpecializedPlanetaryCommoditiesHold", "SpecializedSalvageHold", "SpecializedShipHold", "SpecializedSmallShipHold", "StructureActive", "StructureFuel", "StructureInactive", "StructureOffline", "SubSystemBay", "SubSystemSlot0", "SubSystemSlot1", "SubSystemSlot2", "SubSystemSlot3", "SubSystemSlot4", "SubSystemSlot5", "SubSystemSlot6", "SubSystemSlot7", "Unlocked", "Wallet", "Wardrobe"] # noqa: E501
if location_flag not in allowed_values:
raise ValueError(
"Invalid value for `location_flag` ({0}), must be one of {1}" # noqa: E501
.format(location_flag, allowed_values)
)
self._location_flag = location_flag
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, GetCorporationsCorporationIdBlueprints200Ok):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 42.836795 | 1,804 | 0.655999 | 1,591 | 14,436 | 5.817725 | 0.177876 | 0.05618 | 0.169404 | 0.137424 | 0.578652 | 0.509291 | 0.465644 | 0.407736 | 0.284356 | 0.187878 | 0 | 0.036636 | 0.251247 | 14,436 | 336 | 1,805 | 42.964286 | 0.819687 | 0.361319 | 0 | 0.064935 | 1 | 0 | 0.307758 | 0.036 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.019481 | 0 | 0.272727 | 0.025974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b8eb33e4b3cf90a50824bd9ce65e273b6244d694 | 1,266 | py | Python | src/core/ServerBase.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | 1 | 2021-02-23T12:54:05.000Z | 2021-02-23T12:54:05.000Z | src/core/ServerBase.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | null | null | null | src/core/ServerBase.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | 1 | 2021-02-23T12:54:07.000Z | 2021-02-23T12:54:07.000Z | import sys
from direct.showbase.ShowBase import ShowBase
from direct.directnotify import DirectNotifyGlobal
from panda3d.core import UniqueIdAllocator

from src.util.LogManager import LogManager
from src.dclass.DCManager import DCManager
from src.core.ConnectionManager import ConnectionManager
from src.message.MessageManager import MessageManager
from src.distributed.DistributedObjectManager import DistributedObjectManager
from src.interest.InterestManager import InterestManager


class ServerBase(ShowBase):
    notify = DirectNotifyGlobal.directNotify.newCategory("ServerBase")
    serverVersion = 'pcsv1.0.34.31'

    def __init__(self):
        ShowBase.__init__(self)
        self.activeConnections = {}
        maxChannels = self.config.GetInt('max-channel-id', 1000000)
        self.channelAllocator = UniqueIdAllocator(0, 0 + maxChannels - 1)
        self.configManager = None
        self.logManager = LogManager()
        self.dcManager = DCManager()
        self.dcManager.readDCFile()
        self.notify.warning(str(self.dcManager.dclassesByName))
        self.connectionManager = ConnectionManager()
        self.messageManager = MessageManager()
        self.doManager = DistributedObjectManager()
        self.interestManager = InterestManager()
b8f98da134482244b0b0d4585afd3b9f99cab24f | 557 | py | Python | projects/es.um.unosql.subtypes/python/src/mainGantt.py | catedrasaes-umu/NoSQLDataEngineering | 77c12d83bac0ae36fce85d602e8f4c9b0b80dd65 | [
"MIT"
] | 20 | 2016-12-10T09:13:28.000Z | 2021-11-12T12:09:58.000Z | projects/es.um.unosql.subtypes/python/src/mainGantt.py | catedrasaes-umu/NoSQLDataEngineering | 77c12d83bac0ae36fce85d602e8f4c9b0b80dd65 | [
"MIT"
] | 8 | 2018-07-05T08:19:15.000Z | 2020-10-06T10:25:56.000Z | projects/es.um.unosql.subtypes/python/src/mainGantt.py | catedrasaes-umu/NoSQLDataEngineering | 77c12d83bac0ae36fce85d602e8f4c9b0b80dd65 | [
"MIT"
] | 6 | 2018-02-10T20:54:43.000Z | 2021-07-31T09:20:13.000Z | import sys
from charts.GanttChart import GanttChart

REDDIT_CSV_ROUTE = "../../output/reddit/reddit.csv"
REDDIT_CSV_OUTLIERS_ROUTE = "../../output/reddit/reddit_filtered.csv"
REDDIT_CSV_LIVEVARS_ROUTE = "../../output/reddit/reddit.csv"
SOF_CSV_ROUTE = "../../output/stackoverflow/stackoverflow.csv"
SOF_CSV_OUTLIERS_ROUTE = "../../output/stackoverflow/stackoverflow_outliers.csv"
SOF_CSV_LIVEVARS_ROUTE = "../../output/stackoverflow/stackoverflow_livevars.csv"


def main():
    chartCreator = GanttChart(REDDIT_CSV_ROUTE)
    chartCreator.showCharts()


main()
| 32.764706 | 80 | 0.779174 | 66 | 557 | 6.257576 | 0.272727 | 0.130751 | 0.123487 | 0.16707 | 0.125908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070018 | 557 | 16 | 81 | 34.8125 | 0.797297 | 0 | 0 | 0 | 0 | 0 | 0.447038 | 0.447038 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7709164c76bace6e09ec43c72782692898055343 | 170 | py | Python | python_docs/08PassagemDeParametro/aula21e.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_docs/08PassagemDeParametro/aula21e.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_docs/08PassagemDeParametro/aula21e.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | def fatorial(num=1):
f = 1
for c in range(num, 0, -1):
f *= c
return f
print(f'O resultado é: {fatorial(int(input("Digite um valor: ")))}')
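A quick non-interactive check of the same loop (same `fatorial` body, but with a fixed argument instead of `input()`):

```python
def fatorial(num=1):
    # multiply num * (num-1) * ... * 1; an empty range leaves f at 1
    f = 1
    for c in range(num, 0, -1):
        f *= c
    return f

print(fatorial(5))  # prints 120
```

Note that `fatorial(0)` returns 1, since `range(0, 0, -1)` is empty and `f` keeps its initial value.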
| 18.888889 | 69 | 0.517647 | 28 | 170 | 3.142857 | 0.714286 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034188 | 0.311765 | 170 | 8 | 70 | 21.25 | 0.717949 | 0 | 0 | 0 | 0 | 0 | 0.358025 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
77096429b97313fc95c6bb146826b71045bb62c1 | 3,269 | py | Python | tests/test_click_demo.py | parashuram-patil/click_demo | 78facdde4a936fe2f600d963e563039c06f41b61 | [
"MIT"
] | null | null | null | tests/test_click_demo.py | parashuram-patil/click_demo | 78facdde4a936fe2f600d963e563039c06f41b61 | [
"MIT"
] | null | null | null | tests/test_click_demo.py | parashuram-patil/click_demo | 78facdde4a936fe2f600d963e563039c06f41b61 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-

"""Tests for `click_demo` package."""

import os
import unittest
from click.testing import CliRunner

from click_demo import click_demo
from click_demo import cli


class TestClick_demo(unittest.TestCase):
    """Tests for `click_demo` package."""

    def setUp(self):
        """Set up test fixtures, if any."""

    def tearDown(self):
        """Tear down test fixtures, if any."""

    def test_000_something(self):
        """Test something."""

    def test_command_hi(self):
        """Testing Command Hi"""
        runner = CliRunner()
        result = runner.invoke(click_demo.hi)
        assert result.exit_code == 0
        result = runner.invoke(click_demo.hi, ['--help'])
        assert '--name TEXT' in result.output
        result = runner.invoke(click_demo.hi, ['-n', 'PSP'])
        assert 'Hi PSP' in result.output
        result = runner.invoke(click_demo.hi, ['--name', 'PSP'])
        assert 'Hi PSP' in result.output
        result = runner.invoke(click_demo.hi)
        assert 'Hi there!' in result.output

    def test_command_bye(self):
        """Testing Command Bye"""
        runner = CliRunner()
        result = runner.invoke(click_demo.bye)
        assert result.exit_code == 0
        result = runner.invoke(click_demo.bye, ['--help'])
        assert '--name TEXT' in result.output
        result = runner.invoke(click_demo.bye, ['-n', 'PSP'])
        assert 'Bye PSP' in result.output
        result = runner.invoke(click_demo.bye, ['--name', 'PSP'])
        assert 'Bye PSP' in result.output
        result = runner.invoke(click_demo.bye)
        assert 'Bye there!' in result.output

    def test_command_hello(self):
        """Testing Command Hello"""
        runner = CliRunner()
        result = runner.invoke(click_demo.hello)
        assert result.exit_code == 0
        result = runner.invoke(click_demo.hello, ['--version'])
        assert 'Version 1.0' in result.output
        result = runner.invoke(click_demo.hello, ['--username', 'PSP'])
        assert 'Hello PSP' in result.output

    def test_command_dropdb(self):
        """Testing Command Drop DB"""
        runner = CliRunner()
        result = runner.invoke(click_demo.dropdb)
        # assert result.exit_code == 0  # not working
        result = runner.invoke(click_demo.dropdb, ['--yes', '--username', 'PSP'])
        assert 'DB dropped by user PSP!' in result.output
        result = runner.invoke(click_demo.dropdb, ['n'])
        assert 'Aborted!' in result.output  # actually not working, confirmed with with

    def test_command_greet(self):
        """Testing Command Hello"""
        runner = CliRunner()
        result = runner.invoke(click_demo.greet)
        assert result.exit_code == 0
        result = runner.invoke(click_demo.greet, ['--dob', '24/03/1991'])
        assert 'Your DOB is 24/03/1991' in result.output
        result = runner.invoke(click_demo.greet, ['--username', 'PSP', '--dob', '24/03/1991'])
        assert 'Hey PSP!' in result.output
        os.environ['USERNAME'] = 'Alien'
        # os.environ['DOB'] = '32/13/0000'
        result = runner.invoke(click_demo.greet)
        assert 'Hey Alien!' in result.output
        # assert '32/13/0000' in result.output
| 30.551402 | 94 | 0.611808 | 412 | 3,269 | 4.75 | 0.203884 | 0.114972 | 0.183955 | 0.235054 | 0.664793 | 0.591722 | 0.53909 | 0.410322 | 0.368421 | 0.343383 | 0 | 0.020799 | 0.249924 | 3,269 | 106 | 95 | 30.839623 | 0.777325 | 0.136739 | 0 | 0.355932 | 0 | 0 | 0.106691 | 0 | 0 | 0 | 0 | 0 | 0.322034 | 1 | 0.135593 | false | 0 | 0.084746 | 0 | 0.237288 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7714672e370032f8e233b4a90238705f7d646cc0 | 1,328 | py | Python | entities/ticking_trait.py | MatthewRobertDunn/PyVaders | 31297da6a50a8f85503cd78f59bdb236ddef20ac | [
"BSD-3-Clause"
] | null | null | null | entities/ticking_trait.py | MatthewRobertDunn/PyVaders | 31297da6a50a8f85503cd78f59bdb236ddef20ac | [
"BSD-3-Clause"
] | null | null | null | entities/ticking_trait.py | MatthewRobertDunn/PyVaders | 31297da6a50a8f85503cd78f59bdb236ddef20ac | [
"BSD-3-Clause"
] | null | null | null | from entities.entity import Entity
import random
#An entity that receives ticks
class TickingTrait(Entity):
delta_time = 0.0 #Time passed per frame, secs
time = 0.0 #time passed since simulation start, secs
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.at_most_funcs = {}
self.after_funcs = {}
#Main game logic
def tick(self):
pass
def update_graphics_model(self):
pass
#Rate limits a function to at most time seconds
def at_most(self, task_name, func, limit):
if(TickingTrait.time - self.at_most_funcs.get(task_name, -limit) >= limit):
func()
self.at_most_funcs[task_name] = TickingTrait.time #We last fired this func now
def chance(self, chance, func):
if random.random() < chance:
func()
def once(self, task_name, func):
self.once_after(task_name,func,0)
#Runs a function once after a length of time has passed
def once_after(self,task_name, func, limit):
fireTime = self.after_funcs.get(task_name, None)
if fireTime is None:
self.after_funcs[task_name] = self.time + limit
return
elif self.time >= fireTime:
func() #fire it
self.after_funcs[task_name] = float('inf')
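The `at_most` rate limiter above can be sketched in isolation. This is a hypothetical standalone version (`RateLimiter`, with the clock passed in explicitly instead of read from the `Entity` base class), showing how the `-limit` default makes the very first call fire immediately:

```python
class RateLimiter:
    def __init__(self):
        # task_name -> time the task last fired
        self.last_fired = {}

    def at_most(self, now, task_name, func, limit):
        # Defaulting to -limit means an unseen task always satisfies the check
        if now - self.last_fired.get(task_name, -limit) >= limit:
            func()
            self.last_fired[task_name] = now

calls = []
limiter = RateLimiter()
limiter.at_most(0.0, 'shoot', lambda: calls.append('shoot'), 1.0)  # fires (first call)
limiter.at_most(0.5, 'shoot', lambda: calls.append('shoot'), 1.0)  # suppressed (0.5s < 1s)
limiter.at_most(1.0, 'shoot', lambda: calls.append('shoot'), 1.0)  # fires again
```

After the three calls, `calls` holds two entries: the middle call is dropped because only half the limit elapsed since the last firing.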
| 32.390244 | 92 | 0.624247 | 181 | 1,328 | 4.39779 | 0.38674 | 0.090452 | 0.070352 | 0.056533 | 0.148241 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005236 | 0.280873 | 1,328 | 40 | 93 | 33.2 | 0.828272 | 0.184488 | 0 | 0.166667 | 0 | 0 | 0.002791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0.066667 | 0.066667 | 0 | 0.433333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
771a3c7c186bcdee1104a1dc9d9e85a0b7640f86 | 357 | py | Python | A U L A - 0 7/ex014.py | DarthVeigan/PythonCursoEmVideo | c619b95563531a969078a5f1e041406355415363 | [
"Apache-2.0"
] | null | null | null | A U L A - 0 7/ex014.py | DarthVeigan/PythonCursoEmVideo | c619b95563531a969078a5f1e041406355415363 | [
"Apache-2.0"
] | null | null | null | A U L A - 0 7/ex014.py | DarthVeigan/PythonCursoEmVideo | c619b95563531a969078a5f1e041406355415363 | [
"Apache-2.0"
] | null | null | null | """
CONVERSOR DE TEMPERATURAS ºC E ºF
"""
c = float(input('Informe a temperatura em ºC: '))
f = float(input('Informe a temperatura em ºF: '))
Tc = (f-32)/1.8
Tf = (c*1.8)+32
print('A temperatura de {:.2f}ºC Corresponde a {:.2f}ºF '.format(c,Tf))
print('A temperatura de {:.2f}ºF Corresponde a {:.2f}ºC'.format(f,Tc))
#ºC->ºF = ((9*c)/5)+32
#ºF->ºC = (( | 22.3125 | 71 | 0.613445 | 66 | 357 | 3.318182 | 0.378788 | 0.219178 | 0.155251 | 0.164384 | 0.474886 | 0.283105 | 0 | 0 | 0 | 0 | 0 | 0.05298 | 0.154062 | 357 | 16 | 72 | 22.3125 | 0.672185 | 0.184874 | 0 | 0 | 0 | 0 | 0.551601 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
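The two conversion formulas can be wrapped as reusable functions (hypothetical names `celsius_to_fahrenheit` / `fahrenheit_to_celsius`, matching the `Tf` and `Tc` expressions above):

```python
def celsius_to_fahrenheit(c):
    # Tf = (c * 1.8) + 32
    return (c * 1.8) + 32

def fahrenheit_to_celsius(f):
    # Tc = (f - 32) / 1.8
    return (f - 32) / 1.8

print(f'{celsius_to_fahrenheit(100):.2f}')  # prints 212.00
```

For instance, 100 ºC (water boiling) converts to 212 ºF, and 32 ºF converts back to 0 ºC.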
7726683549d7f8a172e83a0ddeff2f43d065ee6a | 14,737 | py | Python | ground_station/python/kml_animator/process_animation.py | jlivingstonsg/Cellbots-2019 | 2e4635beab0cabef7a75e9d863d588b51db0e74d | [
"Apache-2.0"
] | 2 | 2018-10-11T16:11:11.000Z | 2018-10-11T16:15:53.000Z | ground_station/python/kml_animator/process_animation.py | jlivingstonsg/Cellbots-2019 | 2e4635beab0cabef7a75e9d863d588b51db0e74d | [
"Apache-2.0"
] | 38 | 2015-03-03T22:32:20.000Z | 2015-03-03T22:32:47.000Z | ground_station/python/kml_animator/process_animation.py | jlivingstonsg/Cellbots-2019 | 2e4635beab0cabef7a75e9d863d588b51db0e74d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#-119.160238,40.855990,05374 <!-- 16110 sats:8 UTC 20:41:00 -->
index_page = '''
<html>
<head>
<title>X20 Rocket Launch</title>
<script src="https://www.google.com/jsapi"> </script>
<script src="http://earth-api-samples.googlecode.com/svn/trunk/lib/kmldomwalk.js" type="text/javascript"> </script>
<script type="text/javascript">
var ge;
var tour;
google.load("earth", "1");
function init() {
google.earth.createInstance('map3d', initCB, failureCB);
}
function initCB(instance) {
ge = instance;
ge.getWindow().setVisibility(true);
ge.getNavigationControl().setVisibility(ge.VISIBILITY_SHOW);
var href = '%s/x20.kml';
google.earth.fetchKml(ge, href, fetchCallback);
function fetchCallback(fetchedKml) {
// Alert if no KML was found at the specified URL.
if (!fetchedKml) {
setTimeout(function() {
alert('Bad or null KML');
}, 0);
return;
}
// Add the fetched KML into this Earth instance.
ge.getFeatures().appendChild(fetchedKml);
// Walk through the KML to find the tour object; assign to variable 'tour.'
walkKmlDom(fetchedKml, function() {
if (this.getType() == 'KmlTour') {
tour = this;
return false;
}
});
}
}
function failureCB(errorCode) {
}
// Tour control functions.
function enterTour() {
if (!tour) {
alert('No tour found!');
return;
}
ge.getTourPlayer().setTour(tour);
ge.getTourPlayer().play();
}
function pauseTour() {
ge.getTourPlayer().pause();
}
function resetTour() {
ge.getTourPlayer().reset();
}
function exitTour() {
ge.getTourPlayer().setTour(null);
}
google.setOnLoadCallback(init);
</script>
</head>
<body>
<div id="map3d" style="height: 700px; width: 1200px;"></div>
<div id ="controls">
<input type="button" onclick="enterTour()" value="Travel To Black Rock Desert and Launch X20 Rocket"/>
<input type="button" onclick="resetTour()" value="Reset"/>
</div>
</body>
</html>'''
#HEADER DEFINITION
header = '''<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
xmlns:gx="http://www.google.com/kml/ext/2.2"
xmlns:kml="http://www.opengis.net/kml/2.2"
xmlns:atom="http://www.w3.org/2005/Atom">
<Document>
<!-- HUNDRED K LINE! -->
<Style id="hundredk">
<LineStyle>
<color>7700ff00</color>
<width>100</width>
</LineStyle>
</Style>
<Placemark>
<styleUrl>#hundredk</styleUrl>
<LineString id="hundredk">
<extrude>0</extrude>
<tessellate>1</tessellate>
<altitudeMode>absolute</altitudeMode>
<coordinates>
-119.112413,40.853570,31669
-119.515,40.903570,31669
</coordinates>
</LineString>
</Placemark>
<Placemark id="hundredkballoon">
<name>100K feet above launch altitude</name>
<Style>
<IconStyle>
<Icon>
</Icon>
</IconStyle>
</Style>
<Point>
<gx:altitudeMode>absolute</gx:altitudeMode>
<coordinates>-119.3,40.87,31669</coordinates>
</Point>
</Placemark>
<Placemark id="achievementballoon">
<Style>
<IconStyle>
<Icon>
</Icon>
</IconStyle>
<BalloonStyle>
<bgColor>ff444444</bgColor>
<text><![CDATA[
<font face="sans-serif" color="white" size="+3"><b>Achievement Unlocked:</b> Reached Carmack Micro Prize altitude!</font>
]]></text>
</BalloonStyle>
</Style>
<description>
Achievement Unlocked: Reached Carmack Micro Prize altitude!
</description>
<Point>
<gx:altitudeMode>absolute</gx:altitudeMode>
<coordinates>-119.3,40.87,31669</coordinates>
</Point>
</Placemark>
<!-- WHOLEROCKET TRAIL -->
<Style id="wholeplume">
<LineStyle>
<color>ff0077ff</color>
<width>10</width>
</LineStyle>
</Style>
<Placemark>
<styleUrl>#wholeplume</styleUrl>
<LineString id="wholeTrack">
<extrude>0</extrude>
<tessellate>1</tessellate>
<altitudeMode>absolute</altitudeMode>
<coordinates>
%s
</coordinates>
</LineString>
</Placemark>
<!-- WHOLE MODEL -->
<Placemark>
<Model>
<altitudeMode>absolute</altitudeMode>
<Location id="wholeLocation">
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
</Location>
<Orientation id="wholeOrientation">
<heading>0</heading>
<tilt>0</tilt>
<roll>0</roll>
</Orientation>
<Scale id="wholeScale">
<x>10</x>
<y>10</y>
<z>10</z>
</Scale>
<Link>
<href>%s/whole.dae</href>
</Link>
</Model>
</Placemark>
<!-- SUSTAINERROCKET TRAIL -->
<Style id="sustainerplume">
<LineStyle>
<color>ff0077ff</color>
<width>10</width>
</LineStyle>
</Style>
<Placemark>
<styleUrl>#sustainerplume</styleUrl>
<LineString id="sustainerTrack">
<extrude>0</extrude>
<tessellate>1</tessellate>
<altitudeMode>absolute</altitudeMode>
<coordinates>
%s
</coordinates>
</LineString>
</Placemark>
<!-- SUSTAINER MODEL -->
<Placemark>
<Model>
<altitudeMode>absolute</altitudeMode>
<Location id="sustainerLocation">
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
</Location>
<Orientation id="sustainerOrientation">
<heading>0</heading>
<tilt>0</tilt>
<roll>0</roll>
</Orientation>
<Scale id="sustainerScale">
<x>0</x>
<y>0</y>
<z>0</z>
</Scale>
<Link>
<href>%s/sustainer.dae</href>
</Link>
</Model>
</Placemark>
<!-- BOOSTERROCKET TRAIL -->
<Style id="boosterplume">
<LineStyle>
<color>ff0077ff</color>
<width>10</width>
</LineStyle>
</Style>
<Placemark>
<styleUrl>#boosterplume</styleUrl>
<LineString id="boosterTrack">
<extrude>0</extrude>
<tessellate>1</tessellate>
<altitudeMode>absolute</altitudeMode>
<coordinates>
%s
</coordinates>
</LineString>
</Placemark>
<!-- BOOSTER MODEL -->
<Placemark>
<Model>
<altitudeMode>absolute</altitudeMode>
<Location id="boosterLocation">
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
</Location>
<Orientation id="boosterOrientation">
<heading>0</heading>
<tilt>0</tilt>
<roll>0</roll>
</Orientation>
<Scale id="boosterScale">
<x>0</x>
<y>0</y>
<z>0</z>
</Scale>
<Link>
<href>%s/booster.dae</href>
</Link>
</Model>
</Placemark>
<gx:Tour>
<name>X20 Rocket Launch</name>
<gx:Playlist>
<!-- Fly to our start location -->
<gx:FlyTo>
<gx:duration>%d</gx:duration>
<Camera>
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
<heading>0</heading>
<tilt>90</tilt>
<roll>0</roll>
<altitudeMode>absolute</altitudeMode>
</Camera>
</gx:FlyTo>'''
#### END OF HEADER
def vehicle_kml(name, duration, long, lat, alt, heading, tilt, roll, track_coord, balloon_actions, scale):
vehicle_template = '''
<!-- Rocket -->
<gx:AnimatedUpdate>
<gx:duration>%d</gx:duration>
<Update>
<targetHref></targetHref>
<Change>
<Location targetId="%sLocation">
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
</Location>
</Change>
<Change>
<Orientation targetId="%sOrientation">
<heading>%f</heading>
<tilt>%f</tilt>
<roll>%f</roll>
</Orientation>
</Change>
<Change>
<LineString targetId="%sTrack">
<coordinates>
%s
</coordinates>
</LineString>
</Change>
%s
<Change>
<Scale targetId="%sScale">
<x>%d</x>
<y>%d</y>
<z>%d</z>
</Scale>
</Change>
</Update>
</gx:AnimatedUpdate>
'''
return vehicle_template % (duration, name,long, lat, alt, name, heading, tilt, roll, name, track_coord, balloon_actions, name, scale, scale, scale)
def camera_kml(duration, long, lat, alt, heading, tilt, roll):
camera_template = '''
<!-- Camera -->
<gx:FlyTo>
<gx:duration>%d</gx:duration>
<gx:flyToMode>smooth</gx:flyToMode>
<Camera>
<longitude>%f</longitude>
<latitude>%f</latitude>
<altitude>%f</altitude>
<heading>%f</heading>
<tilt>%f</tilt>
<roll>%f</roll>
<altitudeMode>absolute</altitudeMode>
</Camera>
</gx:FlyTo>
'''
return camera_template % (duration, long, lat, alt, heading, tilt, roll)
tail = '''
<!-- Final Camera Zoom-->
<gx:FlyTo>
<gx:duration>2</gx:duration>
<gx:flyToMode>smooth</gx:flyToMode>
<Camera>
<longitude>-119.32</longitude>
<latitude>39.75</latitude>
<altitude>10000</altitude>
<heading>0</heading>
<tilt>90</tilt>
<roll>0</roll>
<altitudeMode>absolute</altitudeMode>
</Camera>
</gx:FlyTo>
</gx:Playlist>
</gx:Tour>
</Document>
</kml>
'''
import math
import sys
def distance(origin, destination):
lat1, lon1 = origin
lat2, lon2 = destination
radius = 6371 # km
dlat = math.radians(lat2-lat1)
dlon = math.radians(lon2-lon1)
a = math.sin(dlat/2) * math.sin(dlat/2) + math.cos(math.radians(lat1)) \
* math.cos(math.radians(lat2)) * math.sin(dlon/2) * math.sin(dlon/2)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1-a))
d = radius * c
return d
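To sanity-check the great-circle helper above, here is a minimal self-contained sketch (the coordinates are illustrative; the 6371 km Earth radius matches the constant in the function):

```python
import math

def distance(origin, destination):
    """Haversine great-circle distance in kilometres (same maths as above)."""
    lat1, lon1 = origin
    lat2, lon2 = destination
    radius = 6371  # mean Earth radius, km
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return radius * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# One degree of latitude is roughly 111 km at any longitude.
print(round(distance((40.0, -119.0), (41.0, -119.0)), 1))  # 111.2
```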
if len(sys.argv) == 1:
print "must specify a serverroot - e.g. http://www.corp.google.com/~wdavies"
sys.exit(1)
findex = open('index.html', 'w')
findex.write(index_page % sys.argv[1])
findex.close()
curr_time = 0
total_time = 0
log_time = 10
prev_long = 0
prev_lat = 0
prev_alt = 0
prev_degree = 0
fkml = open('x20.kml', 'w')
f = open('./input.kml', 'r')
coords = ""
balloon = ""
seperation = 0
sep_long = 0
sep_lat = 0
sep_alt = 0
for line in f:
long, lat, alt, time = line.strip().split("\t")
long = float(long)
lat = float(lat)
alt = float(alt)
hr, mi, sec = map(int, time.split(":"))
if curr_time == 0:
coords = "%f,%f,%f\n" % (long, lat, alt)
fkml.write(header % (coords, long, lat, alt, sys.argv[1],
coords, long, lat, alt, sys.argv[1],
coords, long, lat, alt, sys.argv[1],
5, (long - 0.0), (lat - 0.05), alt ))
start_time = hr * 3600 + mi * 60 + sec
prev_time = hr * 3600 + mi * 60 + sec
prev_long = long
prev_lat = lat
prev_alt = alt
curr_time = hr * 3600 + mi * 60 + sec
time_diff = curr_time - prev_time
total_time = curr_time - start_time
if alt == 33089:
balloon = '''
<Change>
<Placemark targetId="achievementballoon">
<gx:balloonVisibility>1</gx:balloonVisibility>
</Placemark>
</Change>'''
else:
balloon = '''<Change>
<Placemark targetId="achievementballoon">
<gx:balloonVisibility>0</gx:balloonVisibility>
</Placemark>
</Change>'''
sys.stderr.write("time: %d, (%d), alt: %d\n" % (curr_time - start_time, time_diff, alt))
if time_diff >= 10 or alt == 33089 or alt == 7533:
if alt == 7533.0 :
sys.stderr.write("SEPARATION ADJUSTMENT!\n")
long = -119.159620
lat = 40.859520
alt = 8279
sep_long = long
sep_lat = lat
sep_alt = alt
seperation = 1
sys.stderr.write("DISPLAY: time: %d, (%d), alt: %d\n" % (curr_time - start_time, time_diff, alt))
horiz = distance([prev_lat, prev_long], [lat, long])
vert = (alt - prev_alt)/1000
degree = math.degrees(math.atan(vert/horiz))
coords = coords + "%f,%f,%f\n" % (long, lat, alt)
if seperation == 0:
fkml.write(vehicle_kml("sustainer", log_time, long, lat, alt, 0, 0, 90-degree, coords, balloon, 0))
fkml.write(vehicle_kml("booster", log_time, long, lat, alt, 0, 0, 90-degree, coords, balloon, 10))
fkml.write(vehicle_kml("whole", log_time, long, lat, alt, 0, 0, 90-degree, coords, balloon, 10))
fkml.write(camera_kml(log_time, (long - 0.0), (lat - 0.1), alt*0.8, 0, 90, 0))
if seperation == 1:
fkml.write(vehicle_kml("sustainer", 1, long, lat, alt, 0, 0, 90-degree, coords, balloon, 10))
fkml.write(vehicle_kml("booster", 1, sep_long, sep_lat, sep_alt, 0, 0, 90-degree, "", balloon, 10))
fkml.write(vehicle_kml("whole", 1, long, lat, alt, 0, 0, 90-degree, "", balloon, 0))
fkml.write(camera_kml(1, (long - 0.0), (lat - 0.1), alt*0.8, 0, 90, 0))
if seperation > 1:
sep_alt = sep_alt * 0.35
fkml.write(vehicle_kml("sustainer", log_time, long, lat, alt, 0, 0, 90-degree, coords, balloon, 10))
fkml.write(vehicle_kml("booster", log_time, sep_long, sep_lat, sep_alt, 0, 0, 90-degree, "", balloon, 10))
fkml.write(vehicle_kml("whole", log_time, long, lat, alt, 0, 0, 90-degree, "", balloon, 0))
fkml.write(camera_kml(log_time, (long - 0.0), (lat - 0.3), alt*0.8, 0, 90, 0))
if seperation == 1:
seperation = 2
prev_time = curr_time
prev_long = long
prev_lat = lat
prev_alt = alt
prev_degree = degree
if log_time > 1:
log_time -= 1
fkml.write(tail)
fkml.close()
f.close()
| 29.124506 | 151 | 0.534369 | 1,601 | 14,737 | 4.86321 | 0.22361 | 0.017082 | 0.021834 | 0.021963 | 0.469304 | 0.425507 | 0.423324 | 0.372592 | 0.336758 | 0.306447 | 0 | 0.042414 | 0.307254 | 14,737 | 505 | 152 | 29.182178 | 0.720247 | 0.007668 | 0 | 0.45098 | 0 | 0.006536 | 0.693602 | 0.187205 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.004357 | null | null | 0.002179 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
773b6fa5822a629c9c6e788c1fdc1ef762479406 | 3,358 | py | Python | mutalyzer_spdi_parser/convert.py | mutalyzer/spdi-parser | c20279296b306b978a79211195f52d121562d6d2 | [
"MIT"
] | null | null | null | mutalyzer_spdi_parser/convert.py | mutalyzer/spdi-parser | c20279296b306b978a79211195f52d121562d6d2 | [
"MIT"
] | 1 | 2021-10-14T07:43:28.000Z | 2021-10-14T07:43:28.000Z | mutalyzer_spdi_parser/convert.py | mutalyzer/spdi-parser | c20279296b306b978a79211195f52d121562d6d2 | [
"MIT"
] | null | null | null | """
Module for converting SPDI descriptions and lark parse trees
to their equivalent dictionary models.
"""
from lark import Transformer
from .spdi_parser import parse
def to_spdi_model(description):
"""
Convert an SPDI description to a dictionary model.
:arg str description: SPDI description.
:returns: Description dictionary model.
:rtype: dict
"""
return parse_tree_to_model(parse(description))
def to_hgvs_internal_model(description):
"""
Convert an SPDI description to an internal HGVS dictionary model
(delins variants with internal locations).
:arg str description: SPDI description.
:returns: HGVS internal dictionary model.
:rtype: dict
"""
return _to_hgvs_internal(parse_tree_to_model(parse(description)))
def parse_tree_to_model(parse_tree):
"""
Convert a parse tree to a dictionary model.
:arg lark.Tree parse_tree: SPDI description equivalent parse tree.
:returns: Description dictionary model.
:rtype: dict
"""
return Converter().transform(parse_tree)
class Converter(Transformer):
def description(self, children):
return {k: v for d in children for k, v in d.items()}
def deleted_sequence(self, children):
return {"deleted_sequence": children[0]}
def inserted_sequence(self, children):
return {"inserted_sequence": children[0]}
def position(self, children):
return {"position": children[0]}
def deleted_length(self, children):
return {"deleted_length": children[0]}
def sequence(self, children):
return children[0]
def NUMBER(self, name):
return int(name)
def D_SEQUENCE(self, name):
return name.value
def R_SEQUENCE(self, name):
return name.value
def P_SEQUENCE(self, name):
return name.value
def ID(self, name):
return {"seq_id": name.value}
def _to_hgvs_internal(s_m):
m = {"type": "description_dna", "reference": {"id": s_m["seq_id"]}}
v = {"type": "deletion_insertion"}
if s_m.get("deleted_sequence"):
v["location"] = _range(
s_m["position"], s_m["position"] + len(s_m["deleted_sequence"])
)
v["deleted"] = [{"sequence": s_m["deleted_sequence"], "source": "description"}]
elif s_m.get("deleted_length"):
v["location"] = _range(s_m["position"], s_m["position"] + s_m["deleted_length"])
else:
v["location"] = _range(s_m["position"], s_m["position"])
if s_m.get("inserted_sequence"):
v["inserted"] = [
{"sequence": s_m["inserted_sequence"], "source": "description"}
]
if not s_m.get("inserted_sequence") and not (
s_m.get("deleted_sequence") or s_m.get("deleted_length")
):
v["location"] = _range(s_m["position"], s_m["position"] + 1)
v["inserted"] = [
{
"location": _range(s_m["position"], s_m["position"] + 1),
"source": "reference",
}
]
m["variants"] = [v]
return m
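To illustrate the model shape that `_to_hgvs_internal` returns, the following standalone sketch rebuilds the `_range` helper and the expected deletion-insertion variant for a hypothetical SPDI description `NC_000001.11:100:C:T` (one base deleted at position 100, one inserted):

```python
def _range(s, e):
    # Same helper as above: a range between two point locations.
    return {
        "type": "range",
        "start": {"type": "point", "position": s},
        "end": {"type": "point", "position": e},
    }

# Hypothetical SPDI "NC_000001.11:100:C:T": delete "C" at position 100,
# insert "T" -- modelled as a single deletion_insertion variant.
s_m = {"seq_id": "NC_000001.11", "position": 100,
       "deleted_sequence": "C", "inserted_sequence": "T"}

variant = {
    "type": "deletion_insertion",
    "location": _range(s_m["position"],
                       s_m["position"] + len(s_m["deleted_sequence"])),
    "deleted": [{"sequence": s_m["deleted_sequence"], "source": "description"}],
    "inserted": [{"sequence": s_m["inserted_sequence"], "source": "description"}],
}
print(variant["location"]["start"]["position"],
      variant["location"]["end"]["position"])  # 100 101
```

The end position is start plus the length of the deleted sequence, matching the deleted-sequence branch above.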
def _range(s, e):
return {
"type": "range",
"start": {
"type": "point",
"position": s,
},
"end": {
"type": "point",
"position": e,
},
}
def _point(p):
return {
"type": "point",
"position": p,
}
| 26.031008 | 88 | 0.601549 | 404 | 3,358 | 4.811881 | 0.205446 | 0.022634 | 0.05144 | 0.033951 | 0.412551 | 0.324074 | 0.287551 | 0.106481 | 0.106481 | 0.053498 | 0 | 0.00282 | 0.26087 | 3,358 | 128 | 89 | 26.234375 | 0.780419 | 0.181656 | 0 | 0.131579 | 0 | 0 | 0.210923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.223684 | false | 0 | 0.026316 | 0.171053 | 0.486842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
77464dd73b6563bfef2f452f4f811872c16b70ae | 196 | py | Python | _modules/_site/week_10.py | CoffeePoweredComputers/489-data-structures | e2a56f3cecde42bf1520030e24750cbe17e1b704 | [
"MIT"
] | null | null | null | _modules/_site/week_10.py | CoffeePoweredComputers/489-data-structures | e2a56f3cecde42bf1520030e24750cbe17e1b704 | [
"MIT"
] | null | null | null | _modules/_site/week_10.py | CoffeePoweredComputers/489-data-structures | e2a56f3cecde42bf1520030e24750cbe17e1b704 | [
"MIT"
] | null | null | null | Oct 24
:
Oct 25
:
**Lecture**{: .label .label-light-blue} X
Oct 26
:
Oct 27
: **Lecture**
Oct 28
: **zyBooks**{: .label .label-orange}
Oct 29
: **Nothing Due**
Oct 30
: **Nothing Due**
| 8.521739 | 41 | 0.571429 | 29 | 196 | 3.862069 | 0.586207 | 0.178571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092105 | 0.22449 | 196 | 22 | 42 | 8.909091 | 0.644737 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |