hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dcdd023a81feca70c98120ea168d3604a0c94976 | 416 | py | Python | app/config.py | dogukangungordi/cinetify-Movie | 85946010f4471cef0fb42873d50d59493372d060 | [
"MIT"
] | null | null | null | app/config.py | dogukangungordi/cinetify-Movie | 85946010f4471cef0fb42873d50d59493372d060 | [
"MIT"
] | null | null | null | app/config.py | dogukangungordi/cinetify-Movie | 85946010f4471cef0fb42873d50d59493372d060 | [
"MIT"
] | null | null | null | import os
TWO_WEEKS = 1209600
SECRET_KEY = os.getenv('SECRET_KEY', None)
assert SECRET_KEY
TOKEN_EXPIRES = TWO_WEEKS
# Default to the linked Postgres container's address (Docker links convention)
# when DATABASE_URL is not set explicitly
DATABASE_URL = os.getenv(
    'DATABASE_URL',
    'postgres://postgres@{0}:5432/postgres'.format(os.getenv('DB_PORT_5432_TCP_ADDR', None)))
assert DATABASE_URL
REDIS_HOST = os.getenv('REDIS_HOST', os.getenv('REDIS_PORT_6379_TCP_ADDR', None))
REDIS_PASSWORD = os.getenv('REDIS_PASSWORD', None)
| 23.111111 | 93 | 0.759615 | 63 | 416 | 4.68254 | 0.412698 | 0.162712 | 0.132203 | 0.115254 | 0.132203 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053908 | 0.108173 | 416 | 17 | 94 | 24.470588 | 0.74124 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 0.197115 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0 | false | 0.090909 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
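A minimal usage sketch for a fail-fast config module like the one above, assuming the repo's `app` package is importable (the environment values below are hypothetical):

```python
# Sketch only: provide the required environment variables before importing
# the config module, which asserts at import time.
import os

os.environ.setdefault('SECRET_KEY', 'dev-secret')            # hypothetical value
os.environ.setdefault('DB_PORT_5432_TCP_ADDR', 'localhost')  # hypothetical DB host

from app import config  # raises AssertionError if SECRET_KEY is still unset

print(config.DATABASE_URL)  # postgres://postgres@localhost:5432/postgres
```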
dcdd6e685bb422c18bab7a2d6e2b60a9ba328309 | 597 | py | Python | 2020/day02/password_philosopy.py | rycmak/advent-of-code | 2a3289516f4c1d0bc1d24a38d495a93edcb19e29 | [
"MIT"
] | 1 | 2021-03-03T01:40:09.000Z | 2021-03-03T01:40:09.000Z | 2020/day02/password_philosopy.py | rycmak/advent-of-code | 2a3289516f4c1d0bc1d24a38d495a93edcb19e29 | [
"MIT"
] | null | null | null | 2020/day02/password_philosopy.py | rycmak/advent-of-code | 2a3289516f4c1d0bc1d24a38d495a93edcb19e29 | [
"MIT"
] | null | null | null | file = open("input.txt", "r")
num_valid = 0
for line in file:
# policy = part before colon
policy = line.strip().split(":")[0]
# get min/max number allowed for given letter
min_max = policy.split(" ")[0]
letter = policy.split(" ")[1]
min = int(min_max.split("-")[0])
max = int(min_max.split("-")[1])
# password = part after colon
password = line.strip().split(":")[1]
# check if password contains between min and max of given letter
if password.count(letter) >= min and password.count(letter) <= max:
num_valid += 1
print("Number of valid passwords = ", num_valid) | 28.428571 | 69 | 0.644891 | 90 | 597 | 4.211111 | 0.411111 | 0.063325 | 0.073879 | 0.073879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016529 | 0.18928 | 597 | 21 | 70 | 28.428571 | 0.766529 | 0.269682 | 0 | 0 | 0 | 0 | 0.101852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dce163daae46473015d3b2a1132e2c0325c306ae | 669 | py | Python | landmarkrest/field_predictor/field_models/TwoDigitYear.py | inferlink/landmark-rest | 5bda40424bd1d62c64c9f4931855b4e341742b95 | [
"BSD-4-Clause"
] | null | null | null | landmarkrest/field_predictor/field_models/TwoDigitYear.py | inferlink/landmark-rest | 5bda40424bd1d62c64c9f4931855b4e341742b95 | [
"BSD-4-Clause"
] | null | null | null | landmarkrest/field_predictor/field_models/TwoDigitYear.py | inferlink/landmark-rest | 5bda40424bd1d62c64c9f4931855b4e341742b95 | [
"BSD-4-Clause"
] | null | null | null | from BaseModel import BaseModel
class TwoDigitYear(BaseModel):
def __init__(self):
super(TwoDigitYear, self).__init__()
def generate_confidence(self, preceding_stripes, slot_values, following_stripes):
# only care about ints for this model, so strip out anything that isn't
valid_values = [z for z in slot_values if str(z).isdigit()]
# two digit number
matches = list(enumerate([(0 <= int(a) <= 99) and str(a).isdigit() and
len(str(a)) == 2 for a in valid_values]))
confidence = float(len([z for z in matches if z[1]])) / float(len(slot_values))
return confidence
| 33.45 | 87 | 0.630792 | 91 | 669 | 4.461538 | 0.571429 | 0.073892 | 0.024631 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010163 | 0.264574 | 669 | 19 | 88 | 35.210526 | 0.815041 | 0.12855 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
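A hypothetical usage sketch (the value list is made up): the confidence is the fraction of slot values that are two-digit integers, and the stripe arguments are unused by this particular model:

```python
model = TwoDigitYear()
slot_values = ['98', '07', '1998', 'n/a', '21']
# '98', '07' and '21' qualify -> 3 of 5 values -> 0.6
print(model.generate_confidence([], slot_values, []))
```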
dce9859de967085bbcf63975cb47a3c6a5bf26ec | 1,087 | py | Python | monsterapi/migrations/0023_check.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | 1 | 2018-11-05T13:08:48.000Z | 2018-11-05T13:08:48.000Z | monsterapi/migrations/0023_check.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | null | null | null | monsterapi/migrations/0023_check.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.3 on 2018-11-24 13:52
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('monsterapi', '0022_auto_20181123_2339'),
]
operations = [
migrations.CreateModel(
name='Check',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('result', models.BooleanField(default=None)),
('created_date', models.DateTimeField(default=django.utils.timezone.now)),
('game', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='monsterapi.Game')),
('melody', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='monsterapi.Melody')),
('monster', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='monsterapi.Monster')),
],
),
]
| 40.259259 | 140 | 0.643054 | 122 | 1,087 | 5.647541 | 0.47541 | 0.058055 | 0.081277 | 0.127721 | 0.357039 | 0.357039 | 0.357039 | 0.357039 | 0.357039 | 0.357039 | 0 | 0.036385 | 0.216191 | 1,087 | 26 | 141 | 41.807692 | 0.7723 | 0.041398 | 0 | 0 | 1 | 0 | 0.122115 | 0.022115 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
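For reference, the model that this auto-generated migration creates would look roughly like the sketch below (reconstructed from the `CreateModel` operation, not copied from the repository):

```python
from django.db import models
from django.utils import timezone

class Check(models.Model):
    result = models.BooleanField(default=None)
    created_date = models.DateTimeField(default=timezone.now)
    game = models.ForeignKey('monsterapi.Game', blank=True, null=True,
                             on_delete=models.CASCADE)
    melody = models.ForeignKey('monsterapi.Melody', blank=True, null=True,
                               on_delete=models.CASCADE)
    monster = models.ForeignKey('monsterapi.Monster', blank=True, null=True,
                                on_delete=models.CASCADE)
```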
dcec3b942ff3cfa0abdd8f17276dd930a550b6c9 | 936 | py | Python | test cases/unittest_homophone_module.py | johnbumgarner/wordhoard | c71ad970505801ffe6d5c640c63f073c434b9a47 | [
"MIT"
] | 40 | 2020-10-21T19:49:51.000Z | 2022-03-05T20:46:58.000Z | test cases/unittest_homophone_module.py | johnbumgarner/wordhoard | c71ad970505801ffe6d5c640c63f073c434b9a47 | [
"MIT"
] | 10 | 2021-08-15T13:56:03.000Z | 2022-03-03T14:15:26.000Z | test cases/unittest_homophone_module.py | johnbumgarner/wordhoard | c71ad970505801ffe6d5c640c63f073c434b9a47 | [
"MIT"
] | 4 | 2020-12-30T15:22:07.000Z | 2022-02-01T21:05:49.000Z | #!/usr/bin/env python3
"""
This Python script is designed to perform unit testing of Wordhoard's
Homophones module.
"""
__author__ = 'John Bumgarner'
__date__ = 'September 20, 2020'
__status__ = 'Quality Assurance'
__license__ = 'MIT'
__copyright__ = "Copyright (C) 2021 John Bumgarner"
import unittest
from wordhoard import Homophones
class TestHomophoneFunction(unittest.TestCase):
def test_homophone_always_pass(self):
"""
        This test is designed to pass, because the word "horse" has known homophones
        and the default output format is a list
:return:
"""
self.assertIsInstance(Homophones('horse').find_homophones(), list)
def test_homophone_always_fail(self):
"""
        This test is designed to fail, because the word "pig" has no known homophones
        :return:
        """
        self.assertIsNone(Homophones('pig').find_homophones())
if __name__ == '__main__':
    unittest.main()
| 25.297297 | 86 | 0.690171 | 111 | 936 | 5.567568 | 0.585586 | 0.048544 | 0.058252 | 0.071197 | 0.07767 | 0.07767 | 0 | 0 | 0 | 0 | 0 | 0.015007 | 0.21688 | 936 | 36 | 87 | 26 | 0.828104 | 0.347222 | 0 | 0 | 0 | 0 | 0.178236 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0.076923 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
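A hedged sketch of the wordhoard API these tests exercise; the exact return format is an assumption inferred from the assertions above (a list for words that have homophones):

```python
from wordhoard import Homophones

results = Homophones('horse').find_homophones()
print(results)  # a list, e.g. entries relating 'horse' to 'hoarse' (format assumed)
```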
dcec6ff0d9def2fe9c68c69ed39626402f66ee06 | 3,492 | py | Python | resist/types/models/message.py | an-dyy/Resist | db4526e2db78bbd8d16567ae3e3880cf2c64eda1 | [
"MIT"
] | 4 | 2022-03-05T21:54:14.000Z | 2022-03-13T07:51:07.000Z | resist/types/models/message.py | an-dyy/Resist | db4526e2db78bbd8d16567ae3e3880cf2c64eda1 | [
"MIT"
] | 1 | 2022-03-09T20:15:09.000Z | 2022-03-10T10:39:25.000Z | resist/types/models/message.py | an-dyy/Resist | db4526e2db78bbd8d16567ae3e3880cf2c64eda1 | [
"MIT"
] | 1 | 2022-03-09T10:58:54.000Z | 2022-03-09T10:58:54.000Z | from __future__ import annotations
from typing import Literal, TypedDict, Union, final
from typing_extensions import NotRequired
from .asset import AssetData
class YoutubeLinkEmbedMetadata(TypedDict):
type: Literal["YouTube"]
id: str
timestamp: NotRequired[str]
class TwitchLinkEmbedMetadata(TypedDict):
type: Literal["Twitch"]
content_type: Literal["Channel", "Clip", "Video"]
id: str
class SpotifyLinkEmbedMetadata(TypedDict):
type: Literal["Spotify"]
content_type: str
id: str
SoundcloudLinkEmbedMetadata = TypedDict(
"SoundcloudLinkEmbedMetadata", {"type": Literal["Soundcloud"]}
)
class BandcampLinkEmbedMetadata(TypedDict):
type: Literal["Bandcamp"]
content_type: Literal["Album", "Track"]
id: str
class EmbedMediaData(TypedDict):
# base fields that both videos and images sent in embeds will have.
url: str
width: int
height: int
class EmbedImageData(EmbedMediaData):
# this contains the data about an image sent in an embed
# for example: a banner image in a URL's embed
size: Literal["Large", "Preview"]
class WebsiteEmbedData(TypedDict):
"""Represents the data of an embed for a URL."""
type: Literal["Website"]
url: NotRequired[str]
special: NotRequired[
YoutubeLinkEmbedMetadata
| SpotifyLinkEmbedMetadata
| TwitchLinkEmbedMetadata
| SoundcloudLinkEmbedMetadata
| BandcampLinkEmbedMetadata
]
title: NotRequired[str]
description: NotRequired[str]
image: NotRequired[EmbedImageData]
video: NotRequired[EmbedMediaData]
site_name: NotRequired[str]
icon_url: NotRequired[str]
colour: NotRequired[str]
class ImageEmbedData(EmbedImageData):
"""Represents the data of an image embed."""
type: Literal["Image"]
class TextEmbedData(TypedDict):
type: Literal["Text"]
icon_url: NotRequired[str]
url: NotRequired[str]
title: NotRequired[str]
description: NotRequired[str]
media: NotRequired[AssetData]
colour: NotRequired[str]
NoneEmbed = TypedDict("NoneEmbed", {"type": Literal["None"]})
@final
class SystemMessageContent(TypedDict):
type: Literal["text"]
content: str
@final
class UserActionSystemMessageContent(TypedDict):
type: Literal[
"user_added",
"user_remove",
"user_joined",
"user_left",
"user_kicked",
"user_banned",
]
id: str
by: NotRequired[str] # sent only with user_added and user_remove
@final
class ChannelActionSystemMessageContent(TypedDict):
type: Literal[
"channel_renamed", "channel_description_changed", "channel_icon_changed"
]
by: str
name: NotRequired[str] # sent only with channel_renamed
MessageEditedData = TypedDict("MessageEditedData", {"$date": str})
class MasqueradeData(TypedDict):
name: NotRequired[str]
avatar: NotRequired[str]
EmbedType = Union[WebsiteEmbedData, ImageEmbedData, TextEmbedData, NoneEmbed]
class MessageData(TypedDict):
_id: str
nonce: NotRequired[str]
channel: str
author: str
content: (
SystemMessageContent
| UserActionSystemMessageContent
| ChannelActionSystemMessageContent
| str
)
attachments: NotRequired[list[AssetData]]
edited: NotRequired[MessageEditedData]
embeds: NotRequired[list[EmbedType]]
mentions: NotRequired[list[str]]
replies: NotRequired[list[str]]
masquerade: NotRequired[MasqueradeData]
| 23.436242 | 80 | 0.70189 | 339 | 3,492 | 7.153392 | 0.336283 | 0.098144 | 0.065979 | 0.01567 | 0.075052 | 0.036289 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203322 | 3,492 | 148 | 81 | 23.594595 | 0.871675 | 0.091924 | 0 | 0.215686 | 0 | 0 | 0.092205 | 0.01711 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.039216 | 0 | 0.715686 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
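An illustrative payload for the `MessageData` shape above (all identifiers are made-up placeholders); only `_id`, `channel`, `author` and `content` are required, since every other key is `NotRequired`:

```python
message: MessageData = {
    "_id": "01MSG",
    "channel": "01CHAN",
    "author": "01USER",
    "content": "hello world",  # plain text; system contents use the TypedDicts above
    "replies": [],
}
```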
dcf39fbfef9164b52a639bb7ce9ec336fdfee6b7 | 635 | py | Python | datenight/urls.py | SarahJaine/date-night | fb63b68cfb115f52c5d3ec39f2e73454c5d63bb6 | [
"MIT"
] | null | null | null | datenight/urls.py | SarahJaine/date-night | fb63b68cfb115f52c5d3ec39f2e73454c5d63bb6 | [
"MIT"
] | null | null | null | datenight/urls.py | SarahJaine/date-night | fb63b68cfb115f52c5d3ec39f2e73454c5d63bb6 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.conf.urls import include, url
from django.contrib import admin
from datenight.views import HomePageView
urlpatterns = [
# Examples:
url(r'^$', HomePageView.as_view(), name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^admin/rq/', include('django_rq.urls')),
url(r'^admin/', include(admin.site.urls)),
]
if settings.DEBUG:
import debug_toolbar
from django.conf.urls.static import static
urlpatterns = [
url(r'^__debug__/', include(debug_toolbar.urls))
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) + urlpatterns
| 27.608696 | 83 | 0.696063 | 83 | 635 | 5.192771 | 0.361446 | 0.046404 | 0.097448 | 0.083527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159055 | 635 | 22 | 84 | 28.863636 | 0.807116 | 0.074016 | 0 | 0.133333 | 0 | 0 | 0.082051 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
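The named route can then be resolved elsewhere in the project; a small sketch (this codebase uses the Django 1.x `django.conf.urls` style, where `reverse` lives in `django.core.urlresolvers` before 1.10 and in `django.urls` afterwards):

```python
from django.urls import reverse  # or: from django.core.urlresolvers import reverse

assert reverse('home') == '/'
```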
dcf4b8560b916ceca9700c2e7bf16bb6a53c4588 | 3,601 | py | Python | src/python/nimbusml/internal/entrypoints/models_ovamodelcombiner.py | GalOshri/NimbusML | a2ba6f51b7c8cdd3c3316d5ecf4605621be3bd8d | [
"MIT"
] | 2 | 2019-03-01T01:22:54.000Z | 2019-07-10T19:57:38.000Z | src/python/nimbusml/internal/entrypoints/models_ovamodelcombiner.py | GalOshri/NimbusML | a2ba6f51b7c8cdd3c3316d5ecf4605621be3bd8d | [
"MIT"
] | null | null | null | src/python/nimbusml/internal/entrypoints/models_ovamodelcombiner.py | GalOshri/NimbusML | a2ba6f51b7c8cdd3c3316d5ecf4605621be3bd8d | [
"MIT"
] | null | null | null | # - Generated by tools/entrypoint_compiler.py: do not edit by hand
"""
Models.OvaModelCombiner
"""
from ..utils.entrypoints import EntryPoint
from ..utils.utils import try_set, unlist
def models_ovamodelcombiner(
training_data,
predictor_model=None,
model_array=None,
use_probabilities=True,
feature_column='Features',
label_column='Label',
weight_column=None,
normalize_features='Auto',
caching='Auto',
**params):
"""
**Description**
Combines a sequence of PredictorModels into a single model
:param model_array: Input models (inputs).
:param training_data: The data to be used for training (inputs).
:param use_probabilities: Use probabilities from learners instead
of raw values. (inputs).
:param feature_column: Column to use for features (inputs).
:param label_column: Column to use for labels (inputs).
:param weight_column: Column to use for example weight (inputs).
:param normalize_features: Normalize option for the feature
column (inputs).
:param caching: Whether learner should cache input training data
(inputs).
:param predictor_model: Predictor model (outputs).
"""
entrypoint_name = 'Models.OvaModelCombiner'
inputs = {}
outputs = {}
if model_array is not None:
inputs['ModelArray'] = try_set(
obj=model_array,
none_acceptable=True,
is_of_type=list)
if training_data is not None:
inputs['TrainingData'] = try_set(
obj=training_data,
none_acceptable=False,
is_of_type=str)
if use_probabilities is not None:
inputs['UseProbabilities'] = try_set(
obj=use_probabilities,
none_acceptable=True,
is_of_type=bool)
if feature_column is not None:
inputs['FeatureColumn'] = try_set(
obj=feature_column,
none_acceptable=True,
is_of_type=str,
is_column=True)
if label_column is not None:
inputs['LabelColumn'] = try_set(
obj=label_column,
none_acceptable=True,
is_of_type=str,
is_column=True)
if weight_column is not None:
inputs['WeightColumn'] = try_set(
obj=weight_column,
none_acceptable=True,
is_of_type=str,
is_column=True)
if normalize_features is not None:
inputs['NormalizeFeatures'] = try_set(
obj=normalize_features,
none_acceptable=True,
is_of_type=str,
values=[
'No',
'Warn',
'Auto',
'Yes'])
if caching is not None:
inputs['Caching'] = try_set(
obj=caching,
none_acceptable=True,
is_of_type=str,
values=[
'Auto',
'Memory',
'Disk',
'None'])
if predictor_model is not None:
outputs['PredictorModel'] = try_set(
obj=predictor_model, none_acceptable=False, is_of_type=str)
input_variables = {
x for x in unlist(inputs.values())
if isinstance(x, str) and x.startswith("$")}
output_variables = {
x for x in unlist(outputs.values())
if isinstance(x, str) and x.startswith("$")}
entrypoint = EntryPoint(
name=entrypoint_name, inputs=inputs, outputs=outputs,
input_variables=input_variables,
output_variables=output_variables)
return entrypoint
| 31.867257 | 71 | 0.599833 | 401 | 3,601 | 5.182045 | 0.241895 | 0.028874 | 0.03898 | 0.057748 | 0.27334 | 0.214148 | 0.16795 | 0.139076 | 0.070741 | 0.070741 | 0 | 0 | 0.312969 | 3,601 | 112 | 72 | 32.151786 | 0.839935 | 0.212163 | 0 | 0.253012 | 1 | 0 | 0.068429 | 0.008327 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012048 | false | 0 | 0.024096 | 0 | 0.048193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
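A hypothetical invocation sketch; judging from the variable-collection code above, strings prefixed with `$` are treated as graph variables rather than literal values (the placeholder names below are made up):

```python
ep = models_ovamodelcombiner(
    training_data='$training_data',      # '$'-prefixed -> collected as an input variable
    model_array=['$model1', '$model2'],
    predictor_model='$predictor_model',  # collected as an output variable
)
```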
dcf7760ee0ea08cc59fe587411f1de19eb0c37fe | 374 | py | Python | web/game/migrations/0002_message_visible.py | ihsgnef/kuiperbowl | a0c3e346bc05ed149fdb34f12b872c983a40613e | [
"MIT"
] | null | null | null | web/game/migrations/0002_message_visible.py | ihsgnef/kuiperbowl | a0c3e346bc05ed149fdb34f12b872c983a40613e | [
"MIT"
] | 5 | 2019-10-01T03:34:43.000Z | 2020-05-26T14:28:40.000Z | web/game/migrations/0002_message_visible.py | jasmaa/quizbowl | 282fe17217891266da96bcf1a9da4af5eff80fcc | [
"MIT"
] | 1 | 2021-05-10T01:46:45.000Z | 2021-05-10T01:46:45.000Z | # Generated by Django 2.2.7 on 2020-05-29 19:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('game', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='message',
name='visible',
field=models.BooleanField(default=True),
),
]
| 19.684211 | 52 | 0.585561 | 39 | 374 | 5.564103 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072243 | 0.296791 | 374 | 18 | 53 | 20.777778 | 0.752852 | 0.120321 | 0 | 0 | 1 | 0 | 0.091743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
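Equivalently, the `AddField` operation above corresponds to this field on the (assumed) `Message` model:

```python
from django.db import models

class Message(models.Model):
    visible = models.BooleanField(default=True)
```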
0d058e1ee4a48662be58d98c65151d8e58a92c4b | 430 | py | Python | InterAutoTest_W/testcase/t_pytest/pytest_class.py | xuguoyan/pytest_api3 | c83d8b1fbd2b061db9d6dee40068ac84ae81c708 | [
"MIT"
] | 7 | 2019-11-28T07:17:37.000Z | 2020-10-28T08:24:09.000Z | InterAutoTest_W/testcase/t_pytest/pytest_class.py | xuguoyan/pytest_api3 | c83d8b1fbd2b061db9d6dee40068ac84ae81c708 | [
"MIT"
] | null | null | null | InterAutoTest_W/testcase/t_pytest/pytest_class.py | xuguoyan/pytest_api3 | c83d8b1fbd2b061db9d6dee40068ac84ae81c708 | [
"MIT"
] | 7 | 2021-01-10T14:11:10.000Z | 2022-02-28T12:41:04.000Z | #coding=utf-8
"""
1. Define a test class;
2. Create test methods whose names start with "test"
3. Create setup_class and teardown_class
4. Run and check the results
"""
import pytest
class TestClass():
def test_a(self):
print('test_a')
def test_b(self):
print('test_b')
def setup_class(self):
print('------setup_class------')
def teardown_class(self):
print('------teardown_class------')
if __name__ == "__main__":
pytest.main(['-s', 'pytest_class.py']) | 17.2 | 43 | 0.588372 | 55 | 430 | 4.254545 | 0.509091 | 0.153846 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0.209302 | 430 | 25 | 44 | 17.2 | 0.673529 | 0.174419 | 0 | 0 | 0 | 0 | 0.247126 | 0.140805 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.083333 | 0 | 0.5 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
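For reference, running `pytest -s pytest_class.py` executes the class-level setup once, then both tests in definition order, then the teardown, so the four print statements appear in this order (interleaved with pytest's own progress output):

```
------setup_class------
test_a
test_b
------teardown_class------
```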
0d094d21102a554dd26ab1f57fd940c5c211f30e | 1,469 | py | Python | tests/test_cmd.py | mick88/poeditor-sync | 2326b0ff6c0537b3d4ff729fd45b079e0789d2c7 | [
"MIT"
] | 6 | 2021-07-16T14:19:44.000Z | 2022-03-10T10:27:39.000Z | tests/test_cmd.py | mick88/poeditor-sync | 2326b0ff6c0537b3d4ff729fd45b079e0789d2c7 | [
"MIT"
] | 9 | 2021-07-10T15:57:52.000Z | 2021-10-17T11:44:24.000Z | tests/test_cmd.py | mick88/poeditor-sync | 2326b0ff6c0537b3d4ff729fd45b079e0789d2c7 | [
"MIT"
] | null | null | null | from unittest import TestCase
from click.testing import CliRunner, Result
from poeditor_sync.cmd import poeditor
class CmdReadOnlyTokenTest(TestCase):
def setUp(self) -> None:
super().setUp()
self.runner = CliRunner(env={
'POEDITOR_CONFIG_FILE': 'tests/test.yml',
'POEDITOR_TOKEN': 'e1fc095d70eba2395fec56c6ad9e61c3',
})
def test_poeditor(self):
result: Result = self.runner.invoke(poeditor)
self.assertEqual(result.exit_code, 0)
self.assertTrue(result.stdout.startswith('Usage: poeditor'))
def test_poeditor_pull(self):
result: Result = self.runner.invoke(poeditor, ['pull'])
self.assertEqual(result.exit_code, 0, result.stdout)
def test_poeditor_push(self):
result: Result = self.runner.invoke(poeditor, 'push')
self.assertEqual(result.exit_code, 1)
def test_poeditor_push_terms(self):
        result: Result = self.runner.invoke(poeditor, 'push-terms')
self.assertEqual(result.exit_code, 1)
def test_poeditor_init_blank(self):
result: Result = self.runner.invoke(poeditor, args=['--config-file', 'test_blank_init.yml', 'init'])
self.assertEqual(result.exit_code, 0, result.stdout)
def test_poeditor_init_project_id(self):
result: Result = self.runner.invoke(poeditor, args=['--config-file', 'test_init_projectid.yml', 'init', '458528'])
self.assertEqual(result.exit_code, 0, result.stdout)
| 36.725 | 122 | 0.684139 | 176 | 1,469 | 5.545455 | 0.267045 | 0.071721 | 0.092213 | 0.122951 | 0.57377 | 0.57377 | 0.543033 | 0.461066 | 0.418033 | 0.418033 | 0 | 0.024411 | 0.191287 | 1,469 | 39 | 123 | 37.666667 | 0.797138 | 0 | 0 | 0.241379 | 0 | 0 | 0.128659 | 0.03744 | 0 | 0 | 0 | 0 | 0.241379 | 1 | 0.241379 | false | 0 | 0.103448 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d0cfb3214f9d822574bd483b912ab38da359e78 | 13,211 | py | Python | Project_2/source/project_2.py | larsjbro/FYS4150 | 95ac4e09b5aad133b29c9aabb5be1302abdd8e65 | [
"BSD-2-Clause"
] | null | null | null | Project_2/source/project_2.py | larsjbro/FYS4150 | 95ac4e09b5aad133b29c9aabb5be1302abdd8e65 | [
"BSD-2-Clause"
] | null | null | null | Project_2/source/project_2.py | larsjbro/FYS4150 | 95ac4e09b5aad133b29c9aabb5be1302abdd8e65 | [
"BSD-2-Clause"
] | null | null | null | '''
Created on 14. sep. 2017 v2
@author: LJB
'''
from __future__ import division, absolute_import
from numba import jit, float64, int64, void
import numpy as np
import matplotlib.pyplot as plt
import timeit
_DPI = 250
_SIZE = 0.7
def figure_set_default_size():
F = plt.gcf()
DefaultSize = [8 * _SIZE, 6 * _SIZE]
print "Default size in Inches", DefaultSize
print "Which should result in a %i x %i Image" % (_DPI * DefaultSize[0],
_DPI * DefaultSize[1])
F.set_size_inches(DefaultSize)
return F
def my_savefig(filename):
F = figure_set_default_size()
F.tight_layout()
F.savefig(filename, dpi=_DPI)
def cpu_time(repetition=10, n=100):
'''
Grid n =10^6 and two repetitions gave an average of 7.62825565887 seconds.
'''
time_per_call = timeit.timeit('solve_schroedinger({},5)'.format(n),
setup='from __main__ import solve_schroedinger',
number=repetition) / repetition
return time_per_call
def cpu_time_jacobi(repetition=10, n=100):
'''
Grid n =10^6 and two repetitions gave an average of 7.512299363 seconds.
'''
time_per_call = timeit.timeit('solve_schroedinger_jacobi({},5)'.format(n),
setup='from __main__ import solve_schroedinger_jacobi',
number=repetition) / repetition
return time_per_call
def jacobi_method(A, epsilon=1.0e-8): #2b,2d
'''
    Jacobi's method for finding the eigenvalues and
    eigenvectors of the symmetric matrix A.
    The eigenvalues of A will be on the diagonal
    of A, with eigenvalue i being A[i][i].
    The jth component of the ith eigenvector
    is stored in R[i][j].
    A: input matrix (n x n)
    R: empty matrix for eigenvectors (n x n)
    n: dimension of matrices
'''
# Setting up the eigenvector matrix
A = np.array(A) # or A=np.atleast_2d(A)
n = len(A)
eigen_vectors = np.eye(n)
# for i in range(n):
# for j in range(n):
# if i == j:
# eigen_vectors[i][j] = 1.0
# else:
# eigen_vectors[i][j] = 0.0
max_number_iterations = n**3
iterations = 0
max_value, k, l = max_off_diag(A)
while max_value > epsilon and iterations < max_number_iterations:
max_value, k, l = max_off_diag(A)
rotate(A, eigen_vectors, k, l, n)
iterations += 1
# print "Number of iterations: {}".format(iterations)
eigen_values = np.diag(A) # eigen_values are the diagonal elements of A
# return eigenvectors and eigen_values
return eigen_vectors, eigen_values, iterations
# @jit(float64(float64[:, :], int32[1], int32[1]), nopython=True)
@jit(nopython=True)
def max_off_diag(A):
''' Function to find the maximum matrix element. Can you figure out a more
elegant algorithm?'''
n = len(A)
    max_val_out = 0.0
    # initialise the indices so a fully diagonal matrix still returns valid
    # values; rotate() is then a no-op since A[k][l] == 0
    k, l = 1, 0
for i in range(n):
for j in range(i + 1, n):
absA = abs(A[i][j])
if absA > max_val_out:
max_val_out = absA
l = i
k = j
return max_val_out, k, l
@jit(void(float64[:, :], float64[:, :], int64, int64, int64), nopython=True)
def rotate(A, R, k, l, n):
'''Function to find the values of cos and sin'''
if A[k][l] != 0.0:
tau = (A[l][l] - A[k][k]) / (2 * A[k][l])
if tau > 0:
t = -tau + np.sqrt(1.0 + tau * tau)
else:
t = -tau - np.sqrt(1.0 + tau * tau)
c = 1 / np.sqrt(1 + t * t)
s = c * t
else:
c = 1.0
s = 0.0
# p.220 7 Eigensystems
a_kk = A[k][k]
a_ll = A[l][l]
# changing the matrix elements with indices k and l
A[k][k] = c * c * a_kk - 2.0 * c * s * A[k][l] + s * s * a_ll
A[l][l] = s * s * a_kk + 2.0 * c * s * A[k][l] + c * c * a_ll
A[k][l] = 0.0 # hard-coding of the zeros
A[l][k] = 0.0
# and then we change the remaining elements
for i in range(n):
if i != k and i != l:
a_ik = A[i][k]
a_il = A[i][l]
A[i][k] = c * a_ik - s * a_il
A[k][i] = A[i][k]
A[i][l] = c * a_il + s * a_ik
A[l][i] = A[i][l]
# Finally, we compute the new eigenvectors
r_ik = R[i][k]
r_il = R[i][l]
R[i][k] = c * r_ik - s * r_il
R[i][l] = c * r_il + s * r_ik
return
class Potential(object):
def __init__(self, omega):
self.omega = omega
def __call__(self, rho):
omega = self.omega
return omega**2 * rho**2 + 1.0 / rho
def test_rho_max_jacobi_interactive_case(omega=0.01, rho_max=40, n=512): #2d
potential = Potential(omega=omega)
# now plot the results for the three lowest lying eigenstates
r, eigenvectors, eigenvalues, iterations = solve_schroedinger_jacobi(
n=n, rho_max=rho_max, potential=potential)
# errors = []
#for i, trueeigenvalue in enumerate([3, 7, 11]):
#errors.append(np.abs(eigenvalues[i] - trueeigenvalue))
# print eigenvalues[i] - trueeigenvalue, eigenvalues[i]
FirstEigvector = eigenvectors[:, 0]
SecondEigvector = eigenvectors[:, 1]
ThirdEigvector = eigenvectors[:, 2]
plt.plot(r, FirstEigvector**2, 'b-',
r, SecondEigvector ** 2, 'g-',
r, ThirdEigvector**2, 'r-')
m0 = max(FirstEigvector**2)
we = np.sqrt(3)*omega
print((we/np.pi)**(1/4)/m0)
r0 = (2*omega**2)**(-1/3)
g = lambda r: m0*np.exp(-0.5*we*(r-r0)**2)
plt.plot(r, g(r), ':')
#plt.axis([0, 4.6, 0.0, 0.025])
plt.xlabel(r'$\rho$')
plt.ylabel(r'$u(\rho)$')
max_r = np.max(r)
print omega
#omega = np.max(errors)
    plt.suptitle(r'Normalized energy for the three lowest states, interactive case.')  # as a function of various omega_r
plt.title(r'$\rho$ = {0:2.1f}, n={1}, omega={2:2.1g}'.format(
max_r, len(r), omega))
plt.savefig('eigenvector_rho{0}n{1}omega{2}.png'.format(int(max_r * 10), len(r),int(omega*100)))
def solve_schroedinger_jacobi(n=160, rho_max=5, potential=None):
if potential is None:
potential = lambda r: r**2
#n = 128*4
#n = 160
rho_min = 0
#rho_max = 5
h = (rho_max - rho_min) / (n + 1) # step_length
rho = np.arange(1, n + 1) * h
vi = potential(rho)
e = -np.ones(n - 1) / h**2
d = 2 / h**2 + vi # di
A = np.diag(d) + np.diag(e, -1) + np.diag(e, +1)
# Solve Schrodingers equation:
eigenvectors, eigenvalues, iterations = jacobi_method(A)
# self.eigenvalues, self.eigenvectors = np.linalg.eig(self.A)
r = rho
permute = eigenvalues.argsort()
eigenvalues = eigenvalues[permute]
eigenvectors = eigenvectors[:, permute]
return r, eigenvectors, eigenvalues, iterations
def test_iterations():
# now plot the results for the three lowest lying eigenstates
num_iterations = []
dims = [8, 16, 32, 64, 128, 256, 320, 512]
if False:
for n in dims:
r, eigenvectors, eigenvalues, iterations = solve_schroedinger_jacobi(
n=n, rho_max=5)
num_iterations.append(iterations)
else:
num_iterations = [80, 374, 1623, 6741, 27070, 109974, 171973, 442946]
step = np.linspace(0, 1.1 * dims[-1], 100)
coeff = np.polyfit(dims, np.array(num_iterations)/np.array(dims), deg=1)
# coeff = np.round(coeff)
coeff = np.hstack((coeff, 0))
print coeff
for plot_type, plot in zip(['linear', 'logy', 'loglog'], [plt.plot, plt.semilogy, plt.loglog]):
plt.figure()
plot(dims, num_iterations, '.', label='Exact number of iterations')
plot(step, np.polyval(coeff, step), '-',
label='{:0.2f}n**2{:0.2f}n'.format(coeff[0], coeff[1]))
# plot(step, 1.7*step**2, '-', label='1.7n^2')
plot(step, 3*step**2-5*step, '-', label='3n^2-5*n')
# plot(step, 1.5*step**2-5*step+10, '-', label='1.5n^2-5*n+10')
plt.xlabel('n')
plt.ylabel('Iterations')
plt.title('Number of similarity transformations')
plt.legend(loc=2)
plt.grid(True)
plt.savefig('num_iterations{0}n{1}{2}.png'.format(dims[-1], len(dims), plot_type))
plt.show()
def test_rho_max_jacobi(): #2b
# now plot the results for the three lowest lying eigenstates
r, eigenvectors, eigenvalues, iterations = solve_schroedinger_jacobi(
n=320, rho_max=5)
errors = []
for i, trueeigenvalue in enumerate([3, 7, 11]):
errors.append(np.abs(eigenvalues[i] - trueeigenvalue))
# print eigenvalues[i] - trueeigenvalue, eigenvalues[i]
FirstEigvector = eigenvectors[:, 0]
SecondEigvector = eigenvectors[:, 1]
ThirdEigvector = eigenvectors[:, 2]
plt.plot(r, FirstEigvector**2, 'b-', r, SecondEigvector **
2, 'g-', r, ThirdEigvector**2, 'r-')
#plt.axis([0, 4.6, 0.0, 0.025])
plt.xlabel(r'$\rho$')
plt.ylabel(r'$u(\rho)$')
max_r = np.max(r)
max_errors = np.max(errors)
plt.suptitle(r'Normalized energy for the three lowest states.')
plt.title(r'$\rho$ = {0:2.1f}, n={1}, max_errors={2:2.1g}'.format(
max_r, len(r), max_errors))
plt.savefig('eigenvector_rho{0}n{1}.png'.format(int(max_r * 10), len(r)))
plt.show()
def solve_schroedinger(Dim=400, RMax=10.0, RMin=0.0, lOrbital=0):
# Get the boundary, orbital momentum and number of integration points
# Program which solves the one-particle Schrodinger equation
# for a potential specified in function
# potential(). This example is for the harmonic oscillator in 3d
# from matplotlib import pyplot as plt
# import numpy as np
# Here we set up the harmonic oscillator potential
def potential(r):
return r * r
# Initialize constants
Step = RMax / (Dim + 1)
DiagConst = 2.0 / (Step * Step)
NondiagConst = -1.0 / (Step * Step)
OrbitalFactor = lOrbital * (lOrbital + 1.0)
# Calculate array of potential values
v = np.zeros(Dim)
r = np.linspace(RMin, RMax, Dim)
for i in xrange(Dim):
r[i] = RMin + (i + 1) * Step
v[i] = potential(r[i]) + OrbitalFactor / (r[i] * r[i])
# Setting up a tridiagonal matrix and finding eigenvectors and eigenvalues
Hamiltonian = np.zeros((Dim, Dim))
Hamiltonian[0, 0] = DiagConst + v[0]
Hamiltonian[0, 1] = NondiagConst
for i in xrange(1, Dim - 1):
Hamiltonian[i, i - 1] = NondiagConst
Hamiltonian[i, i] = DiagConst + v[i]
Hamiltonian[i, i + 1] = NondiagConst
Hamiltonian[Dim - 1, Dim - 2] = NondiagConst
Hamiltonian[Dim - 1, Dim - 1] = DiagConst + v[Dim - 1]
# diagonalize and obtain eigenvalues, not necessarily sorted
EigValues, EigVectors = np.linalg.eig(Hamiltonian)
# sort eigenvectors and eigenvalues
permute = EigValues.argsort()
EigValues = EigValues[permute]
EigVectors = EigVectors[:, permute]
return r, EigVectors, EigValues
def test_rho_max(Dim=400.0, RMax=10.0):
r, EigVectors, EigValues = solve_schroedinger(Dim, RMax)
# now plot the results for the three lowest lying eigenstates
for i in xrange(3):
print EigValues[i]
FirstEigvector = EigVectors[:, 0]
SecondEigvector = EigVectors[:, 1]
ThirdEigvector = EigVectors[:, 2]
plt.plot(r, FirstEigvector**2, 'b-', r, SecondEigvector **
2, 'g-', r, ThirdEigvector**2, 'r-')
plt.axis([0, 4.6, 0.0, 0.025])
plt.xlabel(r'$r$')
plt.ylabel(r'Radial probability $r^2|R(r)|^2$')
plt.title(
r'Radial probability distributions for three lowest-lying states')
plt.savefig('eigenvector.pdf')
plt.show()
def cpu_times_vs_dimension_plot(): #2c
    '''Jacobi solves in quadratic time.
    That is, the runtime is dominated by the similarity transformations, O(n**2) operations.
    Eig solves for the eigenvalues in linear time, i.e. the runtime is O(n).
'''
cpu_times = []
cpu_times_jacobi = []
dims = [8, 16, 32, 64, 128]
for n in dims:
cpu_times.append(cpu_time(5, n))
cpu_times_jacobi.append(cpu_time_jacobi(5, n))
plt.plot(dims, cpu_times, label='np.linalg.eig') #
plt.plot(dims, cpu_times_jacobi, label='jacobi')
plt.xlabel('dimension')
plt.ylabel('cpu time')
plt.title('CPU time vs dimension of matrix')
plt.legend(loc=2)
filename = 'cpu_time{0}n{1}.png'.format(dims[-1], len(dims))
my_savefig(filename)
plt.show()
if __name__ == '__main__':
#cpu_times_vs_dimension_plot()
# test_compare()
# solve_poisson_with_lu(10)
# error_test()
# lu_test_compare()
# for n in [10, 100, 1000, 2000]:
# cpu_time_specific(10, n)
# cpu_time(10, n)
# cpu_time_lu_solve(10, n)
#
# plt.show()
# solve_schroedinger()
# test_rho_max_jacobi()
#test_iterations()
for rho_max, omega in zip([60, 10, 6, 3], [0.01, 0.5, 1, 5]):
plt.figure()
print(omega)
test_rho_max_jacobi_interactive_case(omega, rho_max=rho_max, n=128)
#test_rho_max_jacobi_interactive_case(omega=0.01, rho_max=60, n=128)
plt.show() | 32.379902 | 119 | 0.594202 | 1,963 | 13,211 | 3.885889 | 0.186449 | 0.015732 | 0.018091 | 0.013372 | 0.321185 | 0.289591 | 0.270713 | 0.258915 | 0.213818 | 0.188909 | 0 | 0.048291 | 0.264855 | 13,211 | 408 | 120 | 32.379902 | 0.737129 | 0.180229 | 0 | 0.17623 | 0 | 0.008197 | 0.087543 | 0.017181 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028689 | null | null | 0.028689 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
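A quick sanity check of `jacobi_method` on a small symmetric matrix; a sketch only (the analytic eigenvalues of [[2, 1], [1, 2]] are 1 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigen_vectors, eigen_values, iterations = jacobi_method(A)
print(np.sort(eigen_values))  # -> [1. 3.]
```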
0d0e450b602d94ac05485326e1145ffc9479645d | 29,665 | py | Python | mars/tensor/datasource/tests/test_datasource.py | HarshCasper/mars | 4c12c968414d666c7a10f497bc22de90376b1932 | [
"Apache-2.0"
] | 2 | 2019-03-29T04:11:10.000Z | 2020-07-08T10:19:54.000Z | mars/tensor/datasource/tests/test_datasource.py | HarshCasper/mars | 4c12c968414d666c7a10f497bc22de90376b1932 | [
"Apache-2.0"
] | null | null | null | mars/tensor/datasource/tests/test_datasource.py | HarshCasper/mars | 4c12c968414d666c7a10f497bc22de90376b1932 | [
"Apache-2.0"
] | null | null | null | # Copyright 1999-2020 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import shutil
import tempfile
from weakref import ReferenceType
from copy import copy
import numpy as np
import scipy.sparse as sps
try:
import tiledb
except (ImportError, OSError): # pragma: no cover
tiledb = None
from mars import dataframe as md
from mars import opcodes
from mars.graph import DAG
from mars.tensor import ones, zeros, tensor, full, arange, diag, linspace, triu, tril, ones_like, dot
from mars.tensor.datasource import array, fromtiledb, TensorTileDBDataSource, fromdense
from mars.tensor.datasource.tri import TensorTriu, TensorTril
from mars.tensor.datasource.zeros import TensorZeros
from mars.tensor.datasource.from_dense import DenseToSparse
from mars.tensor.datasource.array import CSRMatrixDataSource
from mars.tensor.datasource.ones import TensorOnes, TensorOnesLike
from mars.tensor.fuse.core import TensorFuseChunk
from mars.tensor.core import Tensor, SparseTensor, TensorChunk
from mars.tensor.datasource.from_dataframe import from_dataframe
from mars.tests.core import TestBase
from mars.tiles import get_tiled
from mars.utils import build_fuse_chunk, enter_mode
class Test(TestBase):
def testChunkSerialize(self):
t = ones((10, 3), chunk_size=(5, 2)).tiles()
# pb
chunk = t.chunks[0]
serials = self._pb_serial(chunk)
op, pb = serials[chunk.op, chunk.data]
self.assertEqual(tuple(pb.index), chunk.index)
self.assertEqual(pb.key, chunk.key)
self.assertEqual(tuple(pb.shape), chunk.shape)
self.assertEqual(int(op.type.split('.', 1)[1]), opcodes.TENSOR_ONES)
chunk2 = self._pb_deserial(serials)[chunk.data]
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.key, chunk2.key)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertEqual(chunk.op.dtype, chunk2.op.dtype)
# json
chunk = t.chunks[0]
serials = self._json_serial(chunk)
chunk2 = self._json_deserial(serials)[chunk.data]
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.key, chunk2.key)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertEqual(chunk.op.dtype, chunk2.op.dtype)
t = tensor(np.random.random((10, 3)), chunk_size=(5, 2)).tiles()
# pb
chunk = t.chunks[0]
serials = self._pb_serial(chunk)
op, pb = serials[chunk.op, chunk.data]
self.assertEqual(tuple(pb.index), chunk.index)
self.assertEqual(pb.key, chunk.key)
self.assertEqual(tuple(pb.shape), chunk.shape)
self.assertEqual(int(op.type.split('.', 1)[1]), opcodes.TENSOR_DATA_SOURCE)
chunk2 = self._pb_deserial(serials)[chunk.data]
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.key, chunk2.key)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertTrue(np.array_equal(chunk.op.data, chunk2.op.data))
# json
chunk = t.chunks[0]
serials = self._json_serial(chunk)
chunk2 = self._json_deserial(serials)[chunk.data]
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.key, chunk2.key)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertTrue(np.array_equal(chunk.op.data, chunk2.op.data))
t1 = tensor(np.random.random((10, 3)), chunk_size=(5, 2))
t2 = (t1 + 1).tiles()
# pb
chunk1 = get_tiled(t1).chunks[0]
chunk2 = t2.chunks[0]
composed_chunk = build_fuse_chunk([chunk1.data, chunk2.data], TensorFuseChunk)
serials = self._pb_serial(composed_chunk)
op, pb = serials[composed_chunk.op, composed_chunk.data]
self.assertEqual(pb.key, composed_chunk.key)
self.assertEqual(int(op.type.split('.', 1)[1]), opcodes.FUSE)
composed_chunk2 = self._pb_deserial(serials)[composed_chunk.data]
self.assertEqual(composed_chunk.key, composed_chunk2.key)
self.assertEqual(type(composed_chunk.op), type(composed_chunk2.op))
self.assertEqual(composed_chunk.composed[0].key,
composed_chunk2.composed[0].key)
self.assertEqual(composed_chunk.composed[-1].key,
composed_chunk2.composed[-1].key)
# json
chunk1 = get_tiled(t1).chunks[0]
chunk2 = t2.chunks[0]
composed_chunk = build_fuse_chunk([chunk1.data, chunk2.data], TensorFuseChunk)
serials = self._json_serial(composed_chunk)
composed_chunk2 = self._json_deserial(serials)[composed_chunk.data]
self.assertEqual(composed_chunk.key, composed_chunk2.key)
self.assertEqual(type(composed_chunk.op), type(composed_chunk2.op))
self.assertEqual(composed_chunk.composed[0].key,
composed_chunk2.composed[0].key)
self.assertEqual(composed_chunk.composed[-1].key,
composed_chunk2.composed[-1].key)
t1 = ones((10, 3), chunk_size=2)
t2 = ones((3, 5), chunk_size=2)
c = dot(t1, t2).tiles().chunks[0].inputs[0]
# pb
serials = self._pb_serial(c)
c2 = self._pb_deserial(serials)[c]
self.assertEqual(c.key, c2.key)
# json
serials = self._json_serial(c)
c2 = self._json_deserial(serials)[c]
self.assertEqual(c.key, c2.key)
def testTensorSerialize(self):
from mars.tensor import split
t = ones((10, 10, 8), chunk_size=(3, 3, 5))
serials = self._pb_serial(t)
dt = self._pb_deserial(serials)[t.data]
self.assertEqual(dt.extra_params.raw_chunk_size, (3, 3, 5))
serials = self._json_serial(t)
dt = self._json_deserial(serials)[t.data]
self.assertEqual(dt.extra_params.raw_chunk_size, (3, 3, 5))
t2, _ = split(t, 2)
serials = self._pb_serial(t2)
dt = self._pb_deserial(serials)[t2.data]
self.assertEqual(dt.op.indices_or_sections, 2)
t2, _, _ = split(t, ones(2, chunk_size=2))
serials = self._pb_serial(t2)
dt = self._pb_deserial(serials)[t2.data]
with enter_mode(build=True):
self.assertIn(dt.op.indices_or_sections, dt.inputs)
def testOnes(self):
tensor = ones((10, 10, 8), chunk_size=(3, 3, 5))
tensor = tensor.tiles()
self.assertEqual(tensor.shape, (10, 10, 8))
self.assertEqual(len(tensor.chunks), 32)
tensor = ones((10, 3), chunk_size=(4, 2))
tensor = tensor.tiles()
self.assertEqual(tensor.shape, (10, 3))
chunk = tensor.cix[1, 1]
self.assertEqual(tensor.get_chunk_slices(chunk.index), (slice(4, 8), slice(2, 3)))
tensor = ones((10, 5), chunk_size=(2, 3), gpu=True)
tensor = tensor.tiles()
self.assertTrue(tensor.op.gpu)
self.assertTrue(tensor.chunks[0].op.gpu)
tensor1 = ones((10, 10, 8), chunk_size=(3, 3, 5))
tensor1 = tensor1.tiles()
tensor2 = ones((10, 10, 8), chunk_size=(3, 3, 5))
tensor2 = tensor2.tiles()
self.assertEqual(tensor1.chunks[0].op.key, tensor2.chunks[0].op.key)
self.assertEqual(tensor1.chunks[0].key, tensor2.chunks[0].key)
self.assertNotEqual(tensor1.chunks[0].op.key, tensor1.chunks[1].op.key)
self.assertNotEqual(tensor1.chunks[0].key, tensor1.chunks[1].key)
tensor = ones((2, 3, 4))
self.assertEqual(len(list(tensor)), 2)
tensor2 = ones((2, 3, 4), chunk_size=1)
# tensor's op key must be equal to tensor2
self.assertEqual(tensor.op.key, tensor2.op.key)
self.assertNotEqual(tensor.key, tensor2.key)
tensor3 = ones((2, 3, 3))
self.assertNotEqual(tensor.op.key, tensor3.op.key)
self.assertNotEqual(tensor.key, tensor3.key)
# test create chunk op of ones manually
chunk_op1 = TensorOnes(dtype=tensor.dtype)
chunk1 = chunk_op1.new_chunk(None, shape=(3, 3), index=(0, 0))
chunk_op2 = TensorOnes(dtype=tensor.dtype)
chunk2 = chunk_op2.new_chunk(None, shape=(3, 4), index=(0, 1))
self.assertNotEqual(chunk1.op.key, chunk2.op.key)
self.assertNotEqual(chunk1.key, chunk2.key)
tensor = ones((100, 100), chunk_size=50)
tensor = tensor.tiles()
self.assertEqual(len({c.op.key for c in tensor.chunks}), 1)
self.assertEqual(len({c.key for c in tensor.chunks}), 1)
def testZeros(self):
tensor = zeros((2, 3, 4))
self.assertEqual(len(list(tensor)), 2)
self.assertFalse(tensor.op.gpu)
tensor2 = zeros((2, 3, 4), chunk_size=1)
# tensor's op key must be equal to tensor2
self.assertEqual(tensor.op.key, tensor2.op.key)
self.assertNotEqual(tensor.key, tensor2.key)
tensor3 = zeros((2, 3, 3))
self.assertNotEqual(tensor.op.key, tensor3.op.key)
self.assertNotEqual(tensor.key, tensor3.key)
# test create chunk op of zeros manually
chunk_op1 = TensorZeros(dtype=tensor.dtype)
chunk1 = chunk_op1.new_chunk(None, shape=(3, 3), index=(0, 0))
chunk_op2 = TensorZeros(dtype=tensor.dtype)
chunk2 = chunk_op2.new_chunk(None, shape=(3, 4), index=(0, 1))
self.assertNotEqual(chunk1.op.key, chunk2.op.key)
self.assertNotEqual(chunk1.key, chunk2.key)
tensor = zeros((100, 100), chunk_size=50)
tensor = tensor.tiles()
self.assertEqual(len({c.op.key for c in tensor.chunks}), 1)
self.assertEqual(len({c.key for c in tensor.chunks}), 1)
def testDataSource(self):
from mars.tensor.base.broadcast_to import TensorBroadcastTo
data = np.random.random((10, 3))
t = tensor(data, chunk_size=2)
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertTrue((t.chunks[0].op.data == data[:2, :2]).all())
self.assertTrue((t.chunks[1].op.data == data[:2, 2:3]).all())
self.assertTrue((t.chunks[2].op.data == data[2:4, :2]).all())
self.assertTrue((t.chunks[3].op.data == data[2:4, 2:3]).all())
self.assertEqual(t.key, tensor(data, chunk_size=2).tiles().key)
self.assertNotEqual(t.key, tensor(data, chunk_size=3).tiles().key)
self.assertNotEqual(t.key, tensor(np.random.random((10, 3)), chunk_size=2).tiles().key)
t = tensor(data, chunk_size=2, gpu=True)
t = t.tiles()
self.assertTrue(t.op.gpu)
self.assertTrue(t.chunks[0].op.gpu)
t = full((2, 2), 2, dtype='f4')
self.assertFalse(t.op.gpu)
self.assertEqual(t.shape, (2, 2))
self.assertEqual(t.dtype, np.float32)
t = full((2, 2), [1.0, 2.0], dtype='f4')
self.assertEqual(t.shape, (2, 2))
self.assertEqual(t.dtype, np.float32)
self.assertIsInstance(t.op, TensorBroadcastTo)
with self.assertRaises(ValueError):
full((2, 2), [1.0, 2.0, 3.0], dtype='f4')
def testTensorGraphSerialize(self):
t = ones((10, 3), chunk_size=(5, 2)) + tensor(np.random.random((10, 3)), chunk_size=(5, 2))
graph = t.build_graph(tiled=False)
pb = graph.to_pb()
graph2 = DAG.from_pb(pb)
self.assertEqual(len(graph), len(graph2))
t = next(c for c in graph if c.inputs)
t2 = next(c for c in graph2 if c.key == t.key)
self.assertTrue(t2.op.outputs[0], ReferenceType) # make sure outputs are all weak reference
self.assertBaseEqual(t.op, t2.op)
self.assertEqual(t.shape, t2.shape)
self.assertEqual(sorted(i.key for i in t.inputs), sorted(i.key for i in t2.inputs))
jsn = graph.to_json()
graph2 = DAG.from_json(jsn)
self.assertEqual(len(graph), len(graph2))
t = next(c for c in graph if c.inputs)
t2 = next(c for c in graph2 if c.key == t.key)
self.assertTrue(t2.op.outputs[0], ReferenceType) # make sure outputs are all weak reference
self.assertBaseEqual(t.op, t2.op)
self.assertEqual(t.shape, t2.shape)
self.assertEqual(sorted(i.key for i in t.inputs), sorted(i.key for i in t2.inputs))
# test graph with tiled tensor
t2 = ones((10, 10), chunk_size=(5, 4)).tiles()
graph = DAG()
graph.add_node(t2)
pb = graph.to_pb()
graph2 = DAG.from_pb(pb)
self.assertEqual(len(graph), len(graph2))
chunks = next(iter(graph2)).chunks
self.assertEqual(len(chunks), 6)
self.assertIsInstance(chunks[0], TensorChunk)
self.assertEqual(chunks[0].index, t2.chunks[0].index)
self.assertBaseEqual(chunks[0].op, t2.chunks[0].op)
jsn = graph.to_json()
graph2 = DAG.from_json(jsn)
self.assertEqual(len(graph), len(graph2))
chunks = next(iter(graph2)).chunks
self.assertEqual(len(chunks), 6)
self.assertIsInstance(chunks[0], TensorChunk)
self.assertEqual(chunks[0].index, t2.chunks[0].index)
self.assertBaseEqual(chunks[0].op, t2.chunks[0].op)
def testTensorGraphTiledSerialize(self):
t = ones((10, 3), chunk_size=(5, 2)) + tensor(np.random.random((10, 3)), chunk_size=(5, 2))
graph = t.build_graph(tiled=True)
pb = graph.to_pb()
graph2 = DAG.from_pb(pb)
self.assertEqual(len(graph), len(graph2))
chunk = next(c for c in graph if c.inputs)
chunk2 = next(c for c in graph2 if c.key == chunk.key)
self.assertBaseEqual(chunk.op, chunk2.op)
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertEqual(sorted(i.key for i in chunk.inputs), sorted(i.key for i in chunk2.inputs))
jsn = graph.to_json()
graph2 = DAG.from_json(jsn)
self.assertEqual(len(graph), len(graph2))
chunk = next(c for c in graph if c.inputs)
chunk2 = next(c for c in graph2 if c.key == chunk.key)
self.assertBaseEqual(chunk.op, chunk2.op)
self.assertEqual(chunk.index, chunk2.index)
self.assertEqual(chunk.shape, chunk2.shape)
self.assertEqual(sorted(i.key for i in chunk.inputs), sorted(i.key for i in chunk2.inputs))
t = ones((10, 3), chunk_size=((3, 5, 2), 2)) + 2
graph = t.build_graph(tiled=True)
pb = graph.to_pb()
graph2 = DAG.from_pb(pb)
chunk = next(c for c in graph)
chunk2 = next(c for c in graph2 if c.key == chunk.key)
self.assertBaseEqual(chunk.op, chunk2.op)
self.assertEqual(sorted(i.key for i in chunk.composed), sorted(i.key for i in chunk2.composed))
jsn = graph.to_json()
graph2 = DAG.from_json(jsn)
chunk = next(c for c in graph)
chunk2 = next(c for c in graph2 if c.key == chunk.key)
self.assertBaseEqual(chunk.op, chunk2.op)
self.assertEqual(sorted(i.key for i in chunk.composed), sorted(i.key for i in chunk2.composed))
def testUfunc(self):
t = ones((3, 10), chunk_size=2)
x = np.add(t, [[1], [2], [3]])
self.assertIsInstance(x, Tensor)
y = np.sum(t, axis=1)
self.assertIsInstance(y, Tensor)
def testArange(self):
t = arange(10, chunk_size=3)
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(t.shape, (10,))
self.assertEqual(t.nsplits, ((3, 3, 3, 1),))
self.assertEqual(t.chunks[1].op.start, 3)
self.assertEqual(t.chunks[1].op.stop, 6)
t = arange(0, 10, 3, chunk_size=2)
t = t.tiles()
self.assertEqual(t.shape, (4,))
self.assertEqual(t.nsplits, ((2, 2),))
self.assertEqual(t.chunks[0].op.start, 0)
self.assertEqual(t.chunks[0].op.stop, 6)
self.assertEqual(t.chunks[0].op.step, 3)
self.assertEqual(t.chunks[1].op.start, 6)
self.assertEqual(t.chunks[1].op.stop, 12)
self.assertEqual(t.chunks[1].op.step, 3)
self.assertRaises(TypeError, lambda: arange(10, start=0))
self.assertRaises(TypeError, lambda: arange(0, 10, stop=0))
self.assertRaises(TypeError, lambda: arange())
self.assertRaises(ValueError, lambda: arange('1066-10-13', dtype=np.datetime64, chunks=3))
def testDiag(self):
# test 2-d, shape[0] == shape[1], k == 0
v = tensor(np.arange(16).reshape(4, 4), chunk_size=2)
t = diag(v)
self.assertEqual(t.shape, (4,))
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(t.nsplits, ((2, 2),))
v = tensor(np.arange(16).reshape(4, 4), chunk_size=(2, 3))
t = diag(v)
self.assertEqual(t.shape, (4,))
t = t.tiles()
self.assertEqual(t.nsplits, ((2, 1, 1),))
# test 1-d, k == 0
v = tensor(np.arange(3), chunk_size=2)
t = diag(v, sparse=True)
self.assertEqual(t.shape, (3, 3))
t = t.tiles()
self.assertEqual(t.nsplits, ((2, 1), (2, 1)))
self.assertEqual(len([c for c in t.chunks
if c.op.__class__.__name__ == 'TensorDiag']), 2)
self.assertTrue(t.chunks[0].op.sparse)
# test 2-d, shape[0] != shape[1]
v = tensor(np.arange(24).reshape(4, 6), chunk_size=2)
t = diag(v)
self.assertEqual(t.shape, np.diag(np.arange(24).reshape(4, 6)).shape)
t = t.tiles()
self.assertEqual(tuple(sum(s) for s in t.nsplits), t.shape)
v = tensor(np.arange(24).reshape(4, 6), chunk_size=2)
t = diag(v, k=1)
self.assertEqual(t.shape, np.diag(np.arange(24).reshape(4, 6), k=1).shape)
t = t.tiles()
self.assertEqual(tuple(sum(s) for s in t.nsplits), t.shape)
t = diag(v, k=2)
self.assertEqual(t.shape, np.diag(np.arange(24).reshape(4, 6), k=2).shape)
t = t.tiles()
self.assertEqual(tuple(sum(s) for s in t.nsplits), t.shape)
t = diag(v, k=-1)
self.assertEqual(t.shape, np.diag(np.arange(24).reshape(4, 6), k=-1).shape)
t = t.tiles()
self.assertEqual(tuple(sum(s) for s in t.nsplits), t.shape)
t = diag(v, k=-2)
self.assertEqual(t.shape, np.diag(np.arange(24).reshape(4, 6), k=-2).shape)
t = t.tiles()
self.assertEqual(tuple(sum(s) for s in t.nsplits), t.shape)
# test tiled zeros' keys
a = arange(5, chunk_size=2)
t = diag(a)
t = t.tiles()
        # chunks 1 and 2 of t are ones; they have different shapes
self.assertNotEqual(t.chunks[1].op.key, t.chunks[2].op.key)
def testLinspace(self):
a = linspace(2.0, 3.0, num=5, chunk_size=2)
self.assertEqual(a.shape, (5,))
a = a.tiles()
self.assertEqual(a.nsplits, ((2, 2, 1),))
self.assertEqual(a.chunks[0].op.start, 2.)
self.assertEqual(a.chunks[0].op.stop, 2.25)
self.assertEqual(a.chunks[1].op.start, 2.5)
self.assertEqual(a.chunks[1].op.stop, 2.75)
self.assertEqual(a.chunks[2].op.start, 3.)
self.assertEqual(a.chunks[2].op.stop, 3.)
a = linspace(2.0, 3.0, num=5, endpoint=False, chunk_size=2)
self.assertEqual(a.shape, (5,))
a = a.tiles()
self.assertEqual(a.nsplits, ((2, 2, 1),))
self.assertEqual(a.chunks[0].op.start, 2.)
self.assertEqual(a.chunks[0].op.stop, 2.2)
self.assertEqual(a.chunks[1].op.start, 2.4)
self.assertEqual(a.chunks[1].op.stop, 2.6)
self.assertEqual(a.chunks[2].op.start, 2.8)
self.assertEqual(a.chunks[2].op.stop, 2.8)
_, step = linspace(2.0, 3.0, num=5, chunk_size=2, retstep=True)
self.assertEqual(step, .25)
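    # For triu/tril, tiling classifies each chunk against the shifted diagonal:
    # chunks that fall entirely on the zeroed side become TensorZeros ops,
    # while the rest keep a TensorTriu/TensorTril op.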
def testTriuTril(self):
a_data = np.arange(12).reshape(4, 3)
a = tensor(a_data, chunk_size=2)
t = triu(a)
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTriu)
self.assertIsInstance(t.chunks[1].op, TensorTriu)
self.assertIsInstance(t.chunks[2].op, TensorZeros)
self.assertIsInstance(t.chunks[3].op, TensorTriu)
t = triu(a, k=1)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTriu)
self.assertIsInstance(t.chunks[1].op, TensorTriu)
self.assertIsInstance(t.chunks[2].op, TensorZeros)
self.assertIsInstance(t.chunks[3].op, TensorZeros)
t = triu(a, k=2)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorZeros)
self.assertIsInstance(t.chunks[1].op, TensorTriu)
self.assertIsInstance(t.chunks[2].op, TensorZeros)
self.assertIsInstance(t.chunks[3].op, TensorZeros)
t = triu(a, k=-1)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTriu)
self.assertIsInstance(t.chunks[1].op, TensorTriu)
self.assertIsInstance(t.chunks[2].op, TensorTriu)
self.assertIsInstance(t.chunks[3].op, TensorTriu)
t = tril(a)
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTril)
self.assertIsInstance(t.chunks[1].op, TensorZeros)
self.assertIsInstance(t.chunks[2].op, TensorTril)
self.assertIsInstance(t.chunks[3].op, TensorTril)
t = tril(a, k=1)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTril)
self.assertIsInstance(t.chunks[1].op, TensorTril)
self.assertIsInstance(t.chunks[2].op, TensorTril)
self.assertIsInstance(t.chunks[3].op, TensorTril)
t = tril(a, k=-1)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorTril)
self.assertIsInstance(t.chunks[1].op, TensorZeros)
self.assertIsInstance(t.chunks[2].op, TensorTril)
self.assertIsInstance(t.chunks[3].op, TensorTril)
t = tril(a, k=-2)
t = t.tiles()
self.assertEqual(len(t.chunks), 4)
self.assertIsInstance(t.chunks[0].op, TensorZeros)
self.assertIsInstance(t.chunks[1].op, TensorZeros)
self.assertIsInstance(t.chunks[2].op, TensorTril)
self.assertIsInstance(t.chunks[3].op, TensorZeros)
def testSetTensorInputs(self):
t1 = tensor([1, 2], chunk_size=2)
t2 = tensor([2, 3], chunk_size=2)
t3 = t1 + t2
t1c = copy(t1)
t2c = copy(t2)
self.assertIsNot(t1c, t1)
self.assertIsNot(t2c, t2)
self.assertIs(t3.op.lhs, t1.data)
self.assertIs(t3.op.rhs, t2.data)
self.assertEqual(t3.op.inputs, [t1.data, t2.data])
self.assertEqual(t3.inputs, [t1.data, t2.data])
with self.assertRaises(StopIteration):
t3.inputs = []
t1 = tensor([1, 2], chunk_size=2)
t2 = tensor([True, False], chunk_size=2)
t3 = t1[t2]
t1c = copy(t1)
t2c = copy(t2)
t3c = copy(t3)
t3c.inputs = [t1c, t2c]
with enter_mode(build=True):
self.assertIs(t3c.op.input, t1c.data)
self.assertIs(t3c.op.indexes[0], t2c.data)
def testFromSpmatrix(self):
t = tensor(sps.csr_matrix([[0, 0, 1], [1, 0, 0]], dtype='f8'), chunk_size=2)
self.assertIsInstance(t, SparseTensor)
self.assertIsInstance(t.op, CSRMatrixDataSource)
self.assertTrue(t.issparse())
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(t.chunks[0].index, (0, 0))
self.assertIsInstance(t.op, CSRMatrixDataSource)
self.assertFalse(t.op.gpu)
m = sps.csr_matrix([[0, 0], [1, 0]])
self.assertTrue(np.array_equal(t.chunks[0].op.indices, m.indices))
self.assertTrue(np.array_equal(t.chunks[0].op.indptr, m.indptr))
self.assertTrue(np.array_equal(t.chunks[0].op.data, m.data))
self.assertTrue(np.array_equal(t.chunks[0].op.shape, m.shape))
def testFromDense(self):
t = fromdense(tensor([[0, 0, 1], [1, 0, 0]], chunk_size=2))
self.assertIsInstance(t, SparseTensor)
self.assertIsInstance(t.op, DenseToSparse)
self.assertTrue(t.issparse())
t = t.tiles()
self.assertEqual(t.chunks[0].index, (0, 0))
self.assertIsInstance(t.op, DenseToSparse)
def testOnesLike(self):
t1 = tensor([[0, 0, 1], [1, 0, 0]], chunk_size=2).tosparse()
t = ones_like(t1, dtype='f8')
self.assertIsInstance(t, SparseTensor)
self.assertIsInstance(t.op, TensorOnesLike)
self.assertTrue(t.issparse())
self.assertFalse(t.op.gpu)
t = t.tiles()
self.assertEqual(t.chunks[0].index, (0, 0))
self.assertIsInstance(t.op, TensorOnesLike)
self.assertTrue(t.chunks[0].issparse())
def testFromArray(self):
x = array([1, 2, 3])
self.assertEqual(x.shape, (3,))
y = array([x, x])
self.assertEqual(y.shape, (2, 3))
z = array((x, x, x))
self.assertEqual(z.shape, (3, 3))
@unittest.skipIf(tiledb is None, 'TileDB not installed')
def testFromTileDB(self):
ctx = tiledb.Ctx()
for sparse in (True, False):
dom = tiledb.Domain(
tiledb.Dim(ctx=ctx, name="i", domain=(1, 30), tile=7, dtype=np.int32),
tiledb.Dim(ctx=ctx, name="j", domain=(1, 20), tile=3, dtype=np.int32),
tiledb.Dim(ctx=ctx, name="k", domain=(1, 10), tile=4, dtype=np.int32),
ctx=ctx,
)
schema = tiledb.ArraySchema(ctx=ctx, domain=dom, sparse=sparse,
attrs=[tiledb.Attr(ctx=ctx, name='a', dtype=np.float32)])
tempdir = tempfile.mkdtemp()
try:
# create tiledb array
array_type = tiledb.DenseArray if not sparse else tiledb.SparseArray
array_type.create(tempdir, schema)
tensor = fromtiledb(tempdir)
self.assertIsInstance(tensor.op, TensorTileDBDataSource)
self.assertEqual(tensor.op.issparse(), sparse)
self.assertEqual(tensor.shape, (30, 20, 10))
self.assertEqual(tensor.extra_params.raw_chunk_size, (7, 3, 4))
self.assertIsNone(tensor.op.tiledb_config)
self.assertEqual(tensor.op.tiledb_uri, tempdir)
self.assertIsNone(tensor.op.tiledb_key)
self.assertIsNone(tensor.op.tiledb_timestamp)
tensor = tensor.tiles()
self.assertEqual(len(tensor.chunks), 105)
self.assertIsInstance(tensor.chunks[0].op, TensorTileDBDataSource)
self.assertEqual(tensor.chunks[0].op.issparse(), sparse)
self.assertEqual(tensor.chunks[0].shape, (7, 3, 4))
self.assertIsNone(tensor.chunks[0].op.tiledb_config)
self.assertEqual(tensor.chunks[0].op.tiledb_uri, tempdir)
self.assertIsNone(tensor.chunks[0].op.tiledb_key)
self.assertIsNone(tensor.chunks[0].op.tiledb_timestamp)
self.assertEqual(tensor.chunks[0].op.tiledb_dim_starts, (1, 1, 1))
# test axis_offsets of chunk op
self.assertEqual(tensor.chunks[0].op.axis_offsets, (0, 0, 0))
self.assertEqual(tensor.chunks[1].op.axis_offsets, (0, 0, 4))
self.assertEqual(tensor.cix[0, 2, 2].op.axis_offsets, (0, 6, 8))
self.assertEqual(tensor.cix[0, 6, 2].op.axis_offsets, (0, 18, 8))
self.assertEqual(tensor.cix[4, 6, 2].op.axis_offsets, (28, 18, 8))
tensor2 = fromtiledb(tempdir, ctx=ctx)
self.assertEqual(tensor2.op.tiledb_config, ctx.config().dict())
tensor2 = tensor2.tiles()
self.assertEqual(tensor2.chunks[0].op.tiledb_config, ctx.config().dict())
finally:
shutil.rmtree(tempdir)
@unittest.skipIf(tiledb is None, 'TileDB not installed')
def testDimStartFloat(self):
ctx = tiledb.Ctx()
dom = tiledb.Domain(
tiledb.Dim(ctx=ctx, name="i", domain=(0.0, 6.0), tile=6, dtype=np.float64),
ctx=ctx,
)
schema = tiledb.ArraySchema(ctx=ctx, domain=dom, sparse=True,
attrs=[tiledb.Attr(ctx=ctx, name='a', dtype=np.float32)])
tempdir = tempfile.mkdtemp()
try:
# create tiledb array
tiledb.SparseArray.create(tempdir, schema)
with self.assertRaises(ValueError):
fromtiledb(tempdir, ctx=ctx)
finally:
shutil.rmtree(tempdir)
def testFromDataFrame(self):
mdf = md.DataFrame({'a': [0, 1, 2], 'b': [3, 4, 5],
'c': [0.1, 0.2, 0.3]}, index=['c', 'd', 'e'], chunk_size=2)
tensor = from_dataframe(mdf)
self.assertEqual(tensor.shape, (3, 3))
self.assertEqual(np.float64, tensor.dtype)
| 38.228093 | 103 | 0.607821 | 4,114 | 29,665 | 4.32596 | 0.082158 | 0.133169 | 0.049559 | 0.048548 | 0.738551 | 0.685621 | 0.627072 | 0.590043 | 0.561331 | 0.522054 | 0 | 0.041376 | 0.247194 | 29,665 | 775 | 104 | 38.277419 | 0.755553 | 0.03789 | 0 | 0.524306 | 0 | 0 | 0.002982 | 0 | 0 | 0 | 0 | 0 | 0.486111 | 1 | 0.034722 | false | 0 | 0.046875 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d13f252a44ac30eabb61fd5ab7b47904eed9525 | 3,113 | py | Python | astropy/wcs/tests/test_tabprm.py | MatiasRepetto/astropy | 689f9d3b063145150149e592a879ee40af1fac06 | [
"BSD-3-Clause"
] | 4 | 2021-03-25T15:49:56.000Z | 2021-12-15T09:10:04.000Z | astropy/wcs/tests/test_tabprm.py | MatiasRepetto/astropy | 689f9d3b063145150149e592a879ee40af1fac06 | [
"BSD-3-Clause"
] | 20 | 2021-05-03T18:02:23.000Z | 2022-03-12T12:01:04.000Z | astropy/wcs/tests/test_tabprm.py | MatiasRepetto/astropy | 689f9d3b063145150149e592a879ee40af1fac06 | [
"BSD-3-Clause"
] | 3 | 2021-03-28T16:13:00.000Z | 2021-07-16T10:27:25.000Z | # Licensed under a 3-clause BSD style license - see LICENSE.rst
from copy import deepcopy
import pytest
import numpy as np
from astropy import wcs
from .helper import SimModelTAB
def test_wcsprm_tab_basic(tab_wcs_2di):
assert len(tab_wcs_2di.wcs.tab) == 1
t = tab_wcs_2di.wcs.tab[0]
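    # indexing wcs.tab again builds a fresh wrapper object, hence the "is not"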
assert tab_wcs_2di.wcs.tab[0] is not t
def test_tabprm_coord(tab_wcs_2di_f):
t = tab_wcs_2di_f.wcs.tab[0]
c0 = t.coord
c1 = np.ones_like(c0)
t.coord = c1
assert np.allclose(tab_wcs_2di_f.wcs.tab[0].coord, c1)
def test_tabprm_crval_and_deepcopy(tab_wcs_2di_f):
w = deepcopy(tab_wcs_2di_f)
t = tab_wcs_2di_f.wcs.tab[0]
pix = np.array([[2, 3]], dtype=np.float32)
rd1 = tab_wcs_2di_f.wcs_pix2world(pix, 1)
c = t.crval.copy()
d = 0.5 * np.ones_like(c)
t.crval += d
assert np.allclose(tab_wcs_2di_f.wcs.tab[0].crval, c + d)
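    # shifting crval by d shifts the lookup, so evaluating at pix - d must
    # reproduce rd1; the deepcopy taken earlier must be unaffected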
rd2 = tab_wcs_2di_f.wcs_pix2world(pix - d, 1)
assert np.allclose(rd1, rd2)
rd3 = w.wcs_pix2world(pix, 1)
assert np.allclose(rd1, rd3)
def test_tabprm_delta(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert np.allclose([0.0, 0.0], t.delta)
def test_tabprm_K(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert np.all(t.K == [4, 2])
def test_tabprm_M(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert t.M == 2
def test_tabprm_nc(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert t.nc == 8
def test_tabprm_extrema(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
extrema = np.array(
[[[-0.0026, -0.5], [1.001, -0.5]],
[[-0.0026, 0.5], [1.001, 0.5]]]
)
assert np.allclose(t.extrema, extrema)
def test_tabprm_map(tab_wcs_2di_f):
t = tab_wcs_2di_f.wcs.tab[0]
assert np.allclose(t.map, [0, 1])
t.map[1] = 5
assert np.all(tab_wcs_2di_f.wcs.tab[0].map == [0, 5])
t.map = [1, 4]
assert np.all(tab_wcs_2di_f.wcs.tab[0].map == [1, 4])
def test_tabprm_sense(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert np.all(t.sense == [1, 1])
def test_tabprm_p0(tab_wcs_2di):
t = tab_wcs_2di.wcs.tab[0]
assert np.all(t.p0 == [0, 0])
def test_tabprm_print(tab_wcs_2di_f, capfd):
tab_wcs_2di_f.wcs.tab[0].print_contents()
captured = capfd.readouterr()
s = str(tab_wcs_2di_f.wcs.tab[0])
out = str(captured.out)
    lout = out.split('\n')
assert out == s
assert lout[0] == ' flag: 137'
assert lout[1] == ' M: 2'
def test_wcstab_copy(tab_wcs_2di_f):
t = tab_wcs_2di_f.wcs.tab[0]
c0 = t.coord
c1 = np.ones_like(c0)
t.coord = c1
assert np.allclose(tab_wcs_2di_f.wcs.tab[0].coord, c1)
def test_tabprm_crval(tab_wcs_2di_f):
w = deepcopy(tab_wcs_2di_f)
t = tab_wcs_2di_f.wcs.tab[0]
pix = np.array([[2, 3]], dtype=np.float32)
rd1 = tab_wcs_2di_f.wcs_pix2world(pix, 1)
c = t.crval.copy()
d = 0.5 * np.ones_like(c)
t.crval += d
assert np.allclose(tab_wcs_2di_f.wcs.tab[0].crval, c + d)
rd2 = tab_wcs_2di_f.wcs_pix2world(pix - d, 1)
assert np.allclose(rd1, rd2)
rd3 = w.wcs_pix2world(pix, 1)
assert np.allclose(rd1, rd3)
| 22.395683 | 63 | 0.641182 | 595 | 3,113 | 3.097479 | 0.142857 | 0.139989 | 0.209984 | 0.135648 | 0.674986 | 0.662507 | 0.640803 | 0.622355 | 0.595768 | 0.595768 | 0 | 0.069739 | 0.212335 | 3,113 | 138 | 64 | 22.557971 | 0.681892 | 0.019595 | 0 | 0.488636 | 0 | 0 | 0.010492 | 0 | 0 | 0 | 0 | 0 | 0.261364 | 1 | 0.159091 | false | 0 | 0.056818 | 0 | 0.215909 | 0.022727 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d1a4439199c9ef84e882067c41f55f763548eaa | 4,138 | py | Python | src/permission/backends.py | dkopitsa/django-permission | 0319ea3bf0993ca1bd7232e4d60c4b8ec635787d | [
"MIT"
] | 234 | 2015-01-05T17:09:08.000Z | 2021-11-15T09:52:43.000Z | src/permission/backends.py | dkopitsa/django-permission | 0319ea3bf0993ca1bd7232e4d60c4b8ec635787d | [
"MIT"
] | 54 | 2015-02-13T08:06:32.000Z | 2021-05-19T14:07:03.000Z | src/permission/backends.py | dkopitsa/django-permission | 0319ea3bf0993ca1bd7232e4d60c4b8ec635787d | [
"MIT"
] | 35 | 2015-04-13T09:10:38.000Z | 2022-02-15T01:43:03.000Z | # coding=utf-8
"""
Logical permission backends module
"""
from permission.conf import settings
from permission.utils.handlers import registry
from permission.utils.permissions import perm_to_permission
__all__ = ('PermissionBackend',)
class PermissionBackend(object):
"""
A handler based permission backend
"""
supports_object_permissions = True
supports_anonymous_user = True
supports_inactive_user = True
# pylint:disable=unused-argument
def authenticate(self, username, password):
"""
Always return ``None`` to prevent authentication within this backend.
"""
return None
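    # A minimal sketch of how this backend is typically enabled (assumed
    # settings module, not part of this file):
    #
    #   AUTHENTICATION_BACKENDS = (
    #       'django.contrib.auth.backends.ModelBackend',
    #       'permission.backends.PermissionBackend',
    #   )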
def has_perm(self, user_obj, perm, obj=None):
"""
Check if user have permission (of object) based on registered handlers.
It will raise ``ObjectDoesNotExist`` exception when the specified
string permission does not exist and
``PERMISSION_CHECK_PERMISSION_PRESENCE`` is ``True`` in ``settings``
module.
Parameters
----------
user_obj : django user model instance
            A django user model instance to be checked
perm : string
`app_label.codename` formatted permission string
obj : None or django model instance
None or django model instance for object permission
Returns
-------
boolean
Whether the specified user have specified permission (of specified
object).
Raises
------
django.core.exceptions.ObjectDoesNotExist
If the specified string permission does not exist and
``PERMISSION_CHECK_PERMISSION_PRESENCE`` is ``True`` in ``settings``
module.
"""
if settings.PERMISSION_CHECK_PERMISSION_PRESENCE:
# get permission instance from string permission (perm)
            # it raises ObjectDoesNotExist when the permission does not exist
try:
perm_to_permission(perm)
except AttributeError:
                # Django 1.2 internally uses a wrong permission string, thus ignore it
pass
        # get permission handlers for this perm
cache_name = '_%s_cache' % perm
if hasattr(self, cache_name):
handlers = getattr(self, cache_name)
else:
handlers = [h for h in registry.get_handlers()
if perm in h.get_supported_permissions()]
setattr(self, cache_name, handlers)
for handler in handlers:
if handler.has_perm(user_obj, perm, obj=obj):
return True
return False
def has_module_perms(self, user_obj, app_label):
"""
Check if user have permission of specified app based on registered
handlers.
It will raise ``ObjectDoesNotExist`` exception when the specified
string permission does not exist and
``PERMISSION_CHECK_PERMISSION_PRESENCE`` is ``True`` in ``settings``
module.
Parameters
----------
user_obj : django user model instance
A django user model instance which is checked
app_label : string
`app_label.codename` formatted permission string
Returns
-------
boolean
Whether the specified user have specified permission.
Raises
------
django.core.exceptions.ObjectDoesNotExist
If the specified string permission does not exist and
``PERMISSION_CHECK_PERMISSION_PRESENCE`` is ``True`` in ``settings``
module.
"""
        # get permission handlers for this app_label
cache_name = '_%s_cache' % app_label
if hasattr(self, cache_name):
handlers = getattr(self, cache_name)
else:
handlers = [h for h in registry.get_handlers()
if app_label in h.get_supported_app_labels()]
setattr(self, cache_name, handlers)
for handler in handlers:
if handler.has_module_perms(user_obj, app_label):
return True
return False
| 33.918033 | 80 | 0.61479 | 444 | 4,138 | 5.578829 | 0.261261 | 0.029067 | 0.03149 | 0.066613 | 0.592652 | 0.572467 | 0.550666 | 0.512717 | 0.512717 | 0.464271 | 0 | 0.001059 | 0.31537 | 4,138 | 121 | 81 | 34.198347 | 0.873279 | 0.482117 | 0 | 0.410256 | 0 | 0 | 0.020673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.051282 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0d264d356130367d822124ae49b5e060a6d3d6dd | 21,781 | py | Python | ncov_ism/_visualization.py | z2e2/ncov_ism | 14e3dcd7c568e21b437dfdb7d74353ed8bd93c8f | [
"BSD-3-Clause"
] | null | null | null | ncov_ism/_visualization.py | z2e2/ncov_ism | 14e3dcd7c568e21b437dfdb7d74353ed8bd93c8f | [
"BSD-3-Clause"
] | null | null | null | ncov_ism/_visualization.py | z2e2/ncov_ism | 14e3dcd7c568e21b437dfdb7d74353ed8bd93c8f | [
"BSD-3-Clause"
] | 1 | 2020-08-04T23:59:26.000Z | 2020-08-04T23:59:26.000Z | import logging
import matplotlib
import pickle
from math import ceil  # used by get_color_names when the palette has to repeat
matplotlib.use('Agg')
import matplotlib.colors as mcolors
import numpy as np
import matplotlib.pyplot as plt
plt.ioff()
font = {# 'family' : 'serif', # Times (source: https://matplotlib.org/tutorials/introductory/customizing.html)
'family': 'sans-serif', # Helvetica
'size' : 12}
matplotlib.rc('font', **font)
text = {'usetex': False}
matplotlib.rc('text', **text)
monospace_font = {'fontname':'monospace'}
CSS4_COLORS = mcolors.CSS4_COLORS
logging.basicConfig(format='%(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S', level=logging.INFO)
def ISM_filter(dict_freq, threshold):
"""
collapse low frequency ISMs into "OTHER" per location
Parameters
----------
dict_freq: dictionary
ISM frequency of a location of interest
threshold: float
ISMs lower than this threshold will be collapsed into "OTHER"
Returns
-------
res_dict: dictionary
filtered ISM frequency of a location of interest
"""
res_dict = {'OTHER': [0, 0]}
total = sum([int(dict_freq[ISM][1]) for ISM in dict_freq])
for ISM in dict_freq:
if int(dict_freq[ISM][1])/total < threshold:
res_dict['OTHER'] = [0, res_dict['OTHER'][1] + int(dict_freq[ISM][1])]
else:
res_dict[ISM] = [dict_freq[ISM][0], int(dict_freq[ISM][1]) + res_dict.get(ISM, [0, 0])[1]]
if res_dict['OTHER'][1] == 0:
del res_dict['OTHER']
return res_dict
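# Worked example (hypothetical counts): with threshold 0.2,
# ISM_filter({'A': ['2020-03', 90], 'B': ['2020-04', 10]}, 0.2)
# collapses B (10/100 < 0.2) and returns {'OTHER': [0, 10], 'A': ['2020-03', 90]}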
def ISM_time_series_filter(dict_freq, threshold):
"""
collapse low frequency ISMs into "OTHER" per location
Parameters
----------
dict_freq: dictionary
ISM frequency of a location of interest
threshold: float
ISMs lower than this threshold will be collapsed into "OTHER"
Returns
-------
res_dict: dictionary
filtered ISM frequency of a location of interest
"""
res_dict = {'OTHER': [0, 0]}
total = sum([int(dict_freq[ISM]) for ISM in dict_freq])
for ISM in dict_freq:
if int(dict_freq[ISM])/total < threshold:
res_dict['OTHER'] = [0, res_dict['OTHER'][1] + int(dict_freq[ISM])]
else:
res_dict[ISM] = [dict_freq[ISM], int(dict_freq[ISM]) + res_dict.get(ISM, [0, 0])[1]]
if res_dict['OTHER'][1] == 0:
del res_dict['OTHER']
return res_dict
def ISM_visualization(region_raw_count, state_raw_count, count_dict, region_list, state_list, time_series_region_list,
output_folder, ISM_FILTER_THRESHOLD=0.05, ISM_TIME_SERIES_FILTER_THRESHOLD=0.025):
'''
Informative Subtype Marker analysis visualization
Parameters
----------
region_raw_count: dictionary
ISM frequency per region
state_raw_count: dictionary
ISM frequency per state
count_dict: dictionary
ISM frequency time series per region
region_list: list
regions of interest
state_list: list
states of interest
time_series_region_list: list
regions of interest for time series analysis
output_folder: str
path to the output folder
ISM_FILTER_THRESHOLD: float
ISM filter threshold
ISM_TIME_SERIES_FILTER_THRESHOLD: float
ISM filter threshold for time series
Returns
-------
Objects for downstream visualization
'''
ISM_set = set([])
region_pie_chart = {}
for idx, region in enumerate(region_list):
dict_freq_filtered = ISM_filter(region_raw_count[region], ISM_FILTER_THRESHOLD)
region_pie_chart[region] = dict_freq_filtered
ISM_set.update(dict_freq_filtered.keys())
state_pie_chart = {}
for idx, state in enumerate(state_list):
dict_freq_filtered = ISM_filter(state_raw_count[state], ISM_FILTER_THRESHOLD)
state_pie_chart[state] = dict_freq_filtered
ISM_set.update(dict_freq_filtered.keys())
count_list = []
date_list = []
sorted_date = sorted(count_dict.keys())
for date in sorted_date:
dict_freq = {}
for region in time_series_region_list:
regional_dict_freq = count_dict[date][region]
dict_freq_filtered = ISM_time_series_filter(regional_dict_freq, ISM_TIME_SERIES_FILTER_THRESHOLD )
ISM_set.update(list(dict_freq_filtered.keys()))
dict_freq[region] = dict_freq_filtered
count_list.append(dict_freq)
date_list.append(date)
return ISM_set, region_pie_chart, state_pie_chart, count_list, date_list
def customized_ISM_visualization(region_raw_count, count_dict, region_list, output_folder,
ISM_FILTER_THRESHOLD=0.05, ISM_TIME_SERIES_FILTER_THRESHOLD=0.025):
'''
Informative Subtype Marker analysis visualization
Parameters
----------
region_raw_count: dictionary
ISM frequency per region
    count_dict: dictionary
        ISM frequency time series per region
    region_list: list
        regions of interest
output_folder: str
path to the output folder
ISM_FILTER_THRESHOLD: float
ISM filter threshold
ISM_TIME_SERIES_FILTER_THRESHOLD: float
ISM filter threshold for time series
Returns
-------
Objects for downstream visualization
'''
ISM_set = set([])
region_pie_chart = {}
for idx, region in enumerate(region_list):
dict_freq_filtered = ISM_filter(region_raw_count[region], ISM_FILTER_THRESHOLD)
region_pie_chart[region] = dict_freq_filtered
ISM_set.update(dict_freq_filtered.keys())
count_list = []
date_list = []
sorted_date = sorted(count_dict.keys())
for date in sorted_date:
dict_freq = {}
for region in region_list:
regional_dict_freq = count_dict[date][region]
dict_freq_filtered = ISM_time_series_filter(regional_dict_freq, ISM_TIME_SERIES_FILTER_THRESHOLD )
ISM_set.update(list(dict_freq_filtered.keys()))
dict_freq[region] = dict_freq_filtered
count_list.append(dict_freq)
date_list.append(date)
return ISM_set, region_pie_chart, count_list, date_list
def get_color_names(CSS4_COLORS, num_colors):
'''
Prepare colors for each ISM.
'''
bad_colors = set(['seashell', 'linen', 'ivory', 'oldlace','floralwhite',
'lightyellow', 'lightgoldenrodyellow', 'honeydew',
'mintcream', 'azure', 'lightcyan', 'aliceblue',
'ghostwhite', 'lavenderblush'
])
by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgb(color))),
name)
for name, color in CSS4_COLORS.items())
names = [name for hsv, name in by_hsv][14:]
prime_names = ['red', 'orange', 'green', 'blue', 'gold',
'lightskyblue', 'brown', 'black', 'pink',
'yellow']
OTHER = 'gray'
name_list = [name for name in names if name not in prime_names and name != OTHER and name not in bad_colors]
if num_colors > len(name_list) - 10:
logging.info('NOTE: Repetitive colors for different ISMs (inadequate distinctive colors)')
name_list = name_list + ceil(num_colors/len(name_list)) * name_list
if num_colors > len(prime_names):
ind_list = np.linspace(0, len(name_list), num_colors - 10, dtype = int, endpoint=False).tolist()
color_names = prime_names + [name_list[ind] for ind in ind_list]
else:
color_names = prime_names[:num_colors]
return color_names
def global_color_map(COLOR_DICT, ISM_list, out_dir):
'''
Plot color-ISM map for reference.
Adapted from https://matplotlib.org/3.1.0/gallery/color/named_colors.html
'''
ncols = 3
n = len(COLOR_DICT)
nrows = n // ncols + int(n % ncols > 0)
cell_width = 1300
cell_height = 100
swatch_width = 180
margin = 30
topmargin = 40
width = cell_width * 3 + 2 * margin
height = cell_height * nrows + margin + topmargin
dpi = 300
fig, ax = plt.subplots(figsize=(width / dpi, height / dpi), dpi=dpi)
fig.subplots_adjust(margin/width, margin/height,
(width-margin)/width, (height-topmargin)/height)
ax.set_xlim(0, cell_width * 4)
ax.set_ylim(cell_height * (nrows-0.5), -cell_height/2.)
ax.yaxis.set_visible(False)
ax.xaxis.set_visible(False)
ax.set_axis_off()
# ax.set_title(title, fontsize=24, loc="left", pad=10)
ISM_list.append('OTHER')
for i, name in enumerate(ISM_list):
row = i % nrows
col = i // nrows
y = row * cell_height
swatch_start_x = cell_width * col
swatch_end_x = cell_width * col + swatch_width
text_pos_x = cell_width * col + swatch_width + 50
ax.text(text_pos_x, y, name, fontsize=14,
fontname='monospace',
horizontalalignment='left',
verticalalignment='center')
ax.hlines(y, swatch_start_x, swatch_end_x,
color=COLOR_DICT[name], linewidth=18)
plt.savefig('{}/COLOR_MAP.png'.format(out_dir), bbox_inches='tight', dpi=dpi)
plt.close(fig)
def func(pct, allvals):
'''
    convert a wedge percentage back to its absolute count for pie chart labels.
'''
absolute = int(round(pct/100.*np.sum(allvals)))
return "{:d}".format(absolute)
def plot_pie_chart(sizes, labels, colors, ax):
'''
plot pie chart
Adapted from https://matplotlib.org/3.1.1/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py
'''
wedges, texts, autotexts = ax.pie(sizes, autopct=lambda pct: func(pct, sizes), colors = colors, textprops=dict(color="w"))
time_labels = ['-' if label == 'OTHER' else label.split(' ')[1] for label in labels]
ax.legend(wedges, time_labels,
# title="Oligotypes",
loc="lower left",
bbox_to_anchor=(0.8, 0, 0.5, 1))
ax.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
return wedges, labels
def regional_growth_plot(region, ISM_df, REFERENCE_date, count_list, date_list, COLOR_DICT, OUTPUT_FOLDER):
'''
time series plot for a region of interest
'''
xlim_len = (ISM_df[ISM_df['country/region'] == region]['date'].max().date() - REFERENCE_date).days
fig = plt.figure(figsize = (30, 15))
n = 4
ax=plt.subplot(1, 1, 1)
regional_total = []
ISM_regional_set = set([])
for i in range(len(count_list)):
regional_dict_freq = count_list[i][region]
regional_total.append(sum([regional_dict_freq[ISM][1] for ISM in regional_dict_freq]))
ISM_regional_set.update(regional_dict_freq.keys())
ISM_regional_list = []
for ISM in ISM_regional_set:
if ISM != 'OTHER':
ISM_regional_list.append(ISM)
NONOTHER = len(ISM_regional_list)
if 'OTHER' in ISM_regional_set:
ISM_regional_list.append('OTHER')
for ISM in ISM_regional_list:
ISM_regional_growth = []
for i in range(len(count_list)):
regional_dict_freq = count_list[i][region]
            if ISM in regional_dict_freq and regional_dict_freq[ISM][1] != 0:
ISM_regional_growth.append(regional_dict_freq[ISM][1]/regional_total[i])
else:
if ISM == 'OTHER':
other_count = sum([regional_dict_freq[ISM][1] for ISM in regional_dict_freq if ISM not in ISM_regional_set])
if regional_total[i] != 0:
ISM_regional_growth.append(other_count/regional_total[i])
else:
ISM_regional_growth.append(0)
else:
ISM_regional_growth.append(0)
ax.plot(ISM_regional_growth, color = COLOR_DICT[ISM], label = ISM, linewidth = 4, marker = 'o', markersize = 4)
major_ticks = np.arange(0, len(date_list), 5)
minor_ticks = np.arange(0, len(date_list))
major_label = []
for i in major_ticks.tolist():
major_label.append(str(date_list[i]))
ax.set_xticks(minor_ticks, minor=True)
ax.set_xticks(major_ticks)
ax.set_xticklabels(major_label)
plt.setp(ax.get_xticklabels(), rotation=90)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.legend(
loc="lower left",
bbox_to_anchor=(1, 0, 0.5, 1),
prop={'family': monospace_font['fontname']})
plt.xlim([-1, xlim_len])
plt.ylabel('Relative abundance')
ax.grid(which='minor', alpha=0.3, linestyle='--')
ax.grid(which='major', alpha=0.8)
plt.savefig('{}/3_ISM_growth_{}.png'.format(OUTPUT_FOLDER, region), bbox_inches='tight')
plt.close(fig)
def ISM_plot(ISM_df, ISM_set, region_list, region_pie_chart, state_list, state_pie_chart, REFERENCE_date, time_series_region_list, count_list, date_list, OUTPUT_FOLDER):
'''
Generate figures for ISM analysis.
'''
ISM_index = {}
idx = 0
for ISM, counts in ISM_df['ISM'].value_counts().items():
ISM_index[ISM] = idx
idx += 1
logging.info('{} ISMs will show up in the visualizations'.format(len(ISM_set)))
ISM_list = []
for ISM in ISM_set:
if ISM == 'OTHER':
continue
ISM_list.append((ISM, ISM_index[ISM]))
ISM_list = sorted(ISM_list, key = lambda x: x[1])
ISM_list = [item[0] for item in ISM_list]
color_map = get_color_names(CSS4_COLORS, len(ISM_list))
COLOR_DICT = {}
for idx, ISM in enumerate(ISM_list):
COLOR_DICT[ISM] = color_map[idx]
COLOR_DICT['OTHER'] = 'gray'
pickle.dump(COLOR_DICT, open('COLOR_DICT.pkl', 'wb'))
global_color_map(COLOR_DICT, ISM_list, OUTPUT_FOLDER)
DPI = 100
fig = plt.figure(figsize=(25, 15))
wedges_list = []
for idx, region in enumerate(region_list):
dict_freq = region_pie_chart[region]
total = sum([dict_freq[ISM][1] for ISM in dict_freq])
labels = []
sizes = []
colors = []
for ISM in dict_freq:
if ISM == 'OTHER':
continue
labels.append('{}: {}'.format(ISM, dict_freq[ISM][0]))
colors.append(COLOR_DICT[ISM])
sizes.append(dict_freq[ISM][1])
if 'OTHER' in dict_freq:
labels.append('OTHER')
colors.append(COLOR_DICT['OTHER'])
sizes.append(dict_freq['OTHER'][1])
ax=plt.subplot(5, 5, idx+1)
wedges, labels = plot_pie_chart(sizes, labels, colors, ax)
ax.set_title(region)
wedges_list.append((wedges, labels))
labels_handles = {}
handles_OTHER = None
for wedges, labels in wedges_list:
for idx, label in enumerate(labels):
label = label.split(':')[0]
if label == 'OTHER':
handles_OTHER = [wedges[idx], label]
continue
if label not in labels_handles:
labels_handles[label] = wedges[idx]
if handles_OTHER:
handles_list = list(labels_handles.values()) + [handles_OTHER[0]]
labels_list = list(labels_handles.keys()) + [handles_OTHER[1]]
fig.legend(
handles_list,
labels_list,
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
else:
fig.legend(
labels_handles.values(),
labels_handles.keys(),
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
plt.savefig('{}/1_regional_ISM.png'.format(OUTPUT_FOLDER), bbox_inches='tight', dpi=DPI, transparent=True)
plt.close(fig)
fig = plt.figure(figsize=(25, 20))
subplot_y = int(np.sqrt(len(state_list)))
subplot_x = int(np.sqrt(len(state_list))) + 1
if subplot_x * subplot_y < len(state_list):
subplot_y = subplot_x
wedges_list = []
for idx, state in enumerate(state_list):
dict_freq = state_pie_chart[state]
total = sum([dict_freq[ISM][1] for ISM in dict_freq])
labels = []
sizes = []
colors = []
for ISM in dict_freq:
if ISM == 'OTHER':
continue
labels.append('{}: {}'.format(ISM, dict_freq[ISM][0]))
colors.append(COLOR_DICT[ISM])
sizes.append(dict_freq[ISM][1])
if 'OTHER' in dict_freq:
labels.append('OTHER')
colors.append(COLOR_DICT['OTHER'])
sizes.append(dict_freq['OTHER'][1])
ax=plt.subplot(subplot_x, subplot_y, idx+1)
wedges, labels = plot_pie_chart(sizes, labels, colors, ax)
ax.set_title(state)
wedges_list.append((wedges, labels))
labels_handles = {}
handles_OTHER = None
for wedges, labels in wedges_list:
for idx, label in enumerate(labels):
label = label.split(':')[0]
if label == 'OTHER':
handles_OTHER = [wedges[idx], label]
continue
if label not in labels_handles:
labels_handles[label] = wedges[idx]
if handles_OTHER:
handles_list = list(labels_handles.values()) + [handles_OTHER[0]]
labels_list = list(labels_handles.keys()) + [handles_OTHER[1]]
fig.legend(
handles_list,
labels_list,
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
else:
fig.legend(
labels_handles.values(),
labels_handles.keys(),
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
plt.savefig('{}/2_intra-US_ISM.png'.format(OUTPUT_FOLDER), bbox_inches='tight', dpi=DPI, transparent=True)
plt.close(fig)
font = {'family': 'sans-serif', # Helvetica
'size' : 25}
matplotlib.rc('font', **font)
for region in time_series_region_list:
regional_growth_plot(region, ISM_df, REFERENCE_date, count_list, date_list, COLOR_DICT, OUTPUT_FOLDER)
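# customized_ISM_plot repeats the regional pie-chart and growth-curve logic of
# ISM_plot for an arbitrary region list, without the intra-US state panel.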
def customized_ISM_plot(ISM_df, ISM_set, region_list, region_pie_chart, REFERENCE_date, count_list, date_list, OUTPUT_FOLDER):
'''
Generate figures for ISM analysis.
'''
ISM_index = {}
idx = 0
for ISM, counts in ISM_df['ISM'].value_counts().items():
ISM_index[ISM] = idx
idx += 1
logging.info('{} ISMs will show up in the visualizations'.format(len(ISM_set)))
ISM_list = []
for ISM in ISM_set:
if ISM == 'OTHER':
continue
ISM_list.append((ISM, ISM_index[ISM]))
ISM_list = sorted(ISM_list, key = lambda x: x[1])
ISM_list = [item[0] for item in ISM_list]
color_map = get_color_names(CSS4_COLORS, len(ISM_list))
COLOR_DICT = {}
for idx, ISM in enumerate(ISM_list):
COLOR_DICT[ISM] = color_map[idx]
COLOR_DICT['OTHER'] = 'gray'
pickle.dump(COLOR_DICT, open('COLOR_DICT.pkl', 'wb'))
global_color_map(COLOR_DICT, ISM_list, OUTPUT_FOLDER)
DPI = 100
fig = plt.figure(figsize=(25, 15))
wedges_list = []
for idx, region in enumerate(region_list):
dict_freq = region_pie_chart[region]
total = sum([dict_freq[ISM][1] for ISM in dict_freq])
labels = []
sizes = []
colors = []
for ISM in dict_freq:
if ISM == 'OTHER':
continue
labels.append('{}: {}'.format(ISM, dict_freq[ISM][0]))
colors.append(COLOR_DICT[ISM])
sizes.append(dict_freq[ISM][1])
if 'OTHER' in dict_freq:
labels.append('OTHER')
colors.append(COLOR_DICT['OTHER'])
sizes.append(dict_freq['OTHER'][1])
ax=plt.subplot(5, 5, idx+1)
wedges, labels = plot_pie_chart(sizes, labels, colors, ax)
ax.set_title(region)
wedges_list.append((wedges, labels))
labels_handles = {}
handles_OTHER = None
for wedges, labels in wedges_list:
for idx, label in enumerate(labels):
label = label.split(':')[0]
if label == 'OTHER':
handles_OTHER = [wedges[idx], label]
continue
if label not in labels_handles:
labels_handles[label] = wedges[idx]
if handles_OTHER:
handles_list = list(labels_handles.values()) + [handles_OTHER[0]]
labels_list = list(labels_handles.keys()) + [handles_OTHER[1]]
fig.legend(
handles_list,
labels_list,
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
else:
fig.legend(
labels_handles.values(),
labels_handles.keys(),
bbox_to_anchor=(0.82, 0.25),
bbox_transform=plt.gcf().transFigure,
ncol=5,
prop={'family': monospace_font['fontname']}
)
plt.savefig('{}/1_regional_ISM.png'.format(OUTPUT_FOLDER), bbox_inches='tight', dpi=DPI, transparent=True)
plt.close(fig)
font = {'family': 'sans-serif', # Helvetica
'size' : 25}
matplotlib.rc('font', **font)
for region in region_list:
regional_growth_plot(region, ISM_df, REFERENCE_date, count_list, date_list, COLOR_DICT, OUTPUT_FOLDER) | 36.301667 | 169 | 0.618567 | 2,849 | 21,781 | 4.486837 | 0.121095 | 0.048189 | 0.022373 | 0.013142 | 0.718376 | 0.687632 | 0.664007 | 0.642807 | 0.641242 | 0.628647 | 0 | 0.01507 | 0.262752 | 21,781 | 600 | 170 | 36.301667 | 0.780981 | 0.122263 | 0 | 0.622685 | 0 | 0 | 0.057781 | 0.004552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025463 | false | 0 | 0.013889 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d269caf9870407b800e22ac3a36faa1d6165a79 | 656 | py | Python | algDev/visualization/plot_indicators.py | ajmal017/ralph-usa | 41a7f910da04cfa88f603313fad2ff44c82b9dd4 | [
"Apache-2.0"
] | null | null | null | algDev/visualization/plot_indicators.py | ajmal017/ralph-usa | 41a7f910da04cfa88f603313fad2ff44c82b9dd4 | [
"Apache-2.0"
] | 7 | 2021-03-10T10:08:30.000Z | 2022-03-02T07:38:13.000Z | algDev/visualization/plot_indicators.py | ajmal017/ralph-usa | 41a7f910da04cfa88f603313fad2ff44c82b9dd4 | [
"Apache-2.0"
] | 1 | 2020-04-17T19:15:06.000Z | 2020-04-17T19:15:06.000Z | from models.indicators import Indicators
import numpy as np
import matplotlib.pyplot as plt
def plot_prices(ax, prices, line_style):
i = np.arange(len(prices))
    ax.plot(i, prices, line_style)  # plot against the integer index, not the axes object
return ax
def plot_macd(ax, prices, slow_period, fast_period, line_style='k-'):
macd = Indicators.macd(prices, slow_period, fast_period)[slow_period - 1:]
i = np.arange(len(prices))[slow_period-1:]
ax.plot(i, macd, line_style)
return ax
def plot_ema(ax, prices, period, line_style='k-'):
ema = Indicators.ema(prices, period)[period-1:]
i = np.arange(len(prices))[period-1:]
ax.plot(i, ema, line_style)
return ax
| 26.24 | 78 | 0.692073 | 104 | 656 | 4.221154 | 0.259615 | 0.123007 | 0.061503 | 0.082005 | 0.446469 | 0.223235 | 0.113895 | 0 | 0 | 0 | 0 | 0.00738 | 0.17378 | 656 | 24 | 79 | 27.333333 | 0.802583 | 0 | 0 | 0.176471 | 0 | 0 | 0.006098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.176471 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0d273c1d1f4925a594d10dc698fbd7f793d46ab3 | 449 | py | Python | tests/strings/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | null | null | null | tests/strings/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | 5 | 2017-08-25T20:43:16.000Z | 2019-10-18T16:49:43.000Z | tests/strings/test_basic.py | jaebradley/python_problems | 24b8ecd49e3095f5c607906cb36019b9e865a20f | [
"MIT"
] | null | null | null | """
Unit Test for strings.basic problems
"""
from unittest import TestCase
from strings.basic import alphabetize
class TestAlphabetize(TestCase):
"""
Unit Test for alphabetize method
"""
def test_should_return_alphabet(self):
"""
        Test alphabetize method on a mixed-case string covering the whole alphabet
"""
self.assertEqual('aBbcDeFgHiJkLmNoPqRsTuVwXyZ', alphabetize('ZyXwVuTsRqPoNmLkJiHgFeDcBba'))
| 22.45 | 99 | 0.714922 | 43 | 449 | 7.395349 | 0.651163 | 0.050314 | 0.069182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2049 | 449 | 19 | 100 | 23.631579 | 0.890756 | 0.309577 | 0 | 0 | 0 | 0 | 0.204545 | 0.204545 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0d281e7eb3d40eae3e191f01e52cfee3344410ff | 6,162 | py | Python | Python3/HayStack_API.py | ConsensusGroup/Haystack | c2d0b8fb7b2064b05a5d256bb949dda9a0ef569d | [
"MIT"
] | 1 | 2019-11-28T08:50:26.000Z | 2019-11-28T08:50:26.000Z | Python3/HayStack_API.py | ConsensusGroup/Haystack | c2d0b8fb7b2064b05a5d256bb949dda9a0ef569d | [
"MIT"
] | 3 | 2019-11-22T04:23:47.000Z | 2019-11-30T07:11:24.000Z | Python3/HayStack_API.py | ConsensusGroup/Haystack | c2d0b8fb7b2064b05a5d256bb949dda9a0ef569d | [
"MIT"
] | 3 | 2018-03-19T05:20:44.000Z | 2019-11-22T00:56:31.000Z | #This script is going to be used API calls but first it will serve as a testing script.
from IOTA_Module import *
from Configuration_Module import *
from Tools_Module import *
from UserProfile_Module import *
from Cryptography_Module import *
from NodeFinder_Module import *
from DynamicPublicLedger_Module import *
import config
from time import sleep
class HayStack:
def __init__(self):
pass
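    # HayStack is a thin facade: each method below delegates to one of the
    # imported modules and documents the shape of the value it returns.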
def Seed_Generator(self):
Output = Seed_Generator()
#Output: A 81 character seed for IOTA
return Output
def Write_File(self, File_Directory, Data, Setting = "w"):
Output = Tools().Write_File(File_Directory, Data, Setting)
#Output: True if file was written, False if failed
        return Output
def Delete_File(self, File_Directory):
Output = Tools().File_Manipulation(File_Directory, Setting = "d")
#Output: True if file deleted, False if failed to delete file
return Output
def Read_File(self, File_Directory):
Output = Tools().Read_File(File_Directory)
#Output: False if file not found/read, Else contents get returned
return Output
def Initialization(self):
Output = Initialization()
#Output: None
return None
def Asymmetric_KeyGen(self, Password):
Output = Key_Generation().Asymmetric_KeyGen(Password)
#Output: Private key as bytes
return Output
def Import_PrivateKey(self, PrivateKey, Password):
Output = Key_Generation().Import_PrivateKey(PrivateKey, Password)
#Output Objects: PrivateKey, PublicKey
return Output
def JSON_Manipulation(self, File_Directory, **kwargs):
Output = Tools().JSON_Manipulation(File_Directory, **kwargs)
#Optional Input: Dictionary
#Output: Write to file -> True, Error(FileNotFoundError) -> False, Read from file = Dictionary
return Output
def UserProfile_Keys(self, Password):
Output = UserProfile().Get_Keys(Password)
#Output: Output.PrivateKey (bytes), Output.PrivateSeed [Decrypted = bytes, Failed Decryption = False], Output.PublicKey
return Output
def IOTA_Generate_Address(self, Seed, Node, Index):
Output = IOTA(Seed = Seed, Node = Node).Generate_Address(Index = Index)
#Output: 81 tryte address in 'bytes'
return Output
def IOTA_Send(self, Seed, Node, PoW, Receiver_Address, Message):
Output = IOTA(Seed = Seed, Node = Node, PoW = PoW).Send(Receiver_Address = Receiver_Address, Message = Message)
#Output: TX_Hash (81 tryte Tx hash, otherwise False [Bool])
return Output
def IOTA_Receive(self, Seed, Node, Start, Stop):
Output = IOTA(Seed = Seed, Node = Node).Receive(Start = Start, Stop = Stop)
#Output: Dictionary {"BundleHash":{"ReceiverAddress", "Tokens", "Timestamp (ms)", "Index", "Message", "Message_Tag"}}, else False [Bool]
return Output
def Test_IOTA_Nodes(self):
Output = Test_Nodes()
# Output: Nothing
return None
def Fastest_Node(self):
Output = Return_Optimal_Node()
# Output: [Fastest_Sending: {"Node", "PoW"}, Fastest_Receiving: {"Node", "PoW"}]
return Output
def Tangle_Block(self, Seed, Node):
Output = IOTA(Seed = Seed, Node = Node).TangleTime()
#Output: Output.Current_Time (time in ms)[int], Output.Block_Remainder (fraction of block left)[float], Output.CurrentBlock (Current block)[int]
        return Output
#Code to later delete!!!!
def Start_Dynamic_Ledger(self):
#First initialize the directories
self.Initialization()
#self.Test_IOTA_Nodes()
for i in range(1000000):
Submission = DynamicPublicLedger().Check_Current_Ledger()
if Submission == True:
delay = 5
elif Submission == False:
delay = 60
else:
delay = 120
print(Submission)
            sleep(delay)  # honor the computed back-off instead of a fixed 5s
if __name__ == "__main__":
x = HayStack()
c = Configuration()
#Change this to test module
Function = "Start_Dynamic_Ledger"
if Function == "Start_Dynamic_Ledger":
x.Start_Dynamic_Ledger()
if Function == "Fastest_Node":
print(x.Fastest_Node())
if Function == "Tangle_Block":
Seed = c.PublicSeed
Node = c.Preloaded_Nodes[0]
x.Tangle_Block(Seed = Seed, Node = Node)
if Function == "Test_IOTA_Nodes":
x.Test_IOTA_Nodes()
if Function == "Seed_Generator":
print(x.Seed_Generator())
if Function == "Write_File":
x.Write_File(File_Directory = c.User_Folder+"/"+c.Keys_Folder+"/"+c.PrivateKey_File, Data = "Hello")
if Function == "Delete_File":
x.Delete_File(File_Directory = c.User_Folder+"/"+c.Keys_Folder+"/"+c.PrivateKey_File)
if Function == "Read_File":
print(x.Read_File(File_Directory = c.User_Folder+"/"+c.Keys_Folder+"/"+c.PrivateKey_File))
if Function == "Initialization":
x.Initialization()
if Function == "Asymmetric_KeyGen":
print(x.Asymmetric_KeyGen(Password = ""))
if Function == "JSON_Manipulation":
x.JSON_Manipulation(File_Directory = c.User_Folder+"/"+c.Keys_Folder+"/"+c.PrivateKey_File, Dictionary = {})
if Function == "UserProfile_Keys":
print(x.UserProfile_Keys(Password = config.Password).PrivateSeed)
if Function == "IOTA_Generate_Address":
Seed = c.PublicSeed
Node = c.Preloaded_Nodes[0]
print(x.IOTA_Generate_Address(Seed = Seed, Node = Node, Index = 0))
if Function == "IOTA_Send":
Seed = c.PublicSeed
Node = c.Preloaded_Nodes[2]
Test_Message = "Test12134"
Address = x.IOTA_Generate_Address(Seed = Seed, Node = Node, Index = 7)
print(x.IOTA_Send(Seed = Seed, Node = Node, PoW = True, Receiver_Address = Address, Message = Test_Message))
print(x.Tangle_Block(Seed = c.PublicSeed, Node = Node))
if Function == "IOTA_Receive":
Seed = c.PublicSeed
Node = c.Preloaded_Nodes[0]
print(x.IOTA_Receive(Seed = Seed, Node = Node, Start = 6, Stop = 7))
| 35.618497 | 152 | 0.648815 | 737 | 6,162 | 5.242877 | 0.222524 | 0.03882 | 0.042702 | 0.037267 | 0.207816 | 0.175207 | 0.121118 | 0.112319 | 0.103261 | 0.083333 | 0 | 0.007091 | 0.244726 | 6,162 | 172 | 153 | 35.825581 | 0.823163 | 0.191334 | 0 | 0.184211 | 0 | 0 | 0.0526 | 0.004232 | 0 | 0 | 0 | 0 | 0 | 1 | 0.149123 | false | 0.078947 | 0.096491 | 0 | 0.385965 | 0.087719 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0d2eef0e6b41c739ce4208807368ff89025b240e | 358 | py | Python | setup.py | Mr-TelegramBot/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 24 | 2018-10-05T13:04:30.000Z | 2020-05-12T08:45:34.000Z | setup.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 3 | 2019-06-26T07:20:20.000Z | 2021-05-24T13:06:56.000Z | setup.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 5 | 2018-10-05T14:29:28.000Z | 2020-08-11T15:04:10.000Z | #!/usr/bin/env python
from distutils.core import setup
setup(
name="python-tdlib",
version="1.4.0",
author="andrew-ld",
license="MIT",
url="https://github.com/andrew-ld/python-tdlib",
packages=["py_tdlib", "py_tdlib.constructors", "py_tdlib.factory"],
install_requires=["werkzeug", "simplejson"],
python_requires=">=3.6",
)
| 23.866667 | 71 | 0.656425 | 47 | 358 | 4.893617 | 0.723404 | 0.091304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.148045 | 358 | 14 | 72 | 25.571429 | 0.737705 | 0.055866 | 0 | 0 | 0 | 0 | 0.409496 | 0.062315 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d3113fefbcc2c8c41ba544c0d489f999a22dfe6 | 4,752 | py | Python | rhapsody_web/models.py | wbadart/rhapsody | 433a376b4a3881d4b12bebbbbdf08194c62fa8a2 | [
"MIT"
] | null | null | null | rhapsody_web/models.py | wbadart/rhapsody | 433a376b4a3881d4b12bebbbbdf08194c62fa8a2 | [
"MIT"
] | 12 | 2018-03-21T02:26:45.000Z | 2018-05-09T07:12:55.000Z | rhapsody_web/models.py | wbadart/rhapsody | 433a376b4a3881d4b12bebbbbdf08194c62fa8a2 | [
"MIT"
] | null | null | null | from itertools import chain
from django.db import models
from random import choices
class Node(object):
def neighbors(self):
raise NotImplementedError
def graph(self, depth=1):
if not depth:
return {self: set()}
elif depth == 1:
return {self: set(self.neighbors())}
else:
init = self.graph(depth=1)
for n in self.neighbors():
init.update(n.graph(depth - 1))
return init
def edges(self, depth=1):
self.g = self.graph(depth)
for vertex, edgelist in self.g.items():
for edge in edgelist:
yield (vertex, edge)
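    # e.g. for a Song s, s.graph(depth=2) maps the song, its artist and its
    # album to their (sampled) neighbor sets; s.edges(depth=2) flattens that
    # mapping into (vertex, neighbor) pairs.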
class Artist(models.Model, Node):
spotify_id = models.CharField(max_length=22, primary_key=True)
popularity = models.IntegerField(null=True)
name = models.CharField(max_length=30, default="")
# albums - ManyToManyField included in Album
# songs - ManyToManyField included in Song
# concerts = models.ManyToManyField(Concert)
def __str__(self):
return self.name + " (" + self.spotify_id + ")"
def neighbors(self):
#albums = (a for a in Album.objects.all() if self in a.artists.all())
#songs = Song.objects.filter(artist=self)
#return chain(albums, songs)
adj_songs = Song.objects.filter(artist=self)
if len(adj_songs) > 4:
return choices(Song.objects.filter(artist=self), k=4)
else:
return adj_songs
class Genre(models.Model):
name = models.CharField(max_length=30, primary_key=True)
artists = models.ManyToManyField(Artist)
# albums - ManyToManyField included in Album
# songs -
# In the spotify data, individual songs don't have genre
# data. We could extrapolate this from the album or artist genre
# data later though
class Album(models.Model, Node):
ALBUM = "A"
SINGLE = "S"
COMPILATION = "C"
ALBUM_TYPE_CHOICES = (
(ALBUM, "album"),
(SINGLE, "single"),
(COMPILATION, "compilation")
)
album_type = models.CharField(
max_length=1, choices=ALBUM_TYPE_CHOICES, default=ALBUM)
artists = models.ManyToManyField(Artist)
spotify_id = models.CharField(max_length=22, primary_key=True)
genres = models.ManyToManyField(Genre)
label = models.CharField(max_length=30, default="")
name = models.CharField(max_length=30, default="")
# Note this is going to come in
# as a string from the spotify
# API, so some conversion will
# have to be done
release_date = models.DateField(null=True)
def __str__(self):
return self.name + " (" + self.spotify_id + ")"
def neighbors(self):
songs = choices(Song.objects.filter(album=self), k=4)
return chain(self.artists.all(), songs)
class Song(models.Model, Node):
spotify_id = models.CharField(max_length=22, primary_key=True)
artist = models.ForeignKey(Artist, on_delete=models.CASCADE)
album = models.ForeignKey(Album, null=True, on_delete=models.CASCADE)
title = models.CharField(max_length=30, default="")
name = models.CharField(max_length=30, default="")
def __str__(self):
return self.title + " (" + self.spotify_id + ")"
def neighbors(self):
return [self.artist, self.album]
class Playlist(models.Model):
spotify_id = models.CharField(max_length=22, primary_key=True)
owner = models.ForeignKey('User', null=True, on_delete=models.CASCADE)
songs = models.ManyToManyField(Song)
collaborative = models.BooleanField(default=False)
description = models.CharField(max_length=5000, default="")
# followers - see ManyToManyField in User
name = models.CharField(max_length=30, default="")
public = models.BooleanField(default=True)
class RadioStation(models.Model):
pass
class Concert(models.Model):
pass
class User(models.Model):
# abstract = True
username = models.CharField(max_length=30, unique=True)
spotify_id = models.CharField(max_length=22, primary_key=True)
artist = models.ManyToManyField(Artist)
genre = models.ManyToManyField(Genre)
album = models.ManyToManyField(Album)
song = models.ManyToManyField(Song)
playlist_followed = models.ManyToManyField(Playlist)
radio_station = models.ManyToManyField(RadioStation)
friends = models.ForeignKey("self", on_delete=models.SET_NULL, null=True)
class Admin(User):
pass
class Regular(User):
pass
class Song_Graph(models.Model):
song1_id = models.CharField(max_length=22, null=True)
song2_id = models.CharField(max_length=22, null=True)
edge_weight = models.IntegerField(null=True)
class Meta:
unique_together = ("song1_id", "song2_id")
| 31.058824 | 77 | 0.666877 | 589 | 4,752 | 5.268251 | 0.242784 | 0.082179 | 0.098614 | 0.131486 | 0.33097 | 0.307444 | 0.222688 | 0.19884 | 0.175636 | 0.175636 | 0 | 0.012703 | 0.22138 | 4,752 | 152 | 78 | 31.263158 | 0.825946 | 0.129209 | 0 | 0.26 | 0 | 0 | 0.014078 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09 | false | 0.04 | 0.03 | 0.04 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0d315a6eab2cc3aa2454ad8e379488130a26267e | 1,076 | py | Python | examples/kddcup2011/track1.py | zenogantner/MML-KDD | 4c66101439d83bdcd15a464bf95c7ae74f1abbed | [
"BSD-3-Clause"
] | 1 | 2021-03-07T15:29:48.000Z | 2021-03-07T15:29:48.000Z | examples/kddcup2011/track1.py | zenogantner/MML-KDD | 4c66101439d83bdcd15a464bf95c7ae74f1abbed | [
"BSD-3-Clause"
] | null | null | null | examples/kddcup2011/track1.py | zenogantner/MML-KDD | 4c66101439d83bdcd15a464bf95c7ae74f1abbed | [
"BSD-3-Clause"
] | 3 | 2015-03-17T20:22:48.000Z | 2019-11-20T06:25:55.000Z | #!/usr/bin/env ipy
import clr
clr.AddReference("MyMediaLite.dll")
clr.AddReference("MyMediaLiteExperimental.dll")
from MyMediaLite import *
train_file = "trainIdx1.firstLines.txt"
validation_file = "validationIdx1.firstLines.txt"
test_file = "testIdx1.firstLines.txt"
# load the data
training_data = IO.KDDCup2011.Ratings.Read(train_file)
validation_data = IO.KDDCup2011.Ratings.Read(validation_file)
test_data = IO.KDDCup2011.Ratings.ReadTest(test_file)
item_relations = IO.KDDCup2011.Items.Read("trackData1.txt", "albumData1.txt", "artistData1.txt", "genreData1.txt", 1);
print item_relations
# set up the recommender
recommender = RatingPrediction.ItemAverage()
recommender.MinRating = 0
recommender.MaxRating = 100
recommender.Ratings = training_data
print "Training ..."
recommender.Train()
print "done."
# measure the accuracy on the validation set
print Eval.RatingEval.Evaluate(recommender, validation_data)
# predict on the test set
print "Predicting ..."
Eval.KDDCup.PredictTrack1(recommender, test_data, "track1-output.txt")
print "done."
| 29.888889 | 118 | 0.77974 | 132 | 1,076 | 6.25 | 0.454545 | 0.058182 | 0.058182 | 0.083636 | 0.065455 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031217 | 0.106877 | 1,076 | 35 | 119 | 30.742857 | 0.827263 | 0.112454 | 0 | 0.086957 | 0 | 0 | 0.24 | 0.108421 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.086957 | null | null | 0.26087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d315c941d5c258d24b3ce3f264e4496447f3b10 | 754 | py | Python | resdk/resources/kb/mapping.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | resdk/resources/kb/mapping.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | resdk/resources/kb/mapping.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | """KB mapping resource."""
from __future__ import absolute_import, division, print_function, unicode_literals
from ..base import BaseResource
class Mapping(BaseResource):
"""Knowledge base Mapping resource."""
endpoint = 'kb.mapping.admin'
query_endpoint = 'kb.mapping.search'
query_method = 'POST'
WRITABLE_FIELDS = ()
UPDATE_PROTECTED_FIELDS = ()
READ_ONLY_FIELDS = ('id', 'relation_type', 'source_db', 'source_id', 'target_db', 'target_id')
def __repr__(self):
"""Format mapping representation."""
# pylint: disable=no-member
return "<Mapping source_db='{}' source_id='{}' target_db='{}' target_id='{}'>".format(
self.source_db, self.source_id, self.target_db, self.target_id)
| 32.782609 | 98 | 0.676393 | 88 | 754 | 5.443182 | 0.5 | 0.056367 | 0.070981 | 0.066806 | 0.133612 | 0.133612 | 0.133612 | 0.133612 | 0 | 0 | 0 | 0 | 0.179045 | 754 | 22 | 99 | 34.272727 | 0.773829 | 0.147215 | 0 | 0 | 0 | 0 | 0.250399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.916667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0d350e50046900dd997c091b0cd94feb9afe2441 | 9,510 | py | Python | 2_functions.py | codernayeem/python-cheat-sheet | ec6fe9f33e9175251df65899cef89f65219b9cb4 | [
"MIT"
] | null | null | null | 2_functions.py | codernayeem/python-cheat-sheet | ec6fe9f33e9175251df65899cef89f65219b9cb4 | [
"MIT"
] | null | null | null | 2_functions.py | codernayeem/python-cheat-sheet | ec6fe9f33e9175251df65899cef89f65219b9cb4 | [
"MIT"
] | null | null | null | # Functions
print("************* Function ***********")
# Simple function without any arguments/parameters
def say_welcome():
    print('Welcome')
# Simple function with arguments/parameters
def say_hello(name, age):
    print('Hello', name, age)
# this function returns None
say_hello('Nayeem', 18) # passing args as positional args
say_hello(age=19, name='Sami') # passing args as keyword args (if you change the order, use keywords)
def check_odd_number(n):
return True if n % 2 else False
if check_odd_number(43):
print(43, " is a odd number")
print("********* Default parameter **********")
# Simple function with a default arguments/parameters
def say_somethings(name, message="Welcome"):
print(message, name)
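say_somethings('Nayeem') # prints: Welcome Nayeem (default message used)
say_somethings('Sami', message='Good morning') # overrides the default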
# Type hint:
print("********* Type hint **********")
def greeting(name: str) -> str:
    # Type hints help IDEs and linters; they make it much easier to reason statically about your code.
    # The Python runtime does not enforce function and variable type annotations. They can be used by third-party tools such as type checkers, IDEs, linters, etc.
    # here we declare that name should be a str and that a str will be returned
return 'Hello ' + name
greeting("Nayeem")
# scope
print("************ Scope *************")
parent_name = "Anything" # this is a global variable
def show_parent1():
print(parent_name) # this will print the global variable
def show_parent2():
parent_name = "Lovely" # this will not change global variable. it will create a new local variable
print(parent_name) # print local variable
def show_parent3():
    # we can use a global variable in a function
    # but cannot modify it directly
    # To modify:
# method 1:
global parent_name
parent_name = "Something" # this will change the global variable
print(parent_name)
# method 2:
globals()['parent_name'] = "Something_Nothing" # this will change the global variable
print(globals()['parent_name'])
def show_parent4(parent_name):
print(parent_name) # this parent_name is a local variable
# to use the global variable here
print(globals()['parent_name']) # this will print the global variable, not the local one
    # A variable cannot be both a parameter and a global
    # So you cannot do that here:
# global parent_name
# print(parent_name)
show_parent1()
show_parent2()
show_parent3()
show_parent4("Long Lasting")
l1 = [56, 87, 89, 45, 57]
d1 = {'Karim': 50, 'Rafiq': 90, 'Sabbir': 60}
# Lambda function
print("************ Lambda function *************")
# a lambda function is just a one-line anonymous function.
# Its definition: lambda parameter_list: expression
# lambda functions are used when we need a function once, e.g. as an argument to another function
print(min(d1.items(), key=lambda item: item[1]))  # returns the (key, value) pair with the smallest value
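# Added sketch: lambdas also work as sort keys, e.g. sorting dict items by value
print(sorted(d1.items(), key=lambda item: item[1]))  # [('Karim', 50), ('Sabbir', 60), ('Rafiq', 90)]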
# Python built-in functions/methods
print("************ Some Built-in functions *************")
print(len(l1)) # returns the length of that iterable
print(sum(l1)) # return the sum of an iterable
print(max(l1)) # returns the biggest element
print(min(l1)) # returns the smallest element
print(max(d1, key=lambda k: d1[k])) # returns the key with the biggest value
print(min(d1.items(), key=lambda item: item[1])) # returns the (key, value) pair with the smallest value
print(all([0, 1, 5])) # returns True if all the elements are truthy, otherwise False
print(any([0, 1, 5])) # returns True if any of the elements is truthy, otherwise False
print(repr('hi')) # calls __repr__() on that object; returns a printable representation
print(id(l1)) # returns a unique integer number which represents identity
print(type(56)) # returns the class type of that object
print(dir(567)) # Returns a list of the specified object's properties and methods
print(ord('A')) # 65 : Return the Unicode code point for a one-character string
print(chr(65)) # 'A' : Return a one-character Unicode string with the given ordinal
print(abs(-62)) # 62 : Return the absolute value of a number
eval('print("hi")') # Evaluates and executes an expression
print(eval('(58*9)+3**2')) # Evaluates and executes an expression
print("************ All Built-in functions *************")
# abs() Returns the absolute value of a number
# all() Returns True if all items in an iterable object are true
# any() Returns True if any item in an iterable object is true
# ascii() Returns a readable version of an object. Replaces non-ASCII characters with escape characters
# bin() Returns the binary version of a number
# bool() Returns the boolean value of the specified object
# bytearray() Returns an array of bytes
# bytes() Returns a bytes object
# callable() Returns True if the specified object is callable, otherwise False
# chr() Returns a character from the specified Unicode code.
# classmethod() Converts a method into a class method
# compile() Returns the specified source as an object, ready to be executed
# complex() Returns a complex number
# delattr() Deletes the specified attribute (property or method) from the specified object
# dict() Returns a dictionary (Array)
# dir() Returns a list of the specified object's properties and methods
# divmod() Returns the quotient and the remainder when argument1 is divided by argument2
# enumerate() Takes a collection (e.g. a tuple) and returns it as an enumerate object
# eval() Evaluates and executes an expression
# exec() Executes the specified code (or object)
# filter() Use a filter function to exclude items in an iterable object
# float() Returns a floating point number
# format() Formats a specified value
# frozenset() Returns a frozenset object
# getattr() Returns the value of the specified attribute (property or method)
# globals() Returns the current global symbol table as a dictionary
# hasattr() Returns True if the specified object has the specified attribute (property/method)
# hash() Returns the hash value of a specified object
# help() Executes the built-in help system
# hex() Converts a number into a hexadecimal value
# id() Returns the id of an object
# input() Allowing user input
# int() Returns an integer number
# isinstance() Returns True if a specified object is an instance of a specified class
# issubclass() Returns True if a specified class is a subclass of a specified class
# iter() Returns an iterator object
# len() Returns the length of an object
# list() Returns a list
# locals() Returns an updated dictionary of the current local symbol table
# map() Returns the specified iterator with the specified function applied to each item
# max() Returns the largest item in an iterable
# memoryview() Returns a memory view object
# min() Returns the smallest item in an iterable
# next() Returns the next item in an iterable
# object() Returns a new object
# oct() Converts a number into an octal
# open() Opens a file and returns a file object
# ord() Returns an integer representing the Unicode code point of the specified character
# pow() Returns the value of x to the power of y
# print() Prints to the standard output device
# property() Gets, sets, deletes a property
# range() Returns a sequence of numbers, starting from 0 and increments by 1 (by default)
# repr() Returns a readable version of an object
# reversed() Returns a reversed iterator
# round() Rounds a number
# set() Returns a new set object
# setattr() Sets an attribute (property/method) of an object
# slice() Returns a slice object
# sorted() Returns a sorted list
# staticmethod() Converts a method into a static method
# str() Returns a string object
# sum() Sums the items of an iterator
# super() Returns an object that represents the parent class
# tuple() Returns a tuple
# type() Returns the type of an object
# vars() Returns the __dict__ property of an object
# zip() Returns an iterator, from two or more iterators
# Decorators
print('*********** Decorators ************')
from functools import wraps
def star(func):
    @wraps(func)  # keep the wrapped function's name and docstring
    def inner(*args, **kwargs):
        print("*" * 30)
        func(*args, **kwargs)
        print("*" * 30)
    return inner
@star
def printer1(msg):
print(msg)
def percent(func):
    @wraps(func)  # keep the wrapped function's name and docstring
    def inner(*args, **kwargs):
        print("%" * 30)
        func(*args, **kwargs)
        print("%" * 30)
    return inner
@star
@percent
def printer2(msg):
print(msg)
printer1("Hello")
printer2("Hello")
# Function caching
print('*********** Function caching ************')
import time
from functools import lru_cache
@lru_cache(maxsize=32)
def some_work(n):
time.sleep(3)
return n * 2
print('Running work')
some_work(5)
print('Calling again ..')
some_work(5) # this time it returns immediately: the result for 5 is cached
print('finished')
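# Added sketch: lru_cache tracks per-argument statistics
print(some_work.cache_info())  # e.g. CacheInfo(hits=1, misses=1, maxsize=32, currsize=1)
# some_work.cache_clear() empties the cache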
# Coroutines
print('*********** Coroutines ************')
import time
def searcher():
time.sleep(3)
    book = "This is ok"
while True:
        text = (yield) # receiving values via yield makes this a coroutine
if text in book:
print(f'"{text}" found')
else:
print(f'"{text}" not found')
search = searcher()
next(search) # prime the coroutine: runs until the first (yield)
search.send('ok')
print('Going for next')
search.send('okk')
print('Going for next')
search.send('is')
print('Finished')
search.close()
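# Added sketch: a small decorator can do the priming next() call for you (a common idiom)
def primed(gen_func):
    def starter(*args, **kwargs):
        gen = gen_func(*args, **kwargs)
        next(gen)  # advance to the first (yield)
        return gen
    return starter

@primed
def echoer():
    while True:
        text = (yield)
        print('got:', text)

e = echoer()  # already primed, no explicit next() needed
e.send('hello')
e.close()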
| 35.222222 | 161 | 0.674238 | 1,339 | 9,510 | 4.753547 | 0.283047 | 0.037706 | 0.016339 | 0.011312 | 0.209112 | 0.156167 | 0.100864 | 0.067871 | 0.055302 | 0.055302 | 0 | 0.012738 | 0.215773 | 9,510 | 269 | 162 | 35.35316 | 0.840708 | 0.652681 | 0 | 0.241379 | 0 | 0 | 0.221487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146552 | false | 0 | 0.034483 | 0.025862 | 0.232759 | 0.482759 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
0d3f74b53d1976c4b1197848cb8716e04cb65c67 | 2,535 | py | Python | djangocms_html_tags/cms_plugins.py | radity/djangocms-html-tags | d9d8d8b2609685d896e05af8fc9e2271c1dc0c26 | [
"MIT"
] | null | null | null | djangocms_html_tags/cms_plugins.py | radity/djangocms-html-tags | d9d8d8b2609685d896e05af8fc9e2271c1dc0c26 | [
"MIT"
] | 2 | 2019-02-17T22:15:40.000Z | 2019-02-20T22:40:21.000Z | djangocms_html_tags/cms_plugins.py | radity/djangocms-html-tags | d9d8d8b2609685d896e05af8fc9e2271c1dc0c26 | [
"MIT"
] | 2 | 2019-02-01T09:03:52.000Z | 2020-01-14T12:56:52.000Z | from cms.plugin_base import CMSPluginBase
from cms.plugin_pool import plugin_pool
from django.utils.translation import ugettext_lazy as _
from djangocms_html_tags.forms import HTMLTextInputForm, HTMLFormForm, HTMLTextareaForm
from djangocms_html_tags.models import HTMLTag, HTMLText
from djangocms_html_tags.utils import FormMethod
class HTMLTextBase(CMSPluginBase):
model = HTMLText
module = _("HTML Tags")
render_template = 'djangocms_html_tags/html_text.html'
fields = ('value', 'attributes')
form = HTMLTextInputForm
tag = None
def save_model(self, request, obj, form, change):
obj.tag = self.tag
return super(HTMLTextBase, self).save_model(request, obj, form, change)
class Heading1Plugin(HTMLTextBase):
name = _("Heading 1")
tag = HTMLTag.H1
class Heading2Plugin(HTMLTextBase):
name = _("Heading 2")
tag = HTMLTag.H2
class Heading3Plugin(HTMLTextBase):
name = _("Heading 3")
tag = HTMLTag.H3
class Heading4Plugin(HTMLTextBase):
name = _("Heading 4")
tag = HTMLTag.H4
class Heading5Plugin(HTMLTextBase):
name = _("Heading 5")
tag = HTMLTag.H5
class Heading6Plugin(HTMLTextBase):
name = _("Heading 6")
tag = HTMLTag.H6
class ParagraphPlugin(HTMLTextBase):
name = _("Paragraph")
tag = HTMLTag.P
form = HTMLTextareaForm
allow_children = True
class ButtonPlugin(HTMLTextBase):
name = _("Button")
tag = HTMLTag.BUTTON
allow_children = True
class InputPlugin(HTMLTextBase):
name = _("Input")
tag = HTMLTag.INPUT
render_template = 'djangocms_html_tags/input.html'
class FormPlugin(HTMLTextBase):
name = _("Form")
tag = HTMLTag.FORM
model = HTMLText
form = HTMLFormForm
fields = (('method', 'action'), 'value', 'attributes')
render_template = 'djangocms_html_tags/form.html'
allow_children = True
def render(self, context, instance, placeholder):
context.update({'is_post': instance.attributes.get('method') == FormMethod.POST})
return super(FormPlugin, self).render(context, instance, placeholder)
plugin_pool.register_plugin(Heading1Plugin)
plugin_pool.register_plugin(Heading2Plugin)
plugin_pool.register_plugin(Heading3Plugin)
plugin_pool.register_plugin(Heading4Plugin)
plugin_pool.register_plugin(Heading5Plugin)
plugin_pool.register_plugin(Heading6Plugin)
plugin_pool.register_plugin(ParagraphPlugin)
plugin_pool.register_plugin(ButtonPlugin)
plugin_pool.register_plugin(InputPlugin)
plugin_pool.register_plugin(FormPlugin)
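# Note (added sketch, not part of the original file): django CMS also accepts
# registration as a class decorator, which keeps registration next to the class:
#
#   @plugin_pool.register_plugin
#   class Heading1Plugin(HTMLTextBase):
#       ...
#
# The explicit register_plugin() calls above are equivalent.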
| 26.40625 | 89 | 0.740039 | 284 | 2,535 | 6.401408 | 0.302817 | 0.066007 | 0.09901 | 0.132013 | 0.051155 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01131 | 0.162919 | 2,535 | 95 | 90 | 26.684211 | 0.845429 | 0 | 0 | 0.073529 | 0 | 0 | 0.092702 | 0.036686 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.088235 | 0 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0d40d5a5294c9c8290f11ed2656dfcbf8016ab4e | 12,878 | py | Python | qa327/frontend/sessions.py | rickyzhangca/CISC-327 | e419caafa6ae3fe77aa411228b6b58b237fe6a61 | [
"MIT"
] | null | null | null | qa327/frontend/sessions.py | rickyzhangca/CISC-327 | e419caafa6ae3fe77aa411228b6b58b237fe6a61 | [
"MIT"
] | 39 | 2020-10-11T02:31:14.000Z | 2020-12-15T20:18:56.000Z | qa327/frontend/sessions.py | rickyzhangca/CISC-327 | e419caafa6ae3fe77aa411228b6b58b237fe6a61 | [
"MIT"
] | 1 | 2020-10-17T02:44:43.000Z | 2020-10-17T02:44:43.000Z | import helpers
import exceptions
import datetime as dt
'''
This is the sessions module:
'''
'''
Base class with the basic structure of all frontend sessions.
'''
class Session:
    # username is None when no one is logged in.
def __init__(self, username = None):
self.username = username
    # returns the next session object.
def routing(self):
return self
# functionality of the current session.
def operate(self):
pass
'''
Base class for sessions that required login.
'''
class LoggedInSession(Session):
    # If logged in, show the menu items buy, sell, update, and logout, and print the user's balance.
    # Raises an exception if the user is not logged in.
def __init__(self, username):
super().__init__(username)
if not username:
            print('\nInvalid command, user must be logged in first')
raise exceptions.CannotAccessPageException()
def routing(self):
return LandingSession(self.username)
def getMenu(self):
return 'buy, sell, update, and logout'
'''
Base class for sessions that does not required login.
'''
class UnloggedInSession(Session):
    # If not logged in, show the menu items login, register, and exits.
    # Raises an exception if the user is already logged in.
def __init__(self, username):
super().__init__()
if username:
            print('\nInvalid command, user must be logged out first')
raise exceptions.CannotAccessPageException()
def routing(self):
return LandingSession()
def getMenu(self):
return 'login, register, and exits'
'''
Landing page that displays usermenu and balance.
'''
class LandingSession(Session):
def __init__(self, username = None):
super().__init__(username)
# go to corresponding sessions.
def routing(self):
try:
if self.command == 'login':
new_session = LoginSession(self.username)
elif self.command == 'register':
new_session = RegisterSession(self.username)
elif self.command == 'buy':
new_session = BuySession(self.username)
elif self.command == 'sell':
new_session = SellSession(self.username)
elif self.command == 'update':
new_session = UpdateSession(self.username)
elif self.command == 'logout':
new_session = LogoutSession(self.username)
elif self.command == 'exits':
new_session = ExitSession(self.username)
else:
                print('\nCommand undefined.')
new_session = self
except exceptions.CannotAccessPageException:
new_session = self
return new_session
def operate(self):
print('\nLanding Screen...')
self.showbalance()
self.displayMenu()
self.getUserCommand()
    # display the user menu depending on whether the user is logged in.
def displayMenu(self):
print('Menu options - ', end = '')
if self.username:
print(LoggedInSession.getMenu(self))
else:
print(UnloggedInSession.getMenu(self))
def showbalance(self):
if self.username:
print('Hi', self.username + '!')
print('Your balance is: $' + str(helpers.ResourcesHelper.getUserInfo()[self.username]['balance']) + '.\n')
def getUserCommand(self):
self.command = input('Your command: ')
'''
Session that guides the user's login process.
'''
class LoginSession(UnloggedInSession):
def __init__(self, username):
super().__init__(username)
self.username = None
def routing(self):
return LandingSession(self.username)
def operate(self):
print('\nLog in session starts...')
        # check email and password
try:
email = helpers.UserIOHelper.acceptEmail()
password = helpers.UserIOHelper.acceptPassword()
self.authorize(email, password)
except exceptions.WrongFormatException as e:
print(str(e))
print('\nLogin failed, ending session...')
    # Authorize the email and password the user entered; set username on success.
def authorize(self, email, password):
for i in helpers.ResourcesHelper.getUserInfo():
if helpers.ResourcesHelper.getUserInfo()[i]['email'] == email and helpers.ResourcesHelper.getUserInfo()[i]['password'] == password:
print('\nAccount logged in!')
self.username = i
return
print('\nEmail or password incorrect.')
'''
user register
'''
class RegisterSession(UnloggedInSession):
def __init__(self, username):
super().__init__(username)
self.username = None
def operate(self):
try:
user_email = helpers.UserIOHelper.acceptEmail()
if self.checkExistence(user_email):
raise exceptions.EmailAlreadyExistsException()
user_name = helpers.UserIOHelper.acceptUserName()
user_password = helpers.UserIOHelper.acceptPassword()
user_password2 = helpers.UserIOHelper.acceptPassword2()
if user_password != user_password2:
raise exceptions.PasswordsNotMatchingException()
self.addNewUser(user_name, user_email, user_password)
except exceptions.EmailAlreadyExistsException:
print('\nThis email already exists in the system')
print('\nRegistration failed, ending session...')
except exceptions.PasswordsNotMatchingException:
print('\nThe password entered first time does not match the one enter the second time.')
print('\nRegistration failed, ending session...')
except exceptions.WrongFormatException as e:
print(str(e))
print('\nRegistration failed, ending session...')
def checkExistence(self, user_email):
for i in helpers.ResourcesHelper.getUserInfo():
if user_email == helpers.ResourcesHelper.getUserInfo()[i]['email']:
return True
return False
def addNewUser(self, user_name, user_email, user_password):
helpers.TransactionsHelper.newUserTransaction("register", user_name, user_email, user_password, 3000)
print('\nRegistered successfully.')
'''
update ticket
'''
class UpdateSession(LoggedInSession):
    # only available after the user has logged in
def __init__(self, username):
super().__init__(username)
def operate(self):
try:
ticket_name = helpers.UserIOHelper.acceptTicketName()
ticket_quantity = helpers.UserIOHelper.acceptTicketQuantity()
ticket_price = helpers.UserIOHelper.acceptTicketPrice()
ticket_date = helpers.UserIOHelper.acceptDate()
if ticket_name not in helpers.ResourcesHelper.getTicketInfo():
raise exceptions.WrongTicketNameException
self.updateTicket(ticket_name, ticket_price, ticket_quantity, ticket_date)
except exceptions.WrongFormatException as e:
print(str(e))
print('\nUpdate failed, ending session...')
except exceptions.WrongTicketNameException:
print('\nThe ticket name you entered cannot be found, ending session...')
def updateTicket(self, ticket_name, ticket_price, ticket_quantity, ticket_date):
helpers.TransactionsHelper.newTicketTransaction("update", self.username, ticket_name, ticket_price, ticket_quantity, ticket_date)
helpers.ResourcesHelper.getTicketInfo()[ticket_name]['price'] = ticket_price
helpers.ResourcesHelper.getTicketInfo()[ticket_name]['number'] = ticket_quantity
helpers.ResourcesHelper.getTicketInfo()[ticket_name]['date'] = ticket_date
'''
User logout.
'''
class LogoutSession(LoggedInSession):
    # only available after the user has logged in
def __init__(self, username):
super().__init__(username)
def operate(self):
print('\nLogout Successfully!')
def routing(self):
return LandingSession(None)
'''
Exiting the program.
'''
class ExitSession(UnloggedInSession):
    # only available when no user is logged in
def __init__(self, username):
super().__init__(username)
def operate(self):
print('\nSaving transactions & exit...')
def routing(self):
return None
'''
Selling session.
'''
class SellSession(LoggedInSession):
    # only available after the user has logged in
def __init__(self, username):
super().__init__(username)
def operate(self):
print('\nSelling Session starts...')
try:
ticket_name = helpers.UserIOHelper.acceptTicketName()
if ticket_name in helpers.ResourcesHelper.getTicketInfo():
raise exceptions.WrongTicketNameException
ticket_quantity = helpers.UserIOHelper.acceptTicketQuantity()
ticket_price = helpers.UserIOHelper.acceptTicketPrice()
ticket_date = helpers.UserIOHelper.acceptDate()
self.addNewTicket(ticket_name, ticket_price, ticket_quantity, ticket_date)
except exceptions.WrongFormatException as e:
print(str(e))
print('\nAdd new ticket failed, ending session...')
except exceptions.WrongTicketNameException:
            print('\nTicket with this name already exists, ending session...')
except exceptions.WrongTicketQuantityException:
print('\nThe ticket quantity you entered is not available, ending session...')
except exceptions.WrongTicketPriceException as e:
print(str(e))
print('\nThe ticket price you entered is not available, ending session...')
def addNewTicket(self, ticket_name, ticket_price, ticket_quantity, ticket_date):
helpers.TransactionsHelper.newTicketTransaction("sell", self.username, ticket_name, ticket_price, ticket_quantity, ticket_date)
helpers.ResourcesHelper.getTicketInfo()[ticket_name] = {
'price': ticket_price,
'number': ticket_quantity,
'email': helpers.ResourcesHelper.getUserInfo()[self.username]['email'],
'date': ticket_date
}
print('\nTicket info added successfully.')
'''
Buying session.
'''
class BuySession(LoggedInSession):
def __init__(self, username):
super().__init__(username)
def operate(self):
print('\nBuying Session starts...')
self.printTicketList()
try:
ticket_name = helpers.UserIOHelper.acceptTicketName()
if ticket_name not in helpers.ResourcesHelper.getTicketInfo():
raise exceptions.WrongTicketNameException
ticket_quantity = helpers.UserIOHelper.acceptTicketQuantity()
if ticket_quantity > helpers.ResourcesHelper.getTicketInfo()[ticket_name]['number']:
raise exceptions.WrongTicketQuantityException
ticket_price = helpers.ResourcesHelper.getTicketInfo()[ticket_name]['price']
if self.checkBalance(ticket_price, ticket_quantity):
self.processOrder(ticket_name, ticket_price, ticket_quantity)
else:
print('\nInsufficient funds, ending session...')
except exceptions.WrongFormatException as e:
print(str(e))
print('\nBuy ticket failed, ending session...')
except exceptions.WrongTicketNameException:
print('\nThe ticket name you entered cannot be found, ending session...')
except exceptions.WrongTicketQuantityException:
print('\nThe ticket quantity you entered is not available, ending session...')
def printTicketList(self):
        print('\nTickets available:\nTicket Name\tPrice\tNumber\tDate')
for i in helpers.ResourcesHelper.getTicketInfo():
print(i + '\t' + str(helpers.ResourcesHelper.getTicketInfo()[i]['price']) + '\t' + str(helpers.ResourcesHelper.getTicketInfo()[i]['number']) + '\t' + str(helpers.ResourcesHelper.getTicketInfo()[i]['date']))
def checkBalance(self, ticket_price, ticket_quantity):
return helpers.ResourcesHelper.getUserInfo()[self.username]['balance'] >= ticket_price * ticket_quantity
def processOrder(self, ticket_name, ticket_price, ticket_quantity):
helpers.ResourcesHelper.getUserInfo()[self.username]['balance'] -= ticket_price * ticket_quantity
helpers.ResourcesHelper.getTicketInfo()[ticket_name]['number'] -= ticket_quantity
helpers.TransactionsHelper.newTicketTransaction("buy", self.username, ticket_name, ticket_price, ticket_quantity, helpers.ResourcesHelper.getTicketInfo()[ticket_name]['date'])
print('\nTicket "' + ticket_name + '" sold successfully.') | 38.100592 | 218 | 0.650101 | 1,263 | 12,878 | 6.482977 | 0.168646 | 0.049829 | 0.064118 | 0.039692 | 0.582193 | 0.513312 | 0.459941 | 0.433195 | 0.37555 | 0.310454 | 0 | 0.000725 | 0.250427 | 12,878 | 338 | 219 | 38.100592 | 0.847509 | 0.051949 | 0 | 0.421739 | 0 | 0 | 0.132725 | 0.002131 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.065217 | 0.013043 | 0.03913 | 0.291304 | 0.195652 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0d42aee94bc382588eed7920090962df76b33a83 | 1,574 | py | Python | PageObjectLibrary/loginpage_v2_using_composition.py | rama-bornfree/simple-pageobject | 6a66a256867f7b1604005818b12e7c9f8dc6c027 | [
"Apache-2.0"
] | null | null | null | PageObjectLibrary/loginpage_v2_using_composition.py | rama-bornfree/simple-pageobject | 6a66a256867f7b1604005818b12e7c9f8dc6c027 | [
"Apache-2.0"
] | null | null | null | PageObjectLibrary/loginpage_v2_using_composition.py | rama-bornfree/simple-pageobject | 6a66a256867f7b1604005818b12e7c9f8dc6c027 | [
"Apache-2.0"
] | null | null | null | from pageobject import PageObject
from homepage import HomePage
from locatormap import LocatorMap
from robot.api import logger
class LoginPage():
PAGE_TITLE = "Login - PageObjectLibrary Demo"
PAGE_URL = "/login.html"
    # these are accessible via dot notation with self.locator
# (eg: self.locator.username, etc)
_locators = {
"username": "id=id_username",
"password": "id=id_password",
"submit_button": "id=id_submit",
}
def __init__(self):
self.logger = logger
self.po = PageObject()
self.se2lib = self.po.se2lib
self.locator = LocatorMap(getattr(self, "_locators", {}))
    def navigate_to(self, url):
        logger.console("Navigating to {0}".format(url))
        self.se2lib.go_to(url)
        if 'yahoo' in url:
            logger.console("Navigating to homepage")
            return HomePage()
def create_browser(self, browser_name):
self.se2lib.create_webdriver(browser_name)
def enter_username(self, username):
"""Enter the given string into the username field"""
self.se2lib.input_text(self.locator.username, username)
def enter_password(self, password):
"""Enter the given string into the password field"""
self.se2lib.input_text(self.locator.password, password)
def click_the_submit_button(self):
"""Click the submit button, and wait for the page to reload"""
with self.po._wait_for_page_refresh():
self.se2lib.click_button(self.locator.submit_button)
return HomePage() | 32.122449 | 70 | 0.65629 | 192 | 1,574 | 5.21875 | 0.354167 | 0.065868 | 0.037924 | 0.051896 | 0.177645 | 0.121756 | 0.06986 | 0 | 0 | 0 | 0 | 0.005853 | 0.240152 | 1,574 | 49 | 71 | 32.122449 | 0.83194 | 0.151842 | 0 | 0.060606 | 0 | 0 | 0.12282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.090909 | 0.121212 | 0 | 0.484848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0d442d2358091616d21b759e490009907755740d | 2,569 | py | Python | reefbot-controller/bin/ButtonMapper.py | MRSD2018/reefbot-1 | a595ca718d0cda277726894a3105815cef000475 | [
"MIT"
] | null | null | null | reefbot-controller/bin/ButtonMapper.py | MRSD2018/reefbot-1 | a595ca718d0cda277726894a3105815cef000475 | [
"MIT"
] | null | null | null | reefbot-controller/bin/ButtonMapper.py | MRSD2018/reefbot-1 | a595ca718d0cda277726894a3105815cef000475 | [
"MIT"
] | null | null | null | '''Maps buttons for the Reefbot control.'''
import roslib; roslib.load_manifest('reefbot-controller')
import rospy
class JoystickButtons:
DPAD_LR = 4
DPAD_UD = 5
ALOG_LEFT_UD = 1
ALOG_LEFT_LR = 0
ALOG_RIGHT_UD = 3
ALOG_RIGHT_LR = 2
BUTTON_1 = 0
BUTTON_2 = 1
BUTTON_3 = 2
BUTTON_4 = 3
BUTTON_5 = 4
BUTTON_6 = 5
BUTTON_7 = 6
BUTTON_8 = 7
BUTTON_9 = 8
BUTTON_10 = 9
BUTTON_11 = 10
BUTTON_12 = 11
class ButtonMapper:
def __init__(self):
self.diveDownButton = rospy.get_param("~dive_down_button",
JoystickButtons.BUTTON_7)
self.diveUpButton = rospy.get_param("~dive_up_button",
JoystickButtons.BUTTON_5)
self.diveAxis = rospy.get_param("~dive_axis", None)
self.leftTurnButton = rospy.get_param("~left_turn_button", None)
self.rightTurnButton = rospy.get_param("~right_turn_button", None)
self.turnAxis = rospy.get_param("~turn_axis",
JoystickButtons.ALOG_LEFT_LR)
self.fwdButton = rospy.get_param("~fwd_button", None)
self.backButton = rospy.get_param("~back_button", None)
self.fwdBackAxis = rospy.get_param("~fwd_back_axis",
JoystickButtons.ALOG_LEFT_UD)
def GetFwdAxis(self, joyMsg):
'''Returns the value of the move fwd/backward axis. +1 is full forward.'''
return self._GetAxisValue(joyMsg, self.fwdBackAxis, self.fwdButton,
self.backButton)
def GetTurnAxis(self, joyMsg):
'''Returns the value of the turning axis. +1 is full left.'''
return self._GetAxisValue(joyMsg, self.turnAxis, self.leftTurnButton,
self.rightTurnButton)
def GetDiveAxis(self, joyMsg):
'''Returns the value of the dive axis. +1 is full up.'''
return self._GetAxisValue(joyMsg, self.diveAxis, self.diveUpButton,
self.diveDownButton)
def _GetAxisValue(self, joyMsg, axis, posButton, negButton):
if axis is not None and axis >= 0:
return joyMsg.axes[axis]
axisVal = 0.
if joyMsg.buttons[posButton] and not joyMsg.buttons[negButton]:
axisVal = 1.
if not joyMsg.buttons[posButton] and joyMsg.buttons[negButton]:
axisVal = -1.
return axisVal
def GetCeilingDisable(self, joyMsg):
'''Returns the value of the button that disables the ceiling.'''
return self._GetButtonValue(joyMsg, JoystickButtons.BUTTON_10)
def _GetButtonValue(self, joyMsg, button):
return joyMsg.buttons[button]
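# Hypothetical usage sketch (added; topic name and node wiring are assumptions,
# not part of the original file). ButtonMapper expects sensor_msgs/Joy messages:
#
# from sensor_msgs.msg import Joy
#
# mapper = ButtonMapper()
#
# def on_joy(msg):
#     rospy.logdebug('fwd=%f turn=%f dive=%f',
#                    mapper.GetFwdAxis(msg),
#                    mapper.GetTurnAxis(msg),
#                    mapper.GetDiveAxis(msg))
#
# rospy.init_node('button_mapper_demo')
# rospy.Subscriber('joy', Joy, on_joy)
# rospy.spin()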
| 32.935897 | 78 | 0.651615 | 319 | 2,569 | 5.050157 | 0.263323 | 0.044693 | 0.072626 | 0.049659 | 0.171322 | 0.074488 | 0.074488 | 0 | 0 | 0 | 0 | 0.023946 | 0.252238 | 2,569 | 77 | 79 | 33.363636 | 0.81468 | 0.105878 | 0 | 0 | 0 | 0 | 0.062528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12069 | false | 0 | 0.034483 | 0.017241 | 0.62069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0d475beab3b1cd6b2f3e149dfdb979b4179e340d | 614 | py | Python | FatherSon/HelloWorld2_source_code/listing_7-1.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | 1 | 2019-01-04T05:47:50.000Z | 2019-01-04T05:47:50.000Z | FatherSon/HelloWorld2_source_code/listing_7-1.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | FatherSon/HelloWorld2_source_code/listing_7-1.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | # Listing_7-1.py
# Copyright Warren & Carter Sande, 2013
# Released under MIT license http://www.opensource.org/licenses/mit-license.php
# Version $version ----------------------------
# Using comparison operators
num1 = float(raw_input("Enter the first number: "))
num2 = float(raw_input("Enter the second number: "))
if num1 < num2:
print num1, "is less than", num2
if num1 > num2:
print num1, "is greater than", num2
if num1 == num2: #Remember that this is a double equal sign
print num1, "is equal to", num2
if num1 != num2:
print num1, "is not equal to", num2
| 34.111111 | 82 | 0.63355 | 87 | 614 | 4.436782 | 0.551724 | 0.062176 | 0.103627 | 0.11658 | 0.349741 | 0.183938 | 0.129534 | 0 | 0 | 0 | 0 | 0.050104 | 0.21987 | 614 | 17 | 83 | 36.117647 | 0.755741 | 0.40228 | 0 | 0 | 0 | 0 | 0.297376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d492d307402409cd402271070306ff0cee7ae12 | 2,593 | py | Python | read_testdata.py | veralily/MLKD-mission3-resentForIC | f652f80ad848fca321f912e9c1594517f1942e42 | [
"MIT"
] | null | null | null | read_testdata.py | veralily/MLKD-mission3-resentForIC | f652f80ad848fca321f912e9c1594517f1942e42 | [
"MIT"
] | null | null | null | read_testdata.py | veralily/MLKD-mission3-resentForIC | f652f80ad848fca321f912e9c1594517f1942e42 | [
"MIT"
] | null | null | null | import skimage.io # bug. need to import this before tensorflow
import skimage.transform # bug. need to import this before tensorflow
from resnet_train import train
from resnet import inference
import tensorflow as tf
import time
import os
import sys
import re
import numpy as np
from image_processing import image_preprocessing
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('filename_list', 'check.doc.list', 'file list')
'''def file_list(filename_list):
reader = open(filename_list, 'r')
filenames = reader.readlines()
filenames = [int(f) for f in filenames]
return filenames'''
def file_list(data_dir):
i = 0
filenames = []
for root, dirs, files in os.walk(data_dir):
for file in files:
if os.path.splitext(file)[1] == '.jpg':
filename = os.path.splitext(file)[0]
i = i + 1
filenames.append(int(filename))
print("number of files")
print(i)
return filenames
def load_data(data_dir):
data = []
start_time = time.time()
files = file_list(data_dir)
duration = time.time() - start_time
print "took %f sec" % duration
for img_fn in files:
img_fn = str(img_fn) + '.jpg'
fn = os.path.join(data_dir, img_fn)
data.append(fn)
return data
def distorted_inputs(data_dir):
filenames = load_data(data_dir)
files = []
images = []
i = 0
files_b = []
images_b = []
height = FLAGS.input_size
width = FLAGS.input_size
depth = 3
step = 0
for filename in filenames:
image_buffer = tf.read_file(filename)
bbox = []
        is_train = False  # local flag; avoids shadowing the imported resnet_train.train
        image = image_preprocessing(image_buffer, bbox, is_train, 0)
files_b.append(filename)
images_b.append(image)
i = i + 1
#print(image)
if i == 20:
print(i)
files.append(files_b)
images_b = tf.reshape(images_b, [20, height, width, depth])
images.append(images_b)
files_b = []
images_b = []
i = 0
#files = files_b
#images = tf.reshape(images_b, [13, height, width, depth])
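    # Note (added): the loop above emits fixed batches of 20, so any trailing
    # images that do not fill a final batch are silently dropped. A hypothetical
    # fix would flush the remainder after the loop:
    #
    # if files_b:
    #     files.append(files_b)
    #     images.append(tf.reshape(images_b, [len(files_b), height, width, depth]))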
images = np.array(images, ndmin=1)
#images = tf.cast(images, tf.float32)
#images = tf.reshape(images, shape=[-1, height, width, depth])
print(type(files))
print(type(images))
print(images.shape)
#files = tf.reshape(files, [len(files)])
# print(files)
# print(images)
return files, images
_, images = distorted_inputs("check_ic//check")
| 22.745614 | 74 | 0.600463 | 342 | 2,593 | 4.415205 | 0.283626 | 0.03245 | 0.031788 | 0.025828 | 0.046358 | 0.046358 | 0.046358 | 0 | 0 | 0 | 0 | 0.01084 | 0.288469 | 2,593 | 113 | 75 | 22.946903 | 0.807588 | 0.128037 | 0 | 0.15942 | 0 | 0 | 0.040924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.15942 | null | null | 0.101449 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0d4e981d336496f51c5ebd89178a51218846e23a | 514 | py | Python | visualize/preprocess.py | peitaosu/SpectralClustering | 5c679ce0f9f2974fa7be2abe9caa1265dbbd4a2c | [
"MIT"
] | null | null | null | visualize/preprocess.py | peitaosu/SpectralClustering | 5c679ce0f9f2974fa7be2abe9caa1265dbbd4a2c | [
"MIT"
] | null | null | null | visualize/preprocess.py | peitaosu/SpectralClustering | 5c679ce0f9f2974fa7be2abe9caa1265dbbd4a2c | [
"MIT"
] | null | null | null | import os, sys
class Preprocesser():
def __init__(self):
self.data = {
"X": [],
"Y": []
}
def process(self, input):
if not os.path.isfile(input):
print(input + " is not exists.")
sys.exit(-1)
with open(input) as in_file:
for line in in_file.readlines():
self.data["X"].append(float(line.split("\t")[0]))
self.data["Y"].append(float(line.split("\t")[1]))
return self.data
| 25.7 | 65 | 0.478599 | 63 | 514 | 3.809524 | 0.571429 | 0.133333 | 0.075 | 0.166667 | 0.175 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009063 | 0.356031 | 514 | 19 | 66 | 27.052632 | 0.716012 | 0 | 0 | 0 | 0 | 0 | 0.044834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.3125 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b495cef87e530613f2c8277610c653f49cd1a833 | 2,003 | py | Python | library/aq6315.py | mjasperse/telepythic | fbf24a885cb195dc5cecf78e112b8ff4b993043d | [
"BSD-3-Clause"
] | 2 | 2020-10-06T15:55:26.000Z | 2021-04-01T04:09:01.000Z | library/aq6315.py | mjasperse/telepythic | fbf24a885cb195dc5cecf78e112b8ff4b993043d | [
"BSD-3-Clause"
] | null | null | null | library/aq6315.py | mjasperse/telepythic | fbf24a885cb195dc5cecf78e112b8ff4b993043d | [
"BSD-3-Clause"
] | null | null | null | """
AQ6315E DATA EXTRACTOR
Extracts all visible traces from Ando AQ-6315E Optical Spectrum Analyser
Usage: ./aq6315.py [filename]
If specified, extracted data is saved to CSV called "filename"
Relevant list of commands available at
http://support.us.yokogawa.com/downloads/TMI/COMM/AQ6317B/AQ6317B%20R0101.pdf
> GPIB commands, section 9
> Trace query format, section 9-42
"""
import sys
from telepythic import TelepythicDevice, PrologixInterface
import numpy as np
# connect to device
bridge = PrologixInterface(gpib=1,host=177,timeout=0.5)
dev = TelepythicDevice(bridge)
# confirm device identity
id = dev.id(expect=b'ANDO,AQ6315')
print 'Device ID:',id
resln = dev.query(b'RESLN?') # resolution
ref = dev.query(b'REFL?') # reference level
npts = dev.query(b'SEGP?') # number of points in sweep
expectedlen = 12*npts+8 # estimate size of trace (ASCII format)
def get_trace(cmd):
# device returns a comma-separated list of values
Y = dev.ask(cmd).strip().split(',')
# first value is an integer, listing how many values follow
n = int(Y.pop(0))
# check that it matches what we got (i.e. no data was lost)
assert len(Y) == n, 'Got %i elems, expected %i'%(len(Y),n)
# convert to a numpy array
return np.asarray(Y,'f')
import pylab
pylab.clf()
res = {}
for t in b'ABC': # device has 3 traces
if dev.ask(b'DSP%s?'%t): # if the trace is visible
print 'Reading Trace',t # download this trace
res[t+b'V'] = get_trace(b'LDAT'+t) # download measurement values (Y)
res[t+b'L'] = get_trace(b'WDAT'+t) # download wavelength values (X)
pylab.plot(res[t+b'L'],res[t+b'V']) # plot results
# close connection to prologix
dev.close()
# convert results dict to a pandas dataframe
import pandas as pd
df = pd.DataFrame(res)
if len(sys.argv) > 1:
# write to csv if filename was specified
df.to_csv(sys.argv[1],index=False)
# show graph
pylab.show()
| 32.306452 | 81 | 0.6665 | 313 | 2,003 | 4.252396 | 0.552716 | 0.012021 | 0.015026 | 0.009016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02986 | 0.214179 | 2,003 | 61 | 82 | 32.836066 | 0.815756 | 0.291063 | 0 | 0 | 0 | 0 | 0.098196 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | null | null | 0 | 0.16129 | null | null | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b49ba128005050a703dec74a0221d62f301b5b8c | 1,656 | py | Python | plugins/vpn.py | alobbs/autome | faf4c836ccb896d03020aa0dbb2c1332ffc791a2 | [
"MIT"
] | null | null | null | plugins/vpn.py | alobbs/autome | faf4c836ccb896d03020aa0dbb2c1332ffc791a2 | [
"MIT"
] | null | null | null | plugins/vpn.py | alobbs/autome | faf4c836ccb896d03020aa0dbb2c1332ffc791a2 | [
"MIT"
] | null | null | null | import os
import plugin
import pluginconf
util = plugin.get("util")
FILE_COUNTER = "~/.vpn_counter"
FILE_VPN_SH = "~/.vpn_sh"
EXPECT_SCRIPT = """#!/usr/bin/expect
spawn {cmd}
expect -exact "Enter Auth Username:"
send -- "{user}\\n"
expect -exact "Enter Auth Password:"
send -- "{password}\\n"
interact
"""
class VPN:
def __init__(self):
# Read configuration
self.conf = pluginconf.get('vpn')
def is_connected(self):
with os.popen("ps aux") as f:
pcs = f.read()
return self.conf['openvpn_conf'] in pcs
def get_password(self):
file_counter = os.path.expanduser(FILE_COUNTER)
# Read usage counter
with open(file_counter, 'r') as f:
raw = f.read()
counter = int(raw.strip()) + 1
# OAuth
cmd = "oathtool -b %s -c %s" % (self.conf['secret'], counter)
with os.popen(cmd, 'r') as f:
code = f.read().strip()
# Update counter
with open(file_counter, 'w') as f:
f.write(str(counter))
password = "%s%s" % (self.conf['pin'], code)
return password
def connect(self):
# Compose connection script
cmd = "sudo /usr/local/sbin/openvpn --config %s" % self.conf['openvpn_conf']
user = self.conf['user']
password = self.get_password()
script = EXPECT_SCRIPT.format(cmd=cmd, user=user, password=password)
# Write it to a file
vpn_script = os.path.expanduser(FILE_VPN_SH)
with open(vpn_script, 'w+') as f:
f.write(script)
os.chmod(vpn_script, 0o770)
# Run
os.system(vpn_script)
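# Hypothetical usage sketch (added; not part of the original file):
#
# if __name__ == '__main__':
#     vpn = VPN()
#     if vpn.is_connected():
#         print('VPN already up')
#     else:
#         vpn.connect()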
| 24.352941 | 84 | 0.577899 | 218 | 1,656 | 4.270642 | 0.366972 | 0.051557 | 0.029001 | 0.042965 | 0.077336 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004205 | 0.282005 | 1,656 | 67 | 85 | 24.716418 | 0.778806 | 0.064614 | 0 | 0 | 0 | 0 | 0.194679 | 0.014925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0.162791 | 0.069767 | 0 | 0.232558 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b49bc707c55e530402f7bf7501d6b84e68f26a0e | 7,321 | py | Python | jasypt4py/generator.py | fareliner/jasypt4py | 6ea7cdbb4ee1e3249cc9dcadfa3c54e603614458 | [
"Apache-2.0"
] | 7 | 2018-04-04T02:56:48.000Z | 2021-09-23T01:34:57.000Z | jasypt4py/generator.py | fareliner/jasypt4py | 6ea7cdbb4ee1e3249cc9dcadfa3c54e603614458 | [
"Apache-2.0"
] | 3 | 2018-07-31T08:56:56.000Z | 2022-03-04T01:03:03.000Z | jasypt4py/generator.py | fareliner/jasypt4py | 6ea7cdbb4ee1e3249cc9dcadfa3c54e603614458 | [
"Apache-2.0"
] | 4 | 2018-07-31T08:04:01.000Z | 2021-07-07T01:55:34.000Z | # Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
from abc import ABCMeta, abstractmethod
from Crypto import Random
from jasypt4py.exceptions import ArgumentError
class PBEParameterGenerator(object):
__metaclass__ = ABCMeta
@staticmethod
def adjust(a, a_off, b):
"""
Adjusts the byte array as per PKCS12 spec
:param a: byte[] - the target array
:param a_off: int - offset to operate on
:param b: byte[] - the bitsy array to pick from
:return: nothing as operating on array by reference
"""
x = (b[len(b) - 1] & 0xff) + (a[a_off + len(b) - 1] & 0xff) + 1
a[a_off + len(b) - 1] = x & 0xff
x = x >> 8
for i in range(len(b) - 2, -1, -1):
x = x + (b[i] & 0xff) + (a[a_off + i] & 0xff)
a[a_off + i] = x & 0xff
x = x >> 8
@staticmethod
def pkcs12_password_to_bytes(password):
"""
Converts a password string to a PKCS12 v1.0 compliant byte array.
        :param password: str - the password as a plain string
:return: The unsigned byte array holding the password
"""
pkcs12_pwd = [0x00] * (len(password) + 1) * 2
for i in range(0, len(password)):
digit = ord(password[i])
pkcs12_pwd[i * 2] = digit >> 8
            pkcs12_pwd[i * 2 + 1] = digit & 0xff
return bytearray(pkcs12_pwd)
class PKCS12ParameterGenerator(PBEParameterGenerator):
"""
Equivalent of the Bouncycastle PKCS12ParameterGenerator.
"""
__metaclass__ = ABCMeta
KEY_SIZE_256 = 256
KEY_SIZE_128 = 128
DEFAULT_IV_SIZE = 128
KEY_MATERIAL = 1
IV_MATERIAL = 2
MAC_MATERIAL = 3
def __init__(self, digest_factory, key_size_bits=KEY_SIZE_256, iv_size_bits=DEFAULT_IV_SIZE):
"""
        :param digest_factory: object - the digest algorithm to use (e.g. SHA256 or MD5)
:param key_size_bits: int - key size in bits
:param iv_size_bits: int - iv size in bits
"""
super(PKCS12ParameterGenerator, self).__init__()
self.digest_factory = digest_factory
self.key_size_bits = key_size_bits
self.iv_size_bits = iv_size_bits
def generate_derived_parameters(self, password, salt, iterations=1000):
"""
Generates the key and iv that can be used with the cipher.
:param password: str - the password used for the key material
:param salt: byte[] - random salt
        :param iterations: int - number of hash iterations for the key material
:return: key and iv that can be used to setup the cipher
"""
key_size = (self.key_size_bits // 8)
iv_size = (self.iv_size_bits // 8)
# pkcs12 padded password (unicode byte array with 2 trailing 0x0 bytes)
password_bytes = PKCS12ParameterGenerator.pkcs12_password_to_bytes(password)
d_key = self.generate_derived_key(password_bytes, salt, iterations, self.KEY_MATERIAL, key_size)
if iv_size and iv_size > 0:
d_iv = self.generate_derived_key(password_bytes, salt, iterations, self.IV_MATERIAL, iv_size)
else:
d_iv = None
return d_key, d_iv
def generate_derived_key(self, password, salt, iterations, id_byte, key_size):
"""
Generate a derived key as per PKCS12 v1.0 spec
:param password: bytearray - pkcs12 padded password (unicode byte array with 2 trailing 0x0 bytes)
:param salt: bytearray - random salt
        :param iterations: int - number of hash iterations for the key material
:param id_byte: int - the material padding
:param key_size: int - the key size in bytes (e.g. AES is 256/8 = 32, IV is 128/8 = 16)
:return: the sha256 digested pkcs12 key
"""
u = int(self.digest_factory.digest_size)
v = int(self.digest_factory.block_size)
d_key = bytearray(key_size)
# Step 1
D = bytearray(v)
for i in range(0, v):
D[i] = id_byte
# Step 2
if salt and len(salt) != 0:
salt_size = len(salt)
s_size = v * ((salt_size + v - 1) // v)
S = bytearray(s_size)
for i in range(s_size):
S[i] = salt[i % salt_size]
else:
S = bytearray(0)
# Step 3
if password and len(password) != 0:
password_size = len(password)
p_size = v * ((password_size + v - 1) // v)
P = bytearray(p_size)
for i in range(p_size):
P[i] = password[i % password_size]
else:
P = bytearray(0)
# Step 4
I = S + P
B = bytearray(v)
# Step 5
c = ((key_size + u - 1) // u)
# Step 6
for i in range(1, c + 1):
# Step 6 - a
digest = self.digest_factory.new()
digest.update(bytes(D))
digest.update(bytes(I))
A = digest.digest() # bouncycastle now resets the digest, we will create a new digest
for j in range(1, iterations):
A = self.digest_factory.new(A).digest()
# Step 6 - b
for k in range(0, v):
B[k] = A[k % u]
# Step 6 - c
for j in range(0, (len(I) // v)):
self.adjust(I, j * v, B)
if i == c:
for j in range(0, key_size - ((i - 1) * u)):
d_key[(i - 1) * u + j] = A[j]
else:
for j in range(0, u):
d_key[(i - 1) * u + j] = A[j]
# we string encode as Crypto functions need strings
return bytes(d_key)
class SaltGenerator(object):
"""
Base for a salt generator
"""
__metaclass__ = ABCMeta
DEFAULT_SALT_SIZE_BYTE = 16
def __init__(self, salt_block_size=DEFAULT_SALT_SIZE_BYTE):
self.salt_block_size = salt_block_size
@abstractmethod
def generate_salt(self):
pass
class RandomSaltGenerator(SaltGenerator):
"""
A basic random salt generator
"""
__metaclass__ = ABCMeta
def __init__(self, salt_block_size=SaltGenerator.DEFAULT_SALT_SIZE_BYTE, **kwargs):
"""
:param salt_block_size: the salt block size in bytes
"""
super(RandomSaltGenerator, self).__init__(salt_block_size)
def generate_salt(self):
return bytearray(Random.get_random_bytes(self.salt_block_size))
class FixedSaltGenerator(SaltGenerator):
"""
A fixed string salt generator
"""
__metaclass__ = ABCMeta
def __init__(self, salt_block_size=SaltGenerator.DEFAULT_SALT_SIZE_BYTE, salt=None, **kwargs):
"""
:param salt_block_size: the salt block size in bytes
"""
super(FixedSaltGenerator, self).__init__(salt_block_size)
if not salt:
raise ArgumentError('salt not provided')
# ensure supplied type matches
if isinstance(salt, str):
self.salt = bytearray(salt, 'utf-8')
elif isinstance(salt, bytearray):
self.salt = salt
else:
raise TypeError('salt must either be a string or bytearray but not %s' % type(salt))
def generate_salt(self):
return self.salt
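# Illustrative usage sketch (added; assumes PyCrypto's SHA256 module as the
# digest factory and AES-CBC as the cipher; both are assumptions, not part of
# this file):
#
# from Crypto.Hash import SHA256
# from Crypto.Cipher import AES
#
# gen = PKCS12ParameterGenerator(SHA256)          # 256-bit key, 128-bit IV
# salt = RandomSaltGenerator().generate_salt()
# key, iv = gen.generate_derived_parameters('secret', salt, iterations=1000)
# cipher = AES.new(key, AES.MODE_CBC, iv)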
| 30.377593 | 106 | 0.584483 | 970 | 7,321 | 4.207216 | 0.18866 | 0.02916 | 0.038226 | 0.016173 | 0.256065 | 0.190395 | 0.168096 | 0.157804 | 0.157804 | 0.12644 | 0 | 0.02943 | 0.32236 | 7,321 | 240 | 107 | 30.504167 | 0.793187 | 0.250785 | 0 | 0.161017 | 0 | 0 | 0.014501 | 0 | 0 | 0 | 0.005487 | 0 | 0 | 1 | 0.09322 | false | 0.118644 | 0.033898 | 0.016949 | 0.313559 | 0.008475 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b4a04bd98861d4add44fe78a70e2497100399370 | 1,176 | py | Python | Ago-Dic-2020/Ejemplos/clase-2.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 41 | 2017-09-26T09:36:32.000Z | 2022-03-19T18:05:25.000Z | Ago-Dic-2020/Ejemplos/clase-2.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 67 | 2017-09-11T05:06:12.000Z | 2022-02-14T04:44:04.000Z | Ago-Dic-2020/Ejemplos/clase-2.py | bryanbalderas/DAS_Sistemas | 1e31f088c0de7134471025a5730b0abfc19d936e | [
"MIT"
] | 210 | 2017-09-01T00:10:08.000Z | 2022-03-19T18:05:12.000Z | # Importando librerías
from numpy import array
# Listas y arreglos
a = array(['h', 101, 'l', 'l', 'o'])
x = ['h', 101, 'l', 'l', 'o']
print(a)
print(x)
print("Tamaño: ", len(x))
# Condicionales
if isinstance(x[1], int):
x[1] = chr(x[1])
elif isinstance(x[1], str):
pass
else:
raise TypeError("Tipo no soportado!. No te pases! >:c")
print(' uwu '.join(x))
# Ciclos
for item in x:
print(item)
for i in range(len(x)):
print(x[i])
for i in range(1, 10, 2):
print(i)
while len(x):
print(x.pop(0))
while len(x):
print(x.pop(0))
else:
print('F para x :C')
# Operaciones con listas
x.append('H')
x.append('o')
x.append('l')
x.append('a')
x.insert(1, 'o')
# Entrada de datos
print(x)
respuesta = input("Hola?")
print(respuesta)
# Operadores aritméticos y booleanos
print(x)
print(10.1)
print(1 + 2 - 4 * 5 / 8 % 2)
print(2 ** 5)
print(True and True)
print(False and True)
print(False or True)
print(not False)
# Listas comprimidas
print([i for i in range(1, 11) if i % 2 == 0])
print([j for j in range(2, 101) if all(j % i != 0 for i in range(2, j))])
print([j for j in range(2, 101) if not(j % 2 or j % 3 or j % 5)]) | 17.294118 | 73 | 0.593537 | 220 | 1,176 | 3.172727 | 0.35 | 0.051576 | 0.034384 | 0.063037 | 0.17765 | 0.157593 | 0.120344 | 0.065903 | 0.065903 | 0 | 0 | 0.049784 | 0.214286 | 1,176 | 68 | 74 | 17.294118 | 0.705628 | 0.130102 | 0 | 0.204545 | 0 | 0 | 0.076847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.022727 | 0.022727 | 0 | 0.022727 | 0.522727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b4a0c7230303a692bcc4f7d71deebfb17b27263a | 3,751 | py | Python | constants.py | lanthony42/Snek | e463b58eeba32bd26279a57fd3a523f4fb773da7 | [
"MIT"
] | null | null | null | constants.py | lanthony42/Snek | e463b58eeba32bd26279a57fd3a523f4fb773da7 | [
"MIT"
] | null | null | null | constants.py | lanthony42/Snek | e463b58eeba32bd26279a57fd3a523f4fb773da7 | [
"MIT"
] | null | null | null | import math
SCREEN_WIDTH = 1400
SCREEN_HEIGHT = 800
TEXT = (5, 5)
FPS = 60
BASE_SIZE = 10
EYE_SIZE = 4
PUPIL_SIZE = EYE_SIZE - 2
BASE_SPEED = 2
MIN_DISTANCE = 1
MAX_DISTANCE = MIN_DISTANCE + 3
SIZE_INC = 15
EYE_INC = SIZE_INC * 4
PUPIL_INC = SIZE_INC * 8
GROWTH_INC = SIZE_INC * 20
BOOST_MIN = 10
BOOST_FACTOR = 2
BOOST_DCR = 5
ENEMIES = 5
AI_RADIUS = 150
BOOST_RADIUS = 100
FOOD_RADIUS = 5
DEAD_FOOD_RADIUS = FOOD_RADIUS + 1
FOOD_INIT = 150
FOOD_DEATH = 4
FOOD_COLOUR = (240, 40, 40)
BLACK = (0, 0, 0)
BLUE = (0, 0, 255)
FADED = (60, 60, 160)
GREEN = (30, 180, 30)
RED = (240, 0, 0)
PURPLE = (160, 30, 160)
YELLOW = (215, 215, 70)
TAN = (215, 125, 70)
WHITE = (220, 220, 220)
ENEMY_COLOURS = [RED, GREEN, PURPLE, YELLOW, TAN]
BOOST_OFFSET = 40
PING_PONG = 100
class Vector:
def __init__(self, x=0.0, y=0.0):
self.x = x
self.y = y
@staticmethod
def t(vector):
return Vector(vector[0], vector[1])
def tuple(self):
return round(self.x), round(self.y)
def copy(self):
return Vector(self.x, self.y)
def __add__(self, other):
return Vector(self.x + other.x, self.y + other.y)
def __iadd__(self, other):
self.x += other.x
self.y += other.y
return self
def __sub__(self, other):
return Vector(self.x - other.x, self.y - other.y)
def __isub__(self, other):
self.x -= other.x
self.y -= other.y
return self
def __mul__(self, other: float):
return Vector(self.x * other, self.y * other)
def __imul__(self, other: float):
self.x *= other
self.y *= other
return self
def __truediv__(self, other: float):
return Vector(self.x / other, self.y / other)
def __itruediv__(self, other: float):
self.x /= other
self.y /= other
return self
def __eq__(self, other):
return self.x == other.x and self.y == other.y
def __neg__(self):
return Vector(-self.x, -self.y)
def __str__(self):
return f'({self.x}, {self.y})'
__repr__ = __str__
def mag_squared(self):
return self.x ** 2 + self.y ** 2
def mag(self):
return math.sqrt(self.x ** 2 + self.y ** 2)
def normalized(self):
mag = self.mag()
if mag > 0:
return Vector(self.x / mag, self.y / mag)
else:
return Vector()
def normalize(self):
mag = self.mag()
if mag > 0:
self.x /= mag
self.y /= mag
else:
return self
def perpendicular(self, first=True):
return Vector(-self.y if first else self.y, self.x if first else -self.x).normalized()
def lerp(self, target, distance, gap=0):
direction = target - self
mag = direction.mag()
if gap > 0:
mag -= gap
direction.normalize()
direction *= mag
if mag <= 0:
return 0, Vector()
elif mag < distance:
self.x += direction.x
self.y += direction.y
return mag, direction
else:
direction *= distance / mag
self.x += direction.x
self.y += direction.y
return distance, direction
class Circle:
def __init__(self, x=0.0, y=0.0, radius=1, position: Vector = None, colour=FOOD_COLOUR):
if position is not None:
self.position = position
else:
self.position = Vector(x, y)
self.radius = radius
self.colour = colour
def __str__(self):
return f'Circle(position={self.position}, radius={self.radius}, colour={self.colour})'
__repr__ = __str__
START = Vector(BASE_SIZE * 2, SCREEN_HEIGHT // 2)
| 22.327381 | 94 | 0.564916 | 531 | 3,751 | 3.787194 | 0.218456 | 0.059672 | 0.029836 | 0.059175 | 0.36002 | 0.3272 | 0.3272 | 0.292392 | 0.237693 | 0.184983 | 0 | 0.052857 | 0.31405 | 3,751 | 167 | 95 | 22.461078 | 0.728721 | 0 | 0 | 0.162791 | 0 | 0 | 0.025593 | 0.019728 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178295 | false | 0 | 0.007752 | 0.108527 | 0.403101 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
b4a0edc51d50de306ea838cb4bc96583019d8526 | 831 | py | Python | app/routers/auth.py | nicolunardi/travela-server | 79537ed428c01bac90d078216c7513411b7695ad | [
"CNRI-Python"
] | null | null | null | app/routers/auth.py | nicolunardi/travela-server | 79537ed428c01bac90d078216c7513411b7695ad | [
"CNRI-Python"
] | null | null | null | app/routers/auth.py | nicolunardi/travela-server | 79537ed428c01bac90d078216c7513411b7695ad | [
"CNRI-Python"
] | null | null | null | from fastapi import APIRouter, Depends, status
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.orm import Session
from app.controllers.authControllers import login_user, register_user
from app.schemas.users import UserCreate
from app.schemas.tokens import Token
from app.config.database import get_db
router = APIRouter()
@router.post(
"/register",
status_code=status.HTTP_201_CREATED,
response_model=Token,
tags=["User"],
)
async def register(user: UserCreate, db: Session = Depends(get_db)):
return register_user(db, user)
@router.post(
"/login",
status_code=status.HTTP_200_OK,
response_model=Token,
tags=["User"],
)
async def login(
form_data: OAuth2PasswordRequestForm = Depends(),
db: Session = Depends(get_db),
):
return login_user(form_data, db)
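# Added note (hypothetical example; host and credentials are assumptions):
# OAuth2PasswordRequestForm reads form-encoded "username" and "password" fields,
# so a login request looks like:
#
#   curl -X POST http://localhost:8000/login \
#        -d "username=alice@example.com" -d "password=secret"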
| 24.441176 | 69 | 0.749699 | 107 | 831 | 5.663551 | 0.392523 | 0.046205 | 0.046205 | 0.066007 | 0.20132 | 0.20132 | 0.112211 | 0 | 0 | 0 | 0 | 0.011348 | 0.151625 | 831 | 33 | 70 | 25.181818 | 0.848227 | 0 | 0 | 0.222222 | 0 | 0 | 0.027678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.074074 | 0.259259 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b4a6ed73413c10935d4c8fa52b9e4361216bd892 | 850 | py | Python | beatsaver/entity/MapTestplay.py | jundoll/bs-api-py | 1e12e1d68d6cbc4c8e25c0da961396854391be5b | [
"MIT"
] | null | null | null | beatsaver/entity/MapTestplay.py | jundoll/bs-api-py | 1e12e1d68d6cbc4c8e25c0da961396854391be5b | [
"MIT"
] | null | null | null | beatsaver/entity/MapTestplay.py | jundoll/bs-api-py | 1e12e1d68d6cbc4c8e25c0da961396854391be5b | [
"MIT"
] | null | null | null | # load modules
from dataclasses import dataclass
from typing import Union
from beatsaver.entity import UserDetail
# definition class
@dataclass(frozen=True)
class MapTestplay:
createdAt: str
feedback: str
feedbackAt: str
user: Union[UserDetail.UserDetail, None]
video: str
# definition function
def gen(response):
if response is not None:
instance = MapTestplay(
createdAt=response.get('createdAt'),
feedback=response.get('feedback'),
feedbackAt=response.get('feedbackAt'),
user=UserDetail.gen(response.get('user')),
video=response.get('video')
)
return instance
def gen_list(response):
if response is not None:
if len(response) == 0:
return []
else:
return [gen(v) for v in response]
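# Hypothetical usage sketch (added; the response variable is an assumption):
#
# testplays = gen_list(map_detail_response.get('testplays'))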
| 21.25 | 54 | 0.628235 | 93 | 850 | 5.731183 | 0.430108 | 0.103189 | 0.067542 | 0.075047 | 0.101313 | 0.101313 | 0 | 0 | 0 | 0 | 0 | 0.001626 | 0.276471 | 850 | 39 | 55 | 21.794872 | 0.865041 | 0.057647 | 0 | 0.076923 | 0 | 0 | 0.045169 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b4c0a42efb0c3bbdc0fcf9baf9ae460765b29cd0 | 999 | py | Python | setup.py | db48x/flask-digest | 6a3138aef4baa1c1a129eb655c2644bf61387af1 | [
"MIT"
] | 8 | 2015-07-18T10:34:38.000Z | 2019-11-04T01:50:15.000Z | setup.py | db48x/flask-digest | 6a3138aef4baa1c1a129eb655c2644bf61387af1 | [
"MIT"
] | 1 | 2019-07-22T14:08:12.000Z | 2020-05-10T16:36:36.000Z | setup.py | db48x/flask-digest | 6a3138aef4baa1c1a129eb655c2644bf61387af1 | [
"MIT"
] | 3 | 2016-05-02T19:04:34.000Z | 2021-07-01T10:58:31.000Z | from setuptools import setup, find_packages

setup(
    name = 'Flask-Digest',
    version = '0.2.1',
    author = 'Victor Andrade de Almeida',
    author_email = 'vct.a.almeida@gmail.com',
    url = 'https://github.com/vctandrade/flask-digest',
    description = 'A RESTful authentication service for Flask applications',
    long_description = open('README.rst').read(),
    license = 'MIT',
    platforms = ['Platform Independent'],
    install_requires = ['Flask >= 0.10.1'],
    packages = find_packages(),
    keywords = ['digest', 'authentication', 'flask'],
    classifiers = [
        'Development Status :: 3 - Alpha',
        'Environment :: Web Environment',
        'Framework :: Flask',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Natural Language :: English',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: Implementation'
    ]
)
| 31.21875 | 76 | 0.618619 | 99 | 999 | 6.191919 | 0.707071 | 0.039152 | 0.081566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.239239 | 999 | 31 | 77 | 32.225806 | 0.793421 | 0 | 0 | 0 | 0 | 0 | 0.52953 | 0.023023 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4c286e145a31477357b61710d97854704934bc6 | 7,816 | py | Python | Testing/Python/TestLinearOrthotropicMaterial.py | Numerics88/vtkbone | 5a6ab2870679e9e7ea51926c34911607b9d85235 | [
"MIT"
] | 3 | 2017-04-04T04:59:22.000Z | 2022-03-13T11:22:40.000Z | Testing/Python/TestLinearOrthotropicMaterial.py | Numerics88/vtkbone | 5a6ab2870679e9e7ea51926c34911607b9d85235 | [
"MIT"
] | 5 | 2017-04-06T19:46:39.000Z | 2019-12-11T23:41:41.000Z | Testing/Python/TestLinearOrthotropicMaterial.py | Numerics88/vtkbone | 5a6ab2870679e9e7ea51926c34911607b9d85235 | [
"MIT"
] | 2 | 2017-04-29T20:54:57.000Z | 2017-04-29T22:28:10.000Z | from __future__ import division

import sys
import numpy
from numpy.core import *
import vtk
from vtk.util.numpy_support import vtk_to_numpy, numpy_to_vtk
import vtkbone
import traceback
import unittest


class TestLinearOrthotropicMaterial (unittest.TestCase):

    def test_isotropic (self):
        material = vtkbone.vtkboneLinearOrthotropicMaterial()
        material.SetYoungsModulusX(1234.5)
        material.SetYoungsModulusY(1234.5)
        material.SetYoungsModulusZ(1234.5)
        material.SetPoissonsRatioYZ(0.246)
        material.SetPoissonsRatioZX(0.246)
        material.SetPoissonsRatioXY(0.246)
        G = 1234.5/(2*(1+0.246))
        material.SetShearModulusYZ(G)
        material.SetShearModulusZX(G)
        material.SetShearModulusXY(G)
        self.assertEqual (material.GetYoungsModulusX(), 1234.5)
        self.assertEqual (material.GetYoungsModulusY(), 1234.5)
        self.assertEqual (material.GetYoungsModulusZ(), 1234.5)
        self.assertEqual (material.GetPoissonsRatioYZ(), 0.246)
        self.assertEqual (material.GetPoissonsRatioZY(), 0.246)
        self.assertEqual (material.GetPoissonsRatioZX(), 0.246)
        self.assertEqual (material.GetPoissonsRatioXZ(), 0.246)
        self.assertEqual (material.GetPoissonsRatioXY(), 0.246)
        self.assertEqual (material.GetPoissonsRatioYX(), 0.246)
        self.assertEqual (material.GetShearModulusYZ(), G)
        self.assertEqual (material.GetShearModulusZY(), G)
        self.assertEqual (material.GetShearModulusZX(), G)
        self.assertEqual (material.GetShearModulusXZ(), G)
        self.assertEqual (material.GetShearModulusXY(), G)
        self.assertEqual (material.GetShearModulusYX(), G)

    def test_orthotropic (self):
        material = vtkbone.vtkboneLinearOrthotropicMaterial()
        material.SetYoungsModulusX(1000)
        material.SetYoungsModulusY(1100)
        material.SetYoungsModulusZ(1200)
        material.SetPoissonsRatioYZ(0.25)
        material.SetPoissonsRatioZX(0.3)
        material.SetPoissonsRatioXY(0.2)
        # These values are not necessarily consistent
        GYZ = 1000/(2*(1+0.25))
        GZX = 1100/(2*(1+0.3))
        GXY = 1200/(2*(1+0.2))
        material.SetShearModulusYZ(GYZ)
        material.SetShearModulusZX(GZX)
        material.SetShearModulusXY(GXY)
        self.assertEqual (material.GetYoungsModulusX(), 1000)
        self.assertEqual (material.GetYoungsModulusY(), 1100)
        self.assertEqual (material.GetYoungsModulusZ(), 1200)
        self.assertEqual (material.GetPoissonsRatioYZ(), 0.25)
        self.assertEqual (material.GetPoissonsRatioZX(), 0.3)
        self.assertEqual (material.GetPoissonsRatioXY(), 0.2)
        self.assertAlmostEqual (material.GetPoissonsRatioYZ() / material.GetYoungsModulusY(), material.GetPoissonsRatioZY() / material.GetYoungsModulusZ(), delta=1E-8)
        self.assertAlmostEqual (material.GetPoissonsRatioZX() / material.GetYoungsModulusZ(), material.GetPoissonsRatioXZ() / material.GetYoungsModulusX(), delta=1E-8)
        self.assertAlmostEqual (material.GetPoissonsRatioXY() / material.GetYoungsModulusX(), material.GetPoissonsRatioYX() / material.GetYoungsModulusY(), delta=1E-8)
        self.assertEqual (material.GetShearModulusYZ(), GYZ)
        self.assertEqual (material.GetShearModulusZY(), GYZ)
        self.assertEqual (material.GetShearModulusZX(), GZX)
        self.assertEqual (material.GetShearModulusXZ(), GZX)
        self.assertEqual (material.GetShearModulusXY(), GXY)
        self.assertEqual (material.GetShearModulusYX(), GXY)

    def test_copy (self):
        material = vtkbone.vtkboneLinearOrthotropicMaterial()
        material.SetYoungsModulusX(1000)
        material.SetYoungsModulusY(1100)
        material.SetYoungsModulusZ(1200)
        material.SetPoissonsRatioYZ(0.25)
        material.SetPoissonsRatioZX(0.3)
        material.SetPoissonsRatioXY(0.2)
        # These values are not necessarily consistent
        GYZ = 1000/(2*(1+0.25))
        GZX = 1100/(2*(1+0.3))
        GXY = 1200/(2*(1+0.2))
        material.SetShearModulusYZ(GYZ)
        material.SetShearModulusZX(GZX)
        material.SetShearModulusXY(GXY)
        scaled_material = material.Copy()
        self.assertEqual (scaled_material.GetYoungsModulusX(), 1000)
        self.assertEqual (scaled_material.GetYoungsModulusY(), 1100)
        self.assertEqual (scaled_material.GetYoungsModulusZ(), 1200)
        self.assertEqual (scaled_material.GetPoissonsRatioYZ(), 0.25)
        self.assertEqual (scaled_material.GetPoissonsRatioZX(), 0.3)
        self.assertEqual (scaled_material.GetPoissonsRatioXY(), 0.2)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioYZ() / scaled_material.GetYoungsModulusY(), scaled_material.GetPoissonsRatioZY() / scaled_material.GetYoungsModulusZ(), delta=1E-8)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioZX() / scaled_material.GetYoungsModulusZ(), scaled_material.GetPoissonsRatioXZ() / scaled_material.GetYoungsModulusX(), delta=1E-8)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioXY() / scaled_material.GetYoungsModulusX(), scaled_material.GetPoissonsRatioYX() / scaled_material.GetYoungsModulusY(), delta=1E-8)
        self.assertEqual (scaled_material.GetShearModulusYZ(), GYZ)
        self.assertEqual (scaled_material.GetShearModulusZY(), GYZ)
        self.assertEqual (scaled_material.GetShearModulusZX(), GZX)
        self.assertEqual (scaled_material.GetShearModulusXZ(), GZX)
        self.assertEqual (scaled_material.GetShearModulusXY(), GXY)
        self.assertEqual (scaled_material.GetShearModulusYX(), GXY)

    def test_scaled_copy (self):
        material = vtkbone.vtkboneLinearOrthotropicMaterial()
        material.SetYoungsModulusX(1000)
        material.SetYoungsModulusY(1100)
        material.SetYoungsModulusZ(1200)
        material.SetPoissonsRatioYZ(0.25)
        material.SetPoissonsRatioZX(0.3)
        material.SetPoissonsRatioXY(0.2)
        # These values are not necessarily consistent
        GYZ = 1000/(2*(1+0.25))
        GZX = 1100/(2*(1+0.3))
        GXY = 1200/(2*(1+0.2))
        material.SetShearModulusYZ(GYZ)
        material.SetShearModulusZX(GZX)
        material.SetShearModulusXY(GXY)
        scaled_material = material.ScaledCopy(0.5)
        self.assertEqual (scaled_material.GetYoungsModulusX(), 0.5*1000)
        self.assertEqual (scaled_material.GetYoungsModulusY(), 0.5*1100)
        self.assertEqual (scaled_material.GetYoungsModulusZ(), 0.5*1200)
        self.assertEqual (scaled_material.GetPoissonsRatioYZ(), 0.25)
        self.assertEqual (scaled_material.GetPoissonsRatioZX(), 0.3)
        self.assertEqual (scaled_material.GetPoissonsRatioXY(), 0.2)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioYZ() / scaled_material.GetYoungsModulusY(), scaled_material.GetPoissonsRatioZY() / scaled_material.GetYoungsModulusZ(), delta=1E-8)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioZX() / scaled_material.GetYoungsModulusZ(), scaled_material.GetPoissonsRatioXZ() / scaled_material.GetYoungsModulusX(), delta=1E-8)
        self.assertAlmostEqual (scaled_material.GetPoissonsRatioXY() / scaled_material.GetYoungsModulusX(), scaled_material.GetPoissonsRatioYX() / scaled_material.GetYoungsModulusY(), delta=1E-8)
        self.assertEqual (scaled_material.GetShearModulusYZ(), 0.5*GYZ)
        self.assertEqual (scaled_material.GetShearModulusZY(), 0.5*GYZ)
        self.assertEqual (scaled_material.GetShearModulusZX(), 0.5*GZX)
        self.assertEqual (scaled_material.GetShearModulusXZ(), 0.5*GZX)
        self.assertEqual (scaled_material.GetShearModulusXY(), 0.5*GXY)
        self.assertEqual (scaled_material.GetShearModulusYX(), 0.5*GXY)


if __name__ == '__main__':
    unittest.main()
| 51.084967 | 195 | 0.714304 | 738 | 7,816 | 7.46748 | 0.108401 | 0.138813 | 0.112684 | 0.126293 | 0.794956 | 0.652695 | 0.509345 | 0.45636 | 0.45636 | 0.45636 | 0 | 0.047234 | 0.176561 | 7,816 | 152 | 196 | 51.421053 | 0.809043 | 0.01676 | 0 | 0.40625 | 0 | 0 | 0.001042 | 0 | 0 | 0 | 0 | 0 | 0.46875 | 1 | 0.03125 | false | 0 | 0.070313 | 0 | 0.109375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4c33dadc7de5b116b0004a7e5d4f5eac3a9ab0a | 2,297 | py | Python | fashion/warehouse/fashion.core/xform/generateJinja2.py | braddillman/fashion | 2588f3712a72e81f3cb7733e40b6c3751aa5ece2 | [
"Apache-2.0"
] | 1 | 2021-05-23T09:01:39.000Z | 2021-05-23T09:01:39.000Z | fashion/warehouse/fashion.core/xform/generateJinja2.py | braddillman/fashion | 2588f3712a72e81f3cb7733e40b6c3751aa5ece2 | [
"Apache-2.0"
] | null | null | null | fashion/warehouse/fashion.core/xform/generateJinja2.py | braddillman/fashion | 2588f3712a72e81f3cb7733e40b6c3751aa5ece2 | [
"Apache-2.0"
] | null | null | null | '''
Created on 2018-12-21

Copyright (c) 2018 Bradford Dillman

Generate code from a model and a jinja2 template.
'''

import logging

from pathlib import Path

from jinja2 import FileSystemLoader, Environment
from jinja2.exceptions import TemplateNotFound
from munch import munchify

from fashion.mirror import Mirror


# Module level code is executed when this file is loaded.
# cwd is where segment file was loaded.
def init(config, codeRegistry, verbose=False, tags=None):
    '''cwd is where segment file was loaded.'''
    codeRegistry.addXformObject(Generate(config))


class Generate(object):
    '''Generate output by merging a model into a template to produce a file.'''

    def __init__(self, config):
        '''Constructor.'''
        self.version = "1.0.0"
        self.templatePath = []
        self.name = config.moduleName
        self.tags = config.tags
        self.inputKinds = ["fashion.core.generate.jinja2.spec",
                           "fashion.core.mirror"]
        self.outputKinds = ['fashion.core.output.file']

    def execute(self, codeRegistry, verbose=False, tags=None):
        '''cwd is project root directory.'''
        # set up mirrored directories
        mdb = codeRegistry.getService('fashion.prime.modelAccess')
        mirCfg = munchify(mdb.getSingleton("fashion.core.mirror"))
        mirror = Mirror(Path(mirCfg.projectPath), Path(mirCfg.mirrorPath), force=mirCfg.force)
        genSpecs = mdb.getByKind(self.inputKinds[0])
        for genSpec in genSpecs:
            gs = munchify(genSpec)
            if mirror.isChanged(Path(gs.targetFile)):
                logging.warning("Skipping {0}, file has changed.".format(gs.targetFile))
            else:
                try:
                    env = Environment(loader=FileSystemLoader(gs.templatePath))
                    template = env.get_template(gs.template)
                    result = template.render(gs.model)
                    targetPath = Path(gs.targetFile)
                    with targetPath.open(mode="w") as tf:
                        tf.write(result)
                    mirror.copyToMirror(targetPath)
                    mdb.outputFile(targetPath)
                except TemplateNotFound:
                    logging.error("TemplateNotFound: {0}".format(gs.template))
| 33.779412 | 94 | 0.62734 | 250 | 2,297 | 5.744 | 0.48 | 0.030641 | 0.013928 | 0.023677 | 0.089833 | 0.089833 | 0.089833 | 0 | 0 | 0 | 0 | 0.013174 | 0.272965 | 2,297 | 67 | 95 | 34.283582 | 0.846707 | 0.16761 | 0 | 0 | 1 | 0 | 0.09458 | 0.043571 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.157895 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4c4c46e92d0cc32c81af417d80cbffbc1577999 | 488 | py | Python | Grokking-Algorithms/Selection_Sort.py | AzuLiu/Algorithms-by-Python | 4c907725e3c55222642990827ca0aba302ab2a8c | [
"MIT"
] | 1 | 2018-03-17T19:51:46.000Z | 2018-03-17T19:51:46.000Z | Grokking-Algorithms/Selection_Sort.py | AzuLiu/Algorithms-by-Python | 4c907725e3c55222642990827ca0aba302ab2a8c | [
"MIT"
] | null | null | null | Grokking-Algorithms/Selection_Sort.py | AzuLiu/Algorithms-by-Python | 4c907725e3c55222642990827ca0aba302ab2a8c | [
"MIT"
] | null | null | null | def findSmallest(arr):
    smallest = arr[0]
    smallest_index = 0
    for i in range(1, len(arr)):
        if arr[i] < smallest:
            smallest = arr[i]
            smallest_index = i
    return smallest_index


def selection_sort(arr):
    newarr = []
    for i in range(len(arr)):
        smallest_index = findSmallest(arr)
        newarr.append(arr.pop(smallest_index))
    return newarr


test_arr = [5, 3, 6, 1, 0, 0, 2, 10]
print(selection_sort(test_arr))
| 24.4 | 47 | 0.581967 | 67 | 488 | 4.104478 | 0.38806 | 0.236364 | 0.043636 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035398 | 0.305328 | 488 | 19 | 48 | 25.684211 | 0.775811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
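Selection sort as written above is O(n^2) and consumes its input through `arr.pop`, which is easy to miss. A quick behavioral sketch (the values are just an example):

data = [29, 10, 14, 37, 13]
result = selection_sort(data)
print(result)  # [10, 13, 14, 29, 37]
print(data)    # [] -- the input list is emptied by arr.pop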
b4cefef49f2369f3f8e23dfe6b70e513930b24a7 | 2,229 | py | Python | causal_world/intervention_actors/visual_actor.py | michaelfeil/CausalWorld | ff866159ef0ee9c407893ae204e93eb98dd68be2 | [
"MIT"
] | 2 | 2021-09-22T08:20:12.000Z | 2021-11-16T14:20:45.000Z | causal_world/intervention_actors/visual_actor.py | michaelfeil/CausalWorld | ff866159ef0ee9c407893ae204e93eb98dd68be2 | [
"MIT"
] | null | null | null | causal_world/intervention_actors/visual_actor.py | michaelfeil/CausalWorld | ff866159ef0ee9c407893ae204e93eb98dd68be2 | [
"MIT"
] | null | null | null | from causal_world.intervention_actors.base_actor import \
    BaseInterventionActorPolicy
import numpy as np


class VisualInterventionActorPolicy(BaseInterventionActorPolicy):

    def __init__(self, **kwargs):
        """
        This intervention actor intervenes on all visual components of the
        robot (i.e. colors).

        :param kwargs:
        """
        super(VisualInterventionActorPolicy, self).__init__()
        self.task_intervention_space = None

    def initialize(self, env):
        """
        This function allows the intervention actor to query things from the
        env, such as intervention spaces, or to have access to sampling funcs
        for goals, etc.

        :param env: (causal_world.env.CausalWorld) the environment used for
                    the intervention actor to query different methods from it.

        :return:
        """
        self.task_intervention_space = env.get_variable_space_used()
        return

    def _act(self, variables_dict):
        """

        :param variables_dict:
        :return:
        """
        interventions_dict = dict()
        for variable in self.task_intervention_space:
            if isinstance(self.task_intervention_space[variable], dict):
                if 'color' in self.task_intervention_space[variable]:
                    interventions_dict[variable] = dict()
                    interventions_dict[variable]['color'] = np.random.uniform(
                        self.task_intervention_space[variable]['color'][0],
                        self.task_intervention_space[variable]['color'][1])
            elif 'color' in variable:
                interventions_dict[variable] = np.random.uniform(
                    self.task_intervention_space[variable][0],
                    self.task_intervention_space[variable][1])
        return interventions_dict

    def get_params(self):
        """
        Returns parameters that could be used in recreating this intervention
        actor.

        :return: (dict) specifying parameters to create this intervention
                 actor again.
        """
        return {'visual_actor': dict()}
| 35.951613 | 87 | 0.599372 | 219 | 2,229 | 5.90411 | 0.388128 | 0.055684 | 0.139211 | 0.174014 | 0.249807 | 0.134571 | 0.074246 | 0.074246 | 0 | 0 | 0 | 0.002654 | 0.323912 | 2,229 | 61 | 88 | 36.540984 | 0.855342 | 0.30821 | 0 | 0 | 0 | 0 | 0.027206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4d2816f6506147c7e1180e32379768ecd8e932b | 945 | py | Python | tppm/auth.py | timtumturutumtum/TraktPlaybackProgressManager | 6b3b6f81a6de5c1b7f11d1b2ae34c1b1cc6a2b10 | [
"MIT"
] | 36 | 2017-08-06T13:47:21.000Z | 2022-02-19T03:33:07.000Z | tppm/auth.py | timtumturutumtum/TraktPlaybackProgressManager | 6b3b6f81a6de5c1b7f11d1b2ae34c1b1cc6a2b10 | [
"MIT"
] | 5 | 2018-07-20T13:01:35.000Z | 2021-12-12T21:03:05.000Z | tppm/auth.py | timtumturutumtum/TraktPlaybackProgressManager | 6b3b6f81a6de5c1b7f11d1b2ae34c1b1cc6a2b10 | [
"MIT"
] | 3 | 2018-11-20T13:16:37.000Z | 2021-10-13T01:57:55.000Z | # coding: utf-8
""" Trakt Playback Manager """
from __future__ import absolute_import
from __future__ import unicode_literals

import io
import json
import os.path


def save(path, data):
    with io.open(path, 'w', encoding='utf-8', newline='\n') as fh:
        # Must NOT use `json.dump` due to a Python 2 bug:
        # https://stackoverflow.com/a/14870531/7597273
        fh.write(json.dumps(
            data, sort_keys=True, ensure_ascii=False,
            indent=2, separators=(',', ': ')
        ))


def load(path):
    if not os.path.isfile(path):
        return None
    with io.open(path, 'r', encoding='utf-8') as fh:
        try:
            return json.load(fh)
        except ValueError:
            return None


def remove(path):
    if not os.path.isfile(path):
        return False
    try:
        os.remove(path)
    except OSError:
        return False
    return True


class NotAuthenticatedError(Exception):
    pass
| 21 | 66 | 0.607407 | 125 | 945 | 4.496 | 0.536 | 0.021352 | 0.05694 | 0.049822 | 0.11032 | 0.11032 | 0.11032 | 0.11032 | 0 | 0 | 0 | 0.029326 | 0.278307 | 945 | 44 | 67 | 21.477273 | 0.794721 | 0.138624 | 0 | 0.275862 | 0 | 0 | 0.021118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0.034483 | 0.172414 | 0 | 0.517241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
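A small round-trip sketch of the save/load/remove helpers above; the file name and token payload are arbitrary examples, not values from the project.

save('trakt_auth.json', {'access_token': 'abc', 'expires_in': 3600})
data = load('trakt_auth.json')       # -> dict, or None if missing/unparsable
removed = remove('trakt_auth.json')  # -> True on success, False otherwise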
b4da893de386e17a0bd776a5eb2220d68a53a7ab | 710 | py | Python | data-exporter/brix/settings.py | dzwiedziu-nkg/credo-api-tools | 37adce8c858d2997b90ce7a1397e68dd281b8249 | [
"MIT"
] | null | null | null | data-exporter/brix/settings.py | dzwiedziu-nkg/credo-api-tools | 37adce8c858d2997b90ce7a1397e68dd281b8249 | [
"MIT"
] | null | null | null | data-exporter/brix/settings.py | dzwiedziu-nkg/credo-api-tools | 37adce8c858d2997b90ce7a1397e68dd281b8249 | [
"MIT"
] | null | null | null | import csv

DIR = 'credo-data-export/detections'
CSV = 'credo-data-export/credocut.tsv'
PLOT = 'credo-data-export/credocut.plot'
JSON = 'credo-data-export/credocut.json'
DEVICES = 'credo-data-export/device_mapping.json'
PNG = 'credo-data-export/png'

CREDOCUT = 10069

DELIMITER = '\t'
QUOTECHAR = '"'
QUOTING = csv.QUOTE_MINIMAL

COLUMNS = [
    'id',
    'user_id',
    'device_id',
    'team_id',
    'width',
    'height',
    'x',
    'y',
    'latitude',
    'longitude',
    'altitude',
    'accuracy',
    'provider',
    'source',
    'time_received',
    'timestamp',
    'visible',
    'frame_content'
]

# Map each column name to its index in a TSV row.
TSV_COLUMNS = {}
for i in range(0, len(COLUMNS)):
    TSV_COLUMNS[COLUMNS[i]] = i

BLACKLIST = set()
| 17.317073 | 49 | 0.623944 | 86 | 710 | 5.046512 | 0.581395 | 0.124424 | 0.207373 | 0.158986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010582 | 0.201408 | 710 | 40 | 50 | 17.75 | 0.75485 | 0 | 0 | 0 | 0 | 0 | 0.433803 | 0.250704 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.028571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
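One plausible way these constants get used together when reading the exported TSV; the reader loop below is a sketch assuming the export file exists, not code from the repository.

import csv

# Sketch: read the export and pull one field per row via TSV_COLUMNS.
with open(CSV, newline='') as fh:
    reader = csv.reader(fh, delimiter=DELIMITER, quotechar=QUOTECHAR,
                        quoting=QUOTING)
    for row in reader:
        device_id = row[TSV_COLUMNS['device_id']]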
b4db3df141a7dfd438923171f46933d4cbc0dace | 13,456 | py | Python | {{cookiecutter.project_name}}/core/vertebral.py | rzavarce/cookiecutter-vertebral | 29a72b6bfb5c4ca76b1a36ee1e8ff9e0fedcb421 | [
"MIT"
] | null | null | null | {{cookiecutter.project_name}}/core/vertebral.py | rzavarce/cookiecutter-vertebral | 29a72b6bfb5c4ca76b1a36ee1e8ff9e0fedcb421 | [
"MIT"
] | null | null | null | {{cookiecutter.project_name}}/core/vertebral.py | rzavarce/cookiecutter-vertebral | 29a72b6bfb5c4ca76b1a36ee1e8ff9e0fedcb421 | [
"MIT"
] | null | null | null | import re
import json
import logging
import hmac
import base64
import hashlib

import jsonschema

from uuid import uuid4
from aiohttp import web
from pathlib import Path
from yaml import safe_load
from http import HTTPStatus
from datetime import datetime, timezone

from {{cookiecutter.project_name}}.routes import setup_routes, EXCLUDED_ROUTES
from aiohttp_swagger3 import SwaggerDocs, SwaggerUiSettings

from .models.auth import Auth
from .catalogs.response import CATALOG

METHODS_ALLOWED = ["post", "get"]

RESERVED = frozenset(
    (
        "args",
        "asctime",
        "created",
        "exc_info",
        "exc_text",
        "filename",
        "funcName",
        "id",
        "levelname",
        "levelno",
        "lineno",
        "module",
        "msecs",
        "message",
        "msg",
        "name",
        "pathname",
        "process",
        "processName",
        "relativeCreated",
        "stack_info",
        "thread",
        "threadName",
    )
)


class Vertebral:

    def __init__(self):
        self.config: dict = {}
        self.reserved: frozenset = RESERVED
        self.logger = logging
        self.exclude_routes: list = EXCLUDED_ROUTES
        self.methods_allowed: list = METHODS_ALLOWED
        self.catalog = CATALOG
        self.prefix: str = ""

    def load_config(self, config_path: Path) -> dict:
        """
        Load config file from a given path.
        -----------------
        Args:
            config_path (Path): Path to config YAML file.
        Returns:
            config (dict): Loaded config
        """
        try:
            with config_path.open() as config_file:
                self.config: dict = safe_load(config_file)
                self.logger.info('Config file has been loaded')
        except Exception:
            self.logger.error("Config file does not exist, please check config path",
                              extra={"config_path": config_path})
        self.set_logger_in_file()
        return self.config

    def set_swagger_config(self, app):
        """
        Swagger configuration parameters loader
        -----------------
        Args:
            app (web.app): Aiohttp web app.
        Returns:
            SwaggerDocs: SwaggerDocs configuration loaded
        """
        swagger_config = self.config['swagger']
        return SwaggerDocs(
            app,
            title=swagger_config["title"],
            version=swagger_config["version"],
            swagger_ui_settings=SwaggerUiSettings(
                path=swagger_config["path"],
                layout=swagger_config["layout"],
                deepLinking=swagger_config["deepLinking"],
                displayOperationId=swagger_config["displayOperationId"],
                defaultModelsExpandDepth=swagger_config[
                    "defaultModelsExpandDepth"],
                defaultModelExpandDepth=swagger_config[
                    "defaultModelExpandDepth"],
                defaultModelRendering=swagger_config["defaultModelRendering"],
                displayRequestDuration=swagger_config["displayRequestDuration"],
                docExpansion=swagger_config["docExpansion"],
                filter=swagger_config["filter"],
                showExtensions=swagger_config["showExtensions"],
                showCommonExtensions=swagger_config["showCommonExtensions"],
                supportedSubmitMethods=swagger_config["test"].split(","),
                validatorUrl=swagger_config["validatorUrl"],
                withCredentials=swagger_config["withCredentials"],
            ),
        )

    def load_routes(self, app):
        """
        Register existing routes in the app instance.
        -----------------
        Args:
            app (web.app) : application instance
        Returns:
            Returns nothing
        """
        routes = setup_routes()
        final_routes = []
        for route in routes:
            if route[0].lower() in self.methods_allowed:
                # register the route with the HTTP method it declares
                handler = web.get if route[0].lower() == "get" else web.post
                final_routes.append(
                    handler(self.prefix + route[1], route[2]))
            else:
                self.logger.error('Method is not allowed, route not set',
                                  extra={"route": {
                                      "method": route[0].lower(),
                                      "path": self.prefix + route[1]}})
        app.add_routes(final_routes)

    async def load_initial_auth_data(self, clientdb):
        """
        Load the initial authentication data into the database.
        -----------------
        Args:
            clientdb (web.app) : application instance
        Returns:
            Returns nothing
        """
        auth = Auth(clientdb)
        print()
        print("Checking the initial data in the database")
        print()
        load = await auth.load_initial_data()
        if load:
            print()
            print("Initial data has been loaded")
            print()
            self.logger.info('Initial authentication data loaded')
        del auth

    def set_path_prefix(self):
        """
        Set the path prefix attribute
        -----------------
        Args:
            None
        Return:
            prefix (str): The path prefix that was set
        """
        app_name = self.config["app_name"]
        version = self.config["version"]
        self.prefix = f'/{app_name}/api/v{version}/'
        return self.prefix

    def is_exclude(self, request):
        """Check whether a request path is in the exclude list,
        i.e. whether the request is exempt from authentication.
        -----------------
        Args:
            request (objc): Aiohttp Web Request
        Returns:
            status (bool): Path validation status
        """
        for pattern in self.exclude_routes:
            if re.fullmatch(pattern, request.path):
                return True
        return False

    def set_response(self, data: dict):
        """ Take response data, look the key up in the response catalog
        and build the response data
        -----------------
        Args:
            data (dict): Data dictionary to set in the response
        Returns:
            response (dict): Serialized response data
        """
        key = data['key']
        response = self.catalog.get(key, False)
        if response:
            response["payload"] = data["payload"]
            response["uuid"] = str(uuid4())
        return response

    def set_error_response(self, data: dict):
        """ Take error data, look the key up in the response catalog
        and build the response data
        -----------------
        Args:
            data (dict): Data dictionary to set in the response
        Returns:
            response (dict): Serialized response data
        """
        key = data['key']
        response = self.catalog.get(key, False)
        if response:
            response["payload"] = data["payload"]
            response["uuid"] = str(uuid4())
            self.logger.error(response["detail"], extra=response)
        return web.json_response(
            response,
            status=HTTPStatus.UNPROCESSABLE_ENTITY.value)

    async def validate_schema(self, data, schema):
        """Request/response schema validator
        -----------------
        Args:
            data (json): JSON request/response object to check.
            schema (dict): Schema object definition
        Returns:
            status (bool): Schema validation status
            error_list (list): List of errors, if any
        """
        v = jsonschema.Draft7Validator(schema)
        errors = sorted(v.iter_errors(data), key=lambda e: e.path)
        error_list = []
        if errors:
            status = False
            for error in errors:
                error_list.append(error.message)
        else:
            status = True
        return status, error_list

    async def verify_signature(self, signature, api_secret, body_encoded):
        """Request signature validator
        -----------------
        Args:
            signature (str): Header content signature
            api_secret (str): Registered session token
            body_encoded (str): Encoded request body
        Returns:
            status (bool): Signature status
        """
        signature_hash = hmac.new(api_secret, body_encoded,
                                  hashlib.sha512).digest()
        base64_signature_hash = base64.b64encode(signature_hash).decode()
        if signature == base64_signature_hash:
            return True
        return False

    def verify_token_timeout(self, time_out: int, last_request: datetime):
        """ Check if the token is still valid: take the timeout and compare it
        against the time elapsed since the last request, returning the status
        -----------------
        Args:
            time_out (int): Token timeout in seconds
            last_request (datetime): Date of the last request from the session
        Returns:
            status (bool): Token validation status
        """
        now = datetime.now(tz=timezone.utc)
        dt_object = datetime.fromtimestamp(last_request, tz=timezone.utc)
        delta = now - dt_object
        status = False
        if time_out > delta.total_seconds():
            status = True
        return status

    def set_logger_in_file(self, level=logging.DEBUG):
        """
        Set the logger up with an alternative file handler (StackFileHandler
        class)
        -----------------
        Args:
            None
        Return:
            Returns nothing
        """
        logger_enable = self.config['logger']['enable']
        if logger_enable:
            logger_file_path = self.config['logger']['logs_file_path']
            logger_handler = StackFileHandler(logger_file_path)
            logger_handler.setLevel(level)
            self.logger.addHandler(logger_handler)

    def getLogger(self, name=None, level=logging.DEBUG, formatter=None):
        """
        Set the logger up with an alternative handler (StackloggingHandler
        class)
        -----------------
        Args:
            name, level, formatter: standard logging parameters
        Return:
            logger: Loaded logger instance
        """
        logger = logging.getLogger(name)
        logger.setLevel(level)
        logger_handler = StackloggingHandler()
        logger_handler.setLevel(level)
        if formatter:
            logger_handler.setFormatter(formatter)
        logger.addHandler(logger_handler)
        self.logger = logger
        logger.info('Log utility has been set up')
        return logger

    def get_extra_keys(self, record):
        """
        Take a logger record and clean it; only extra parameters are returned
        -----------------
        Args:
            record (logger.record): Logger record to clean
        Return:
            extra_keys (list): List of extra parameters
        """
        extra_keys = []
        for key, value in record.__dict__.items():
            if key not in self.reserved and not key.startswith("_"):
                extra_keys.append(key)
        return extra_keys

    def format_stackdriver_json(self, record, message):
        """
        Take a string message and format a new logger record with the correct
        logger format to show
        -----------------
        Args:
            message (str): Logger message string
            record (logger.record): Logger record to clean
        Return:
            log text (str): Formatted log line with extra parameters appended
        """
        date_format = '%Y-%m-%dT%H:%M:%SZ'
        dt = datetime.utcfromtimestamp(record.created).strftime(date_format)
        log_text = f'[{dt}] [{record.process}] [{record.levelname}] ' \
                   f'[{record.filename}:{record.lineno}] ' \
                   f'- Msg: {message} - Extra: '
        payload = {}
        extra_keys = self.get_extra_keys(record)
        for key in extra_keys:
            try:
                # serialization/type error check
                json.dumps(record.__dict__[key])
                payload[key] = record.__dict__[key]
            except TypeError:
                payload[key] = str(record.__dict__[key])
        dumps = json.dumps(payload)
        return log_text + dumps


class StackloggingHandler(logging.StreamHandler):
    """
    Handler class based on logging.StreamHandler that supports alternative
    formats and adds extra data to the logger record
    """

    def __init__(self, stream=None):
        super(StackloggingHandler, self).__init__()

    def format(self, record):
        """
        Add the logger format to the record
        -----------------
        Args:
            record (logger.record): Logger record to format
        Return:
            record (logger.record): Formatted logger record
        """
        message = super(StackloggingHandler, self).format(record)
        return Vertebral().format_stackdriver_json(record, message)


class StackFileHandler(logging.FileHandler):
    """
    Handler class based on logging.FileHandler that supports alternative
    formats and adds extra data to the logger record
    """

    def format(self, record):
        """
        Add the logger format to the record
        -----------------
        Args:
            record (logger.record): Logger record to format
        Return:
            record (logger.record): Formatted logger record
        """
        message = super(StackFileHandler, self).format(record)
        return Vertebral().format_stackdriver_json(record, message)
| 30.306306 | 80 | 0.56198 | 1,329 | 13,456 | 5.562077 | 0.222724 | 0.033415 | 0.029221 | 0.019481 | 0.243642 | 0.227679 | 0.210904 | 0.210363 | 0.185741 | 0.185741 | 0 | 0.002561 | 0.332565 | 13,456 | 443 | 81 | 30.374718 | 0.82051 | 0.002229 | 0 | 0.146789 | 0 | 0 | 0.104399 | 0.017324 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.077982 | null | null | 0.027523 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
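Both handlers above delegate to `format_stackdriver_json`, so any `extra` kwargs on a log call end up as a JSON suffix on the line. A minimal sketch (only runnable inside a rendered cookiecutter project where this module imports cleanly; the logger name and output shown are illustrative):

log = Vertebral().getLogger('demo')
log.info('user created', extra={'user_id': 42})
# e.g. [2024-01-01T00:00:00Z] [1234] [INFO] [app.py:10] - Msg: user created - Extra: {"user_id": 42}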
b4df7dcc36490cc477f25560dc32bb1832158dd7 | 835 | py | Python | example-query.py | ericwhyne/http-ricochet | bd8edb8591047e00a45727457fd09c089f591836 | [
"BSD-3-Clause"
] | 19 | 2015-05-06T16:45:50.000Z | 2020-07-31T10:26:17.000Z | example-query.py | ericwhyne/http-ricochet | bd8edb8591047e00a45727457fd09c089f591836 | [
"BSD-3-Clause"
] | 2 | 2015-05-06T17:00:33.000Z | 2015-07-29T19:51:58.000Z | example-query.py | ericwhyne/http-ricochet | bd8edb8591047e00a45727457fd09c089f591836 | [
"BSD-3-Clause"
] | 3 | 2015-05-06T23:16:00.000Z | 2019-08-13T15:09:44.000Z | #!/usr/bin/python
import urllib2
import random
# A list of places we've deployed ricochet
ricochet_servers = [
"http://127.0.0.1:8080/ricochet/ricochet?url=",
"http://127.0.0.1:8080/ricochet/ricochet?url="
]
# We're identifying ourselves to ourself here, this will show up in the server logs (unless you've disabled them).
headers = { 'User-Agent' : 'Its me!' }
# Pick a random server, build the query, then make the query.
ricochet_server = random.choice(ricochet_servers)
content_type = "&ct=text/html"
url = "http://news.ycombinator.com"
# use urllib2.quote if your url contains parameters, the ricochet proxy will unquote before making the request
# url = urllib2.quote("https://news.ycombinator.com/newest?n=31")
query = ricochet_server + url + content_type
print urllib2.urlopen(urllib2.Request(query, None, headers)).read()
| 36.304348 | 114 | 0.747305 | 129 | 835 | 4.790698 | 0.596899 | 0.07767 | 0.02589 | 0.029126 | 0.106796 | 0.106796 | 0.106796 | 0.106796 | 0.106796 | 0 | 0 | 0.036885 | 0.123353 | 835 | 22 | 115 | 37.954545 | 0.807377 | 0.482635 | 0 | 0 | 0 | 0 | 0.340376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4e4e45949d2b3e2692bb5ce21125c0005114be5 | 780 | py | Python | buildmsi.py | vivainio/pylauncher-import | 584b7127bfcde114a55188f1edafdd768213e51e | [
"BSD-2-Clause"
] | null | null | null | buildmsi.py | vivainio/pylauncher-import | 584b7127bfcde114a55188f1edafdd768213e51e | [
"BSD-2-Clause"
] | null | null | null | buildmsi.py | vivainio/pylauncher-import | 584b7127bfcde114a55188f1edafdd768213e51e | [
"BSD-2-Clause"
] | 1 | 2021-11-09T02:37:35.000Z | 2021-11-09T02:37:35.000Z | import getpass
import os
import sys

VER = '1.0.1.7'
VERSION = 'Version=%s' % VER
MANUFACTURER = 'Manufacturer=Vinay Sajip'
X86 = 'Platform=x86'
X64 = 'Platform=x64'
TOWIN = 'ToWindows'

def main():
    signpwd = getpass.getpass('Password for signing:')
    import builddoc
    builddoc.main()
    os.environ['SIGNPWD'] = signpwd
    import makemsi
    makemsi.main(['-o', 'launchwin-%s' % VER, X86, VERSION, MANUFACTURER, TOWIN, 'launcher'])
    makemsi.main(['-o', 'launcher-%s' % VER, X86, VERSION, MANUFACTURER, 'launcher'])
    makemsi.main(['-o', 'launchwin-%s' % VER, X64, VERSION, MANUFACTURER, TOWIN, 'launcher'])
    makemsi.main(['-o', 'launcher-%s' % VER, X64, VERSION, MANUFACTURER, 'launcher'])

if __name__ == '__main__':
    sys.exit(main())
| 32.5 | 94 | 0.635897 | 93 | 780 | 5.247312 | 0.354839 | 0.040984 | 0.098361 | 0.122951 | 0.434426 | 0.331967 | 0.229508 | 0.229508 | 0.229508 | 0.229508 | 0 | 0.031546 | 0.187179 | 780 | 24 | 95 | 32.5 | 0.73817 | 0 | 0 | 0 | 0 | 0 | 0.258575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.095238 | 0.238095 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b4e905f07ef9267e0151e885d12dea14423eaf4d | 1,063 | py | Python | source/miscellaneous/test_get_sub_dir_dates.py | youdar/usesul_functions | 7cca9f8e241f2334f9eb0eab46d40b4c109e8518 | [
"MIT"
] | null | null | null | source/miscellaneous/test_get_sub_dir_dates.py | youdar/usesul_functions | 7cca9f8e241f2334f9eb0eab46d40b4c109e8518 | [
"MIT"
] | null | null | null | source/miscellaneous/test_get_sub_dir_dates.py | youdar/usesul_functions | 7cca9f8e241f2334f9eb0eab46d40b4c109e8518 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from __future__ import division
from get_sub_dir_dates import get_sub_dir_dates
from get_table_hdfs_location import get_table_hdfs_location
import unittest
import sys

__author__ = 'youval.dar'


class CollectDates(unittest.TestCase):

    def test_get_sub_dir_dates(self):
        print(sys._getframe().f_code.co_name)
        acme_table_name = 'youval_db.acme_with_account_info'
        dr = get_table_hdfs_location(acme_table_name, print_out=False)
        dates = list(get_sub_dir_dates(dr))
        print(dates[0])


def run_selected_tests():
    """ Run selected tests

    1) List in "tests" the names of the particular tests you want to run
    2) Comment out unittest.main()
    3) Un-comment unittest.TextTestRunner().run(run_selected_tests())
    """
    tests = ['test_something', 'test_something_else']
    suite = unittest.TestSuite(map(CollectDates, tests))
    return suite


if __name__ == '__main__':
    # use for individual tests
    # unittest.TextTestRunner().run(run_selected_tests())
    # Use to run all tests
    unittest.main()
| 27.25641 | 69 | 0.732832 | 152 | 1,063 | 4.736842 | 0.473684 | 0.033333 | 0.05 | 0.077778 | 0.186111 | 0.113889 | 0 | 0 | 0 | 0 | 0 | 0.004566 | 0.175917 | 1,063 | 38 | 70 | 27.973684 | 0.817352 | 0.111007 | 0 | 0 | 0 | 0 | 0.112162 | 0.043243 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.263158 | null | null | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4ee34ec8c2a387636b7c753e6e3f2ecebd11bb6 | 3,534 | py | Python | hangman/ps3_hangman.py | kriyaaseela/Hangman | 8733ced9f915890b74d53587e67c3b9b36815484 | [
"MIT"
] | 1 | 2016-11-13T20:13:06.000Z | 2016-11-13T20:13:06.000Z | hangman/ps3_hangman.py | kriyaaseela/Hangman | 8733ced9f915890b74d53587e67c3b9b36815484 | [
"MIT"
] | null | null | null | hangman/ps3_hangman.py | kriyaaseela/Hangman | 8733ced9f915890b74d53587e67c3b9b36815484 | [
"MIT"
] | null | null | null | # Hangman game

import random

WORDLIST_FILENAME = "words.txt"

def loadWords():
    """
    Returns a list of valid words. Words are strings of lowercase letters.

    Depending on the size of the word list, this function may
    take a while to finish.
    """
    # inFile: file
    inFile = open(WORDLIST_FILENAME, 'r')
    # line: string
    line = inFile.readline()
    # wordlist: list of strings
    wordlist = line.split()
    return wordlist

def chooseWord(wordlist):
    """
    wordlist (list): list of words (strings)

    Returns a word from wordlist at random
    """
    return random.choice(wordlist)

# end of helper code
# -----------------------------------

# Load the list of words into the variable wordlist
# so that it can be accessed from anywhere in the program
wordlist = loadWords()

def isWordGuessed(secretWord, lettersGuessed):
    '''
    secretWord: string, the word the user is guessing
    lettersGuessed: list, what letters have been guessed so far
    returns: boolean, True if all the letters of secretWord are in
        lettersGuessed; False otherwise
    '''
    # FILL IN YOUR CODE HERE...
    t = True
    for x in secretWord:
        if x not in lettersGuessed:
            t = False
            break
    return t

def getGuessedWord(secretWord, lettersGuessed):
    '''
    secretWord: string, the word the user is guessing
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters and underscores that represents
        what letters in secretWord have been guessed so far.
    '''
    # FILL IN YOUR CODE HERE...
    x = ""
    for a in secretWord:
        if a in lettersGuessed:
            x += a + " "
        else:
            x += "_ "
    return x

def getAvailableLetters(lettersGuessed):
    '''
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters that represents what letters have
        not yet been guessed.
    '''
    # FILL IN YOUR CODE HERE...
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    x = ""
    for a in alphabet:
        if a not in lettersGuessed:
            x += a
    return x

def hangman(secretWord):
    '''
    secretWord: string, the secret word to guess.
    '''
    # FILL IN YOUR CODE HERE...
    print("Welcome to the game, Hangman!")
    print("I am thinking of a word that is "+str(len(secretWord))+" letters long.")
    lettersGuessed = []
    mistakesMade = 0
    while not isWordGuessed(secretWord, lettersGuessed):
        if not mistakesMade < 8:
            break
        print("-----------")
        print("You have "+str(8-mistakesMade)+" guesses left.")
        availableLetters = getAvailableLetters(lettersGuessed)
        print("Available Letters: "+availableLetters)
        c = input("Please guess a letter: ")
        if c[0] in lettersGuessed:
            print("Oops! You've already guessed that letter: "+getGuessedWord(secretWord, lettersGuessed))
            continue
        lettersGuessed.append(c[0])
        if c[0] in secretWord:
            print("Good guess: "+getGuessedWord(secretWord, lettersGuessed))
        else:
            print("Oops! That letter is not in my word: "+getGuessedWord(secretWord, lettersGuessed))
            mistakesMade += 1
    print("-----------")
    if isWordGuessed(secretWord, lettersGuessed):
        print("Congratulations, you won!")
    else:
        print("Sorry, you ran out of guesses. The word was "+secretWord+".")

secretWord = chooseWord(wordlist).lower()
hangman(secretWord)
| 28.272 | 106 | 0.631862 | 411 | 3,534 | 5.425791 | 0.318735 | 0.075336 | 0.026906 | 0.030493 | 0.195516 | 0.15426 | 0.15426 | 0.15426 | 0.15426 | 0.15426 | 0 | 0.002698 | 0.265705 | 3,534 | 124 | 107 | 28.5 | 0.856647 | 0.342105 | 0 | 0.180328 | 0 | 0 | 0.166437 | 0.011954 | 0 | 0 | 0 | 0.032258 | 0 | 1 | 0.098361 | false | 0 | 0.016393 | 0 | 0.196721 | 0.180328 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4f0d8d809078663bd768dd9f4f542a1be3ac6fe | 520 | py | Python | src/utils/logger.py | hao-wang/Montage | d1c98ec7dbe20d0449f0d02694930cf1f69a5cea | [
"MIT"
] | 65 | 2020-01-03T11:59:03.000Z | 2022-03-19T07:10:47.000Z | src/utils/logger.py | hao-wang/Montage | d1c98ec7dbe20d0449f0d02694930cf1f69a5cea | [
"MIT"
] | 5 | 2020-01-10T01:55:26.000Z | 2020-09-23T10:44:00.000Z | src/utils/logger.py | hao-wang/Montage | d1c98ec7dbe20d0449f0d02694930cf1f69a5cea | [
"MIT"
] | 10 | 2020-10-07T02:39:06.000Z | 2021-06-04T07:06:54.000Z | class Colors:
    END = '\033[0m'
    ERROR = '\033[91m[ERROR] '
    INFO = '\033[94m[INFO] '
    WARN = '\033[93m[WARN] '


def get_color(msg_type):
    if msg_type == 'ERROR':
        return Colors.ERROR
    elif msg_type == 'INFO':
        return Colors.INFO
    elif msg_type == 'WARN':
        return Colors.WARN
    else:
        return Colors.END


def get_msg(msg, msg_type=None):
    color = get_color(msg_type)
    msg = ''.join([color, msg, Colors.END])
    return msg


def print_msg(msg, msg_type=None):
    msg = get_msg(msg, msg_type)
    print(msg)
| 20.8 | 41 | 0.642308 | 82 | 520 | 3.914634 | 0.280488 | 0.174455 | 0.084112 | 0.121495 | 0.165109 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045894 | 0.203846 | 520 | 24 | 42 | 21.666667 | 0.729469 | 0 | 0 | 0 | 0 | 0 | 0.126923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.619048 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
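Usage is a one-liner per message; the ANSI escape codes above only render as colors on terminals that support them.

print_msg('cannot open file', 'ERROR')  # red [ERROR] prefix
print_msg('disk almost full', 'WARN')   # yellow [WARN] prefix
print_msg('plain message')              # no prefix, default color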
b4f5a7efcd7626395f15699df12dd9127abee6af | 187 | py | Python | text/code/exact-string-matching/naive-forward.py | pmikolajczyk41/string-algorithms | faa7c7b3ab18a157a27e8c08081f2efebf8be900 | [
"MIT"
] | 1 | 2020-06-27T01:33:43.000Z | 2020-06-27T01:33:43.000Z | text/code/exact-string-matching/naive-forward.py | TenGumis/string-algorithms | e57a9dc6150e92ab65cad4a5c1e68533b7166eb7 | [
"MIT"
] | null | null | null | text/code/exact-string-matching/naive-forward.py | TenGumis/string-algorithms | e57a9dc6150e92ab65cad4a5c1e68533b7166eb7 | [
"MIT"
] | null | null | null | def naive_string_matching(t, w, n, m):
    for i in range(n - m + 1):
        j = 0
        while j < m and t[i + j + 1] == w[j + 1]:
            j = j + 1
        if j == m:
            return True
    return False
| 23.375 | 45 | 0.475936 | 38 | 187 | 2.289474 | 0.552632 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042735 | 0.374332 | 187 | 8 | 46 | 23.375 | 0.700855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
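Note the `+ 1` offsets: the routine appears to assume 1-based strings, i.e. text and pattern carry a throwaway sentinel at index 0 (an inference from the indexing, not from documentation). A quick sketch:

# '#' is a sentinel so the real characters start at index 1.
t, w = '#abcabc', '#cab'
print(naive_string_matching(t, w, n=6, m=3))  # True -- 'cab' occurs in 'abcabc'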
b4f7eeae12122017454859c15609e21806fad1d5 | 3,003 | py | Python | imgtools/modules/segmentation.py | bhklab/med-imagetools | 0cce0ee6666d052d4f76a1b6dc5d088392d309f4 | [
"Apache-2.0"
] | 9 | 2021-12-14T19:53:57.000Z | 2022-01-18T18:45:26.000Z | imgtools/modules/segmentation.py | bhklab/med-imagetools | 0cce0ee6666d052d4f76a1b6dc5d088392d309f4 | [
"Apache-2.0"
] | 4 | 2021-12-05T02:54:00.000Z | 2021-12-10T20:32:20.000Z | imgtools/modules/segmentation.py | bhklab/imgtools | 0f0414533cb6667b68aa48541feb376226fd5515 | [
"Apache-2.0"
] | 1 | 2021-07-30T20:22:46.000Z | 2021-07-30T20:22:46.000Z | from functools import wraps
import numpy as np
import SimpleITK as sitk
from ..utils import array_to_image, image_to_array
def accepts_segmentations(f):
@wraps(f)
def wrapper(img, *args, **kwargs):
result = f(img, *args, **kwargs)
if isinstance(img, Segmentation):
result = sitk.Cast(result, sitk.sitkVectorUInt8)
return Segmentation(result, roi_names=img.roi_names)
else:
return result
return wrapper
def map_over_labels(segmentation, f, include_background=False, return_segmentation=True, **kwargs):
if include_background:
labels = range(segmentation.num_labels + 1)
else:
labels = range(1, segmentation.num_labels + 1)
res = [f(segmentation.get_label(label=label), **kwargs) for label in labels]
if return_segmentation and isinstance(res[0], sitk.Image):
res = [sitk.Cast(r, sitk.sitkUInt8) for r in res]
res = Segmentation(sitk.Compose(*res), roi_names=segmentation.roi_names)
return res
class Segmentation(sitk.Image):
def __init__(self, segmentation, roi_names=None):
super().__init__(segmentation)
self.num_labels = self.GetNumberOfComponentsPerPixel()
if not roi_names:
self.roi_names = {f"label_{i}": i for i in range(1, self.num_labels+1)}
else:
self.roi_names = roi_names
if 0 in self.roi_names.values():
self.roi_names = {k : v+1 for k, v in self.roi_names.items()}
if len(self.roi_names) != self.num_labels:
for i in range(1, self.num_labels+1):
if i not in self.roi_names.values():
self.roi_names[f"label_{i}"] = i
def get_label(self, label=None, name=None, relabel=False):
if label is None and name is None:
raise ValueError("Must pass either label or name.")
if label is None:
label = self.roi_names[name]
if label == 0:
# background is stored implicitly and needs to be computed
label_arr = sitk.GetArrayViewFromImage(self)
label_img = sitk.GetImageFromArray((label_arr.sum(-1) == 0).astype(np.uint8))
else:
label_img = sitk.VectorIndexSelectionCast(self, label - 1)
if relabel:
label_img *= label
return label_img
def to_label_image(self):
arr, *_ = image_to_array(self)
# TODO handle overlapping labels
label_arr = np.where(arr.sum(-1) != 0, arr.argmax(-1) + 1, 0)
label_img = array_to_image(label_arr, reference_image=self)
return label_img
# TODO also overload other operators (arithmetic, etc.)
# with some sensible behaviour
def __getitem__(self, idx):
res = super().__getitem__(idx)
if isinstance(res, sitk.Image):
res = Segmentation(res, self.roi_names)
return res
def __repr__(self):
return f"<Segmentation with ROIs: {self.roi_names!r}>"
| 35.75 | 99 | 0.628705 | 395 | 3,003 | 4.58481 | 0.265823 | 0.079514 | 0.072888 | 0.023192 | 0.079514 | 0.079514 | 0.079514 | 0.064053 | 0.028713 | 0 | 0 | 0.010032 | 0.26973 | 3,003 | 83 | 100 | 36.180723 | 0.815777 | 0.05661 | 0 | 0.126984 | 0 | 0 | 0.032885 | 0 | 0 | 0 | 0 | 0.012048 | 0 | 1 | 0.126984 | false | 0.015873 | 0.063492 | 0.015873 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
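A sketch of how `map_over_labels` composes with the class above; the image sizes, ROI names, and choice of `sitk.BinaryMedian` as the per-label filter are arbitrary illustration, and SimpleITK must be installed.

import SimpleITK as sitk

# Two-label toy segmentation: compose two binary masks into a vector image.
mask_a = sitk.Image(8, 8, sitk.sitkUInt8)
mask_b = sitk.Image(8, 8, sitk.sitkUInt8)
seg = Segmentation(sitk.Compose(mask_a, mask_b),
                   roi_names={'tumour': 1, 'node': 2})

# Apply a filter to every label and get a Segmentation back.
smoothed = map_over_labels(seg, sitk.BinaryMedian)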
b4fe009337e782310f135a773be727eb38fed3e9 | 3,455 | py | Python | leo/modes/kivy.py | ATikhonov2/leo-editor | 225aac990a9b2804aaa9dea29574d6e072e30474 | [
"MIT"
] | 1,550 | 2015-01-14T16:30:37.000Z | 2022-03-31T08:55:58.000Z | leo/modes/kivy.py | ATikhonov2/leo-editor | 225aac990a9b2804aaa9dea29574d6e072e30474 | [
"MIT"
] | 2,009 | 2015-01-13T16:28:52.000Z | 2022-03-31T18:21:48.000Z | leo/modes/kivy.py | ATikhonov2/leo-editor | 225aac990a9b2804aaa9dea29574d6e072e30474 | [
"MIT"
] | 200 | 2015-01-05T15:07:41.000Z | 2022-03-07T17:05:01.000Z | # Leo colorizer control file for kivy mode.
# This file is in the public domain.

# Properties for kivy mode.
properties = {
    "ignoreWhitespace": "false",
    "lineComment": "#",
}

# Attributes dict for kivy_main ruleset.
kivy_main_attributes_dict = {
    "default": "null",
    "digit_re": "",
    "escape": "",
    "highlight_digits": "true",
    "ignore_case": "true",
    "no_word_sep": "",
}

# Dictionary of attributes dictionaries for kivy mode.
attributesDictDict = {
    "kivy_main": kivy_main_attributes_dict,
}

# Keywords dict for kivy_main ruleset.
kivy_main_keywords_dict = {
    "app": "keyword2",
    "args": "keyword2",
    "canvas": "keyword1",
    "id": "keyword1",
    "root": "keyword2",
    "self": "keyword2",
    "size": "keyword1",
    "text": "keyword1",
    "x": "keyword1",
    "y": "keyword1",
}

# Dictionary of keywords dictionaries for kivy mode.
keywordsDictDict = {
    "kivy_main": kivy_main_keywords_dict,
}

# Rules for kivy_main ruleset.

def kivy_rule0(colorer, s, i):
    return colorer.match_eol_span(s, i, kind="comment1", seq="#",
        at_line_start=False, at_whitespace_end=False, at_word_start=False,
        delegate="", exclude_match=False)

def kivy_rule1(colorer, s, i):
    return colorer.match_span(s, i, kind="literal1", begin="\"", end="\"",
        at_line_start=False, at_whitespace_end=False, at_word_start=False,
        delegate="kivy::literal_one", exclude_match=False,
        no_escape=False, no_line_break=False, no_word_break=False)

def kivy_rule2(colorer, s, i):
    return colorer.match_keywords(s, i)

# Rules dict for kivy_main ruleset.
rulesDict1 = {
    "\"": [kivy_rule1,],
    "#": [kivy_rule0,],
    "0": [kivy_rule2,],
    "1": [kivy_rule2,],
    "2": [kivy_rule2,],
    "3": [kivy_rule2,],
    "4": [kivy_rule2,],
    "5": [kivy_rule2,],
    "6": [kivy_rule2,],
    "7": [kivy_rule2,],
    "8": [kivy_rule2,],
    "9": [kivy_rule2,],
    "@": [kivy_rule2,],
    "A": [kivy_rule2,],
    "B": [kivy_rule2,],
    "C": [kivy_rule2,],
    "D": [kivy_rule2,],
    "E": [kivy_rule2,],
    "F": [kivy_rule2,],
    "G": [kivy_rule2,],
    "H": [kivy_rule2,],
    "I": [kivy_rule2,],
    "J": [kivy_rule2,],
    "K": [kivy_rule2,],
    "L": [kivy_rule2,],
    "M": [kivy_rule2,],
    "N": [kivy_rule2,],
    "O": [kivy_rule2,],
    "P": [kivy_rule2,],
    "Q": [kivy_rule2,],
    "R": [kivy_rule2,],
    "S": [kivy_rule2,],
    "T": [kivy_rule2,],
    "U": [kivy_rule2,],
    "V": [kivy_rule2,],
    "W": [kivy_rule2,],
    "X": [kivy_rule2,],
    "Y": [kivy_rule2,],
    "Z": [kivy_rule2,],
    "a": [kivy_rule2,],
    "b": [kivy_rule2,],
    "c": [kivy_rule2,],
    "d": [kivy_rule2,],
    "e": [kivy_rule2,],
    "f": [kivy_rule2,],
    "g": [kivy_rule2,],
    "h": [kivy_rule2,],
    "i": [kivy_rule2,],
    "j": [kivy_rule2,],
    "k": [kivy_rule2,],
    "l": [kivy_rule2,],
    "m": [kivy_rule2,],
    "n": [kivy_rule2,],
    "o": [kivy_rule2,],
    "p": [kivy_rule2,],
    "q": [kivy_rule2,],
    "r": [kivy_rule2,],
    "s": [kivy_rule2,],
    "t": [kivy_rule2,],
    "u": [kivy_rule2,],
    "v": [kivy_rule2,],
    "w": [kivy_rule2,],
    "x": [kivy_rule2,],
    "y": [kivy_rule2,],
    "z": [kivy_rule2,],
}

# x.rulesDictDict for kivy mode.
rulesDictDict = {
    "kivy_main": rulesDict1,
}

# Import dict for kivy mode.
importDict = {}
| 25.218978 | 75 | 0.546165 | 413 | 3,455 | 4.288136 | 0.27845 | 0.32524 | 0.037267 | 0.040655 | 0.458498 | 0.446076 | 0.400339 | 0.36646 | 0.36646 | 0.36646 | 0 | 0.035412 | 0.248046 | 3,455 | 136 | 76 | 25.404412 | 0.646266 | 0.116643 | 0 | 0.017857 | 0 | 0 | 0.120565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026786 | false | 0 | 0.008929 | 0.026786 | 0.0625 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
37058b29e2d2eb0d2cb3136d55f0713da2693a83 | 643 | py | Python | hw_asr/augmentations/spectrogram_augmentations/SpecAug.py | isdevnull/asr_hw | 9650506b80d4e38574b63390f79a6f01786b7d18 | [
"MIT"
] | null | null | null | hw_asr/augmentations/spectrogram_augmentations/SpecAug.py | isdevnull/asr_hw | 9650506b80d4e38574b63390f79a6f01786b7d18 | [
"MIT"
] | null | null | null | hw_asr/augmentations/spectrogram_augmentations/SpecAug.py | isdevnull/asr_hw | 9650506b80d4e38574b63390f79a6f01786b7d18 | [
"MIT"
] | null | null | null | import torchaudio.transforms
from torch import nn
from hw_asr.augmentations.base import AugmentationBase
from hw_asr.augmentations.random_apply import RandomApply
class SpecAug(AugmentationBase):
def __init__(self, freq_mask: int, time_mask: int, prob: float, *args, **kwargs):
self.augmentation = nn.Sequential(
torchaudio.transforms.FrequencyMasking(freq_mask),
torchaudio.transforms.TimeMasking(time_mask)
)
self.prob = prob
self.random_caller = RandomApply(self.augmentation, self.prob)
def __call__(self, data, *args, **kwargs):
return self.random_caller(data)
| 32.15 | 85 | 0.720062 | 74 | 643 | 6.027027 | 0.472973 | 0.134529 | 0.040359 | 0.098655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191291 | 643 | 19 | 86 | 33.842105 | 0.857692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
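A usage sketch; the shapes follow torchaudio's (..., freq, time) convention, the parameter values are arbitrary, and `RandomApply` is assumed (from the code above) to apply the wrapped module with probability `prob` and otherwise pass the input through.

import torch

aug = SpecAug(freq_mask=15, time_mask=35, prob=0.5)
spec = torch.rand(1, 128, 300)  # (batch, freq bins, time frames)
out = aug(spec)                 # masked with probability 0.5, else unchanged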
3708da2d53e985416aa3068357d2ebc357bc0355 | 7,558 | py | Python | loom/group.py | probcomp/loom | 825188eae76e7106a6959f6a18312b0aa3338f83 | [
"BSD-3-Clause"
] | 2 | 2019-10-25T17:57:22.000Z | 2020-07-14T02:37:34.000Z | loom/group.py | probcomp/loom | 825188eae76e7106a6959f6a18312b0aa3338f83 | [
"BSD-3-Clause"
] | 1 | 2019-12-13T03:08:05.000Z | 2019-12-13T03:08:05.000Z | loom/group.py | probcomp/loom | 825188eae76e7106a6959f6a18312b0aa3338f83 | [
"BSD-3-Clause"
] | 1 | 2020-06-22T11:23:43.000Z | 2020-06-22T11:23:43.000Z | # Copyright (c) 2014, Salesforce.com, Inc. All rights reserved.
# Copyright (c) 2015, Google, Inc.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# - Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# - Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# - Neither the name of Salesforce.com nor the names of its contributors
# may be used to endorse or promote products derived from this
# software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import numpy
import pymetis
import pymetis._internal # HACK to avoid errors finding .so files in path
from itertools import izip
from collections import defaultdict
from collections import namedtuple
from distributions.io.stream import json_dump
from distributions.io.stream import open_compressed
from loom.schema_pb2 import CrossCat
from loom.cFormat import assignment_stream_load
from loom.util import LoomError
from loom.util import parallel_map
import loom.store
METIS_ARGS_TEMPFILE = 'temp.metis_args.json'
Row = namedtuple('Row', ['row_id', 'group_id', 'confidence'])
def collate(pairs):
groups = defaultdict(lambda: [])
for key, value in pairs:
groups[key].append(value)
return groups.values()
def group(root, feature_name, parallel=False):
paths = loom.store.get_paths(root, sample_count=None)
map_ = parallel_map if parallel else map
groupings = map_(group_sample, [
(sample, feature_name)
for sample in paths['samples']
])
return group_reduce(groupings)
def group_sample((sample, featureid)):
model = CrossCat()
with open_compressed(sample['model']) as f:
model.ParseFromString(f.read())
for kindid, kind in enumerate(model.kinds):
if featureid in kind.featureids:
break
assignments = assignment_stream_load(sample['assign'])
return collate((a.groupids(kindid), a.rowid) for a in assignments)
def group_reduce(groupings):
return find_consensus_grouping(groupings)
def find_consensus_grouping(groupings, debug=False):
    '''
    This implements Strehl et al's Meta-Clustering Algorithm [1].
    Inputs:
        groupings - a list of lists of lists of object ids, for example
            [
                [   # sample 0
                    [0, 1, 2],      # sample 0, group 0
                    [3, 4],         # sample 0, group 1
                    [5]             # sample 0, group 2
                ],
                [   # sample 1
                    [0, 1],         # sample 1, group 0
                    [2, 3, 4, 5]    # sample 1, group 1
                ]
            ]
    Returns:
        a list of Row instances sorted by (row.group_id, -row.confidence,
        row.row_id), where group ids are assigned in decreasing order of
        group size
    References:
        [1] Alexander Strehl, Joydeep Ghosh, Claire Cardie (2002)
        "Cluster Ensembles - A Knowledge Reuse Framework
        for Combining Multiple Partitions"
        Journal of Machine Learning Research
        http://jmlr.csail.mit.edu/papers/volume3/strehl02a/strehl02a.pdf
    '''
    if not groupings:
        raise LoomError('tried to find consensus among zero groupings')

    # ------------------------------------------------------------------------
    # Set up consensus grouping problem

    allgroups = sum(groupings, [])
    objects = list(set(sum(allgroups, [])))
    objects.sort()
    index = {item: i for i, item in enumerate(objects)}
    vertices = [numpy.array(map(index.__getitem__, g), dtype=numpy.intp)
                for g in allgroups]
    contains = numpy.zeros((len(vertices), len(objects)), dtype=numpy.float32)
    for v, vertex in enumerate(vertices):
        contains[v, vertex] = 1  # i.e. for u in vertex: contains[v, u] = 1

    # We use the binary Jaccard measure for similarity
    overlap = numpy.dot(contains, contains.T)
    diag = overlap.diagonal()
    denom = (diag.reshape(len(vertices), 1) +
             diag.reshape(1, len(vertices)) - overlap)
    similarity = overlap / denom
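    # Worked example of the binary Jaccard measure (illustrative only):
    # groups {0, 1, 2} and {1, 2, 3} overlap in 2 objects, so their
    # similarity is overlap / (|A| + |B| - overlap) = 2 / (3 + 3 - 2) = 0.5.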
    # ------------------------------------------------------------------------
    # Format for metis

    if not (similarity.max() <= 1):
        raise LoomError('similarity.max() = {}'.format(similarity.max()))
    similarity *= 2**16  # metis segfaults if this is too large
    int_similarity = numpy.zeros(similarity.shape, dtype=numpy.int32)
    int_similarity[:] = numpy.rint(similarity)
    edges = int_similarity.nonzero()
    edge_weights = map(int, int_similarity[edges])
    edges = numpy.transpose(edges)
    adjacency = [[] for _ in vertices]
    for i, j in edges:
        adjacency[i].append(j)

    # FIXME is there a better way to choose the final group count?
    group_count = int(numpy.median(map(len, groupings)))
    metis_args = {
        'nparts': group_count,
        'adjacency': adjacency,
        'eweights': edge_weights,
    }
    if debug:
        json_dump(metis_args, METIS_ARGS_TEMPFILE, indent=4)
    edge_cut, partition = pymetis.part_graph(**metis_args)
    if debug:
        os.remove(METIS_ARGS_TEMPFILE)

    # ------------------------------------------------------------------------
    # Clean up solution

    parts = range(group_count)
    if len(partition) != len(vertices):
        raise LoomError('metis output vector has wrong length')
    represents = numpy.zeros((len(parts), len(vertices)))
    for v, p in enumerate(partition):
        represents[p, v] = 1
    contains = numpy.dot(represents, contains)
    represent_counts = represents.sum(axis=1)
    represent_counts[numpy.where(represent_counts == 0)] = 1  # avoid NANs
    contains /= represent_counts.reshape(group_count, 1)
    bestmatch = contains.argmax(axis=0)
    confidence = contains[bestmatch, range(len(bestmatch))]
    if not all(numpy.isfinite(confidence)):
        raise LoomError('confidence is nan')
    nonempty_groups = list(set(bestmatch))
    nonempty_groups.sort()
    reindex = {j: i for i, j in enumerate(nonempty_groups)}
    grouping = [
        Row(row_id=objects[i], group_id=reindex[g], confidence=c)
        for i, (g, c) in enumerate(izip(bestmatch, confidence))
    ]
    groups = collate((row.group_id, row) for row in grouping)
    groups.sort(key=len, reverse=True)
    grouping = [
        Row(row_id=row.row_id, group_id=group_id, confidence=row.confidence)
        for group_id, group in enumerate(groups)
        for row in group
    ]
    grouping.sort(key=lambda x: (x.group_id, -x.confidence, x.row_id))
    return grouping
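# Minimal usage sketch (hypothetical groupings; pymetis must be importable):
#     rows = find_consensus_grouping([
#         [[0, 1, 2], [3, 4], [5]],
#         [[0, 1], [2, 3, 4, 5]],
#     ])
#     for row in rows:
#         print row.row_id, row.group_id, row.confidence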
| 36.162679 | 78 | 0.653744 | 957 | 7,558 | 5.086729 | 0.371996 | 0.011504 | 0.006574 | 0.009449 | 0.056697 | 0.027938 | 0.027938 | 0.027938 | 0.027938 | 0.027938 | 0 | 0.010626 | 0.22797 | 7,558 | 208 | 79 | 36.336538 | 0.82365 | 0.271897 | 0 | 0.036697 | 0 | 0 | 0.046034 | 0 | 0 | 0 | 0 | 0.004808 | 0 | 0 | null | null | 0 | 0.12844 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
370a96e1434086705b394298ea7d87edaa65f00a | 4,309 | py | Python | easyci/app/easyCI/tasksPool.py | 9OMShitikov/anytask | 71354543f467f6c824dfb194bf48ee76c391ff53 | [
"MIT"
] | null | null | null | easyci/app/easyCI/tasksPool.py | 9OMShitikov/anytask | 71354543f467f6c824dfb194bf48ee76c391ff53 | [
"MIT"
] | null | null | null | easyci/app/easyCI/tasksPool.py | 9OMShitikov/anytask | 71354543f467f6c824dfb194bf48ee76c391ff53 | [
"MIT"
] | null | null | null | import json
import requests
import tempfile
import shutil
import subprocess
import os
import logging
import urllib.request
from multiprocessing import Pool
import app.easyCI.docker as docker
from contextlib import contextmanager
LOG = logging.getLogger(__name__)
CONFIG = "config.json"
PASSWORDS = "passwords.json"
MAX_COMMENT_SIZE = 10000
PROCS = 1
REQUEST_TIMEOUT = 300
class QueueTask(object):
    host = None
    auth = None
    config = None
    id = None
    course = None
    task = None
    issue = None
    event = None
    files = None

    def __repr__(self):
        return repr(self.__dict__)
@contextmanager
def tmp_dir():
    t = tempfile.mkdtemp(dir="/var/tmp")
    try:
        yield t
    finally:
        shutil.rmtree(t)
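# Usage sketch (assumed): the directory exists only inside the with-block and
# is removed afterwards, even if the block raises.
#     with tmp_dir() as d:
#         open(os.path.join(d, "scratch.txt"), "w").close()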
def git_clone(repo, dst_dir):
    cmd = ["git", "clone", repo, dst_dir]
    LOG.info("RUN: %s", cmd)
    subprocess.check_call(cmd)
def prepare_dir(qtask, dirname):
    git_dir = os.path.join(dirname, "git")
    task_dir = os.path.join(dirname, "task")
    git_clone(qtask.course["repo"], git_dir)
    os.mkdir(task_dir)
    for url in qtask.files:
        filename = url.split('/')[-1]
        dst_path = os.path.join(task_dir, filename)
        LOG.info("Download '%s' -> '%s'", url, dst_path)
        print(url, dst_path)
        urllib.request.urlretrieve(url, dst_path)
def process_task(qtask):
    LOG.info("Process task %s", qtask.id)
    with tmp_dir() as dirname:
        prepare_dir(qtask, dirname)
        run_cmd = qtask.course["run_cmd"] + [qtask.task, "/task_dir/task"]
        # run_cmd = ["ls", "/task_dir/task"]
        ret = docker.execute(run_cmd, cwd="/task_dir/git",
                             timeout=qtask.course["timeout"], user='root',
                             network='bridge', image=qtask.course["docker_image"],
                             volumes=["{}:/task_dir:ro".format(os.path.abspath(dirname))])
    status, retcode, is_timeout, output = ret
    LOG.info("Task %d done, status:%s, retcode:%d, is_timeout:%d",
             qtask.id, status, retcode, is_timeout)
    LOG.info(" == Task %d output start", qtask.id)
    for line in output.split("\n"):
        LOG.info(line)
    LOG.info(" == Task %d output end", qtask.id)
    if len(output) > MAX_COMMENT_SIZE:
        output = output[:MAX_COMMENT_SIZE]
        output += u"\n...\nTRUNCATED"
    if is_timeout:
        output += u"\nTIMEOUT ({} sec)".format(qtask.course["timeout"])
    comment = u"[id:{}] Check DONE!<br>\nSubmitted on {}<br>\n<pre>{}</pre>\n".format(
        qtask.id, qtask.event_timestamp, output)
    LOG.info("{}/api/v1/issue/{}/add_comment".format(qtask.host, qtask.issue_id))
    response = requests.post("{}/api/v1/issue/{}/add_comment".format(qtask.host, qtask.issue_id),
                             auth=qtask.auth,
                             data={"comment": comment.encode("utf-8")},
                             timeout=REQUEST_TIMEOUT)
    response.raise_for_status()
    LOG.info(" == Task %d DONE!, URL: %s/issue/%d", qtask.id, qtask.host, qtask.issue_id)
    return qtask
def load_passwords(filename=PASSWORDS):
    with open(filename) as config_fn:
        return json.load(config_fn)
def load_config(filename=CONFIG):
    with open(filename) as config_fn:
        config_arr = json.load(config_fn)
    config_dict = {}
    for course in config_arr:
        config_dict[course["course_id"]] = course
    return config_dict
def get_auth(passwords, host):
    host_auth = passwords[host]
    return (host_auth["username"], host_auth["password"])
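# Expected passwords.json shape (illustrative only; host and credentials here
# are hypothetical, not from the repo):
#     {"https://anytask.example.org": {"username": "bot", "password": "***"}}
# so that get_auth(passwords, "https://anytask.example.org") returns the
# (username, password) tuple that requests expects for basic auth.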
config = load_config()
passwords = load_passwords()
pool = Pool(processes=PROCS)
def put_to_pool(task):
    course_id = task["course_id"]
    course = config[course_id]
    auth = get_auth(passwords, course["host"])
    files = task["files"]

    qtask = QueueTask()
    qtask.host = course["host"]
    qtask.auth = auth
    qtask.course = course
    qtask.task = task["title"]
    qtask.issue_id = task["issue_id"]
    qtask.files = files
    qtask.id = task["event"]["id"]
    qtask.event_timestamp = task["event"]["timestamp"]
    print(qtask)
    pool.apply_async(process_task, args=(qtask,))
| 29.312925 | 116 | 0.601764 | 548 | 4,309 | 4.569343 | 0.260949 | 0.02516 | 0.017572 | 0.019169 | 0.144968 | 0.058307 | 0.03754 | 0.03754 | 0.03754 | 0.03754 | 0 | 0.004085 | 0.261546 | 4,309 | 146 | 117 | 29.513699 | 0.782841 | 0.00789 | 0 | 0.017857 | 0 | 0 | 0.131493 | 0.01942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080357 | false | 0.0625 | 0.098214 | 0.008929 | 0.3125 | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
370bb514404727469781e53bb089355e3b933806 | 1,201 | py | Python | polling_stations/apps/data_collection/management/commands/import_monmouthshire.py | mtravis/UK-Polling-Stations | 26e0331dc29253dc436a0462ffaa01e974c5dc52 | [
"BSD-3-Clause"
] | null | null | null | polling_stations/apps/data_collection/management/commands/import_monmouthshire.py | mtravis/UK-Polling-Stations | 26e0331dc29253dc436a0462ffaa01e974c5dc52 | [
"BSD-3-Clause"
] | null | null | null | polling_stations/apps/data_collection/management/commands/import_monmouthshire.py | mtravis/UK-Polling-Stations | 26e0331dc29253dc436a0462ffaa01e974c5dc52 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib.gis.geos import Point
from data_collection.management.commands import BaseShpStationsShpDistrictsImporter
class Command(BaseShpStationsShpDistrictsImporter):
    srid = 27700
    council_id = "W06000021"
    districts_name = "polling_district"
    stations_name = "polling_station.shp"
    elections = ["local.monmouthshire.2017-05-04", "parl.2017-06-08"]

    def district_record_to_dict(self, record):
        return {
            "internal_council_id": str(record[1]).strip(),
            "name": str(record[1]).strip(),
            "polling_station_id": record[3],
        }

    def station_record_to_dict(self, record):
        station = {
            "internal_council_id": record[0],
            "postcode": "",
            "address": "%s\n%s" % (record[2].strip(), record[4].strip()),
        }
        if str(record[1]).strip() == "10033354925":
            """
            There is a dodgy point in this file.
            It has too many digits for a UK national grid reference.
            Joe queried, Monmouthshire provided this corrected point by email
            """
            station["location"] = Point(335973, 206322, srid=27700)
        return station
| 32.459459 | 83 | 0.613655 | 133 | 1,201 | 5.406015 | 0.609023 | 0.037552 | 0.041725 | 0.062587 | 0.061196 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072893 | 0.268943 | 1,201 | 36 | 84 | 33.361111 | 0.746014 | 0 | 0 | 0 | 0 | 0 | 0.191878 | 0.030457 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0.043478 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
370fc257e4ad1d9ff8e001439c3aa8ae3d6aba1a | 808 | py | Python | lab-sessions/lab-3/ex3_gray_scale.py | DatacollectorVN/BME-Bio-Image-Processing-class | bc750f190398a1c29e2a8cd8092ced2072ce02e9 | [
"MIT"
] | null | null | null | lab-sessions/lab-3/ex3_gray_scale.py | DatacollectorVN/BME-Bio-Image-Processing-class | bc750f190398a1c29e2a8cd8092ced2072ce02e9 | [
"MIT"
] | null | null | null | lab-sessions/lab-3/ex3_gray_scale.py | DatacollectorVN/BME-Bio-Image-Processing-class | bc750f190398a1c29e2a8cd8092ced2072ce02e9 | [
"MIT"
] | null | null | null | import cv2
import numpy as np
import argparse
def main(image_file_path):
    img = cv2.imread(image_file_path)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
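    # For reference, OpenCV's BGR->gray conversion uses the standard luma
    # weights (stated here from the OpenCV docs, not verified on this build):
    #     gray = 0.299*R + 0.587*G + 0.114*B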
    name_window_1 = "original"
    name_window_2 = "grayscale"
    while True:
        cv2.imshow(name_window_1, img)
        cv2.imshow(name_window_2, img_gray)
        key = cv2.waitKey(0)
        # press ESC to close
        if key == 27:
            break
    # destroy all windows
    cv2.destroyAllWindows()
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--imagepath", dest="image_file_path", type=str,
                        default=None, help="Image file path")
    args = parser.parse_args()
    image_file_path = args.image_file_path
    main(image_file_path) | 27.862069 | 76 | 0.632426 | 104 | 808 | 4.596154 | 0.509615 | 0.131799 | 0.190377 | 0.07113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027257 | 0.273515 | 808 | 29 | 77 | 27.862069 | 0.787053 | 0.04703 | 0 | 0 | 0 | 0 | 0.085938 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.136364 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
371029d250aeabea72732a867201c7c53e2e6057 | 862 | py | Python | project/tests/GUI/tools.py | RemuTeam/Remu | a7d100ff9002b1b1d27249f8adf510b5a89c09e3 | [
"MIT"
] | 2 | 2017-09-18T11:04:38.000Z | 2017-09-25T17:23:21.000Z | project/tests/GUI/tools.py | RemuTeam/Remu | a7d100ff9002b1b1d27249f8adf510b5a89c09e3 | [
"MIT"
] | 26 | 2017-09-20T09:11:10.000Z | 2017-12-11T12:21:56.000Z | project/tests/GUI/tools.py | RemuTeam/Remu | a7d100ff9002b1b1d27249f8adf510b5a89c09e3 | [
"MIT"
] | null | null | null | from functools import partial
from kivy.clock import Clock
def to_task(s):
    s.press("//MenuButtonTitled[@name='LOGO']")
    s.assert_on_screen('activity')
    s.press('//StartNowButton')
    s.assert_on_screen('tasks')
    s.tap("//TestIntro//TestCarouselForwardButton")
    s.assert_on_screen("test", manager_selector="//TasksScreen/ScreenManager")
    s.tap("//BlinkImageButton[@name='task_icon']")
def without_schedule_seconds(function):
    def inner(*args, **kwargs):
        function(*args[:-1], **kwargs)
    return inner
def simulate(function):
    def simulate_inner(simulator, params):
        simulator.start(function, params or {})
    return simulate_inner
def execution_step(function):
    def execution_step_inner(self, *args, **kwargs):
        self.execution_queue.append((function, args, kwargs))
    return execution_step_inner
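# Usage sketch (hypothetical class, for illustration): a method decorated with
# @execution_step is recorded on self.execution_queue instead of running.
#
#     class Recorder:
#         def __init__(self):
#             self.execution_queue = []
#
#         @execution_step
#         def step(self, x):
#             pass
#
# Recorder().step(1) appends (step, (1,), {}) to the instance's queue.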
| 24.628571 | 78 | 0.701856 | 103 | 862 | 5.68932 | 0.475728 | 0.035836 | 0.046075 | 0.076792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001379 | 0.158933 | 862 | 34 | 79 | 25.352941 | 0.806897 | 0 | 0 | 0 | 0 | 0 | 0.193961 | 0.155633 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.318182 | false | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3710f1bb3487fa331fc10580553cc5631bd8c85e | 808 | py | Python | auctions/migrations/0020_bids_bidder_alter_list_date.py | AncientSoup/cs50w_commerce | fb4cb8a47279e562f1d4a859abbf44ea5a7d9891 | [
"MIT"
] | 1 | 2022-01-25T10:40:44.000Z | 2022-01-25T10:40:44.000Z | auctions/migrations/0020_bids_bidder_alter_list_date.py | AncientSoup/cs50w_commerce | fb4cb8a47279e562f1d4a859abbf44ea5a7d9891 | [
"MIT"
] | null | null | null | auctions/migrations/0020_bids_bidder_alter_list_date.py | AncientSoup/cs50w_commerce | fb4cb8a47279e562f1d4a859abbf44ea5a7d9891 | [
"MIT"
] | null | null | null | # Generated by Django 4.0.1 on 2022-02-12 11:07
import datetime
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
from django.utils.timezone import utc
class Migration(migrations.Migration):

    dependencies = [
        ('auctions', '0019_alter_list_date_alter_list_price'),
    ]

    operations = [
        migrations.AddField(
            model_name='bids',
            name='bidder',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
        ),
        migrations.AlterField(
            model_name='list',
            name='date',
            field=models.DateTimeField(default=datetime.datetime(2022, 2, 12, 11, 7, 47, 65691, tzinfo=utc)),
        ),
    ]
| 28.857143 | 133 | 0.653465 | 98 | 808 | 5.27551 | 0.581633 | 0.058027 | 0.054159 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058252 | 0.235149 | 808 | 27 | 134 | 29.925926 | 0.778317 | 0.055693 | 0 | 0.095238 | 1 | 0 | 0.082786 | 0.04862 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.238095 | 0 | 0.380952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
37164ce047902fb36b1255b04be946281d2676f6 | 2,583 | py | Python | review/migrations/0004_auto_20170315_0930.py | kgdunn/peer-review-system | 1fd5ac9d0f84d7637a86682e9e5fc068ac404afd | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | review/migrations/0004_auto_20170315_0930.py | kgdunn/peer-review-system | 1fd5ac9d0f84d7637a86682e9e5fc068ac404afd | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | review/migrations/0004_auto_20170315_0930.py | kgdunn/peer-review-system | 1fd5ac9d0f84d7637a86682e9e5fc068ac404afd | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-03-15 08:30
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('review', '0003_auto_20170314_2217'),
    ]

    operations = [
        migrations.CreateModel(
            name='GradeComponent',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('order', models.PositiveSmallIntegerField(default=0.0, help_text='Used to order the display of grade items')),
                ('explanation', models.TextField(help_text='HTML is possible; used in the template. Can include template elements.', max_length=500)),
                ('weight', models.FloatField(default=0.0, help_text='Values must be between 0.0 and 1.0. It is your responsibility to make sure the total weights do not sum to over 1.0 (i.e. 100%)')),
                ('extra_detail', models.CharField(blank=True, choices=[('peer', 'peer'), ('instructor', 'instructor')], help_text='Extra information used to help distinguish a phase. For example, the Peer-Evaluation phase is used for instructors as well as peers to evaluate. But the instructor(s) grades must get a higher weight. This is used to split the code.', max_length=50)),
            ],
        ),
        migrations.CreateModel(
            name='GradeReportPhase',
            fields=[
                ('prphase_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='review.PRPhase')),
            ],
            bases=('review.prphase',),
        ),
        migrations.AlterField(
            model_name='prphase',
            name='end_dt',
            field=models.DateTimeField(blank=True, verbose_name='End of this phase'),
        ),
        migrations.AlterField(
            model_name='prphase',
            name='start_dt',
            field=models.DateTimeField(blank=True, verbose_name='Start of this phase'),
        ),
        migrations.AddField(
            model_name='gradecomponent',
            name='phase',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='review.PRPhase'),
        ),
        migrations.AddField(
            model_name='gradecomponent',
            name='pr',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='review.PR_process'),
        ),
    ]
| 47.833333 | 387 | 0.629113 | 302 | 2,583 | 5.268212 | 0.466887 | 0.025141 | 0.035198 | 0.055311 | 0.332495 | 0.311125 | 0.164048 | 0.140792 | 0.082967 | 0.082967 | 0 | 0.026194 | 0.246225 | 2,583 | 53 | 388 | 48.735849 | 0.79096 | 0.026326 | 0 | 0.434783 | 1 | 0.043478 | 0.303344 | 0.009156 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.065217 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
372f9e118f442669abaa10df5175221694562ac7 | 22,776 | py | Python | pypy/module/_ssl/interp_ssl.py | camillobruni/pygirl | ddbd442d53061d6ff4af831c1eab153bcc771b5a | [
"MIT"
] | 12 | 2016-01-06T07:10:28.000Z | 2021-05-13T23:02:02.000Z | pypy/module/_ssl/interp_ssl.py | camillobruni/pygirl | ddbd442d53061d6ff4af831c1eab153bcc771b5a | [
"MIT"
] | null | null | null | pypy/module/_ssl/interp_ssl.py | camillobruni/pygirl | ddbd442d53061d6ff4af831c1eab153bcc771b5a | [
"MIT"
] | 2 | 2016-07-29T07:09:50.000Z | 2016-10-16T08:50:26.000Z | from pypy.rpython.rctypes.tool import ctypes_platform
from pypy.rpython.rctypes.tool.libc import libc
import pypy.rpython.rctypes.implementation # this defines rctypes magic
from pypy.interpreter.error import OperationError
from pypy.interpreter.baseobjspace import W_Root, ObjSpace, Wrappable
from pypy.interpreter.typedef import TypeDef
from pypy.interpreter.gateway import interp2app
from ctypes import *
import ctypes.util
import sys
import socket
import select
from ssl import SSL_CTX, SSL, X509, SSL_METHOD, X509_NAME
from bio import BIO
c_void = None
libssl = cdll.LoadLibrary(ctypes.util.find_library("ssl"))
## user defined constants
X509_NAME_MAXLEN = 256
# these mirror ssl.h
PY_SSL_ERROR_NONE, PY_SSL_ERROR_SSL = 0, 1
PY_SSL_ERROR_WANT_READ, PY_SSL_ERROR_WANT_WRITE = 2, 3
PY_SSL_ERROR_WANT_X509_LOOKUP = 4
PY_SSL_ERROR_SYSCALL = 5 # look at error stack/return value/errno
PY_SSL_ERROR_ZERO_RETURN, PY_SSL_ERROR_WANT_CONNECT = 6, 7
# start of non ssl.h errorcodes
PY_SSL_ERROR_EOF = 8 # special case of SSL_ERROR_SYSCALL
PY_SSL_ERROR_INVALID_ERROR_CODE = 9
SOCKET_IS_NONBLOCKING, SOCKET_IS_BLOCKING = 0, 1
SOCKET_HAS_TIMED_OUT, SOCKET_HAS_BEEN_CLOSED = 2, 3
SOCKET_TOO_LARGE_FOR_SELECT, SOCKET_OPERATION_OK = 4, 5
class CConfig:
    _header_ = """
    #include <openssl/ssl.h>
    #include <openssl/opensslv.h>
    #include <openssl/bio.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/poll.h>
    """
    OPENSSL_VERSION_NUMBER = ctypes_platform.ConstantInteger(
        "OPENSSL_VERSION_NUMBER")
    SSL_FILETYPE_PEM = ctypes_platform.ConstantInteger("SSL_FILETYPE_PEM")
    SSL_OP_ALL = ctypes_platform.ConstantInteger("SSL_OP_ALL")
    SSL_VERIFY_NONE = ctypes_platform.ConstantInteger("SSL_VERIFY_NONE")
    SSL_ERROR_WANT_READ = ctypes_platform.ConstantInteger(
        "SSL_ERROR_WANT_READ")
    SSL_ERROR_WANT_WRITE = ctypes_platform.ConstantInteger(
        "SSL_ERROR_WANT_WRITE")
    SSL_ERROR_ZERO_RETURN = ctypes_platform.ConstantInteger(
        "SSL_ERROR_ZERO_RETURN")
    SSL_ERROR_WANT_X509_LOOKUP = ctypes_platform.ConstantInteger(
        "SSL_ERROR_WANT_X509_LOOKUP")
    SSL_ERROR_WANT_CONNECT = ctypes_platform.ConstantInteger(
        "SSL_ERROR_WANT_CONNECT")
    SSL_ERROR_SYSCALL = ctypes_platform.ConstantInteger("SSL_ERROR_SYSCALL")
    SSL_ERROR_SSL = ctypes_platform.ConstantInteger("SSL_ERROR_SSL")
    FD_SETSIZE = ctypes_platform.ConstantInteger("FD_SETSIZE")
    SSL_CTRL_OPTIONS = ctypes_platform.ConstantInteger("SSL_CTRL_OPTIONS")
    BIO_C_SET_NBIO = ctypes_platform.ConstantInteger("BIO_C_SET_NBIO")
    pollfd = ctypes_platform.Struct("struct pollfd",
        [("fd", c_int), ("events", c_short), ("revents", c_short)])
    nfds_t = ctypes_platform.SimpleType("nfds_t", c_uint)
    POLLOUT = ctypes_platform.ConstantInteger("POLLOUT")
    POLLIN = ctypes_platform.ConstantInteger("POLLIN")


class cConfig:
    pass
cConfig.__dict__.update(ctypes_platform.configure(CConfig))
OPENSSL_VERSION_NUMBER = cConfig.OPENSSL_VERSION_NUMBER
HAVE_OPENSSL_RAND = OPENSSL_VERSION_NUMBER >= 0x0090500fL
SSL_FILETYPE_PEM = cConfig.SSL_FILETYPE_PEM
SSL_OP_ALL = cConfig.SSL_OP_ALL
SSL_VERIFY_NONE = cConfig.SSL_VERIFY_NONE
SSL_ERROR_WANT_READ = cConfig.SSL_ERROR_WANT_READ
SSL_ERROR_WANT_WRITE = cConfig.SSL_ERROR_WANT_WRITE
SSL_ERROR_ZERO_RETURN = cConfig.SSL_ERROR_ZERO_RETURN
SSL_ERROR_WANT_X509_LOOKUP = cConfig.SSL_ERROR_WANT_X509_LOOKUP
SSL_ERROR_WANT_CONNECT = cConfig.SSL_ERROR_WANT_CONNECT
SSL_ERROR_SYSCALL = cConfig.SSL_ERROR_SYSCALL
SSL_ERROR_SSL = cConfig.SSL_ERROR_SSL
FD_SETSIZE = cConfig.FD_SETSIZE
SSL_CTRL_OPTIONS = cConfig.SSL_CTRL_OPTIONS
BIO_C_SET_NBIO = cConfig.BIO_C_SET_NBIO
POLLOUT = cConfig.POLLOUT
POLLIN = cConfig.POLLIN
pollfd = cConfig.pollfd
nfds_t = cConfig.nfds_t
arr_x509 = c_char * X509_NAME_MAXLEN
constants = {}
constants["SSL_ERROR_ZERO_RETURN"] = PY_SSL_ERROR_ZERO_RETURN
constants["SSL_ERROR_WANT_READ"] = PY_SSL_ERROR_WANT_READ
constants["SSL_ERROR_WANT_WRITE"] = PY_SSL_ERROR_WANT_WRITE
constants["SSL_ERROR_WANT_X509_LOOKUP"] = PY_SSL_ERROR_WANT_X509_LOOKUP
constants["SSL_ERROR_SYSCALL"] = PY_SSL_ERROR_SYSCALL
constants["SSL_ERROR_SSL"] = PY_SSL_ERROR_SSL
constants["SSL_ERROR_WANT_CONNECT"] = PY_SSL_ERROR_WANT_CONNECT
constants["SSL_ERROR_EOF"] = PY_SSL_ERROR_EOF
constants["SSL_ERROR_INVALID_ERROR_CODE"] = PY_SSL_ERROR_INVALID_ERROR_CODE
libssl.SSL_load_error_strings.restype = c_void
libssl.SSL_library_init.restype = c_int
if HAVE_OPENSSL_RAND:
    libssl.RAND_add.argtypes = [c_char_p, c_int, c_double]
    libssl.RAND_add.restype = c_void
    libssl.RAND_status.restype = c_int
    libssl.RAND_egd.argtypes = [c_char_p]
    libssl.RAND_egd.restype = c_int
libssl.SSL_CTX_new.argtypes = [POINTER(SSL_METHOD)]
libssl.SSL_CTX_new.restype = POINTER(SSL_CTX)
libssl.SSLv23_method.restype = POINTER(SSL_METHOD)
libssl.SSL_CTX_use_PrivateKey_file.argtypes = [POINTER(SSL_CTX), c_char_p, c_int]
libssl.SSL_CTX_use_PrivateKey_file.restype = c_int
libssl.SSL_CTX_use_certificate_chain_file.argtypes = [POINTER(SSL_CTX), c_char_p]
libssl.SSL_CTX_use_certificate_chain_file.restype = c_int
libssl.SSL_CTX_ctrl.argtypes = [POINTER(SSL_CTX), c_int, c_int, c_void_p]
libssl.SSL_CTX_ctrl.restype = c_int
libssl.SSL_CTX_set_verify.argtypes = [POINTER(SSL_CTX), c_int, c_void_p]
libssl.SSL_CTX_set_verify.restype = c_void
libssl.SSL_new.argtypes = [POINTER(SSL_CTX)]
libssl.SSL_new.restype = POINTER(SSL)
libssl.SSL_set_fd.argtypes = [POINTER(SSL), c_int]
libssl.SSL_set_fd.restype = c_int
libssl.BIO_ctrl.argtypes = [POINTER(BIO), c_int, c_int, c_void_p]
libssl.BIO_ctrl.restype = c_int
libssl.SSL_get_rbio.argtypes = [POINTER(SSL)]
libssl.SSL_get_rbio.restype = POINTER(BIO)
libssl.SSL_get_wbio.argtypes = [POINTER(SSL)]
libssl.SSL_get_wbio.restype = POINTER(BIO)
libssl.SSL_set_connect_state.argtypes = [POINTER(SSL)]
libssl.SSL_set_connect_state.restype = c_void
libssl.SSL_connect.argtypes = [POINTER(SSL)]
libssl.SSL_connect.restype = c_int
libssl.SSL_get_error.argtypes = [POINTER(SSL), c_int]
libssl.SSL_get_error.restype = c_int
have_poll = False
if hasattr(libc, "poll"):
    have_poll = True
    libc.poll.argtypes = [POINTER(pollfd), nfds_t, c_int]
    libc.poll.restype = c_int
libssl.ERR_get_error.restype = c_int
libssl.ERR_error_string.argtypes = [c_int, c_char_p]
libssl.ERR_error_string.restype = c_char_p
libssl.SSL_get_peer_certificate.argtypes = [POINTER(SSL)]
libssl.SSL_get_peer_certificate.restype = POINTER(X509)
libssl.X509_get_subject_name.argtypes = [POINTER(X509)]
libssl.X509_get_subject_name.restype = POINTER(X509_NAME)
libssl.X509_get_issuer_name.argtypes = [POINTER(X509)]
libssl.X509_get_issuer_name.restype = POINTER(X509_NAME)
libssl.X509_NAME_oneline.argtypes = [POINTER(X509_NAME), arr_x509, c_int]
libssl.X509_NAME_oneline.restype = c_char_p
libssl.X509_free.argtypes = [POINTER(X509)]
libssl.X509_free.restype = c_void
libssl.SSL_free.argtypes = [POINTER(SSL)]
libssl.SSL_free.restype = c_void
libssl.SSL_CTX_free.argtypes = [POINTER(SSL_CTX)]
libssl.SSL_CTX_free.restype = c_void
libssl.SSL_write.argtypes = [POINTER(SSL), c_char_p, c_int]
libssl.SSL_write.restype = c_int
libssl.SSL_pending.argtypes = [POINTER(SSL)]
libssl.SSL_pending.restype = c_int
libssl.SSL_read.argtypes = [POINTER(SSL), c_char_p, c_int]
libssl.SSL_read.restype = c_int
def _init_ssl():
    libssl.SSL_load_error_strings()
    libssl.SSL_library_init()
if HAVE_OPENSSL_RAND:
    # helper routines for seeding the SSL PRNG

    def RAND_add(space, string, entropy):
        """RAND_add(string, entropy)

        Mix string into the OpenSSL PRNG state. entropy (a float) is a lower
        bound on the entropy contained in string."""
        buf = c_char_p(string)
        libssl.RAND_add(buf, len(string), entropy)
    RAND_add.unwrap_spec = [ObjSpace, str, float]

    def RAND_status(space):
        """RAND_status() -> 0 or 1

        Returns 1 if the OpenSSL PRNG has been seeded with enough data and 0 if not.
        It is necessary to seed the PRNG with RAND_add() on some platforms before
        using the ssl() function."""
        res = libssl.RAND_status()
        return space.wrap(res)
    RAND_status.unwrap_spec = [ObjSpace]

    def RAND_egd(space, path):
        """RAND_egd(path) -> bytes

        Queries the entropy gather daemon (EGD) on socket path. Returns number
        of bytes read. Raises socket.sslerror if connection to EGD fails or
        if it does not provide enough data to seed PRNG."""
        socket_path = c_char_p(path)
        bytes = libssl.RAND_egd(socket_path)
        if bytes == -1:
            msg = "EGD connection failed or EGD did not return"
            msg += " enough data to seed the PRNG"
            raise OperationError(space.w_Exception, space.wrap(msg))
        return space.wrap(bytes)
    RAND_egd.unwrap_spec = [ObjSpace, str]
class SSLObject(Wrappable):
    def __init__(self, space):
        self.space = space
        self.w_socket = None
        self.ctx = POINTER(SSL_CTX)()
        self.ssl = POINTER(SSL)()
        self.server_cert = POINTER(X509)()
        self._server = arr_x509()
        self._issuer = arr_x509()

    def server(self):
        return self.space.wrap(self._server.value)
    server.unwrap_spec = ['self']

    def issuer(self):
        return self.space.wrap(self._issuer.value)
    issuer.unwrap_spec = ['self']

    def __del__(self):
        if self.server_cert:
            libssl.X509_free(self.server_cert)
        if self.ssl:
            libssl.SSL_free(self.ssl)
        if self.ctx:
            libssl.SSL_CTX_free(self.ctx)

    def write(self, data):
        """write(s) -> len

        Writes the string s into the SSL object. Returns the number
        of bytes written."""
        sockstate = check_socket_and_wait_for_timeout(self.space,
            self.w_socket, True)
        if sockstate == SOCKET_HAS_TIMED_OUT:
            raise OperationError(self.space.w_Exception,
                self.space.wrap("The write operation timed out"))
        elif sockstate == SOCKET_HAS_BEEN_CLOSED:
            raise OperationError(self.space.w_Exception,
                self.space.wrap("Underlying socket has been closed."))
        elif sockstate == SOCKET_TOO_LARGE_FOR_SELECT:
            raise OperationError(self.space.w_Exception,
                self.space.wrap("Underlying socket too large for select()."))

        num_bytes = 0
        while True:
            err = 0
            num_bytes = libssl.SSL_write(self.ssl, data, len(data))
            err = libssl.SSL_get_error(self.ssl, num_bytes)

            if err == SSL_ERROR_WANT_READ:
                sockstate = check_socket_and_wait_for_timeout(self.space,
                    self.w_socket, False)
            elif err == SSL_ERROR_WANT_WRITE:
                sockstate = check_socket_and_wait_for_timeout(self.space,
                    self.w_socket, True)
            else:
                sockstate = SOCKET_OPERATION_OK

            if sockstate == SOCKET_HAS_TIMED_OUT:
                raise OperationError(self.space.w_Exception,
                    self.space.wrap("The connect operation timed out"))
            elif sockstate == SOCKET_HAS_BEEN_CLOSED:
                raise OperationError(self.space.w_Exception,
                    self.space.wrap("Underlying socket has been closed."))
            elif sockstate == SOCKET_IS_NONBLOCKING:
                break

            if err == SSL_ERROR_WANT_READ or err == SSL_ERROR_WANT_WRITE:
                continue
            else:
                break

        if num_bytes > 0:
            return self.space.wrap(num_bytes)
        else:
            errstr, errval = _ssl_seterror(self.space, self, num_bytes)
            raise OperationError(self.space.w_Exception,
                self.space.wrap("%s: %d" % (errstr, errval)))
    write.unwrap_spec = ['self', str]

    def read(self, num_bytes=1024):
        """read([len]) -> string

        Read up to len bytes from the SSL socket."""
        count = libssl.SSL_pending(self.ssl)
        if not count:
            sockstate = check_socket_and_wait_for_timeout(self.space,
                self.w_socket, False)
            if sockstate == SOCKET_HAS_TIMED_OUT:
                raise OperationError(self.space.w_Exception,
                    self.space.wrap("The read operation timed out"))
            elif sockstate == SOCKET_TOO_LARGE_FOR_SELECT:
                raise OperationError(self.space.w_Exception,
                    self.space.wrap("Underlying socket too large for select()."))

        buf = create_string_buffer(num_bytes)

        while True:
            err = 0
            count = libssl.SSL_read(self.ssl, buf, num_bytes)
            err = libssl.SSL_get_error(self.ssl, count)

            if err == SSL_ERROR_WANT_READ:
                sockstate = check_socket_and_wait_for_timeout(self.space,
                    self.w_socket, False)
            elif err == SSL_ERROR_WANT_WRITE:
                sockstate = check_socket_and_wait_for_timeout(self.space,
                    self.w_socket, True)
            else:
                sockstate = SOCKET_OPERATION_OK

            if sockstate == SOCKET_HAS_TIMED_OUT:
                raise OperationError(self.space.w_Exception,
                    self.space.wrap("The read operation timed out"))
            elif sockstate == SOCKET_IS_NONBLOCKING:
                break

            if err == SSL_ERROR_WANT_READ or err == SSL_ERROR_WANT_WRITE:
                continue
            else:
                break

        if count <= 0:
            errstr, errval = _ssl_seterror(self.space, self, count)
            raise OperationError(self.space.w_Exception,
                self.space.wrap("%s: %d" % (errstr, errval)))

        if count != num_bytes:
            # resize
            data = buf.raw
            assert count >= 0
            try:
                new_data = data[0:count]
            except:
                raise OperationError(self.space.w_MemoryException,
                    self.space.wrap("error in resizing of the buffer."))
            buf = create_string_buffer(count)
            buf.raw = new_data

        return self.space.wrap(buf.value)
    read.unwrap_spec = ['self', int]
SSLObject.typedef = TypeDef("SSLObject",
    server = interp2app(SSLObject.server,
        unwrap_spec=SSLObject.server.unwrap_spec),
    issuer = interp2app(SSLObject.issuer,
        unwrap_spec=SSLObject.issuer.unwrap_spec),
    write = interp2app(SSLObject.write,
        unwrap_spec=SSLObject.write.unwrap_spec),
    read = interp2app(SSLObject.read, unwrap_spec=SSLObject.read.unwrap_spec)
)
def new_sslobject(space, w_sock, w_key_file, w_cert_file):
    ss = SSLObject(space)

    sock_fd = space.int_w(space.call_method(w_sock, "fileno"))
    w_timeout = space.call_method(w_sock, "gettimeout")
    if space.is_w(w_timeout, space.w_None):
        has_timeout = False
    else:
        has_timeout = True
    if space.is_w(w_key_file, space.w_None):
        key_file = None
    else:
        key_file = space.str_w(w_key_file)
    if space.is_w(w_cert_file, space.w_None):
        cert_file = None
    else:
        cert_file = space.str_w(w_cert_file)

    if ((key_file and not cert_file) or (not key_file and cert_file)):
        raise OperationError(space.w_Exception,
            space.wrap("Both the key & certificate files must be specified"))

    ss.ctx = libssl.SSL_CTX_new(libssl.SSLv23_method())  # set up context
    if not ss.ctx:
        raise OperationError(space.w_Exception, space.wrap("SSL_CTX_new error"))

    if key_file:
        ret = libssl.SSL_CTX_use_PrivateKey_file(ss.ctx, key_file,
            SSL_FILETYPE_PEM)
        if ret < 1:
            raise OperationError(space.w_Exception,
                space.wrap("SSL_CTX_use_PrivateKey_file error"))

        ret = libssl.SSL_CTX_use_certificate_chain_file(ss.ctx, cert_file)
        libssl.SSL_CTX_ctrl(ss.ctx, SSL_CTRL_OPTIONS, SSL_OP_ALL, c_void_p())
        if ret < 1:
            raise OperationError(space.w_Exception,
                space.wrap("SSL_CTX_use_certificate_chain_file error"))

    libssl.SSL_CTX_set_verify(ss.ctx, SSL_VERIFY_NONE, c_void_p())  # set verify level
    ss.ssl = libssl.SSL_new(ss.ctx)  # new ssl struct
    libssl.SSL_set_fd(ss.ssl, sock_fd)  # set the socket for SSL

    # If the socket is in non-blocking mode or timeout mode, set the BIO
    # to non-blocking mode (blocking is the default)
    if has_timeout:
        # Set both the read and write BIO's to non-blocking mode
        libssl.BIO_ctrl(libssl.SSL_get_rbio(ss.ssl), BIO_C_SET_NBIO, 1, c_void_p())
        libssl.BIO_ctrl(libssl.SSL_get_wbio(ss.ssl), BIO_C_SET_NBIO, 1, c_void_p())
    libssl.SSL_set_connect_state(ss.ssl)

    # Actually negotiate SSL connection
    # XXX If SSL_connect() returns 0, it's also a failure.
    sockstate = 0
    while True:
        ret = libssl.SSL_connect(ss.ssl)
        err = libssl.SSL_get_error(ss.ssl, ret)

        if err == SSL_ERROR_WANT_READ:
            sockstate = check_socket_and_wait_for_timeout(space, w_sock, False)
        elif err == SSL_ERROR_WANT_WRITE:
            sockstate = check_socket_and_wait_for_timeout(space, w_sock, True)
        else:
            sockstate = SOCKET_OPERATION_OK

        if sockstate == SOCKET_HAS_TIMED_OUT:
            raise OperationError(space.w_Exception,
                space.wrap("The connect operation timed out"))
        elif sockstate == SOCKET_HAS_BEEN_CLOSED:
            raise OperationError(space.w_Exception,
                space.wrap("Underlying socket has been closed."))
        elif sockstate == SOCKET_TOO_LARGE_FOR_SELECT:
            raise OperationError(space.w_Exception,
                space.wrap("Underlying socket too large for select()."))
        elif sockstate == SOCKET_IS_NONBLOCKING:
            break

        if err == SSL_ERROR_WANT_READ or err == SSL_ERROR_WANT_WRITE:
            continue
        else:
            break

    if ret < 0:
        errstr, errval = _ssl_seterror(space, ss, ret)
        raise OperationError(space.w_Exception,
            space.wrap("%s: %d" % (errstr, errval)))

    ss.server_cert = libssl.SSL_get_peer_certificate(ss.ssl)
    if ss.server_cert:
        libssl.X509_NAME_oneline(libssl.X509_get_subject_name(ss.server_cert),
            ss._server, X509_NAME_MAXLEN)
        libssl.X509_NAME_oneline(libssl.X509_get_issuer_name(ss.server_cert),
            ss._issuer, X509_NAME_MAXLEN)

    ss.w_socket = w_sock
    return ss
new_sslobject.unwrap_spec = [ObjSpace, W_Root, str, str]
def check_socket_and_wait_for_timeout(space, w_sock, writing):
    """If the socket has a timeout, do a select()/poll() on the socket.
    The argument writing indicates the direction.
    Returns one of the possibilities in the timeout_state enum (above)."""
    w_timeout = space.call_method(w_sock, "gettimeout")
    if space.is_w(w_timeout, space.w_None):
        return SOCKET_IS_BLOCKING
    elif space.int_w(w_timeout) == 0.0:
        return SOCKET_IS_NONBLOCKING
    sock_timeout = space.int_w(w_timeout)

    # guard against closed socket
    try:
        space.call_method(w_sock, "fileno")
    except:
        return SOCKET_HAS_BEEN_CLOSED

    sock_fd = space.int_w(space.call_method(w_sock, "fileno"))

    # Prefer poll, if available, since you can poll() any fd
    # which can't be done with select().
    if have_poll:
        _pollfd = pollfd()
        _pollfd.fd = sock_fd
        if writing:
            _pollfd.events = POLLOUT
        else:
            _pollfd.events = POLLIN

        # socket's timeout is in seconds, poll's timeout in ms
        timeout = int(sock_timeout * 1000 + 0.5)
        rc = libc.poll(byref(_pollfd), 1, timeout)
        if rc == 0:
            return SOCKET_HAS_TIMED_OUT
        else:
            return SOCKET_OPERATION_OK

    if sock_fd >= FD_SETSIZE:
        return SOCKET_TOO_LARGE_FOR_SELECT

    # construct the arguments for select
    sec = int(sock_timeout)
    usec = int((sock_timeout - sec) * 1e6)
    timeout = sec + usec * 0.000001
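    # Note (illustrative): sock_timeout comes from space.int_w() above, so any
    # fractional part was already truncated; e.g. a 2-second timeout gives
    # sec=2, usec=0 and hence timeout=2.0 for the select() call below.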
    # see if the socket is ready
    if writing:
        ret = select.select([], [sock_fd], [], timeout)
        r, w, e = ret
        if not w:
            return SOCKET_HAS_TIMED_OUT
        else:
            return SOCKET_OPERATION_OK
    else:
        ret = select.select([sock_fd], [], [], timeout)
        r, w, e = ret
        if not r:
            return SOCKET_HAS_TIMED_OUT
        else:
            return SOCKET_OPERATION_OK
def _ssl_seterror(space, ss, ret):
    assert ret <= 0

    err = libssl.SSL_get_error(ss.ssl, ret)
    errstr = ""
    errval = 0

    if err == SSL_ERROR_ZERO_RETURN:
        errstr = "TLS/SSL connection has been closed"
        errval = PY_SSL_ERROR_ZERO_RETURN
    elif err == SSL_ERROR_WANT_READ:
        errstr = "The operation did not complete (read)"
        errval = PY_SSL_ERROR_WANT_READ
    elif err == SSL_ERROR_WANT_WRITE:
        errstr = "The operation did not complete (write)"
        errval = PY_SSL_ERROR_WANT_WRITE
    elif err == SSL_ERROR_WANT_X509_LOOKUP:
        errstr = "The operation did not complete (X509 lookup)"
        errval = PY_SSL_ERROR_WANT_X509_LOOKUP
    elif err == SSL_ERROR_WANT_CONNECT:
        errstr = "The operation did not complete (connect)"
        errval = PY_SSL_ERROR_WANT_CONNECT
    elif err == SSL_ERROR_SYSCALL:
        e = libssl.ERR_get_error()
        if e == 0:
            if ret == 0 or space.is_w(ss.w_socket, space.w_None):
                errstr = "EOF occurred in violation of protocol"
                errval = PY_SSL_ERROR_EOF
            elif ret == -1:
                # the underlying BIO reported an I/O error
                return errstr, errval  # sock.errorhandler()?
            else:
                errstr = "Some I/O error occurred"
                errval = PY_SSL_ERROR_SYSCALL
        else:
            errstr = libssl.ERR_error_string(e, None)
            errval = PY_SSL_ERROR_SYSCALL
    elif err == SSL_ERROR_SSL:
        e = libssl.ERR_get_error()
        errval = PY_SSL_ERROR_SSL
        if e != 0:
            errstr = libssl.ERR_error_string(e, None)
        else:
            errstr = "A failure in the SSL library occurred"
    else:
        errstr = "Invalid error code"
        errval = PY_SSL_ERROR_INVALID_ERROR_CODE

    return errstr, errval
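# Illustrative outcome (assumed values, for reading the code above): a clean
# TLS shutdown maps to ("TLS/SSL connection has been closed",
# PY_SSL_ERROR_ZERO_RETURN), which callers then format as "%s: %d" when
# raising the wrapped exception.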
def ssl(space, w_socket, w_key_file=None, w_cert_file=None):
    """ssl(socket, [keyfile, certfile]) -> sslobject"""
    return space.wrap(new_sslobject(space, w_socket, w_key_file, w_cert_file))
ssl.unwrap_spec = [ObjSpace, W_Root, W_Root, W_Root]
| 38.472973 | 85 | 0.67514 | 3,160 | 22,776 | 4.539241 | 0.105696 | 0.047964 | 0.040156 | 0.016732 | 0.52705 | 0.415784 | 0.326896 | 0.252301 | 0.224484 | 0.203291 | 0 | 0.011976 | 0.237443 | 22,776 | 591 | 86 | 38.538071 | 0.813911 | 0.036881 | 0 | 0.277542 | 0 | 0 | 0.079422 | 0.011949 | 0 | 0 | 0 | 0 | 0.004237 | 0 | null | null | 0.002119 | 0.029661 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3730b8d9d97aea9d5b10c216bcc20f5a6594936c | 1,084 | py | Python | tests/test_easy_patient_name.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 1 | 2020-07-22T12:46:58.000Z | 2020-07-22T12:46:58.000Z | tests/test_easy_patient_name.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 54 | 2019-10-09T16:19:04.000Z | 2022-01-19T20:28:59.000Z | tests/test_easy_patient_name.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 2 | 2019-10-30T19:54:43.000Z | 2020-12-03T18:57:15.000Z | from phc.easy.patients.name import expand_name_value
def test_name():
    assert expand_name_value(
        [{"text": "ARA251 LO", "given": ["ARA251"], "family": "LO"}]
    ) == {"name_given_0": "ARA251", "name_family": "LO"}
def test_name_with_multiple_values():
    # NOTE: Official names are preferred first and then remaining names are
    # put in a separate column
    assert expand_name_value(
        [
            {
                "text": "Christian Di Lorenzo",
                "given": ["Christian"],
                "family": "Di Lorenzo",
            },
            {
                "use": "official",
                "given": ["Robert", "Christian"],
                "family": "Di Lorenzo",
            },
        ]
    ) == {
        "name_given_0": "Robert",
        "name_given_1": "Christian",
        "name_family": "Di Lorenzo",
        "name_use": "official",
        "other_names": [
            {
                "text": "Christian Di Lorenzo",
                "given": ["Christian"],
                "family": "Di Lorenzo",
            },
        ],
    }
| 27.794872 | 79 | 0.47048 | 98 | 1,084 | 4.989796 | 0.408163 | 0.110429 | 0.122699 | 0.147239 | 0.302658 | 0.208589 | 0.208589 | 0.208589 | 0.208589 | 0 | 0 | 0.017857 | 0.380074 | 1,084 | 38 | 80 | 28.526316 | 0.709821 | 0.084871 | 0 | 0.28125 | 0 | 0 | 0.308392 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.0625 | true | 0 | 0.03125 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2e9e6bf6e35ae4a1dfdd7e51e95b5be403c30f21 | 7,126 | py | Python | src/headtracking_network/live_training.py | NaviRice/HeadTracking | 8227cc247425ecacd3e789dbbac11d3e5103d3e2 | [
"MIT"
] | 1 | 2019-10-24T14:29:00.000Z | 2019-10-24T14:29:00.000Z | src/headtracking_network/live_training.py | NaviRice/HeadTracking | 8227cc247425ecacd3e789dbbac11d3e5103d3e2 | [
"MIT"
] | 7 | 2017-11-28T23:58:40.000Z | 2022-03-11T23:12:12.000Z | src/headtracking_network/live_training.py | NaviRice/HeadTracking | 8227cc247425ecacd3e789dbbac11d3e5103d3e2 | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
import os
import navirice_image_pb2
import cv2
import random
import sys
from navirice_generate_data import generate_bitmap_label
from navirice_helpers import navirice_image_to_np
from navirice_helpers import navirice_ir_to_np
from navirice_helpers import map_depth_and_rgb
from navirice_head_detect import get_head_from_img
tf.logging.set_verbosity(tf.logging.INFO)
def cnn_model_fn(features):
    # unknown batch size; height, width, and channel count
    input_layer = tf.reshape(features, [-1, 424, 512, 1])
    mp0 = input_layer
    mp1 = max_pool_2x2(mp0)
    mp2 = max_pool_2x2(mp1)
    mp3 = max_pool_2x2(mp2)
    encoder1 = coder(mp1, [10, 10, 1, 2], True)
    encoder2 = coder(mp2, [10, 10, 1, 4], True)
    encoder3 = coder(mp3, [10, 10, 1, 4], True)
    encoder4 = coder(encoder1, [10, 10, 2, 4], True)
    encoder5 = coder(encoder2, [10, 10, 4, 8], True)
    encoder6 = coder(encoder3, [10, 10, 4, 8], True)

    W_fc1 = weight_variable([256 * 212 * 4, 1024])
    encoder4_last_flat = tf.reshape(encoder4, [-1, 256 * 212 * 4])
    h_fc1 = tf.matmul(encoder4_last_flat, W_fc1)

    W_fc2 = weight_variable([128 * 106 * 8, 1024])
    encoder5_last_flat = tf.reshape(encoder5, [-1, 128 * 106 * 8])
    h_fc2 = tf.matmul(encoder5_last_flat, W_fc2)

    W_fc3 = weight_variable([64 * 53 * 8, 1024])
    encoder6_last_flat = tf.reshape(encoder6, [-1, 64 * 53 * 8])
    h_fc3 = tf.matmul(encoder6_last_flat, W_fc3)

    merge_layer = tf.nn.sigmoid(h_fc3 + h_fc2 + h_fc1)
    W_fc2 = weight_variable([1024, 3])
    h_fc2 = tf.nn.sigmoid(tf.matmul(merge_layer, W_fc2))
    return h_fc2
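# Shape sketch (derived from the fixed 424x512 input, for illustration):
# the three successive 2x2 max-pools yield 212x256, 106x128 and 53x64 feature
# maps, which is where the flatten sizes 256*212*4, 128*106*8 and 64*53*8
# used above come from.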
def coder(input_layer, shape, do_relu):
    W_conv = weight_variable(shape)
    if do_relu:
        h_conv = tf.nn.leaky_relu(conv2d(input_layer, W_conv))
        return h_conv
    else:
        h_conv = conv2d(input_layer, W_conv)
        return h_conv
def conv2d(x, W):
    """conv2d returns a 2d convolution layer with full stride."""
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
    """max_pool_2x2 downsamples a feature map by 2X."""
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


def weight_variable(shape):
    """weight_variable generates a weight variable of a given shape."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)


def bias_variable(shape):
    """bias_variable generates a bias variable of a given shape."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
def main():
    scale_val = 1.0 / 8.0

    x = tf.placeholder(tf.float32, [None, 424, 512, 1])
    y_ = tf.placeholder(tf.float32, [None, 3])
    y_conv = cnn_model_fn(x)
    # cost = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)
    cost = tf.square(y_ - y_conv)
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cost)
    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)

    print("------------------OUT SHAPES-------------------")
    print(y_.get_shape())
    print(y_conv.get_shape())
    print("-----------------------------------------------")

    cnt = 0
    from navirice_get_image import KinectClient
    kc = KinectClient('127.0.0.1', 29000)
    kc.navirice_capture_settings(False, True, True)

    s_train = False
    r_train = False
    train_set_input = []
    train_set_expected = []
    train_set_size = 100000
    saver = tf.train.Saver()

    while(True):
        img_set, last_count = kc.navirice_get_image()

        if(s_train):
            s_train = False
            if(img_set != None and img_set.IR.width > 0 and img_set.Depth.width > 0):
                ir_image = navirice_ir_to_np(img_set.IR)
                depth_image = navirice_image_to_np(img_set.Depth)
                inverted_depth = np.ones(depth_image.shape)
                inverted_depth = inverted_depth - depth_image
                cv_result = get_head_from_img(ir_image)
                if cv_result is not None:
                    arr = [cv_result[0], cv_result[1], cv_result[2]]
                    if len(train_set_input) < train_set_size:
                        train_set_input.append(inverted_depth)
                        train_set_expected.append(arr)
                    else:
                        if(random.randint(0, 10000) > -1):
                            i = random.randint(0, train_set_size - 1)
                            train_set_input[i] = inverted_depth
                            train_set_expected[i] = arr
                    # train_step.run(session=sess, feed_dict={x: train_set_input, y_: train_set_expected})
                    dp = inverted_depth.copy()
                    cv2.circle(dp, (int(cv_result[0] * 512), int(cv_result[1] * 424)), int(cv_result[2] * 400), (255, 0, 0), thickness=3, lineType=8, shift=0)
                    cv2.imshow("idl", dp)
                print("db count: ", len(train_set_input))

        if(img_set != None and img_set.IR.width > 0 and img_set.Depth.width > 0):
            depth_image = navirice_image_to_np(img_set.Depth)
            ir_image = navirice_ir_to_np(img_set.IR)
            inverted_depth = np.ones(depth_image.shape)
            inverted_depth = inverted_depth - depth_image
            tests = []
            tests.append(inverted_depth)
            outs = sess.run(y_conv, feed_dict={x: tests})
            xf = outs[0][0]
            yf = outs[0][1]
            radiusf = outs[0][2]
            print("nnoutput x:", xf, "y: ", yf, " r:", radiusf)
            if radiusf < 0:
                radiusf = 0
            cv2.circle(tests[0], (int(xf * 512), int(yf * 424)), int(radiusf * 400), (255, 0, 0), thickness=3, lineType=8, shift=0)
            cv2.imshow("id", tests[0])

        if(r_train):
            tsi = []
            tse = []
            for i in range(100):
                random_index = random.randint(0, len(train_set_input) - 1)
                tsi.append(train_set_input[random_index])
                tse.append(train_set_expected[random_index])
            print("TRAINING")
            train_step.run(session=sess, feed_dict={x: tsi, y_: tse})

        key = cv2.waitKey(10) & 0xFF
        # print("key: ", key)
        # train
        if(key == ord('t')):
            r_train = True
        # rest
        if(key == ord('r')):
            r_train = False
        # (space) capture
        if(key == 32):
            s_train = True
        # save model
        if(key == ord('s')):
            loc = input("Enter file destination to save: ")
            if(len(loc) > 0):
                try:
                    saver.save(sess, loc)
                except ValueError:
                    print("Error: Did not enter a path..")
        # load model
        if(key == ord('l')):
            loc = input("Enter file destination to load: ")
            if(len(loc) > 0):
                try:
                    saver.restore(sess, loc)
                except ValueError:
                    print("Error: no file with that destination")


if __name__ == "__main__":
    main()
| 33.772512 | 152 | 0.57648 | 986 | 7,126 | 3.932049 | 0.238337 | 0.033015 | 0.026825 | 0.010317 | 0.30668 | 0.222337 | 0.166108 | 0.150632 | 0.117617 | 0.083054 | 0 | 0.058975 | 0.293292 | 7,126 | 210 | 153 | 33.933333 | 0.710882 | 0.068201 | 0 | 0.171053 | 1 | 0 | 0.044175 | 0.01407 | 0 | 0 | 0.000605 | 0 | 0 | 1 | 0.046053 | false | 0 | 0.085526 | 0 | 0.177632 | 0.059211 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ea94d8d31634c3fa968dde28070a8a994acc38b | 9,148 | py | Python | zigpy_deconz_parser/commands/responses.py | zha-ng/zigpy-deconz-parser | 9182b3578f20a145ccd46b0cfa002613c4cd38db | [
"Apache-2.0"
] | 2 | 2020-02-06T00:00:10.000Z | 2022-02-25T23:47:30.000Z | zigpy_deconz_parser/commands/responses.py | zha-ng/zigpy-deconz-parser | 9182b3578f20a145ccd46b0cfa002613c4cd38db | [
"Apache-2.0"
] | 2 | 2020-04-08T11:57:46.000Z | 2020-05-13T13:32:03.000Z | zigpy_deconz_parser/commands/responses.py | zha-ng/zigpy-deconz-parser | 9182b3578f20a145ccd46b0cfa002613c4cd38db | [
"Apache-2.0"
] | null | null | null | import attr
import binascii
import zigpy.types as t
import zigpy_deconz.types as dt
import zigpy_deconz_parser.types as pt
@attr.s
class Version(pt.Command):
    SCHEMA = (t.uint32_t, )

    version = attr.ib(factory=SCHEMA[0])

    def pretty_print(self, *args):
        self.print("Version: 0x{:08x}".format(self.version))
@attr.s
class ReadParameter(pt.Command):
    SCHEMA = (t.uint16_t, pt.DeconzParameter, pt.Bytes)

    payload_length = attr.ib(factory=SCHEMA[0])
    parameter = attr.ib(factory=SCHEMA[1])
    value = attr.ib(factory=SCHEMA[2])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.print(str(self.parameter))
        self.print("Value: {}".format(binascii.hexlify(self.value)))


@attr.s
class WriteParameter(pt.Command):
    SCHEMA = (t.uint16_t, pt.DeconzParameter, )

    payload_length = attr.ib(factory=SCHEMA[0])
    parameter = attr.ib(factory=SCHEMA[1])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.print(str(self.parameter))


@attr.s
class DeviceState(pt.Command):
    SCHEMA = (pt.DeviceState, t.uint8_t, t.Optional(t.uint8_t), )

    device_state = attr.ib(factory=SCHEMA[0])
    reserved_2 = attr.ib(factory=SCHEMA[1])
    reserved_3 = attr.ib(factory=SCHEMA[2])

    def pretty_print(self, *args):
        self.device_state.pretty_print()
        self.print("Reserved: {} Shall be ignored".format(self.reserved_2))
        self.print("Reserved: {} Shall be ignored".format(self.reserved_3))


@attr.s
class ChangeNetworkState(pt.Command):
    SCHEMA = (pt.NetworkState, )

    network_state = attr.ib(factory=SCHEMA[0])

    def pretty_print(self, *args):
        self.print(str(self.network_state))


@attr.s
class DeviceStateChanged(pt.Command):
    SCHEMA = (pt.DeviceState, )

    device_state = attr.ib(factory=SCHEMA[0])

    def pretty_print(self, *args):
        self.device_state.pretty_print()
@attr.s
class ApsDataIndication(pt.Command):
    SCHEMA = (t.uint16_t, pt.DeviceState, dt.DeconzAddress, t.uint8_t,
              dt.DeconzAddress, t.uint8_t, t.uint16_t, t.uint16_t,
              t.LongOctetString, t.uint8_t, t.uint8_t, t.uint8_t, t.uint8_t,
              t.uint8_t, t.uint8_t, t.uint8_t, t.int8s, )

    payload_length = attr.ib(factory=SCHEMA[0])
    device_state = attr.ib(factory=SCHEMA[1])
    dst_addr = attr.ib(factory=SCHEMA[2])
    dst_ep = attr.ib(factory=SCHEMA[3])
    src_addr = attr.ib(factory=SCHEMA[4])
    src_ep = attr.ib(factory=SCHEMA[5])
    profile = attr.ib(factory=SCHEMA[6])
    cluster_id = attr.ib(factory=SCHEMA[7])
    asdu = attr.ib(factory=SCHEMA[8])
    reserved_1 = attr.ib(factory=SCHEMA[9])
    reserved_2 = attr.ib(factory=SCHEMA[10])
    lqi = attr.ib(factory=SCHEMA[11])
    reserved_3 = attr.ib(factory=SCHEMA[12])
    reserved_4 = attr.ib(factory=SCHEMA[13])
    reserved_5 = attr.ib(factory=SCHEMA[14])
    reserved_6 = attr.ib(factory=SCHEMA[15])
    rssi = attr.ib(factory=SCHEMA[16])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.device_state.pretty_print()
        if self.profile == 0 and self.dst_ep == 0:
            # ZDO
            request_id = t.uint8_t.deserialize(self.asdu)[0]
        else:
            # ZCL
            frame_control = self.asdu[0]
            if frame_control & 0b0100:
                request_id = self.asdu[3]
            else:
                request_id = self.asdu[1]
        headline = "\t\t Request id: [0x{:02x}] ". \
            format(request_id).ljust(self._lpad, '<')
        print(headline + ' Dst Addr: {}'.format(self.dst_addr))
        if self.dst_addr.address_mode in (1, 2, 4):
            self.print("Dst address: 0x{:04x}".format(self.dst_addr.address))
        self.print("Dst endpoint {}".format(self.dst_ep))
        self.print("Src address: {}".format(self.src_addr))
        if self.src_addr.address_mode in (1, 2, 4):
            self.print("Src address: 0x{:04x}".format(self.src_addr.address))
        self.print("Src endpoint: {}".format(self.src_ep))
        self.print("Profile id: 0x{:04x}".format(self.profile))
        self.print("Cluster id: 0x{:04x}".format(self.cluster_id))
        self.print("ASDU: {}".format(binascii.hexlify(self.asdu)))
        r = "reserved_1: 0x{:02x} Shall be ignored/Last hop since proto ver 0x0108"
        self.print(r.format(self.reserved_1))
        r = "reserved_2: 0x{:02x} Shall be ignored/Last hop since proto ver 0x0108"
        self.print(r.format(self.reserved_2))
        self.print("LQI: {}".format(self.lqi))
        self.print("reserved_3: 0x{:02x} Shall be ignored".format(self.reserved_3))
        self.print("reserved_4: 0x{:02x} Shall be ignored".format(self.reserved_4))
        self.print("reserved_5: 0x{:02x} Shall be ignored".format(self.reserved_5))
        self.print("reserved_6: 0x{:02x} Shall be ignored".format(self.reserved_6))
        self.print("RSSI: {}".format(self.rssi))
@attr.s
class ApsDataRequest(pt.Command):
    _lpad = pt.LPAD
    SCHEMA = (
        t.uint16_t,      # payload length
        pt.DeviceState,  # Device state
        t.uint8_t,       # request_id
    )

    payload_length = attr.ib(factory=SCHEMA[0])
    device_state = attr.ib(factory=SCHEMA[1])
    request_id = attr.ib(factory=SCHEMA[2])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        headline = "\t\t Request id: [0x{:02x}] ". \
            format(self.request_id).ljust(self._lpad, '<')
        print(headline + ' ' + '^^^ Above status ^^^')
        self.device_state.pretty_print()
@attr.s
class ApsDataConfirm(pt.Command):
    SCHEMA = (
        t.uint16_t,                # payload length
        pt.DeviceState,            # Device State
        t.uint8_t,                 # Request ID
        dt.DeconzAddressEndpoint,  # Destination address
        t.uint8_t,                 # Source endpoint
        pt.ConfirmStatus,          # Confirm Status
        t.uint8_t,                 # Reserved below
        t.uint8_t,
        t.uint8_t,
        t.uint8_t,
    )

    payload_length = attr.ib(factory=SCHEMA[0])
    device_state = attr.ib(factory=SCHEMA[1])
    request_id = attr.ib(factory=SCHEMA[2])
    dst_addr = attr.ib(factory=SCHEMA[3])
    src_ep = attr.ib(factory=SCHEMA[4])
    confirm_status = attr.ib(factory=SCHEMA[5])
    reserved_1 = attr.ib(factory=SCHEMA[6])
    reserved_2 = attr.ib(factory=SCHEMA[7])
    reserved_3 = attr.ib(factory=SCHEMA[8])
    reserved_4 = attr.ib(factory=SCHEMA[9])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.device_state.pretty_print()
        headline = "\t\t Request id: [0x{:02x}] ". \
            format(self.request_id).ljust(self._lpad, '<')
        print(headline + ' ' + str(self.dst_addr))
        if self.dst_addr.address_mode in (1, 2, 4):
            self.print("NWK: 0x{:04x}".format(self.dst_addr.address))
        self.print("Src endpoint: {}".format(self.src_ep))
        self.print("TX Status: {}".format(str(self.confirm_status)))
        r = "reserved_1: 0x{:02x} Shall be ignored"
        self.print(r.format(self.reserved_1))
        r = "reserved_2: 0x{:02x} Shall be ignored"
        self.print(r.format(self.reserved_2))
        r = "reserved_3: 0x{:02x} Shall be ignored"
        self.print(r.format(self.reserved_3))
        r = "reserved_4: 0x{:02x} Shall be ignored"
        self.print(r.format(self.reserved_4))
@attr.s
class MacPoll(pt.Command):
    SCHEMA = (t.uint16_t, dt.DeconzAddress, t.uint8_t, t.int8s, )

    payload_length = attr.ib(factory=SCHEMA[0])
    some_address = attr.ib(factory=SCHEMA[1])
    lqi = attr.ib(factory=SCHEMA[2])
    rssi = attr.ib(factory=SCHEMA[3])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.print("Address: {}".format(self.some_address))
        if self.some_address.address_mode in (1, 2, 4):
            self.print("Address: 0x{:04x}".format(self.some_address.address))
        self.print("LQI: {}".format(self.lqi))
        self.print("RSSI: {}".format(self.rssi))


@attr.s
class ZGPDataInd(pt.Command):
    SCHEMA = (t.LongOctetString, )

    payload = attr.ib(factory=t.LongOctetString)

    def pretty_print(self, *args):
        self.print('Payload: {}'.format(binascii.hexlify(self.payload)))


@attr.s
class SimpleBeacon(pt.Command):
    SCHEMA = (t.uint16_t, t.NWK, t.NWK, t.uint8_t, t.uint8_t, t.uint8_t, )

    payload_length = attr.ib(factory=SCHEMA[0])
    SrcNWK = attr.ib(factory=SCHEMA[1])
    PanId = attr.ib(factory=SCHEMA[2])
    channel = attr.ib(factory=SCHEMA[3])
    flags = attr.ib(factory=SCHEMA[4])
    updateId = attr.ib(factory=SCHEMA[5])

    def pretty_print(self, *args):
        self.print("Payload length: {}".format(self.payload_length))
        self.print("Source NWK: {}".format(self.SrcNWK))
        self.print("PAN ID: {}".format(self.PanId))
        self.print("Channel: {}".format(self.channel))
        self.print("Flags: 0x{:02x}".format(self.flags))
        self.print("Update id: 0x{:02x}".format(self.updateId))
| 35.049808 | 83 | 0.636314 | 1,288 | 9,148 | 4.395963 | 0.101708 | 0.055104 | 0.119392 | 0.171141 | 0.72236 | 0.633698 | 0.551572 | 0.520311 | 0.44154 | 0.39615 | 0 | 0.032565 | 0.207805 | 9,148 | 260 | 84 | 35.184615 | 0.748724 | 0.016506 | 0 | 0.423645 | 0 | 0 | 0.121438 | 0 | 0 | 0 | 0.001336 | 0 | 0 | 1 | 0.059113 | false | 0 | 0.024631 | 0 | 0.463054 | 0.330049 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2eb1e585fcbec5ec479747784f42d3567bceb246 | 1,294 | py | Python | submission_config.py | ege-k/nlp4nethack | 8b8b45a2f0be09c5233b33a47f421906e9e4b561 | [
"MIT"
] | null | null | null | submission_config.py | ege-k/nlp4nethack | 8b8b45a2f0be09c5233b33a47f421906e9e4b561 | [
"MIT"
] | null | null | null | submission_config.py | ege-k/nlp4nethack | 8b8b45a2f0be09c5233b33a47f421906e9e4b561 | [
"MIT"
] | null | null | null | from agents.custom_agent import CustomAgent
from agents.torchbeast_agent import TorchBeastAgent
from envs.wrappers import addtimelimitwrapper_fn
################################################
# Import your own agent code #
# Set Submission_Agent to your agent           #
# Set NUM_PARALLEL_ENVIRONMENTS as needed #
# Set submission_env_make_fn to your wrappers #
# Test with local_evaluation.py #
################################################
class SubmissionConfig:
## Add your own agent class
# AGENT = CustomAgent
AGENT = TorchBeastAgent
## Change the NUM_ENVIRONMENTS as you need
    ## for example, reduce it if it doesn't fit on your GPU
## Increasing above 32 is not advisable for the Nethack Challenge 2021
NUM_ENVIRONMENTS = 32
## Add a function that creates your nethack env
## Mainly this is to add wrappers
## Add your wrappers to envs/wrappers.py and change the name here
## IMPORTANT: Don't "call" the function, only provide the name
MAKE_ENV_FN = addtimelimitwrapper_fn
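
# A minimal sketch of what a wrapper factory such as addtimelimitwrapper_fn
# in envs/wrappers.py could look like; the env id and the step limit below
# are hypothetical, not taken from this repo:
#
#     import gym
#     from gym.wrappers import TimeLimit
#
#     def addtimelimitwrapper_fn():
#         env = gym.make("NetHackChallenge-v0")
#         return TimeLimit(env, max_episode_steps=10_000_000)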
class TestEvaluationConfig:
# Change this to locally check a different number of rollouts
# The AIcrowd submission evaluator will not use this
# It is only for your local evaluation
NUM_EPISODES = 512
| 33.179487 | 74 | 0.663833 | 161 | 1,294 | 5.236025 | 0.515528 | 0.023725 | 0.02847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010956 | 0.224111 | 1,294 | 38 | 75 | 34.052632 | 0.828685 | 0.592736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2ec35e8658fe8c350dc2c624d8f6bcac6016d398 | 12,527 | py | Python | plugin.video.220ro/default.py | keddyboys/keddy-repo | 5c3420828e19f97222714e0e8518a95d58b3f637 | [
"MIT"
] | 1 | 2019-09-08T05:39:36.000Z | 2019-09-08T05:39:36.000Z | plugin.video.220ro/default.py | keddyboys/keddy-repo | 5c3420828e19f97222714e0e8518a95d58b3f637 | [
"MIT"
] | 1 | 2017-12-03T09:17:31.000Z | 2019-01-13T08:48:40.000Z | plugin.video.220ro/default.py | keddyboys/keddy-repo | 5c3420828e19f97222714e0e8518a95d58b3f637 | [
"MIT"
] | null | null | null | import HTMLParser
import os
import re
import sys
import time
import urllib
import urllib2
import xbmc
import xbmcaddon
import xbmcgui
import xbmcplugin
__addon__ = xbmcaddon.Addon()
__cwd__ = xbmc.translatePath(__addon__.getAddonInfo('path')).decode("utf-8")
__resource__ = xbmc.translatePath(os.path.join(__cwd__, 'resources', 'lib')).decode("utf-8")
sys.path.append (__resource__)
settings = xbmcaddon.Addon(id='plugin.video.220ro')
search_thumb = os.path.join(settings.getAddonInfo('path'), 'resources', 'media', 'search.png')
movies_thumb = os.path.join(settings.getAddonInfo('path'), 'resources', 'media', 'movies.png')
next_thumb = os.path.join(settings.getAddonInfo('path'), 'resources', 'media', 'next.png')
def ROOT():
addDir('Video', 'http://www.220.ro/', 23, movies_thumb, 'video')
addDir('Shows', 'http://www.220.ro/', 23, movies_thumb, 'shows')
addDir('Best-Of', 'http://www.220.ro/', 23, movies_thumb, 'best-of')
addDir('Cauta', 'http://www.220.ro/', 3, search_thumb)
def CAUTA_LIST(url):
link = get_search(url)
match = re.compile('<div class=".+?>\n<div.+?\n<a.+?"(.+?)" title="(.+?)" class.+?\n<img src="(.+?)".+?\n.+?\n<span.+?>\n(.+?)\n', re.IGNORECASE | re.MULTILINE).findall(link)
if len(match) > 0:
print match
for legatura, name, img, length in match:
# name = HTMLParser.HTMLParser().unescape( codecs.decode(name, "unicode_escape") ) + " " + length
name = name + " " + length
the_link = legatura
image = img
sxaddLink(name, the_link, image, name, 10)
def CAUTA_VIDEO_LIST(url, meniu):
link = get_search(url)
# f = open( '/storage/.kodi/temp/files.py', 'w' )
# f.write( 'url = ' + repr(url) + '\n' )
# f.close()
if meniu == 'video':
match = re.compile('<div class=".+?>\n<a title="(.+?)" href="(.+?)" class=.+?><img.+?data-src="(.+?)".+?\n<span.+?\n(.+?)\n', re.IGNORECASE | re.MULTILINE).findall(link)
if len(match) > 0:
for name, legatura, img, length in match:
# name = HTMLParser.HTMLParser().unescape( codecs.decode(name, "unicode_escape") ) + " " + length
the_link = legatura
image = img
sxaddLink(name, the_link, image, name, 10, name, length)
elif meniu == 'shows':
match = re.compile('<div class="tabel_show">\n<a href="(.+?)" title="(.+?)".+? data-src="(.+?)".+?\n.+?\n.+?\n.+?\n<p>(.+?)</p>', re.IGNORECASE | re.MULTILINE).findall(link)
if len(match) > 0:
for legatura, name, image, descript in match:
addDir(name, legatura, 5, image, 'sub_shows', descript)
elif meniu == 'sub_shows':
match = re.compile('<div class="left thumbnail">\n<a href="(.+?)" title="(.+?)".+?data-src="(.+?)".+?<span.+?>(.+?)</span>.+?<p>(.+?)</p>', re.IGNORECASE | re.MULTILINE | re.DOTALL).findall(link)
if len(match) > 0:
for legatura, name, image, length, descript in match:
sxaddLink(name, legatura, image, name, 10, descript, length)
elif meniu == 'best-month':
match = re.compile('<div class=".+?>\n<div.+?\n<a.+?"(.+?)" title="(.+?)" class.+?\n<img src="(.+?)".+?\n.+?\n<span.+?>\n(.+?)\n.+?\n.+?\n.+?\n.+?\n<p>(.+?)</p>', re.IGNORECASE | re.MULTILINE).findall(link)
if len(match) > 0:
for legatura, name, image, length, descript in match:
sxaddLink(name, legatura, image, name, 10, descript, length)
match = re.compile('<li><a href=".+?" title="Pagina (\d+)">', re.IGNORECASE).findall(link)
if len(match) > 0:
if meniu == 'best-month':
page_num = re.compile('.+?220.+?\d+/\d+/(\d+)', re.IGNORECASE).findall(url)
nexturl = re.sub('.+?220.+?\d+/\d+/(\d+)', match[0], url)
else:
page_num = re.compile('.+?220.+?(\d+)', re.IGNORECASE).findall(url)
nexturl = re.sub('.+?220.+?(\d+)', match[0], url)
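        # NOTE: "/\d+" below is a literal substring, not a regex, so find()
        # returns -1 for normal URLs and the branch below always executes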
if nexturl.find("/\d+") == -1:
nexturl = url[:-1]
if page_num:
pagen = page_num[0]
pagen = int(pagen)
pagen += 1
nexturl += str(pagen)
else:
nexturl = url + match[0]
addNext('Next', nexturl, 5, next_thumb, meniu)
def CAUTA(url, autoSearch=None):
keyboard = xbmc.Keyboard('')
keyboard.doModal()
if (keyboard.isConfirmed() is False):
return
search_string = keyboard.getText()
if len(search_string) == 0:
return
if autoSearch is None:
autoSearch = ""
CAUTA_LIST(get_search_url(search_string + "" + autoSearch))
def CAUTA_VIDEO(url, gen, autoSearch=None):
CAUTA_VIDEO_LIST(get_search_video_url(gen), meniu=None)
def SXVIDEO_GENERIC_PLAY(sxurl):
progress = xbmcgui.DialogProgress()
progress.create('220.ro', 'Se incarca videoclipul \n')
url = sxurl
src = get_url(urllib.quote(url, safe="%/:=&?~#+!$,;'@()*[]"))
title = ''
# title
match = re.compile('<title>(.+?)<.+?>.?\s*.+?videosrc:\'(.+?)\'.+?og:description.+?"(.+?)".+?<p class="date">(.+?)</p>', re.IGNORECASE | re.DOTALL).findall(src)
title = HTMLParser.HTMLParser().unescape(match[0][0])
title = re.sub('VIDEO.?- ', '', title) + " " + match[0][3]
location = match[0][1]
progress.update(0, "", title, "")
if progress.iscanceled():
return False
listitem = xbmcgui.ListItem(path=location)
listitem.setInfo('video', {'Title': title, 'Plot': match[0][2]})
# xbmcplugin.setResolvedUrl(1, True, listitem)
progress.close()
xbmc.Player().play(item=(location + '|Host=s2.220.t1.ro'), listitem=listitem)
def get_url(url):
req = urllib2.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
try:
response = urllib2.urlopen(req)
link = response.read()
response.close()
return link
except:
return False
def get_search_url(keyword, offset=None):
url = 'http://www.220.ro/cauta/' + urllib.quote_plus(keyword) + '/video'
return url
def get_search_video_url(gen, offset=None):
url = 'http://www.220.ro/' + gen + '/'
return url
def get_search(url):
params = {}
req = urllib2.Request(url, urllib.urlencode(params))
req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
req.add_header('Content-type', 'application/x-www-form-urlencoded')
try:
response = urllib2.urlopen(req)
link = response.read()
response.close()
return link
except:
return False
def get_params():
param = []
paramstring = sys.argv[2]
if len(paramstring) >= 2:
params = sys.argv[2]
cleanedparams = params.replace('?', '')
if (params[len(params) - 1] == '/'):
params = params[0:len(params) - 2]
pairsofparams = cleanedparams.split('&')
param = {}
for i in range(len(pairsofparams)):
splitparams = {}
splitparams = pairsofparams[i].split('=')
if (len(splitparams)) == 2:
param[splitparams[0]] = splitparams[1]
return param
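# Example (hypothetical input): for sys.argv[2] equal to
# '?url=http%3A%2F%2Fwww.220.ro%2F&mode=5&meniu=video', get_params() returns
# {'url': 'http%3A%2F%2Fwww.220.ro%2F', 'mode': '5', 'meniu': 'video'}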
def sxaddLink(name, url, iconimage, movie_name, mode=4, descript=None, length=None):
ok = True
u = sys.argv[0] + "?url=" + urllib.quote_plus(url) + "&mode=" + str(mode) + "&name=" + urllib.quote_plus(name)
liz = xbmcgui.ListItem(name, iconImage=iconimage, thumbnailImage=iconimage)
if descript is not None:
liz.setInfo(type="Video", infoLabels={"Title": movie_name, "Plot": descript})
else:
liz.setInfo(type="Video", infoLabels={"Title": movie_name, "Plot": name})
if length is not None:
liz.setInfo(type="Video", infoLabels={"duration": int(get_sec(length))})
xbmcplugin.setContent(int(sys.argv[1]), 'movies')
ok = xbmcplugin.addDirectoryItem(handle=int(sys.argv[1]), url=u, listitem=liz, isFolder=False)
return ok
def get_sec(time_str):
m, s = time_str.split(':')
return int(m) * 60 + int(s)
def addLink(name, url, iconimage, movie_name):
ok = True
liz = xbmcgui.ListItem(name, iconImage="DefaultVideo.png", thumbnailImage=iconimage)
liz.setInfo(type="Video", infoLabels={"Title": movie_name})
ok = xbmcplugin.addDirectoryItem(handle=int(sys.argv[1]), url=url, listitem=liz)
return ok
def addNext(name, page, mode, iconimage, meniu=None):
u = sys.argv[0] + "?url=" + urllib.quote_plus(page) + "&mode=" + str(mode) + "&name=" + urllib.quote_plus(name)
if meniu is not None:
u += "&meniu=" + urllib.quote_plus(meniu)
liz = xbmcgui.ListItem(name, iconImage="DefaultFolder.png", thumbnailImage=iconimage)
liz.setInfo(type="Video", infoLabels={"Title": name})
xbmcplugin.setContent(int(sys.argv[1]), 'movies')
ok = xbmcplugin.addDirectoryItem(handle=int(sys.argv[1]), url=u, listitem=liz, isFolder=True)
return ok
def addDir(name, url, mode, iconimage, meniu=None, descript=None):
u = sys.argv[0] + "?url=" + urllib.quote_plus(url) + "&mode=" + str(mode) + "&name=" + urllib.quote_plus(name)
if meniu is not None:
u += "&meniu=" + urllib.quote_plus(meniu)
if descript is not None:
u += "&descriere=" + urllib.quote_plus(descript)
ok = True
liz = xbmcgui.ListItem(name, iconImage=iconimage, thumbnailImage=iconimage)
liz.setInfo(type="Video", infoLabels={"Genre": name})
if descript is not None:
liz.setInfo(type="Video", infoLabels={"Title": name, "Plot": descript})
else:
liz.setInfo(type="Video", infoLabels={"Title": name})
ok = xbmcplugin.addDirectoryItem(handle=int(sys.argv[1]), url=u, listitem=liz, isFolder=True)
return ok
def parse_menu(url, meniu):
if url is None:
url = 'http://www.220.ro/'
if meniu == 'video':
url = url + meniu + '/'
link = get_search(url)
match = re.compile('</a>\n<a title="(.+?)" href="(.+?)">', re.IGNORECASE | re.MULTILINE).findall(link)
match.append(['Sexy', 'http://www.220.ro/sexy/'])
elif meniu == 'shows':
match = [('Cele mai tari', 'http://www.220.ro/shows/'), ('Ultimele actualizate', 'http://www.220.ro/shows/ultimele-actualizate/'), ('Alfabetic', 'http://www.220.ro/shows/alfabetic/')]
elif meniu == 'best-of':
now = time.localtime()
# x = (now.tm_year - 2005) * 12 + (now.tm_mon - 5)
x = (now.tm_year - 2005) + 1
# match = [time.localtime(time.mktime((now.tm_year, now.tm_mon - n, 1, 0, 0, 0, 0, 0, 0)))[:1] for n in range(x)]
match = [time.localtime(time.mktime((now.tm_year - n, 12, 0, 0, 0, 0, 0, 0, 0)))[:2] for n in range(x)]
# match=[(), (), (), (), (), (), (), (), (), (), (), ()]
elif meniu == 'best-year':
match = [('Ianuarie', '01'), ('Februarie', '02'), ('Martie', '03'), ('Aprilie', '04'), ('Mai', '05'), ('Iunie', '06'), ('Iulie', '07'), ('August', '08'), ('Septembrie', '09'), ('Octombrie', '10'), ('Noiembrie', '11'), ('Decembrie', '12')]
if len(match) > 0:
print match
if meniu == 'best-of':
for titlu, an in match:
image = "DefaultVideo.png"
year_link = 'http://www.220.ro/best-of/' + str(titlu) + '/'
addDir(str(titlu), year_link, 23, image, 'best-year')
elif meniu == 'best-year':
for titlu, luna in match:
image = "DefaultVideo.png"
month_link = url + str(luna) + '/'
addDir(str(titlu), month_link, 5, image, 'best-month')
else:
for titlu, url in match:
image = "DefaultVideo.png"
addDir(titlu, url, 5, image, meniu, titlu)
xbmcplugin.setContent(int(sys.argv[1]), 'movies')
params = get_params()
url = None
mode = None
meniu = None
try:
url = urllib.unquote_plus(params["url"])
except:
pass
try:
mode = int(params["mode"])
except:
pass
try:
meniu = urllib.unquote_plus(params["meniu"])
except:
pass
# print "Mode: "+str(mode)
# print "URL: "+str(url)
# print "Name: "+str(name)
if mode is None or url is None or len(url) < 1:
ROOT()
elif mode == 1:
CAUTA_VIDEO(url, 'faze-tari')
elif mode == 2:
CAUTA_LIST(url)
elif mode == 3:
CAUTA(url)
elif mode == 5:
CAUTA_VIDEO_LIST(url, meniu)
elif mode == 23:
parse_menu(url, meniu)
elif mode == 10:
SXVIDEO_GENERIC_PLAY(url)
xbmcplugin.endOfDirectory(int(sys.argv[1]))
| 38.192073 | 246 | 0.58346 | 1,616 | 12,527 | 4.450495 | 0.169554 | 0.01168 | 0.016685 | 0.020022 | 0.511263 | 0.464683 | 0.415044 | 0.365545 | 0.331758 | 0.285595 | 0 | 0.024517 | 0.218568 | 12,527 | 327 | 247 | 38.308869 | 0.710185 | 0.050291 | 0 | 0.342308 | 0 | 0.026923 | 0.183371 | 0.045527 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011538 | 0.042308 | null | null | 0.007692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ec5499aa58ee90bddbc543db1bbb0895014d0e6 | 1,089 | py | Python | conftest.py | Budapest-Quantum-Computing-Group/piquassoboost | fd384be8f59cfd20d62654cf86c89f69d3cf8b8c | [
"Apache-2.0"
] | 4 | 2021-11-29T13:28:19.000Z | 2021-12-21T22:57:09.000Z | conftest.py | Budapest-Quantum-Computing-Group/piquassoboost | fd384be8f59cfd20d62654cf86c89f69d3cf8b8c | [
"Apache-2.0"
] | 11 | 2021-09-24T18:02:26.000Z | 2022-01-27T18:51:47.000Z | conftest.py | Budapest-Quantum-Computing-Group/piquassoboost | fd384be8f59cfd20d62654cf86c89f69d3cf8b8c | [
"Apache-2.0"
] | 1 | 2021-11-13T10:06:52.000Z | 2021-11-13T10:06:52.000Z | #
# Copyright 2021 Budapest Quantum Computing Group
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
import pytest
import piquassoboost as pqb
@pytest.fixture(autouse=True)
def _patch(request):
    regexp = re.compile(f"{re.escape(str(request.config.rootdir))}/(.+?)/(.*)")
result = regexp.search(str(request.fspath))
if result.group(1) == "piquasso-module":
# NOTE: Only override the simulators, when the origin Piquasso Python tests are
# executed. For tests originating in PiquassoBoost, handle everything manually!
pqb.patch()
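
# Example (hypothetical path): for a test file at
# <rootdir>/piquasso-module/tests/test_foo.py, result.group(1) is
# "piquasso-module", so pqb.patch() runs; tests under any other top-level
# directory are left unpatched.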
| 33 | 87 | 0.724518 | 152 | 1,089 | 5.184211 | 0.684211 | 0.076142 | 0.032995 | 0.040609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010124 | 0.183655 | 1,089 | 32 | 88 | 34.03125 | 0.876265 | 0.674013 | 0 | 0 | 0 | 0 | 0.20178 | 0.15727 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2ec7a0b5bdeca4cd119b116e98c327c2a10981dd | 799 | py | Python | app/core/tests/test_models.py | Skyprince-gh/recipe-app-api | a4f0ead6ab546b1fea69c32caa3c269898c4086f | [
"MIT"
] | null | null | null | app/core/tests/test_models.py | Skyprince-gh/recipe-app-api | a4f0ead6ab546b1fea69c32caa3c269898c4086f | [
"MIT"
] | null | null | null | app/core/tests/test_models.py | Skyprince-gh/recipe-app-api | a4f0ead6ab546b1fea69c32caa3c269898c4086f | [
"MIT"
] | null | null | null | from django.test import TestCase
from django.contrib.auth import get_user_model
class ModelTests(TestCase):
    def test_create_user_with_email_successful(self):
"""Test creating a new user with an email is successful"""
email = 'test@test.com'
password = 'testpass123'
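        # get_user_model() is used below (rather than importing a user model
        # class directly) so the test keeps working when AUTH_USER_MODEL
        # points at a custom user model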
        user = get_user_model().objects.create_user(  # call create_user on the user model; do not import the model class directly
            email=email,  # email is a custom field, since the default user model will be replaced
            password=password  # password is likewise handled by the custom user model
)
self.assertEqual(user.email, email)
self.assertTrue(user.check_password(password)) #you use the check_password function because passwords are encrypted | 47 | 129 | 0.768461 | 116 | 799 | 5.189655 | 0.474138 | 0.074751 | 0.059801 | 0.049834 | 0.202658 | 0.202658 | 0.202658 | 0.202658 | 0.202658 | 0.202658 | 0 | 0.004532 | 0.171464 | 799 | 17 | 130 | 47 | 0.904834 | 0.461827 | 0 | 0 | 0 | 0 | 0.056872 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.083333 | false | 0.25 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2ecd9594ca823d5651d395e05b565a78030a392e | 35,203 | py | Python | cardinal_pythonlib/sphinxtools.py | bopopescu/pythonlib | 9c2187d6092ba133342ca3374eb7c86f9d296c30 | [
"Apache-2.0"
] | null | null | null | cardinal_pythonlib/sphinxtools.py | bopopescu/pythonlib | 9c2187d6092ba133342ca3374eb7c86f9d296c30 | [
"Apache-2.0"
] | null | null | null | cardinal_pythonlib/sphinxtools.py | bopopescu/pythonlib | 9c2187d6092ba133342ca3374eb7c86f9d296c30 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# cardinal_pythonlib/sphinxtools.py
"""
===============================================================================
Original code copyright (C) 2009-2020 Rudolf Cardinal (rudolf@pobox.com).
This file is part of cardinal_pythonlib.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
===============================================================================
**Functions to help with Sphinx, in particular the generation of autodoc
files.**
Rationale: if you want Sphinx ``autodoc`` code to appear as "one module per
Sphinx page" (which I normally do), you need one ``.rst`` file per module.
"""
from enum import Enum
from fnmatch import fnmatch
import glob
import logging
from os.path import (
abspath, basename, dirname, exists, expanduser, isdir, isfile, join,
relpath, sep, splitext
)
from typing import Dict, Iterable, List, Union
from cardinal_pythonlib.fileops import mkdir_p, relative_filename_within_dir
from cardinal_pythonlib.logs import BraceStyleAdapter
from cardinal_pythonlib.reprfunc import auto_repr
from pygments.lexer import Lexer
from pygments.lexers import get_lexer_for_filename
from pygments.util import ClassNotFound
log = BraceStyleAdapter(logging.getLogger(__name__))
# =============================================================================
# Constants
# =============================================================================
AUTOGENERATED_COMMENT = ".. THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT."
DEFAULT_INDEX_TITLE = "Automatic documentation of source code"
DEFAULT_SKIP_GLOBS = ["__init__.py"]
EXT_PYTHON = ".py"
EXT_RST = ".rst"
CODE_TYPE_NONE = "none"
class AutodocMethod(Enum):
"""
Enum to specify the method of autodocumenting a file.
"""
BEST = 0
CONTENTS = 1
AUTOMODULE = 2
# =============================================================================
# Helper functions
# =============================================================================
def rst_underline(heading: str, underline_char: str) -> str:
"""
Underlines a heading for RST files.
Args:
heading: text to underline
underline_char: character to use
Returns:
underlined heading, over two lines (without a final terminating
newline)
"""
assert "\n" not in heading
assert len(underline_char) == 1
return heading + "\n" + (underline_char * len(heading))
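

# Example: rst_underline("Modules", "=") returns "Modules\n=======",
# i.e. the heading followed by a rule of the same length.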
def fail(msg: str) -> None:
log.critical(msg)
raise RuntimeError(msg)
def write_if_allowed(filename: str,
content: str,
overwrite: bool = False,
mock: bool = False) -> None:
"""
Writes the contents to a file, if permitted.
Args:
filename: filename to write
content: contents to write
overwrite: permit overwrites?
mock: pretend to write, but don't
Raises:
RuntimeError: if file exists but overwriting not permitted
"""
# Check we're allowed
if not overwrite and exists(filename):
fail(f"File exists, not overwriting: {filename!r}")
# Make the directory, if necessary
directory = dirname(filename)
if not mock:
mkdir_p(directory)
# Write the file
log.info("Writing to {!r}", filename)
if mock:
log.warning("Skipping writes as in mock mode")
else:
with open(filename, "wt") as outfile:
outfile.write(content)
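

# Example (hypothetical paths): write_if_allowed("/tmp/docs/x.rst", "hi\n",
# overwrite=True) creates /tmp/docs if needed and writes the file; with
# overwrite=False and the file already present, fail() raises RuntimeError.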
# =============================================================================
# FileToAutodocument
# =============================================================================
class FileToAutodocument(object):
"""
Class representing a file to document automatically via Sphinx autodoc.
Example:
.. code-block:: python
import logging
from cardinal_pythonlib.logs import *
from cardinal_pythonlib.sphinxtools import *
main_only_quicksetup_rootlogger(level=logging.DEBUG)
f = FileToAutodocument(
source_filename="~/Documents/code/cardinal_pythonlib/cardinal_pythonlib/sphinxtools.py",
project_root_dir="~/Documents/code/cardinal_pythonlib",
target_rst_filename="~/Documents/code/cardinal_pythonlib/docs/source/autodoc/sphinxtools.rst",
)
print(f)
f.source_extension
f.is_python
f.source_filename_rel_project_root
f.rst_dir
f.source_filename_rel_rst_file
f.rst_filename_rel_project_root
f.rst_filename_rel_autodoc_index(
"~/Documents/code/cardinal_pythonlib/docs/source/autodoc/_index.rst")
f.python_module_name
f.pygments_code_type
print(f.rst_content(prefix=".. Hello!"))
print(f.rst_content(prefix=".. Hello!", method=AutodocMethod.CONTENTS))
f.write_rst(prefix=".. Hello!")
""" # noqa
def __init__(self,
source_filename: str,
project_root_dir: str,
target_rst_filename: str,
method: AutodocMethod = AutodocMethod.BEST,
python_package_root_dir: str = None,
source_rst_title_style_python: bool = True,
pygments_language_override: Dict[str, str] = None) -> None:
"""
Args:
source_filename: source file (e.g. Python, C++, XML file) to
document
project_root_dir: root directory of the whole project
            target_rst_filename: filename of an RST file to write that will
document the source file
method: instance of :class:`AutodocMethod`; for example, should we
ask Sphinx's ``autodoc`` to read docstrings and build us a
pretty page, or just include the contents with syntax
highlighting?
python_package_root_dir: if your Python modules live in a directory
other than ``project_root_dir``, specify it here
source_rst_title_style_python: if ``True`` and the file is a Python
file and ``method == AutodocMethod.AUTOMODULE``, the heading
used will be in the style of a Python module, ``x.y.z``.
Otherwise, it will be a path (``x/y/z``).
pygments_language_override: if specified, a dictionary mapping
file extensions to Pygments languages (for example: a ``.pro``
file will be autodetected as Prolog, but you might want to
map that to ``none`` for Qt project files).
"""
self.source_filename = abspath(expanduser(source_filename))
self.project_root_dir = abspath(expanduser(project_root_dir))
self.target_rst_filename = abspath(expanduser(target_rst_filename))
self.method = method
self.source_rst_title_style_python = source_rst_title_style_python
self.python_package_root_dir = (
abspath(expanduser(python_package_root_dir))
if python_package_root_dir else self.project_root_dir
)
self.pygments_language_override = pygments_language_override or {} # type: Dict[str, str] # noqa
assert isfile(self.source_filename), (
f"Not a file: source_filename={self.source_filename!r}")
assert isdir(self.project_root_dir), (
f"Not a directory: project_root_dir={self.project_root_dir!r}")
assert relative_filename_within_dir(
filename=self.source_filename,
directory=self.project_root_dir
), (
f"Source file {self.source_filename!r} is not within "
f"project directory {self.project_root_dir!r}"
)
assert relative_filename_within_dir(
filename=self.python_package_root_dir,
directory=self.project_root_dir
), (
f"Python root {self.python_package_root_dir!r} is not within "
f"project directory {self.project_root_dir!r}"
)
assert isinstance(method, AutodocMethod)
def __repr__(self) -> str:
return auto_repr(self)
@property
def source_extension(self) -> str:
"""
Returns the extension of the source filename.
"""
return splitext(self.source_filename)[1]
@property
def is_python(self) -> bool:
"""
Is the source file a Python file?
"""
return self.source_extension == EXT_PYTHON
@property
def source_filename_rel_project_root(self) -> str:
"""
Returns the name of the source filename, relative to the project root.
Used to calculate file titles.
"""
return relpath(self.source_filename, start=self.project_root_dir)
@property
def source_filename_rel_python_root(self) -> str:
"""
Returns the name of the source filename, relative to the Python package
root. Used to calculate the name of Python modules.
"""
return relpath(self.source_filename,
start=self.python_package_root_dir)
@property
def rst_dir(self) -> str:
"""
Returns the directory of the target RST file.
"""
return dirname(self.target_rst_filename)
@property
def source_filename_rel_rst_file(self) -> str:
"""
Returns the source filename as seen from the RST filename that we
will generate. Used for ``.. include::`` commands.
"""
return relpath(self.source_filename, start=self.rst_dir)
@property
def rst_filename_rel_project_root(self) -> str:
"""
Returns the filename of the target RST file, relative to the project
root directory. Used for labelling the RST file itself.
"""
return relpath(self.target_rst_filename, start=self.project_root_dir)
def rst_filename_rel_autodoc_index(self, index_filename: str) -> str:
"""
Returns the filename of the target RST file, relative to a specified
index file. Used to make the index refer to the RST.
"""
index_dir = dirname(abspath(expanduser(index_filename)))
return relpath(self.target_rst_filename, start=index_dir)
@property
def python_module_name(self) -> str:
"""
Returns the name of the Python module that this instance refers to,
in dotted Python module notation, or a blank string if it doesn't.
"""
if not self.is_python:
return ""
filepath = self.source_filename_rel_python_root
dirs_and_base = splitext(filepath)[0]
dir_and_file_parts = dirs_and_base.split(sep)
return ".".join(dir_and_file_parts)
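
    # Example (hypothetical paths): with python_package_root_dir=/proj and
    # source_filename=/proj/pkg/sub/mod.py, python_module_name is "pkg.sub.mod".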
@property
def pygments_language(self) -> str:
"""
Returns the code type annotation for Pygments; e.g. ``python`` for
Python, ``cpp`` for C++, etc.
"""
extension = splitext(self.source_filename)[1]
if extension in self.pygments_language_override:
return self.pygments_language_override[extension]
try:
lexer = get_lexer_for_filename(self.source_filename) # type: Lexer
return lexer.name
except ClassNotFound:
log.warning("Don't know Pygments code type for extension {!r}",
self.source_extension)
return CODE_TYPE_NONE
def rst_content(self,
prefix: str = "",
suffix: str = "",
heading_underline_char: str = "=",
method: AutodocMethod = None) -> str:
"""
Returns the text contents of an RST file that will automatically
document our source file.
Args:
prefix: prefix, e.g. RST copyright comment
suffix: suffix, after the part we're creating
heading_underline_char: RST character to use to underline the
heading
method: optional method to override ``self.method``; see
constructor
Returns:
the RST contents
"""
spacer = " "
# Choose our final method
if method is None:
method = self.method
is_python = self.is_python
if method == AutodocMethod.BEST:
if is_python:
method = AutodocMethod.AUTOMODULE
else:
method = AutodocMethod.CONTENTS
elif method == AutodocMethod.AUTOMODULE:
if not is_python:
method = AutodocMethod.CONTENTS
# Write the instruction
if method == AutodocMethod.AUTOMODULE:
if self.source_rst_title_style_python:
title = self.python_module_name
else:
title = self.source_filename_rel_project_root
instruction = (
f".. automodule:: {self.python_module_name}\n"
f" :members:"
)
elif method == AutodocMethod.CONTENTS:
title = self.source_filename_rel_project_root
# Using ".. include::" with options like ":code: python" doesn't
# work properly; everything comes out as Python.
# Instead, see http://www.sphinx-doc.org/en/1.4.9/markup/code.html;
# we need ".. literalinclude::" with ":language: LANGUAGE".
instruction = (
".. literalinclude:: {filename}\n"
"{spacer}:language: {language}".format(
filename=self.source_filename_rel_rst_file,
spacer=spacer,
language=self.pygments_language
)
)
else:
raise ValueError("Bad method!")
# Create the whole file
content = """
.. {filename}
{AUTOGENERATED_COMMENT}
{prefix}
{underlined_title}
{instruction}
{suffix}
""".format(
filename=self.rst_filename_rel_project_root,
AUTOGENERATED_COMMENT=AUTOGENERATED_COMMENT,
prefix=prefix,
underlined_title=rst_underline(
title, underline_char=heading_underline_char),
instruction=instruction,
suffix=suffix,
).strip() + "\n"
return content
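
    # Example output (AUTOMODULE method, hypothetical paths) for a module
    # "pkg.mod" whose RST lives at docs/source/autodoc/mod.py.rst:
    #
    #   .. docs/source/autodoc/mod.py.rst
    #   .. THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
    #
    #   pkg.mod
    #   =======
    #
    #   .. automodule:: pkg.mod
    #       :members: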
def write_rst(self,
prefix: str = "",
suffix: str = "",
heading_underline_char: str = "=",
method: AutodocMethod = None,
overwrite: bool = False,
mock: bool = False) -> None:
"""
Writes the RST file to our destination RST filename, making any
necessary directories.
Args:
prefix: as for :func:`rst_content`
suffix: as for :func:`rst_content`
heading_underline_char: as for :func:`rst_content`
method: as for :func:`rst_content`
overwrite: overwrite the file if it exists already?
mock: pretend to write, but don't
"""
content = self.rst_content(
prefix=prefix,
suffix=suffix,
heading_underline_char=heading_underline_char,
method=method
)
write_if_allowed(self.target_rst_filename, content,
overwrite=overwrite, mock=mock)
# =============================================================================
# AutodocIndex
# =============================================================================
class AutodocIndex(object):
"""
Class to make an RST file that indexes others.
Example:
.. code-block:: python
import logging
from cardinal_pythonlib.logs import *
from cardinal_pythonlib.sphinxtools import *
main_only_quicksetup_rootlogger(level=logging.INFO)
# Example where one index contains another:
subidx = AutodocIndex(
index_filename="~/Documents/code/cardinal_pythonlib/docs/source/autodoc/_index2.rst",
highest_code_dir="~/Documents/code/cardinal_pythonlib",
project_root_dir="~/Documents/code/cardinal_pythonlib",
autodoc_rst_root_dir="~/Documents/code/cardinal_pythonlib/docs/source/autodoc",
source_filenames_or_globs="~/Documents/code/cardinal_pythonlib/docs/*.py",
)
idx = AutodocIndex(
index_filename="~/Documents/code/cardinal_pythonlib/docs/source/autodoc/_index.rst",
highest_code_dir="~/Documents/code/cardinal_pythonlib",
project_root_dir="~/Documents/code/cardinal_pythonlib",
autodoc_rst_root_dir="~/Documents/code/cardinal_pythonlib/docs/source/autodoc",
source_filenames_or_globs="~/Documents/code/cardinal_pythonlib/cardinal_pythonlib/*.py",
)
idx.add_index(subidx)
print(idx.index_content())
idx.write_index_and_rst_files(overwrite=True, mock=True)
# Example with a flat index:
flatidx = AutodocIndex(
index_filename="~/Documents/code/cardinal_pythonlib/docs/source/autodoc/_index.rst",
highest_code_dir="~/Documents/code/cardinal_pythonlib/cardinal_pythonlib",
project_root_dir="~/Documents/code/cardinal_pythonlib",
autodoc_rst_root_dir="~/Documents/code/cardinal_pythonlib/docs/source/autodoc",
source_filenames_or_globs="~/Documents/code/cardinal_pythonlib/cardinal_pythonlib/*.py",
)
print(flatidx.index_content())
flatidx.write_index_and_rst_files(overwrite=True, mock=True)
""" # noqa
def __init__(self,
index_filename: str,
project_root_dir: str,
autodoc_rst_root_dir: str,
highest_code_dir: str,
python_package_root_dir: str = None,
source_filenames_or_globs: Union[str, Iterable[str]] = None,
index_heading_underline_char: str = "-",
source_rst_heading_underline_char: str = "~",
title: str = DEFAULT_INDEX_TITLE,
introductory_rst: str = "",
recursive: bool = True,
skip_globs: List[str] = None,
toctree_maxdepth: int = 1,
method: AutodocMethod = AutodocMethod.BEST,
rst_prefix: str = "",
rst_suffix: str = "",
source_rst_title_style_python: bool = True,
pygments_language_override: Dict[str, str] = None) -> None:
"""
Args:
index_filename:
filename of the index ``.RST`` (ReStructured Text) file to
create
project_root_dir:
top-level directory for the whole project
autodoc_rst_root_dir:
directory within which all automatically generated ``.RST``
files (each to document a specific source file) will be placed.
A directory hierarchy within this directory will be created,
reflecting the structure of the code relative to
``highest_code_dir`` (q.v.).
highest_code_dir:
the "lowest" directory such that all code is found within it;
the directory structure within ``autodoc_rst_root_dir`` is to
``.RST`` files what the directory structure is of the source
files, relative to ``highest_code_dir``.
python_package_root_dir:
if your Python modules live in a directory other than
``project_root_dir``, specify it here
source_filenames_or_globs:
optional string, or list of strings, each describing a file or
glob-style file specification; these are the source filenames
                to create automatic RST for. If you don't specify them here,
you can use :func:`add_source_files`. To add sub-indexes, use
:func:`add_index` and :func:`add_indexes`.
index_heading_underline_char:
the character used to underline the title in the index file
source_rst_heading_underline_char:
the character used to underline the heading in each of the
source files
title:
title for the index
introductory_rst:
extra RST for the index, which goes between the title and the
table of contents
recursive:
use :func:`glob.glob` in recursive mode?
skip_globs:
list of file names or file specifications to skip; e.g.
``['__init__.py']``
toctree_maxdepth:
``maxdepth`` parameter for the ``toctree`` command generated in
the index file
method:
see :class:`FileToAutodocument`
rst_prefix:
optional RST content (e.g. copyright comment) to put early on
in each of the RST files
rst_suffix:
optional RST content to put late on in each of the RST files
source_rst_title_style_python:
make the individual RST files use titles in the style of Python
modules, ``x.y.z``, rather than path style (``x/y/z``); path
style will be used for non-Python files in any case.
pygments_language_override:
if specified, a dictionary mapping file extensions to Pygments
languages (for example: a ``.pro`` file will be autodetected as
Prolog, but you might want to map that to ``none`` for Qt
project files).
"""
assert index_filename
assert project_root_dir
assert autodoc_rst_root_dir
assert isinstance(toctree_maxdepth, int)
assert isinstance(method, AutodocMethod)
self.index_filename = abspath(expanduser(index_filename))
self.title = title
self.introductory_rst = introductory_rst
self.project_root_dir = abspath(expanduser(project_root_dir))
self.autodoc_rst_root_dir = abspath(expanduser(autodoc_rst_root_dir))
self.highest_code_dir = abspath(expanduser(highest_code_dir))
self.python_package_root_dir = (
abspath(expanduser(python_package_root_dir))
if python_package_root_dir else self.project_root_dir
)
self.index_heading_underline_char = index_heading_underline_char
self.source_rst_heading_underline_char = source_rst_heading_underline_char # noqa
self.recursive = recursive
self.skip_globs = skip_globs if skip_globs is not None else DEFAULT_SKIP_GLOBS # noqa
self.toctree_maxdepth = toctree_maxdepth
self.method = method
self.rst_prefix = rst_prefix
self.rst_suffix = rst_suffix
self.source_rst_title_style_python = source_rst_title_style_python
self.pygments_language_override = pygments_language_override or {} # type: Dict[str, str] # noqa
assert isdir(self.project_root_dir), (
f"Not a directory: project_root_dir={self.project_root_dir!r}")
assert relative_filename_within_dir(
filename=self.index_filename,
directory=self.project_root_dir
), (
f"Index file {self.index_filename!r} is not within "
f"project directory {self.project_root_dir!r}"
)
assert relative_filename_within_dir(
filename=self.highest_code_dir,
directory=self.project_root_dir
), (
f"Highest code directory {self.highest_code_dir!r} is not within "
f"project directory {self.project_root_dir!r}"
)
assert relative_filename_within_dir(
filename=self.autodoc_rst_root_dir,
directory=self.project_root_dir
), (
f"Autodoc RST root directory {self.autodoc_rst_root_dir!r} is not "
f"within project directory {self.project_root_dir!r}"
)
assert isinstance(method, AutodocMethod)
assert isinstance(recursive, bool)
self.files_to_index = [] # type: List[Union[FileToAutodocument, AutodocIndex]] # noqa
if source_filenames_or_globs:
self.add_source_files(source_filenames_or_globs)
def __repr__(self) -> str:
return auto_repr(self)
def add_source_files(
self,
source_filenames_or_globs: Union[str, List[str]],
method: AutodocMethod = None,
recursive: bool = None,
source_rst_title_style_python: bool = None,
pygments_language_override: Dict[str, str] = None) -> None:
"""
Adds source files to the index.
Args:
source_filenames_or_globs: string containing a filename or a
glob, describing the file(s) to be added, or a list of such
strings
method: optional method to override ``self.method``
recursive: use :func:`glob.glob` in recursive mode? (If ``None``,
the default, uses the version from the constructor.)
source_rst_title_style_python: optional to override
``self.source_rst_title_style_python``
pygments_language_override: optional to override
``self.pygments_language_override``
"""
if not source_filenames_or_globs:
return
if method is None:
# Use the default
method = self.method
if recursive is None:
recursive = self.recursive
if source_rst_title_style_python is None:
source_rst_title_style_python = self.source_rst_title_style_python
if pygments_language_override is None:
pygments_language_override = self.pygments_language_override
# Get a sorted list of filenames
final_filenames = self.get_sorted_source_files(
source_filenames_or_globs,
recursive=recursive
)
# Process that sorted list
for source_filename in final_filenames:
self.files_to_index.append(FileToAutodocument(
source_filename=source_filename,
project_root_dir=self.project_root_dir,
python_package_root_dir=self.python_package_root_dir,
target_rst_filename=self.specific_file_rst_filename(
source_filename
),
method=method,
source_rst_title_style_python=source_rst_title_style_python,
pygments_language_override=pygments_language_override,
))
def get_sorted_source_files(
self,
source_filenames_or_globs: Union[str, List[str]],
recursive: bool = True) -> List[str]:
"""
Returns a sorted list of filenames to process, from a filename,
a glob string, or a list of filenames/globs.
Args:
source_filenames_or_globs: filename/glob, or list of them
recursive: use :func:`glob.glob` in recursive mode?
Returns:
sorted list of files to process
"""
if isinstance(source_filenames_or_globs, str):
source_filenames_or_globs = [source_filenames_or_globs]
final_filenames = [] # type: List[str]
for sfg in source_filenames_or_globs:
sfg_expanded = expanduser(sfg)
log.debug("Looking for: {!r}", sfg_expanded)
for filename in glob.glob(sfg_expanded, recursive=recursive):
log.debug("Trying: {!r}", filename)
if self.should_exclude(filename):
log.info("Skipping file {!r}", filename)
continue
final_filenames.append(filename)
final_filenames.sort()
return final_filenames
@staticmethod
def filename_matches_glob(filename: str, globtext: str) -> bool:
"""
The ``glob.glob`` function doesn't do exclusion very well. We don't
want to have to specify root directories for exclusion patterns. We
don't want to have to trawl a massive set of files to find exclusion
files. So let's implement a glob match.
Args:
filename: filename
globtext: glob
Returns:
does the filename match the glob?
See also:
- https://stackoverflow.com/questions/20638040/glob-exclude-pattern
"""
# Quick check on basename-only matching
if fnmatch(filename, globtext):
log.debug("{!r} matches {!r}", filename, globtext)
return True
bname = basename(filename)
if fnmatch(bname, globtext):
log.debug("{!r} matches {!r}", bname, globtext)
return True
# Directory matching: is actually accomplished by the code above!
# Otherwise:
return False
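
        # Example: filename_matches_glob("/proj/pkg/__init__.py",
        # "__init__.py") is True via the basename check, even though the
        # full-path fnmatch fails; a glob like "*.pyc" matches via the
        # full-path check instead.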
def should_exclude(self, filename) -> bool:
"""
Should we exclude this file from consideration?
"""
for skip_glob in self.skip_globs:
if self.filename_matches_glob(filename, skip_glob):
return True
return False
def add_index(self, index: "AutodocIndex") -> None:
"""
Add a sub-index file to this index.
Args:
index: index file to add, as an instance of :class:`AutodocIndex`
"""
self.files_to_index.append(index)
def add_indexes(self, indexes: List["AutodocIndex"]) -> None:
"""
Adds multiple sub-indexes to this index.
Args:
indexes: list of sub-indexes
"""
for index in indexes:
self.add_index(index)
def specific_file_rst_filename(self, source_filename: str) -> str:
"""
Gets the RST filename corresponding to a source filename.
See the help for the constructor for more details.
Args:
source_filename: source filename within current project
Returns:
RST filename
Note in particular: the way we structure the directories means that we
        won't get clashes between files with identical names in two different
directories. However, we must also incorporate the original source
filename, in particular for C++ where ``thing.h`` and ``thing.cpp``
must not generate the same RST filename. So we just add ``.rst``.
"""
highest_code_to_target = relative_filename_within_dir(
source_filename, self.highest_code_dir)
bname = basename(source_filename)
result = join(self.autodoc_rst_root_dir,
dirname(highest_code_to_target),
bname + EXT_RST)
log.debug("Source {!r} -> RST {!r}", source_filename, result)
return result
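
        # Example (hypothetical paths): with highest_code_dir=/proj and
        # autodoc_rst_root_dir=/proj/docs/source/autodoc, the source file
        # /proj/pkg/mod.py maps to /proj/docs/source/autodoc/pkg/mod.py.rst;
        # the retained ".py" keeps e.g. thing.h and thing.cpp distinct.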
def write_index_and_rst_files(self, overwrite: bool = False,
mock: bool = False) -> None:
"""
Writes both the individual RST files and the index.
Args:
overwrite: allow existing files to be overwritten?
mock: pretend to write, but don't
"""
for f in self.files_to_index:
if isinstance(f, FileToAutodocument):
f.write_rst(
prefix=self.rst_prefix,
suffix=self.rst_suffix,
heading_underline_char=self.source_rst_heading_underline_char, # noqa
overwrite=overwrite,
mock=mock,
)
elif isinstance(f, AutodocIndex):
f.write_index_and_rst_files(overwrite=overwrite, mock=mock)
else:
fail(f"Unknown thing in files_to_index: {f!r}")
self.write_index(overwrite=overwrite, mock=mock)
@property
def index_filename_rel_project_root(self) -> str:
"""
Returns the name of the index filename, relative to the project root.
Used for labelling the index file.
"""
return relpath(self.index_filename, start=self.project_root_dir)
def index_filename_rel_other_index(self, other: str) -> str:
"""
        Returns the filename of this index, relative to the directory of
        another index. (For inserting a reference to this index into ``other``.)
Args:
other: the other index
Returns:
relative filename of our index
"""
return relpath(self.index_filename, start=dirname(other))
def index_content(self) -> str:
"""
Returns the contents of the index RST file.
"""
# Build the toctree command
index_filename = self.index_filename
spacer = " "
toctree_lines = [
".. toctree::",
spacer + f":maxdepth: {self.toctree_maxdepth}",
""
]
for f in self.files_to_index:
if isinstance(f, FileToAutodocument):
rst_filename = spacer + f.rst_filename_rel_autodoc_index(
index_filename)
elif isinstance(f, AutodocIndex):
rst_filename = (
spacer + f.index_filename_rel_other_index(index_filename)
)
else:
fail(f"Unknown thing in files_to_index: {f!r}")
rst_filename = "" # won't get here; for the type checker
toctree_lines.append(rst_filename)
toctree = "\n".join(toctree_lines)
# Create the whole file
content = """
.. {filename}
{AUTOGENERATED_COMMENT}
{prefix}
{underlined_title}
{introductory_rst}
{toctree}
{suffix}
""".format(
filename=self.index_filename_rel_project_root,
AUTOGENERATED_COMMENT=AUTOGENERATED_COMMENT,
prefix=self.rst_prefix,
underlined_title=rst_underline(
self.title, underline_char=self.index_heading_underline_char),
introductory_rst=self.introductory_rst,
toctree=toctree,
suffix=self.rst_suffix,
).strip() + "\n"
return content
def write_index(self, overwrite: bool = False, mock: bool = False) -> None:
"""
Writes the index file, if permitted.
Args:
overwrite: allow existing files to be overwritten?
mock: pretend to write, but don't
"""
write_if_allowed(self.index_filename, self.index_content(),
overwrite=overwrite, mock=mock)
| 37.81203 | 106 | 0.599068 | 3,953 | 35,203 | 5.122186 | 0.126233 | 0.022471 | 0.026274 | 0.019557 | 0.410658 | 0.345269 | 0.309018 | 0.2621 | 0.240024 | 0.207329 | 0 | 0.001306 | 0.304207 | 35,203 | 930 | 107 | 37.852688 | 0.825345 | 0.408346 | 0 | 0.348341 | 0 | 0 | 0.084642 | 0.025786 | 0 | 0 | 0 | 0.030108 | 0.042654 | 1 | 0.07346 | false | 0 | 0.028436 | 0.004739 | 0.182464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ecd9c823bd13321a5aed037237d0b2d3b1ca359 | 481 | py | Python | monsterapi/migrations/0005_monster_name.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | 1 | 2018-11-05T13:08:48.000Z | 2018-11-05T13:08:48.000Z | monsterapi/migrations/0005_monster_name.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | null | null | null | monsterapi/migrations/0005_monster_name.py | merenor/momeback | 33195c43abd2757a361dfc5cb7e3cf56f6b57402 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.3 on 2018-11-08 21:12
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('monsterapi', '0004_name'),
]
operations = [
migrations.AddField(
model_name='monster',
name='name',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='monsterapi.Name'),
),
]
| 24.05 | 126 | 0.634096 | 56 | 481 | 5.392857 | 0.642857 | 0.07947 | 0.092715 | 0.145695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052055 | 0.241164 | 481 | 19 | 127 | 25.315789 | 0.775342 | 0.093555 | 0 | 0 | 1 | 0 | 0.103687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ed00b75cde30c9fb64baa2e01095d04529cd5dc | 2,398 | py | Python | src/sentry/migrations/0028_user_reports.py | pierredup/sentry | 0145e4b3bc0e775bf3482fe65f5e1a689d0dbb80 | [
"BSD-3-Clause"
] | null | null | null | src/sentry/migrations/0028_user_reports.py | pierredup/sentry | 0145e4b3bc0e775bf3482fe65f5e1a689d0dbb80 | [
"BSD-3-Clause"
] | null | null | null | src/sentry/migrations/0028_user_reports.py | pierredup/sentry | 0145e4b3bc0e775bf3482fe65f5e1a689d0dbb80 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.27 on 2020-01-23 19:07
from __future__ import unicode_literals
import logging
from django.db import migrations
from sentry import eventstore
from sentry.utils.query import RangeQuerySetWrapper
from sentry.utils.snuba import (
SnubaError,
QueryOutsideRetentionError,
QueryOutsideGroupActivityError,
)
logger = logging.getLogger(__name__)
def backfill_user_reports(apps, schema_editor):
"""
Processes user reports that are missing event data, and adds the appropriate data
    if the event exists in ClickHouse.
"""
UserReport = apps.get_model("sentry", "UserReport")
user_reports = UserReport.objects.filter(group__isnull=True, environment__isnull=True)
for report in RangeQuerySetWrapper(user_reports, step=1000):
try:
event = eventstore.get_event_by_id(report.project_id, report.event_id)
except (SnubaError, QueryOutsideGroupActivityError, QueryOutsideRetentionError) as se:
            logger.warning(
"failed to fetch event %s for project %d: %s"
% (report.event_id, report.project_id, se)
)
continue
if event:
report.update(group_id=event.group_id, environment=event.get_environment())
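
# NOTE: a hedged description of the Sentry utility used above:
# RangeQuerySetWrapper appears to iterate the queryset in primary-key chunks
# of `step` rows, so the backfill never holds all UserReport rows in memory
# at once.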
class Migration(migrations.Migration):
# This flag is used to mark that a migration shouldn't be automatically run in
# production. We set this to True for operations that we think are risky and want
# someone from ops to run manually and monitor.
# General advice is that if in doubt, mark your migration as `is_dangerous`.
# Some things you should always mark as dangerous:
# - Adding indexes to large tables. These indexes should be created concurrently,
# unfortunately we can't run migrations outside of a transaction until Django
# 1.10. So until then these should be run manually.
# - Large data migrations. Typically we want these to be run manually by ops so that
# they can be monitored. Since data migrations will now hold a transaction open
# this is even more important.
# - Adding columns to highly active tables, even ones that are NULL.
is_dangerous = True
dependencies = [
("sentry", "0027_exporteddata"),
]
operations = [
migrations.RunPython(backfill_user_reports, migrations.RunPython.noop),
]
| 36.892308 | 94 | 0.708924 | 307 | 2,398 | 5.42671 | 0.501629 | 0.033013 | 0.018007 | 0.020408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015566 | 0.223103 | 2,398 | 64 | 95 | 37.46875 | 0.87869 | 0.410759 | 0 | 0 | 1 | 0 | 0.05942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.181818 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ed64ffbfe98feb2efc11a0c58fa360ff20cbac1 | 4,184 | py | Python | redshift-dataapi/iac/app.py | InfrastructureHQ/AWS-CDK-Accelerators | a8a3f61040f4419a6c3485c8f4b8df6204a55940 | [
"Apache-2.0"
] | null | null | null | redshift-dataapi/iac/app.py | InfrastructureHQ/AWS-CDK-Accelerators | a8a3f61040f4419a6c3485c8f4b8df6204a55940 | [
"Apache-2.0"
] | null | null | null | redshift-dataapi/iac/app.py | InfrastructureHQ/AWS-CDK-Accelerators | a8a3f61040f4419a6c3485c8f4b8df6204a55940 | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2021, SteelHead Industry Cloud, Inc. <info@steelheadhq.com>.
# All Rights Reserved.
#
# Import Core Modules
# For consistency with other languages, `cdk` is the preferred import name for the CDK's core module.
from aws_cdk import core as cdk
from aws_cdk.core import App, Stack, Tags
import os
# Import CICD Stack
from stacks.cicdstack import CICDStack
# Import Stacks
# from stacks.appflowstack import AppFlowStack
from stacks.apistack import APIStack
from stacks.sharedinfrastack import SharedInfraStack
from stacks.lambdastack import LambdaStack
from stacks.eventbridgestack import EventBridgeStack
from stacks.vpcstack import VPCStack
from stacks.redshiftstack import RedshiftStack
# Import Global & Stack Specific Settings
from settings.globalsettings import GlobalSettings
from settings.apistacksettings import APIStackSettings
globalsettings = GlobalSettings()
apistacksettings = APIStackSettings()
# Stack Environment: Region and Account
AWS_ACCOUNT_ID = globalsettings.AWS_ACCOUNT_ID
AWS_REGION = globalsettings.AWS_REGION
OWNER = globalsettings.OWNER
PRODUCT = globalsettings.PRODUCT
PACKAGE = globalsettings.PACKAGE
STAGE = globalsettings.STAGE
app = cdk.App()
# Stack Environment: Region and Account
ENV = {
"region": AWS_REGION,
"account": AWS_ACCOUNT_ID,
}
# ***************** CICD Stack ******************* #
# CICD Stack
cicd_stack = CICDStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER,
PRODUCT=PRODUCT,
PACKAGE=PACKAGE,
STACKNAME="CICDStack",
STAGE=STAGE,
),
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(cicd_stack).add("Package", PACKAGE)
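# Example (hypothetical values): with OWNER="acme", PRODUCT="dwh",
# PACKAGE="rsdata" and STAGE="dev", the stack above deploys as
# "acme-dwh-rsdata-CICDStack-dev".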
# ***************** All Other Stack ******************* #
"""
# Shared Infra Stack
sharedinfra_stack = SharedInfraStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER,
PRODUCT=PRODUCT,
PACKAGE=PACKAGE,
STACKNAME="SharedInfraStack",
STAGE=STAGE,
),
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(sharedinfra_stack).add("Package", PACKAGE)
# Lambda Stack
lambda_stack = LambdaStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER,
PRODUCT=PRODUCT,
PACKAGE=PACKAGE,
STACKNAME="LambdaStack",
STAGE=STAGE,
),
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(lambda_stack).add("Package", PACKAGE)
# API Stack
api_stack = APIStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER, PRODUCT=PRODUCT, PACKAGE=PACKAGE, STACKNAME="APIStack", STAGE=STAGE
),
sharedinfra_stack,
lambda_stack,
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(api_stack).add("Package", PACKAGE)
# EventBridge Stack
eventbridge_stack = EventBridgeStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER,
PRODUCT=PRODUCT,
PACKAGE=PACKAGE,
STACKNAME="EventBridgeStack",
STAGE=STAGE,
),
sharedinfra_stack,
lambda_stack,
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(eventbridge_stack).add("Package", PACKAGE)
# AppFlow Stack
appflow_stack = AppFlowStack(
app,
"{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
OWNER=OWNER,
PRODUCT=PRODUCT,
PACKAGE=PACKAGE,
STACKNAME="AppFlowStack",
STAGE=STAGE,
),
env=ENV,
)
# Add a tag to all constructs in the Stack
Tags.of(appflow_stack).add("Package", PACKAGE)
"""
# vpc_stack = VPCStack(
# app,
# "{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
# OWNER=OWNER,
# PRODUCT=PRODUCT,
# PACKAGE=PACKAGE,
# STACKNAME="VPCStack",
# STAGE=STAGE,
# ),
# env=ENV,
# )
# redshift_stack = RedshiftStack(
# app,
# "{OWNER}-{PRODUCT}-{PACKAGE}-{STACKNAME}-{STAGE}".format(
# OWNER=OWNER,
# PRODUCT=PRODUCT,
# PACKAGE=PACKAGE,
# STACKNAME="RedshiftStack",
# STAGE=STAGE,
# ),
# env=ENV,
# )
app.synth()
| 24.757396 | 101 | 0.658461 | 459 | 4,184 | 5.938998 | 0.172113 | 0.074835 | 0.044021 | 0.064563 | 0.423331 | 0.399853 | 0.399853 | 0.399853 | 0.399853 | 0.399853 | 0 | 0.001207 | 0.207935 | 4,184 | 168 | 102 | 24.904762 | 0.821364 | 0.25717 | 0 | 0 | 0 | 0 | 0.062346 | 0.038556 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.315789 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2ee996c7e56c15fda4c16fc6ca5f0fcf36384ef4 | 284 | py | Python | default_settings.py | RobMilinski/Xero-Starter-Branched-Test | c82382e674b34c2336ee164f5a079d6becd1ed46 | [
"MIT"
] | null | null | null | default_settings.py | RobMilinski/Xero-Starter-Branched-Test | c82382e674b34c2336ee164f5a079d6becd1ed46 | [
"MIT"
] | null | null | null | default_settings.py | RobMilinski/Xero-Starter-Branched-Test | c82382e674b34c2336ee164f5a079d6becd1ed46 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
from os.path import dirname, join
SECRET_KEY = os.urandom(16)
# configure file based session
SESSION_TYPE = "filesystem"
SESSION_FILE_DIR = join(dirname(__file__), "cache")
# configure flask app for local development
ENV = "development"
| 23.666667 | 52 | 0.714789 | 38 | 284 | 5.131579 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0.176056 | 284 | 11 | 53 | 25.818182 | 0.820513 | 0.323944 | 0 | 0 | 0 | 0 | 0.146893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2eea53e1e9eb945c94d4c33420b934da2fc613c6 | 5,610 | py | Python | v0.2/crawl_tool.py | sivanWu0222/GetLinksFromSoBooks | 9f429b5f8b359e4faf25381a7b59f5effd21b5ca | [
"Apache-2.0"
] | 8 | 2019-02-09T05:00:50.000Z | 2020-11-02T11:30:03.000Z | v0.2/crawl_tool.py | sivanWu0222/GetLinksFromSoBooks | 9f429b5f8b359e4faf25381a7b59f5effd21b5ca | [
"Apache-2.0"
] | null | null | null | v0.2/crawl_tool.py | sivanWu0222/GetLinksFromSoBooks | 9f429b5f8b359e4faf25381a7b59f5effd21b5ca | [
"Apache-2.0"
] | null | null | null | import requests
from bs4 import BeautifulSoup
import re
from selenium import webdriver
import model
URL = "https://sobooks.cc"
VERIFY_KEY = '2019777'
def convert_to_beautifulsoup(data):
"""
Wrap the given page data in a BeautifulSoup object.
:param data: HTML content of the corresponding page
:return: BeautifulSoup object built from data
"""
bs = BeautifulSoup(data, "html.parser")
return bs
def url_pattern():
"""
Regular expression for matching URLs.
:return: compiled regex pattern that matches URLs
"""
pattern = r'(http|ftp|https):\/\/[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?'
pattern = re.compile(pattern)
return pattern
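# Usage sketch (illustrative; the sample string is made up):
# m = url_pattern().search("see https://sobooks.cc/dir for details")
# if m:
#     print(m.group())  # -> https://sobooks.cc/dir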
def get_category_link(url):
"""
Crawl the URLs under each category in the navigation bar and collect them in a list.
:param url: page URL to start from
:return: list of category URLs found in the navigation bar
"""
navbar_links = []
data = requests.get(url).text
bs = convert_to_beautifulsoup(data)
navbar_contents = bs.select('.menu-item')
for navbar_content in navbar_contents:
pattern = url_pattern()
navbar_link = pattern.search(str(navbar_content))
navbar_links.append(navbar_link.group())
return navbar_links
def get_url_content(url):
"""
Return the content of the page at url, for analysing and extracting the useful parts.
:param url: page address
:return: HTML content of the page at url
"""
return requests.get(url).text
def get_book_card_content(url, data):
"""
Get the book-card content on a page, which makes it easy to extract book names and links.
:param url: page URL (kept for interface consistency; not used directly)
:param data: HTML content of the page at url
:return: list of book-card elements (the h3 tags) on the page
"""
books_perpage = convert_to_beautifulsoup(data).select('h3')
return books_perpage
def get_url_book(url, data):
"""
Get the URL of every book listed on the page at url.
:param url: page URL
:param data: HTML content of the page at url
:return: list of the URLs of the books on the page
"""
book_links = []
# find the books on the page via their h3 tags
books_perpage = get_book_card_content(url, data)
for book_content in books_perpage:
pattern = url_pattern()
# extract the link of each book
book_link = pattern.search(str(book_content))
book_links.append(book_link.group())
return book_links
def has_next_page(url, data):
"""
Check whether the page at url has a "next page" link.
:param url: page URL
:param data: HTML content of the page at url
:return: the URL of the next page if one exists,
False otherwise
"""
bs = BeautifulSoup(data, "html.parser")
next_page = bs.select('.next-page')
if next_page:
url_next_page = url_pattern().search(str(next_page))
return url_next_page.group()
else:
return False
def get_url_books_name(url, data):
"""
Collect the names of the books on the book-list page at url.
:param url: page URL
:param data: HTML content of the page at url
:return: list of the names of the books on the page
"""
books_name = []
books_perpage = get_book_card_content(url, data)
for book in books_perpage:
book_name = book.select('a')[0].get('title')
books_name.append(book_name)
return books_name
def get_book_baidu_neturl(url):
"""
Get the Baidu Netdisk link from a book's detail page.
:param url: URL of the book's detail page
:return: the book's Baidu Netdisk link, or None if the page has no such link
"""
data = requests.get(url).text
bs = convert_to_beautifulsoup(data)
for a_links in bs.select('a'):
if a_links.get_text() == '百度网盘':
book_baidu_url = a_links.get('href')
# regular expression that extracts the Baidu Netdisk link
pattern = r'(http|ftp|https):\/\/pan\.[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?'
pattern = re.compile(pattern)
book_baidu_url = pattern.search(book_baidu_url).group()
return book_baidu_url
def get_book_baidu_password(url):
"""
Get the Baidu Netdisk extraction code for the book stored at url.
:param url: detail-page URL of the book whose extraction code is wanted
:return: the extraction code if it exists,
None otherwise
"""
# @TODO 1. try to obtain the extraction code by crawling the submitted page directly
# @TODO 2. if that does not work, fall back to driving a browser with selenium
browser = webdriver.Chrome()
browser.get(url)
try:
browser.find_element_by_class_name('euc-y-s')
secret_key = browser.find_element_by_class_name('euc-y-i')
secret_key.send_keys(VERIFY_KEY)
browser.find_element_by_class_name('euc-y-s').click()
except Exception:
# the verification elements were not found; nothing more can be read from this page
browser.close()
return None
password = str(browser.find_element_by_class_name('e-secret').text)
browser.close()
if password:
return password[-4:]
else:
return None
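# Illustrative variant (not in the original): the function above opens a visible
# Chrome window per call; selenium's headless mode is usually preferable for
# batch crawling. The option names below are standard selenium, but version
# details may vary:
# from selenium.webdriver.chrome.options import Options
# opts = Options()
# opts.add_argument('--headless')
# browser = webdriver.Chrome(options=opts)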
def get_book_author(url, data):
"""
Get the list of authors from the book-list page at url.
:param url: URL of the book-list page
:param data: HTML content of the book-list page
:return: list of author names on the page
"""
book_authors = []
bs = convert_to_beautifulsoup(data)
for book_author in bs.select('div > p > a'):
book_authors.append(book_author.text)
return book_authors
def analy_url_page(url):
"""
Analyse the page at url, in the following steps:
1. extract the links of all books on the current page
2. check whether the current page has a next page; if it does, continue with step 3;
if it does not, continue crawling from the next category,
and once every category has been crawled, crawling is done
3. collect all books on the current page and create an object for each one
(initialising it with the book name, author, detail-page URL, Baidu Netdisk link and Baidu Netdisk extraction code)
4. continue with step 2
:param url: page URL
:return: None
"""
while url:
data = get_url_content(url)
url_links_page = get_url_book(url, data)
url_next_page = has_next_page(url, data)
books_name = get_url_books_name(url, data)
book_authors = get_book_author(url, data)
for i in range(len(books_name)):
book_name = books_name[i]
book_author = book_authors[i]
book_info_url = url_links_page[i]
book_baidu_url = get_book_baidu_neturl(url_links_page[i])
book_baidu_password = get_book_baidu_password(url_links_page[i])
book = model.Book(book_name, book_info_url, book_author, book_baidu_url, book_baidu_password)
print(book)
if url_next_page:
url = url_next_page
else:
break
if __name__ == '__main__':
root_url = URL
for url in get_category_link(root_url):
analy_url_page(url) | 26.842105 | 106 | 0.636364 | 662 | 5,610 | 5.114804 | 0.253776 | 0.024808 | 0.021264 | 0.038393 | 0.282339 | 0.203485 | 0.151506 | 0.103071 | 0.087123 | 0.055523 | 0 | 0.004721 | 0.24492 | 5,610 | 209 | 107 | 26.842105 | 0.794618 | 0.229768 | 0 | 0.152381 | 0 | 0.009524 | 0.077039 | 0.043915 | 0 | 0 | 0 | 0.004785 | 0 | 1 | 0.114286 | false | 0.057143 | 0.047619 | 0 | 0.285714 | 0.009524 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2eebfa288fa7ac7e2d9bbee1fae33a9fe39c1ae3 | 620 | py | Python | txter/migrations/0001_initial.py | KanataIZUMIKAWA/TXTer | 6cbf67a229db30452e412883cd55584a204199a7 | [
"MIT"
] | null | null | null | txter/migrations/0001_initial.py | KanataIZUMIKAWA/TXTer | 6cbf67a229db30452e412883cd55584a204199a7 | [
"MIT"
] | null | null | null | txter/migrations/0001_initial.py | KanataIZUMIKAWA/TXTer | 6cbf67a229db30452e412883cd55584a204199a7 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2021-01-05 03:33
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Posts',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user', models.CharField(default='noname', max_length=64)),
('note', models.TextField(default='')),
('read', models.BooleanField(default=False)),
],
),
]
| 25.833333 | 114 | 0.564516 | 62 | 620 | 5.580645 | 0.758065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03908 | 0.298387 | 620 | 23 | 115 | 26.956522 | 0.756322 | 0.072581 | 0 | 0 | 1 | 0 | 0.04712 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2eee76235b6e429c851cf2af43ca0728c03365df | 1,241 | py | Python | arquivo.py | raphaelss/programagestalt | dabe073bda7d34a16368cdc881e9d1a7150263cc | [
"MIT"
] | null | null | null | arquivo.py | raphaelss/programagestalt | dabe073bda7d34a16368cdc881e9d1a7150263cc | [
"MIT"
] | null | null | null | arquivo.py | raphaelss/programagestalt | dabe073bda7d34a16368cdc881e9d1a7150263cc | [
"MIT"
] | null | null | null | import csv
import re
import sys
class Arquivo:
def __init__(self, path):
rp = re.compile(r"^ *(\d+) +(\d+)\n?$")
self.alturas = []
self.duracoes = []
linecount = 1
with open(path) as f:
for line in f:
match = rp.match(line)
if match:
self.alturas.append(int(match.group(1)))
self.duracoes.append(int(match.group(2)))
else:
print("Erro na linha", linecount, ":", line)
exit()
linecount = linecount + 1
def gerar_altura(self):
return self.alturas
def gerar_duracao(self):
return self.duracoes
class Csv:
def __init__(self, path):
self.alturas = []
self.duracoes = []
with open(path) as f:
for row in csv.reader(f, delimiter=';'):
self.alturas.append(int(row[3]))
self.duracoes.append(round(float(row[1].replace(',','.')) * 60))
def gerar_altura(self):
return self.alturas
def gerar_duracao(self):
return self.duracoes
def abrir(path):
if path.endswith('csv'):
return Csv(path)
else:
return Arquivo(path)
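# Usage sketch (illustrative; "notas.txt" is a hypothetical two-column text file
# with one "altura duracao" pair per line):
# arq = abrir("notas.txt")
# print(arq.gerar_altura(), arq.gerar_duracao())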
| 26.404255 | 80 | 0.507655 | 141 | 1,241 | 4.382979 | 0.35461 | 0.106796 | 0.090615 | 0.048544 | 0.291262 | 0.291262 | 0.23301 | 0.23301 | 0.23301 | 0.23301 | 0 | 0.010127 | 0.363417 | 1,241 | 46 | 81 | 26.978261 | 0.772152 | 0 | 0 | 0.461538 | 0 | 0 | 0.031426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179487 | false | 0 | 0.051282 | 0.102564 | 0.435897 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
2ef2497a0bb99f58197f6f6252d8254c97895330 | 2,502 | py | Python | train/tf/nlp/trainer/tf.py | charliemorning/mlws | 8e9bad59ca9f5e774cc1ae7fe454ff3b8a8e1784 | [
"MIT"
] | null | null | null | train/tf/nlp/trainer/tf.py | charliemorning/mlws | 8e9bad59ca9f5e774cc1ae7fe454ff3b8a8e1784 | [
"MIT"
] | null | null | null | train/tf/nlp/trainer/tf.py | charliemorning/mlws | 8e9bad59ca9f5e774cc1ae7fe454ff3b8a8e1784 | [
"MIT"
] | null | null | null | from models.torch.trainer import SupervisedNNModelTrainConfig, Trainer
class KerasTrainer(Trainer):
def __init__(
self,
train_config: SupervisedNNModelTrainConfig
):
super(KerasTrainer, self).__init__(train_config)
def fit(self, train_data, eval_data=None, callbacks=None, verbose=2):
super(KerasTrainer, self).fit(train_data=train_data, eval_data=eval_data)
xs_train, ys_train = train_data
self.model.fit(xs_train, ys_train,
batch_size=self.train_config.train_batch_size,
epochs=self.train_config.epoch,
validation_data=eval_data,
callbacks=callbacks,
verbose=verbose)
def evaluate(self, eval_data):
super(KerasTrainer, self).evaluate(eval_data=eval_data)
xs_test, ys_test = eval_data
self.model.evaluate(xs_test, ys_test, self.train_config.eval_batch_size)
def predict(self, xs_test):
return self.model.predict(xs_test)
def load_model(self, model_file_path):
self.model.load_weights(model_file_path)
class TensorFlowEstimatorTrainer(Trainer):
def __init__(self,
train_config: SupervisedNNModelTrainConfig
):
super(TensorFlowEstimatorTrainer, self).__init__()
self.train_config = train_config
def __input_fn_builder(self, xs_test, ys_test=None):
pass
def __model_fn_builder(self):
pass
def fit(self, xs_train, ys_train):
input_fn = self.__input_fn_builder(xs_train, ys_train)
self.estimator.train(input_fn=input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)
def evaluate(self, xs_valid, ys_valid):
input_fn = self.__input_fn_builder(xs_valid, ys_valid)
self.estimator.evaluate(input_fn=input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)
def predict(self, xs_test):
input_fn = self.__input_fn_builder(xs_test)
self.estimator.predict(input_fn=input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)
def load_model(self, model_file_path):
self.estimator.export_saved_model(model_file_path,
# serving_input_receiver_fn,
assets_extra=None,
as_text=False,
checkpoint_path=None)
| 37.909091 | 113 | 0.643086 | 298 | 2,502 | 5.003356 | 0.197987 | 0.061033 | 0.060362 | 0.037559 | 0.361502 | 0.312542 | 0.312542 | 0.258216 | 0.132797 | 0.132797 | 0 | 0.000552 | 0.276179 | 2,502 | 65 | 114 | 38.492308 | 0.82275 | 0.010392 | 0 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.041667 | 0.020833 | 0.020833 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2efb047b94d6832acae0ceb75434aca43e199244 | 515 | py | Python | good_spot/places/migrations/0028_fieldtype_is_shown_in_about_place.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | null | null | null | good_spot/places/migrations/0028_fieldtype_is_shown_in_about_place.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | 5 | 2020-02-12T03:13:11.000Z | 2022-01-13T01:41:14.000Z | good_spot/places/migrations/0028_fieldtype_is_shown_in_about_place.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2017-12-29 18:33
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('places', '0027_auto_20171229_1606'),
]
operations = [
migrations.AddField(
model_name='fieldtype',
name='is_shown_in_about_place',
field=models.BooleanField(default=False, verbose_name='Show in About Place section'),
),
]
| 24.52381 | 97 | 0.646602 | 60 | 515 | 5.316667 | 0.8 | 0.043887 | 0.075235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084615 | 0.242718 | 515 | 20 | 98 | 25.75 | 0.733333 | 0.132039 | 0 | 0 | 1 | 0 | 0.198198 | 0.103604 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2effa2910ab4cf2ceba74e75f327552e40cb5a00 | 10,926 | py | Python | learning/pytorch/models/rnn_models.py | thomasehuang/Ithemal-Extension | 821a875962a261de003c6da6e2d3e9b49918d68a | [
"MIT"
] | 105 | 2019-08-05T21:27:33.000Z | 2022-02-16T03:35:10.000Z | learning/pytorch/models/rnn_models.py | thomasehuang/Ithemal-Extension | 821a875962a261de003c6da6e2d3e9b49918d68a | [
"MIT"
] | 16 | 2019-08-06T21:12:11.000Z | 2021-03-22T14:09:21.000Z | learning/pytorch/models/rnn_models.py | thomasehuang/Ithemal-Extension | 821a875962a261de003c6da6e2d3e9b49918d68a | [
"MIT"
] | 25 | 2019-08-11T22:41:57.000Z | 2021-11-10T08:02:50.000Z | #this file contains models that I have tried out for different tasks, which are reusable
#plus it has the training framework for those models given data - each model has its own data requirements
import numpy as np
import common_libs.utilities as ut
import random
import torch.nn as nn
import torch.autograd as autograd
import torch.optim as optim
import torch
import math
class ModelAbs(nn.Module):
"""
Abstract model without the forward method.
lstm for processing tokens in sequence and linear layer for output generation
lstm is a uni-directional single layer lstm
num_classes = 1 - for regression
num_classes = n - for classifying into n classes
"""
def __init__(self, hidden_size, embedding_size, num_classes):
super(ModelAbs, self).__init__()
self.hidden_size = hidden_size
self.name = 'should be overridden'
#numpy array with batchsize, embedding_size
self.embedding_size = embedding_size
self.num_classes = num_classes
#lstm - input size, hidden size, num layers
self.lstm_token = nn.LSTM(self.embedding_size, self.hidden_size)
#hidden state for the rnn
self.hidden_token = self.init_hidden()
#linear layer for regression - in_features, out_features
self.linear = nn.Linear(self.hidden_size, self.num_classes)
def init_hidden(self):
return (autograd.Variable(torch.zeros(1, 1, self.hidden_size)),
autograd.Variable(torch.zeros(1, 1, self.hidden_size)))
#this is to set learnable embeddings
def set_learnable_embedding(self, mode, dictsize, seed = None):
self.mode = mode
if mode != 'learnt':
embedding = nn.Embedding(dictsize, self.embedding_size)
if mode == 'none':
print 'learn embeddings from scratch...'
initrange = 0.5 / self.embedding_size
embedding.weight.data.uniform_(-initrange, initrange)
self.final_embeddings = embedding
elif mode == 'seed':
print 'seed by word2vec vectors....'
embedding.weight.data = torch.FloatTensor(seed)
self.final_embeddings = embedding
else:
print 'using learnt word2vec embeddings...'
self.final_embeddings = seed
#remove any references you may have that inhibits garbage collection
def remove_refs(self, item):
return
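# Minimal construction sketch (illustrative only; the sizes and the vocabulary of
# 100 tokens are made up): subclasses are built on top of ModelAbs and then given
# an embedding via set_learnable_embedding before calling forward.
# model = ModelSequentialRNN(hidden_size=256, embedding_size=128, num_classes=1, intermediate=False)
# model.set_learnable_embedding('none', dictsize=100)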
class ModelSequentialRNN(ModelAbs):
"""
Prediction at every hidden state of the unrolled rnn.
Input - sequence of tokens processed in sequence by the lstm
Output - predictions at the every hidden state
uses lstm and linear setup of ModelAbs
each hidden state is given as a seperate batch to the linear layer
"""
def __init__(self, hidden_size, embedding_size, num_classes, intermediate):
super(ModelSequentialRNN, self).__init__(hidden_size, embedding_size, num_classes)
if intermediate:
self.name = 'sequential RNN intermediate'
else:
self.name = 'sequential RNN'
self.intermediate = intermediate
def forward(self, item):
self.hidden_token = self.init_hidden()
#convert to tensor
if self.mode == 'learnt':
acc_embeds = []
for token in item.x:
acc_embeds.append(self.final_embeddings[token])
embeds = torch.FloatTensor(acc_embeds)
else:
embeds = self.final_embeddings(torch.LongTensor(item.x))
#prepare for lstm - seq len, batch size, embedding size
seq_len = embeds.shape[0]
embeds_for_lstm = embeds.unsqueeze(1)
#lstm outputs
#output, (h_n,c_n)
#output - (seq_len, batch = 1, hidden_size * directions) - h_t for each t final layer only
#h_n - (layers * directions, batch = 1, hidden_size) - h_t for t = seq_len
#c_n - (layers * directions, batch = 1, hidden_size) - c_t for t = seq_len
#lstm inputs
#input, (h_0, c_0)
#input - (seq_len, batch, input_size)
lstm_out, self.hidden_token = self.lstm_token(embeds_for_lstm, self.hidden_token)
if self.intermediate:
#input to linear - seq_len, hidden_size (seq_len is the batch size for the linear layer)
#output - seq_len, num_classes
values = self.linear(lstm_out[:,0,:].squeeze()).squeeze()
else:
#input to linear - hidden_size
#output - num_classes
values = self.linear(self.hidden_token[0].squeeze()).squeeze()
return values
class ModelHierarchicalRNN(ModelAbs):
"""
Prediction at every hidden state of the unrolled rnn for instructions.
Input - sequence of tokens processed in sequence by the lstm but seperated into instructions
Output - predictions at the every hidden state
lstm predicting instruction embedding for sequence of tokens
lstm_ins processes sequence of instruction embeddings
linear layer process hidden states to produce output
"""
def __init__(self, hidden_size, embedding_size, num_classes, intermediate):
super(ModelHierarchicalRNN, self).__init__(hidden_size, embedding_size, num_classes)
self.hidden_ins = self.init_hidden()
self.lstm_ins = nn.LSTM(self.hidden_size, self.hidden_size)
if intermediate:
self.name = 'hierarchical RNN intermediate'
else:
self.name = 'hierarchical RNN'
self.intermediate = intermediate
def copy(self, model):
self.linear = model.linear
self.lstm_token = model.lstm_token
self.lstm_ins = model.lstm_ins
def forward(self, item):
self.hidden_token = self.init_hidden()
self.hidden_ins = self.init_hidden()
# each row stores an instruction hidden state, so the width is hidden_size
ins_embeds = autograd.Variable(torch.zeros(len(item.x), self.hidden_size))
for i, ins in enumerate(item.x):
if self.mode == 'learnt':
acc_embeds = []
for token in ins:
acc_embeds.append(self.final_embeddings[token])
token_embeds = torch.FloatTensor(acc_embeds)
else:
token_embeds = self.final_embeddings(torch.LongTensor(ins))
#token_embeds = torch.FloatTensor(ins)
token_embeds_lstm = token_embeds.unsqueeze(1)
out_token, hidden_token = self.lstm_token(token_embeds_lstm,self.hidden_token)
ins_embeds[i] = hidden_token[0].squeeze()
ins_embeds_lstm = ins_embeds.unsqueeze(1)
out_ins, hidden_ins = self.lstm_ins(ins_embeds_lstm, self.hidden_ins)
if self.intermediate:
values = self.linear(out_ins[:,0,:]).squeeze()
else:
values = self.linear(hidden_ins[0].squeeze()).squeeze()
return values
class ModelHierarchicalRNNRelational(ModelAbs):
def __init__(self, hidden_size, embedding_size, num_classes):
super(ModelHierarchicalRNNRelational, self).__init__(hidden_size, embedding_size, num_classes)
self.hidden_ins = self.init_hidden()
self.lstm_ins = nn.LSTM(self.hidden_size, self.hidden_size)
self.linearg1 = nn.Linear(2 * self.hidden_size, self.hidden_size)
self.linearg2 = nn.Linear(self.hidden_size, self.hidden_size)
def forward(self, item):
self.hidden_token = self.init_hidden()
self.hidden_ins = self.init_hidden()
ins_embeds = autograd.Variable(torch.zeros(len(item.x),self.hidden_size))
for i, ins in enumerate(item.x):
if self.mode == 'learnt':
acc_embeds = []
for token in ins:
acc_embeds.append(self.final_embeddings[token])
token_embeds = torch.FloatTensor(acc_embeds)
else:
token_embeds = self.final_embeddings(torch.LongTensor(ins))
#token_embeds = torch.FloatTensor(ins)
token_embeds_lstm = token_embeds.unsqueeze(1)
out_token, hidden_token = self.lstm_token(token_embeds_lstm,self.hidden_token)
ins_embeds[i] = hidden_token[0].squeeze()
ins_embeds_lstm = ins_embeds.unsqueeze(1)
out_ins, hidden_ins = self.lstm_ins(ins_embeds_lstm, self.hidden_ins)
seq_len = len(item.x)
g_variable = autograd.Variable(torch.zeros(self.hidden_size))
for i in range(seq_len):
for j in range(i,seq_len):
concat = torch.cat((out_ins[i].squeeze(),out_ins[j].squeeze()),0)
g1 = nn.functional.relu(self.linearg1(concat))
g2 = nn.functional.relu(self.linearg2(g1))
g_variable += g2
output = self.linear(g_variable)
return output
class ModelSequentialRNNComplex(nn.Module):
"""
Prediction using the final hidden state of the unrolled rnn.
Input - sequence of tokens processed in sequence by the lstm
Output - the final value to be predicted
we do not derive from ModelAbs, but instead use a bidirectional, multi layer
lstm and a deep MLP with non-linear activation functions to predict the final output
"""
def __init__(self, embedding_size):
super(ModelSequentialRNNComplex, self).__init__()
self.name = 'sequential RNN'
self.hidden_size = 256
self.embedding_size = embedding_size
self.layers = 2
self.directions = 1
self.is_bidirectional = (self.directions == 2)
self.lstm_token = torch.nn.LSTM(input_size = self.embedding_size,
hidden_size = self.hidden_size,
num_layers = self.layers,
bidirectional = self.is_bidirectional)
self.linear1 = nn.Linear(self.layers * self.directions * self.hidden_size, self.hidden_size)
self.linear2 = nn.Linear(self.hidden_size,1)
self.hidden_token = self.init_hidden()
def init_hidden(self):
return (autograd.Variable(torch.zeros(self.layers * self.directions, 1, self.hidden_size)),
autograd.Variable(torch.zeros(self.layers * self.directions, 1, self.hidden_size)))
def forward(self, item):
self.hidden_token = self.init_hidden()
#convert to tensor
if self.mode == 'learnt':
acc_embeds = []
for token in item.x:
acc_embeds.append(self.final_embeddings[token])
embeds = torch.FloatTensor(acc_embeds)
else:
embeds = self.final_embeddings(torch.LongTensor(item.x))
#prepare for lstm - seq len, batch size, embedding size
seq_len = embeds.shape[0]
embeds_for_lstm = embeds.unsqueeze(1)
lstm_out, self.hidden_token = self.lstm_token(embeds_for_lstm, self.hidden_token)
f1 = nn.functional.relu(self.linear1(self.hidden_token[0].squeeze().view(-1)))
f2 = self.linear2(f1)
return f2
| 34.358491 | 106 | 0.645158 | 1,372 | 10,926 | 4.943149 | 0.159621 | 0.066352 | 0.051607 | 0.023887 | 0.552197 | 0.483486 | 0.448393 | 0.418313 | 0.409614 | 0.383958 | 0 | 0.007126 | 0.267893 | 10,926 | 317 | 107 | 34.466877 | 0.84073 | 0.106718 | 0 | 0.505952 | 0 | 0 | 0.030209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.047619 | null | null | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c00b39003b7029e872ee5035a075a953fa079f3 | 9,669 | py | Python | ddsc/config.py | Duke-GCB/DukeDSClient | 7f119a5ee2e674e8deaff1f080caed1953c5cc61 | [
"MIT"
] | 4 | 2020-06-18T12:30:13.000Z | 2020-10-12T21:25:54.000Z | ddsc/config.py | Duke-GCB/DukeDSClient | 7f119a5ee2e674e8deaff1f080caed1953c5cc61 | [
"MIT"
] | 239 | 2016-02-18T14:44:08.000Z | 2022-03-11T14:38:56.000Z | ddsc/config.py | Duke-GCB/DukeDSClient | 7f119a5ee2e674e8deaff1f080caed1953c5cc61 | [
"MIT"
] | 10 | 2016-02-22T15:01:28.000Z | 2022-02-21T22:46:26.000Z | """ Global configuration for the utility based on config files and environment variables."""
import os
import re
import math
import yaml
import multiprocessing
from ddsc.core.util import verify_file_private
from ddsc.exceptions import DDSUserException
try:
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
GLOBAL_CONFIG_FILENAME = '/etc/ddsclient.conf'
LOCAL_CONFIG_FILENAME = '~/.ddsclient'
LOCAL_CONFIG_ENV = 'DDSCLIENT_CONF'
DUKE_DATA_SERVICE_URL = 'https://api.dataservice.duke.edu/api/v1'
D4S2_SERVICE_URL = 'https://datadelivery.genome.duke.edu/api/v1'
MB_TO_BYTES = 1024 * 1024
DDS_DEFAULT_UPLOAD_CHUNKS = 100 * MB_TO_BYTES
DDS_DEFAULT_DOWNLOAD_CHUNK_SIZE = 20 * MB_TO_BYTES
AUTH_ENV_KEY_NAME = 'DUKE_DATA_SERVICE_AUTH'
# when uploading skip .DS_Store, our key file, and ._ (resource fork metadata)
FILE_EXCLUDE_REGEX_DEFAULT = '^\.DS_Store$|^\.ddsclient$|^\.\_'
MAX_DEFAULT_WORKERS = 8
GET_PAGE_SIZE_DEFAULT = 100 # fetch 100 items per page
DEFAULT_FILE_DOWNLOAD_RETRIES = 5
DEFAULT_BACKING_STORAGE = "dds"
def get_user_config_filename():
user_config_filename = os.environ.get(LOCAL_CONFIG_ENV)
if user_config_filename:
return user_config_filename
else:
return LOCAL_CONFIG_FILENAME
def create_config(allow_insecure_config_file=False):
"""
Create config based on /etc/ddsclient.conf and ~/.ddsclient.conf($DDSCLIENT_CONF)
:param allow_insecure_config_file: bool: when true we will not check ~/.ddsclient permissions.
:return: Config with the configuration to use for DDSClient.
"""
config = Config()
config.add_properties(GLOBAL_CONFIG_FILENAME)
user_config_filename = get_user_config_filename()
if user_config_filename == LOCAL_CONFIG_FILENAME and not allow_insecure_config_file:
verify_file_private(user_config_filename)
config.add_properties(user_config_filename)
return config
def default_num_workers():
"""
Return the number of workers to use as default if not specified by a config file.
Returns the number of CPUs or MAX_DEFAULT_WORKERS (whichever is less).
"""
return min(multiprocessing.cpu_count(), MAX_DEFAULT_WORKERS)
class Config(object):
"""
Global configuration object based on config files an environment variables.
"""
URL = 'url' # specifies the dataservice host we are connecting too
USER_KEY = 'user_key' # user key: /api/v1/current_user/api_key
AGENT_KEY = 'agent_key' # software_agent key: /api/v1/software_agents/{id}/api_key
AUTH = 'auth' # Holds actual auth token for connecting to the dataservice
UPLOAD_BYTES_PER_CHUNK = 'upload_bytes_per_chunk' # bytes per chunk we will upload
UPLOAD_WORKERS = 'upload_workers' # how many worker processes used for uploading
DOWNLOAD_WORKERS = 'download_workers' # how many worker processes used for downloading
DOWNLOAD_BYTES_PER_CHUNK = 'download_bytes_per_chunk' # bytes per chunk we will download
DEBUG_MODE = 'debug' # show stack traces
D4S2_URL = 'd4s2_url' # url for use with the D4S2 (share/deliver service)
FILE_EXCLUDE_REGEX = 'file_exclude_regex' # allows customization of which filenames will be uploaded
GET_PAGE_SIZE = 'get_page_size' # page size used for GET pagination requests
STORAGE_PROVIDER_ID = 'storage_provider_id' # setting to override the default storage provider
FILE_DOWNLOAD_RETRIES = 'file_download_retries' # number of times to retry a failed file download
BACKING_STORAGE = 'backing_storage' # backing storage either "dds" or "azure"
def __init__(self):
self.values = {}
def add_properties(self, filename):
"""
Add properties to config based on filename replacing previous values.
:param filename: str path to YAML file to pull top level properties from
"""
filename = os.path.expanduser(filename)
if os.path.exists(filename):
with open(filename, 'r') as yaml_file:
config_data = yaml.safe_load(yaml_file)
if config_data:
self.update_properties(config_data)
else:
raise DDSUserException("Error: Empty config file {}".format(filename))
def update_properties(self, new_values):
"""
Add items in new_values to the internal list replacing existing values.
:param new_values: dict properties to set
"""
self.values = dict(self.values, **new_values)
@property
def url(self):
"""
Specifies the dataservice host we are connecting too.
:return: str url to a dataservice host
"""
return self.values.get(Config.URL, DUKE_DATA_SERVICE_URL)
def get_portal_url_base(self):
"""
Determine root url of the data service from the url specified.
:return: str root url of the data service (eg: https://dataservice.duke.edu)
"""
api_url = urlparse(self.url).hostname
portal_url = re.sub('^api\.', '', api_url)
portal_url = re.sub(r'api', '', portal_url)
return portal_url
@property
def user_key(self):
"""
Contains user key user created from /api/v1/current_user/api_key used to create a login token.
:return: str user key that can be used to create an auth token
"""
return self.values.get(Config.USER_KEY, None)
@property
def agent_key(self):
"""
Contains user agent key created from /api/v1/software_agents/{id}/api_key used to create a login token.
:return: str agent key that can be used to create an auth token
"""
return self.values.get(Config.AGENT_KEY, None)
@property
def auth(self):
"""
Contains the auth token for use with connecting to the dataservice.
:return:
"""
return self.values.get(Config.AUTH, os.environ.get(AUTH_ENV_KEY_NAME, None))
@property
def upload_bytes_per_chunk(self):
"""
Return the bytes per chunk to be sent to external store.
:return: int bytes per upload chunk
"""
value = self.values.get(Config.UPLOAD_BYTES_PER_CHUNK, DDS_DEFAULT_UPLOAD_CHUNKS)
return Config.parse_bytes_str(value)
@property
def upload_workers(self):
"""
Return the number of parallel works to use when uploading a file.
:return: int number of workers. Specify None or 1 to disable parallel uploading
"""
return self.values.get(Config.UPLOAD_WORKERS, default_num_workers())
@property
def download_workers(self):
"""
Return the number of parallel works to use when downloading a file.
:return: int number of workers. Specify None or 1 to disable parallel downloading
"""
default_workers = int(math.ceil(default_num_workers()))
return self.values.get(Config.DOWNLOAD_WORKERS, default_workers)
@property
def download_bytes_per_chunk(self):
return self.values.get(Config.DOWNLOAD_BYTES_PER_CHUNK, DDS_DEFAULT_DOWNLOAD_CHUNK_SIZE)
@property
def debug_mode(self):
"""
Return true if we should show stack traces on error.
:return: boolean True if debugging is enabled
"""
return self.values.get(Config.DEBUG_MODE, False)
@property
def d4s2_url(self):
"""
Returns url for D4S2 service or '' if not setup.
:return: str url
"""
return self.values.get(Config.D4S2_URL, D4S2_SERVICE_URL)
@staticmethod
def parse_bytes_str(value):
"""
Given a value return the integer number of bytes it represents.
Trailing "MB" causes the value multiplied by 1024*1024
:param value:
:return: int number of bytes represented by value.
"""
if type(value) == str:
if "MB" in value:
return int(value.replace("MB", "")) * MB_TO_BYTES
else:
return int(value)
else:
return value
@property
def file_exclude_regex(self):
"""
Returns regex that should be used to filter out filenames.
:return: str: regex that when matches we should exclude a file from uploading.
"""
return self.values.get(Config.FILE_EXCLUDE_REGEX, FILE_EXCLUDE_REGEX_DEFAULT)
@property
def page_size(self):
"""
Returns the page size used to fetch paginated lists from DukeDS.
For DukeDS APIs that fail related to timeouts lowering this value can help.
:return:
"""
return self.values.get(Config.GET_PAGE_SIZE, GET_PAGE_SIZE_DEFAULT)
@property
def storage_provider_id(self):
"""
Returns storage provider id from /api/v1/storage_providers DukeDS API or None to use default.
:return: str: uuid of storage provider
"""
return self.values.get(Config.STORAGE_PROVIDER_ID, None)
@property
def file_download_retries(self):
"""
Returns number of times to retry failed external file downloads
:return: int: number of retries allowed before failure
"""
return self.values.get(Config.FILE_DOWNLOAD_RETRIES, DEFAULT_FILE_DOWNLOAD_RETRIES)
@property
def backing_storage(self):
return self.values.get(Config.BACKING_STORAGE, DEFAULT_BACKING_STORAGE)
| 39.145749 | 114 | 0.664288 | 1,247 | 9,669 | 4.945469 | 0.2085 | 0.029188 | 0.03162 | 0.046214 | 0.243392 | 0.184855 | 0.111886 | 0.092427 | 0.067456 | 0.067456 | 0 | 0.007549 | 0.260213 | 9,669 | 246 | 115 | 39.304878 | 0.854606 | 0.361361 | 0 | 0.145038 | 0 | 0 | 0.076479 | 0.021825 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175573 | false | 0 | 0.076336 | 0.015267 | 0.549618 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c0c6d4c1880f94e8964a089e665e4a8c770f8e9 | 616 | py | Python | app/app/models/task.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | app/app/models/task.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | app/app/models/task.py | gooocho/fastapi_todo | b88177e651f1c6984a636262a4d686935b67ed6f | [
"MIT"
] | null | null | null | from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship
from app.db.settings import Base
from app.models.assignment import ModelAssignment
class ModelTask(Base):
__tablename__ = "tasks"
id = Column(Integer, primary_key=True, index=True)
title = Column(String, index=True)
description = Column(String, index=True)
priority = Column(Integer, index=True)
status = Column(Integer, index=True)
users = relationship(
"ModelUser",
secondary=ModelAssignment.__tablename__,
order_by="ModelUser.id",
back_populates="tasks",
)
| 26.782609 | 54 | 0.709416 | 69 | 616 | 6.173913 | 0.507246 | 0.105634 | 0.079812 | 0.098592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196429 | 616 | 22 | 55 | 28 | 0.860606 | 0 | 0 | 0 | 0 | 0 | 0.050325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.235294 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c1092a7d5dfc15f1ac47f20c842e794dad8aa14 | 13,051 | py | Python | src/ion.py | lksrmp/paw_structure | 560a0a601a90114fd80f98096aa1f0e012121c69 | [
"Apache-2.0"
] | null | null | null | src/ion.py | lksrmp/paw_structure | 560a0a601a90114fd80f98096aa1f0e012121c69 | [
"Apache-2.0"
] | 2 | 2022-03-22T15:27:17.000Z | 2022-03-30T14:16:26.000Z | src/ion.py | lksrmp/paw_structure | 560a0a601a90114fd80f98096aa1f0e012121c69 | [
"Apache-2.0"
] | null | null | null | """
paw_structure.ion
-----------------
Ion complex detection using geometric :ref:`algorithm<Control_ION_algorithm>`.
Main routine is :func:`.ion_find_parallel`.
Dependencies:
:py:mod:`functools`
:py:mod:`miniutils`
:py:mod:`numpy`
:py:mod:`pandas`
:mod:`.neighbor`
:mod:`.utility`
:class:`.Snap`
.. autosummary::
ion_find_parallel
ion_load
ion_save
ion_single
"""
import numpy as np
import pandas as pd
from functools import partial
import miniutils.progress_bar as progress
# MODULES WITHIN PROJECT
from . import neighbor
from . import utility
from .tra import Snap
########################################################################################################################
# FIND ION COMPLEX FOR A SINGLE SNAPSHOT
########################################################################################################################
# INPUT
# class Snap snap snapshot containing all information
# str id1 identifier for atom used as center (e.g. 'MN'); only one allowed to be in snap
# str id2 identifier for atoms as possible first neighbors (e.g. 'O_')
# str id3 identifier for atoms as possible neighbors of first neighbors (e.g. 'H_')
# float cut1 cutoff distance for first neighbor search
# float cut2 cutoff distance for second neighbor search
#####
# OUTPUT
# pandas DataFrame contains the whole complex centered around id1
########################################################################################################################
def ion_single(snap, id1, id2, id3, cut1, cut2):
"""
Find ion complex of a single snapshot of atomic positions.
Args:
snap (:class:`.Snap`): single snapshot containing the atomic information
id1 (str): identifier for atom used as center (e.g. 'MN')
id2 (str): identifier for atoms as possible first neighbors (e.g. 'O\_')
id3 (str): identifier for atoms as possible neighbors of first neighbors (e.g. 'H\_')
cut1 (float): cutoff distance for first neighbor search
cut2 (float): cutoff distance for second neighbor search
Returns:
:class:`.Snap`: snapshot containing an ion complex
Todo:
Implement possibility for more atoms of type id1 or allow selection by name.
"""
# check if only one atom is selected as ion
if len(snap.atoms[snap.atoms['id'] == id1]) != 1:
utility.err('ion_single', 0, [len(snap.atoms[snap.atoms['id'] == id1])])
# check if all three are different species
if id1 == id2 or id2 == id3 or id1 == id3:
utility.err('ion_single', 1, [id1, id2, id3])
# search first neighbors
next1 = neighbor.neighbor_name(snap, id1, id2, cut1)
# extract name lists
id1_list = [atom[0] for atom in next1]
id2_list = [y for x in [atom[1:] for atom in next1] for y in x]
# search second neighbors
next2 = neighbor.neighbor_name(snap, id2, id3, cut2, names=id2_list)
# extract name list
id3_list = [y for x in [atom[1:] for atom in next2] for y in x]
# extract correct atom information
id1_list = snap.atoms.loc[snap.atoms['name'].isin(id1_list)]
id2_list = snap.atoms.loc[snap.atoms['name'].isin(id2_list)]
id3_list = snap.atoms.loc[snap.atoms['name'].isin(id3_list)]
comp = pd.concat([id1_list, id2_list, id3_list])
return Snap(snap.iter, snap.time, snap.cell, None, None, dataframe=comp)
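# Usage sketch (illustrative; cutoffs follow the commented defaults elsewhere in
# this module and `snap` is assumed to be an already loaded Snap object):
# comp = ion_single(snap, 'MN', 'O_', 'H_', 3.0, 1.4)
# print(comp.atoms)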
########################################################################################################################
# SAVE INFORMATION FROM ion_find TO FILE <root>.ext FOR LATER ANALYSIS
# TODO: check if snapshots is empty
########################################################################################################################
# INPUT
# str root root name for saving file
# list class Snap snapshots list with information to be saved
# str id1 identifier for atom used as center (e.g. 'MN'); only one allowed to be in snap
# str id2 identifier for atoms as possible first neighbors (e.g. 'O_')
# str id3 identifier for atoms as possible neighbors of first neighbors (e.g. 'H_')
# float cut1 cutoff distance for first neighbor search
# float cut2 cutoff distance for second neighbor search
# str ext (optional) extension for the saved file: name = root + ext
########################################################################################################################
def ion_save(root, snapshots, id1, id2, id3, cut1, cut2, ext='.ion'):
"""
Save results to file :ref:`Output_ion`.
Args:
root (str): root name for saving file
snapshots (list[:class:`.Snap`]): list of snapshots containing an ion complex
id1 (str): identifier for atom used as center (e.g. 'MN')
id2 (str): identifier for atoms as possible first neighbors (e.g. 'O\_')
id3 (str): identifier for atoms as possible neighbors of first neighbors (e.g. 'H\_')
cut1 (float): cutoff distance for first neighbor search
cut2 (float): cutoff distance for second neighbor search
ext (str, optional): default ".ion" - extension for the saved file: name = root + ext
Todo:
Check if snapshots is empty.
"""
# open file
path = root + ext
try:
f = open(path, 'w')
except IOError:
utility.err_file('ion_save', path)
# write header
f.write(utility.write_header())
f.write("ION COMPLEXES\n")
f.write("%-14s%14.8f\n" % ("T1", snapshots[0].time))
f.write("%-14s%14.8f\n" % ("T2", snapshots[-1].time))
f.write("%-14s%14d\n" % ("SNAPSHOTS", len(snapshots)))
f.write("%-14s%14s\n" % ("ID1", id1))
f.write("%-14s%14s\n" % ("ID2", id2))
f.write("%-14s%14s\n" % ("ID3", id3))
f.write("%-14s%14.8f\n" % ("CUT1", cut1))
f.write("%-14s%14.8f\n" % ("CUT2", cut2))
f.write("%-14s\n" % ("UNIT CELL"))
np.savetxt(f, snapshots[0].cell, fmt="%14.8f")
# write structure information
for i in range(len(snapshots)):
f.write("-" * 84 + "\n")
f.write("%-14s%-14.8f%-14s%-14d%-14s%-14d\n" %
("TIME", snapshots[i].time, "ITERATION", snapshots[i].iter, "ATOMS", len(snapshots[i].atoms)))
f.write("%-14s%-14s%-14s%14s%14s%14s\n" % ('NAME', 'ID', 'INDEX', 'X', 'Y', 'Z'))
np.savetxt(f, snapshots[i].atoms, fmt="%-14s%-14s%-14d%14.8f%14.8f%14.8f")
f.close()
return
########################################################################################################################
# LOAD INFORMATION PREVIOUSLY SAVED BY ion_save()
# WARNING: READING IS LINE SENSITIVE! ONLY USE ON UNCHANGED FILES WRITTEN BY ion_save()
########################################################################################################################
# INPUT
# str root root name for the file to be loaded
# str ext (optional) extension for the file to be loaded: name = root + ext
#####
# OUTPUT
# list class Snap snapshots list of all information
########################################################################################################################
def ion_load(root, ext='.ion'):
"""
Load information from the :ref:`Output_ion` file previously created by :func:`.ion_save`.
Args:
root (str): root name for the file to be loaded
ext (str, optional): default ".ion" - extension for the file to be loaded: name = root + ext
Returns:
list[:class:`.Snap`]: list of snapshots containing an ion complex
Note:
Reading is line sensitive. Do not alter the output file before loading.
"""
path = root + ext
try:
f = open(path, 'r')
except IOError:
utility.err_file('ion_load', path)
text = f.readlines() # read text as lines
for i in range(len(text)):
text[i] = text[i].split() # split each line into list with strings as elements
snapshots = [] # storage list
for i in range(len(text)):
if len(text[i]) > 1:
if text[i][0] == 'UNIT':
cell = np.array(text[i+1:i+4], dtype=float) # get unit cell
if text[i][0] == "TIME": # search for trigger of new snapshot
iter = int(text[i][3])
time = float(text[i][1])
n_atoms = int(text[i][5])
test = np.array(text[i + 2:i + 2 + n_atoms])
atoms = {}
atoms['name'] = test[:, 0]
atoms['id'] = test[:, 1]
atoms['index'] = np.array(test[:, 2], dtype=int)
df = pd.DataFrame(data=atoms)
# save information as class Snap
snapshots.append(Snap(iter, time, cell, np.array(test[:, 3:6], dtype=np.float64), df))
return snapshots
########################################################################################################################
# FIND ION COMPLEXES IN MULTIPLE SNAPSHOTS
# WARNING: NOT IN USE BECAUSE NO PARALLEL COMPUTING
########################################################################################################################
# INPUT
# str root root name for saving file
# list class Snap snapshots list with information to be saved
# str id1 identifier for atom used as center (e.g. 'MN'); only one allowed to be in snap
# str id2 identifier for atoms as possible first neighbors (e.g. 'O_')
# str id3 identifier for atoms as possible neighbors of first neighbors (e.g. 'H_')
# float cut1 (optional) cutoff distance for first neighbor search
# float cut2 (optional) cutoff distance for second neighbor search
#####
# OUTPUT
# list class Snap complex list with all ion complexes found
########################################################################################################################
# def ion_find(root, snapshots, id1, id2, id3, cut1=3.0, cut2=1.4):
# complex = []
# # loop through different snapshots
# for snap in snapshots:
# # get complex information
# comp = ion_single(snap, id1, id2, id3, cut1, cut2)
# # append Snap object for data storage
# complex.append(Snap(snap.iter, snap.time, snap.cell, None, None, dataframe=comp))
# # save information to file
# ion_save(root, complex, id1, id2, id3, cut1, cut2)
# return complex
########################################################################################################################
# ROUTINE TO FIND ION COMPLEXES FOR MULTIPLE SNAPSHOTS
# PARALLEL VERSION OF ion_find() WITH PROGRESS BAR IN CONSOLE
########################################################################################################################
# INPUT
# str root root name for saving file
# list class Snap snapshots list with information to be saved
# str id1 identifier for atom used as center (e.g. 'MN'); only one allowed to be in snap
# str id2 identifier for atoms as possible first neighbors (e.g. 'O_')
# str id3 identifier for atoms as possible neighbors of first neighbors (e.g. 'H_')
# float cut1 (optional) cutoff distance for first neighbor search
# float cut2 (optional) cutoff distance for second neighbor search
#####
# OUTPUT
# list class Snap ion_comp list of ion complexes found
########################################################################################################################
def ion_find_parallel(root, snapshots, id1, id2, id3, cut1, cut2):
"""
Find ion complexes for multiple snapshots of atomic configurations.
Args:
root (str): root name of the files
snapshots (list[:class:`.Snap`]): list of snapshots containing the atomic information
id1 (str): identifier for atom used as center (e.g. 'MN')
id2 (str): identifier for atoms as possible first neighbors (e.g. 'O\_')
id3 (str): identifier for atoms as possible neighbors of first neighbors (e.g. 'H\_')
cut1 (float): cutoff distance for first neighbor search
cut2 (float): cutoff distance for second neighbor search
Returns:
list[:class:`.Snap`]: list of snapshots containing an ion complex
Parallelization based on :py:mod:`multiprocessing`.
Note:
Only one atom of type :data:`id1` allowed to be in a snapshot at the moment.
"""
print("ION COMPLEX DETECTION IN PROGRESS")
# set other arguments (necessary for parallel computing)
multi_one = partial(ion_single, id1=id1, id2=id2, id3=id3, cut1=cut1, cut2=cut2)
# run data extraction
ion_comp = progress.parallel_progbar(multi_one, snapshots)
# create output file
ion_save(root, ion_comp, id1, id2, id3, cut1, cut2)
print("ION COMPLEX DETECTION FINISHED")
return ion_comp
| 46.610714 | 120 | 0.539192 | 1,598 | 13,051 | 4.357947 | 0.154568 | 0.039202 | 0.036186 | 0.040207 | 0.540063 | 0.517088 | 0.454049 | 0.428489 | 0.374067 | 0.374067 | 0 | 0.023986 | 0.223738 | 13,051 | 279 | 121 | 46.777778 | 0.663409 | 0.541185 | 0 | 0.098765 | 0 | 0 | 0.113885 | 0.024961 | 0 | 0 | 0 | 0.010753 | 0 | 1 | 0.049383 | false | 0 | 0.08642 | 0 | 0.185185 | 0.024691 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c14a991d87c2cf60960166a41fc56bf0090a775 | 11,475 | py | Python | python/3D-rrt/pvtrace/Visualise.py | rapattack88/mcclanahoochie | 6df72553ba954b52e949a6847a213b22f9e90157 | [
"Apache-2.0"
] | 1 | 2020-12-27T21:37:35.000Z | 2020-12-27T21:37:35.000Z | python/3D-rrt/pvtrace/Visualise.py | rapattack88/mcclanahoochie | 6df72553ba954b52e949a6847a213b22f9e90157 | [
"Apache-2.0"
] | null | null | null | python/3D-rrt/pvtrace/Visualise.py | rapattack88/mcclanahoochie | 6df72553ba954b52e949a6847a213b22f9e90157 | [
"Apache-2.0"
] | null | null | null | # pvtrace is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# pvtrace is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import division
try:
import visual
VISUAL_INSTALLED = True
print "Python module visual is installed..."
except:
print "Python module visual is not installed... telling all Visualiser object to not render."
VISUAL_INSTALLED = False
import numpy as np
import Geometry as geo
import ConstructiveGeometry as csg
import external.transformations as tf
class Visualiser (object):
"""Visualiser a class that converts project geometry objects into vpython objects and draws them. It can be used programmatically: just add objects as they are created and the changes will update in the display."""
VISUALISER_ON = True
if not VISUAL_INSTALLED:
VISUALISER_ON = False
def __init__(self, background=(0,0,0), ambient=1.):
super(Visualiser, self).__init__()
if not Visualiser.VISUALISER_ON:
return
self.display = visual.display(title='PVTrace', x=0, y=0, width=800, height=600, background=background, ambient=ambient)
self.display.exit = False
visual.curve(pos=[(0,0,0), (.2,0,0)], radius=0.001, color=visual.color.red)
visual.curve(pos=[(0,0,0), (0,.2,0)], radius=0.001, color=visual.color.green)
visual.curve(pos=[(0,0,0), (0,0,.2)], radius=0.001, color=visual.color.blue)
visual.label(pos=(.22, 0, 0), text='X', linecolor=visual.color.red)
visual.label(pos=(0, .22, 0), text='Y', linecolor=visual.color.green)
visual.label(pos=(0, 0, .22), text='Z', linecolor=visual.color.blue)
def addBox(self, box, colour=None):
if not Visualiser.VISUALISER_ON:
return
if isinstance(box, geo.Box):
if colour == None:
colour = visual.color.red
org = geo.transform_point(box.origin, box.transform)
ext = geo.transform_point(box.extent, box.transform)
print "Visualiser: box origin=%s, extent=%s" % (str(org), str(ext))
size = np.abs(ext - org)
pos = org + 0.5*size
print "Visualiser: box position=%s, size=%s" % (str(pos), str(size))
angle, direction, point = tf.rotation_from_matrix(box.transform)
print "colour,", colour
if colour == [0,0,0]:
visual.box(pos=pos, size=size, opacity=0.3, material=visual.materials.plastic)
else:
visual.box(pos=pos, size=size, color=geo.norm(colour), opacity=0.5)
def addFinitePlane(self, plane, colour=None, opacity=0.):
if not Visualiser.VISUALISER_ON:
return
if isinstance(plane, geo.FinitePlane):
if colour == None:
colour = visual.color.blue
# visual doesn't support planes, so we draw a very thin box
H = .001
pos = (plane.length/2, plane.width/2, H/2)
pos = geo.transform_point(pos, plane.transform)
size = (plane.length, plane.width, H)
axis = geo.transform_direction((0,0,1), plane.transform)
visual.box(pos=pos, size=size, color=colour, opacity=opacity)
def addPolygon(self, polygon, colour=None):
if not Visualiser.VISUALISER_ON:
return
if isinstance(polygon, geo.Polygon):
if colour == None:
visual.convex(pos=polygon.pts, color=geo.norm([0.1,0.1,0.1]), material=visual.materials.plastic)
else:
visual.convex(pos=polygon.pts, color=geo.norm(colour), opacity=0.5)
def addConvex(self, convex, colour=None):
"""docstring for addConvex"""
if not Visualiser.VISUALISER_ON:
return
if isinstance(convex, geo.Convex):
if colour == None:
print "Color is none"
visual.convex(pos=polygon.pts, color=geo.norm([0.1,0.1,0.1]), material=visual.materials.plastic)
else:
import pdb; pdb.set_trace()
print "Colour is", geo.norm(colour)
visual.convex(pos=convex.points, color=geo.norm(colour), material=visual.materials.plastic)
def addRay(self, ray, colour=None):
if not Visualiser.VISUALISER_ON:
return
if isinstance(ray, geo.Ray):
if colour == None:
colour = visual.color.white
pos = ray.position
axis = ray.direction * 5
visual.cylinder(pos=pos, axis=axis, radius=0.0001, color=geo.norm(colour))
def addSmallSphere(self, point, colour=None):
if not Visualiser.VISUALISER_ON:
return
if colour == None:
colour = visual.color.blue
visual.sphere(pos=point, radius=0.00012, color=geo.norm(colour))
#visual.curve(pos=[point], radius=0.0005, color=geo.norm(colour))
def addLine(self, start, stop, colour=None):
if not Visualiser.VISUALISER_ON:
return
if colour == None:
colour = visual.color.white
axis = np.array(stop) - np.array(start)
visual.cylinder(pos=start, axis=axis, radius=0.0001, color=geo.norm(colour))
def addCylinder(self, cylinder, colour=None):
if not Visualiser.VISUALISER_ON:
return
if colour == None:
colour = visual.color.blue
#angle, direction, point = tf.rotation_from_matrix(cylinder.transform)
#axis = direction * cylinder.length
position = geo.transform_point([0,0,0], cylinder.transform)
axis = geo.transform_direction([0,0,1], cylinder.transform)
print cylinder.transform, "Cylinder:transform"
print position, "Cylinder:position"
print axis, "Cylinder:axis"
print colour, "Cylinder:colour"
print cylinder.radius, "Cylinder:radius"
visual.cylinder(pos=position, axis=axis, color=colour, radius=cylinder.radius, opacity=0.5, length = cylinder.length)
def addCSG(self, CSGobj, res,origin,extent,colour=None):
"""
Visualise a CSG structure in a space subset defined by xmin, xmax, ymin, .... with division factor (i.e. ~ resolution) res
"""
#INTone = Box(origin = (-1.,-1.,-0.), extent = (1,1,3))
#INTtwo = Box(origin = (-0.5,-0.5,0), extent = (0.5,0.5,3))
#INTtwo.append_transform(tf.translation_matrix((0,0.5,0)))
#INTtwo.append_transform(tf.rotation_matrix(np.pi/4, (0,0,1)))
#CSGobj = CSGsub(INTone, INTtwo)
#xmin = -1.
#xmax = 1.
#ymin = -1.
#ymax = 1.
#zmin = -1.
#zmax = 3.
#resolution = 0.05
#print "Resolution: ", res
xmin = origin[0]
xmax = extent[0]
ymin = origin[1]
ymax = extent[1]
zmin = origin[2]
zmax = extent[2]
"""
Determine Voxel size from resolution
"""
voxelextent = (res*(xmax-xmin), res*(ymax-ymin), res*(zmax-zmin))
pex = voxelextent
"""
Scan space
"""
x = xmin
y = ymin
z = zmin
print 'Visualisation of ', CSGobj.reference, ' started...'
while x < xmax:
y=ymin
z=zmin
while y < ymax:
z = zmin
while z < zmax:
pt = (x, y, z)
if CSGobj.contains(pt):
origin = (pt[0]-pex[0]/2, pt[1]-pex[1]/2, pt[2]-pex[2]/2)
extent = (pt[0]+pex[0]/2, pt[1]+pex[1]/2, pt[2]+pex[2]/2)
voxel = geo.Box(origin = origin, extent = extent)
self.addCSGvoxel(voxel, colour=colour)
z = z + res*(zmax-zmin)
y = y + res*(ymax-ymin)
x = x + res*(xmax-xmin)
print 'Complete.'
def addCSGvoxel(self, box, colour):
"""
16/03/10: To visualise CSG objects
"""
if colour == None:
colour = visual.color.red
org = box.origin
ext = box.extent
size = np.abs(ext - org)
pos = org + 0.5*size
visual.box(pos=pos, size=size, color=colour, opacity=0.2)
def addPhoton(self, photon):
"""Draws a smallSphere with direction arrow and polariation (if data is avaliable)."""
self.addSmallSphere(photon.position)
visual.arrow(pos=photon.position, axis=photon.direction * 0.0005, shaftwidth=0.0003, color=visual.color.magenta, opacity=0.8)
if photon.polarisation != None:
visual.arrow(pos=photon.position, axis=photon.polarisation * 0.0005, shaftwidth=0.0003, color=visual.color.white, opacity=0.4 )
def addObject(self, obj, colour=None, opacity=0.5, res=0.05, origin=(-0.02,-0.02,0.), extent = (0.02,0.02,1.)):
if not Visualiser.VISUALISER_ON:
return
if isinstance(obj, geo.Box):
self.addBox(obj, colour=colour)
if isinstance(obj, geo.Ray):
self.addRay(obj, colour=colour)
if isinstance(obj, geo.Cylinder):
self.addCylinder(obj, colour=colour)
if isinstance(obj, geo.FinitePlane):
self.addFinitePlane(obj, colour, opacity)
if isinstance(obj, csg.CSGadd) or isinstance (obj, csg.CSGint) or isinstance (obj, csg.CSGsub):
self.addCSG(obj, res, origin, extent, colour)
if isinstance(obj, geo.Polygon):
self.addPolygon(obj, colour=colour)
if isinstance(obj, geo.Convex):
self.addConvex(obj, colour=colour)
if False:
box1 = geo.Box(origin=[0,0,0], extent=[2,2,2])
box2 = geo.Box(origin=[2,2,2], extent=[2.1,4,4])
ray1 = geo.Ray(position=[-1,-1,-1], direction=[1,1,1])
ray2 = geo.Ray(position=[-1,-1,-1], direction=[1,0,0])
vis = Visualiser()
vis.addObject(box1)
import time
time.sleep(1)
vis.addObject(ray1)
time.sleep(1)
vis.addObject(ray2)
time.sleep(1)
vis.addObject(box2)
time.sleep(1)
vis.addLine([0,0,0],[5,4,5])
"""
# TEST TEST TEST
vis = Visualiser()
INTone = geo.Box(origin = (-1.,-1.,-0.), extent = (1,1,3))
INTtwo = geo.Box(origin = (-0.5,-0.5,0), extent = (0.5,0.5,3))
#INTtwo.append_transform(tf.translation_matrix((0,0.5,0)))
INTtwo.append_transform(tf.rotation_matrix(np.pi/4, (0,0,1)))
myobj = csg.CSGsub(INTone, INTtwo)
#vis.addObject(INTone, colour=visual.color.green)
#vis.addObject(INTtwo, colour=visual.color.blue)
vis.addObject(myobj, res=0.05, colour = visual.color.green)
"""
| 38.506711 | 218 | 0.573943 | 1,474 | 11,475 | 4.436906 | 0.187924 | 0.008869 | 0.005046 | 0.038226 | 0.379511 | 0.350153 | 0.321254 | 0.245719 | 0.181804 | 0.168043 | 0 | 0.037152 | 0.298649 | 11,475 | 297 | 219 | 38.636364 | 0.775472 | 0.103355 | 0 | 0.28877 | 0 | 0 | 0.037932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042781 | null | null | 0.074866 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c19cbf8d9bb1b9d43bac56824e10f5d3f24bc92 | 3,092 | py | Python | hw1/deco.py | Tymeade/otus-python | b5ba2ab4f9c91abc97e6417a5600e4de1bcdb95c | [
"MIT"
] | null | null | null | hw1/deco.py | Tymeade/otus-python | b5ba2ab4f9c91abc97e6417a5600e4de1bcdb95c | [
"MIT"
] | null | null | null | hw1/deco.py | Tymeade/otus-python | b5ba2ab4f9c91abc97e6417a5600e4de1bcdb95c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from functools import update_wrapper, wraps
def disable(func):
    """
    Disable a decorator by re-assigning the decorator's name
    to this function. For example, to turn off memoization:

    >>> memo = disable
    """
    return func
def decorator(decorator_func):
    """
    Decorate a decorator so that it inherits the docstrings
    and stuff from the function it's decorating.
    """
    def wrapper(func):
        return update_wrapper(decorator_func(func), func)
    return update_wrapper(wrapper, decorator_func)
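
# A minimal sketch of the intended use (illustrative only; the decorators in
# this module apply functools.wraps directly instead):
#
# @decorator
# def noisy(func):
#     def inner(*args, **kwargs):
#         print "calling", func.__name__
#         return func(*args, **kwargs)
#     return inner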
def countcalls(func):
    """Decorator that counts calls made to the function decorated."""
    @wraps(func)
    def wrapped(*args, **kwargs):
        wrapped.calls += 1
        return func(*args, **kwargs)
    wrapped.calls = 0
    return wrapped


def memo(func):
    """
    Memoize a function so that it caches all return values for
    faster future lookups.
    """
    memory = {}

    @wraps(func)
    def decorated(*args):
        if args in memory:
            return memory[args]
        answer = func(*args)
        memory[args] = answer
        return answer
    return decorated


def n_ary(func):
    """
    Given binary function f(x, y), return an n_ary function such
    that f(x, y, z) = f(x, f(y, z)), etc. Also allow f(x) = x.
    """
    @wraps(func)
    def wrapped(*args):
        if len(args) == 1:
            return args[0]
        if len(args) == 2:
            return func(*args)
        return func(args[0], wrapped(*args[1:]))
    return wrapped
def trace(ident):
    """Trace calls made to the function decorated.

    @trace("____")
    def fib(n):
        ....

    >>> fib(3)
     --> fib(3)
    ____ --> fib(2)
    ________ --> fib(1)
    ________ <-- fib(1) == 1
    ________ --> fib(0)
    ________ <-- fib(0) == 1
    ____ <-- fib(2) == 2
    ____ --> fib(1)
    ____ <-- fib(1) == 1
     <-- fib(3) == 3
    """
    def deco(func):
        @wraps(func)
        def wrapped(*args, **kwargs):
            arguments = [str(a) for a in args] + ["%s=%s" % (key, value) for key, value in kwargs.iteritems()]
            argument_string = ",".join(arguments)
            func_name = "%s(%s)" % (func.__name__, argument_string)
            # Print before incrementing (and after decrementing) so the
            # indentation matches the example in the docstring above.
            print ident * wrapped.call_level, "-->", func_name
            wrapped.call_level += 1
            answer = func(*args, **kwargs)
            wrapped.call_level -= 1
            print ident * wrapped.call_level, "<--", func_name, "==", answer
            return answer
        wrapped.call_level = 0
        return wrapped
    return deco
@memo
@countcalls
@n_ary
def foo(a, b):
    return a + b


@countcalls
@memo
@n_ary
def bar(a, b):
    return a * b


@countcalls
@trace("####")
@memo
def fib(n):
    """Some doc"""
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)
def main():
    print foo(4, 3)
    print foo(4, 3, 2)
    print foo(4, 3)
    print "foo was called", foo.calls, "times"

    print bar(4, 3)
    print bar(4, 3, 2)
    print bar(4, 3, 2, 1)
    print "bar was called", bar.calls, "times"

    print fib.__doc__
    fib(3)
    print fib.calls, 'calls made'


if __name__ == '__main__':
    main()
| 20.342105 | 110 | 0.554334 | 411 | 3,092 | 3.961071 | 0.257908 | 0.014742 | 0.04914 | 0.035012 | 0.175061 | 0.14742 | 0.065111 | 0.04914 | 0 | 0 | 0 | 0.022664 | 0.300776 | 3,092 | 151 | 111 | 20.476821 | 0.730342 | 0.013583 | 0 | 0.276316 | 0 | 0 | 0.038023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.013158 | null | null | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c1a887bbd2d5ae6437d92ebd5ad09e7ede709e9 | 1,048 | py | Python | AprendeAyudando/forum/migrations/0001_initial.py | memoriasIT/AprendeAyudando | 0a32f59d3606075abb99a74ce1983a6171aa34cd | [
"CC0-1.0"
] | 1 | 2021-09-09T09:54:04.000Z | 2021-09-09T09:54:04.000Z | AprendeAyudando/forum/migrations/0001_initial.py | memoriasIT/AprendeAyudando | 0a32f59d3606075abb99a74ce1983a6171aa34cd | [
"CC0-1.0"
] | null | null | null | AprendeAyudando/forum/migrations/0001_initial.py | memoriasIT/AprendeAyudando | 0a32f59d3606075abb99a74ce1983a6171aa34cd | [
"CC0-1.0"
] | null | null | null | # Generated by Django 3.1.3 on 2020-11-28 23:37
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Forum',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=200)),
                ('enrolled_users', models.ManyToManyField(blank=True, related_name='forums', to=settings.AUTH_USER_MODEL)),
                ('teacher', models.ForeignKey(limit_choices_to={'groups__name': 'Profesor'}, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'verbose_name': 'Foro',
                'verbose_name_plural': 'Foros',
            },
        ),
    ]
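
# Apply with the standard command:  python manage.py migrate forum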
| 33.806452 | 179 | 0.622137 | 113 | 1,048 | 5.584071 | 0.60177 | 0.038035 | 0.07607 | 0.099842 | 0.0729 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023047 | 0.254771 | 1,048 | 30 | 180 | 34.933333 | 0.784891 | 0.042939 | 0 | 0 | 1 | 0 | 0.100899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c1c0560ca30de9f1d31898b330aaa3130eb4feb | 804 | py | Python | clay/markdown_ext/jinja.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | clay/markdown_ext/jinja.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | clay/markdown_ext/jinja.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | # coding=utf-8
import os
import jinja2
import jinja2.ext
from .render import md_to_jinja
MARKDOWN_EXTENSION = '.md'
class MarkdownExtension(jinja2.ext.Extension):
    def preprocess(self, source, name, filename=None):
        if name is None or os.path.splitext(name)[1] != MARKDOWN_EXTENSION:
            return source
        _source, meta = md_to_jinja(source)
        self.meta = meta or {}
        # Update from self.meta (never None) rather than the raw meta value.
        self.environment.globals.update(self.meta)
        return _source

    def _from_string(self, source, globals=None, template_class=None):
        env = self.environment
        globals = env.make_globals(globals)
        cls = template_class or env.template_class
        template_name = 'markdown_from_string.md'
        return cls.from_code(env, env.compile(source, template_name), globals, None)
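
# A minimal usage sketch (assumption: 'page.md' is resolvable by the loader;
# not part of the original module):
#
# env = jinja2.Environment(loader=jinja2.FileSystemLoader('.'),
#                          extensions=[MarkdownExtension])
# print(env.get_template('page.md').render())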
| 27.724138 | 84 | 0.689055 | 105 | 804 | 5.095238 | 0.390476 | 0.072897 | 0.033645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007974 | 0.220149 | 804 | 28 | 85 | 28.714286 | 0.845295 | 0.014925 | 0 | 0 | 0 | 0 | 0.032911 | 0.029114 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.210526 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c22a320a2928428c08eb5053a706398b340f23f | 2,235 | py | Python | src/predict.py | ashishnegi2000/FinalYr_1 | 14fddaa7463141a19bb6c2a25003115847f63395 | [
"MIT"
] | null | null | null | src/predict.py | ashishnegi2000/FinalYr_1 | 14fddaa7463141a19bb6c2a25003115847f63395 | [
"MIT"
] | null | null | null | src/predict.py | ashishnegi2000/FinalYr_1 | 14fddaa7463141a19bb6c2a25003115847f63395 | [
"MIT"
] | null | null | null | #Predictions performed by this module
#dependencies
import base64
import numpy as np
import io
from PIL import Image
import keras
from keras import backend as K
from keras.models import Sequential
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator, img_to_array
from model import Model, DecoderType
from main import infer2
from flask import request
from flask import jsonify
from flask import Flask
from imageio import imread
app = Flask(__name__)

"""
def get_model():
    # This function loads the already-built keras model
    global model
    model = load_model('model.h5')
    print("Model loaded!")
"""
def preprocess_image(image, target_size):
    if image.mode != "RGB":
        image = image.convert("RGB")
    image = image.resize(target_size)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    return image
"""print(" * Loading Keras model ... ")
get_model()"""
@app.route("/predict", methods=["POST"])
def predict():
    """
    Whenever something is posted to /predict, this function processes the
    info sent through the POST HTTP method.

    message: JSON from the POST request
    encoded: key is 'image', value is the base64-encoded image sent by the client
    decoded: the raw image bytes
    image: imageio's imread turns those bytes into an image array
    """
    message = request.get_json(force=True)
    encoded = message['image']
    encoded = encoded.replace("data:image/jpeg;base64,", "")
    print(encoded)
    decoded = base64.b64decode(encoded)
    image = imread(io.BytesIO(decoded))
    """
    processed_image = preprocess_image(image, target_size=(224,224))"""
    """prediction = model.predict(processed_image).tolist()"""
    model = Model(list(open("/home/shikhar/Desktop/simpleHTR/SimpleHTR/model/charList.txt").read()), decoder_type=0, must_restore=True, dump=True)
    response = infer2(model, image)
    response = {
        'text': response['text'],
        'probability': str(response['probability'])
    }
    return jsonify(response)
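
# A minimal client sketch (assumptions: the server runs locally on port 5000
# and 'test.jpg' is any JPEG on disk; not part of the original module):
#
# import base64, requests
# with open("test.jpg", "rb") as f:
#     payload = {"image": "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()}
# print(requests.post("http://localhost:5000/predict", json=payload).json())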
@app.route("/", methods=["GET"])
def hello():
    return 'Hello'
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
2c22fe7b5968c4acadcc023580c5ccbb977f7642 | 11,875 | py | Python | fair-data-model/scripts/rdfizer/rdfizer.py | longmanplus/beat-covid | fc5c88b191d7aa1e70cef8a055c25803b6d013a6 | [
"MIT"
] | 1 | 2021-11-09T23:26:49.000Z | 2021-11-09T23:26:49.000Z | fair-data-model/scripts/rdfizer/rdfizer.py | longmanplus/beat-covid | fc5c88b191d7aa1e70cef8a055c25803b6d013a6 | [
"MIT"
] | 1 | 2021-07-08T01:25:55.000Z | 2021-07-08T01:25:55.000Z | fair-data-model/scripts/rdfizer/rdfizer.py | longmanplus/beat-covid | fc5c88b191d7aa1e70cef8a055c25803b6d013a6 | [
"MIT"
] | 4 | 2020-11-16T06:31:58.000Z | 2021-07-14T12:50:23.000Z | # @name: rdfizer.py
# @description: Script to generate RDF data
# @version: 1.0
# @date: 28-04-2021
# @author: Núria Queralt Rosinach
# @email: n.queralt_rosinach@lumc.nl
"""Script to generate RDF data for Beat-COVID cytokine clinical measurements"""
import sys, os
from rdflib import Namespace, Graph, BNode, Literal
from rdflib.namespace import RDF, RDFS, XSD, DCTERMS
# Prefixes
bc = Namespace("https://rdf.biosemantics.org/resources/beat-covid/")
bco = Namespace("http://purl.org/beat-covid/cytokines-semantic-model.owl#")
obo = Namespace("http://purl.obolibrary.org/obo/")
sio = Namespace("http://semanticscience.org/resource/")
efo = Namespace("http://www.ebi.ac.uk/efo/")
prov = Namespace("http://www.w3.org/ns/prov#")
has_output = sio.SIO_000229
has_value = sio.SIO_000300
# Functions
def generate_rdf(variables_dict):
    """Generate an RDF graph for one cytokine measurement record.

    :param variables_dict: dict mapping CSV column names to the values of one row
    :return: rdflib.Graph holding the triples for this record
    """
    # binds
    rdf = Graph()
    rdf.bind("bc", bc)
    rdf.bind("bco", bco)
    rdf.bind("obo", obo)
    rdf.bind("sio", sio)
    rdf.bind("efo", efo)
    rdf.bind("prov", prov)

    # entries
    # Entity
    if not variables_dict['clinical_id']:
        variables_dict['clinical_id'] = "NA"
    person = bc["person/BEATCOVID_" + variables_dict['beat_id'] + "_CLINICAL_" + variables_dict['clinical_id']]

    # LAB MEASUREMENTS (MEASUREMENT PROCESS) MODEL
    # Identifier
    person_study_id = bc["person_study_id/" + variables_dict['beat_id']]
    # Role
    person_study_role = bc["person_study_role/BEATCOVID_" + variables_dict['record_id']
                           + "_" + variables_dict['beat_id']]
    # age
    age = bc["person_age/" + variables_dict['age']]
    # ward
    ward = bc["ward/" + variables_dict['ward']]
    # institute
    institute = bc["institute/" + variables_dict['institute_abbreviation']]
    # measurement process date
    measurement_process_date = bc["lab/measurement_process_date/BEATCOVID_"
                                  + variables_dict['lum_date_meas']]

    # BIOSAMPLES (SAMPLING PROCESS) MODEL
    # Biosample
    biosample = bc["biosample/BEATCOVID_" + variables_dict['record_id']]
    # Process
    sampling_process = bc["biosample/sampling_process/BEATCOVID_"
                          + variables_dict['record_id']]
    # order
    order = bc["biosample/order_" + variables_dict['order']]
    # sampling process date
    sampling_process_date = bc["biosample/sampling_process_date/BEATCOVID_"
                               + variables_dict['date_sampling']]
    # Attribute/object
    organ = "blood_serum"
    biosample_object = bc["object/" + organ]
    # Role
    person_donor_role = bc["person_donor_role/BEATCOVID_" + variables_dict['record_id']]
    # Identifier
    person_donor_id = bc["person_donor_id/" + variables_dict['beat_id']]
    biosample_id = bc["biosample/biosample_id/BEATCOVID_" + variables_dict['record_id']]

    # CLINICAL OBSERVATIONS (EXAMINATION PROCESS) MODEL
    # Identifier
    clinical = bc["clinical/patient_id/" + variables_dict['clinical_id']]
    # Observation
    # observation = bc["clinical/observation/BEATCOVID_" + variables_dict['clinical_observations']]

    # add triples to entry
    # LAB MEASUREMENTS (MEASUREMENT PROCESS) MODEL
    # Entity
    rdf.add((person, RDF.type, sio.SIO_000498))
    rdf.add((person, sio.SIO_000228, person_study_role))
    rdf.add((person, sio.SIO_000228, person_donor_role))
    #rdf.add((person, sio.SIO_000228, person_patient_role))
    #rdf.add((person, sio.SIO_000008, bc.phenotype_))
    # Identifier
    rdf.add((person_study_id, RDF.type, bco.beat_covid_id))
    rdf.add((person_study_id, sio.SIO_000300, Literal(variables_dict['beat_id'], datatype=XSD.string)))
    rdf.add((person_study_id, obo.IAO_0000219, person_study_role))
    # age
    rdf.add((age, RDF.type, sio.SIO_001013))
    rdf.add((age, sio.SIO_000300, Literal(variables_dict['age'], datatype=XSD.integer)))
    rdf.add((age, sio.SIO_000001, person_study_role))
    # ward
    rdf.add((ward, RDF.type, obo.NCIT_C21541))
    rdf.add((ward, sio.SIO_000300, Literal(variables_dict['ward'], datatype=XSD.string)))
    rdf.add((ward, RDFS.label, Literal(variables_dict['ward'], lang='en')))
    rdf.add((ward, obo.BFO_0000050, institute))
    # institute
    rdf.add((institute, RDF.type, sio.SIO_000688))
    rdf.add((institute, sio.SIO_000300, Literal(variables_dict['institute_abbreviation'], datatype=XSD.string)))
    rdf.add((institute, RDFS.label, Literal(variables_dict['institute_abbreviation'], lang='en')))
    # Role
    rdf.add((person_study_role, RDF.type, sio.SIO_000883))
    rdf.add((person_study_role, obo.RO_0001025, ward))
    rdf.add((person_study_role, obo.RO_0001025, institute))
    # measurement process date
    rdf.add((measurement_process_date, RDF.type, obo.NCIT_C25164))
    rdf.add((measurement_process_date, DCTERMS.date, Literal(variables_dict['lum_date_meas'], datatype=XSD.date)))

    # BIOSAMPLES (SAMPLING PROCESS) MODEL
    # Process
    rdf.add((sampling_process, RDF.type, sio.SIO_001049))
    rdf.add((sampling_process, sio.SIO_000291, biosample_object))
    rdf.add((sampling_process, sio.SIO_000230, person))
    rdf.add((sampling_process, sio.SIO_000229, biosample))
    rdf.add((sampling_process, obo.RO_0002091, order))
    rdf.add((sampling_process, sio.SIO_000008, sampling_process_date))
    # Biosample
    rdf.add((biosample, RDF.type, sio.SIO_001050))
    rdf.add((biosample, sio.SIO_000628, biosample_object))
    # Attribute/object
    rdf.add((biosample_object, RDF.type, sio.SIO_010003))
    rdf.add((biosample_object, obo.BFO_0000050, person))
    # order
    rdf.add((order, RDF.type, obo.NCIT_C48906))
    rdf.add((order, sio.SIO_000300, Literal(variables_dict['order'], datatype=XSD.string)))
    # sampling process date
    rdf.add((sampling_process_date, RDF.type, obo.NCIT_C25164))
    rdf.add((sampling_process_date, DCTERMS.date, Literal(variables_dict['date_sampling'], datatype=XSD.date)))
    # Role
    rdf.add((person_donor_role, RDF.type, obo.OBI_1110087))
    rdf.add((person_donor_role, sio.SIO_000356, sampling_process))
    # Identifier
    # biosample
    rdf.add((biosample_id, RDF.type, bco.record_id))
    rdf.add((biosample_id, sio.SIO_000300, Literal(variables_dict['record_id'], datatype=XSD.string)))
    rdf.add((biosample_id, sio.SIO_000672, biosample))
    # person_donor
    rdf.add((person_donor_id, RDF.type, obo.NCIT_C164796))
    rdf.add((person_donor_id, obo.IAO_0000219, person_donor_role))

    # CLINICAL OBSERVATIONS (EXAMINATION PROCESS) MODEL
    # Identifier
    rdf.add((clinical, RDF.type, bco.clinical_id))
    # Observation
    # observation = bc["clinical/observation/BEATCOVID_" + variables_dict['clinical_observations']]

    # Lab measurement information
    measurement_number = 0
    for measurement in variables_dict.keys():
        if "lum_date_" in measurement:
            continue
        if "lum" in measurement:
            measurement_number += 1
            device_string, protein_string, kit_string = measurement.split("_")
            # kit
            kit = bc["lab/kit_" + kit_string]
            rdf.add((kit, RDF.type, obo.OBI_0000272))
            rdf.add((kit, sio.SIO_000300, Literal(kit_string, datatype=XSD.string)))
            rdf.add((kit, RDFS.label, Literal(f"Kit {kit_string}", lang='en')))
            # device
            device = bc["lab/device_" + device_string]
            rdf.add((device, RDF.type, obo.OBI_0000968))
            rdf.add((device, sio.SIO_000300, Literal(device_string, datatype=XSD.string)))
            if device_string == "lum":
                rdf.add((device, RDFS.label, Literal("Luminex", lang='en')))
            # Attribute/object
            trait = bc["trait/" + protein_string]
            rdf.add((trait, RDF.type, sio.SIO_010043))
            rdf.add((trait, sio.SIO_000300, Literal(protein_string, datatype=XSD.string)))
            rdf.add((trait, obo.BFO_0000050, person))
            # cytokine gene
            gene = bc["gene/" + protein_string]
            rdf.add((gene, RDF.type, sio.SIO_010035))
            rdf.add((trait, sio.SIO_010079, gene))
            # Measurement
            quantitative_trait = bc["lab/quantitative_trait/BEATCOVID_" + variables_dict['record_id']
                                    + "_" + measurement + "_" + str(measurement_number)]
            rdf.add((quantitative_trait, RDF.type, obo.IAO_0000109))
            rdf.add((quantitative_trait, RDFS.label, Literal(measurement, datatype=XSD.string)))
            rdf.add((quantitative_trait, sio.SIO_000221, efo.EFO_0004385))
            if variables_dict[measurement] == 'OOR <' or variables_dict[measurement] == 'OOR >':
                rdf.add((quantitative_trait, sio.SIO_000300, Literal(variables_dict[measurement], datatype=XSD.string)))
            else:
                rdf.add((quantitative_trait, sio.SIO_000300, Literal(variables_dict[measurement], datatype=XSD.float)))
            rdf.add((quantitative_trait, sio.SIO_000628, trait))
            #rdf.add((trait, sio.SIO_000216, quantitative_trait))
            # unit
            unit = bc["lab/measurement_unit/pg_ml"]
            rdf.add((unit, RDF.type, obo.IAO_0000003))
            rdf.add((unit, RDFS.label, Literal("pg/ml", datatype=XSD.string)))
            # print(measurement_number, measurement, device, protein, kit)
            # Process
            lab_meas_process = bc["lab/measurement_process/BEATCOVID_" + variables_dict['record_id']
                                  + measurement]
            rdf.add((lab_meas_process, RDF.type, obo.OBI_0000070))
            rdf.add((lab_meas_process, sio.SIO_000291, trait))
            rdf.add((lab_meas_process, sio.SIO_000230, biosample))
            rdf.add((lab_meas_process, sio.SIO_000229, quantitative_trait))
            rdf.add((lab_meas_process, sio.SIO_000008, measurement_process_date))
            rdf.add((lab_meas_process, DCTERMS.conformsTo, kit))
            rdf.add((lab_meas_process, sio.SIO_000132, device))
            rdf.add((lab_meas_process, sio.SIO_000628, clinical))
            rdf.add((lab_meas_process, prov.wasInformedBy, sampling_process))
            # role
            rdf.add((person_study_role, sio.SIO_000356, lab_meas_process))
    return rdf
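
# A minimal sketch of calling generate_rdf on a single record (hypothetical,
# partial dict -- a real row must contain every column referenced above):
#
# row = {'record_id': '00001', 'beat_id': 'B001', 'clinical_id': 'C001', ...}
# generate_rdf(row).serialize('example.ttl', format='turtle')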
if __name__ == "__main__":
    # # args
    # if len(sys.argv) < 3:
    #     print("Missing input parameters. Usage:")
    #     print(f"\tpython {sys.argv[0]} cytokine_csv_file_path rdf_dir_path")
    #     exit(1)
    #
    # # output
    # out_path = sys.argv[2]
    # if not os.path.isdir(out_path): os.makedirs(out_path)
    #
    # # rdf
    # with open(sys.argv[1]) as file:
    #     # skip header
    #     next(file)
    #
    #     for line in file:
    #         values_tuple = line.rstrip().split(",")
    #         rdf = generate_rdf(values_tuple)
    #         rdf.serialize(f"{out_path}/{values_tuple[0].zfill(5)}.ttl", format="turtle")
    out_path = "/home/nur/workspace/beat-covid/fair-data-model/rdf"
    if not os.path.isdir(out_path):
        os.makedirs(out_path)

    header = 1
    rows_list = list()
    for line in open("/home/nur/workspace/beat-covid/fair-data-model/cytokine/synthetic-data/"
                     "BEAT-COVID1_excel_export_2020-05-28_Luminex_synthetic-data.csv"):
        if header:
            header_tuple = line.rstrip().split("\t")
            header = 0
            continue
        values_tuple = line.rstrip().split("\t")
        raw_data_dict = dict(zip(header_tuple, values_tuple))
        rows_list.append(raw_data_dict)

    for row in rows_list:
        crf = generate_rdf(row)
        crf.serialize(f"{out_path}/{row['record_id'].zfill(5)}.ttl", format="turtle")
        # print(f"row: {row}\nheader: {header_tuple}\nvalues: {values_tuple}")
        print(f"row: {row}")
| 40.52901 | 120 | 0.655747 | 1,506 | 11,875 | 4.934927 | 0.169987 | 0.060549 | 0.025834 | 0.028122 | 0.39303 | 0.250538 | 0.142088 | 0.103606 | 0.067008 | 0.057051 | 0 | 0.046079 | 0.208674 | 11,875 | 292 | 121 | 40.667808 | 0.744812 | 0.176084 | 0 | 0.013245 | 1 | 0.006623 | 0.139472 | 0.061148 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006623 | false | 0 | 0.019868 | 0 | 0.033113 | 0.006623 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c257b998065928806b35178c7ceda8c16c41579 | 508 | py | Python | Stack/1249. Minimum Remove to Make Valid Parentheses.py | Into-Y0u/Github-Baby | 5e4e6b02f49c2c99533289be9d49911006cad919 | [
"MIT"
] | 2 | 2022-01-25T04:30:26.000Z | 2022-01-25T10:36:15.000Z | Stack/1249. Minimum Remove to Make Valid Parentheses.py | Into-Y0u/Leetcode-Baby | 681ad4df01ee908f76d888aa4ccc10f04c03c56f | [
"MIT"
] | null | null | null | Stack/1249. Minimum Remove to Make Valid Parentheses.py | Into-Y0u/Leetcode-Baby | 681ad4df01ee908f76d888aa4ccc10f04c03c56f | [
"MIT"
] | null | null | null | class Solution:
    def minRemoveToMakeValid(self, s: str) -> str:
        if not s:
            return ""
        s = list(s)
        st = []  # stack of indices of currently unmatched '('
        for i, n in enumerate(s):
            if n == "(":
                st.append(i)
            elif n == ")":
                if st:
                    st.pop()
                else:
                    s[i] = ""  # unmatched ')': drop it
        while st:  # any '(' still on the stack is unmatched
            s[st.pop()] = ""
        return "".join(s)
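
# A minimal usage sketch (illustrative; not part of the original solution):
# print(Solution().minRemoveToMakeValid("lee(t(c)o)de)"))  # -> "lee(t(c)o)de"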
| 21.166667 | 50 | 0.275591 | 44 | 508 | 3.181818 | 0.522727 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.606299 | 508 | 23 | 51 | 22.086957 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0.003937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2574c91dd0017e03291fbe071a3fc02152437d2a | 193 | py | Python | base_model/admin.py | kriwil/django-base-model | e6e989fce282200df3f6d114af27cfa4a618203f | [
"0BSD"
] | null | null | null | base_model/admin.py | kriwil/django-base-model | e6e989fce282200df3f6d114af27cfa4a618203f | [
"0BSD"
] | null | null | null | base_model/admin.py | kriwil/django-base-model | e6e989fce282200df3f6d114af27cfa4a618203f | [
"0BSD"
] | null | null | null | from django.contrib import admin
class BaseModelAdmin(admin.ModelAdmin):
    exclude = (
        'created_time',
        'modified_time',
        'is_removed',
        'removed_time',
    )
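
# A hedged usage note: concrete admin classes can subclass this base and be
# registered as usual, e.g. admin.site.register(MyModel, BaseModelAdmin).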
| 16.083333 | 39 | 0.601036 | 18 | 193 | 6.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295337 | 193 | 11 | 40 | 17.545455 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.243523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2575c1d96c57160a201fba4b65403230c9c3cfc4 | 12,137 | py | Python | quantization/Quantizelayer.py | fengxiaoshuai/CNN_model_optimizer | 4c48420989ffe31a4075d36a5133fee0d999466a | [
"Apache-2.0"
] | null | null | null | quantization/Quantizelayer.py | fengxiaoshuai/CNN_model_optimizer | 4c48420989ffe31a4075d36a5133fee0d999466a | [
"Apache-2.0"
] | 1 | 2021-01-05T10:41:24.000Z | 2021-01-05T10:41:24.000Z | quantization/Quantizelayer.py | fengxiaoshuai/CNN_model_optimizer | 4c48420989ffe31a4075d36a5133fee0d999466a | [
"Apache-2.0"
] | 1 | 2020-08-07T02:56:20.000Z | 2020-08-07T02:56:20.000Z | from __future__ import division
from __future__ import print_function
import numpy as np
import copy
from scipy import stats
class QuantizeLayer:
    def __init__(self, name="None", num_bin=2001):
        self.name = name
        self.min = 0.0
        self.max = 0.0
        self.edge = 0.0
        self.num_bins = num_bin
        self.distribution_interval = 0.0
        self.data_distribution = []
    @staticmethod
    def get_max_min_edge(blob_data):
        max_val = np.max(blob_data)
        min_val = np.min(blob_data)
        data_edge = max(abs(max_val), abs(min_val))
        return max_val, min_val, data_edge
    def initial_histograms(self, blob_data):
        max_val, min_val, data_edge = self.get_max_min_edge(blob_data)
        hist, hist_edges = np.histogram(blob_data, bins=self.num_bins, range=(-data_edge, data_edge))
        self.distribution_interval = 2 * data_edge / len(hist)
        self.data_distribution = hist
        self.edge = data_edge
        self.min = min_val
        self.max = max_val
    def combine_histograms(self, blob_data):
        """Merge a new batch of data into the stored histogram, symmetrically
        widening the histogram range when the new data exceeds the old edge.

        :param blob_data: array of values from one batch
        """
        # hist holds the count per bin; each bin interval is half-open [)
        max_val, min_val, data_edge = self.get_max_min_edge(blob_data)
        if data_edge <= self.edge:
            hist, _ = np.histogram(blob_data, bins=len(self.data_distribution), range=(-self.edge, self.edge))
            self.data_distribution += hist
        else:
            old_num_bins = len(self.data_distribution)
            old_step = 2 * self.edge / old_num_bins
            half_increased_bins = int((data_edge - self.edge) // old_step + 1)
            new_num_bins = half_increased_bins * 2 + old_num_bins
            data_edge = half_increased_bins * old_step + self.edge
            hist, hist_edges = np.histogram(blob_data, bins=new_num_bins, range=(-data_edge, data_edge))
            hist[half_increased_bins:new_num_bins - half_increased_bins] += self.data_distribution
            self.data_distribution = hist
            self.edge = data_edge
        self.min = min(min_val, self.min)
        self.max = max(max_val, self.max)
        self.distribution_interval = 2 * self.edge / len(self.data_distribution)
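
    # A worked example with assumed numbers (not from the original source): if
    # the stored histogram spans [-4, 4] with 2001 bins (step ~0.004) and a new
    # batch reaches 5.0, the range grows by a whole number of old-sized bins on
    # both sides, so the old counts can be added into the middle of the new
    # array without re-binning.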
    @staticmethod
    def smooth_distribution(p, eps=0.0001):
        is_zeros = (p == 0).astype(np.float32)
        is_nonzeros = (p != 0).astype(np.float32)
        n_zeros = is_zeros.sum()
        n_nonzeros = p.size - n_zeros
        if not n_nonzeros:
            raise ValueError('The discrete probability distribution is malformed. All entries are 0.')
        eps1 = eps * float(n_zeros) / float(n_nonzeros)
        assert eps1 < 1.0, 'n_zeros=%d, n_nonzeros=%d, eps1=%f' % (n_zeros, n_nonzeros, eps1)
        hist = p.astype(np.float32)
        hist += eps * is_zeros + (-eps1) * is_nonzeros
        assert (hist <= 0).sum() == 0
        return hist
    @property
    def threshold_distribution(self, target_bin=256):
        """Pick the |value| clipping threshold that minimises the KL divergence
        between the collected histogram and its quantised approximation
        (entropy calibration in the style popularised by TensorRT).

        :param target_bin: number of quantised bins (256 for 8-bit data)
        :return: threshold value, in the units of the original data
        """
        num_bins = len(self.data_distribution)
        distribution = self.data_distribution
        assert (num_bins % 2 == 1)
        # if min_val >= 0 and quantized_dtype in ['auto', 'uint8']:
        #     target_bin = 128
        threshold_sum = sum(distribution[target_bin:])
        kl_divergence = np.zeros(num_bins - target_bin)
        for threshold in range(target_bin, num_bins):
            sliced_nd_hist = copy.deepcopy(distribution[:threshold])
            # generate reference distribution p
            p = sliced_nd_hist.copy()
            p[threshold - 1] += threshold_sum
            threshold_sum = threshold_sum - distribution[threshold]
            # is_nonzeros[k] indicates whether hist[k] is nonzero
            p = np.array(p)
            nonzero_loc = (p != 0).astype(np.int64)
            quantized_bins = np.zeros(target_bin, dtype=np.int64)
            # calculate how many bins should be merged to generate quantized distribution q
            num_merged_bins = len(sliced_nd_hist) // target_bin
            # merge hist into num_quantized_bins bins
            for j in range(target_bin):
                start = j * num_merged_bins
                stop = start + num_merged_bins
                quantized_bins[j] = sliced_nd_hist[start:stop].sum()
            quantized_bins[-1] += sliced_nd_hist[target_bin * num_merged_bins:].sum()
            # expand quantized_bins into p.size bins
            q = np.zeros(sliced_nd_hist.size, dtype=np.float64)
            for j in range(target_bin):
                start = j * num_merged_bins
                if j == target_bin - 1:
                    stop = -1
                else:
                    stop = start + num_merged_bins
                norm = nonzero_loc[start:stop].sum()
                if norm != 0:
                    q[start:stop] = quantized_bins[j] / norm
            q[p == 0] = 0.0001
            p = self.smooth_distribution(p)
            # calculate kl_divergence between q and p
            kl_divergence[threshold - target_bin] = stats.entropy(p, q)
        min_kl_divergence = np.argmin(kl_divergence)
        threshold_bin = min_kl_divergence + target_bin
        threshold_value = (threshold_bin + 0.5) * self.distribution_interval + (-self.edge)
        return threshold_value
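
    # A hedged note (not part of the original class): for symmetric int8
    # quantisation the threshold above is typically mapped to a scale factor:
    #   t = layer.threshold_distribution
    #   scale = t / 127.0    # real value ~= scale * int8 value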
    @staticmethod
    def max_slide_window(seq, m):
        """Return the largest sum over any window of m consecutive bins, and
        the index at which that window starts."""
        num = len(seq)
        seq = seq.tolist()
        assert isinstance(seq, (list, tuple, set)) and isinstance(m, int), "seq must be a sequence and m an int"
        assert len(seq) > m, "len(seq) must be > m"
        max_seq = 0
        loc = 0
        for i in range(0, num):
            if (i + m) <= num:
                temp_seq = seq[i:i + m]
                temp_sum = sum(temp_seq)
                if max_seq <= temp_sum:
                    max_seq = temp_sum
                    loc = i
            else:
                return max_seq, loc
    @property
    def distribution_min_max(self, target_bin=256):
        num_bins = len(self.data_distribution)
        distribution = self.data_distribution
        assert (num_bins % 2 == 1)
        kl_divergence = np.zeros(num_bins - target_bin)
        kl_loc = np.zeros(num_bins - target_bin)
        for threshold in range(target_bin, num_bins):
            #print("num:", threshold)
            _, loc = self.max_slide_window(distribution, threshold)
            sliced_nd_hist = copy.deepcopy(distribution[loc:loc + threshold])
            # generate reference distribution p
            p = sliced_nd_hist.copy()
            right_sum = sum(distribution[loc + threshold:])
            left_sum = sum(distribution[:loc])
            p[threshold - 1] += right_sum
            p[0] += left_sum
            # is_nonzeros[k] indicates whether hist[k] is nonzero
            p = np.array(p)
            nonzero_loc = (p != 0).astype(np.int64)
            quantized_bins = np.zeros(target_bin, dtype=np.int64)
            # calculate how many bins should be merged to generate quantized distribution q
            num_merged_bins = len(sliced_nd_hist) // target_bin
            # merge hist into num_quantized_bins bins
            for j in range(target_bin):
                start = j * num_merged_bins
                stop = start + num_merged_bins
                quantized_bins[j] = sliced_nd_hist[start:stop].sum()
            quantized_bins[-1] += sliced_nd_hist[target_bin * num_merged_bins:].sum()
            # expand quantized_bins into p.size bins
            q = np.zeros(sliced_nd_hist.size, dtype=np.float64)
            for j in range(target_bin):
                start = j * num_merged_bins
                if j == target_bin - 1:
                    stop = -1
                else:
                    stop = start + num_merged_bins
                norm = nonzero_loc[start:stop].sum()
                if norm != 0:
                    q[start:stop] = quantized_bins[j] / norm
            q[p == 0] = 0.0001
            p = self.smooth_distribution(p)
            # calculate kl_divergence between q and p
            kl_divergence[threshold - target_bin] = stats.entropy(p, q)
            kl_loc[threshold - target_bin] = loc
        min_kl_divergence = np.argmin(kl_divergence)
        min = kl_loc[min_kl_divergence]
        max = min + target_bin + min_kl_divergence
        min = (min + 0.5) * self.distribution_interval + (-self.edge)
        max = (max + 0.5) * self.distribution_interval + (-self.edge)
        return min, max
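
    # A hedged note (not part of the original class): an asymmetric (min, max)
    # range like the one returned above is usually mapped to an affine/uint8
    # quantiser, e.g.
    #   lo, hi = layer.distribution_min_max
    #   scale = (hi - lo) / 255.0
    #   zero_point = int(round(-lo / scale))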
    @property
    def distribution_test(self, target_bin=256):
        # Same sliding-window search as distribution_min_max, but candidate
        # ranges are scored with the Wasserstein (earth mover's) distance
        # instead of the KL divergence.
        num_bins = len(self.data_distribution)
        distribution = self.data_distribution
        assert (num_bins % 2 == 1)
        kl_divergence = np.zeros(num_bins - target_bin)
        kl_loc = np.zeros(num_bins - target_bin)
        for threshold in range(target_bin, num_bins):
            #print("num:", threshold)
            _, loc = self.max_slide_window(distribution, threshold)
            sliced_nd_hist = copy.deepcopy(distribution[loc:loc + threshold])
            # generate reference distribution p
            p = sliced_nd_hist.copy()
            right_sum = sum(distribution[loc + threshold:])
            left_sum = sum(distribution[:loc])
            p[threshold - 1] += right_sum
            p[0] += left_sum
            # is_nonzeros[k] indicates whether hist[k] is nonzero
            p = np.array(p)
            nonzero_loc = (p != 0).astype(np.int64)
            quantized_bins = np.zeros(target_bin, dtype=np.int64)
            # calculate how many bins should be merged to generate quantized distribution q
            num_merged_bins = len(sliced_nd_hist) // target_bin
            # merge hist into num_quantized_bins bins
            for j in range(target_bin):
                start = j * num_merged_bins
                stop = start + num_merged_bins
                quantized_bins[j] = sliced_nd_hist[start:stop].sum()
            quantized_bins[-1] += sliced_nd_hist[target_bin * num_merged_bins:].sum()
            # expand quantized_bins into p.size bins
            q = np.zeros(sliced_nd_hist.size, dtype=np.float64)
            for j in range(target_bin):
                start = j * num_merged_bins
                if j == target_bin - 1:
                    stop = -1
                else:
                    stop = start + num_merged_bins
                norm = nonzero_loc[start:stop].sum()
                if norm != 0:
                    q[start:stop] = quantized_bins[j] / norm
            q[p == 0] = 0.0001
            p = self.smooth_distribution(p)
            # score the candidate range
            kl_divergence[threshold - target_bin] = stats.wasserstein_distance(p, q)
            kl_loc[threshold - target_bin] = loc
        min_kl_divergence = np.argmin(kl_divergence)
        min = kl_loc[min_kl_divergence]
        max = min + target_bin + min_kl_divergence
        min = (min + 0.5) * self.distribution_interval + (-self.edge)
        max = (max + 0.5) * self.distribution_interval + (-self.edge)
        return min, max
data = np.random.randn(10000,)
print(data)
layer = QuantizeLayer(name="con_1")
layer.initial_histograms(data)
print("min:", layer.min)
print("max:", layer.max)
print("edge:", layer.edge)
print("distribution_interval:", layer.distribution_interval)
print("bins:", len(layer.data_distribution))
data = np.random.randn(10000,).astype(np.float32)
layer.combine_histograms(data)
print("min:", layer.min)
print("max:", layer.max)
print("edge:", layer.edge)
print("distribution_interval:", layer.distribution_interval)
print("bins:", len(layer.data_distribution))
data = np.random.randn(10000,)
data[9999] = 20
layer.combine_histograms(data)
print("min:", layer.min)
print("max:", layer.max)
print("edge:", layer.edge)
print("distribution_interval:", layer.distribution_interval)
print("bins:", len(layer.data_distribution))
import matplotlib.pyplot as plt
plt.plot(layer.data_distribution)
plt.show()
print(layer.threshold_distribution)
print(layer.distribution_min_max)
#print(layer.distribution_test) | 37.928125 | 110 | 0.596853 | 1,559 | 12,137 | 4.399615 | 0.10263 | 0.052486 | 0.031491 | 0.020994 | 0.716431 | 0.687855 | 0.664383 | 0.651115 | 0.633037 | 0.633037 | 0 | 0.017838 | 0.302546 | 12,137 | 320 | 111 | 37.928125 | 0.792439 | 0.094092 | 0 | 0.625 | 0 | 0 | 0.023672 | 0.006056 | 0 | 0 | 0 | 0 | 0.030172 | 1 | 0.038793 | false | 0 | 0.025862 | 0 | 0.094828 | 0.081897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2579c36b1d400b2989548b5ef20920bc5aa3d5ac | 17,767 | py | Python | ASV/ASV/nodes/averager.py | Southampton-Maritime-Robotics/Autonomous-Ship-and-Wavebuoys | bea27ac87b0e2991096da7f1b1c2197f1d620a51 | [
"MIT"
] | 4 | 2017-11-09T12:05:14.000Z | 2021-06-25T05:59:15.000Z | ASV/ASV/nodes/averager.py | Southampton-Maritime-Robotics/Autonomous-Ship-and-Wavebuoys | bea27ac87b0e2991096da7f1b1c2197f1d620a51 | [
"MIT"
] | null | null | null | ASV/ASV/nodes/averager.py | Southampton-Maritime-Robotics/Autonomous-Ship-and-Wavebuoys | bea27ac87b0e2991096da7f1b1c2197f1d620a51 | [
"MIT"
] | 1 | 2021-05-08T20:09:50.000Z | 2021-05-08T20:09:50.000Z | #!/usr/bin/python
##############################################################################
#averager.py
#
#This code has been created by Enrico Anderlini (ea3g09@soton.ac.uk) for
#averaging the main readings required during the QinetiQ tests. These values
#averaged over one minute will be published to an external logfile.
#
#Modifications to code
#16/02/2013 code created
#17/02/2013 removal of the calls to library_highlevel.py because whenever
# one of the nodes was not being published the node exited with
# errors.
#
##############################################################################
#Notes
#
#At the moment this file publishes to an external log file the values for the
#motor demand (rpm, voltage or power), the propeller rpm, the motor voltage or
#power, the battery voltage and the case temperature (hence, 4 values in total
#plus the time at which they have been sampled). Other variables may be added
#as required.
#
##############################################################################
import roslib; roslib.load_manifest('ASV')
import rospy
import time
import csv
import os
import numpy
from datetime import datetime
from std_msgs.msg import Float32
from std_msgs.msg import Int8
from std_msgs.msg import String
from ASV.msg import status
# Defining global variables
global time_zero
global counter
global Motor_setting
global Motor_target
global total_motor
global Prop_rpm
global total_rpm
global avg_rpm
global Voltage
global total_voltage
global avg_voltage
global Motor_current
global total_current
global avg_current
global Power
global total_power
global avg_power
global battery_voltage
global total_BatteryVoltage
global avg_BatteryVoltage
global Temperature
global total_temperature
global avg_temperature
global Thrust
global total_thrust
global avg_thrust
###############################################################
#The following functions write the values this node subscribes to into
#log files in .csv format within the folder created within the main
#function.
###############################################################
def printer(setting, target, rpm, voltage, current, power, BatteryVoltage, temperature, thrust):
    #The stringtime variable is used in all these functions to store the time of
    #the reading (starting from the time of the start-up (zero)) - expressed in seconds.
    stringtime = time.time() - time_zero
    averageList = [stringtime, setting, target, rpm, voltage, current, power, BatteryVoltage, temperature, thrust]
    title = ['time', 'setting', 'target', 'rpm', 'volt', 'current', 'power', 'battery', 'temp', 'thrust']
    print title
    print averageList
    with open('%s/averageLog.csv' % (dirname), "a") as f:
        try:
            Writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            Writer.writerow(title)
            Writer.writerow(averageList)
        except ValueError:
            print 'writerow error'
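
#Example of one logged pair of rows (values assumed for illustration only):
#time,setting,target,rpm,volt,current,power,battery,temp,thrust
#20.1,1,800.0,795.2,23.9,4.1,98.0,24.6,31.2,12.4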
########################## Callback Functions #################################
def motor_setting_cb(Motor_setting):
    global motor_setting
    motor_setting = Motor_setting.data

def motor_target_cb(Motor_target):
    global motor_target
    motor_target = Motor_target.data

def prop_rpm_cb(Prop_rpm):
    global prop_rpm
    prop_rpm = Prop_rpm.data

def motor_voltage_cb(Voltage):
    global voltage
    voltage = Voltage.data

def motor_current_cb(Motor_current):
    global motor_current
    motor_current = Motor_current.data

def motor_power_cb(Motor_power):
    global motor_power
    motor_power = Motor_power.data

def thrust_cb(Thrust):
    global thrust
    thrust = Thrust.data

def battery_voltage_cb(battery_voltage):
    global BatteryVoltage
    BatteryVoltage = battery_voltage.data

def temperature_cb(Temperature):
    global temperature
    temperature = Temperature.data
##############################################################
#def shutdown():
#    #shutdown behaviour - close all files
#    print 'shutting down'
#    with open('%s/path.kml' %(dirname), "a") as f:
#        try:
#            f.write('</coordinates>\n </LineString>\n </Placemark>\n </kml>\n')
#        except ValueError:
#            print 'write error'
################################## MAIN FUNCTION ###############################
if __name__ == '__main__':
    #Initialising the node
    rospy.init_node('averager')

    stringtime = datetime.now()
    stringtime = stringtime.strftime('%Y-%m-%d_%H-%M-%S')
    rospy.loginfo('Logger started at %s.' % (stringtime))

    pub_folder = rospy.Publisher('folder', String)

    ########################################################################
    ######## FOLDERS #######################################################
    ########################################################################
    #define files and writers
    logfolder = 'AverageValues'
    dirname = logfolder + '/' + stringtime

    if not os.path.isdir(logfolder):
        print 'made logfolder'
        os.mkdir(logfolder)

    if not os.path.isdir(dirname):
        print 'made test folder'
        os.mkdir(dirname)

    time.sleep(5)
    pub_folder.publish(dirname)

    ########################################################################
    #Setting the zero time
    time_zero = time.time()

    # Initialising global variables
    counter = 0
    motor_setting = 0
    motor_target = 0
    prop_rpm = 0
    voltage = 0
    motor_current = 0
    motor_power = 0
    BatteryVoltage = 0
    temperature = 0
    thrust = 0
    total_motor = 0
    avg_motor = 0
    total_rpm = 0
    avg_rpm = 0
    total_voltage = 0
    avg_voltage = 0
    total_current = 0
    avg_current = 0
    total_power = 0
    avg_power = 0
    total_BatteryVoltage = 0
    avg_BatteryVoltage = 0
    total_temperature = 0
    avg_temperature = 0
    total_thrust = 0
    avg_thrust = 0

    ########################SET UP THE SUBSCRIBERS##########################
    rospy.Subscriber('setMotorTargetMethod', Int8, motor_setting_cb)
    rospy.Subscriber('setMotorTarget', Float32, motor_target_cb)
    rospy.Subscriber('prop_rpm', Float32, prop_rpm_cb)
    rospy.Subscriber('motor_voltage', Float32, motor_voltage_cb)
    rospy.Subscriber('motor_current', Float32, motor_current_cb)
    rospy.Subscriber('motor_power', Float32, motor_power_cb)
    rospy.Subscriber('thrust', Float32, thrust_cb)
    rospy.Subscriber('battery_voltage', Float32, battery_voltage_cb)
    rospy.Subscriber('CaseTemperature', Float32, temperature_cb)

    #Publish the propeller rpm demand only when the node is not shutdown
    #while not rospy.is_shutdown():
    #Average the readings over nine consecutive 20 s windows (0-180 s) and log
    #the cumulative averages at the end of each window.
    for window_end in range(20, 181, 20):
        while (time.time() - time_zero) <= window_end:
            counter = counter + 1
            total_rpm = prop_rpm + total_rpm
            total_voltage = voltage + total_voltage
            total_current = motor_current + total_current
            total_power = motor_power + total_power
            total_BatteryVoltage = BatteryVoltage
            total_temperature = temperature + total_temperature
            total_thrust = thrust + total_thrust
            rospy.sleep(0.1)
            #For debugging purposes only
            #print counter

        #Note that counter and the running totals are never reset, so each
        #average below is cumulative from the start of the test.
        avg_rpm = total_rpm / (counter+1)
        avg_voltage = total_voltage / (counter+1)
        avg_current = total_current / (counter+1)
        avg_power = total_power / (counter+1)
        avg_BatteryVoltage = total_BatteryVoltage / (counter+1)
        avg_temperature = total_temperature / (counter+1)
        avg_thrust = total_thrust / (counter+1)
        printer(motor_setting, motor_target, avg_rpm, avg_voltage, avg_current,
                avg_power, avg_BatteryVoltage, avg_temperature, avg_thrust)
| 39.394678 | 139 | 0.585017 | 1,867 | 17,767 | 5.304767 | 0.124264 | 0.058158 | 0.059976 | 0.029079 | 0.617326 | 0.586228 | 0.583704 | 0.583704 | 0.583704 | 0.570376 | 0 | 0.01605 | 0.298644 | 17,767 | 450 | 140 | 39.482222 | 0.77875 | 0.113975 | 0 | 0.525773 | 0 | 0 | 0.020867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.037801 | null | null | 0.051546 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
257fa21be7c52550321debef98c5629ed602cf83 | 3,412 | py | Python | model.py | shivam13verma/han-chainer | ca1e34b1dcd8ecfdf55690de62b89c59c3699f82 | [
"MIT"
] | null | null | null | model.py | shivam13verma/han-chainer | ca1e34b1dcd8ecfdf55690de62b89c59c3699f82 | [
"MIT"
] | null | null | null | model.py | shivam13verma/han-chainer | ca1e34b1dcd8ecfdf55690de62b89c59c3699f82 | [
"MIT"
] | null | null | null | from keras.models import Sequential
import numpy as np
from keras.layers import (Activation, Dense, Dropout, Embedding, Flatten, GRU,
                          Input, Lambda, LSTM, Permute, RepeatVector, Reshape,
                          TimeDistributed, TimeDistributedDense, merge)
from keras.models import Model
from keras.optimizers import Adam
from keras.regularizers import l2
# Assumes Keras 1.x (merge and TimeDistributedDense were removed in Keras 2).
# get_H_n, get_Y, get_R, accuracy, options and the vocabulary/data variables
# are expected to be defined elsewhere in the project.
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dim, input_shape=(90582, 517)))
model.add(GRU(512, return_sequences=True))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=True))
model.add(Dropout(0.2))
model.add(TimeDistributedDense(1))
model.add(Activation('softmax'))
#word-gru layer
language_model = Sequential()
language_model.add(Embedding(vocab_size, 256, input_length=max_caption_len))
language_model.add(GRU(output_dim=128, return_sequences=True))
#word-attention
model = Sequential()
model.add(Dense(50, input_dim=100, init='uniform'))
model.add(Activation('tanh'))
#sentence-gru layer
#sentence-attention
def build_model(opts, verbose=False):
    k = 2 * opts.lstm_units  # 300
    L = opts.xmaxlen  # 20
    N = opts.xmaxlen + opts.ymaxlen + 1  # for delim
    print "x len", L, "total len", N
    print "k", k, "L", L

    main_input = Input(shape=(N,), dtype='int32', name='main_input')
    x = Embedding(output_dim=opts.emb, input_dim=opts.max_features, input_length=N, name='x')(main_input)
    drop_out = Dropout(0.1, name='dropout')(x)
    lstm_fwd = LSTM(opts.lstm_units, return_sequences=True, name='lstm_fwd')(drop_out)
    lstm_bwd = LSTM(opts.lstm_units, return_sequences=True, go_backwards=True, name='lstm_bwd')(drop_out)
    bilstm = merge([lstm_fwd, lstm_bwd], name='bilstm', mode='concat')
    drop_out = Dropout(0.1)(bilstm)
    h_n = Lambda(get_H_n, output_shape=(k,), name="h_n")(drop_out)
    Y = Lambda(get_Y, arguments={"xmaxlen": L}, name="Y", output_shape=(L, k))(drop_out)
    Whn = Dense(k, W_regularizer=l2(0.01), name="Wh_n")(h_n)
    Whn_x_e = RepeatVector(L, name="Wh_n_x_e")(Whn)
    WY = TimeDistributed(Dense(k, W_regularizer=l2(0.01)), name="WY")(Y)
    merged = merge([Whn_x_e, WY], name="merged", mode='sum')
    M = Activation('tanh', name="M")(merged)
    alpha_ = TimeDistributed(Dense(1, activation='linear'), name="alpha_")(M)
    flat_alpha = Flatten(name="flat_alpha")(alpha_)
    alpha = Dense(L, activation='softmax', name="alpha")(flat_alpha)
    Y_trans = Permute((2, 1), name="y_trans")(Y)  # of shape (None, 300, 20)
    r_ = merge([Y_trans, alpha], output_shape=(k, 1), name="r_", mode=get_R)
    r = Reshape((k,), name="r")(r_)
    Wr = Dense(k, W_regularizer=l2(0.01))(r)
    Wh = Dense(k, W_regularizer=l2(0.01))(h_n)
    merged = merge([Wr, Wh], mode='sum')
    h_star = Activation('tanh')(merged)
    out = Dense(3, activation='softmax')(h_star)
    output = out
    model = Model(input=[main_input], output=output)
    if verbose:
        model.summary()
        # plot(model, 'model.png')
    # # model.compile(loss={'output':'binary_crossentropy'}, optimizer=Adam())
    # model.compile(loss={'output':'categorical_crossentropy'}, optimizer=Adam(options.lr))
    model.compile(loss='categorical_crossentropy', optimizer=Adam(options.lr))
    return model
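
# A hedged usage sketch (assumptions: opts/options come from the project's
# argument parser, and nb_epoch is the Keras 1.x spelling):
#
# model = build_model(opts, verbose=True)
# model.fit(X_train, Y_train, batch_size=options.batch_size,
#           nb_epoch=options.epochs, validation_data=(X_dev, Y_dev))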
def compute_acc(X, Y, vocab, model, opts):
    scores = model.predict(X, batch_size=options.batch_size)
    prediction = np.zeros(scores.shape)
    for i in range(scores.shape[0]):
        l = np.argmax(scores[i])
        prediction[i][l] = 1.0
    assert np.array_equal(np.ones(prediction.shape[0]), np.sum(prediction, axis=1))
    plabels = np.argmax(prediction, axis=1)
    tlabels = np.argmax(Y, axis=1)
    acc = accuracy(tlabels, plabels)
    return acc, acc
| 37.086957 | 105 | 0.681125 | 520 | 3,412 | 4.303846 | 0.280769 | 0.039321 | 0.042449 | 0.032172 | 0.179625 | 0.165326 | 0.125112 | 0.072386 | 0.048257 | 0.048257 | 0 | 0.026144 | 0.148007 | 3,412 | 91 | 106 | 37.494505 | 0.743722 | 0.084115 | 0 | 0.09375 | 0 | 0 | 0.066174 | 0.00771 | 0 | 0 | 0 | 0 | 0.015625 | 0 | null | null | 0 | 0.03125 | null | null | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
258221159670850092053b3b46e03afa8f767d41 | 7,449 | py | Python | simplivity/resources/external_stores.py | HewlettPackard/simplivity-python-sdk | 03d8e92a02fe66e878ed22b37944e5a6ce991ef1 | [
"Apache-2.0"
] | 7 | 2020-02-28T09:03:09.000Z | 2022-03-28T15:52:23.000Z | simplivity/resources/external_stores.py | HewlettPackard/simplivity-python-sdk | 03d8e92a02fe66e878ed22b37944e5a6ce991ef1 | [
"Apache-2.0"
] | 47 | 2020-01-16T20:32:19.000Z | 2020-08-27T04:43:00.000Z | simplivity/resources/external_stores.py | HewlettPackard/simplivity-python-sdk | 03d8e92a02fe66e878ed22b37944e5a6ce991ef1 | [
"Apache-2.0"
] | 16 | 2020-01-10T14:15:17.000Z | 2021-04-06T13:31:01.000Z | ###
# (C) Copyright [2019-2020] Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from simplivity.resources.resource import ResourceBase
from simplivity.resources import omnistack_clusters
URL = '/external_stores'
DATA_FIELD = 'external_stores'
class ExternalStores(ResourceBase):
    """Implements features available for SimpliVity External store resources."""

    def __init__(self, connection):
        super(ExternalStores, self).__init__(connection)

    def get_all(self, pagination=False, page_size=0, limit=500, offset=0,
                sort=None, order='descending', filters=None, fields=None,
                case_sensitive=True):
        """
        Get all external stores.

        Args:
            pagination: True if pagination is needed
            page_size: Size of the page (required when pagination is on)
            limit: A positive integer that represents the maximum number of results to return
            offset: A positive integer that directs the service to start returning
                the <offset value> instance, up to the limit.
            sort: The name of the field where the sort occurs
            order: The sort order preference. Valid values: ascending or descending.
            filters: Dictionary with filter values. Example: {'name': 'name'}
                name: The name of the external_stores to return.
                    Accepts: Single value, comma-separated list, pattern using one or more asterisk characters as a wildcard.
                omnistack_cluster_id: The name of the omnistack_cluster that is associated with the instances to return.
                cluster_group_id: The unique identifiers (UIDs) of the cluster_groups associated with the external stores to return.
                    Accepts: Single value, comma-separated list
                management_ip: The IP address of the external store.
                    Accepts: Single value, comma-separated list, pattern using one or more asterisk characters as a wildcard
                type: The type of external store.
                    Default: StoreOnceOnPrem

        Returns:
            list: list of resources
        """
        return self._client.get_all(URL,
                                    members_field=DATA_FIELD,
                                    pagination=pagination,
                                    page_size=page_size,
                                    limit=limit,
                                    offset=offset,
                                    sort=sort,
                                    order=order,
                                    filters=filters,
                                    fields=fields,
                                    case_sensitive=case_sensitive)

    def get_by_data(self, data):
        """Gets an ExternalStore object from data.

        Args:
            data: ExternalStore data

        Returns:
            object: ExternalStore object.
        """
        return ExternalStore(self._connection, self._client, data)
    def register_external_store(self, management_ip, name, cluster, username, password, management_port=9387,
                                storage_port=9388, external_store_type='StoreOnceOnPrem', timeout=-1):
        """Register the external store.

        Args:
            management_ip: The IP address of the external store
            name: The name of the external_store
            cluster: Destination OmnistackCluster object/name.
            username: The client name of the external store
            password: The client password of the external store
            management_port: The management IP port of the external store. Default: 9387
            storage_port: The storage IP port of the external store. Default: 9388
            external_store_type: The type of external store. Default: StoreOnceOnPrem
            timeout: Time out for the request in seconds.

        Returns:
            object: External store object.
        """
        data = {'management_ip': management_ip, 'management_port': management_port, 'name': name,
                'username': username, 'password': password, 'storage_port': storage_port,
                'type': external_store_type}

        if not isinstance(cluster, omnistack_clusters.OmnistackCluster):
            # if passed the name of the cluster
            clusters_obj = omnistack_clusters.OmnistackClusters(self._connection)
            cluster = clusters_obj.get_by_name(cluster)

        data['omnistack_cluster_id'] = cluster.data['id']
        custom_headers = {'Content-type': 'application/vnd.simplivity.v1.11+json'}
        self._client.do_post(URL, data, timeout, custom_headers)

        return self.get_by_name(name)

    def update_credentials(self, name, username, password, management_ip=None, timeout=-1):
        """Update the IP address or credentials that HPE SimpliVity uses to access the external stores.

        Args:
            name: The name of the external_store
            username: The client name of the external store
            password: The client password of the external store
            management_ip: The IP address of the external store
            timeout: Time out for the request in seconds.

        Returns:
            object: External store object.
        """
        resource_uri = "{}/update_credentials".format(URL)
        data = {'name': name, 'username': username, 'password': password}
        if management_ip:
            data['management_ip'] = management_ip

        custom_headers = {'Content-type': 'application/vnd.simplivity.v1.15+json'}
        self._client.do_post(resource_uri, data, timeout, custom_headers)
class ExternalStore(object):
"""Implements features available for a single External store resources."""
def __init__(self, connection, resource_client, data):
self.data = data
self._connection = connection
self._client = resource_client
def unregister_external_store(self, cluster, timeout=-1):
""" Removes the external store as a backup destination for the cluster.
        Backups remain on the external store, but they can no longer be managed by HPE SimpliVity.
Args:
cluster: Destination OmnistackCluster object/name.
timeout: Time out for the request in seconds.
Returns:
None
"""
resource_uri = "{}/unregister".format(URL)
data = {'name': self.data["name"]}
if not isinstance(cluster, omnistack_clusters.OmnistackCluster):
# if passed name of the cluster
clusters_obj = omnistack_clusters.OmnistackClusters(self._connection)
cluster = clusters_obj.get_by_name(cluster)
data['omnistack_cluster_id'] = cluster.data['id']
custom_headers = {'Content-type': 'application/vnd.simplivity.v1.15+json'}
self._client.do_post(resource_uri, data, timeout, custom_headers)
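# --- Hedged usage sketch (not part of the original file) ---
# Ties the pieces above together: register a StoreOnce appliance as a
# backup target, rotate its credentials, then unregister it. The OVC
# client import, its config keys, and the `external_stores` accessor
# follow the SDK's README pattern but are assumptions here, so the
# sketch is left commented out rather than presented as verified usage.
#
# from simplivity.ovc_client import OVC
#
# ovc = OVC({"ip": "10.0.0.1",                  # hypothetical OVC address
#            "credentials": {"username": "admin", "password": "secret"}})
# stores = ovc.external_stores                  # assumed resource accessor
# store = stores.register_external_store(
#     management_ip="10.0.0.2", name="store1",
#     cluster="Cluster1", username="svc", password="secret")
# stores.update_credentials(name="store1", username="svc",
#                           password="rotated-secret")
# store.unregister_external_store(cluster="Cluster1")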
| 44.873494 | 130 | 0.638475 | 853 | 7,449 | 5.445487 | 0.259086 | 0.069968 | 0.048224 | 0.042626 | 0.424973 | 0.389666 | 0.369645 | 0.325296 | 0.304629 | 0.270398 | 0 | 0.008536 | 0.292254 | 7,449 | 165 | 131 | 45.145455 | 0.872534 | 0.473218 | 0 | 0.214286 | 0 | 0 | 0.113097 | 0.038676 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.071429 | 0.035714 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
25858779947cdd6eff807639ff98f34b7425aeeb | 1,285 | py | Python | setup.py | bentettmar/nertivia4py | a9b758000632e40306bc610a6966cb8d0a643c20 | [
"MIT"
] | 3 | 2022-01-24T16:31:20.000Z | 2022-02-03T22:44:51.000Z | setup.py | bentettmar/nertivia4py | a9b758000632e40306bc610a6966cb8d0a643c20 | [
"MIT"
] | 9 | 2022-03-05T19:01:48.000Z | 2022-03-06T11:38:53.000Z | setup.py | bentettmar/nertivia4py | a9b758000632e40306bc610a6966cb8d0a643c20 | [
"MIT"
] | null | null | null | from distutils.core import setup
readme = """
# Nertivia4PY
A Python wrapper for the Nertivia API.
Nertivia support server: https://nertivia.net/i/nertivia4py
> ### Install
> ```
> pip install nertivia4py
> ```
> ### Example
> ```python
> import nertivia4py
>
> token = "TOKEN_HERE"
> prefix = "!"
>
> bot = nertivia4py.Bot(prefix)
>
> @bot.event
> def on_success(event):
> print("Connected!")
>
> @bot.command(name="ping", description="Ping command.")
> def ping_command(message, args):
> message.reply("Pong!")
>
> bot.run(token)
> ```
>
> For more examples, take a look at the examples folder in the GitHub repo.
"""
setup(
name='nertivia4py',
packages=['nertivia4py', 'nertivia4py.gateway', 'nertivia4py.utils', 'nertivia4py.commands'],
version='1.0.8',
license='MIT',
description='A Python wrapper for the Nertivia API',
long_description_content_type="text/markdown",
long_description=readme,
author='Ben Tettmar',
author_email='hello@benny.fun',
url='https://github.com/bentettmar/nertivia4py',
keywords=["nertivia", "api", "wrapper", "python",
"bot", "nertivia.py", "nertivia4py"],
install_requires=["requests", 'python-socketio[client]'],
)
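# --- Hedged usage note (not part of the original file) ---
# A setup.py like this is normally built and published with the standard
# packaging tools; the flow below is the usual one, not something this
# repo necessarily documents. Note that distutils is deprecated in modern
# Python, and setuptools' setup() is the drop-in replacement.
#
#   python setup.py sdist     # build a source distribution into dist/
#   twine upload dist/*       # publish the build to PyPI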
| 25.196078 | 98 | 0.633463 | 141 | 1,285 | 5.70922 | 0.574468 | 0.040994 | 0.034783 | 0.042236 | 0.077019 | 0.077019 | 0.077019 | 0 | 0 | 0 | 0 | 0.014648 | 0.203113 | 1,285 | 50 | 99 | 25.7 | 0.771484 | 0 | 0 | 0.065217 | 0 | 0 | 0.709312 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
258626b69bb2d19543b8603f01ee4f2de96f5e1d | 7,867 | py | Python | scheduler/args.py | jian-yu/autotx | eed17a8881b6c3ee80d93d044abd2c67b150ccf1 | [
"Apache-2.0"
] | 1 | 2019-10-14T04:58:13.000Z | 2019-10-14T04:58:13.000Z | scheduler/args.py | jian-yu/autotx | eed17a8881b6c3ee80d93d044abd2c67b150ccf1 | [
"Apache-2.0"
] | 1 | 2021-06-02T00:30:31.000Z | 2021-06-02T00:30:31.000Z | scheduler/args.py | jian-yu/autotx | eed17a8881b6c3ee80d93d044abd2c67b150ccf1 | [
"Apache-2.0"
] | 1 | 2020-08-11T02:48:38.000Z | 2020-08-11T02:48:38.000Z | class PoolArgs:
def __init__(self, bankerBufCap, bankerMaxBufNumber, signerBufCap, signerBufMaxNumber, broadcasterBufCap, broadcasterMaxNumber, stakingBufCap, stakingMaxNumber, distributionBufCap, distributionMaxNumber, errorBufCap, errorMaxNumber):
self.BankerBufCap = bankerBufCap
self.BankerMaxBufNumber = bankerMaxBufNumber
self.SignerBufCap = signerBufCap
self.SignerBufMaxNumber = signerBufMaxNumber
self.BroadcasterBufCap = broadcasterBufCap
self.BroadcasterMaxNumber = broadcasterMaxNumber
self.StakingBufCap = stakingBufCap
self.StakingMaxNumber = stakingMaxNumber
self.DistributionBufCap = distributionBufCap
self.DistributionMaxNumber = distributionMaxNumber
self.ErrorBufCap = errorBufCap
self.ErrorMaxNumber = errorMaxNumber
def Check(self):
if self.BankerBufCap == 0:
return PoolArgsError('zero banker buffer capacity')
if self.BankerMaxBufNumber == 0:
return PoolArgsError('zero banker max buffer number')
if self.SignerBufCap == 0:
return PoolArgsError('zero signer buffer capacity')
if self.SignerBufMaxNumber == 0:
return PoolArgsError('zero signer max buffer number')
if self.BroadcasterBufCap == 0:
return PoolArgsError('zero broadcaster buffer capacity')
if self.BroadcasterMaxNumber == 0:
return PoolArgsError('zero broadcaster max buffer number')
if self.StakingBufCap == 0:
return PoolArgsError('zero staking buffer capacity')
if self.StakingMaxNumber == 0:
return PoolArgsError('zero staking max buffer number')
if self.DistributionBufCap == 0:
return PoolArgsError('zero distribution buffer capacity')
if self.DistributionMaxNumber == 0:
return PoolArgsError('zero distribution max buffer number')
if self.ErrorBufCap == 0:
return PoolArgsError('zero error buffer capacity')
if self.ErrorMaxNumber == 0:
return PoolArgsError('zero error max buffer number')
return None
class PoolArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class ModuleArgs:
def __init__(self, bankers, signers, broadcasters, stakings, distributors):
self.Bankers = bankers
self.Signers = signers
self.Broadcasters = broadcasters
self.Stakings = stakings
self.Distributors = distributors
def Check(self):
if len(self.Bankers) == 0:
return ModuleArgsError('empty banker list')
if len(self.Signers) == 0:
return ModuleArgsError('empty signer list')
if len(self.Broadcasters) == 0:
return ModuleArgsError('empty broadcaster list')
if len(self.Stakings) == 0:
            return ModuleArgsError('empty staking list')
if len(self.Distributors) == 0:
return ModuleArgsError('empty distributor list')
return None
class ModuleArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class SendCoinArgs:
def __init__(self, srcAccount, dstAccount, coins, fees, gas, gasAdjust):
self.srcAccount = srcAccount
self.dstAccount = dstAccount
self.coins = coins
self.fees = fees
self.gas = gas
self.gasAdjust = gasAdjust
def Check(self):
if self.srcAccount is None or self.srcAccount.getAddress() == '':
return SendCoinArgsError('srcAccount is invalid')
if self.dstAccount is None or self.dstAccount.getAddress() == '':
return SendCoinArgsError('dstAccount is invalid')
if self.coins is None or len(self.coins) == 0:
return SendCoinArgsError('empty coins')
if self.fees is None or len(self.fees) == 0:
return SendCoinArgsError('empty fess')
if self.gas is None:
return SendCoinArgsError('empty gas')
if self.gasAdjust is None:
return SendCoinArgsError('empty gasAdjust')
return None
class SendCoinArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class SendSignArgs:
def __init__(self, srcAccount, sendedJsonFilePath, node):
self.srcAccount = srcAccount
self.sendedJsonFilePath = sendedJsonFilePath
self.node = node
def Check(self):
if self.srcAccount is None or self.srcAccount.getAddress() == '':
return SendSignArgsError('srcAccount is invalid')
if self.sendedJsonFilePath is None:
return SendSignArgsError('empty sendedJsonFilePath')
if self.node is None:
return SendSignArgsError('empty node')
return None
class SendSignArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class SendBroadcastArgs:
def __init__(self, srcAccount, body, mode='sync'):
self.srcAccount = srcAccount
self.body = body
self.mode = mode
def Check(self):
if self.body is None:
return SendBroadcastArgsError('empty broadcast body')
if self.srcAccount is None:
return SendBroadcastArgsError('unknown tx src account')
return None
class SendBroadcastArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class DelegateArgs:
def __init__(self, delegator, validator, coin, fees, gas, gasAdjust):
self.delegator = delegator
self.validator = validator
self.coin = coin
self.fees = fees
self.gas = gas
self.gasAdjust = gasAdjust
def Check(self):
if self.delegator is None or self.delegator.getAddress() == '':
return DelegateArgsError('delegator is invalid')
if self.validator is None:
return DelegateArgsError('validator is invalid')
if self.coin is None:
return DelegateArgsError('empty coins')
if self.fees is None or len(self.fees) == 0:
            return DelegateArgsError('empty fees')
if self.gas is None:
return DelegateArgsError('empty gas')
if self.gasAdjust is None:
return DelegateArgsError('empty gasAdjust')
return None
class StakingArgs:
def __init__(self, _type, data):
self._type = _type
self.data = data
def getType(self):
return self._type
def getData(self):
return self.data
class WithdrawDelegatorOneRewardArgs:
def __init__(self, delegator, validator, fees, gas, gasAdjust):
self.delegator = delegator
self.validator = validator
self.fees = fees
self.gas = gas
self.gasAdjust = gasAdjust
def Check(self):
if self.delegator is None or self.delegator.getAddress() == '':
return DelegateArgsError('delegator is invalid')
if self.validator is None:
return DelegateArgsError('validator is invalid')
if self.fees is None or len(self.fees) == 0:
            return DelegateArgsError('empty fees')
if self.gas is None:
return DelegateArgsError('empty gas')
if self.gasAdjust is None:
return DelegateArgsError('empty gasAdjust')
return None
class DistributionArgs:
def __init__(self, _type, data):
self._type = _type
self.data = data
def getType(self):
return self._type
def getData(self):
return self.data
class DelegateArgsError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
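# --- Hedged usage sketch (not part of the original file) ---
# The Check() methods above return an error object (or None) rather than
# raising, so callers are expected to branch on the return value. A
# minimal, self-contained demonstration using only classes from this file:
if __name__ == '__main__':
    pool_args = PoolArgs(bankerBufCap=10, bankerMaxBufNumber=2,
                         signerBufCap=10, signerBufMaxNumber=2,
                         broadcasterBufCap=10, broadcasterMaxNumber=2,
                         stakingBufCap=10, stakingMaxNumber=2,
                         distributionBufCap=10, distributionMaxNumber=2,
                         errorBufCap=0, errorMaxNumber=2)  # 0 trips a check
    err = pool_args.Check()
    if err is not None:
        print('invalid pool args:', err)  # -> zero error buffer capacity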
| 33.054622 | 237 | 0.643702 | 801 | 7,867 | 6.207241 | 0.109863 | 0.04103 | 0.033186 | 0.057924 | 0.522124 | 0.355591 | 0.355591 | 0.355591 | 0.342518 | 0.342518 | 0 | 0.003687 | 0.275963 | 7,867 | 237 | 238 | 33.194093 | 0.869206 | 0 | 0 | 0.484375 | 0 | 0 | 0.102072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.052083 | 0.536458 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2586e81e4e9946ad1a836db00f8a88e5409f7e9b | 871 | py | Python | models/user.py | NeonWizard/php-mood-tracker | 51f7945412d3077b81af29a229a9dbe66d2abdc2 | [
"MIT"
] | null | null | null | models/user.py | NeonWizard/php-mood-tracker | 51f7945412d3077b81af29a229a9dbe66d2abdc2 | [
"MIT"
] | null | null | null | models/user.py | NeonWizard/php-mood-tracker | 51f7945412d3077b81af29a229a9dbe66d2abdc2 | [
"MIT"
] | null | null | null | class UserModel(Table):
def __init__(self):
self.tableName = "User"
self.requiredFields = ['firstName', 'lastName', 'username', 'password']
self.optionalFields = ['email']
def check(self, data):
for req in self.requiredFields:
if req not in data:
return False
for opt in self.optionalFields:
if opt not in data:
data[opt] = ""
return data
def getById(self, id):
rows = self.select([
"id LIKE {}".format(id)
])
if rows:
return rows[0]
else:
			return None
def getByUsername(self, username):
rows = self.select([
"username LIKE '{}'".format(username)
])
if rows:
return rows[0]
else:
			return None
def add(self, data):
import bcrypt
data = self.check(data)
if not data:
return False
data['password'] = bcrypt.hashpw(data['password'].encode("utf-8"), bcrypt.gensalt()).decode("utf-8")
self.insert(data)
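# --- Hedged usage sketch (not part of the original file) ---
# Intended flow, assuming the missing Table base class provides working
# select()/insert() against a real database; left commented out because
# none of that machinery exists in this snippet:
#
# users = UserModel()
# users.add({'firstName': 'Ada', 'lastName': 'Lovelace',
#            'username': 'ada', 'password': 'hunter2'})  # stored bcrypt-hashed
# row = users.getByUsername('ada')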
| 17.77551 | 102 | 0.639495 | 116 | 871 | 4.767241 | 0.387931 | 0.065099 | 0.03255 | 0.057866 | 0.101266 | 0.101266 | 0.101266 | 0.101266 | 0 | 0 | 0 | 0.005822 | 0.211251 | 871 | 48 | 103 | 18.145833 | 0.799127 | 0 | 0 | 0.388889 | 0 | 0 | 0.110218 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0.055556 | 0.027778 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
259148bec81d808f337cab94d5e80017025d5d2d | 455 | py | Python | src/gui/migrations/0014_feedback_childprotection.py | digitalfabrik/ish-goalkeeper | a500c7a628ef66897941dadc0addb0be01658e02 | [
"MIT"
] | 12 | 2021-10-30T12:57:26.000Z | 2021-10-31T11:33:20.000Z | src/gui/migrations/0014_feedback_childprotection.py | digitalfabrik/ish-goalkeeper | a500c7a628ef66897941dadc0addb0be01658e02 | [
"MIT"
] | 53 | 2019-07-31T12:44:44.000Z | 2021-10-21T12:40:29.000Z | src/gui/migrations/0014_feedback_childprotection.py | digitalfabrik/ish-goalkeeper | a500c7a628ef66897941dadc0addb0be01658e02 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.1 on 2020-03-10 18:24
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('gui', '0013_auto_20200310_1742'),
]
operations = [
migrations.AddField(
model_name='feedback',
name='childprotection',
            field=models.TextField(blank=True, max_length=1000, verbose_name='Kinderschutzrelevante Information'),  # German: "child-protection-relevant information"
),
]
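# --- Hedged usage note (not part of the original file) ---
# Standard Django workflow for a migration like this one:
#   python manage.py migrate gui   # applies 0014_feedback_childprotection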
| 23.947368 | 114 | 0.63956 | 48 | 455 | 5.9375 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102639 | 0.250549 | 455 | 18 | 115 | 25.277778 | 0.733138 | 0.098901 | 0 | 0 | 1 | 0 | 0.20098 | 0.107843 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |