hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f7a95c509ce50bcdd7048409ca9b8c7d9c279bfa | 432 | py | Python | Dataset/Leetcode/valid/98/736.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/98/736.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/valid/98/736.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | class Solution:
    def XXX(self, root: TreeNode) -> bool:
        stack = []
        cur = root
        last = float("-inf")
        while cur or stack:
            while cur:
                stack.append(cur)
                cur = cur.left
            cur = stack.pop()
            if cur.val > last:
                last = cur.val
            else:
                return False
            cur = cur.right
        return True
| 24 | 42 | 0.414352 | 44 | 432 | 4.068182 | 0.568182 | 0.100559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 432 | 17 | 43 | 25.411765 | 0.828704 | 0 | 0 | 0 | 0 | 0 | 0.009281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7abc4036e6849052f1ad734c829603c8746cd22 | 237 | py | Python | data/ck/check_data.py | jorgimello/meta-learning-fer | 793610ae8471f794a6837930d8bb51866c1f7c02 | [
"MIT"
] | 4 | 2020-10-10T03:33:15.000Z | 2022-01-17T08:00:32.000Z | data/ck/check_data.py | jorgimello/meta-learning-facial-expression-recognition | 793610ae8471f794a6837930d8bb51866c1f7c02 | [
"MIT"
] | null | null | null | data/ck/check_data.py | jorgimello/meta-learning-facial-expression-recognition | 793610ae8471f794a6837930d8bb51866c1f7c02 | [
"MIT"
] | null | null | null | import numpy as np
import os, cv2
imgs = np.load('test_set_ck_extended_no_resize.npy')
lbls = np.load('test_labels_ck_extended_no_resize.npy')
for i in range(imgs.shape[0]):
    print(lbls[i])
    cv2.imshow('img', imgs[i])
    cv2.waitKey(0)
| 21.545455 | 55 | 0.734177 | 45 | 237 | 3.644444 | 0.6 | 0.073171 | 0.121951 | 0.219512 | 0.256098 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.113924 | 237 | 10 | 56 | 23.7 | 0.757143 | 0 | 0 | 0 | 0 | 0 | 0.312236 | 0.299578 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7add4b7f65c543a8a0fd87ede46693f7cb004d9 | 773 | py | Python | app/db/schemas/users.py | ergo-pad/paideia-api | 7ffc78366567c72722d107f06ad37aa7557b05be | [
"MIT"
] | null | null | null | app/db/schemas/users.py | ergo-pad/paideia-api | 7ffc78366567c72722d107f06ad37aa7557b05be | [
"MIT"
] | null | null | null | app/db/schemas/users.py | ergo-pad/paideia-api | 7ffc78366567c72722d107f06ad37aa7557b05be | [
"MIT"
] | null | null | null | from pydantic import BaseModel
import typing as t
### SCHEMAS FOR USERS ###
class UserBase(BaseModel):
    alias: str
    primary_wallet_address_id: t.Optional[int]
    profile_img_url: t.Optional[str]
    is_active: bool = True
    is_superuser: bool = False


class UserOut(UserBase):
    pass


class UserCreate(UserBase):
    password: str

    class Config:
        orm_mode = True


class UserEdit(UserBase):
    password: t.Optional[str] = None

    class Config:
        orm_mode = True


class User(UserBase):
    id: int

    class Config:
        orm_mode = True


class CreateErgoAddress(BaseModel):
    user_id: int
    address: str
    is_smart_contract: bool


class ErgoAddress(CreateErgoAddress):
    id: int

    class Config:
        orm_mode = True
| 14.865385 | 46 | 0.667529 | 96 | 773 | 5.229167 | 0.447917 | 0.087649 | 0.111554 | 0.143426 | 0.2251 | 0.2251 | 0.10757 | 0 | 0 | 0 | 0 | 0 | 0.256145 | 773 | 51 | 47 | 15.156863 | 0.873043 | 0.021992 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.066667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f7af40aed66aeeaae2505edaa30898f512812b45 | 329 | py | Python | Mundo 1/ex_014.py | Shock3/Python_Exercicios | 4420569e881b883728168aabe76b0e9f3a42597f | [
"MIT"
] | null | null | null | Mundo 1/ex_014.py | Shock3/Python_Exercicios | 4420569e881b883728168aabe76b0e9f3a42597f | [
"MIT"
] | null | null | null | Mundo 1/ex_014.py | Shock3/Python_Exercicios | 4420569e881b883728168aabe76b0e9f3a42597f | [
"MIT"
] | null | null | null | """
Write a program that converts a temperature entered
in degrees Celsius to degrees Fahrenheit.
"""
celsius = int(input('Enter the temperature: '))
fahrenheit = (celsius / 5) * 9 + 32
kelvin = celsius + 273
print(f'The temperature {celsius}°C is {fahrenheit}°F in Fahrenheit')
print(f'And in Kelvin it is {kelvin} K')
| 32.9 | 66 | 0.723404 | 51 | 329 | 4.705882 | 0.607843 | 0.141667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02518 | 0.155015 | 329 | 9 | 67 | 36.555556 | 0.830935 | 0.334347 | 0 | 0 | 0 | 0 | 0.492891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7bd078884fa7f447ad7081c6426bb1a2e21941b | 625 | py | Python | forms_builder/forms/migrations/0004_auto_20180727_1256.py | maqmigh/django-forms-builder | 1a0068d1d07498f4a2e160c46ec85b9a5f2ddd98 | [
"BSD-2-Clause"
] | null | null | null | forms_builder/forms/migrations/0004_auto_20180727_1256.py | maqmigh/django-forms-builder | 1a0068d1d07498f4a2e160c46ec85b9a5f2ddd98 | [
"BSD-2-Clause"
] | null | null | null | forms_builder/forms/migrations/0004_auto_20180727_1256.py | maqmigh/django-forms-builder | 1a0068d1d07498f4a2e160c46ec85b9a5f2ddd98 | [
"BSD-2-Clause"
] | null | null | null | # coding=utf-8
# Generated by Django 2.0.7 on 2018-07-27 10:56
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('forms', '0003_auto_20180522_0820'),
    ]

    operations = [
        migrations.AlterField(
            model_name='field',
            name='help_text',
            field=models.CharField(blank=True, max_length=2000, verbose_name='Help text'),
        ),
        migrations.AlterField(
            model_name='form',
            name='slug',
            field=models.SlugField(max_length=100, unique=True, verbose_name='Slug'),
        ),
    ]
| 25 | 90 | 0.5968 | 70 | 625 | 5.185714 | 0.671429 | 0.110193 | 0.137741 | 0.15978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086667 | 0.28 | 625 | 24 | 91 | 26.041667 | 0.72 | 0.0928 | 0 | 0.235294 | 1 | 0 | 0.111702 | 0.04078 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7bf187ba4675f05a89f42e9783052fe7bcd13c5 | 647 | py | Python | docs/_docs/bash/az3166_patch_binary.py | skolbin-ssi/azure-iot-developer-kit | 24035c8870e9c342d055bcd586529441078af0a0 | [
"MIT"
] | 43 | 2017-10-03T23:03:23.000Z | 2019-04-27T18:57:16.000Z | docs/_docs/bash/az3166_patch_binary.py | skolbin-ssi/azure-iot-developer-kit | 24035c8870e9c342d055bcd586529441078af0a0 | [
"MIT"
] | 114 | 2017-09-20T02:51:28.000Z | 2019-05-06T06:13:14.000Z | docs/_docs/bash/az3166_patch_binary.py | skolbin-ssi/azure-iot-developer-kit | 24035c8870e9c342d055bcd586529441078af0a0 | [
"MIT"
] | 48 | 2017-09-19T08:18:52.000Z | 2019-04-19T11:44:32.000Z | # ----------------------------------------------------------------------------
# Copyright (C) Microsoft. All rights reserved.
# Licensed under the MIT license.
# ----------------------------------------------------------------------------
import os
import binascii
import struct
import shutil
import inspect
import sys
def binary_hook(binf, outf):
    with open(binf, 'rb') as f:
        appbin = f.read()
    with open('boot.bin', 'rb') as f:
        bootbin = f.read()
    with open(outf, 'wb') as f:
        f.write(bootbin + ('\xFF' * (0xc000 - len(bootbin))) + appbin)
if __name__ == '__main__':
    binary_hook(sys.argv[1], sys.argv[2]) | 29.409091 | 78 | 0.482226 | 72 | 647 | 4.194444 | 0.611111 | 0.07947 | 0.033113 | 0.086093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011194 | 0.171561 | 647 | 22 | 79 | 29.409091 | 0.552239 | 0.360124 | 0 | 0 | 0 | 0 | 0.063415 | 0 | 0 | 0 | 0.014634 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.4 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f7bfccc428289385cc22ed6c618de770f292647a | 590 | py | Python | setup.py | FireXStuff/firex-bundle-ci | 05ef1d9017b3553e8f4249da9a96e313f0ad7047 | [
"BSD-3-Clause"
] | 1 | 2021-01-08T19:50:33.000Z | 2021-01-08T19:50:33.000Z | setup.py | FireXStuff/firex-bundle-ci | 05ef1d9017b3553e8f4249da9a96e313f0ad7047 | [
"BSD-3-Clause"
] | null | null | null | setup.py | FireXStuff/firex-bundle-ci | 05ef1d9017b3553e8f4249da9a96e313f0ad7047 | [
"BSD-3-Clause"
] | null | null | null | import versioneer
from setuptools import setup
setup(name='firex-bundle-ci',
      version=versioneer.get_version(),
      cmdclass=versioneer.get_cmdclass(),
      description='FireX CI services',
      url='https://github.com/FireXStuff/firex-bundle-ci.git',
      author='Core FireX Team',
      author_email='firex-dev@gmail.com',
      license='BSD-3-Clause',
      packages=['firex_bundle_ci'],
      zip_safe=True,
      install_requires=[
          "firexapp",
          "firex-keeper",
          "lxml",
          "xunitmerge",
          "unittest-xml-reporting"
      ],
)
| 26.818182 | 62 | 0.60678 | 63 | 590 | 5.571429 | 0.68254 | 0.094017 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002278 | 0.255932 | 590 | 21 | 63 | 28.095238 | 0.797267 | 0 | 0 | 0 | 0 | 0 | 0.335593 | 0.037288 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7c4b93a5f9fe2cd51baa68e74a1491e4f04cbf5 | 1,535 | py | Python | nipy/labs/spatial_models/tests/test_bsa_io.py | arokem/nipy | d6b2e862c65558bb5747c36140fd6261a7e1ecfe | [
"BSD-3-Clause"
] | null | null | null | nipy/labs/spatial_models/tests/test_bsa_io.py | arokem/nipy | d6b2e862c65558bb5747c36140fd6261a7e1ecfe | [
"BSD-3-Clause"
] | null | null | null | nipy/labs/spatial_models/tests/test_bsa_io.py | arokem/nipy | d6b2e862c65558bb5747c36140fd6261a7e1ecfe | [
"BSD-3-Clause"
] | null | null | null | from __future__ import with_statement
from nose.tools import assert_true
from os.path import exists
import numpy as np
from nibabel import Nifti1Image
from numpy.testing import assert_equal
from ...utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
from ..bsa_io import make_bsa_image
from nibabel.tmpdirs import InTemporaryDirectory
def test_parcel_intra_from_3d_images_list():
    """Test that a parcellation is generated, starting from a list of 3D images
    """
    # Generate an image
    shape = (5, 5, 5)
    contrast_id = 'plop'
    mask_image = Nifti1Image(np.ones(shape), np.eye(4))
    #mask_images = [mask_image for _ in range(5)]

    with InTemporaryDirectory() as dir_context:
        data_image = ['image_%d.nii' % i for i in range(5)]
        for datim in data_image:
            surrogate_3d_dataset(mask=mask_image, out_image_file=datim)

        #run the algo
        landmark, hrois = make_bsa_image(
            mask_image, data_image, threshold=10., smin=0, sigma=1.,
            prevalence_threshold=0, prevalence_pval=0.5, write_dir=dir_context,
            algorithm='density', contrast_id=contrast_id)
        assert_equal(landmark, None)
        assert_equal(len(hrois), 5)
        assert_true(exists('density_%s.nii' % contrast_id))
        assert_true(exists('prevalence_%s.nii' % contrast_id))
        assert_true(exists('AR_%s.nii' % contrast_id))
        assert_true(exists('CR_%s.nii' % contrast_id))
if __name__ == "__main__":
    import nose
    nose.run(argv=['', __file__])
| 34.111111 | 79 | 0.699674 | 216 | 1,535 | 4.643519 | 0.430556 | 0.069791 | 0.063809 | 0.055833 | 0.089731 | 0.089731 | 0.089731 | 0 | 0 | 0 | 0 | 0.016353 | 0.203257 | 1,535 | 44 | 80 | 34.886364 | 0.803761 | 0.099023 | 0 | 0 | 0 | 0 | 0.058182 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.033333 | false | 0 | 0.333333 | 0 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f7cfecaa2797756809c5e754e4b6bf4f05823087 | 1,006 | py | Python | narrative2vec/logging_instance/pose.py | code-iai/narrative2vec | 948071d09838ea41ee9749325af6804427a060d2 | [
"MIT"
] | null | null | null | narrative2vec/logging_instance/pose.py | code-iai/narrative2vec | 948071d09838ea41ee9749325af6804427a060d2 | [
"MIT"
] | null | null | null | narrative2vec/logging_instance/pose.py | code-iai/narrative2vec | 948071d09838ea41ee9749325af6804427a060d2 | [
"MIT"
] | null | null | null | from narrative2vec.logging_instance.logging_instance import LoggingInstance, _get_first_rdf_query_result
from narrative2vec.logging_instance.reasoning_task import ReasoningTask
from narrative2vec.ontology.neemNarrativeDefinitions import QUATERNION
from narrative2vec.ontology.ontologyHandler import get_knowrob_uri
class Pose(LoggingInstance):
    def get_translation(self):
        read_translation = self._get_property_('translation')
        return read_translation.strip().split()

    def get_quaternion(self):
        read_orientation = self._get_property_(QUATERNION)
        return read_orientation.strip().split()

    def get_reasoning_task__id(self):
        reasoning_task_property = self._graph_.subjects(get_knowrob_uri('parameter2'), self.context)
        reasoning_task = _get_first_rdf_query_result(reasoning_task_property)

        if reasoning_task and not reasoning_task.startswith('file://'):
            return ReasoningTask(reasoning_task, self._graph_).get_id()
        return '' | 43.73913 | 104 | 0.781312 | 114 | 1,006 | 6.482456 | 0.359649 | 0.140731 | 0.064953 | 0.086604 | 0.05954 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005807 | 0.144135 | 1,006 | 23 | 105 | 43.73913 | 0.852497 | 0 | 0 | 0 | 0 | 0 | 0.027805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.235294 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f7d2351d64f6c5df1c1015aaa80a18aa25236a08 | 239 | py | Python | safexl/__init__.py | ThePoetCoder/safexl | d2fb91ad45d33b6f51946e99c78e7fcf7564e82e | [
"MIT"
] | 6 | 2020-08-28T16:00:28.000Z | 2022-01-17T14:48:04.000Z | safexl/__init__.py | ThePoetCoder/safexl | d2fb91ad45d33b6f51946e99c78e7fcf7564e82e | [
"MIT"
] | null | null | null | safexl/__init__.py | ThePoetCoder/safexl | d2fb91ad45d33b6f51946e99c78e7fcf7564e82e | [
"MIT"
] | null | null | null | # Copyright (c) 2020 safexl
from safexl.toolkit import *
import safexl.xl_constants as xl_constants
import safexl.colors as colors
__author__ = "Eric Smith"
__email__ = "ThePoetCoder@gmail.com"
__license__ = "MIT"
__version__ = "0.0.7"
| 19.916667 | 42 | 0.76569 | 33 | 239 | 5 | 0.69697 | 0.145455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033981 | 0.138075 | 239 | 11 | 43 | 21.727273 | 0.76699 | 0.104603 | 0 | 0 | 0 | 0 | 0.188679 | 0.103774 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f7d39269257b5bc266bf53edfc897cb41af5201f | 402 | py | Python | ballot_source/sources/migrations/0004_auto_20200824_1444.py | Ballot-Drop/ballot-source | 5dd9692ca5e9237a6073833a81771a17ad2c1dc9 | [
"MIT"
] | 3 | 2020-09-05T06:02:08.000Z | 2020-09-28T23:44:05.000Z | ballot_source/sources/migrations/0004_auto_20200824_1444.py | Ballot-Drop/ballot-source | 5dd9692ca5e9237a6073833a81771a17ad2c1dc9 | [
"MIT"
] | 18 | 2020-08-28T18:09:54.000Z | 2020-09-19T17:36:08.000Z | ballot_source/sources/migrations/0004_auto_20200824_1444.py | Ballot-Drop/ballot-source | 5dd9692ca5e9237a6073833a81771a17ad2c1dc9 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.9 on 2020-08-24 20:44
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('sources', '0003_sourcedetail_last_pull'),
    ]

    operations = [
        migrations.AlterField(
            model_name='sourcedetail',
            name='diff',
            field=models.TextField(blank=True, null=True),
        ),
    ]
| 21.157895 | 58 | 0.606965 | 43 | 402 | 5.581395 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065517 | 0.278607 | 402 | 18 | 59 | 22.333333 | 0.762069 | 0.11194 | 0 | 0 | 1 | 0 | 0.140845 | 0.076056 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7d8750cdaa9ce35d0790079eee8be949cbd02ee | 1,443 | py | Python | code-buddy.py | xl3ehindTim/Code-buddy | e04b7b4327a0b3ff2790d22aef93dca6fce021f4 | [
"MIT"
] | 8 | 2019-11-29T09:20:11.000Z | 2020-11-02T10:55:35.000Z | code-buddy.py | xl3ehindTim/Code-buddy | e04b7b4327a0b3ff2790d22aef93dca6fce021f4 | [
"MIT"
] | 2 | 2019-12-02T13:48:01.000Z | 2019-12-02T17:00:56.000Z | code-buddy.py | xl3ehindTim/Code-buddy | e04b7b4327a0b3ff2790d22aef93dca6fce021f4 | [
"MIT"
] | 3 | 2019-11-29T10:03:44.000Z | 2020-10-01T10:23:55.000Z | import os
from getArgs import getArgs
from modules import python, javascript, html, php, bootstrap, cca
# from folder import file
# code-buddy.py create (file type) (directory name)
# Checks for "create"
if getArgs(1) == "create":
    # Checks for which file type
    projectType = getArgs(2)
    # Checks for file name
    if projectType == "python":
        name = getArgs(3)
        python.createPythonProject(name)
        print("Folder created successfully")
    elif projectType == "javascript":
        name = getArgs(3)
        javascript.createJavascriptProject(name)
        print("Folder created successfully")
    elif projectType == "html":
        name = getArgs(3)
        html.createHtmlProject(name)
        print("Folder created successfully")
    elif projectType == "php":
        name = getArgs(3)
        php.createPhpProject(name)
        print("Folder created successfully")
    elif projectType == "bootstrap":
        name = getArgs(3)
        bootstrap.createPhpProject(name)
        print("Folder created successfully")
    elif projectType == "cca":
        name = getArgs(3)
        cca.createCcaProject(name)
        print("Folder created successfully")
    # If not valid file type
    else:
        print(f"argument {getArgs(2)} is unknown, try: 'python, javascript, html, php or bootstrap'")
else:
    # If invalid "create"
    print(f"argument {getArgs(1)} is unknown, use 'create' to create a folder")
| 33.55814 | 101 | 0.644491 | 161 | 1,443 | 5.776398 | 0.310559 | 0.070968 | 0.077419 | 0.141935 | 0.327957 | 0.292473 | 0.292473 | 0.137634 | 0 | 0 | 0 | 0.009268 | 0.252252 | 1,443 | 42 | 102 | 34.357143 | 0.852641 | 0.132363 | 0 | 0.424242 | 0 | 0.030303 | 0.277331 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.242424 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7e1dfd58619e2e27eaf63ac95f9bbd2215fc5c4 | 565 | py | Python | setup.py | oubiwann/myriad-worlds | bfbbab713e35c5700e37158a892c3a66a8c9f37a | [
"MIT"
] | 3 | 2015-01-29T05:24:32.000Z | 2021-05-10T01:47:36.000Z | setup.py | oubiwann/myriad-worlds | bfbbab713e35c5700e37158a892c3a66a8c9f37a | [
"MIT"
] | null | null | null | setup.py | oubiwann/myriad-worlds | bfbbab713e35c5700e37158a892c3a66a8c9f37a | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
from myriad import meta
from myriad.util import dist
setup(
    name=meta.display_name,
    version=meta.version,
    description=meta.description,
    long_description=meta.long_description,
    author=meta.author,
    author_email=meta.author_email,
    url=meta.url,
    license=meta.license,
    packages=find_packages() + ["twisted.plugins"],
    package_data={
        "twisted": ['plugins/example_server.py']
    },
    install_requires=meta.requires,
    zip_safe=False
)
dist.refresh_plugin_cache()
| 21.730769 | 51 | 0.709735 | 68 | 565 | 5.705882 | 0.5 | 0.061856 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187611 | 565 | 25 | 52 | 22.6 | 0.845316 | 0 | 0 | 0 | 0 | 0 | 0.083186 | 0.044248 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.15 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7ec17b78bb1ba2ad0135e9a1b1bf5b7c8916ff3 | 4,225 | py | Python | src/cmdsh/utils.py | kotfu/cmdsh | c9083793de9117e4c5c4dfcccdeee1b83a0be7ab | [
"MIT"
] | null | null | null | src/cmdsh/utils.py | kotfu/cmdsh | c9083793de9117e4c5c4dfcccdeee1b83a0be7ab | [
"MIT"
] | null | null | null | src/cmdsh/utils.py | kotfu/cmdsh | c9083793de9117e4c5c4dfcccdeee1b83a0be7ab | [
"MIT"
] | null | null | null | #
# -*- coding: utf-8 -*-
#
# Copyright (c) 2019 Jared Crapo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
"""
Utility functions (not classes)
"""
import inspect
import types
from typing import Callable
def validate_callable_param_count(func: Callable, count: int) -> None:
    """Ensure a function has the given number of parameters."""
    signature = inspect.signature(func)
    # validate that the callable has the right number of parameters
    nparam = len(signature.parameters)
    if nparam != count:
        raise TypeError('{} has {} positional arguments, expected {}'.format(
            func.__name__,
            nparam,
            count,
        ))

def validate_callable_argument(func, argnum, typ) -> None:
    """Validate that a certain argument of func is annotated for a specific type"""
    signature = inspect.signature(func)
    paramname = list(signature.parameters.keys())[argnum-1]
    param = signature.parameters[paramname]
    if param.annotation != typ:
        raise TypeError('argument {} of {} has incompatible type {}, expected {}'.format(
            argnum,
            func.__name__,
            param.annotation,
            typ.__name__,
        ))

def validate_callable_return(func, typ) -> None:
    """Validate that func is annotated to return a specific type"""
    signature = inspect.signature(func)
    if typ:
        typname = typ.__name__
    else:
        typname = 'None'
    if signature.return_annotation != typ:
        raise TypeError("{} must declare return a return type of '{}'".format(
            func.__name__,
            typname,
        ))

def rebind_method(method, obj) -> None:
    """Rebind method from one object to another

    Call it something like this:

        rebind_method(obj1, obj2.do_command)

    This rebinds the ``do_command`` method from obj2 to obj1. Meaning
    after this function call you can:

        obj1.do_command()

    This works only on instantiated objects, not on classes.
    """
    #
    # this is dark python magic
    #
    # if we were doing this in a hardcoded way, we might do:
    #
    #     obj.method_name = types.MethodType(self.method_name.__func__, obj)
    #
    # TODO add force keyword parameter which defaults to false. If false, raise an
    # exception if the method already exists on obj
    method_name = method.__name__
    setattr(obj, method_name, types.MethodType(method.__func__, obj))

def bind_function(func, obj) -> None:
    """Bind a function to an object

    You must define func with a ``self`` parameter, which is gonna look weird:

        def myfunc(self, param):
            return param

        shell = cmdsh.Shell()
        utils.bind_function(myfunc, shell)

    You can use this function to bind a function to a class, so that all future
    objects of that class have the method:

        cmdsh.utils.bind_function(cmdsh.parsers.SimpleParser.parse, cmdsh.Shell)
    """
    #
    # this is dark python magic
    #
    # if we were doing this in a hardcoded way, we would:
    #
    #     obj.method_name = types.MethodType(func, obj)
    #
    func_name = func.__name__
    setattr(obj, func_name, types.MethodType(func, obj))
# TODO write bind_attribute()
| 32.5 | 89 | 0.680947 | 562 | 4,225 | 5.012456 | 0.380783 | 0.031239 | 0.018459 | 0.030884 | 0.110401 | 0.068868 | 0.068868 | 0.039049 | 0.039049 | 0.039049 | 0 | 0.00341 | 0.23645 | 4,225 | 129 | 90 | 32.751938 | 0.869808 | 0.590769 | 0 | 0.225 | 0 | 0 | 0.091882 | 0 | 0 | 0 | 0 | 0.015504 | 0 | 1 | 0.125 | false | 0 | 0.075 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7f1a1740efc36292fbb917d24b84a88544cbd25 | 40,478 | py | Python | src/legohdl/workspace.py | c-rus/legoHDL | d7d77c05514c8d6dc1070c4efe589f392307daac | [
"MIT"
] | 6 | 2021-12-16T05:40:37.000Z | 2022-02-07T15:04:39.000Z | src/legohdl/workspace.py | c-rus/legoHDL | d7d77c05514c8d6dc1070c4efe589f392307daac | [
"MIT"
] | 61 | 2021-09-28T03:05:13.000Z | 2022-01-16T00:03:14.000Z | src/legohdl/workspace.py | c-rus/legoHDL | d7d77c05514c8d6dc1070c4efe589f392307daac | [
"MIT"
] | 1 | 2021-12-16T07:03:18.000Z | 2021-12-16T07:03:18.000Z | # ------------------------------------------------------------------------------
# Project: legohdl
# Script: workspace.py
# Author: Chase Ruskin
# Description:
# The Workspace class. A Workspace object has a path and a list of available
# vendors. This is what the user keeps their work's scope within for a given
# "organization".
# ------------------------------------------------------------------------------
import os, shutil, glob
import logging as log
from datetime import datetime
from .vendor import Vendor
from .apparatus import Apparatus as apt
from .cfg import Cfg, Section, Key
from .map import Map
from .git import Git
from .block import Block
class Workspace:
    #store all workspaces in dictionary
    Jar = Map()

    #active-workspace is a workspace object
    _ActiveWorkspace = None

    DIR = apt.fs(apt.HIDDEN+"workspaces/")
    LOG_FILE = "refresh.log"

    MIN_RATE = -1
    MAX_RATE = 1440


    def __init__(self, name, path, vendors=[], ask=True):
        '''
        Create a workspace instance.

        Parameters:
            name (str): the identity for the workspace
            path (str): the local path where blocks will be looked for
            vendors ([str]): the list of vendors that are tied to this workspace
            ask (bool): will ask user if wishing to enter workspace path
        Returns:
            None
        '''
        self._name = name
        #do not create workspace if the name is already taken
        if(self.getName().lower() in self.Jar.keys()):
            log.error("Skipping workspace "+self.getName()+" due to duplicate naming conflict.")
            return
        #set the path
        self._path = ''
        self.setPath(path)
        #do not create workspace if the path is empty
        if(self.getPath() == ''):
            if(ask == False):
                log.error("Skipping workspace "+self.getName()+" due to empty local path.")
                return
            else:
                #keep asking to set path until one is decided/input
                try:
                    path = input("Enter path for workspace "+self.getName()+": ")
                except KeyboardInterrupt:
                    apt.CFG.remove('workspace.'+self.getName())
                    Workspace.save(inc_active=False)
                    print()
                    exit(log.info("Workspace not created."))
                while(self.setPath(path) == False):
                    try:
                        path = input("Enter path for workspace "+self.getName()+": ")
                    except KeyboardInterrupt:
                        apt.CFG.remove('workspace.'+self.getName())
                        Workspace.save(inc_active=False)
                        print()
                        exit(log.info("Workspace not created."))

        self._ws_dir = apt.fs(self.DIR+self.getName()+"/")

        #ensure all workspace hidden directories exist
        if(os.path.isdir(self.getDir()) == False):
            log.info("Setting up workspace "+self.getName()+"...")
            os.makedirs(self.getDir(), exist_ok=True)
            #create workspace's cache where installed blocks will be stored
            os.makedirs(self.getDir()+"cache", exist_ok=True)
        #create the refresh log if DNE
        if(os.path.isfile(self.getDir()+self.LOG_FILE) == False):
            open(self.getDir()+self.LOG_FILE, 'w').close()

        self._vendors = []
        #find all vendor objects by name and store in list
        for vndr in vendors:
            if(vndr.lower() in Vendor.Jar.keys()):
                self._vendors += [Vendor.Jar[vndr]]
            else:
                log.warning("Could not link unknown vendor "+vndr+" to "+self.getName()+".")
            pass

        #add to class Jar
        self.Jar[self.getName()] = self
        pass


    def setPath(self, p):
        '''
        Set the workspace's local path to a new value. Will ask user if okay
        to create the path if DNE.

        Parameters:
            p (str): the path string
        Returns:
            (bool): true if successfully changed the path attribute
        '''
        #cannot set an empty path
        if(p == '' or p == None):
log.info("Local path for workspace "+self.getName()+" cannot be empty.")
return False
p = apt.fs(p)
#create the workspace's local path if it does not exist
if(os.path.exists(p) == False):
#prompt user
carry_on = apt.confirmation("Workspace "+self.getName()+"'s local path does not exist. Create "+p+"?")
if(carry_on):
os.makedirs(p, exist_ok=True)
self._path = p
return True
else:
log.info("Did not set "+p+" as local path.")
return False
else:
self._path = p
return True
def setName(self, n):
'''
Change the workspace's name if the name is not already taken.
Parameters:
n (str): new name for workspace
Returns:
(bool): true if name successfully altered and updated in Jar
'''
if(n == '' or n == None):
log.error("Workspace name cannot be empty.")
return False
if(n.lower() in self.Jar.keys()):
log.error("Cannot rename workspace to "+n+" due to name conflict.")
return False
else:
#remove old name from Jar
if(self.getName().lower() in self.Jar.keys()):
del self.Jar[self.getName()]
#rename hidden directory if exists
new_dir = apt.fs(self.DIR+n+"/")
if(hasattr(self, "_ws_dir")):
os.rename(self.getDir(), new_dir)
#set the hidden workspace directory
self._ws_dir = new_dir
#change to new name
self._name = n
#update the Jar
self.Jar[self.getName()] = self
return True
def remove(self):
'''
Removes the workspace object from the Jar and its hidden directory.
Parameters:
None
Returns:
None
'''
log.info("Removing workspace "+self.getName()+"...")
#delete the hidden workspace directory
shutil.rmtree(self.getDir(), onerror=apt.rmReadOnly)
#remove from class Jar
del self.Jar[self.getName()]
#remove from cfg file
apt.CFG.remove('workspace.'+self.getName())
apt.CFG.write()
pass
def linkVendor(self, vndr):
'''
Attempts to add a vendor to the workspace's vendor list.
Parameters:
vndr (str): name of the vendor to add
Returns:
(bool): true if the vendor list was modified (successful add)
'''
if(vndr.lower() in Vendor.Jar.keys()):
vndr_obj = Vendor.Jar[vndr]
if(vndr_obj in self.getVendors()):
log.info("Vendor "+vndr_obj.getName()+" is already linked to this workspace.")
return False
else:
log.info("Linking vendor "+vndr_obj.getName()+" to the workspace...")
self._vendors += [vndr_obj]
return True
else:
log.warning("Could not link unknown vendor "+vndr+" to "+self.getName()+".")
return False
def setVendors(self, vndrs):
'''
Overrides entire _vendors attr by setting it equal to 'vndrs'.
Parameters:
vndrs ([str]): list of vendors
Returns:
(bool): success if all vendors listed were added
'''
#reset vendors list
self._vendors = []
success = True
#iterate through every given vendor
for vndr in vndrs:
#verify the vendor exists
if(vndr.lower() in Vendor.Jar.keys()):
vndr_obj = Vendor.Jar[vndr]
#check if the vendor has already been linked
if(vndr_obj in self.getVendors()):
log.info("Vendor "+vndr_obj.getName()+" is already linked to this workspace.")
#link the vendor to this workspace
else:
log.info("Linking vendor "+vndr_obj.getName()+" to the workspace...")
self._vendors += [vndr_obj]
else:
log.warning("Could not link unknown vendor "+vndr+" to "+self.getName()+".")
success = False
return success
def unlinkVendor(self, vndr):
'''
Attempts to remove a vendor from the workspace's vendor list.
Parameters:
vndr (str): name of the vendor to remove
Returns:
(bool): true if the vendor list was modified (successful remove)
'''
if(vndr.lower() in Vendor.Jar.keys()):
vndr_obj = Vendor.Jar[vndr]
if(vndr_obj not in self.getVendors()):
log.info("Vendor "+vndr_obj.getName()+" is already unlinked from the workspace.")
return False
else:
log.info("Unlinking vendor "+vndr_obj.getName()+" from the workspace...")
self._vendors.remove(vndr_obj)
return True
else:
log.warning("Could not unlink unknown vendor "+vndr+" from "+self.getName()+".")
return False
def loadBlocks(self, id_dsgns=False):
'''
Loads all blocks found at all levels: dnld (workspace path), instl (workspace
cache), avail (workspace vendors).
When id_dsgns is True, this method uses the 'multi-develop' setting to
determine which level has precedence in loadHDL().
'multi-develop' set to False will only loadHDL() from cache. 'multi-develop'
set to True will first try to loadHDL() from dnld, and if DNE, then try
to loadHDL() from block's cache.
Either way, if inside a current block, that block's HDL will be loaded over
its cache.
Dynamically creates _visible_blocks ([Block]) attribute to be reused.
Parameters:
id_dsgns (bool): identify design units (loadHDL) from blocks
Returns:
_visible_blocks ([Block]): list of all block objects in cache or path
'''
if(hasattr(self, "_visible_blocks")):
return self._visible_blocks
self._visible_blocks = []
#read the setting for multi-develop
mult_dev = apt.getMultiDevelop()
#1. Search for downloaded blocks
#glob on the local workspace path
#print("Local Blocks on:",self.getPath())
marker_files = glob.glob(self.getPath()+"**/*/"+apt.MARKER, recursive=True)
#iterate through all found downloads
for mf in marker_files:
b = Block(mf, self, Block.Level.DNLD)
#if the user is within a current block, load the HDL from its DNLD level (not INSTL)
if(mult_dev == True or Block.getCurrent(bypass=True) == b):
self._visible_blocks += [b]
if(id_dsgns):
b.loadHDL()
pass
#2. Search for installed blocks
#glob on the workspace cache path
#print("Cache Blocks on:",self.getCachePath())
marker_files = glob.glob(self.getCachePath()+"**/*/"+apt.MARKER, recursive=True)
#iterate through all found installations
for mf in marker_files:
#the block must also have a valid git repository at its root
root,_ = os.path.split(mf)
#note: only the head installation has the git repository
if(Git.isValidRepo(root, remote=False)):
b = Block(mf, self, Block.Level.INSTL)
#get the spot for this block's download
dnld_b = Block.Inventory[b.M()][b.L()][b.N()][Block.Level.DNLD.value]
#add this block if a download DNE or the dnld does not match current when
#not in multi-develop mode
if(dnld_b == None or (mult_dev == False and Block.getCurrent(bypass=True) != dnld_b)):
self._visible_blocks += [b]
if(id_dsgns):
b.loadHDL()
pass
#3. Search for available blocks
#glob on each vendor path
marker_files = []
#find all marker files in each of the workspace's vendors
for vndr in self.getVendors():
marker_files += glob.glob(vndr.getVendorDir()+"**/*/"+apt.MARKER, recursive=True)
#iterate through all found availables
for mf in marker_files:
b = Block(mf, self, Block.Level.AVAIL)
#do not add this block to list of visible blocks because it has no
#units associated with it, only metadata
pass
#4. ID all specific version blocks if identifying designs (except current block)
spec_vers_blocks = []
for vis_block in self._visible_blocks:
if(vis_block == Block.getCurrent(bypass=True)):
continue
for spec_block in vis_block.getInstalls().values():
spec_vers_blocks += [spec_block]
if(id_dsgns):
spec_block.loadHDL()
pass
pass
self._visible_blocks += spec_vers_blocks
return self._visible_blocks
def shortcut(self, title, req_entity=False, visibility=True, ref_current=True):
'''
Returns the Block from a shortened title. If title is empty and
'ref_current' is set, then tries to refer to the current block.
Some commands require an entity; in that case a lone identifier is
assumed to be an entity name (instead of a block name).
Parameters:
title (str): partial or full M.L.N with optional E attached
req_entity (bool): if True, treat a lone identifier as an entity name
visibility (bool): determine if to only look for visible blocks
ref_current (bool): determine if to try to assign empty title to current block
Returns:
(Block): the identified block from the shortened title
'''
if(title == None):
title = ''
#split into pieces
pieces = title.split('.')
sects = ['']*3
diff = 3 - len(pieces)
for i in range(len(pieces)-1, -1, -1):
sects[diff+i] = pieces[i]
#check final piece if it has an entity attached
entity = ''
if(sects[2].count(apt.ENTITY_DELIM)):
i = sects[2].find(apt.ENTITY_DELIM)
entity = sects[2][i+1:]
sects[2] = sects[2][:i]
#assume only name given is actually the entity
elif(req_entity):
entity = sects[2]
sects[2] = ''
# [!] load all necessary blocks before searching
blocks = self.loadBlocks()
#use all blocks when visibility is off :todo: is this design intent?
if(visibility == False):
blocks = Block.getAllBlocks()
#track list of possible blocks as moving up the chain
possible_blocks = []
#search for an entity
if(len(entity)):
#collect list of all entities
reg = Map()
reg[entity] = []
#iterate through every block and create a mapping for their entity names
for bk in blocks:
#get the entity names from this block
es = bk.loadHDL(returnnames=True)
#print(es)
#create mappings of entity names to their block owners
for e in es:
if(e.lower() not in reg.keys()):
reg[e] = []
reg[e] += [bk]
#see how many blocks were fit to entity name's mapping
num_blocks = len(reg[entity])
#algorithm only detected one possible solution
if(num_blocks == 1):
#make sure rest of sections are correct before returning result
potential = reg[entity][0]
title = potential.getTitle(index=2, dist=2)
#verify each part of block identifier matches what was requested
for i in range(len(sects)):
#print(sects[i])
if(len(sects[i]) and sects[i].lower() != title[i].lower()):
return None
pass
return potential
#algorithm detected multiple possible solutions (cannot infer)
elif(num_blocks > 1):
possible_blocks = reg[entity]
#only was given an entity name, algorithm cannot solve requested entity
if(len(sects[2]) == 0):
log.info("Ambiguous unit; conflicts with")
#display the units/titles that conflict with input
for bk in reg[entity]:
print('\t '+bk.getFull()+":"+entity)
print()
exit()
#no blocks matched the entity name being passed
else:
return None
pass
#search through all block names
for start in range(len(sects)-1, -1, -1):
term = sects[start]
#exit loop if next term is empty
if(len(term) == 0):
break
reg = Map()
reg[term] = []
for bk in blocks:
t = bk.getTitle(index=start, dist=0)[0]
#store the block under the given section name
if(t.lower() not in reg.keys()):
reg[t] = []
reg[t] += [bk]
#count how many blocks occupy this same name
num_blocks = len(reg[term])
#algorithm only detected one possible solution
if(num_blocks == 1):
#make sure rest of sections are correct before returning result
potential = reg[term][0]
title = potential.getTitle(index=2, dist=2)
#verify each part of block identifier matches what was requested
for i in range(len(sects)):
#print(sects[i])
if(len(sects[i]) and sects[i].lower() != title[i].lower()):
return None
pass
return potential
#algorithm detected multiple solutions (cannot infer on this step)
elif(num_blocks > 1):
#compare with blocks for a match and dwindle down choices
next_blocks = []
for bk in reg[term]:
if(bk in possible_blocks or (start == len(sects)-1 and entity == '')):
next_blocks += [bk]
#dwindled down to a single block
if(len(next_blocks) == 1):
#print("FOUND:",next_blocks[0].getTitle(index=2, dist=2))
return next_blocks[0]
#carry on to using next title section
if(len(sects[start-1])):
#continue to using next term
possible_blocks = next_blocks
continue
else:
#ran out of guesses...report the conflicting titles/units
if(req_entity):
log.info("Ambiguous unit; conflicts with")
else:
log.info("Ambiguous title; conflicts with")
for bk in reg[term]:
if(req_entity):
print('\t '+bk.getFull()+":"+entity)
else:
print('\t '+bk.getFull())
exit(print())
pass
#using the current block if title is empty string
if(ref_current and (title == '' or title == None)):
return Block.getCurrent()
#return None if all attempts have failed and not returned anything yet
return None
def decodeUnits(self):
'''
Decodes every available unit to get the complete graphing data structure.
Parameters:
None
Returns:
None
'''
blocks = self.loadBlocks()
#print(blocks)
log.info("Collecting all unit data...")
for b in blocks:
us = b.loadHDL()
for u in us.values():
u.getLanguageFile().decode(u, recursive=False)
log.info("done.")
pass
def listBlocks(self, title, alpha=False, instl=False, dnld=False, avail=False):
'''
Print a formatted table of the available blocks.
Parameters:
title (str): block title to be broken into parts for searching
alpha (bool): determine if to alphabetize the block list order (L.N.V)
instl (bool): determine if to capture only blocks that are installed
dnld (bool): determine if to capture only blocks that are downloaded
avail (bool): determine if to capture blocks available from vendor
Returns:
None
'''
#[!] load the necessary blocks
self.loadBlocks()
#collect if multi-develop is on
mult_dev = apt.getMultiDevelop()
#split the title into parts
M,L,N,_ = Block.snapTitle(title, inc_ent=False)
#get all blocks from the catalog
#store each block's text line in a map to sort keys for alpha flag
catalog = Map()
#iterate through every vendor
for vndr_k,vndrs in Block.Inventory.items():
if(vndr_k.startswith(M.lower()) == False):
continue
#iterate through every library
for lib_k,libs in vndrs.items():
if(lib_k.startswith(L.lower()) == False):
continue
#iterate through every block
for blk_k,lvls in libs.items():
if(blk_k.startswith(N.lower()) == False):
continue
downloaded = installed = available = ' '
disp_d = disp_i = disp_a = False
#if none were set on command-line default to display everything
if((dnld or instl or avail) == False):
dnld = instl = avail = True
#with each lower level, overwrite the block object to print
if(lvls[Block.Level.AVAIL.value] != None):
bk = lvls[Block.Level.AVAIL.value]
available = 'A'
disp_a = True
if(lvls[Block.Level.INSTL.value] != None):
bk = lvls[Block.Level.INSTL.value]
installed = 'I'
disp_i = True
if(lvls[Block.Level.DNLD.value] != None):
if(dnld):
bk = lvls[Block.Level.DNLD.value]
downloaded = 'D'
# if(mult_dev):
# downloaded = 'D'
# installed = installed.lower()
disp_d = True
#one condition pair must be true to display the block
if((disp_a and avail) or (disp_i and instl) or (disp_d and dnld)):
pass
else:
continue
#character to separate different status bits
spacer = ' '
#format the status column's data
sts = downloaded + spacer + installed + spacer + available
#leave version empty if its been unreleased
v = '' if(bk.getVersion() == '0.0.0') else bk.getVersion()
#check if can be updated
#prioritize installation level for checking updates
instllr = bk.getLvlBlock(Block.Level.INSTL)
cmp_v = instllr.getVersion() if(instllr != None and mult_dev == False) else bk.getVersion()
#a '^' is an update symbol indicating the latest referenced version (dnld or instl) is not actually the latest version found
if(Block.cmpVer(bk.getHighestAvailVersion(), cmp_v) != cmp_v):
sts = sts+' ^'
v = cmp_v
#format the data to print to the console and store in catalog (L.N.V str format)
catalog[bk.L()+'.'+bk.N()+'.'+bk.M()] = '{:<16}'.format(bk.L())+' '+'{:<20}'.format(bk.N())+' '+'{:<8}'.format(sts)+' '+'{:<10}'.format(v)+' '+'{:<16}'.format(bk.M())
pass
pass
keys = list(catalog.keys())
#check if to sort by alphabet
if(alpha):
keys.sort()
#print(keys)
print('{:<16}'.format("Library"),'{:<20}'.format("Block"),'{:<8}'.format("Status"+("*"*int(mult_dev))),'{:<10}'.format("Version"),'{:<16}'.format("Vendor"))
print("-"*16+" "+"-"*20+" "+"-"*8+" "+"-"*10+" "+"-"*16)
#iterate through catalog and print each textline
for k in keys:
print(catalog[k])
pass
def listUnits(self, title, alpha=False, usable=False, ignore_tb=False):
'''
Print a formatted table of all the design units.
Parameters:
title (str): block title to be broken into parts for searching
alpha (bool): determine if to alphabetize the block list order (E.V.L.N)
usable (bool): determine if to display units that can be used
ignore_tb (bool): determine if to ignore testbench files
Returns:
None
'''
#[!] load blocks into inventory
visible = self.loadBlocks()
#:todo: add flag to print 'variations' of an entity/unit (what specific version names exist)
#todo: print status of the unit and which status is usable (D or I)
M,L,N,V,E = Block.snapTitle(title, inc_ent=True)
#print(M,L,N,V,E)
#store each entity's print line in map (key = <unit>:<block-id>) to ensure uniqueness
catalog = Map()
for bk in Block.getAllBlocks():
#for lvl in Block.Inventory[bk.M()][bk.L()][bk.N()]:
block_title = bk.getFull(inc_ver=False)
if(bk.M().lower().startswith(M.lower()) == False):
continue
if(bk.L().lower().startswith(L.lower()) == False):
continue
if(bk.N().lower().startswith(N.lower()) == False):
continue
#collect all units
if(apt.getMultiDevelop() == False):
if(bk.getLvlBlock(Block.Level.INSTL) != None):
bk = bk.getLvlBlock(Block.Level.INSTL)
#skip this block if only displaying usable units and multi-develop off
elif(usable):
continue
units = bk.loadHDL(returnnames=False).values()
for u in units:
if(len(E) and u.E().lower().startswith(E.lower()) == False):
continue
if(ignore_tb and u.isTb()):
continue
#format if unit is visible/usable
vis = '-'
if(bk in visible):
vis = 'yes'
#format design unit name according to its natural language
dsgn = u.getDesign().name.lower()
if(u.getLang() == u.Language.VERILOG and dsgn == 'entity'):
dsgn = 'module'
catalog[u.E()+':'+block_title] = '{:<22}'.format(u.E())+' '+'{:<7}'.format(vis)+' '+'{:<10}'.format(dsgn)+' '+'{:<38}'.format(block_title)
pass
pass
keys = list(catalog.keys())
#check if to sort by alphabet
if(alpha):
keys.sort()
#print to console
print('{:<22}'.format("Unit"),'{:<7}'.format("Usable"),'{:<10}'.format("Type"),'{:<38}'.format("Block"))
print("-"*22+" "+"-"*7+" "+"-"*10+" "+"-"*38)
for k in keys:
print(catalog[k])
pass
pass
@classmethod
def tidy(cls):
'''
Removes any stale hidden workspace directories that aren't mapped to a
workspace found in the class Jar container.
Parameters:
None
Returns:
None
'''
#list all hidden workspace directories
hidden_dirs = os.listdir(cls.DIR)
for hd in hidden_dirs:
if(hd.lower() not in cls.Jar.keys()):
log.info("Removing stale workspace data for "+hd+"...")
if(os.path.isdir(cls.DIR+hd)):
shutil.rmtree(cls.DIR+hd, onerror=apt.rmReadOnly)
#remove all files from workspace directory
else:
os.remove(cls.DIR+hd)
pass
def autoRefresh(self, rate):
'''
Automatically refreshes all vendors for the given workspace. Reads its
log file to determine if past next interval for refresh.
Parameters:
rate (int): how often to ask a refresh within a 24-hour period
Returns:
None
'''
def timeToFloat(prt):
'''
Converts a time object into a float type.
Parameters:
prt (datetime): iso format of current time
Returns:
(float): 0.00 (inclusive) - 24.00 (exclusive)
'''
time_stamp = str(prt).split(' ')[1]
time_sects = time_stamp.split(':')
hrs = int(time_sects[0])
#convert to 'hours'.'minutes'
time_fmt = (float(hrs)+(float(float(time_sects[1])/60)))
return time_fmt
refresh = False
last_punch = None
stage = 1
cur_time = datetime.now()
#do not perform refresh if the rate is 0
if(rate == 0):
return
#always refresh if the rate is set below 0 (-1)
elif(rate <= self.MIN_RATE):
refresh = True
#divide the 24 hour period into even checkpoints
max_hours = float(24)
spacing = float(max_hours / rate)
intervals = []
for i in range(rate):
intervals += [spacing*i]
#ensure log file exists
if(os.path.exists(self.getDir()+self.LOG_FILE) == False):
open(self.getDir()+self.LOG_FILE, 'w').close()
#read log file
#read when the last refresh time occurred
with open(self.getDir()+self.LOG_FILE, 'r') as log_file:
#read the latest date
data = log_file.readlines()
#no refreshes have occurred so automatically need a refresh
if(len(data) == 0):
last_punch = cur_time
refresh = True
else:
last_punch = datetime.fromisoformat(data[0])
#determine if its time to refresh
#get latest time that was punched
last_time_fmt = timeToFloat(last_punch)
#determine the next checkpoint available for today
next_checkpoint = max_hours
for i in range(len(intervals)):
if(last_time_fmt < intervals[i]):
next_checkpoint = intervals[i]
stage = i + 1
break
#print('next checkpoint',next_checkpoint)
cur_time_fmt = timeToFloat(cur_time)
#check if the time has occurred on a previous day, (automatically update because its a new day)
next_day = cur_time.year > last_punch.year or cur_time.month > last_punch.month or cur_time.day > last_punch.day
#print(next_day)
#print("currently",cur_time_fmt)
#determine if the current time has passed the next checkpoint or if its a new day
if(next_day or cur_time_fmt >= next_checkpoint):
last_punch = cur_time
refresh = True
log_file.close()
#determine if its time to refresh
if(refresh):
#display what interval is being refreshed on the day
infoo = "("+str(stage)+"/"+str(rate)+")" if(rate > 0) else ''
log.info("Automatically refreshing workspace "+self.getName()+" vendors... "+infoo)
#refresh all vendors attached to this workspace
for vndr in self.getVendors():
vndr.refresh()
pass
#write updated time value to log file
with open(self.getDir()+self.LOG_FILE, 'w') as lf:
lf.write(str(cur_time))
pass
@classmethod
def load(cls):
'''Load all workspaces from settings.'''
wspcs = apt.CFG.get('workspace', dtype=Section)
for ws in wspcs.keys():
#skip over immediate keys
if(isinstance(wspcs[ws], Section) == False):
continue
path = ''
vendors = '()'
#verify that a path key and vendors key exists under each workspace
apt.CFG.set('workspace.'+ws+'.path', path, override=False)
apt.CFG.set('workspace.'+ws+'.vendors', vendors, override=False)
#retrieve path and vendors keys
if('path' in wspcs[ws].keys()):
path = wspcs[ws]['path']._val
if('vendors' in wspcs[ws].keys()):
vendors = Cfg.castList(wspcs[ws]['vendors']._val)
#create Workspace objects
Workspace(wspcs[ws]._name, path, vendors)
pass
#save if made any changes
if(apt.CFG._modified):
apt.CFG.write()
pass
@classmethod
def save(cls, inc_active=True):
'''
Serializes the Workspace objects and saves them to the settings dictionary.
Parameters:
inc_active (bool): determine if to save the active workspace to settings
Returns:
None
'''
serialized = {}
#serialize the Workspace objects into dictionary format for settings
for ws in cls.Jar.values():
#do not save any workspace that has no path
if(ws.getPath() == ''):
continue
serialized[ws.getName()] = {}
serialized[ws.getName()]['path'] = ws.getPath()
serialized[ws.getName()]['vendors'] = Cfg.castStr(ws.getVendors(returnnames=True, lowercase=False), tab_cnt=2, drop_list=False)
#update settings dictionary
apt.CFG.set('workspace', Section(serialized), override=True)
#update active workspace
if(inc_active):
if(cls.getActive() != None):
apt.CFG.set('general.active-workspace', cls.getActive().getName())
else:
apt.CFG.set('general.active-workspace', '')
apt.save()
pass
@classmethod
def inWorkspace(cls):
'''
Determine if an active workspace is selected.
Parameters:
None
Returns:
(bool): true if ActiveWorkspace is not None
'''
return cls._ActiveWorkspace != None
@classmethod
def setActiveWorkspace(cls, ws):
'''
Set the active workspace after initializing all workspaces into Jar. If
the input name is invalid, it will set the first workspace in the Jar as
active if one is not already assigned.
Parameters:
ws (str): workspace name
Returns:
(bool): true if active-workspace was set
'''
#properly set the active workspace from one found in Jar
if(ws != None and ws.lower() in cls.Jar.keys()):
re_assign = (cls._ActiveWorkspace != None)
#set the active workspace obj from found workspace
cls._ActiveWorkspace = cls.Jar[ws]
#only give prompt if reassigning the active-workspace
if(re_assign):
log.info("Assigning workspace "+cls._ActiveWorkspace.getName()+" as active workspace...")
return True
#try to randomly assign active workspace if not already assigned.
elif(len(cls.Jar.keys()) and cls._ActiveWorkspace == None):
random_ws = list(cls.Jar.keys())[0]
cls._ActiveWorkspace = cls.Jar[random_ws]
msgi = "No active workspace set."
if(ws != ''):
msgi = "Workspace "+ws+" does not exist."
log.info(msgi+" Auto-assigning active workspace to "+cls._ActiveWorkspace.getName()+"...")
return True
#still was not able to set the active workspace with the given argument
elif(cls._ActiveWorkspace != None):
log.info("Workspace "+ws+" does not exist. Keeping "+cls._ActiveWorkspace.getName()+" as active.")
else:
log.error("No workspace set as active.")
return False
def isLinked(self):
'''Returns if any vendors are tied to this workspace (bool).'''
return len(self.getVendors())
def getPath(self):
'''Returns the local path where downloaded blocks are located (str).'''
return self._path
def getDir(self):
'''Returns the base hidden directory where the workspace data is kept (str).'''
return self._ws_dir
def getCachePath(self):
'''Returns the hidden directory where workspace installations are kept. (str).'''
return self.getDir()+"cache/"
def getName(self):
'''Returns the workspace's identifier (str).'''
return self._name
def isActive(self):
'''Returns if this workspace is the active workspace (bool).'''
return self == self.getActive()
def getVendors(self, returnnames=False, lowercase=True):
'''
Return the vendor objects associated with the given workspace.
Parameters:
returnnames (bool): true will return vendor names
lowercase (bool): true will return lower-case names if returnnames is enabled
Returns:
([Vendor]) or ([str]): list of available vendors
'''
if(returnnames):
vndr_names = []
for vndr in self._vendors:
name = vndr.getName()
if(lowercase):
name = name.lower()
vndr_names += [name]
return vndr_names
else:
return self._vendors
@classmethod
def printList(cls):
'''
Prints formatted list for workspaces with vendor availability and which is active.
Parameters:
None
Returns:
None
'''
print('{:<16}'.format("Workspace"),'{:<6}'.format("Active"),'{:<40}'.format("Path"),'{:<14}'.format("Vendors"))
print("-"*16+" "+"-"*6+" "+"-"*40+" "+"-"*14+" ")
for ws in cls.Jar.values():
vndrs = apt.listToStr(ws.getVendors(returnnames=True))
act = 'yes' if(ws == cls.getActive()) else '-'
print('{:<16}'.format(ws.getName()),'{:<6}'.format(act),'{:<40}'.format(ws.getPath()),'{:<14}'.format(vndrs))
pass
pass
@classmethod
def printAll(cls):
for key,ws in cls.Jar.items():
print('key:',key)
print(ws)
@classmethod
def getActive(cls):
'''Returns the active workspace and will exit on error (Workspace).'''
if(cls._ActiveWorkspace == None):
exit(log.error("Not in a workspace!"))
return cls._ActiveWorkspace
# uncomment to use for debugging
# def __str__(self):
# return f'''
# ID: {hex(id(self))}
# Name: {self.getName()}
# Path: {self.getPath()}
# Active: {self.isActive()}
# Hidden directory: {self.getDir()}
# Linked to: {self.isLinked()}
# Vendors: {self.getVendors(returnnames=True)}
# '''
pass
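The interval arithmetic inside `autoRefresh` above (split the 24-hour day into `rate` even checkpoints and refresh once the current time passes the next one) can be isolated into a small standalone sketch. The function names here are mine, not part of legoHDL:

```python
from datetime import datetime

def time_to_float(ts: datetime) -> float:
    """Convert a timestamp to fractional hours in [0.0, 24.0)."""
    return ts.hour + ts.minute / 60.0

def next_checkpoint(rate: int, last_punch: datetime, max_hours: float = 24.0) -> float:
    """Return the next evenly spaced refresh checkpoint after last_punch.

    Falls back to max_hours (i.e. the next day) when every checkpoint
    for today has already passed.
    """
    spacing = max_hours / rate
    intervals = [spacing * i for i in range(rate)]
    last = time_to_float(last_punch)
    for cp in intervals:
        if last < cp:
            return cp
    return max_hours
```

With `rate=4` the day splits at 0.0, 6.0, 12.0 and 18.0 hours, so a refresh logged at 07:30 schedules the next one for the 12.0-hour mark.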
# File: ev_de.py | Repo: avinashmnit30/Electric-Vehicle-Optimal-Charging | License: BSD-3-Clause
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 16 18:01:24 2015
@author: Avinash
"""
import numpy as np
from numpy import *
import numpy
from math import *
import ev_charge_schedule_modification1 as ev
#import ev_charge_schedule.static as func1
#import ev_charge_schedule.dynamic as func2
import time
#from numba import double
from numba.decorators import autojit
func1=ev.static
func=autojit(func1)
mode=1
runs=1
maxiter=2000
F=0.5 # mutation factor, typically between 0 and 2
CR=0.2 # crossover probability: use 0.9 if parameters are dependent, 0.2 if independent (separable)
N=40 # population size (number of candidate vectors)
D=100*24 # problem dimension (100 vehicles x 24 hours)
ev.global_var(var_set=0,N_veh=int(D/float(24)))
# boundary constraints
ub=numpy.random.random(size=(1,D))[0]
lb=numpy.random.random(size=(1,D))[0]
i=0
while i<D:
ub[i]=8.8
lb[i]=2.2
i+=1
fitness_val=numpy.zeros(shape=(runs,maxiter))
best_pos=numpy.zeros(shape=(runs,D))
for run_no in range(runs):
# target vector initializtion
x=numpy.random.uniform(size=(N,D))
i=0
while i<N:
j=0
while j<D:
x[i][j]=lb[j]+x[i][j]*(ub[j]-lb[j])
j+=1
i+=1
v=np.zeros_like(x) # donar vectors
u=np.zeros_like(x) # trail vector
g=numpy.zeros(shape=(1,D))[0] # best vector found so far
# target vector initial fitness evaluation
x_fit=numpy.random.uniform(size=(1,N))[0]
i=0
while i<N:
x_fit[i]=func(x[i],mode=mode)
i+=1
u_fit=np.zeros_like(x_fit)
j=0
i=1
while i<N:
if x_fit[j]>x_fit[i]:
j=i
i+=1
g_fit=x_fit[j]
g=x[j].copy()
time1=time.time()
it=0
while it<maxiter:
# Mutation stage
for i in range(N):
r1=i
while r1==i:
r1=np.random.randint(low=0,high=N)
r2=i
while r2==i or r2==r1:
r2=np.random.randint(low=0,high=N)
r3=i
while r3==i or r3==r1 or r3==r2:
r3=np.random.randint(low=0,high=N)
v[i]=x[r1]+(x[r2]-x[r3])*F
for j in range(D):
# if v[i][j]>ub[j]:
# v[i][j]=v[i][j]-(1+numpy.random.rand())*(v[i][j]-ub[j])
# if v[i][j]<lb[j]:
# v[i][j]=v[i][j]-(1+numpy.random.rand())*(v[i][j]-lb[j])
# if v[i][j]>ub[j]:
# v[i][j]=ub[j]
# if v[i][j]<lb[j]:
# v[i][j]=lb[j]
if v[i][j]>ub[j]:
#v[i][j]=v[i][j]-1.1*(v[i][j]-ub[j])
v[i][j]=lb[j]+numpy.random.random()*(ub[j]-lb[j])
if v[i][j]<lb[j]:
v[i][j]=lb[j]+numpy.random.random()*(ub[j]-lb[j])
#v[i][j]=v[i][j]-1.1*(v[i][j]-lb[j])
# Recombination stage
for i in range(N):
for j in range(D):
if np.random.random()<=CR or j==numpy.random.randint(0,D):
u[i][j]=v[i][j]
else:
u[i][j]=x[i][j]
# Selection stage
for i in range(N):
u_fit[i]=func(u[i],mode=mode)
if u_fit[i]<x_fit[i]:
x[i]=u[i].copy()
x_fit[i]=u_fit[i]
if u_fit[i]<g_fit:
g=u[i].copy()
g_fit=u_fit[i]
fitness_val[run_no][it]=g_fit
print(it, g_fit)
it+=1
best_pos[run_no]=g.copy()
time2=time.time()
print time2-time1
run_no+=1
numpy.savetxt("DE_fitness_d1_m2"+str(mode)+str(D)+".csv",fitness_val,delimiter=",")
numpy.savetxt("DE_bestpos_d1_m2"+str(mode)+str(D)+".csv",best_pos,delimiter=",")
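The mutation/recombination/selection loop above can be condensed into a self-contained DE/rand/1/bin sketch; the sphere objective, bounds, and hyper-parameters below are illustrative choices, not the script's actual problem:

```python
import numpy as np


def de_rand_1_bin(func, lb, ub, n_pop=20, f_weight=0.8, cr=0.2, max_iter=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box constraints [lb, ub]."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = lb + rng.random((n_pop, dim)) * (ub - lb)        # target vectors
    fit = np.array([func(xi) for xi in x])
    for _ in range(max_iter):
        for i in range(n_pop):
            # three distinct indices, all different from i
            r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i], 3, replace=False)
            v = x[r1] + f_weight * (x[r2] - x[r3])       # mutation (donor)
            # re-sample out-of-bounds components uniformly, as in the script above
            v = np.where((v < lb) | (v > ub), lb + rng.random(dim) * (ub - lb), v)
            cross = rng.random(dim) <= cr
            cross[rng.integers(dim)] = True              # keep at least one donor component
            u = np.where(cross, v, x[i])                 # binomial crossover (trial)
            fu = func(u)
            if fu < fit[i]:                              # greedy selection
                x[i], fit[i] = u, fu
    best = np.argmin(fit)
    return x[best], fit[best]
```

On a separable objective such as the sphere function, a low CR like the 0.2 used above converges quickly because components are improved independently.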
# === auth/decorators.py (repo: dongboyan77/quay, license: Apache-2.0) ===
import logging
from functools import wraps

from flask import request, session
from prometheus_client import Counter

from auth.basic import validate_basic_auth
from auth.oauth import validate_bearer_auth
from auth.cookie import validate_session_cookie
from auth.signedgrant import validate_signed_grant
from util.http import abort

logger = logging.getLogger(__name__)

authentication_count = Counter(
    "quay_authentication_attempts_total",
    "number of authentication attempts across the registry and API",
    labelnames=["auth_kind", "success"],
)


def _auth_decorator(pass_result=False, handlers=None):
    """ Builds an auth decorator that runs the given handlers and, if any return successfully,
        sets up the auth context. The wrapped function will be invoked *regardless of success or
        failure of the auth handler(s)*.
    """

    def processor(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            auth_header = request.headers.get("authorization", "")
            result = None

            for handler in handlers:
                result = handler(auth_header)

                # If the handler was missing the necessary information, skip it and try the next one.
                if result.missing:
                    continue

                # Check for a valid result.
                if result.auth_valid:
                    logger.debug("Found valid auth result: %s", result.tuple())

                    # Set the various pieces of the auth context.
                    result.apply_to_context()

                    # Log the metric.
                    authentication_count.labels(result.kind, True).inc()
                    break

                # Otherwise, report the error.
                if result.error_message is not None:
                    # Log the failure.
                    authentication_count.labels(result.kind, False).inc()
                    break

            if pass_result:
                kwargs["auth_result"] = result

            return func(*args, **kwargs)

        return wrapper

    return processor


process_oauth = _auth_decorator(handlers=[validate_bearer_auth, validate_session_cookie])
process_auth = _auth_decorator(handlers=[validate_signed_grant, validate_basic_auth])
process_auth_or_cookie = _auth_decorator(handlers=[validate_basic_auth, validate_session_cookie])
process_basic_auth = _auth_decorator(handlers=[validate_basic_auth], pass_result=True)
process_basic_auth_no_pass = _auth_decorator(handlers=[validate_basic_auth])


def require_session_login(func):
    """ Decorates a function and ensures that a valid session cookie exists or a 401 is raised. If
        a valid session cookie does exist, the authenticated user and identity are also set.
    """

    @wraps(func)
    def wrapper(*args, **kwargs):
        result = validate_session_cookie()
        if result.has_nonrobot_user:
            result.apply_to_context()
            authentication_count.labels(result.kind, True).inc()
            return func(*args, **kwargs)
        elif not result.missing:
            authentication_count.labels(result.kind, False).inc()

        abort(401, message="Method requires login and no valid login could be loaded.")

    return wrapper


def extract_namespace_repo_from_session(func):
    """ Extracts the namespace and repository name from the current session (which must exist)
        and passes them into the decorated function as the first and second arguments. If the
        session doesn't exist or does not contain these arguments, a 400 error is raised.
    """

    @wraps(func)
    def wrapper(*args, **kwargs):
        if "namespace" not in session or "repository" not in session:
            logger.error("Unable to load namespace or repository from session: %s", session)
            abort(400, message="Missing namespace in request")

        return func(session["namespace"], session["repository"], *args, **kwargs)

    return wrapper
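The handler-chain shape of `_auth_decorator` (try each handler; skip it if it had nothing to act on; stop on the first definite answer) can be reduced to a dependency-free sketch. The `Result` dataclass and the stub handlers below are hypothetical stand-ins for the real validators:

```python
from dataclasses import dataclass
from functools import wraps
from typing import Callable, List, Optional


@dataclass
class Result:
    kind: str
    missing: bool = False            # handler had nothing it could act on
    auth_valid: bool = False         # handler authenticated the request
    error_message: Optional[str] = None


def auth_decorator(handlers: List[Callable[[str], Result]]):
    def processor(func):
        @wraps(func)
        def wrapper(auth_header: str, *args, **kwargs):
            result = None
            for handler in handlers:
                result = handler(auth_header)
                if result.missing:
                    continue        # header not understood: try the next handler
                break               # valid result or definite error: stop
            kwargs["auth_result"] = result
            return func(auth_header, *args, **kwargs)
        return wrapper
    return processor


def basic(header):
    ok = header.startswith("Basic ")
    return Result("basic", missing=not ok, auth_valid=ok)


def bearer(header):
    ok = header.startswith("Bearer ")
    return Result("bearer", missing=not ok, auth_valid=ok)


@auth_decorator(handlers=[basic, bearer])
def view(auth_header, auth_result=None):
    return auth_result.kind if auth_result and auth_result.auth_valid else "anonymous"
```

As in the real decorator, the wrapped view runs regardless of whether any handler succeeded; it inspects the result itself.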
# === tests/continuous_integration.py (repo: kfaRabi/online-judge-tools, license: MIT) ===
import os
import subprocess
import sys
import unittest

# TODO: these commands should be written only once, either in .travis.yml or here
paths = ['oj', 'onlinejudge', 'setup.py', 'tests']


class ContinuousIntegrationTest(unittest.TestCase):
    """A dummy test to run the same commands as CI on local environments"""

    @unittest.skipIf('CI' in os.environ, 'the same command is called from .travis.yml')
    def test_isort(self):
        subprocess.check_call(['isort', '--check-only', '--diff', '--recursive'] + paths, stdout=sys.stdout, stderr=sys.stderr)

    @unittest.skipIf('CI' in os.environ, 'the same command is called from .travis.yml')
    def test_yapf(self):
        output = subprocess.check_output(['yapf', '--diff', '--recursive'] + paths, stderr=sys.stderr)
        self.assertEqual(output, b'')

    @unittest.skipIf('CI' in os.environ, 'the same command is called from .travis.yml')
    def test_mypy(self):
        subprocess.check_call(['mypy', '--show-traceback'] + paths, stdout=sys.stdout, stderr=sys.stderr)
# === ProjectEuler_plus/euler_042.py (repo: byung-u/HackerRank, license: MIT) ===
#!/usr/bin/env python3
import sys
from math import sqrt

# A triangular number t satisfies t = n * (n + 1) / 2, i.e. n ** 2 + n - 2 * t = 0.
# Solve for n with the quadratic formula: t is triangular iff the positive root is an integer.
# https://en.wikipedia.org/wiki/Quadratic_equation
for _ in range(int(input().strip())):
    t = int(input().strip())
    d = sqrt(4 * 2 * t + 1) - 1
    if d.is_integer():
        print(int(d) // 2)
    else:
        print(-1)


def e42():
    # Alternative: compare against the rounded integer square root.
    for _ in range(int(input().strip())):
        n = int(input().strip())
        root = int(sqrt(2 * n))
        if (root * (root + 1)) // 2 == n:
            print(root)
        else:
            print(-1)
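The same triangularity check can be done in exact integer arithmetic with `math.isqrt`, which avoids float `sqrt` rounding on very large inputs. This helper is an illustrative addition, not part of the original submission:

```python
from math import isqrt


def triangular_index(t: int) -> int:
    """Return n if t == n * (n + 1) / 2 for some positive integer n, else -1."""
    # n**2 + n - 2*t = 0 has discriminant 8*t + 1, so n = (sqrt(8*t + 1) - 1) / 2.
    disc = 8 * t + 1
    root = isqrt(disc)
    if root * root != disc:
        return -1           # discriminant is not a perfect square: not triangular
    return (root - 1) // 2  # disc is odd, so root is odd and this division is exact
```

For example, 55 = 10 * 11 / 2 is the 10th triangular number, while 4 is not triangular.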
# === src/gdata/spreadsheets/data.py (repo: Cloudlock/gdata-python3, license: Apache-2.0) ===
#!/usr/bin/env python
#
# Copyright (C) 2009 Google Inc.
#
# Licensed under the Apache License 2.0;
# This module is used for version 2 of the Google Data APIs.

"""Provides classes and constants for the XML in the Google Spreadsheets API.

Documentation for the raw XML which these classes represent can be found here:
http://code.google.com/apis/spreadsheets/docs/3.0/reference.html#Elements
"""

# __author__ = 'j.s@google.com (Jeff Scudder)'

import atom.core
import gdata.data

GS_TEMPLATE = '{http://schemas.google.com/spreadsheets/2006}%s'
GSX_NAMESPACE = 'http://schemas.google.com/spreadsheets/2006/extended'

INSERT_MODE = 'insert'
OVERWRITE_MODE = 'overwrite'

WORKSHEETS_REL = 'http://schemas.google.com/spreadsheets/2006#worksheetsfeed'

BATCH_POST_ID_TEMPLATE = ('https://spreadsheets.google.com/feeds/cells'
                          '/%s/%s/private/full')
BATCH_ENTRY_ID_TEMPLATE = '%s/R%sC%s'
BATCH_EDIT_LINK_TEMPLATE = '%s/batch'
class Error(Exception):
    pass


class FieldMissing(Exception):
    pass


class HeaderNotSet(Error):
    """The desired column header had no value for the row in the list feed."""


class Cell(atom.core.XmlElement):
    """The gs:cell element.

    A cell in the worksheet. The <gs:cell> element can appear only as a child
    of <atom:entry>.
    """
    _qname = GS_TEMPLATE % 'cell'
    col = 'col'
    input_value = 'inputValue'
    numeric_value = 'numericValue'
    row = 'row'


class ColCount(atom.core.XmlElement):
    """The gs:colCount element.

    Indicates the number of columns in the worksheet, including columns that
    contain only empty cells. The <gs:colCount> element can appear as a child
    of <atom:entry> or <atom:feed>.
    """
    _qname = GS_TEMPLATE % 'colCount'


class Field(atom.core.XmlElement):
    """The gs:field element.

    A single cell (field) within a record. Contained in an <atom:entry>.
    """
    _qname = GS_TEMPLATE % 'field'
    index = 'index'
    name = 'name'


class Column(Field):
    """The gs:column element."""
    _qname = GS_TEMPLATE % 'column'


class Data(atom.core.XmlElement):
    """The gs:data element.

    A data region of a table. Contained in an <atom:entry> element.
    """
    _qname = GS_TEMPLATE % 'data'
    column = [Column]
    insertion_mode = 'insertionMode'
    num_rows = 'numRows'
    start_row = 'startRow'


class Header(atom.core.XmlElement):
    """The gs:header element.

    Indicates which row is the header row. Contained in an <atom:entry>.
    """
    _qname = GS_TEMPLATE % 'header'
    row = 'row'


class RowCount(atom.core.XmlElement):
    """The gs:rowCount element.

    Indicates the number of total rows in the worksheet, including rows that
    contain only empty cells. The <gs:rowCount> element can appear as a
    child of <atom:entry> or <atom:feed>.
    """
    _qname = GS_TEMPLATE % 'rowCount'


class Worksheet(atom.core.XmlElement):
    """The gs:worksheet element.

    The worksheet where the table lives. Contained in an <atom:entry>.
    """
    _qname = GS_TEMPLATE % 'worksheet'
    name = 'name'
class Spreadsheet(gdata.data.GDEntry):
    """An Atom entry which represents a Google Spreadsheet."""

    def find_worksheets_feed(self):
        return self.find_url(WORKSHEETS_REL)

    FindWorksheetsFeed = find_worksheets_feed

    def get_spreadsheet_key(self):
        """Extracts the spreadsheet key unique to this spreadsheet."""
        return self.get_id().split('/')[-1]

    GetSpreadsheetKey = get_spreadsheet_key


class SpreadsheetsFeed(gdata.data.GDFeed):
    """An Atom feed listing a user's Google Spreadsheets."""
    entry = [Spreadsheet]


class WorksheetEntry(gdata.data.GDEntry):
    """An Atom entry representing a single worksheet in a spreadsheet."""
    row_count = RowCount
    col_count = ColCount

    def get_worksheet_id(self):
        """The worksheet ID identifies this worksheet in its spreadsheet."""
        return self.get_id().split('/')[-1]

    GetWorksheetId = get_worksheet_id


class WorksheetsFeed(gdata.data.GDFeed):
    """A feed containing the worksheets in a single spreadsheet."""
    entry = [WorksheetEntry]


class Table(gdata.data.GDEntry):
    """An Atom entry that represents a subsection of a worksheet.

    A table allows you to treat part or all of a worksheet somewhat like a
    table in a database, that is, as a set of structured data items. Tables
    don't exist until you explicitly create them; before you can use a table
    feed, you have to explicitly define where the table data comes from.
    """
    data = Data
    header = Header
    worksheet = Worksheet

    def get_table_id(self):
        if self.id.text:
            return self.id.text.split('/')[-1]
        return None

    GetTableId = get_table_id


class TablesFeed(gdata.data.GDFeed):
    """An Atom feed containing the tables defined within a worksheet."""
    entry = [Table]
class Record(gdata.data.GDEntry):
    """An Atom entry representing a single record in a table.

    Note that the order of items in each record is the same as the order of
    columns in the table definition, which may not match the order of
    columns in the GUI.
    """
    field = [Field]

    def value_for_index(self, column_index):
        for field in self.field:
            if field.index == column_index:
                return field.text
        raise FieldMissing('There is no field for %s' % column_index)

    ValueForIndex = value_for_index

    def value_for_name(self, name):
        for field in self.field:
            if field.name == name:
                return field.text
        raise FieldMissing('There is no field for %s' % name)

    ValueForName = value_for_name

    def get_record_id(self):
        if self.id.text:
            return self.id.text.split('/')[-1]
        return None


class RecordsFeed(gdata.data.GDFeed):
    """An Atom feed containing the individual records in a table."""
    entry = [Record]


class ListRow(atom.core.XmlElement):
    """A gsx column value within a row.

    The local tag in the _qname is blank and must be set to the column
    name. For example, when adding to a ListEntry, do:
        col_value = ListRow(text='something')
        col_value._qname = col_value._qname % 'mycolumnname'
    """
    _qname = '{http://schemas.google.com/spreadsheets/2006/extended}%s'
class ListEntry(gdata.data.GDEntry):
    """An Atom entry representing a worksheet row in the list feed.

    The values for a particular column can be get and set using
    x.get_value('columnheader') and x.set_value('columnheader', 'value').
    See also the explanation of column names in the ListFeed class.
    """

    def get_value(self, column_name):
        """Returns the displayed text for the desired column in this row.

        The formula or input which generated the displayed value is not accessible
        through the list feed; to see the user's input, use the cells feed.

        If a column is not present in this spreadsheet, or there is no value
        for a column in this row, this method will return None.
        """
        values = self.get_elements(column_name, GSX_NAMESPACE)
        if len(values) == 0:
            return None
        return values[0].text

    def set_value(self, column_name, value):
        """Changes the value of the cell in this row under the desired column name.

        Warning: if the cell contained a formula, it will be wiped out by setting
        the value using the list feed since the list feed only works with
        displayed values.

        No client side checking is performed on the column_name; you need to
        ensure that the column_name is the local tag name in the gsx tag for the
        column. For example, the column_name will not contain special characters,
        spaces, uppercase letters, etc.
        """
        # Try to find the column in this row to change an existing value.
        values = self.get_elements(column_name, GSX_NAMESPACE)
        if len(values) > 0:
            values[0].text = value
        else:
            # There is no value in this row for the desired column, so add a new
            # gsx:column_name element.
            new_value = ListRow(text=value)
            new_value._qname = new_value._qname % (column_name,)
            self._other_elements.append(new_value)

    def to_dict(self):
        """Converts this row to a mapping of column names to their values."""
        result = {}
        values = self.get_elements(namespace=GSX_NAMESPACE)
        for item in values:
            result[item._get_tag()] = item.text
        return result

    def from_dict(self, values):
        """Sets values for this row from the dictionary.

        Old values which are already in the entry will not be removed unless
        they are overwritten with new values from the dict.
        """
        for column, value in values.items():
            self.set_value(column, value)
class ListsFeed(gdata.data.GDFeed):
    """An Atom feed in which each entry represents a row in a worksheet.

    The first row in the worksheet is used as the column names for the values
    in each row. If a header cell is empty, then a unique column ID is used
    for the gsx element name.

    Spaces in a column name are removed from the name of the corresponding
    gsx element.

    Caution: The columnNames are case-insensitive. For example, if you see
    a <gsx:e-mail> element in a feed, you can't know whether the column
    heading in the original worksheet was "e-mail" or "E-Mail".

    Note: If two or more columns have the same name, then subsequent columns
    of the same name have _n appended to the columnName. For example, if the
    first column name is "e-mail", followed by columns named "E-Mail" and
    "E-mail", then the columnNames will be gsx:e-mail, gsx:e-mail_2, and
    gsx:e-mail_3 respectively.
    """
    entry = [ListEntry]


class CellEntry(gdata.data.BatchEntry):
    """An Atom entry representing a single cell in a worksheet."""
    cell = Cell


class CellsFeed(gdata.data.BatchFeed):
    """An Atom feed with one entry per cell in a worksheet.

    The cell feed supports batch operations; you can send multiple cell
    operations in one HTTP request.
    """
    entry = [CellEntry]

    def add_set_cell(self, row, col, input_value):
        """Adds a request to change the contents of a cell to this batch request.

        Args:
            row: int, The row number for this cell. Numbering starts at 1.
            col: int, The column number for this cell. Starts at 1.
            input_value: str, The desired formula/content this cell should contain.
        """
        self.add_update(CellEntry(
            id=atom.data.Id(text=BATCH_ENTRY_ID_TEMPLATE % (
                self.id.text, row, col)),
            cell=Cell(col=str(col), row=str(row), input_value=input_value)))
        return self

    AddSetCell = add_set_cell
def build_batch_cells_update(spreadsheet_key, worksheet_id):
    """Creates an empty cells feed for adding batch cell updates to.

    Call batch_set_cell on the resulting CellsFeed instance, then send the batch
    request. TODO: fill in

    Args:
        spreadsheet_key: The ID of the spreadsheet
        worksheet_id:
    """
    feed_id_text = BATCH_POST_ID_TEMPLATE % (spreadsheet_key, worksheet_id)
    return CellsFeed(
        id=atom.data.Id(text=feed_id_text),
        link=[atom.data.Link(
            rel='edit', href=BATCH_EDIT_LINK_TEMPLATE % (feed_id_text,))])


BuildBatchCellsUpdate = build_batch_cells_update
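The batch templates used by `add_set_cell` and `build_batch_cells_update` are plain %-interpolation; a small stand-alone illustration with a made-up spreadsheet key and worksheet id:

```python
BATCH_POST_ID_TEMPLATE = ('https://spreadsheets.google.com/feeds/cells'
                          '/%s/%s/private/full')
BATCH_ENTRY_ID_TEMPLATE = '%s/R%sC%s'
BATCH_EDIT_LINK_TEMPLATE = '%s/batch'

# 'key123' and 'od6' are illustrative placeholders, not real identifiers.
feed_id = BATCH_POST_ID_TEMPLATE % ('key123', 'od6')
entry_id = BATCH_ENTRY_ID_TEMPLATE % (feed_id, 2, 5)  # batch entry for the cell at row 2, column 5
edit_link = BATCH_EDIT_LINK_TEMPLATE % (feed_id,)     # where the batch POST is sent
```

Each batch entry ID embeds the cell coordinates (R2C5), which is how the server matches responses back to requests.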
# === tests/test_issues/test_member_example.py (repo: hsolbrig/pyjsg, license: CC0-1.0) ===
import unittest
from pyjsg.validate_json import JSGPython


class MemberExampleTestCase(unittest.TestCase):
    def test1(self):
        x = JSGPython('''doc {
            last_name : @string,   # exactly one last name of type string
            first_name : @string+  # array of one or more first names
            age : @int?,           # optional age of type int
            weight : @number*      # array of zero or more weights
        }
        ''')
        rslts = x.conforms('''
            { "last_name" : "snooter",
              "first_name" : ["grunt", "peter"],
              "weight" : []
            }''')
        self.assertTrue(rslts.success)


if __name__ == '__main__':
    unittest.main()
# === ExpenseTracker/grocery/migrations/0004_auto_20200908_1918.py (repo: lennyAiko/LifeExpenses, license: MIT) ===
# Generated by Django 3.1.1 on 2020-09-08 18:18
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('grocery', '0003_auto_20200908_1417'),
    ]

    operations = [
        migrations.AlterField(
            model_name='item',
            name='list',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='item', to='grocery.list'),
        ),
        migrations.AlterField(
            model_name='list',
            name='user',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='list', to=settings.AUTH_USER_MODEL),
        ),
    ]
# === my_hello_world_app/web_api/router.py (repo: gsjay980/data-science-IP, license: MIT) ===
from os import getenv
from typing import Optional, Dict

from flask import Flask

TestConfig = Optional[Dict[str, bool]]


def create_app(test_config: TestConfig = None) -> Flask:
    """App factory method to initialize the application with the given configuration."""
    app: Flask = Flask(__name__)

    if test_config is not None:
        app.config.from_mapping(test_config)

    @app.route("/")
    def index() -> str:  # pylint: disable=unused-variable
        return "My Hello World App is working..."

    @app.route("/version")
    def version() -> str:  # pylint: disable=unused-variable
        """
        DOCKER_IMAGE_TAG is passed into the app from the Dockerfile as an ARG.
        It should be set in the docker build task.
        It is used in .gitlab-ci.yaml to pass the hash of the latest commit as the docker image tag.
        E.g. docker build --build-arg docker_image_tag="my-version" -t my-image-name:my-version .
        """
        return getenv("DOCKER_IMAGE_TAG") or "DOCKER_IMAGE_TAG hasn't been set up"

    return app
# === SpoTwillio/lib/python3.6/site-packages/twilio/rest/api/v2010/account/call/feedback.py (repo: Natfan/funlittlethings, license: MIT) ===
# coding=utf-8
"""
This code was generated by
\ / _ _ _| _ _
| (_)\/(_)(_|\/| |(/_ v1.0.0
/ /
"""
from twilio.base import deserialize
from twilio.base import values
from twilio.base.instance_context import InstanceContext
from twilio.base.instance_resource import InstanceResource
from twilio.base.list_resource import ListResource
from twilio.base.page import Page
class FeedbackList(ListResource):

    def __init__(self, version, account_sid, call_sid):
        """
        Initialize the FeedbackList

        :param Version version: Version that contains the resource
        :param account_sid: The account_sid
        :param call_sid: A 34 character string that uniquely identifies this resource.

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackList
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackList
        """
        super(FeedbackList, self).__init__(version)

        # Path Solution
        self._solution = {
            'account_sid': account_sid,
            'call_sid': call_sid,
        }

    def get(self):
        """
        Constructs a FeedbackContext

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        """
        return FeedbackContext(
            self._version,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def __call__(self):
        """
        Constructs a FeedbackContext

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        """
        return FeedbackContext(
            self._version,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def __repr__(self):
        """
        Provide a friendly representation

        :returns: Machine friendly representation
        :rtype: str
        """
        return '<Twilio.Api.V2010.FeedbackList>'
class FeedbackPage(Page):

    def __init__(self, version, response, solution):
        """
        Initialize the FeedbackPage

        :param Version version: Version that contains the resource
        :param Response response: Response from the API
        :param account_sid: The account_sid
        :param call_sid: A 34 character string that uniquely identifies this resource.

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackPage
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackPage
        """
        super(FeedbackPage, self).__init__(version, response)

        # Path Solution
        self._solution = solution

    def get_instance(self, payload):
        """
        Build an instance of FeedbackInstance

        :param dict payload: Payload response from the API

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
        """
        return FeedbackInstance(
            self._version,
            payload,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def __repr__(self):
        """
        Provide a friendly representation

        :returns: Machine friendly representation
        :rtype: str
        """
        return '<Twilio.Api.V2010.FeedbackPage>'
class FeedbackContext(InstanceContext):

    def __init__(self, version, account_sid, call_sid):
        """
        Initialize the FeedbackContext

        :param Version version: Version that contains the resource
        :param account_sid: The account_sid
        :param call_sid: The call sid that uniquely identifies the call

        :returns: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
        """
        super(FeedbackContext, self).__init__(version)

        # Path Solution
        self._solution = {
            'account_sid': account_sid,
            'call_sid': call_sid,
        }
        self._uri = '/Accounts/{account_sid}/Calls/{call_sid}/Feedback.json'.format(**self._solution)

    def create(self, quality_score, issue=values.unset):
        """
        Create a new FeedbackInstance

        :param unicode quality_score: The quality_score
        :param FeedbackInstance.Issues issue: The issue

        :returns: Newly created FeedbackInstance
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
        """
        data = values.of({
            'QualityScore': quality_score,
            'Issue': issue,
        })

        payload = self._version.create(
            'POST',
            self._uri,
            data=data,
        )

        return FeedbackInstance(
            self._version,
            payload,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def fetch(self):
        """
        Fetch a FeedbackInstance

        :returns: Fetched FeedbackInstance
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
        """
        params = values.of({})

        payload = self._version.fetch(
            'GET',
            self._uri,
            params=params,
        )

        return FeedbackInstance(
            self._version,
            payload,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def update(self, quality_score, issue=values.unset):
        """
        Update the FeedbackInstance

        :param unicode quality_score: An integer from 1 to 5
        :param FeedbackInstance.Issues issue: Issues experienced during the call

        :returns: Updated FeedbackInstance
        :rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
        """
        data = values.of({
            'QualityScore': quality_score,
            'Issue': issue,
        })

        payload = self._version.update(
            'POST',
            self._uri,
            data=data,
        )

        return FeedbackInstance(
            self._version,
            payload,
            account_sid=self._solution['account_sid'],
            call_sid=self._solution['call_sid'],
        )

    def __repr__(self):
        """
        Provide a friendly representation

        :returns: Machine friendly representation
        :rtype: str
        """
        context = ' '.join('{}={}'.format(k, v) for k, v in self._solution.items())
        return '<Twilio.Api.V2010.FeedbackContext {}>'.format(context)
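The context above builds its REST path and request payload from plain dicts; a dependency-free sketch of those two pieces (the sids below are made up, and `UNSET`/`of` are illustrative stand-ins for `twilio.base.values.unset` and `values.of`):

```python
URI_TEMPLATE = '/Accounts/{account_sid}/Calls/{call_sid}/Feedback.json'

UNSET = object()  # stand-in sentinel for twilio.base.values.unset


def of(d):
    """Drop unset entries, mirroring what values.of() does before a request."""
    return {k: v for k, v in d.items() if v is not UNSET}


solution = {'account_sid': 'ACxxxxxxxx', 'call_sid': 'CAyyyyyyyy'}
uri = URI_TEMPLATE.format(**solution)
data = of({'QualityScore': 5, 'Issue': UNSET})  # omitted optional params never hit the wire
```

Filtering unset values means optional parameters the caller never supplied are simply absent from the POST body rather than sent as nulls.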
class FeedbackInstance(InstanceResource):
class Issues(object):
AUDIO_LATENCY = "audio-latency"
DIGITS_NOT_CAPTURED = "digits-not-captured"
DROPPED_CALL = "dropped-call"
IMPERFECT_AUDIO = "imperfect-audio"
INCORRECT_CALLER_ID = "incorrect-caller-id"
ONE_WAY_AUDIO = "one-way-audio"
POST_DIAL_DELAY = "post-dial-delay"
UNSOLICITED_CALL = "unsolicited-call"
def __init__(self, version, payload, account_sid, call_sid):
"""
Initialize the FeedbackInstance
:returns: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
:rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
"""
super(FeedbackInstance, self).__init__(version)
# Marshaled Properties
self._properties = {
'account_sid': payload['account_sid'],
'date_created': deserialize.rfc2822_datetime(payload['date_created']),
'date_updated': deserialize.rfc2822_datetime(payload['date_updated']),
'issues': payload['issues'],
'quality_score': deserialize.integer(payload['quality_score']),
'sid': payload['sid'],
}
# Context
self._context = None
self._solution = {
'account_sid': account_sid,
'call_sid': call_sid,
}
@property
def _proxy(self):
"""
Generate an instance context for the instance, the context is capable of
performing various actions. All instance actions are proxied to the context
:returns: FeedbackContext for this FeedbackInstance
:rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackContext
"""
if self._context is None:
self._context = FeedbackContext(
self._version,
account_sid=self._solution['account_sid'],
call_sid=self._solution['call_sid'],
)
return self._context
@property
def account_sid(self):
"""
:returns: The account_sid
:rtype: unicode
"""
return self._properties['account_sid']
@property
def date_created(self):
"""
:returns: The date_created
:rtype: datetime
"""
return self._properties['date_created']
@property
def date_updated(self):
"""
:returns: The date_updated
:rtype: datetime
"""
return self._properties['date_updated']
@property
def issues(self):
"""
:returns: The issues
:rtype: FeedbackInstance.Issues
"""
return self._properties['issues']
@property
def quality_score(self):
"""
:returns: 1 to 5 quality score
        :rtype: int
"""
return self._properties['quality_score']
@property
def sid(self):
"""
:returns: The sid
:rtype: unicode
"""
return self._properties['sid']
def create(self, quality_score, issue=values.unset):
"""
Create a new FeedbackInstance
        :param unicode quality_score: An integer from 1 to 5
        :param FeedbackInstance.Issues issue: Issues experienced during the call
:returns: Newly created FeedbackInstance
:rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
"""
return self._proxy.create(
quality_score,
issue=issue,
)
def fetch(self):
"""
Fetch a FeedbackInstance
:returns: Fetched FeedbackInstance
:rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
"""
return self._proxy.fetch()
def update(self, quality_score, issue=values.unset):
"""
Update the FeedbackInstance
:param unicode quality_score: An integer from 1 to 5
:param FeedbackInstance.Issues issue: Issues experienced during the call
:returns: Updated FeedbackInstance
:rtype: twilio.rest.api.v2010.account.call.feedback.FeedbackInstance
"""
return self._proxy.update(
quality_score,
issue=issue,
)
def __repr__(self):
"""
Provide a friendly representation
:returns: Machine friendly representation
:rtype: str
"""
context = ' '.join('{}={}'.format(k, v) for k, v in self._solution.items())
return '<Twilio.Api.V2010.FeedbackInstance {}>'.format(context)
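The `update()` method above assembles its POST body with `values.of`, which drops any parameter left at the `unset` sentinel so only explicitly supplied fields reach the API. A minimal standalone sketch of that filtering behavior (`UNSET` and `of` here are illustrative stand-ins, not Twilio's actual implementation):

```python
# Stand-ins for twilio.base.values.unset / values.of (illustrative only).
UNSET = object()

def of(params):
    # Keep only the parameters the caller actually set.
    return {k: v for k, v in params.items() if v is not UNSET}

# Mirrors a call like update(quality_score=5) with `issue` left unset:
data = of({'QualityScore': 5, 'Issue': UNSET})
print(data)  # {'QualityScore': 5}
```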

# File: tests/instrumentation/sqlite_tests.py (repo: dsanders11/opbeat_python, license: BSD-3-Clause)
import sqlite3
import mock
import opbeat.instrumentation.control
from tests.helpers import get_tempstoreclient
from tests.utils.compat import TestCase
class InstrumentSQLiteTest(TestCase):
def setUp(self):
self.client = get_tempstoreclient()
opbeat.instrumentation.control.instrument()
@mock.patch("opbeat.traces.RequestsStore.should_collect")
def test_connect(self, should_collect):
should_collect.return_value = False
self.client.begin_transaction("transaction.test")
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("""CREATE TABLE testdb (id integer, username text)""")
cursor.execute("""INSERT INTO testdb VALUES (1, "Ron")""")
cursor.execute("""DROP TABLE testdb""")
self.client.end_transaction("MyView")
transactions, traces = self.client.instrumentation_store.get_all()
expected_signatures = ['transaction', 'sqlite3.connect :memory:',
'CREATE TABLE', 'INSERT INTO testdb',
'DROP TABLE']
self.assertEqual(set([t['signature'] for t in traces]),
set(expected_signatures))
# Reorder according to the kinds list so we can just test them
sig_dict = dict([(t['signature'], t) for t in traces])
traces = [sig_dict[k] for k in expected_signatures]
self.assertEqual(traces[0]['signature'], 'transaction')
self.assertEqual(traces[0]['kind'], 'transaction')
self.assertEqual(traces[0]['transaction'], 'MyView')
self.assertEqual(traces[1]['signature'], 'sqlite3.connect :memory:')
self.assertEqual(traces[1]['kind'], 'db.sqlite.connect')
self.assertEqual(traces[1]['transaction'], 'MyView')
self.assertEqual(traces[2]['signature'], 'CREATE TABLE')
self.assertEqual(traces[2]['kind'], 'db.sqlite.sql')
self.assertEqual(traces[2]['transaction'], 'MyView')
self.assertEqual(traces[3]['signature'], 'INSERT INTO testdb')
self.assertEqual(traces[3]['kind'], 'db.sqlite.sql')
self.assertEqual(traces[3]['transaction'], 'MyView')
self.assertEqual(traces[4]['signature'], 'DROP TABLE')
self.assertEqual(traces[4]['kind'], 'db.sqlite.sql')
self.assertEqual(traces[4]['transaction'], 'MyView')
self.assertEqual(len(traces), 5)
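The reordering idiom used in the test above, in isolation: index the traces by signature, then read them back in the expected order so the positional assertions are stable regardless of collection order (toy data here, not opbeat's):

```python
traces = [{"signature": "DROP TABLE", "kind": "db.sqlite.sql"},
          {"signature": "CREATE TABLE", "kind": "db.sqlite.sql"}]
expected_signatures = ["CREATE TABLE", "DROP TABLE"]

# Reorder according to the expected list so we can assert by index.
sig_dict = dict((t["signature"], t) for t in traces)
ordered = [sig_dict[k] for k in expected_signatures]
print([t["signature"] for t in ordered])  # ['CREATE TABLE', 'DROP TABLE']
```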

# File: gaphor/RAAML/stpa/connectors.py (repo: Texopolis/gaphor, license: Apache-2.0)
from gaphor.diagram.connectors import Connector
from gaphor.diagram.presentation import Classified
from gaphor.RAAML.raaml import RelevantTo
from gaphor.RAAML.stpa import RelevantToItem
from gaphor.SysML.requirements.connectors import DirectedRelationshipPropertyPathConnect
@Connector.register(Classified, RelevantToItem)
class RelevantToConnect(DirectedRelationshipPropertyPathConnect):
relation_type = RelevantTo

# File: data_browser/migrations/0002_auto_20200331_1842.py (repo: me2d09/django-data-browser, license: BSD-3-Clause)
# Generated by Django 2.0.13 on 2020-03-31 17:42
from django.db import migrations, models
import data_browser.models
class Migration(migrations.Migration):
dependencies = [
("data_browser", "0001_initial"),
]
operations = [
migrations.AlterField(
model_name="view",
name="id",
field=models.CharField(
default=data_browser.models.get_id,
max_length=12,
primary_key=True,
serialize=False,
),
),
]
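The migration sets `default=data_browser.models.get_id`, a callable, so Django evaluates it once per new row rather than storing a single shared value. A hypothetical stand-in compatible with `max_length=12` (the real `get_id` lives in `data_browser.models`):

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def get_id():
    # Hypothetical: 12 random characters, fitting CharField(max_length=12).
    return ''.join(secrets.choice(ALPHABET) for _ in range(12))

print(len(get_id()))  # 12
```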

# File: armada/tests/unit/utils/test_lint.py (repo: One-Fine-Day/armada, license: Apache-2.0)
# Copyright 2017 The Armada Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import yaml
from armada.utils import lint
class LintTestCase(unittest.TestCase):
def test_lint_armada_yaml_pass(self):
        config = yaml.safe_load("""
armada:
release_prefix: armada-test
charts:
- chart_group:
- chart:
name: chart
release_name: chart
namespace: chart
""")
resp = lint.valid_manifest(config)
self.assertTrue(resp)
def test_lint_armada_keyword_removed(self):
        config = yaml.safe_load("""
armasda:
release_prefix: armada-test
charts:
- chart_group:
- chart:
name: chart
release_name: chart
namespace: chart
""")
with self.assertRaises(Exception):
lint.valid_manifest(config)
def test_lint_prefix_keyword_removed(self):
        config = yaml.safe_load("""
armada:
release: armada-test
charts:
- chart_group:
- chart:
name: chart
release_name: chart
namespace: chart
""")
with self.assertRaises(Exception):
lint.valid_manifest(config)
def test_lint_armada_removed(self):
        config = yaml.safe_load("""
sarmada:
release_prefix: armada-test
charts:
- chart_group:
- chart:
name: chart
release_name: chart
namespace: chart
""")
with self.assertRaises(Exception):
lint.valid_manifest(config)
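The three failing cases above all hinge on a required key being renamed. A toy version of the kind of validation `lint.valid_manifest` performs (armada's real rules are richer; this sketch only checks the two keys these tests exercise):

```python
def valid_manifest(config):
    if "armada" not in config:
        raise Exception("manifest must contain an 'armada' section")
    if "release_prefix" not in config["armada"]:
        raise Exception("'armada' section must define 'release_prefix'")
    return True

ok = valid_manifest({"armada": {"release_prefix": "armada-test", "charts": []}})
print(ok)  # True
```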

# File: mliv/dgps.py (repo: microsoft/AdversarialGMM, license: MIT)
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import numpy as np
# continuously differentiable
fn_dict_cdiff = {'2dpoly': 1, 'sigmoid': 2,
'sin': 3, 'frequent_sin': 4,
'3dpoly': 7, 'linear': 8}
# continuous but not differentiable
fn_dict_cont = {'abs': 0, 'abs_sqrt': 5, 'rand_pw': 9,
'abspos': 10, 'sqrpos': 11, 'pwlinear': 15}
# discontinuous
fn_dict_disc = {'step': 6, 'band': 12, 'invband': 13,
'steplinear': 14}
# monotone
fn_dict_monotone = {'sigmoid': 2,
'step': 6, 'linear': 8,
'abspos': 10, 'sqrpos': 11, 'pwlinear': 15}
# convex
fn_dict_convex = {'abs': 0, '2dpoly': 1, 'linear': 8,
'abspos': 10, 'sqrpos': 11}
# all functions
fn_dict = {'abs': 0, '2dpoly': 1, 'sigmoid': 2,
'sin': 3, 'frequent_sin': 4, 'abs_sqrt': 5,
'step': 6, '3dpoly': 7, 'linear': 8, 'rand_pw': 9,
'abspos': 10, 'sqrpos': 11, 'band': 12, 'invband': 13,
'steplinear': 14, 'pwlinear': 15}
def generate_random_pw_linear(lb=-2, ub=2, n_pieces=5):
splits = np.random.choice(np.arange(lb, ub, 0.1),
n_pieces - 1, replace=False)
splits.sort()
slopes = np.random.uniform(-4, 4, size=n_pieces)
start = []
start.append(np.random.uniform(-1, 1))
for t in range(n_pieces - 1):
start.append(start[t] + slopes[t] * (splits[t] -
(lb if t == 0 else splits[t - 1])))
return lambda x: [start[ind] + slopes[ind] * (x - (lb if ind == 0 else splits[ind - 1])) for ind in [np.searchsorted(splits, x)]][0]
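`generate_random_pw_linear` chains each piece's intercept off the previous piece's endpoint, which makes the returned function continuous at every split. A deterministic miniature of the same construction, with stdlib `bisect_left` standing in for `np.searchsorted` and fixed slopes instead of random ones:

```python
from bisect import bisect_left

lb = -2
splits = [-1.0, 1.0]
slopes = [1.0, -2.0, 0.5]
start = [0.0]
for t in range(len(slopes) - 1):
    # Each piece starts exactly where the previous one ended.
    start.append(start[t] + slopes[t] * (splits[t] - (lb if t == 0 else splits[t - 1])))

def pw(x):
    ind = bisect_left(splits, x)
    return start[ind] + slopes[ind] * (x - (lb if ind == 0 else splits[ind - 1]))

# The two adjacent pieces agree at the split x = -1.0 (both values are ~1.0):
print(pw(-1.0), pw(-1.0 + 1e-9))
```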
def get_tau_fn(func):
def first(x):
return x[:, [0]] if len(x.shape) == 2 else x
# func describes the relation between response and treatment
if func == fn_dict['abs']:
def tau_fn(x): return np.abs(first(x))
elif func == fn_dict['2dpoly']:
def tau_fn(x): return -1.5 * first(x) + .9 * (first(x)**2)
elif func == fn_dict['sigmoid']:
def tau_fn(x): return 2 / (1 + np.exp(-2 * first(x)))
elif func == fn_dict['sin']:
def tau_fn(x): return np.sin(first(x))
elif func == fn_dict['frequent_sin']:
def tau_fn(x): return np.sin(3 * first(x))
elif func == fn_dict['abs_sqrt']:
def tau_fn(x): return np.sqrt(np.abs(first(x)))
elif func == fn_dict['step']:
def tau_fn(x): return 1. * (first(x) < 0) + 2.5 * (first(x) >= 0)
elif func == fn_dict['3dpoly']:
def tau_fn(x): return -1.5 * first(x) + .9 * \
(first(x)**2) + first(x)**3
elif func == fn_dict['linear']:
def tau_fn(x): return first(x)
elif func == fn_dict['rand_pw']:
pw_linear = generate_random_pw_linear()
def tau_fn(x):
return np.array([pw_linear(x_i) for x_i in first(x).flatten()]).reshape(-1, 1)
elif func == fn_dict['abspos']:
def tau_fn(x): return np.abs(first(x)) * (first(x) >= 0)
elif func == fn_dict['sqrpos']:
def tau_fn(x): return (first(x)**2) * (first(x) >= 0)
elif func == fn_dict['band']:
def tau_fn(x): return 1.0 * (first(x) >= -.75) * (first(x) <= .75)
elif func == fn_dict['invband']:
def tau_fn(x): return 1. - 1. * (first(x) >= -.75) * (first(x) <= .75)
elif func == fn_dict['steplinear']:
def tau_fn(x): return 2. * (first(x) >= 0) - first(x)
elif func == fn_dict['pwlinear']:
def tau_fn(x):
q = first(x)
return (q + 1) * (q <= -1) + (q - 1) * (q >= 1)
else:
raise NotImplementedError()
return tau_fn
def standardize(z, p, y, fn):
ym = y.mean()
ystd = y.std()
y = (y - ym) / ystd
def newfn(x): return (fn(x) - ym) / ystd
return z, p, y, newfn
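`standardize` z-scores `y` and wraps `fn` with the same shift and scale, so the structural function stays comparable to the standardized response. The arithmetic in miniature, with plain lists instead of numpy arrays:

```python
ys = [1.0, 3.0, 5.0]
ym = sum(ys) / len(ys)                                    # mean, as y.mean()
ystd = (sum((v - ym) ** 2 for v in ys) / len(ys)) ** 0.5  # population std, as y.std()

norm = [(v - ym) / ystd for v in ys]

def newfn(fx):
    # Same transform applied to the function's output, mirroring standardize().
    return (fx - ym) / ystd

print(norm)  # centered at 0, unit variance
```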
def get_data(n_samples, n_instruments, iv_strength, tau_fn, dgp_num):
# Construct dataset
# z:- instruments (features included here, can be high-dimensional)
# p :- treatments (features included here as well, can be high-dimensional)
# y :- response (is a scalar always)
confounder = np.random.normal(0, 1, size=(n_samples, 1))
z = np.random.normal(0, 1, size=(n_samples, n_instruments))
fn = tau_fn
if dgp_num == 1:
# DGP 1 in the paper
p = 2 * z[:, [0]] * (z[:, [0]] > 0) * iv_strength \
+ 2 * z[:, [1]] * (z[:, [1]] < 0) * iv_strength \
+ 2 * confounder * (1 - iv_strength) + \
np.random.normal(0, .1, size=(n_samples, 1))
y = fn(p) + 2 * confounder + \
np.random.normal(0, .1, size=(n_samples, 1))
elif dgp_num == 2:
# DGP 2 in the paper
p = 2 * z[:, [0]] * iv_strength \
+ 2 * confounder * (1 - iv_strength) + \
np.random.normal(0, .1, size=(n_samples, 1))
y = fn(p) + 2 * confounder + \
np.random.normal(0, .1, size=(n_samples, 1))
elif dgp_num == 3:
# DeepIV's DGP - has feature variables as well
# z is 3-dimensional: composed of (1) 1D z, (2) t - time unif~(0,10), and (3) s - customer type {1,...,7}
# y is related to p and z in a complex non-linear, non separable manner
# p is related to z again in a non-separable manner, rho is endogeneity parameter
rho = 0.8
psd = 3.7
pmu = 17.779
ysd = 158.
ymu = -292.1
z_1 = np.random.normal(0, 1, size=(n_samples, 1))
v = np.random.normal(0, 1, size=(n_samples, 1))
t = np.random.uniform(0, 10, size=(n_samples, 1))
s = np.random.randint(1, 8, size=(n_samples, 1))
e = rho * v + \
np.random.normal(0, np.sqrt(1 - rho**2), size=(n_samples, 1))
def psi(t): return 2 * (np.power(t - 5, 4) / 600 +
np.exp(-4 * np.power(t - 5, 2)) + t / 10 - 2)
p = 25 + (z_1 + 3) * psi(t) + v
p = (p - pmu) / psd
g = (10 + p) * s * psi(t) - 2 * p + e
y = (g - ymu) / ysd
z = np.hstack((z_1, s, t))
p = np.hstack((p, s, t))
def fn(p): return ((10 + p[:, 0]) * p[:, 1]
* psi(p[:, 2]) - 2 * p[:, 0] - ymu) / ysd
elif dgp_num == 4:
# Many weak Instruments DGP - n_instruments can be very large
z = np.random.normal(0.5, 1, size=(n_samples, n_instruments))
p = np.amin(z, axis=1).reshape(-1, 1) * iv_strength + confounder * \
(1 - iv_strength) + np.random.normal(0, 0.1, size=(n_samples, 1))
y = fn(p) + 2 * confounder + \
np.random.normal(0, 0.1, size=(n_samples, 1))
else:
# Here we have equal number of treatments and instruments and each
# instrument affects a separate treatment. Only the first treatment
# matters for the outcome.
z = np.random.normal(0, 2, size=(n_samples, n_instruments))
U = np.random.normal(0, 2, size=(n_samples, 1))
delta = np.random.normal(0, .1, size=(n_samples, 1))
zeta = np.random.normal(0, .1, size=(n_samples, 1))
p = iv_strength * z + (1 - iv_strength) * U + delta
y = fn(p) + U + zeta
return standardize(z, p, y, fn)
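For orientation, DGP 2's structural equations in scalar form with the stdlib RNG (the module itself is vectorized with numpy, and `tau_fn` is taken as the identity here): the confounder enters both `p` and `y`, which is what biases a direct regression of `y` on `p` and makes the instrument `z` necessary.

```python
import random

random.seed(0)
iv_strength = 0.5

def draw_dgp2():
    u = random.gauss(0, 1)   # unobserved confounder
    z = random.gauss(0, 1)   # instrument
    p = 2 * z * iv_strength + 2 * u * (1 - iv_strength) + random.gauss(0, 0.1)
    y = p + 2 * u + random.gauss(0, 0.1)   # fn(p) = p for this sketch
    return z, p, y

samples = [draw_dgp2() for _ in range(100)]
```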

# File: tests/test_definitions/test_expectations_cfe.py (repo: OmriBromberg/great_expectations, license: Apache-2.0)
import glob
import json
import os
import random
import string
import pandas as pd
import pytest
from great_expectations.execution_engine.pandas_batch_data import PandasBatchData
from great_expectations.execution_engine.sparkdf_batch_data import SparkDFBatchData
from great_expectations.execution_engine.sqlalchemy_batch_data import (
SqlAlchemyBatchData,
)
from great_expectations.self_check.util import (
BigQueryDialect,
candidate_test_is_on_temporary_notimplemented_list_cfe,
evaluate_json_test_cfe,
get_test_validator_with_data,
mssqlDialect,
mysqlDialect,
postgresqlDialect,
sqliteDialect,
)
from tests.conftest import build_test_backends_list_cfe
from tests.test_definitions.test_expectations import tmp_dir
def pytest_generate_tests(metafunc):
# Load all the JSON files in the directory
dir_path = os.path.dirname(os.path.realpath(__file__))
expectation_dirs = [
dir_
for dir_ in os.listdir(dir_path)
if os.path.isdir(os.path.join(dir_path, dir_))
]
parametrized_tests = []
ids = []
backends = build_test_backends_list_cfe(metafunc)
for expectation_category in expectation_dirs:
test_configuration_files = glob.glob(
dir_path + "/" + expectation_category + "/*.json"
)
for c in backends:
for filename in test_configuration_files:
                with open(filename) as file:
                    test_configuration = json.load(file)
for d in test_configuration["datasets"]:
datasets = []
if candidate_test_is_on_temporary_notimplemented_list_cfe(
c, test_configuration["expectation_type"]
):
skip_expectation = True
schemas = validator_with_data = None
else:
skip_expectation = False
if isinstance(d["data"], list):
sqlite_db_path = os.path.abspath(
os.path.join(
tmp_dir,
"sqlite_db"
+ "".join(
[
random.choice(
string.ascii_letters + string.digits
)
for _ in range(8)
]
)
+ ".db",
)
)
for dataset in d["data"]:
datasets.append(
get_test_validator_with_data(
c,
dataset["data"],
dataset.get("schemas"),
table_name=dataset.get("dataset_name"),
sqlite_db_path=sqlite_db_path,
)
)
validator_with_data = datasets[0]
else:
schemas = d["schemas"] if "schemas" in d else None
validator_with_data = get_test_validator_with_data(
c, d["data"], schemas=schemas
)
for test in d["tests"]:
generate_test = True
skip_test = False
if "only_for" in test:
# if we're not on the "only_for" list, then never even generate the test
generate_test = False
if not isinstance(test["only_for"], list):
raise ValueError("Invalid test specification.")
if validator_with_data and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
):
# Call out supported dialects
if "sqlalchemy" in test["only_for"]:
generate_test = True
elif (
"sqlite" in test["only_for"]
and sqliteDialect is not None
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
sqliteDialect,
)
):
generate_test = True
elif (
"postgresql" in test["only_for"]
and postgresqlDialect is not None
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
postgresqlDialect,
)
):
generate_test = True
elif (
"mysql" in test["only_for"]
and mysqlDialect is not None
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
mysqlDialect,
)
):
generate_test = True
elif (
"mssql" in test["only_for"]
and mssqlDialect is not None
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
mssqlDialect,
)
):
generate_test = True
elif (
"bigquery" in test["only_for"]
and BigQueryDialect is not None
and hasattr(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
"name",
)
and validator_with_data.execution_engine.active_batch_data.sql_engine_dialect.name
== "bigquery"
):
generate_test = True
elif validator_with_data and isinstance(
validator_with_data.execution_engine.active_batch_data,
PandasBatchData,
):
if "pandas" in test["only_for"]:
generate_test = True
if (
"pandas_022" in test["only_for"]
or "pandas_023" in test["only_for"]
) and int(pd.__version__.split(".")[1]) in [22, 23]:
generate_test = True
if ("pandas>=24" in test["only_for"]) and int(
pd.__version__.split(".")[1]
) > 24:
generate_test = True
elif validator_with_data and isinstance(
validator_with_data.execution_engine.active_batch_data,
SparkDFBatchData,
):
if "spark" in test["only_for"]:
generate_test = True
if not generate_test:
continue
if "suppress_test_for" in test and (
(
"sqlalchemy" in test["suppress_test_for"]
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
)
or (
"sqlite" in test["suppress_test_for"]
and sqliteDialect is not None
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
sqliteDialect,
)
)
or (
"postgresql" in test["suppress_test_for"]
and postgresqlDialect is not None
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
postgresqlDialect,
)
)
or (
"mysql" in test["suppress_test_for"]
and mysqlDialect is not None
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
mysqlDialect,
)
)
or (
"mssql" in test["suppress_test_for"]
and mssqlDialect is not None
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
and isinstance(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
mssqlDialect,
)
)
or (
"bigquery" in test["suppress_test_for"]
and BigQueryDialect is not None
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
and hasattr(
validator_with_data.execution_engine.active_batch_data.sql_engine_dialect,
"name",
)
and validator_with_data.execution_engine.active_batch_data.sql_engine_dialect.name
== "bigquery"
)
or (
"pandas" in test["suppress_test_for"]
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
PandasBatchData,
)
)
or (
"spark" in test["suppress_test_for"]
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SparkDFBatchData,
)
)
):
skip_test = True
# Known condition: SqlAlchemy does not support allow_cross_type_comparisons
if (
"allow_cross_type_comparisons" in test["in"]
and validator_with_data
and isinstance(
validator_with_data.execution_engine.active_batch_data,
SqlAlchemyBatchData,
)
):
skip_test = True
parametrized_tests.append(
{
"expectation_type": test_configuration[
"expectation_type"
],
"validator_with_data": validator_with_data,
"test": test,
"skip": skip_expectation or skip_test,
}
)
ids.append(
c
+ "/"
+ expectation_category
+ "/"
+ test_configuration["expectation_type"]
+ ":"
+ test["title"]
)
metafunc.parametrize("test_case", parametrized_tests, ids=ids)
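Stripped of the backend-specific filtering, the hook above follows pytest's standard `pytest_generate_tests` pattern: build a list of parameter dicts plus a parallel list of human-readable ids, then hand both to `metafunc.parametrize`. A toy version of that id/params assembly (standalone, not Great Expectations' real collection logic):

```python
def build_params(backends, cases):
    parametrized_tests, ids = [], []
    for backend in backends:
        for case in cases:
            parametrized_tests.append({"backend": backend, "test": case})
            ids.append(backend + "/" + case)
    return parametrized_tests, ids

params, ids = build_params(["pandas", "sqlite"], ["expect_column_to_exist"])
print(ids)  # ['pandas/expect_column_to_exist', 'sqlite/expect_column_to_exist']
```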
@pytest.mark.order(index=0)
def test_case_runner_cfe(test_case):
if test_case["skip"]:
pytest.skip()
# Note: this should never be done in practice, but we are wiping expectations to reuse batches during testing.
# test_case["batch"]._initialize_expectations()
if "parse_strings_as_datetimes" in test_case["test"]["in"]:
with pytest.deprecated_call():
evaluate_json_test_cfe(
validator=test_case["validator_with_data"],
expectation_type=test_case["expectation_type"],
test=test_case["test"],
)
else:
evaluate_json_test_cfe(
validator=test_case["validator_with_data"],
expectation_type=test_case["expectation_type"],
test=test_case["test"],
)

# File: pyqt_sql_demo/syntax_highlighter/sql.py (repo: nshiell/pyqt-sql-demo, license: Unlicense)
from pygments import highlight as _highlight
from pygments.lexers import SqlLexer
from pygments.formatters import HtmlFormatter
def style():
style = HtmlFormatter().get_style_defs()
return style
def highlight(text):
# Generated HTML contains unnecessary newline at the end
# before </pre> closing tag.
# We need to remove that newline because it's screwing up
# QTextEdit formatting and is being displayed
# as a non-editable whitespace.
highlighted_text = _highlight(text, SqlLexer(), HtmlFormatter()).strip()
# Split generated HTML by last newline in it
# argument 1 indicates that we only want to split the string
# by one specified delimiter from the right.
parts = highlighted_text.rsplit("\n", 1)
# Glue back 2 split parts to get the HTML without last
# unnecessary newline
highlighted_text_no_last_newline = "".join(parts)
return highlighted_text_no_last_newline
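The `rsplit("\n", 1)` trick above, in isolation: splitting on the last newline only and re-joining removes exactly one trailing newline from inside the markup while preserving every earlier line break.

```python
html = "<pre>SELECT 1;\nSELECT 2;\n</pre>"

# Split once, from the right, at the final newline, then glue back together.
parts = html.rsplit("\n", 1)
cleaned = "".join(parts)
print(cleaned)  # only the newline just before </pre> is gone
```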

# File: mc-core/mc/data_gen/gnb_status_indication_pb2.py (repo: copslock/o-ran_ric-app_mc, licenses: Apache-2.0, CC-BY-4.0)
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: gnb_status_indication.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
import x2ap_common_types_pb2 as x2ap__common__types__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='gnb_status_indication.proto',
package='streaming_protobufs',
syntax='proto3',
serialized_options=_b('Z1gerrit.o-ran-sc.org/r/ric-plt/streaming-protobufs'),
serialized_pb=_b('\n\x1bgnb_status_indication.proto\x12\x13streaming_protobufs\x1a\x17x2ap_common_types.proto\"W\n\x13GNBStatusIndication\x12@\n\x0bprotocolIEs\x18\x01 \x01(\x0b\x32+.streaming_protobufs.GNBStatusIndicationIEs\"h\n\x16GNBStatusIndicationIEs\x12N\n\x19id_GNBOverloadInformation\x18\x01 \x01(\x0b\x32+.streaming_protobufs.GNBOverloadInformationB3Z1gerrit.o-ran-sc.org/r/ric-plt/streaming-protobufsb\x06proto3')
,
dependencies=[x2ap__common__types__pb2.DESCRIPTOR,])
_GNBSTATUSINDICATION = _descriptor.Descriptor(
name='GNBStatusIndication',
full_name='streaming_protobufs.GNBStatusIndication',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='protocolIEs', full_name='streaming_protobufs.GNBStatusIndication.protocolIEs', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=77,
serialized_end=164,
)
_GNBSTATUSINDICATIONIES = _descriptor.Descriptor(
name='GNBStatusIndicationIEs',
full_name='streaming_protobufs.GNBStatusIndicationIEs',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id_GNBOverloadInformation', full_name='streaming_protobufs.GNBStatusIndicationIEs.id_GNBOverloadInformation', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=166,
serialized_end=270,
)
_GNBSTATUSINDICATION.fields_by_name['protocolIEs'].message_type = _GNBSTATUSINDICATIONIES
_GNBSTATUSINDICATIONIES.fields_by_name['id_GNBOverloadInformation'].message_type = x2ap__common__types__pb2._GNBOVERLOADINFORMATION
DESCRIPTOR.message_types_by_name['GNBStatusIndication'] = _GNBSTATUSINDICATION
DESCRIPTOR.message_types_by_name['GNBStatusIndicationIEs'] = _GNBSTATUSINDICATIONIES
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
GNBStatusIndication = _reflection.GeneratedProtocolMessageType('GNBStatusIndication', (_message.Message,), {
'DESCRIPTOR' : _GNBSTATUSINDICATION,
'__module__' : 'gnb_status_indication_pb2'
# @@protoc_insertion_point(class_scope:streaming_protobufs.GNBStatusIndication)
})
_sym_db.RegisterMessage(GNBStatusIndication)
GNBStatusIndicationIEs = _reflection.GeneratedProtocolMessageType('GNBStatusIndicationIEs', (_message.Message,), {
'DESCRIPTOR' : _GNBSTATUSINDICATIONIES,
'__module__' : 'gnb_status_indication_pb2'
# @@protoc_insertion_point(class_scope:streaming_protobufs.GNBStatusIndicationIEs)
})
_sym_db.RegisterMessage(GNBStatusIndicationIEs)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)

# File: src/pymor/vectorarrays/constructions.py
# Repo: JuliaBru/pymor | Licenses: Unlicense

# This file is part of the pyMOR project (http://www.pymor.org).
# Copyright 2013-2016 pyMOR developers and contributors. All rights reserved.
# License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)
def cat_arrays(vector_arrays):
"""Return a new |VectorArray| which a concatenation of the arrays in `vector_arrays`."""
vector_arrays = list(vector_arrays)
total_length = sum(map(len, vector_arrays))
cated_arrays = vector_arrays[0].empty(reserve=total_length)
for a in vector_arrays:
cated_arrays.append(a)
return cated_arrays
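`cat_arrays` relies only on the first array exposing `len()`, `empty(reserve=...)` and `append(...)`. A minimal sketch of its behaviour with a hypothetical stand-in class (`ToyArray` is illustration only, not pyMOR's `VectorArray`):

```python
class ToyArray:
    """Stand-in for a |VectorArray|: supports len(), empty(reserve=...) and append()."""
    def __init__(self, items=None):
        self.items = list(items or [])

    def __len__(self):
        return len(self.items)

    def empty(self, reserve=0):
        # reserve is only a capacity hint; a plain list ignores it
        return ToyArray()

    def append(self, other):
        self.items.extend(other.items)


def cat_arrays(vector_arrays):
    """Same logic as pymor.vectorarrays.constructions.cat_arrays."""
    vector_arrays = list(vector_arrays)
    total_length = sum(map(len, vector_arrays))
    cated_arrays = vector_arrays[0].empty(reserve=total_length)
    for a in vector_arrays:
        cated_arrays.append(a)
    return cated_arrays


result = cat_arrays([ToyArray([1, 2]), ToyArray([3]), ToyArray([4, 5])])
print(result.items)  # [1, 2, 3, 4, 5]
```

Pre-reserving the total length in `empty(reserve=...)` lets a real array implementation allocate once instead of growing on every `append`.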

# File: swampytodo/urls.py
# Repo: mrbaboon/swampytodo | Licenses: MIT

from django.conf.urls import patterns, include, url
from django.contrib import admin
urlpatterns = patterns('',
# Examples:
# url(r'^$', 'swampytodo.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^monitor', 'monitor.views.monitor_view', name='monitor'),
url(r'^todo', include('todo.urls', namespace='todo')),
url(r'^admin/', include(admin.site.urls)),
)

# File: pictures/tests.py
# Repo: FredAtei/Photo-app | Licenses: MIT

from django.test import TestCase
from .models import Image,Location,Category
# Create your tests here.
class CategoryTestClass(TestCase):
def setUp(self):
self.travel = Category(name='travel')
def test_instance(self):
self.assertTrue(isinstance(self.travel,Category))
def test_save_method(self):
self.travel.save_category()
categories = Category.objects.all()
self.assertTrue(len(categories)>0)
class LocationTestClass(TestCase):
def setUp(self):
self.Paris = Location(name='Paris')
def test_instance(self):
self.assertTrue(isinstance(self.Paris,Location))
def test_save_method(self):
self.Paris.save_location()
locations = Location.objects.all()
self.assertTrue(len(locations)>0)
class ImageTestClass(TestCase):
    def setUp(self):
        # the original referenced undefined self.travel and self.locations;
        # create the related category and location the image needs
        self.travel = Category(name='travel')
        self.paris = Location(name='Paris')
        self.new_image = Image(image_name='Eot', image_description='Great things', image_category=self.travel, image_location=self.paris)
        self.new_image.save_image()
def tearDown(self):
Category.objects.all().delete()
Location.objects.all().delete()
Image.objects.all().delete()
def test_get_images(self):
all_images = Image.get_images()
        self.assertTrue(len(all_images) > 0)

# File: tests/test_validators.py
# Repo: fakeezz/edipy | Licenses: MIT

# coding: utf-8
import pytest
from edipy import fields, validators, exceptions
@pytest.mark.parametrize('fixed_type, data', [
(fields.Integer(1, validators=[validators.Range(1, 5)]), '1'),
(fields.Integer(1, validators=[validators.MaxValue(3)]), '2'),
(fields.Integer(1, validators=[validators.MinValue(1)]), '5'),
(fields.String(5, validators=[validators.Regex(r"[0-9]+")]), '12345'),
(fields.String(12, validators=[validators.Email()]), 'abc@mail.com'),
])
def test_using_validators(fixed_type, data):
try:
fixed_type.encode(data)
except exceptions.ValidationError:
pytest.fail(u"ValidationError should not be thrown")
@pytest.mark.parametrize('fixed_type, data', [
(fields.Integer(1, validators=[validators.Range(1, 5)]), '0'),
(fields.Integer(1, validators=[validators.Range(1, 5)]), '6'),
])
def test_validate_range(fixed_type, data):
with pytest.raises(exceptions.ValidationError):
fixed_type.encode(data)
@pytest.mark.parametrize('fixed_type, data', [
(fields.Integer(1, validators=[validators.MaxValue(1)]), '2'),
(fields.Integer(1, validators=[validators.MaxValue(5)]), '6'),
])
def test_validate_max_value(fixed_type, data):
with pytest.raises(exceptions.ValidationError):
fixed_type.encode(data)
@pytest.mark.parametrize('fixed_type, data', [
(fields.Integer(1, validators=[validators.MinValue(1)]), '0'),
(fields.Integer(1, validators=[validators.MinValue(5)]), '4'),
])
def test_validate_min_value(fixed_type, data):
with pytest.raises(exceptions.ValidationError):
fixed_type.encode(data)
@pytest.mark.parametrize('fixed_type, data', [
(fields.String(5, validators=[validators.Regex(r"[0-9]+")]), 'a123f'),
(fields.String(5, validators=[validators.Regex(r"\d")]), 'abcde'),
(fields.String(5, validators=[validators.Regex(r"[A-Z]{6}")]), 'ABCDE'),
])
def test_validate_regex(fixed_type, data):
with pytest.raises(exceptions.ValidationError):
fixed_type.encode(data)
def test_throws_exception_when_regex_is_invalid():
with pytest.raises(ValueError):
field = fields.String(5, validators=[validators.Regex(")")])
@pytest.mark.parametrize('fixed_type, data', [
(fields.String(11, validators=[validators.Email()]), 'edimail.com'),
(fields.String(11, validators=[validators.Email()]), 'edi@mailcom'),
])
def test_validate_email(fixed_type, data):
with pytest.raises(exceptions.ValidationError):
fixed_type.encode(data)
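The tests above exercise the validators only through `fields.*`; the underlying pattern — a validator object that raises `ValidationError` when a value is out of bounds — can be sketched on its own. The class below is a hypothetical reimplementation for illustration (including the assumed `validate()` entry point), not edipy's actual code:

```python
class ValidationError(Exception):
    """Raised when a value fails validation (mirrors edipy.exceptions.ValidationError)."""


class Range:
    """Hypothetical sketch of a Range validator: value must lie in [minimum, maximum]."""
    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum

    def validate(self, value):
        if not (self.minimum <= value <= self.maximum):
            raise ValidationError(f"{value} not in range [{self.minimum}, {self.maximum}]")


validator = Range(1, 5)
validator.validate(3)  # in range: passes silently
try:
    validator.validate(6)
except ValidationError as exc:
    error_message = str(exc)

print(error_message)  # 6 not in range [1, 5]
```

A field then only needs to loop over its `validators` list and call each one on the value it is encoding.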

# File: data/scripts/reverse.py
# Repo: levindu/OpenCC | Licenses: Apache-2.0

#!/usr/bin/env python
#coding: utf-8
import sys
from common import reverse_items
if len(sys.argv) != 3:
print("Reverse key and value of all pairs")
print(("Usage: ", sys.argv[0], "[input] [output]"))
exit(1)
reverse_items(sys.argv[1], sys.argv[2])
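`reverse_items` comes from the sibling `common` module, which is not shown in this chunk; judging by the usage message, it swaps the key and the value of every pair in the input file. A minimal sketch of that assumed behaviour on in-memory, tab-separated lines (`reverse_pairs` is a hypothetical helper, not the real `common.reverse_items`):

```python
def reverse_pairs(lines):
    """Assumed behaviour of common.reverse_items, applied to in-memory lines:
    swap the key and the value of each tab-separated pair."""
    out = []
    for line in lines:
        key, value = line.rstrip("\n").split("\t", 1)
        out.append(f"{value}\t{key}")
    return out


result = reverse_pairs(["臺\t台", "蘋果\t苹果"])
print(result)  # ['台\t臺', '苹果\t蘋果']
```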

# File: stock_predictions/web/template.py
# Repo: abakhru/stock_prediction | Licenses: MIT

template = """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Title of the document</title>
<script type="text/javascript" src="https://s3.tradingview.com/tv.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.3.0/milligram.min.css">
<style>
.tradingview-widget-container {{
position: sticky;
top: 20px;
}}
.stocks-view {{
display: flex;
flex-wrap: nowrap;
}}
.stocks-listing {{
width: 780px;
flex-wrap: nowrap;
padding: 20px;
}}
.stocks-graph {{
flex-wrap: nowrap;
padding: 20px;
}}
th.sticky-header {{
position: sticky;
top: 0;
z-index: 10;
background-color: white;
}}
.positive-movement {{
color: green;
font-weight: bold;
}}
.negative-movement {{
color: red;
font-weight: bold;
}}
.blue-category {{
background-color: lightsteelblue;
}}
</style>
</head>
<body>
{}
<div class="stocks-view">
<div class="stocks-listing">
<table>
<thead>
<tr>
<th class="sticky-header">Symbol</th>
<th class="sticky-header">April 1 2019</th>
<th class="sticky-header">Dec 2 2019</th>
<th class="sticky-header">Today</th>
<th class="sticky-header">Movement since April 1 2019</th>
<th class="sticky-header">Movement since Dec 2 2019</th>
<th class="sticky-header">Bankruptcy probability</th>
</tr>
</thead>
<tbody>
{}
</tbody>
</table>
</div>
<div class="stocks-graph"
<!-- TradingView Widget BEGIN -->
<div class="tradingview-widget-container">
<div id="tradingview_63a66"></div>
<div class="tradingview-widget-copyright"><a href="https://www.tradingview.com/symbols/AAPL/" rel="noopener" target="_blank"><span class="blue-text">AAPL Chart</span></a> by TradingView</div>
</div>
<!-- TradingView Widget END -->
</div>
</div>
<script type="text/javascript">
function renderChart(symbol) {{
new TradingView.widget(
{{
"width": 750,
"height": 500,
"symbol": symbol,
"interval": "180",
"timezone": "Etc/UTC",
"theme": "light",
"style": "1",
"locale": "en",
"toolbar_bg": "#f1f3f6",
"enable_publishing": false,
"allow_symbol_change": true,
"container_id": "tradingview_63a66"
}}
);
}}
document.addEventListener('DOMContentLoaded', function(){{
renderChart('BA');
}}, false);
</script>
</body>
</html>"""

# File: venv/lib/python3.6/site-packages/gensim/__init__.py
# Repo: bopopescu/wired_cli | Licenses: MIT

"""This package contains interfaces and functionality to compute pair-wise document similarities within a corpus
of documents.
"""
from gensim import parsing, corpora, matutils, interfaces, models, similarities, summarization, utils # noqa:F401
import logging
__version__ = '3.5.0'
class NullHandler(logging.Handler):
"""For python versions <= 2.6; same as `logging.NullHandler` in 2.7."""
def emit(self, record):
pass
logger = logging.getLogger('gensim')
if len(logger.handlers) == 0: # To ensure reload() doesn't add another one
logger.addHandler(NullHandler())
| 28.238095 | 114 | 0.726813 | 77 | 593 | 5.545455 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022133 | 0.161889 | 593 | 20 | 115 | 29.65 | 0.837022 | 0.409781 | 0 | 0 | 0 | 0 | 0.032641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.111111 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f711bbbc339573d1744df69fd2b79a94a7b3f1b9 | 2,615 | py | Python | gateway/builders/authorization_builder.py | TarlanPayments/gw-python-client | a0dd5292c877ab06bf549693a1bfc9fb06ef9d19 | [
"MIT"
] | null | null | null | gateway/builders/authorization_builder.py | TarlanPayments/gw-python-client | a0dd5292c877ab06bf549693a1bfc9fb06ef9d19 | [
"MIT"
] | null | null | null | gateway/builders/authorization_builder.py | TarlanPayments/gw-python-client | a0dd5292c877ab06bf549693a1bfc9fb06ef9d19 | [
"MIT"
] | null | null | null | # The MIT License
#
# Copyright (c) 2017 Tarlan Payments.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
class AuthorizationBuilder(object):
def __init__(self, __client_auth_data_set, __client_mandatory_fields):
from gateway.data_sets.request_parameters import (
RequestParameters,
RequestParametersTypes
)
self.__data_sets = RequestParameters
self.__data_types = RequestParametersTypes
self.__auth_mandatory_fields = __client_mandatory_fields
self.__auth_data_set = __client_auth_data_set
def add_account_guid(self, guid=None):
"""
Tarlan Payments Merchant Account GUID.
Args:
guid (str): Tarlan Payments Merchant Account GUID.
"""
self.__auth_mandatory_fields[self.__data_sets.AUTH_DATA_ACCOUNT_GUID] = self.__data_types.AUTH_DATA_ACCOUNT_GUID
self.__auth_data_set[self.__data_sets.AUTH_DATA_ACCOUNT_GUID] = guid
def add_secret_key(self, value=None):
"""
Tarlan Payments Merchant Password
Args:
value (str): Tarlan Payments Merchant Password
"""
self.__auth_mandatory_fields[self.__data_sets.AUTH_DATA_SECRET_KEY] = self.__data_types.AUTH_DATA_SECRET_KEY
self.__auth_data_set[self.__data_sets.AUTH_DATA_SECRET_KEY] = value
def add_session_id(self, id_value=None):
"""
Tarlan Payments Gateway Session ID
Args:
id_value (str): Tarlan Payments Gateway Session ID
"""
self.__auth_data_set[self.__data_sets.AUTH_DATA_SECRET_KEY] = id_value
| 41.507937 | 120 | 0.728489 | 349 | 2,615 | 5.146132 | 0.372493 | 0.057906 | 0.036748 | 0.044543 | 0.238307 | 0.13363 | 0.13363 | 0.11637 | 0.11637 | 0.048998 | 0 | 0.001938 | 0.210707 | 2,615 | 62 | 121 | 42.177419 | 0.868217 | 0.521224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f718b4fadc70811185014ceea7a2ac977f84aa08 | 1,472 | py | Python | src/server/core/tests/test_config.py | Freshia/masakhane-web | acf5eaef7ab8109d6f10f212765572a1dc893cd5 | [
"MIT"
] | 20 | 2021-04-09T09:08:53.000Z | 2022-03-16T09:45:36.000Z | src/server/core/tests/test_config.py | Freshia/masakhane-web | acf5eaef7ab8109d6f10f212765572a1dc893cd5 | [
"MIT"
] | 15 | 2021-04-19T07:04:56.000Z | 2022-03-12T00:57:44.000Z | src/server/core/tests/test_config.py | Freshia/masakhane-web | acf5eaef7ab8109d6f10f212765572a1dc893cd5 | [
"MIT"
] | 14 | 2021-04-19T04:39:04.000Z | 2021-10-08T22:19:58.000Z | import os
import unittest
from flask import current_app
from flask_testing import TestCase
from core import masakhane
class TestDevelopmentConfig(TestCase):
def create_app(self):
masakhane.config.from_object('core.config.DevelopmentConfig')
return masakhane
def test_app_is_development(self):
self.assertTrue(masakhane.config['SECRET_KEY'] == "super-secret-key")
self.assertFalse(current_app is None)
self.assertTrue(
masakhane.config['SQLALCHEMY_DATABASE_URI'] ==
os.getenv('DATABASE_TEST_URL', "sqlite:///masakhane.db")
)
class TestTestingConfig(TestCase):
def create_app(self):
masakhane.config.from_object('core.config.StagingConfig')
return masakhane
def test_app_is_testing(self):
self.assertTrue(masakhane.config['SECRET_KEY'] == "key_testing")
self.assertTrue(masakhane.config['TESTING'])
self.assertTrue(
masakhane.config['SQLALCHEMY_DATABASE_URI'] ==
os.getenv('DATABASE_TEST_URL', "sqlite:///masakhane.db")
)
class TestProductionConfig(TestCase):
def create_app(self):
masakhane.config.from_object('core.config.ProductionConfig')
return masakhane
def test_app_is_production(self):
self.assertTrue(masakhane.config['SECRET_KEY'] == "key_production")
self.assertFalse(masakhane.config['TESTING'])
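The config classes these tests load (`core.config.DevelopmentConfig` and friends) are applied through Flask's `config.from_object`, which copies only the UPPERCASE attributes of the given object into the config mapping. A minimal sketch of that mechanism, with a hypothetical stand-in config class:

```python
class DevelopmentConfig:
    """Hypothetical stand-in for core.config.DevelopmentConfig."""
    SECRET_KEY = "super-secret-key"
    TESTING = False
    _internal = "ignored"  # lowercase / private attributes are not copied


def from_object(config, obj):
    """Sketch of Flask's config.from_object: copy UPPERCASE attributes only."""
    for key in dir(obj):
        if key.isupper():
            config[key] = getattr(obj, key)


config = {}
from_object(config, DevelopmentConfig)
print(config)  # {'SECRET_KEY': 'super-secret-key', 'TESTING': False}
```

This is why the assertions above can compare `masakhane.config['SECRET_KEY']` directly against the class attribute's value.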
if __name__ == '__main__':
    unittest.main()

# File: Examples/AcceptAllRevisions.py
# Repo: aspose-words-cloud/aspose-words-cloud-python | Licenses: MIT

import os
import asposewordscloud
import asposewordscloud.models.requests
from asposewordscloud.rest import ApiException
from shutil import copyfile
words_api = asposewordscloud.WordsApi(client_id = '####-####-####-####-####', client_secret = '##################')
file_name = 'test_doc.docx'
# Upload original document to cloud storage.
my_var1 = open(file_name, 'rb')
my_var2 = file_name
upload_file_request = asposewordscloud.models.requests.UploadFileRequest(file_content=my_var1, path=my_var2)
words_api.upload_file(upload_file_request)
# Calls AcceptAllRevisions method for document in cloud.
my_var3 = file_name
request = asposewordscloud.models.requests.AcceptAllRevisionsRequest(name=my_var3)
words_api.accept_all_revisions(request)

# File: bookitoBackend/User/urls.py
# Repo: mazdakdev/Bookito | Licenses: MIT

from django.urls import path
from .api import *
from knox import views as knox_views
urlpatterns = [
#domain.dn/api/v1/register/ | POST
path('register/' , SignUpAPI.as_view() , name='register'),
    #domain.dn/api/v1/login/ | POST
path('login/' , SignInAPI.as_view() , name='login'),
#domain.dn/api/v1/user | GET
path('user/', MainUser.as_view() , name='user'),
]

# File: groupbunk.py
# Repo: shine-jayakumar/groupbunk-fb | Licenses: MIT

"""
GroupBunk v.1.2
Leave your Facebook groups quietly
Author: Shine Jayakumar
Github: https://github.com/shine-jayakumar
LICENSE: MIT
"""
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import StaleElementReferenceException
from webdriver_manager.chrome import ChromeDriverManager
import argparse
import logging
import sys
from datetime import datetime
import time
from groupfuncs import *
import os
# suppress webdriver manager logs
os.environ['WDM_LOG_LEVEL'] = '0'
IGNORE_DIV = ['your feed', 'discover', 'your notifications']
FB_GROUP_URL = 'https://www.facebook.com/groups/feed/'
def display_intro():
'''
Displays intro of the script
'''
intro = """
GroupBunk v.1.2
Leave your Facebook groups quietly
Author: Shine Jayakumar
Github: https://github.com/shine-jayakumar
"""
print(intro)
def time_taken(start_time, logger):
'''
Calculates the time difference from now and start time
'''
end_time = time.time()
logger.info(f"Total time taken: {round(end_time - start_time, 4)} seconds")
def cleanup_and_quit(driver):
'''
Quits driver and exits the script
'''
if driver:
driver.quit()
sys.exit()
start_time = time.time()
# ====================================================
# Argument parsing
# ====================================================
description = "Leave your Facebook groups quietly"
usage = "groupbunk.py username password [-h] [-eg FILE] [-et TIMEOUT] [-sw WAIT] [-gr RETRYCOUNT] [-dg FILE]"
examples="""
Examples:
groupbunk.py bob101@email.com bobspassword101
groupbunk.py bob101@email.com bobspassword101 -eg keepgroups.txt
groupbunk.py bob101@email.com bobspassword101 -et 60 --scrollwait 10 -gr 7
groupbunk.py bob101@email.com bobspassword101 --dumpgroups mygroup.txt --groupretry 5
"""
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=description,
usage=usage,
epilog=examples,
prog='groupbunk')
# required arguments
parser.add_argument('username', type=str, help='Facebook username')
parser.add_argument('password', type=str, help='Facebook password')
# optional arguments
parser.add_argument('-eg', '--exgroups', type=str, metavar='', help='file with group names to exclude (one group per line)')
parser.add_argument('-et', '--eltimeout', type=int, metavar='', help='max timeout for elements to be loaded', default=30)
parser.add_argument('-sw', '--scrollwait', type=int, metavar='', help='time to wait after each scroll', default=4)
parser.add_argument('-gr', '--groupretry', type=int, metavar='', help='retry count while recapturing group names', default=5)
parser.add_argument('-dg', '--dumpgroups', type=str, metavar='', help='do not leave groups; only dump group names to a file')
parser.add_argument('-v', '--version', action='version', version='%(prog)s v.1.2')
args = parser.parse_args()
# ====================================================
# Setting up logger
# =====================================================
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s:%(name)s:%(lineno)d:%(levelname)s:%(message)s")
file_handler = logging.FileHandler(f'groupbunk_{datetime.now().strftime("%d_%m_%Y__%H_%M_%S")}.log', 'w', 'utf-8')
file_handler.setFormatter(formatter)
stdout_formatter = logging.Formatter("[*] => %(message)s")
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(stdout_formatter)
logger.addHandler(file_handler)
logger.addHandler(stdout_handler)
#=======================================================

try:
    display_intro()
    logger.info("script started")

    # loading group names to be excluded
    if args.exgroups:
        logger.info("Loading group names to be excluded")
        excluded_group_names = get_excluded_group_names(args.exgroups)
        IGNORE_DIV.extend(excluded_group_names)

    options = Options()
    # suppresses notifications
    options.add_argument("--disable-notifications")
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    options.add_argument("--log-level=3")

    logger.info("Downloading latest chrome webdriver")
    # UNCOMMENT TO SPECIFY DRIVER LOCATION
    # driver = webdriver.Chrome("D:/chromedriver/98/chromedriver.exe", options=options)
    driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    if not driver:
        raise Exception('Unable to download chrome webdriver for your version of Chrome browser')
    logger.info("Successfully downloaded chrome webdriver")

    wait = WebDriverWait(driver, args.eltimeout)

    logger.info(f"Opening FB GROUPS URL: {FB_GROUP_URL}")
    driver.get(FB_GROUP_URL)

    logger.info("Sending username")
    wait.until(EC.visibility_of_element_located((By.ID, 'email'))).send_keys(args.username)
    logger.info("Sending password")
    driver.find_element(By.ID, 'pass').send_keys(args.password)
    logger.info("Clicking on Log In")
    wait.until(EC.presence_of_element_located((By.ID, 'loginbutton'))).click()

    # get all the links inside divs representing group names
    group_links = get_group_link_elements(driver, wait)
    if not group_links:
        raise Exception("Unable to find links")

    no_of_currently_loaded_links = 0
    logger.info(f"Initial link count: {len(group_links)-3}")
    logger.info("Scrolling down to capture all the links")
    # scroll until no new group links are loaded
    while len(group_links) > no_of_currently_loaded_links:
        no_of_currently_loaded_links = len(group_links)
        logger.info(f"Updated link count: {no_of_currently_loaded_links-3}")
        scroll_into_view(driver, group_links[no_of_currently_loaded_links-1])
        time.sleep(args.scrollwait)
        # re-capturing
        group_links = get_group_link_elements(driver, wait)
    logger.info(f"Total number of links found: {len(group_links)-3}")

    # only show the group names and exit
    if args.dumpgroups:
        logger.info('Only dumping group names to file. Not leaving groups')
        logger.info(f"Dumping group names to: {args.dumpgroups}")
        dump_groups(group_links, args.dumpgroups)
        time_taken(start_time, logger)
        cleanup_and_quit(driver)

    # first 3 links are for 'Your feed', 'Discover', 'Your notifications'
    i = 0
    save_state = 0
    no_of_retries = 0
    failed_groups = []
    total_groups = len(group_links)
    while i < total_groups:
        try:
            # need only the group name and not Last Active
            group_name = group_links[i].text.split('\n')[0]
            # if group name not in ignore list
            if group_name.lower() not in IGNORE_DIV:
                logger.info(f"Leaving group: {group_name}")
                link = group_links[i].get_attribute('href')
                logger.info(f"Opening group link: {link}")
                switch_tab(driver, open_new_tab(driver))
                driver.get(link)
                if not leave_group(wait):
                    logger.info('Unable to leave the group. You might not be a member of this group.')
                driver.close()
                switch_tab(driver, driver.window_handles[0])
            else:
                if group_name.lower() not in ['your feed', 'discover', 'your notifications']:
                    logger.info(f"Skipping group: {group_name}")
            i += 1
        except StaleElementReferenceException:
            logger.error('Captured group elements gone stale. Recapturing...')
            if no_of_retries > args.groupretry:
                logger.error('Reached max number of retry attempts')
                break
            save_state = i
            group_links = get_group_link_elements(driver, wait)
            no_of_retries += 1
        except Exception as ex:
            logger.error(f"Unable to leave group {group_name}. Error: {ex}")
            failed_groups.append(group_name)
            i += 1

    total_no_of_groups = len(group_links)-3
    total_no_failed_groups = len(failed_groups)
    logger.info(f"Total groups: {total_no_of_groups}")
    logger.info(f"No. of groups failed to leave: {total_no_failed_groups}")
    logger.info(f"Success percentage: {((total_no_of_groups - total_no_failed_groups)/total_no_of_groups) * 100} %")
    if failed_groups:
        failed_group_names = ", ".join(failed_groups)
        logger.info(f"Failed groups: \n{failed_group_names}")
except Exception as ex:
    logger.error(f"Script ended with exception: {ex}")
finally:
    time_taken(start_time, logger)
    cleanup_and_quit(driver)
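The recapture-and-retry loop above (advance the index only on success, re-fetch the element list when it goes stale) can be sketched without selenium. `FakeStale`, `process_all`, and the callback names below are hypothetical stand-ins for illustration only:

```python
# A minimal, selenium-free sketch of the recapture-and-retry pattern used above.
# FakeStale stands in for selenium's StaleElementReferenceException.
class FakeStale(Exception):
    pass

def process_all(recapture, handle, max_retries=3):
    """Iterate over re-capturable items, re-fetching the list when it goes stale."""
    items = recapture()
    i = retries = 0
    while i < len(items):
        try:
            handle(items[i])
            i += 1  # advance only after a successful handle()
        except FakeStale:
            if retries >= max_retries:
                break
            items = recapture()  # re-fetch and resume from the same index
            retries += 1
    return i
```

Because the index is preserved across a recapture, one transient stale reference costs a single re-fetch rather than restarting the whole pass.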
# === file: test-examples/million_points.py | repo: tlambert03/image-demos | license: BSD-3-Clause ===
"""Test converting an image to a pyramid.
"""
import numpy as np
import napari
points = np.random.randint(100, size=(50_000, 2))
with napari.gui_qt():
    viewer = napari.view_points(points, face_color='red')
# === file: dist/Platform.app/Contents/Resources/lib/python3.7/wx/lib/colourchooser/canvas.py | repo: njalloul90/Genomics_Oncology_Platform | license: MIT ===
"""
PyColourChooser
Copyright (C) 2002 Michael Gilfix <mgilfix@eecs.tufts.edu>

This file is part of PyColourChooser.
This version of PyColourChooser is open source; you can redistribute it
and/or modify it under the licensed terms.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
"""

# 12/14/2003 - Jeff Grimmett (grimmtooth@softhome.net)
#
# o 2.5 compatibility update.
#
# 12/21/2003 - Jeff Grimmett (grimmtooth@softhome.net)
#
# o wxPyColorChooser -> PyColorChooser
# o wxPyColourChooser -> PyColourChooser
#
# Tags: phoenix-port

import wx


class BitmapBuffer(wx.MemoryDC):
    """A screen buffer class.

    This class implements a screen output buffer. Data is meant to
    be drawn in the buffer class and then blitted directly to the
    output device, or on-screen window.
    """

    def __init__(self, width, height, colour):
        """Initialize the empty buffer object."""
        wx.MemoryDC.__init__(self)

        self.width = width
        self.height = height
        self.colour = colour

        self.bitmap = wx.Bitmap(self.width, self.height)
        self.SelectObject(self.bitmap)

        # Initialize the buffer to the background colour
        self.SetBackground(wx.Brush(self.colour, wx.BRUSHSTYLE_SOLID))
        self.Clear()

        # Make each logical unit of the buffer equal to 1 pixel
        self.SetMapMode(wx.MM_TEXT)

    def GetBitmap(self):
        """Returns the internal bitmap for direct drawing."""
        return self.bitmap

    # GetPixel seems to always return (-1, -1, -1, 255)
    # on OS X so this is a workaround for that issue.
    def GetPixelColour(self, x, y):
        """Gets the color value of the pixel at the given
        coords.
        """
        img = self.GetAsBitmap().ConvertToImage()
        red = img.GetRed(x, y)
        green = img.GetGreen(x, y)
        blue = img.GetBlue(x, y)
        return wx.Colour(red, green, blue)


class Canvas(wx.Window):
    """A canvas class for arbitrary drawing.

    The Canvas class implements a window that allows for drawing
    arbitrary graphics. It implements a double buffer scheme and
    blits the off-screen buffer to the window during paint calls
    by the windowing system for speed.

    Some other methods for determining the canvas colour and size
    are also provided.
    """

    def __init__(self, parent, id,
                 pos=wx.DefaultPosition,
                 style=wx.SIMPLE_BORDER,
                 forceClientSize=None):
        """Creates a canvas instance and initializes the off-screen
        buffer. Also sets the handler for rendering the canvas
        automatically via size and paint calls from the windowing
        system."""
        wx.Window.__init__(self, parent, id, pos, style=style)

        if forceClientSize:
            self.SetMaxClientSize(forceClientSize)
            self.SetMinClientSize(forceClientSize)

        # Perform an initial sizing
        self.ReDraw()

        # Register event handlers
        self.Bind(wx.EVT_SIZE, self.onSize)
        self.Bind(wx.EVT_PAINT, self.onPaint)

    def MakeNewBuffer(self):
        size = self.GetClientSize()
        self.buffer = BitmapBuffer(size[0], size[1],
                                   self.GetBackgroundColour())

    def onSize(self, event):
        """Perform actual redraw to off-screen buffer only when the
        size of the canvas has changed. This saves a lot of computation
        since the same image can be re-used, provided the canvas size
        hasn't changed."""
        self.MakeNewBuffer()
        self.DrawBuffer()
        self.Refresh()

    def ReDraw(self):
        """Explicitly tells the canvas to redraw its contents."""
        self.onSize(None)

    def Refresh(self):
        """Re-draws the buffer contents on-screen."""
        dc = wx.ClientDC(self)
        self.Blit(dc)

    def onPaint(self, event):
        """Renders the off-screen buffer on-screen."""
        dc = wx.PaintDC(self)
        self.Blit(dc)

    def Blit(self, dc):
        """Performs the blit of the buffer contents on-screen."""
        width, height = self.buffer.GetSize()
        dc.Blit(0, 0, width, height, self.buffer, 0, 0)

    def GetBoundingRect(self):
        """Returns a tuple that contains the co-ordinates of the
        top-left and bottom-right corners of the canvas."""
        x, y = self.GetPosition()
        w, h = self.GetSize()
        return (x, y + h, x + w, y)

    def DrawBuffer(self):
        """Actual drawing function for drawing into the off-screen
        buffer. To be overridden in the implementing class. Does nothing
        by default."""
        pass
# === file: nasbench/scripts/generate-all-graphs.py | repo: bkj/nasbench | license: Apache-2.0 ===
#!/usr/bin/env python
"""
generate-all-graphs.py
python generate-all-graphs.py | gzip -c > all-graphs.gz
"""
import sys
import json
import itertools
import numpy as np
from tqdm import tqdm
from nasbench.lib import graph_util
from joblib import delayed, Parallel
max_vertices = 7
num_ops = 3
max_edges = 9
def make_graphs(vertices, bits):
matrix = np.fromfunction(graph_util.gen_is_edge_fn(bits), (vertices, vertices), dtype=np.int8)
if graph_util.num_edges(matrix) > max_edges:
return []
if not graph_util.is_full_dag(matrix):
return []
out = []
for labeling in itertools.product(*[range(num_ops) for _ in range(vertices-2)]):
labeling = [-1] + list(labeling) + [-2]
out.append({
"hash" : graph_util.hash_module(matrix, labeling),
"adj" : matrix.tolist(),
"labeling" : labeling,
})
return out
adjs = []
for vertices in range(2, max_vertices+1):
for bits in range(2 ** (vertices * (vertices-1) // 2)):
adjs.append((vertices, bits))
adjs = [adjs[i] for i in np.random.permutation(len(adjs))]
jobs = [delayed(make_graphs)(*adj) for adj in adjs]
res = Parallel(n_jobs=40, backend='multiprocessing', verbose=10)(jobs)
for r in res:
for rr in r:
print(json.dumps(rr)) | 24.388889 | 98 | 0.642369 | 189 | 1,317 | 4.359788 | 0.433862 | 0.054612 | 0.041262 | 0.055825 | 0.06068 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015702 | 0.226272 | 1,317 | 54 | 99 | 24.388889 | 0.792934 | 0.07593 | 0 | 0.057143 | 1 | 0 | 0.024917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.2 | 0 | 0.314286 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
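The enumeration above works because a DAG on `vertices` nodes has `vertices * (vertices - 1) / 2` possible upper-triangle edges, so every integer in `range(2 ** that)` encodes one distinct adjacency pattern. A standalone sketch of one such bit-to-edge decoding (the exact bit ordering used by `graph_util.gen_is_edge_fn` may differ; this is illustrative only):

```python
# Decode an integer `bits` into a strictly-upper-triangular adjacency matrix:
# each bit toggles one cell above the diagonal, so 2**(v*(v-1)//2) integers
# cover every possible edge pattern exactly once.
def bits_to_adjacency(bits, vertices):
    matrix = [[0] * vertices for _ in range(vertices)]
    i = 0
    for x in range(vertices):
        for y in range(x + 1, vertices):  # strictly upper triangle
            if (bits >> i) & 1:
                matrix[x][y] = 1
            i += 1
    return matrix
```

For `vertices=3` there are 3 candidate edges, hence 8 integers, and each yields a distinct matrix.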
# === file: ui/mext.py | repo: szymonkaliski/nott | license: MIT ===
# FIXME: fix all "happy paths coding" issues
import liblo
from threading import Thread


class Mext(object):
    device = None

    def __init__(self, device_port=5000):
        self.device_receiver = liblo.ServerThread(device_port)

        self.device_receiver.add_method("/monome/grid/key", "iii", self.on_grid_key)
        self.device_receiver.add_method(
            "/serialosc/device", "ssi", self.on_serialosc_device
        )
        self.device_receiver.start()

        liblo.send(liblo.Address(12002), "/serialosc/list", "127.0.0.1", device_port)

    def set_grid_key_callback(self, fn):
        self.grid_key_callback = fn

    def set_led_level(self, x, y, value):
        Thread(
            target=(
                lambda: liblo.send(
                    self.device, "/monome/grid/led/level/set", x, y, value
                )
            )
        ).start()

    def set_led_map(self, offset_x, offset_y, values):
        Thread(
            target=(
                lambda: liblo.send(
                    self.device,
                    "/monome/grid/led/level/map",
                    offset_x,
                    offset_y,
                    *values
                )
            )
        ).start()

    def on_grid_key(self, path, args):
        x, y, edge = args
        if self.grid_key_callback:
            self.grid_key_callback(x, y, edge)

    def on_serialosc_device(self, path, args):
        _, sysId, port = args
        self.device = liblo.Address(port)
# === file: trees.py | repo: dmancevo/trees | license: Apache-2.0 ===
from ctypes import *
class Node(Structure): pass

Node._fields_ = [
    ("leaf", c_int),
    ("g", c_float),
    ("min_samples", c_int),
    ("split_ind", c_int),
    ("split", c_float),
    ("left", POINTER(Node)),
    ("right", POINTER(Node))]

trees = CDLL("./trees.so")
trees.get_root.argtypes = (c_int, )
trees.get_root.restype = POINTER(Node)


class Tree(object):
    def __init__(self, min_samples=1):
        self.root = trees.get_root(min_samples)


if __name__ == '__main__':
    tree = Tree()
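The two-step declaration above (empty `Structure` subclass, then `_fields_` assigned afterwards) is what makes the self-referential `POINTER(Node)` possible: the name `Node` must already exist when the pointer type is built. A stdlib-only sketch of the same idiom that needs no `trees.so` (the `LNode` name and fields are hypothetical):

```python
from ctypes import Structure, POINTER, pointer, c_int, c_float

class LNode(Structure):
    pass  # fields deferred so POINTER(LNode) can refer back to this class

LNode._fields_ = [
    ("leaf", c_int),
    ("g", c_float),
    ("next", POINTER(LNode)),  # self-referential pointer, legal only post-hoc
]

# build a two-node chain purely in Python
child = LNode(leaf=1, g=2.5)
root = LNode(leaf=0, g=0.0, next=pointer(child))
```

Dereferencing works through `.contents`, which is also how values returned by `trees.get_root` would be read.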
# === file: purity_fb/purity_fb_1dot12/models/multi_protocol_rule.py | repo: tlewis-ps/purity_fb_python_client | license: Apache-2.0 ===
# coding: utf-8
"""
Pure Storage FlashBlade REST 1.12 Python SDK
Pure Storage FlashBlade REST 1.12 Python SDK. Compatible with REST API versions 1.0 - 1.12. Developed by [Pure Storage, Inc](http://www.purestorage.com/). Documentations can be found at [purity-fb.readthedocs.io](http://purity-fb.readthedocs.io/).
OpenAPI spec version: 1.12
Contact: info@purestorage.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class MultiProtocolRule(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
#BEGIN_CUSTOM
# IR-51527: Prevent Pytest from attempting to collect this class based on name.
__test__ = False
#END_CUSTOM
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'access_control_style': 'str',
'safeguard_acls': 'bool'
}
attribute_map = {
'access_control_style': 'access_control_style',
'safeguard_acls': 'safeguard_acls'
}
def __init__(self, access_control_style=None, safeguard_acls=None): # noqa: E501
"""MultiProtocolRule - a model defined in Swagger""" # noqa: E501
self._access_control_style = None
self._safeguard_acls = None
self.discriminator = None
if access_control_style is not None:
self.access_control_style = access_control_style
if safeguard_acls is not None:
self.safeguard_acls = safeguard_acls
@property
def access_control_style(self):
"""Gets the access_control_style of this MultiProtocolRule. # noqa: E501
The access control style that is utilized for client actions such as setting file and directory ACLs. Possible values include `nfs`, `smb`, `shared`, `independent`, and `mode-bits`. If `nfs` is specified, then SMB clients will be unable to set permissions on files and directories. If `smb` is specified, then NFS clients will be unable to set permissions on files and directories. If `shared` is specified, then NFS and SMB clients will both be able to set permissions on files and directories. Any client will be able to overwrite the permissions set by another client, regardless of protocol. If `independent` is specified, then NFS and SMB clients will both be able to set permissions on files and directories, and can access files and directories created over any protocol. Permissions set by SMB clients will not affect NFS clients and vice versa. NFS clients will be restricted to only using mode bits to set permissions. If `mode-bits` is specified, then NFS and SMB clients will both be able to set permissions on files and directories, but only mode bits may be used to set permissions for NFS clients. When SMB clients set an ACL, it will be converted to have the same permission granularity as NFS mode bits. # noqa: E501
:return: The access_control_style of this MultiProtocolRule. # noqa: E501
:rtype: str
"""
return self._access_control_style
@access_control_style.setter
def access_control_style(self, access_control_style):
"""Sets the access_control_style of this MultiProtocolRule.
The access control style that is utilized for client actions such as setting file and directory ACLs. Possible values include `nfs`, `smb`, `shared`, `independent`, and `mode-bits`. If `nfs` is specified, then SMB clients will be unable to set permissions on files and directories. If `smb` is specified, then NFS clients will be unable to set permissions on files and directories. If `shared` is specified, then NFS and SMB clients will both be able to set permissions on files and directories. Any client will be able to overwrite the permissions set by another client, regardless of protocol. If `independent` is specified, then NFS and SMB clients will both be able to set permissions on files and directories, and can access files and directories created over any protocol. Permissions set by SMB clients will not affect NFS clients and vice versa. NFS clients will be restricted to only using mode bits to set permissions. If `mode-bits` is specified, then NFS and SMB clients will both be able to set permissions on files and directories, but only mode bits may be used to set permissions for NFS clients. When SMB clients set an ACL, it will be converted to have the same permission granularity as NFS mode bits. # noqa: E501
:param access_control_style: The access_control_style of this MultiProtocolRule. # noqa: E501
:type: str
"""
self._access_control_style = access_control_style
@property
def safeguard_acls(self):
"""Gets the safeguard_acls of this MultiProtocolRule. # noqa: E501
If set to `true`, prevents NFS clients from erasing a configured ACL when setting NFS mode bits. If this is `true`, then attempts to set mode bits on a file or directory will fail if they cannot be combined with the existing ACL set on a file or directory without erasing the ACL. Attempts to set mode bits that would not erase an existing ACL will still succeed and the mode bit changes will be merged with the existing ACL. This must be `false` when `access_control_style` is set to either `independent` or `mode-bits`. # noqa: E501
:return: The safeguard_acls of this MultiProtocolRule. # noqa: E501
:rtype: bool
"""
return self._safeguard_acls
@safeguard_acls.setter
def safeguard_acls(self, safeguard_acls):
"""Sets the safeguard_acls of this MultiProtocolRule.
If set to `true`, prevents NFS clients from erasing a configured ACL when setting NFS mode bits. If this is `true`, then attempts to set mode bits on a file or directory will fail if they cannot be combined with the existing ACL set on a file or directory without erasing the ACL. Attempts to set mode bits that would not erase an existing ACL will still succeed and the mode bit changes will be merged with the existing ACL. This must be `false` when `access_control_style` is set to either `independent` or `mode-bits`. # noqa: E501
:param safeguard_acls: The safeguard_acls of this MultiProtocolRule. # noqa: E501
:type: bool
"""
self._safeguard_acls = safeguard_acls
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(MultiProtocolRule, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, MultiProtocolRule):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
# === file: linear_error_analysis/src/main.py | repo: spacesys-finch/Science | license: MIT ===
"""
main.py

Main driver for the Linear Error Analysis program.
Can be run using `lea.sh`.

Can choose which plots to see by toggling on/off `show_fig` param.

Author(s): Adyn Miles, Shiqi Xu, Rosie Liang
"""

import os

import matplotlib.pyplot as plt
import numpy as np

import config
import libs.gta_xch4 as gta_xch4
import libs.photon_noise as pn
from errors import Errors
from forward import Forward
from isrf import ISRF
from optim import Optim


if __name__ == "__main__":

    cfg = config.parse_config()

    forward = Forward(cfg)
    surface, molec, atm, sun_lbl = forward.get_atm_params()
    optics = forward.opt_properties()
    (
        wave_meas,
        rad_tot,
        rad_ch4,
        rad_co2,
        rad_h2o,
        d_rad_ch4,
        d_rad_co2,
        d_rad_h2o,
        rad_conv_tot,
        rad_conv_ch4,
        rad_conv_co2,
        rad_conv_h2o,
        dev_conv_ch4,
        dev_conv_co2,
        dev_conv_h2o,
    ) = forward.plot_transmittance(show_fig=False)
    state_vector = forward.produce_state_vec()

    isrf = ISRF(cfg)
    isrf_func = isrf.define_isrf(show_fig=False)
    isrf_conv = isrf.convolve_isrf(rad_tot, show_fig=False)

    lea = Errors(cfg, wave_meas)
    sys_errors = lea.sys_errors()
    rand_errors = lea.rand_errors()
    # sys_nonlinearity = lea.sys_err_vector(1)
    # sys_stray_light = lea.sys_err_vector(2)
    # sys_crosstalk = lea.sys_err_vector(3)
    # sys_flat_field = lea.sys_err_vector(4)
    # sys_bad_px = lea.sys_err_vector(5)
    # sys_key_smile = lea.sys_err_vector(6)
    # sys_striping = lea.sys_err_vector(7)
    # sys_memory = lea.sys_err_vector(8)
    ecm = lea.error_covariance()
    path_root = os.path.dirname(os.path.dirname(__file__))
    np.savetxt(os.path.join(path_root, "outputs", "ecm.csv"), ecm, delimiter=",")

    optim = Optim(cfg, wave_meas)
    jacobian = optim.jacobian(dev_conv_ch4, dev_conv_co2, dev_conv_h2o, show_fig=False)
    gain = optim.gain(ecm)
    modified_meas_vector = optim.modify_meas_vector(state_vector, rad_conv_tot, ecm)
    spectral_res, snr = optim.state_estimate(ecm, modified_meas_vector, sys_errors)

    print("Estimated Solution: " + str(spectral_res))
    print("Uncertainty of Solution: " + str(snr))

    # plot interpolated photon noise
    # plt.plot(lea.wave_meas, lea.photon_noise_interp)
    # plt.title("Interpolated Photon Noise")
    # plt.xlabel("Wavelength (nm)")
    # plt.ylabel("Photon Noise (UNITS?)")  # TODO
    # plt.show()
# === file: user_chainload.py | repo: Phidica/sublime-execline | license: MIT ===
import logging
import os
import re

import sublime

# external dependencies (see dependencies.json)
import jsonschema
import yaml  # pyyaml

# This plugin generates a hidden syntax file containing rules for additional
# chainloading commands defined by the user. The syntax is stored in the cache
# directory to avoid the possibility of it falling under user version control in
# the usual packages directory

userSyntaxName = 'execline-user-chainload.sublime-syntax'

pkgName = 'execline'
settingsName = 'execline.sublime-settings'
mainSyntaxPath = 'Packages/{}/execline.sublime-syntax'.format(pkgName)
schemaPath = 'Packages/{}/execline.sublime-settings.schema.json'.format(pkgName)

ruleNamespaces = {
  'keyword': 'keyword.other',
  'function': 'support.function',
}

ruleContexts = {
  'argument': {
    'generic': 'command-call-common-arg-aside-&pop',
    'variable': 'command-call-common-variable-&pop',
    'pattern': 'command-call-common-glob-&pop',
  },
  'block': {
    'program': 'block-run-prog',
    'arguments': 'block-run-arg',
    'trap': 'block-trap',
    'multidefine': 'block-multidefine',
  },
  'options': {
    'list': 'command-call-common-opt-list-&pop',
    'list-with-args': {
      'match': '(?=-[{}])',
      'push': 'command-call-common-opt-arg-&pop',
      'include': 'command-call-common-opt-list-&pop',
    },
  },
}

logging.basicConfig()
logger = logging.getLogger(__name__)


# Fully resolve the name of a context in the main syntax file
def _resolve_context(context):
  return mainSyntaxPath + '#' + context


# Create a match rule describing a command of a certain type, made of a list of
# elements
def _make_rule(cmd_name, cmd_elements, cmd_type):
  try:
    namespace = ruleNamespaces[cmd_type]
  except KeyError:
    logger.warning("Ignoring command of unrecognised type '{}'".format(cmd_type))
    return

  rule = {}

  # Careful to sanitise user input. Only literal command names accepted here
  rule['match'] = r'{{chain_pre}}' + re.escape(cmd_name) + r'{{chain_post}}'

  rule['scope'] = ' '.join([
    'meta.function-call.name.execline',
    '{}.user.{}.execline'.format(namespace, cmd_name),
    'meta.string.unquoted.execline',
  ])

  contextSeq = []
  for elem in cmd_elements:
    context = None

    # Resolve the element into a name and possible argument
    elemType,elemSubtype = elem[0:2]
    try:
      elemArg = elem[2]
    except IndexError:
      elemArg = ''

    # Look up the context named by this element
    try:
      contextData = ruleContexts[elemType][elemSubtype]
      if isinstance(contextData, str):
        contextData = { 'include': contextData }
    except KeyError:
      logger.warning("Ignoring key '{}' not found in context dictionary".format(elem))
      continue

    if len(contextData) > 1 and not elemArg:
      logger.warning("Ignoring element '{}' with missing data".format(elem))
      continue

    if len(contextData) == 1:
      # context = _resolve_context(contextData['include'])
      # Although a basic include could be provided as the target context name
      # directly to the 'push' list, this can break if there are a mix of other
      # types of contexts being pushed to the stack. A context containing a sole
      # include is safe from this
      context = [ {'include': _resolve_context(contextData['include'])} ]
    elif elemType == 'options':
      # Careful to sanitise user input, this must behave as a list of characters
      matchPattern = contextData['match'].format( re.escape(elemArg) )
      context = [
        {'match': matchPattern, 'push': _resolve_context(contextData['push'])},
        {'include': _resolve_context(contextData['include'])},
      ]

    if context:
      contextSeq.append(context)

  # Convert context sequence into context stack
  if contextSeq:
    rule['push'] = contextSeq
    rule['push'].reverse()

  return rule
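The `rule['push'].reverse()` above matters because sublime-syntax semantics enter the *last* context of a pushed list first (it sits on top of the stack); reversing the element sequence therefore makes contexts activate in the order the user listed them. A stack sketch of that behaviour (names are illustrative):

```python
# Demonstrate why _make_rule reverses the context sequence before storing it
# in 'push': the last-pushed context is entered first, so pushing the reversed
# list yields source order on the way back out.
def entered_order(push_list):
    stack = []
    for ctx in push_list:        # push front-to-back; last element lands on top
        stack.append(ctx)
    order = []
    while stack:                 # contexts are consumed top-of-stack first
        order.append(stack.pop())
    return order

elements = ['options', 'argument', 'block']  # order the user wrote them in
push = list(reversed(elements))              # what _make_rule stores in 'push'
```

Without the reversal, a command's option context would only activate after its block and argument contexts had already popped.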
def _validate_settings():
  # Read the schema using Sublime Text's builtin JSON parser
  try:
    schema = sublime.decode_value( sublime.load_resource(schemaPath) )
  except Exception as ex:
    logger.error("Failed loading schema: {}".format(ex))
    # Treat a schema load failure like "no valid settings"
    return None

  settings = sublime.load_settings(settingsName)

  activeSets = settings.get('user_chainload_active')
  if not activeSets:
    return []

  validSets = []
  for setName in activeSets:
    if not setName:
      sublime.error_message("Error in {}: Set name cannot be the empty string".format(settingsName))
      continue

    setName = 'user_chainload_set_' + setName
    setDict = settings.get(setName)
    if setDict == None:
      sublime.error_message("Error in {}: Couldn't find expected setting '{}'".format(settingsName, setName))
      continue

    try:
      jsonschema.validate(setDict, schema)
      logger.debug("Validation success for {}".format(setName))
      validSets.append(setName)
    except jsonschema.exceptions.SchemaError as ex:
      # A problem in the schema itself for me as the developer to resolve
      logger.error("Failed validating schema: {}".format(ex))
      break
    except jsonschema.exceptions.ValidationError as ex:
      # A problem in the settings file for the user to resolve
      sublime.error_message("Error in {} in setting '{}': \n{}".format(settingsName, setName, str(ex)))
      continue

  return validSets if validSets else None
def _write_user_chainload():
# Read settings file and validate
settings = sublime.load_settings(settingsName)
validSets = _validate_settings()
# Prepare output syntax file
cacheDir = os.path.join(sublime.cache_path(), pkgName)
if not os.path.isdir(cacheDir):
os.mkdir(cacheDir)
userSyntaxPath = os.path.join(cacheDir, userSyntaxName)
userSyntaxExists = os.path.isfile(userSyntaxPath)
# Skip writing the syntax if it already exists in a valid form and we don't
# have a valid set of rules for regenerating it
if userSyntaxExists:
        if validSets is None:
logger.warning("Not regenerating syntax due to lack of any valid settings")
return
else:
logger.info("Regenerating syntax with sets: {}".format(validSets))
else:
logger.info("Generating syntax with sets: {}".format(validSets))
userSyntax = open(userSyntaxPath, 'w')
# Can't seem to get PyYAML to write a header, so do it manually
header = '\n'.join([
r'%YAML 1.2',
r'# THIS IS AN AUTOMATICALLY GENERATED FILE.',
r'# DO NOT EDIT. CHANGES WILL BE LOST.',
r'---',
'',
])
userSyntax.write(header)
yaml.dump({'hidden': True, 'scope': 'source.shell.execline'}, userSyntax)
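For the `---` marker alone, PyYAML can be asked to emit it (a sketch; as far as I can tell, `explicit_start` writes the `---` line and passing `version=(1, 2)` also makes PyYAML write the `%YAML 1.2` directive, so only the warning comment would still need manual writing):

```python
import yaml

# explicit_start adds the '---' document marker; the version tuple appears
# to add a '%YAML 1.2' directive line before it.
text = yaml.dump({'hidden': True}, explicit_start=True, version=(1, 2))
```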
# Repeat all the variables from the main syntax file, for convenience
mainDB = yaml.load(sublime.load_resource(mainSyntaxPath),
Loader = yaml.BaseLoader)
yaml.dump({'variables': mainDB['variables']}, userSyntax)
# Create list of rules from the sets of user settings which are currently
# valid
rulesList = []
for rule in [r for s in validSets for r in settings.get(s)]:
# Schema validation guarantees we can trust all the following inputs
# Read a name or list of names
cmdNames = rule['name']
if isinstance(cmdNames, str):
cmdNames = [cmdNames]
# Get type with 'function' being default if not provided
cmdType = rule.get('type', 'function')
cmdElements = []
for elem in rule['elements']:
# Get the sole kv pair, apparently this is most efficient way
key,value = next( iter(elem.items()) )
if key in ruleContexts:
cmdElements.append( (key,value) )
elif 'options_then_' in key:
opts = ''.join( value.get('options_taking_arguments', []) )
if opts:
cmdElements.append( ('options', 'list-with-args', opts) )
else:
cmdElements.append( ('options', 'list') )
then = key.split('_')[-1]
if then == 'end':
# Ignore all further elements
break
else:
# Add the block, etc
cmdElements.append( (then, value[then]) )
for cmdName in cmdNames:
rulesList.append( _make_rule(cmdName, cmdElements, cmdType) )
# Only keep non-empty rules. Sublime doesn't mind if the list of rules ends up
# empty
content = {'contexts': {'main': [r for r in rulesList if r]}}
    yaml.dump(content, userSyntax)
    userSyntax.close()
def plugin_loaded():
settings = sublime.load_settings(settingsName)
settings.clear_on_change(__name__)
settings.add_on_change(__name__, _write_user_chainload)
if settings.get('user_chainload_debugging'):
logger.setLevel(logging.DEBUG)
else:
logger.setLevel(logging.WARNING)
_write_user_chainload()
| 31.090909 | 109 | 0.68 | 1,061 | 8,550 | 5.408106 | 0.310085 | 0.015859 | 0.017776 | 0.010457 | 0.106483 | 0.027536 | 0.012199 | 0 | 0 | 0 | 0 | 0.001177 | 0.205029 | 8,550 | 274 | 110 | 31.20438 | 0.843019 | 0.227251 | 0 | 0.144444 | 1 | 0 | 0.232034 | 0.074909 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.033333 | 0.005556 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f74f0070abe0b831d8cd12d2943b6e264b00e54d | 215 | py | Python | arc/arc009/arc009b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | arc/arc009/arc009b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | arc/arc009/arc009b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | def conv(x):
return int(''.join(t[c] for c in x))
b = input().split()
N = int(input())
a = [input() for _ in range(N)]
t = {b[i]: str(i) for i in range(10)}
a.sort(key=conv)
print(*a, sep='\n')
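The digit-remapping sort above can also be expressed with `str.translate` (a sketch; `b_demo` and the sample numbers are hypothetical stand-ins for the input):

```python
b_demo = list('9876543210')  # digit b_demo[i] is read as the digit i
table = str.maketrans({d: str(i) for i, d in enumerate(b_demo)})
nums = ['91', '19', '99']
nums.sort(key=lambda x: int(x.translate(table)))
# '99' -> 00, '91' -> 08, '19' -> 80, so nums ends up ['99', '91', '19']
```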
| 19.545455 | 40 | 0.548837 | 45 | 215 | 2.6 | 0.533333 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011561 | 0.195349 | 215 | 10 | 41 | 21.5 | 0.66474 | 0 | 0 | 0 | 0 | 0 | 0.009302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0.125 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
f75197db0d5043fe351ad2be154d400c859209b0 | 965 | py | Python | setup.py | refnode/spartakiade-2021-session-effective-python | 6b1a25c4ec79261de4ed6385a81b6a31a06d6b58 | [
"Apache-2.0"
] | 1 | 2021-06-04T14:05:31.000Z | 2021-06-04T14:05:31.000Z | setup.py | refnode/spartakiade-2021-session-effective-python | 6b1a25c4ec79261de4ed6385a81b6a31a06d6b58 | [
"Apache-2.0"
] | null | null | null | setup.py | refnode/spartakiade-2021-session-effective-python | 6b1a25c4ec79261de4ed6385a81b6a31a06d6b58 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""The setup script."""
from setuptools import setup, find_packages
with open("README.adoc") as fh_readme:
readme = fh_readme.read()
install_reqs = []
setup(
author="Sven Wilhelm",
author_email='refnode@gmail.com',
python_requires='>=3.8',
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'Natural Language :: English',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.8',
],
description="Spartakiade 2021 Session Effective Python",
install_requires=install_reqs,
long_description=readme,
include_package_data=True,
keywords='spartakiade-2021-session-effective-python',
name='spartakiade-2021-session-effective-python',
packages=find_packages(where="src"),
url='https://github.com/refnode/spartakiade-2021-session-effective-python',
version='0.1.0',
zip_safe=False,
)
| 28.382353 | 79 | 0.675648 | 111 | 965 | 5.756757 | 0.603604 | 0.093897 | 0.137715 | 0.194053 | 0.231612 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031726 | 0.18342 | 965 | 33 | 80 | 29.242424 | 0.779188 | 0.039378 | 0 | 0 | 0 | 0 | 0.444083 | 0.089034 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f75630cbc7b1eef703d5e902537d65487d1b7612 | 4,126 | py | Python | sim/pid.py | jmagine/rf-selection | ba9dcb5ca550916873ce68baa71da983f2dd4be5 | [
"MIT"
] | 1 | 2020-05-06T01:28:06.000Z | 2020-05-06T01:28:06.000Z | sim/pid.py | jmagine/multiuav-rf | ba9dcb5ca550916873ce68baa71da983f2dd4be5 | [
"MIT"
] | null | null | null | sim/pid.py | jmagine/multiuav-rf | ba9dcb5ca550916873ce68baa71da983f2dd4be5 | [
"MIT"
] | null | null | null | '''*-----------------------------------------------------------------------*---
Author: Jason Ma
Date : Oct 18 2018
TODO
File Name : pid.py
Description: TODO
---*-----------------------------------------------------------------------*'''
import time
import matplotlib.animation as anim
import matplotlib.pyplot as plt
import threading
import math
import numpy as np
'''[Global Vars]------------------------------------------------------------'''
ORIGIN_X = 0.0
ORIGIN_Y = 0.0
C_R = 10
#plt.autoscale(enable=True, axis="both")
fig = plt.figure()
ax = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
scat = ax.scatter([], [])
ax.set_xlim([-1 * C_R - 1, C_R + 1])
ax.set_ylim([-1 * C_R - 1, C_R + 1])
scat.set_facecolors(['g', 'r'])
scat.set_sizes([31, 31])
prev_time = time.time()
vel = np.array([0.0, 0.0])
errors = [0, 1]
error_plot, = ax2.plot([i for i in range(len(errors))], errors, color="g")
class drone():
def __init__(self, p, vel):
self.pos = np.array(p)
self.v = np.array(vel)
self.prev_error = np.zeros((2))
self.integral = np.zeros((2))
self.dt = 0.01
self.kp = 0.8 * 2.0
self.ki = 0
self.kd = 0
#self.ki = 2.0 * self.kp / 2.0
#self.kd = self.kp * 2.0 / 8.0
#self.ki = 2 * self.kp / 1.0
#self.kd = self.kp * 0.01 / 8
def callback(self):
pass
def run(self, ref_pos, vx=None, vy=None):
self.pos += self.v
#print(self.integral)
if vx:
self.v[0] = vx
if vy:
self.v[1] = vy
#compute PID output
error = ref_pos - self.pos
self.integral = self.integral * 0.99 + error * self.dt
'''
for i in range(2):
if self.integral[i] > 1:
self.integral[i] = 1
elif self.integral[i] < -1:
self.integral[i] = -1
'''
#print(self.integral)
derivative = (error - self.prev_error) / self.dt
for i in range(2):
if derivative[i] > 0.1:
derivative[i] = 0.1
elif derivative[i] < -0.1:
derivative[i] = -0.1
self.prev_error = error
pid_output = (self.kp * error) + (self.ki * self.integral) + (self.kd * derivative)
print(self.pos, pid_output, self.kp * error, self.ki * self.integral, self.kd * derivative)
#print(error[0])
#errors.append(error[0])
return pid_output
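The same discrete update can be written as a standalone step function (a sketch with illustrative gains; `pid_step` is a hypothetical name, not part of this script):

```python
def pid_step(error, prev_error, integral, dt, kp=1.6, ki=0.0, kd=0.0, leak=0.99):
    # Leaky integral plus finite-difference derivative, mirroring drone.run() above.
    integral = integral * leak + error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, integral

# With ki = kd = 0 this reduces to pure P control: output == kp * error.
out, integ = pid_step(1.0, 0.0, 0.0, 0.01)
```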
d = drone([ORIGIN_X + C_R, ORIGIN_Y], [0.0, 0.0])
def dist(p1, p2):
assert len(p1) == len(p2)
dims = len(p1)
total = 0
for i in range(dims):
total += (p2[i] - p1[i]) * (p2[i] - p1[i])
return (total)**(1/2)
#def pid_angle(x, y, ref_x, ref_y, d):
# return math.atan(-1 * (C_R - dist(x, y, ORIGIN_X, ORIGIN_Y)) / d) + math.atan((y - ORIGIN_Y) / (x - ORIGIN_X)) + math.pi / 2
def ref(t):
return np.array([ORIGIN_X + C_R * math.cos(t), ORIGIN_Y + C_R * math.sin(t)])
def update(i):
global prev_time, vel
#update reference point position
curr_time = time.time()
ref_point = ref(i / 25.0)
#ref_x = ref_point[0]
#ref_y = ref_point[1]
out = d.run(ref_point)
for i in range(2):
if out[i] > 10 or out[i] < -10:
out = out * 10 / out[i]
#print(d.pos, out)
d.v = out
while time.time() - prev_time < d.dt:
time.sleep(d.dt / 10)
prev_time = time.time()
#print the desired angle of drone
#pid_ang = pid_angle(d.x, d.y, ref_point[0], ref_point[1], 0.05)
#print(math.cos(pid_ang), math.sin(pid_ang))
#d.run(math.cos(pid_ang), math.sin(pid_ang))
scat.set_offsets([[ref_point[0], ref_point[1]], [d.pos[0], d.pos[1]]])
errors.append(dist(ref_point, d.pos))
error_plot.set_xdata([i for i in range(len(errors))])
error_plot.set_ydata(errors)
ax2.set_xlim([-1, len(errors) + 1])
    ax2.set_ylim([0, max(errors) + 1])
def main():
    d = drone([ORIGIN_X + C_R, ORIGIN_Y], [0.0, 0.0])
if __name__ == '__main__':
#main()
a = anim.FuncAnimation(fig, update, range(1000), interval=1, blit=False, repeat=False)
plt.show()
| 25.469136 | 127 | 0.537082 | 666 | 4,126 | 3.205706 | 0.217718 | 0.061827 | 0.016862 | 0.030913 | 0.250117 | 0.223888 | 0.200468 | 0.174239 | 0.078689 | 0.055269 | 0 | 0.047895 | 0.246001 | 4,126 | 161 | 128 | 25.627329 | 0.63838 | 0.26127 | 0 | 0.046512 | 0 | 0 | 0.003953 | 0 | 0 | 0 | 0 | 0.012422 | 0.011628 | 1 | 0.093023 | false | 0.011628 | 0.069767 | 0.023256 | 0.22093 | 0.011628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f75c066d3ec31ec2f99d70612e1572ff45c4ae07 | 930 | py | Python | Regression/multiple_linear_regression.py | Rupii/Machine-Learning | 2b00698815efb04346d5cb980b68af76f27a5ca6 | [
"MIT"
] | null | null | null | Regression/multiple_linear_regression.py | Rupii/Machine-Learning | 2b00698815efb04346d5cb980b68af76f27a5ca6 | [
"MIT"
] | null | null | null | Regression/multiple_linear_regression.py | Rupii/Machine-Learning | 2b00698815efb04346d5cb980b68af76f27a5ca6 | [
"MIT"
] | 1 | 2019-09-04T05:43:31.000Z | 2019-09-04T05:43:31.000Z | # -*- coding: utf-8 -*-
"""
Created on Sat Feb 24 23:18:54 2018
@author: Rupesh
"""
# Multiple Linear Regression
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("ggplot")
# load the dataset
df = pd.read_csv("50_Startups.csv")
df.head()
X = df.iloc[:, :-1].values
y = df.iloc[:, 4].values
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
X_cat = LabelEncoder()
X[:, 3] = X_cat.fit_transform(X[:, 3])
onehot = OneHotEncoder(categorical_features = [3])
X = onehot.fit_transform(X).toarray()
# avoiding the dummy variable trap
X = X[:, 1:]
# train test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
# model
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, y_train)
# predict
y_pred = reg.predict(X_test)
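For a quick sanity check of a fit, scikit-learn's `r2_score` can be used (a sketch on toy data; the demo arrays are hypothetical stand-ins for the startup dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Toy data lying exactly on y = 2x + 1, so the fit should be perfect.
X_demo = np.array([[1.0], [2.0], [3.0], [4.0]])
y_demo = np.array([3.0, 5.0, 7.0, 9.0])
demo_reg = LinearRegression().fit(X_demo, y_demo)
r2 = r2_score(y_demo, demo_reg.predict(X_demo))  # ~1.0 for a perfect linear fit
```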
import skl | 19.787234 | 74 | 0.731183 | 147 | 930 | 4.47619 | 0.52381 | 0.050152 | 0.06383 | 0.045593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028858 | 0.143011 | 930 | 47 | 75 | 19.787234 | 0.796738 | 0.198925 | 0 | 0 | 0 | 0 | 0.028689 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f761849b5a4f4a9a3e0e3b79a8c5c9b1f726ae8e | 3,444 | py | Python | projects/ide/sublime/src/Bolt/api/inspect/highlighting.py | boltjs/bolt | c2666c876b34b1a61486a432eef3141ca8d1e411 | [
"BSD-3-Clause"
] | 11 | 2015-09-29T19:19:34.000Z | 2020-11-20T09:14:46.000Z | projects/ide/sublime/src/Bolt/api/inspect/highlighting.py | boltjs/bolt | c2666c876b34b1a61486a432eef3141ca8d1e411 | [
"BSD-3-Clause"
] | null | null | null | projects/ide/sublime/src/Bolt/api/inspect/highlighting.py | boltjs/bolt | c2666c876b34b1a61486a432eef3141ca8d1e411 | [
"BSD-3-Clause"
] | null | null | null | import sublime
from ui.read import settings as read_settings
from ui.write import write, highlight as write_highlight
from lookup import file_type as lookup_file_type
from ui.read import x as ui_read
from ui.read import spots as read_spots
from ui.read import regions as ui_regions
from core.read import read as core_read
from structs.general_thread import *
from structs.thread_handler import *
from structs.highlight_list import *
from structs.flag_region import *
from core.analyse import analyse
def flags():
return [
FlagRegion('bolt.incorrect', 'comment', 'light_x', 0),
FlagRegion('bolt.missing', 'string', 'arrow_right', 0),
FlagRegion('bolt.unused', 'comment', 'dot', sublime.DRAW_OUTLINED),
FlagRegion('bolt.wrong_module', 'comment', 'light_x', 0)
]
def highlight_setting():
return 'bolt.live.highlight'
def rate_setting():
return 'bolt.live.highlight.rate'
def is_enabled():
settings = read_settings.load_settings()
return settings.get(highlight_setting(), False)
def get_rate():
settings = read_settings.load_settings()
return settings.get(rate_setting(), 1000)
def set_enabled(state):
settings = read_settings.load_settings()
settings.set(highlight_setting(), state)
write.save_settings()
def toggle(view):
def noop(v):
return True
handler = ThreadHandler(noop, noop, noop)
prev = is_enabled()
current = not prev
if (current):
run(view, handler)
else:
clear(view)
set_enabled(current)
def run(view, handler):
valid = lookup_file_type.is_bolt_module(view)
if not valid:
        open_file = view.file_name() if view.file_name() is not None else '-- no view'
print 'View is not a bolt module: ' + open_file
handler.cancel()
else:
read_view = ui_read.all(view)
spots = read_spots.spots(view)
plasmas = core_read.plasmas(read_view.ptext)
def update_ui(highlights, module_wrong):
def run():
regions = write_highlight.regions(view, highlights)
module_region = [ui_regions.module_name(view)] if module_wrong else []
flag_info = zip(flags(), [regions.incorrect, regions.missing, regions.unused, module_region])
                def highlight_flag(x):
                    if len(x[1]) > 0:
                        write_highlight.highlight(view, x[1], x[0])
                    else:
                        write_highlight.remove_highlight(view, x[0])
map(highlight_flag, flag_info)
sublime.set_timeout(run, 0)
thread = GeneralThread(_highlighter(read_view, spots, plasmas, update_ui), handler.success, handler.failure)
sublime.set_timeout(thread.start, 0)
handler.init(thread)
def clear(view):
def run():
write_highlight.remove_highlights(view, flags())
sublime.set_timeout(run, 0)
def _highlighter(read_view, spots, plasmas, callback):
def r():
try:
highlights = analyse.all(read_view.base, read_view.nests, plasmas, spots, read_view.external)
module_wrong = analyse.module_wrong(read_view)
callback(highlights, module_wrong)
except Exception as exc:
print "Error during identifying highlighted regions: " + str(exc)
traceback.print_exc(limit=10)
callback(HighlightLists([], [], []), False)
return r
| 29.689655 | 116 | 0.654472 | 433 | 3,444 | 5.018476 | 0.258661 | 0.029452 | 0.018408 | 0.029452 | 0.135297 | 0.045099 | 0.045099 | 0.045099 | 0 | 0 | 0 | 0.006518 | 0.242741 | 3,444 | 115 | 117 | 29.947826 | 0.826687 | 0 | 0 | 0.117647 | 0 | 0 | 0.068235 | 0.006969 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.152941 | null | null | 0.035294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7639fdfd1c81876235b0d816ccef91c2a2888bb | 903 | py | Python | spellingcorrector/utils/count.py | NazcaLines/spelling-corrector | ae315a3988e94ee46f60ff4ac7d2ee7609ebc24b | [
"MIT"
] | null | null | null | spellingcorrector/utils/count.py | NazcaLines/spelling-corrector | ae315a3988e94ee46f60ff4ac7d2ee7609ebc24b | [
"MIT"
] | null | null | null | spellingcorrector/utils/count.py | NazcaLines/spelling-corrector | ae315a3988e94ee46f60ff4ac7d2ee7609ebc24b | [
"MIT"
] | null | null | null | import os
import functools
CORPUS_DIR = str(os.getcwd())[:str(os.getcwd()).index('spellingcorrector/')] \
+ 'data/corpus.txt'
NWORD = {}
def checkCorpus(fn):
@functools.wraps(fn)
def new_func(*args, **kwargs):
t = os.path.isfile(CORPUS_DIR)
if t == False:
raise IOError('cannot find corpus in data/')
return fn(*args, **kwargs)
return new_func
@checkCorpus
def train():
global NWORD
with open(CORPUS_DIR, 'r') as f:
for line in f:
split = line.split()
#tmp = {split[0]:float(split[1])}
NWORD[split[0]] = float(split[1])
def getTrain():
"""
simple singleton implement
"""
global NWORD
if len(NWORD) == 0:
train()
return NWORD
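The same lazy, load-once pattern can be sketched without reassigning a module-level global (names here are hypothetical, not part of the corrector):

```python
_cache = {}

def get_counts():
    # Populate on first call only; later calls return the same dict object.
    if not _cache:
        _cache.update({'the': 1000.0, 'of': 500.0})  # stand-in for train()
    return _cache

c1 = get_counts()
c2 = get_counts()
```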
if __name__ == "__main__":
getTrain()
print CORPUS_DIR
print os.path.isfile(CORPUS_DIR)
print len(NWORD)
| 19.630435 | 78 | 0.572536 | 113 | 903 | 4.442478 | 0.469027 | 0.089641 | 0.043825 | 0.071713 | 0.151394 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007776 | 0.287929 | 903 | 45 | 79 | 20.066667 | 0.772939 | 0.035437 | 0 | 0.066667 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f76d62143b8e1fa514207d6381b4adbf58120f1a | 2,889 | py | Python | skos/method.py | edmondchuc/voc-view | 57bd965facacc77f40f218685c88e8b858d4925c | [
"MIT"
] | 3 | 2021-07-31T16:23:26.000Z | 2022-01-24T01:28:17.000Z | skos/method.py | edmondchuc/voc-view | 57bd965facacc77f40f218685c88e8b858d4925c | [
"MIT"
] | null | null | null | skos/method.py | edmondchuc/voc-view | 57bd965facacc77f40f218685c88e8b858d4925c | [
"MIT"
] | 1 | 2019-08-07T06:02:52.000Z | 2019-08-07T06:02:52.000Z | from pyldapi.renderer import Renderer
from pyldapi.view import View
from flask import render_template, Response
from rdflib import Graph, URIRef, BNode
import skos
from skos.common_properties import CommonPropertiesMixin
from config import Config
class Method(CommonPropertiesMixin):
def __init__(self, uri):
CommonPropertiesMixin.__init__(self, uri)
self.uri = uri
self.purpose = skos.get_method_purpose(uri)
self.scope = skos.get_method_scope(uri)
self.equipment = skos.get_method_equipment(uri)
self.time_required = skos.get_method_time_required(uri)
self.instructions = skos.get_method_instructions(uri)
self.additional_note = skos.get_method_additional_note(uri)
self.parameters = skos.get_parameter_relations(uri)
self.categorical_variables = skos.get_categorical_variables_relations(uri)
class MethodRenderer(Renderer):
def __init__(self, uri, request):
self.uri = uri
views = {
'method': View(
'Method',
'A TERN method.',
['text/html'] + Renderer.RDF_MIMETYPES,
'text/html',
namespace='https://w3id.org/tern/ontologies/tern/'
)
}
super().__init__(request, uri, views, 'method')
# TODO: Make a base class and make this a method of the base class.
def render_rdf(self):
g = Graph()
for subj, pred, obj in Config.g.triples((URIRef(self.uri), None, None)):
g.add((subj, pred, obj))
if type(obj) == BNode:
for s, p, o in Config.g.triples((obj, None, None)):
g.add((s, p, o))
return Response(g.serialize(format=self.format), mimetype=self.format)
def render(self):
if not hasattr(self, 'format'):
self.format = 'text/html'
if self.view == 'method':
if self.format == 'text/html':
cc = Method(self.uri)
return render_template('method.html', title=cc.label, c=cc,
skos_class=('https://w3id.org/tern/ontologies/tern/Method', 'Method'),
formats=[(format, format.split('/')[-1]) for format in self.views.get('method').formats])
elif self.format in Renderer.RDF_MIMETYPES:
return self.render_rdf()
else:
# In theory, this line should never execute because if an invalid format has been entered, the pyldapi
# will default to the default format. In this case, The default format for the default view (skos) is
# text/html.
raise RuntimeError('Invalid format error')
else:
# Let pyldapi handle the rendering of the 'alternates' view.
return super(MethodRenderer, self).render()
| 40.125 | 128 | 0.602631 | 343 | 2,889 | 4.941691 | 0.326531 | 0.037168 | 0.046018 | 0.016519 | 0.035398 | 0.035398 | 0 | 0 | 0 | 0 | 0 | 0.001474 | 0.295604 | 2,889 | 71 | 129 | 40.690141 | 0.83145 | 0.116303 | 0 | 0.072727 | 0 | 0 | 0.080879 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 1 | 0.072727 | false | 0 | 0.127273 | 0 | 0.309091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f7836ff545709d136c298d62a1c6e262234ad38c | 2,692 | py | Python | Python/Activies/Classroom10-1.py | FranciscoMends/Python_Codes | fd0b33443d67b56b092beeea0e778285be6a42a9 | [
"MIT"
] | null | null | null | Python/Activies/Classroom10-1.py | FranciscoMends/Python_Codes | fd0b33443d67b56b092beeea0e778285be6a42a9 | [
"MIT"
] | null | null | null | Python/Activies/Classroom10-1.py | FranciscoMends/Python_Codes | fd0b33443d67b56b092beeea0e778285be6a42a9 | [
"MIT"
] | null | null | null | '''
nome = input('Enter your name: ')
if nome == 'Mendes':
    print('What a beautiful name you have!')
else:
    print('Your name is so ordinary!')
print('Good morning, {}!'.format(nome))
'''
#DESAFIO_28
'''
from random import randint
from time import sleep
x = randint(0, 5)
y = int(input('Enter a number from 0 to 5: '))
print('Loading...')
sleep(2)
if x == y:
    print('Congratulations, you won!')
else:
    print('Try again, you lost!')
print(x)
'''
#DESAFIO_29
'''
velocity = int(input("What is your car's current speed in km/h? "))
if velocity > 80:
    print('You were fined for speeding!')
    print('Speed limit: 80 km/h')
    print('Your speed: {} km/h'.format(velocity))
    infraction = (velocity - 80) * 7
    print('Fine amount: R${},00'.format(infraction))
'''
#DESAFIO_30
'''
number = int(input('Enter an integer: '))
if number % 2 == 0:
    print('Your number is EVEN!')
else:
    print('Your number is ODD!')
'''
#DESAFIO_31
'''
distance = int(input('How many km do you want to travel? '))
if distance <= 200:
    final_value = distance * 0.50
else:
    final_value = distance * 0.45
print('Ticket price: R${:.2f}'.format(final_value))
'''
#DESAFIO_32
'''
from datetime import date
year = int(input('Enter a year (enter "0" to use the current year): '))
if year == 0:
    year = date.today().year
if year % 4 == 0 and year % 100 != 0 or year % 400 == 0:
    print(year, 'is a LEAP year!')
else:
    print(year, 'is not a LEAP year!')
'''
#DESAFIO_33
'''
x = int(input('Enter the first number: '))
y = int(input('Enter the second number: '))
z = int(input('Enter the third number: '))
number_max = max(x, y, z)
number_min = min(x, y, z)
print('Largest number:', number_max)
print('Smallest number:', number_min)
'''
#DESAFIO_34
'''
wage = float(input('Enter your salary: R$'))
if wage > 1250:
    salary_increase = ((10 / 100) * wage) + wage
    percent = 10
else:
    salary_increase = ((15 / 100) * wage) + wage
    percent = 15
print()
print('Current salary: R${:.2f}'.format(wage))
print('Raise of {}%'.format(percent))
print('Final salary: R${:.2f}'.format(salary_increase))
'''
#DESAFIO_35
'''
line1 = float(input('Enter the length of the first segment: '))
line2 = float(input('Enter the length of the second segment: '))
line3 = float(input('Enter the length of the third segment: '))
if line1 < line2 + line3 and line2 < line1 + line3 and line3 < line1 + line2:
    print('They can form a triangle!')
else:
    print('They cannot form a triangle!')
'''
#EXAM
'''
s = 'python exam'
x = len(s)
print(x)
x = 'python course at cursoemvideo'
y = x[:5]
print(y)
'''
x = 3 * 5 + 4 ** 2
print(x) | 24.697248 | 84 | 0.643759 | 408 | 2,692 | 4.203431 | 0.355392 | 0.037318 | 0.032653 | 0.026239 | 0.052478 | 0.052478 | 0 | 0 | 0 | 0 | 0 | 0.041364 | 0.182764 | 2,692 | 109 | 85 | 24.697248 | 0.738182 | 0.094354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f78374a6b9c098ca930042e6331630796196647c | 4,902 | py | Python | temp logger complete.py | nevillethenev/Beer | a8fae43e7b2f846e208daad4a9b025703f0acb2a | [
"Unlicense"
] | null | null | null | temp logger complete.py | nevillethenev/Beer | a8fae43e7b2f846e208daad4a9b025703f0acb2a | [
"Unlicense"
] | null | null | null | temp logger complete.py | nevillethenev/Beer | a8fae43e7b2f846e208daad4a9b025703f0acb2a | [
"Unlicense"
] | null | null | null | #/usr/bin/python
import serial
import time
import matplotlib.pyplot as plt
import numpy as np
import os
"""""""""""""""""""""""""""""""""""
"""""""NEVS BEER SCRIPT""""""""""""
"""""""""""""""""""""""""""""""""""
###need to add exception handler for serial disconnection
## SETUP SERIAL PORT
try:
ser = serial.Serial('COM3',9600) # open serial port
print('Serial connection established!')
except:
print('ERR: Unable to connect to arduino...retrying')
time.sleep(3)
try:
ser = serial.Serial('COM3',9600)
except:
raw_input('ERR: Unable to connect to arduino....check connections and press Enter to continue')
try:
ser = serial.Serial('COM3',9600)
except:
raw_input('ERR: Unable to connect to arduino...Press Enter to exit..')
## STRIKE WATER CALCULATOR
##strike water calculator
##volume of water is heated inside an insulated mash tun
##grain is added to mash tun
## Tw = (Tm((Sw*mw)+(Sg*mg))-(Sg*mg*Tg))/(Sw*mw)
## Tw = strike water temp.
## Tm = mash temp.
Sw = 1.0 ##Specific heat water
Sg = 0.4 ##Specific heat grain
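The mash formula above can be spot-checked with a small helper (a sketch; `strike_temp` and the sample numbers are hypothetical, not part of the brewing workflow below):

```python
def strike_temp(Tm, mw, mg, Tg, Sw=1.0, Sg=0.4):
    # Tw = (Tm*((Sw*mw)+(Sg*mg)) - Sg*mg*Tg) / (Sw*mw), rounded like the script.
    return round((Tm * ((Sw * mw) + (Sg * mg)) - (Sg * mg * Tg)) / (Sw * mw), 1)

# Example: a 67 C mash of 5 kg grain at 20 C into 20 L of water
# needs strike water at (67*22 - 40)/20 = 71.7 C.
```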
beername = raw_input("Please enter the name of the beer:")
Tm = input("Mash Temp.(\xb0C)")
Vw = input("Water Volume(L)")
mw = Vw; ##mass water(kg) = volume water(L)
mg = input("Grain mass(kg)")
Tg = input("Grain temp.(\xb0C)")
print("Calculating...")
time.sleep(1)
Tw = (Tm*((Sw*mw)+(Sg*mg))-(Sg*mg*Tg))/(Sw*mw)
Tw = round(Tw,1)
##print "Strike temp.(\xb0C) = "+str(Tw)
## MASH INSTRUCTIONS
print 'Set strike temperature to ' + str(Tw) + '\xb0C'
raw_input('Press Enter to continue...')
temperaturefloat = 0
##measure temperature
while True:
try:
temperaturefloat = round(float((ser.read(7))),1) #read
except: ##handle all serial read errors
try:
ser = serial.Serial('COM3',9600) # open serial port
except:
ser.close()
ser = serial.Serial('COM3',9600) # open serial port
temperaturefloat = 0
time.sleep(0.1)
print str(temperaturefloat) + '\xb0C'
time.sleep(0.1)
## if temperaturefloat > Tm: #### check temperature 5 times
## dragon = np.ones(5)
## for i in range(0,4):
## try:
## temperaturefloat = round(float(ser.read(7)),1)
## except: ##handle all serial read errors
## temperaturefloat = 0
##
## if temperaturefloat < 0:
## temperaturefloat = 0
##
## print str(temperaturefloat) + '\xb0C'
## dragon[i] = temperaturefloat
## print str(dragon)
## time.sleep(0.1)
## if sum(dragon)/5 > Tm:
## print 'SUCCESS'
## break
if temperaturefloat > Tm:
print 'Stike temperature reached! Please stir the water and prepare grain for submersion...'
mashtime1 = 60*input('Enter total mash time (min):')
raw_input('Submerge grain and press enter to coninue...')
print 'Mash in progress, please wait ' + str(mashtime1/60) + ' minutes...'
break
## TEMPERATURE LOGGING
ser.close() ## restart Com port
ser = serial.Serial('COM3',9600)
print 'Temp(\xb0C)\tTime(s)'
nowtimefloat = 0
temperaturefloat = 0
#read from serial and log until the mash timer expires
goblin = open('templog.txt', 'a') #open log file once, before the loop
while nowtimefloat < mashtime1:
    try:
        temperaturefloat = round(float(ser.read(7)), 1) #read
    except: ##handle all serial read errors
        try:
            ser = serial.Serial('COM3', 9600) # open serial port
        except:
            ser.close()
            ser = serial.Serial('COM3', 9600) # open serial port
        temperaturefloat = 0
        time.sleep(0.1)
    nowtimefloat = round(time.clock(), 1)
    nowtimestring = str(nowtimefloat)
    temperaturestring = str(temperaturefloat)
    datastring = temperaturestring + '\t' + nowtimestring + '\n'
    print(datastring) #print temp to console
    goblin.write(datastring)
##    goblin.flush()
##    ser.close() # close port
else:
print "Mash complete!"
raw_input('Press Enter to save the data..')
goblin.close()
os.rename('templog.txt',beername + 'templog.txt')
print 'Data saved!'
raw_input('Press Enter to exit...')
## DATA ANALYSIS
##plt.axis([0,3600,55,75])
###temperature lines
##plt.hlines(70,0,3600,colors='r')
##plt.hlines(60,0,3600,colors='r')
##
##dragon = np.loadtxt('templog.txt', delimiter="\t")
##x = dragon[:,1]
##y = dragon[:,0]
##
##plt.scatter(x,y)
####plt.draw()
##plt.show()
##plt.waitforbuttonpress()
####plt.pause(0.1)
##
##raw_input('Press Enter to exit...')
##
| 29.178571 | 104 | 0.563035 | 587 | 4,902 | 4.688245 | 0.301533 | 0.026163 | 0.043605 | 0.055233 | 0.304869 | 0.272529 | 0.234012 | 0.234012 | 0.220203 | 0.205669 | 0 | 0.032569 | 0.279682 | 4,902 | 167 | 105 | 29.353293 | 0.746814 | 0.342513 | 0 | 0.395349 | 0 | 0 | 0.265201 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.05814 | null | null | 0.127907 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f788b1d1658062d96ad83c42b9cd26071a4b8418 | 374 | py | Python | my_spotless_app/migrations/0002_alter_service_picture_url.py | AntociM/Spotless | 8cd2d7f76eccee046d42f7a836cf91af04527186 | [
"ADSL"
] | null | null | null | my_spotless_app/migrations/0002_alter_service_picture_url.py | AntociM/Spotless | 8cd2d7f76eccee046d42f7a836cf91af04527186 | [
"ADSL"
] | 29 | 2022-01-22T19:05:56.000Z | 2022-03-01T08:57:14.000Z | my_spotless_app/migrations/0002_alter_service_picture_url.py | AntociM/Project-4 | 8cd2d7f76eccee046d42f7a836cf91af04527186 | [
"ADSL"
] | 1 | 2022-03-02T11:00:59.000Z | 2022-03-02T11:00:59.000Z | # Generated by Django 3.2 on 2022-02-27 11:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('my_spotless_app', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='service',
name='picture_url',
field=models.TextField(),
),
]
| 19.684211 | 45 | 0.590909 | 39 | 374 | 5.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068441 | 0.296791 | 374 | 18 | 46 | 20.777778 | 0.752852 | 0.114973 | 0 | 0 | 1 | 0 | 0.136778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f78d23bb7041a7dd86f556d3f4cd134329c150dd | 2,604 | py | Python | tests/utils_tests/testing_tests/assertions_tests/test_assert_is_bbox_dataset.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | 1 | 2018-12-27T03:47:45.000Z | 2018-12-27T03:47:45.000Z | tests/utils_tests/testing_tests/assertions_tests/test_assert_is_bbox_dataset.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | null | null | null | tests/utils_tests/testing_tests/assertions_tests/test_assert_is_bbox_dataset.py | souravsingh/chainercv | 8f76510472bc95018c183e72f37bc6c34a89969c | [
"MIT"
] | 2 | 2019-12-16T02:20:26.000Z | 2022-01-17T02:00:49.000Z | import numpy as np
import unittest
from chainer.dataset import DatasetMixin
from chainer import testing
from chainercv.utils import assert_is_bbox_dataset
from chainercv.utils import generate_random_bbox
class BboxDataset(DatasetMixin):
def __init__(self, options=(), empty_bbox=False):
self.options = options
self.empty_bbox = empty_bbox
def __len__(self):
return 10
def get_example(self, i):
img = np.random.randint(0, 256, size=(3, 48, 64))
if self.empty_bbox:
n_bbox = 0
else:
n_bbox = np.random.randint(10, 20)
bbox = generate_random_bbox(n_bbox, (48, 64), 5, 20)
label = np.random.randint(0, 20, size=n_bbox).astype(np.int32)
return (img, bbox, label) + self.options
class InvalidSampleSizeDataset(BboxDataset):
def get_example(self, i):
img, bbox, label = super(
InvalidSampleSizeDataset, self).get_example(i)[:3]
return img, bbox
class InvalidImageDataset(BboxDataset):
def get_example(self, i):
img, bbox, label = super(InvalidImageDataset, self).get_example(i)[:3]
return img[0], bbox, label
class InvalidBboxDataset(BboxDataset):
def get_example(self, i):
img, bbox, label = super(InvalidBboxDataset, self).get_example(i)[:3]
bbox += 1000
return img, bbox, label
class InvalidLabelDataset(BboxDataset):
def get_example(self, i):
img, bbox, label = super(InvalidLabelDataset, self).get_example(i)[:3]
label += 1000
return img, bbox, label
class MismatchLengthDataset(BboxDataset):
def get_example(self, i):
img, bbox, label = super(
MismatchLengthDataset, self).get_example(i)[:3]
return img, bbox, label[1:]
@testing.parameterize(
{'dataset': BboxDataset(), 'valid': True},
{'dataset': BboxDataset(empty_bbox=True), 'valid': True},
{'dataset': BboxDataset(('option',)), 'valid': True},
{'dataset': InvalidSampleSizeDataset(), 'valid': False},
{'dataset': InvalidImageDataset(), 'valid': False},
{'dataset': InvalidBboxDataset(), 'valid': False},
{'dataset': InvalidLabelDataset(), 'valid': False},
{'dataset': MismatchLengthDataset(), 'valid': False},
)
class TestAssertIsBboxDataset(unittest.TestCase):
def test_assert_is_bbox_dataset(self):
if self.valid:
assert_is_bbox_dataset(self.dataset, 20)
else:
with self.assertRaises(AssertionError):
assert_is_bbox_dataset(self.dataset, 20)
testing.run_module(__name__, __file__)
| 28.304348 | 78 | 0.656298 | 302 | 2,604 | 5.480132 | 0.228477 | 0.066465 | 0.065257 | 0.061631 | 0.306344 | 0.273112 | 0.227795 | 0.174018 | 0.138973 | 0.138973 | 0 | 0.023164 | 0.220814 | 2,604 | 91 | 79 | 28.615385 | 0.792509 | 0 | 0 | 0.222222 | 1 | 0 | 0.039171 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.142857 | false | 0 | 0.095238 | 0.015873 | 0.460317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
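Each parameterized case in the test file above breaks exactly one invariant that `assert_is_bbox_dataset` enforces (sample size, bbox bounds, label range, matching lengths). A hedged, numpy-free sketch of those invariants for a single sample might look like this; the function name and the exact checks are illustrative, not chainercv's implementation:

```python
def check_bbox_sample(sample, img_shape=(48, 64), n_class=20):
    """Validate one (img, bbox, label) sample; True on success, AssertionError on failure."""
    assert len(sample) >= 3, 'sample must contain at least img, bbox and label'
    img, bbox, label = sample[:3]
    assert len(bbox) == len(label), 'bbox/label length mismatch'
    h, w = img_shape
    for (y_min, x_min, y_max, x_max), lb in zip(bbox, label):
        # Boxes must lie inside the image and be well-ordered.
        assert 0 <= y_min <= y_max <= h, 'bbox outside image vertically'
        assert 0 <= x_min <= x_max <= w, 'bbox outside image horizontally'
        assert 0 <= lb < n_class, 'label out of range'
    return True

if __name__ == "__main__":
    print(check_bbox_sample(('img', [(0, 0, 10, 10)], [3])))
```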
f78da1263e700a0f21ebec44c019c94ee9c11482 | 3,002 | py | Python | seahub/utils/http.py | Xandersoft/seahub | f75f238b3e0a907e8a8003f419e367fa36e992e7 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | seahub/utils/http.py | Xandersoft/seahub | f75f238b3e0a907e8a8003f419e367fa36e992e7 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | seahub/utils/http.py | Xandersoft/seahub | f75f238b3e0a907e8a8003f419e367fa36e992e7 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2012-2016 Seafile Ltd.
from __future__ import unicode_literals
import unicodedata
import urlparse
import json
from functools import wraps
from django.http import HttpResponse, HttpResponseBadRequest, HttpResponseForbidden
class _HTTPException(Exception):
def __init__(self, message=''):
self.message = message
def __str__(self):
return '%s: %s' % (self.__class__.__name__, self.message)
class BadRequestException(_HTTPException):
pass
class RequestForbbiddenException(_HTTPException):
pass
JSON_CONTENT_TYPE = 'application/json; charset=utf-8'
def json_response(func):
@wraps(func)
def wrapped(*a, **kw):
try:
result = func(*a, **kw)
except BadRequestException, e:
return HttpResponseBadRequest(e.message)
except RequestForbbiddenException, e:
return HttpResponseForbidden(e.message)
if isinstance(result, HttpResponse):
return result
else:
return HttpResponse(json.dumps(result), status=200,
content_type=JSON_CONTENT_TYPE)
return wrapped
def int_param(request, key):
v = request.GET.get(key, None)
if not v:
raise BadRequestException()
try:
return int(v)
except ValueError:
raise BadRequestException()
def is_safe_url(url, host=None):
"""
https://github.com/django/django/blob/fc6d147a63f89795dbcdecb0559256470fff4380/django/utils/http.py
Return ``True`` if the url is a safe redirection (i.e. it doesn't point to
a different host and uses a safe scheme).
Always returns ``False`` on an empty url.
"""
if url is not None:
url = url.strip()
if not url:
return False
# Chrome treats \ completely as / in paths but it could be part of some
# basic auth credentials so we need to check both URLs.
return _is_safe_url(url, host) and _is_safe_url(url.replace('\\', '/'), host)
def _is_safe_url(url, host):
# Chrome considers any URL with more than two slashes to be absolute, but
# urlparse is not so flexible. Treat any url with three slashes as unsafe.
if url.startswith('///'):
return False
url_info = urlparse.urlparse(url)
# Forbid URLs like http:///example.com - with a scheme, but without a hostname.
# In that URL, example.com is not the hostname but, a path component. However,
# Chrome will still consider example.com to be the hostname, so we must not
# allow this syntax.
if not url_info.netloc and url_info.scheme:
return False
# Forbid URLs that start with control characters. Some browsers (like
# Chrome) ignore quite a few control characters at the start of a
# URL and might consider the URL as scheme relative.
if unicodedata.category(url[0])[0] == 'C':
return False
return ((not url_info.netloc or url_info.netloc == host) and
(not url_info.scheme or url_info.scheme in ['http', 'https']))
| 35.317647 | 103 | 0.676549 | 396 | 3,002 | 5.005051 | 0.414141 | 0.024723 | 0.018163 | 0.024218 | 0.027245 | 0.019173 | 0 | 0 | 0 | 0 | 0 | 0.017016 | 0.236509 | 3,002 | 84 | 104 | 35.738095 | 0.847731 | 0.24517 | 0 | 0.181818 | 0 | 0 | 0.026958 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.036364 | 0.109091 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
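The `seahub/utils/http.py` file above targets Python 2 (`urlparse`, `except ..., e` syntax). On Python 3 the same redirect-safety check can be sketched with `urllib.parse`; this is a simplified re-implementation for illustration only, not a drop-in replacement (Django's maintained version is `django.utils.http.url_has_allowed_host_and_scheme`):

```python
import unicodedata
from urllib.parse import urlparse

def is_safe_redirect(url, host=None):
    """Simplified Python 3 port: allow only same-host (or relative) http(s) URLs."""
    if url is not None:
        url = url.strip()
    if not url:
        return False
    # Browsers treat '\' as '/' in paths, so check both forms.
    return all(_is_safe(u, host) for u in (url, url.replace('\\', '/')))

def _is_safe(url, host):
    # Chrome treats any URL with three leading slashes as absolute.
    if url.startswith('///'):
        return False
    info = urlparse(url)
    # A scheme without a hostname (http:///example.com) parses ambiguously.
    if not info.netloc and info.scheme:
        return False
    # Leading control characters can make a URL scheme-relative in some browsers.
    if unicodedata.category(url[0])[0] == 'C':
        return False
    return ((not info.netloc or info.netloc == host) and
            (not info.scheme or info.scheme in ('http', 'https')))
```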
f7904ac31330990ac63a4b3068ea84654cf9b168 | 6,172 | py | Python | pextant/sextant.py | norheim/pextant | f4235719279c0e6f178ae1e0f8b1ea3346533915 | [
"MIT"
] | null | null | null | pextant/sextant.py | norheim/pextant | f4235719279c0e6f178ae1e0f8b1ea3346533915 | [
"MIT"
] | 1 | 2019-12-03T03:52:41.000Z | 2019-12-04T14:50:36.000Z | pextant/sextant.py | norheim/pextant | f4235719279c0e6f178ae1e0f8b1ea3346533915 | [
"MIT"
] | 1 | 2019-12-03T02:37:57.000Z | 2019-12-03T02:37:57.000Z | from flask_settings import GEOTIFF_FULL_PATH
import sys
import traceback
sys.path.append('../')
import numpy as np
import json
from datetime import timedelta
from functools import update_wrapper
from pextant.EnvironmentalModel import GDALMesh
from pextant.explorers import Astronaut
from pextant.analysis.loadWaypoints import JSONloader
from pextant.lib.geoshapely import GeoPolygon, LAT_LONG
from pextant.solvers.astarMesh import astarSolver
from flask import Flask
from flask import make_response, request, current_app
app = Flask(__name__)
def crossdomain(origin=None, methods=None, headers=None,
max_age=21600, attach_to_all=True,
automatic_options=True):
if methods is not None:
methods = ', '.join(sorted(x.upper() for x in methods))
if headers is not None and not isinstance(headers, basestring):
headers = ', '.join(x.upper() for x in headers)
if not isinstance(origin, basestring):
origin = ', '.join(origin)
if isinstance(max_age, timedelta):
max_age = max_age.total_seconds()
def get_methods():
if methods is not None:
return methods
options_resp = current_app.make_default_options_response()
return options_resp.headers['allow']
def decorator(f):
def wrapped_function(*args, **kwargs):
if automatic_options and request.method == 'OPTIONS':
resp = current_app.make_default_options_response()
else:
resp = make_response(f(*args, **kwargs))
if not attach_to_all and request.method != 'OPTIONS':
return resp
h = resp.headers
h['Access-Control-Allow-Origin'] = origin
h['Access-Control-Allow-Methods'] = get_methods()
h['Access-Control-Max-Age'] = str(max_age)
if headers is not None:
h['Access-Control-Allow-Headers'] = headers
return resp
f.provide_automatic_options = False
return update_wrapper(wrapped_function, f)
return decorator
def main(argv):
print 'STARTING SEXTANT'
geotiff_full_path = ""
try:
geotiff_full_path = argv[0]
except IndexError:
# print 'Syntax is "sextant <inputfile>"'
pass
if not geotiff_full_path or geotiff_full_path == 'sextant:app':
geotiff_full_path = GEOTIFF_FULL_PATH
print geotiff_full_path
gdal_mesh = GDALMesh(geotiff_full_path)
explorer = Astronaut(80)
solver, waypoints, environmental_model = None, None, None
@app.route('/test', methods=['GET', 'POST'])
@crossdomain(origin='*')
def test():
print str(request)
return json.dumps({'test':'test'})
@app.route('/setwaypoints', methods=['GET', 'POST'])
@crossdomain(origin='*')
def set_waypoints():
try:
global solver, waypoints, environmental_model
print('in set waypoints')
request_data = request.get_json(force=True)
xp_json = request_data['xp_json']
json_loader = JSONloader(xp_json['sequence'])
print 'loaded xp json'
waypoints = json_loader.get_waypoints()
print 'gdal mesh is built from %s' % str(geotiff_full_path)
environmental_model = gdal_mesh.loadSubSection(waypoints.geoEnvelope(), cached=True)
solver = astarSolver(environmental_model, explorer, optimize_on='Energy')
print('loaded fine')
return json.dumps({'loaded': True})
except Exception, e:
traceback.print_exc()
response = {'error': str(e),
'status_code': 400}
return response
@app.route('/solve', methods=['GET', 'POST'])
@crossdomain(origin='*')
def solve():
global solver, waypoints, environmental_model
print 'in solve'
request_data = request.get_json(force=True)
return_type = request_data['return']
if 'xp_json' in request_data:
xp_json = request_data['xp_json']
json_loader = JSONloader(xp_json['sequence'])
waypoints = json_loader.get_waypoints()
print(waypoints.to(LAT_LONG))
environmental_model = gdal_mesh.loadSubSection(waypoints.geoEnvelope(), cached=True)
solver = astarSolver(environmental_model, explorer, optimize_on='Energy')
search_results, rawpoints, _ = solver.solvemultipoint(waypoints)
return_json = {
'latlong':[]
}
if return_type == 'segmented':
for search_result in search_results.list:
lat, lon = GeoPolygon(environmental_model.ROW_COL, *np.array(search_result.raw).transpose()).to(LAT_LONG)
return_json['latlong'].append({'latitudes': list(lat), 'longitudes': list(lon)})
else:
lat, lon = GeoPolygon(environmental_model.ROW_COL, *np.array(rawpoints).transpose()).to(LAT_LONG)
return_json['latlong'].append({'latitudes': list(lat), 'longitudes': list(lon)})
return json.dumps(return_json)
# OLD Stuff: delete
@app.route('/', methods=['GET', 'POST'])
@crossdomain(origin='*')
def get_waypoints():
print('got request')
data = request.get_json(force=True)
data_np = np.array(data['waypoints']).transpose()
#json_waypoints = JSONloader(xpjson)
waypoints = GeoPolygon(LAT_LONG, *data_np)
print waypoints.to(LAT_LONG)
environmental_model = gdal_mesh.loadSubSection(waypoints.geoEnvelope(), cached=True)
explorer = Astronaut(80)
solver = astarSolver(environmental_model, explorer, optimize_on='Energy', cached=True)
_, rawpoints, _ = solver.solvemultipoint(waypoints)
lat, lon = GeoPolygon(environmental_model.ROW_COL, *np.array(rawpoints).transpose()).to(LAT_LONG)
print((lat, lon))
return json.dumps({'latitudes': list(lat), 'longitudes': list(lon)})
if not argv or argv[0] != 'sextant:app':
app.run(host='localhost', port=5000)
# if __name__ == "__main__":
main(sys.argv[1:])
#main(['../data/maps/dem/HI_air_imagery.tif']) | 38.098765 | 121 | 0.638043 | 707 | 6,172 | 5.373409 | 0.24611 | 0.056857 | 0.039484 | 0.026323 | 0.392735 | 0.367465 | 0.305344 | 0.254277 | 0.214004 | 0.201632 | 0 | 0.004091 | 0.24757 | 6,172 | 162 | 122 | 38.098765 | 0.813953 | 0.026572 | 0 | 0.24812 | 0 | 0 | 0.085124 | 0.017491 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.007519 | 0.105263 | null | null | 0.097744 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f79637ff2082c4edbb504887dfd73b4aed28edc7 | 37,112 | py | Python | bitten/model.py | dokipen/bitten | d4d2829c63eec84bcfab05ec7035a23e85d90c00 | [
"BSD-3-Clause"
] | 1 | 2016-08-28T03:13:03.000Z | 2016-08-28T03:13:03.000Z | bitten/model.py | dokipen/bitten | d4d2829c63eec84bcfab05ec7035a23e85d90c00 | [
"BSD-3-Clause"
] | null | null | null | bitten/model.py | dokipen/bitten | d4d2829c63eec84bcfab05ec7035a23e85d90c00 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (C) 2005-2007 Christopher Lenz <cmlenz@gmx.de>
# Copyright (C) 2007 Edgewall Software
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://bitten.edgewall.org/wiki/License.
"""Model classes for objects persisted in the database."""
from trac.attachment import Attachment
from trac.db import Table, Column, Index
from trac.resource import Resource
from trac.util.text import to_unicode
import codecs
import os
__docformat__ = 'restructuredtext en'
class BuildConfig(object):
"""Representation of a build configuration."""
_schema = [
Table('bitten_config', key='name')[
Column('name'), Column('path'), Column('active', type='int'),
Column('recipe'), Column('min_rev'), Column('max_rev'),
Column('label'), Column('description')
]
]
def __init__(self, env, name=None, path=None, active=False, recipe=None,
min_rev=None, max_rev=None, label=None, description=None):
"""Initialize a new build configuration with the specified attributes.
To actually create this configuration in the database, the `insert`
method needs to be called.
"""
self.env = env
self._old_name = None
self.name = name
self.path = path or ''
self.active = bool(active)
self.recipe = recipe or ''
self.min_rev = min_rev or None
self.max_rev = max_rev or None
self.label = label or ''
self.description = description or ''
def __repr__(self):
return '<%s %r>' % (type(self).__name__, self.name)
exists = property(fget=lambda self: self._old_name is not None,
doc='Whether this configuration exists in the database')
resource = property(fget=lambda self: Resource('build', '%s' % self.name),
doc='Build Config resource identification')
def delete(self, db=None):
"""Remove a build configuration and all dependent objects from the
database."""
assert self.exists, 'Cannot delete non-existing configuration'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
for platform in list(TargetPlatform.select(self.env, self.name, db=db)):
platform.delete(db=db)
for build in list(Build.select(self.env, config=self.name, db=db)):
build.delete(db=db)
# Delete attachments
Attachment.delete_all(self.env, 'build', self.resource.id, db)
cursor = db.cursor()
cursor.execute("DELETE FROM bitten_config WHERE name=%s", (self.name,))
if handle_ta:
db.commit()
self._old_name = None
def insert(self, db=None):
"""Insert a new configuration into the database."""
assert not self.exists, 'Cannot insert existing configuration'
assert self.name, 'Configuration requires a name'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
cursor = db.cursor()
cursor.execute("INSERT INTO bitten_config (name,path,active,"
"recipe,min_rev,max_rev,label,description) "
"VALUES (%s,%s,%s,%s,%s,%s,%s,%s)",
(self.name, self.path, int(self.active or 0),
self.recipe or '', self.min_rev, self.max_rev,
self.label or '', self.description or ''))
if handle_ta:
db.commit()
self._old_name = self.name
def update(self, db=None):
"""Save changes to an existing build configuration."""
assert self.exists, 'Cannot update a non-existing configuration'
assert self.name, 'Configuration requires a name'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
cursor = db.cursor()
cursor.execute("UPDATE bitten_config SET name=%s,path=%s,active=%s,"
"recipe=%s,min_rev=%s,max_rev=%s,label=%s,"
"description=%s WHERE name=%s",
(self.name, self.path, int(self.active or 0),
self.recipe, self.min_rev, self.max_rev,
self.label, self.description, self._old_name))
if self.name != self._old_name:
cursor.execute("UPDATE bitten_platform SET config=%s "
"WHERE config=%s", (self.name, self._old_name))
cursor.execute("UPDATE bitten_build SET config=%s "
"WHERE config=%s", (self.name, self._old_name))
if handle_ta:
db.commit()
self._old_name = self.name
def fetch(cls, env, name, db=None):
"""Retrieve an existing build configuration from the database by
name.
"""
if not db:
db = env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT path,active,recipe,min_rev,max_rev,label,"
"description FROM bitten_config WHERE name=%s", (name,))
row = cursor.fetchone()
if not row:
return None
config = BuildConfig(env)
config.name = config._old_name = name
config.path = row[0] or ''
config.active = bool(row[1])
config.recipe = row[2] or ''
config.min_rev = row[3] or None
config.max_rev = row[4] or None
config.label = row[5] or ''
config.description = row[6] or ''
return config
fetch = classmethod(fetch)
def select(cls, env, include_inactive=False, db=None):
"""Retrieve existing build configurations from the database that match
the specified criteria.
"""
if not db:
db = env.get_db_cnx()
cursor = db.cursor()
if include_inactive:
cursor.execute("SELECT name,path,active,recipe,min_rev,max_rev,"
"label,description FROM bitten_config ORDER BY name")
else:
cursor.execute("SELECT name,path,active,recipe,min_rev,max_rev,"
"label,description FROM bitten_config "
"WHERE active=1 ORDER BY name")
for name, path, active, recipe, min_rev, max_rev, label, description \
in cursor:
config = BuildConfig(env, name=name, path=path or '',
active=bool(active), recipe=recipe or '',
min_rev=min_rev or None,
max_rev=max_rev or None, label=label or '',
description=description or '')
config._old_name = name
yield config
select = classmethod(select)
class TargetPlatform(object):
"""Target platform for a build configuration."""
_schema = [
Table('bitten_platform', key='id')[
Column('id', auto_increment=True), Column('config'), Column('name')
],
Table('bitten_rule', key=('id', 'propname'))[
Column('id', type='int'), Column('propname'), Column('pattern'),
Column('orderno', type='int')
]
]
def __init__(self, env, config=None, name=None):
"""Initialize a new target platform with the specified attributes.
To actually create this platform in the database, the `insert` method
needs to be called.
"""
self.env = env
self.id = None
self.config = config
self.name = name
self.rules = []
def __repr__(self):
return '<%s %r>' % (type(self).__name__, self.id)
exists = property(fget=lambda self: self.id is not None,
doc='Whether this target platform exists in the database')
def delete(self, db=None):
"""Remove the target platform from the database."""
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
for build in Build.select(self.env, platform=self.id, status=Build.PENDING, db=db):
build.delete()
cursor = db.cursor()
cursor.execute("DELETE FROM bitten_rule WHERE id=%s", (self.id,))
cursor.execute("DELETE FROM bitten_platform WHERE id=%s", (self.id,))
if handle_ta:
db.commit()
def insert(self, db=None):
"""Insert a new target platform into the database."""
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
assert not self.exists, 'Cannot insert existing target platform'
assert self.config, 'Target platform needs to be associated with a ' \
'configuration'
assert self.name, 'Target platform requires a name'
cursor = db.cursor()
cursor.execute("INSERT INTO bitten_platform (config,name) "
"VALUES (%s,%s)", (self.config, self.name))
self.id = db.get_last_id(cursor, 'bitten_platform')
if self.rules:
cursor.executemany("INSERT INTO bitten_rule VALUES (%s,%s,%s,%s)",
[(self.id, propname, pattern, idx)
for idx, (propname, pattern)
in enumerate(self.rules)])
if handle_ta:
db.commit()
def update(self, db=None):
"""Save changes to an existing target platform."""
assert self.exists, 'Cannot update a non-existing platform'
assert self.config, 'Target platform needs to be associated with a ' \
'configuration'
assert self.name, 'Target platform requires a name'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
cursor = db.cursor()
cursor.execute("UPDATE bitten_platform SET name=%s WHERE id=%s",
(self.name, self.id))
cursor.execute("DELETE FROM bitten_rule WHERE id=%s", (self.id,))
if self.rules:
cursor.executemany("INSERT INTO bitten_rule VALUES (%s,%s,%s,%s)",
[(self.id, propname, pattern, idx)
for idx, (propname, pattern)
in enumerate(self.rules)])
if handle_ta:
db.commit()
def fetch(cls, env, id, db=None):
"""Retrieve an existing target platform from the database by ID."""
if not db:
db = env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT config,name FROM bitten_platform "
"WHERE id=%s", (id,))
row = cursor.fetchone()
if not row:
return None
platform = TargetPlatform(env, config=row[0], name=row[1])
platform.id = id
cursor.execute("SELECT propname,pattern FROM bitten_rule "
"WHERE id=%s ORDER BY orderno", (id,))
for propname, pattern in cursor:
platform.rules.append((propname, pattern))
return platform
fetch = classmethod(fetch)
def select(cls, env, config=None, db=None):
"""Retrieve existing target platforms from the database that match the
specified criteria.
"""
if not db:
db = env.get_db_cnx()
where_clauses = []
if config is not None:
where_clauses.append(("config=%s", config))
if where_clauses:
where = "WHERE " + " AND ".join([wc[0] for wc in where_clauses])
else:
where = ""
cursor = db.cursor()
cursor.execute("SELECT id FROM bitten_platform %s ORDER BY name"
% where, [wc[1] for wc in where_clauses])
for (id,) in cursor:
yield TargetPlatform.fetch(env, id)
select = classmethod(select)
class Build(object):
"""Representation of a build."""
_schema = [
Table('bitten_build', key='id')[
Column('id', auto_increment=True), Column('config'), Column('rev'),
Column('rev_time', type='int'), Column('platform', type='int'),
Column('slave'), Column('started', type='int'),
Column('stopped', type='int'), Column('status', size=1),
Index(['config', 'rev', 'platform'], unique=True)
],
Table('bitten_slave', key=('build', 'propname'))[
Column('build', type='int'), Column('propname'), Column('propvalue')
]
]
# Build status codes
PENDING = 'P'
IN_PROGRESS = 'I'
SUCCESS = 'S'
FAILURE = 'F'
# Standard slave properties
IP_ADDRESS = 'ipnr'
MAINTAINER = 'owner'
OS_NAME = 'os'
OS_FAMILY = 'family'
OS_VERSION = 'version'
MACHINE = 'machine'
PROCESSOR = 'processor'
TOKEN = 'token'
def __init__(self, env, config=None, rev=None, platform=None, slave=None,
started=0, stopped=0, rev_time=0, status=PENDING):
"""Initialize a new build with the specified attributes.
To actually create this build in the database, the `insert` method needs
to be called.
"""
self.env = env
self.id = None
self.config = config
self.rev = rev and str(rev) or None
self.platform = platform
self.slave = slave
self.started = started or 0
self.stopped = stopped or 0
self.rev_time = rev_time
self.status = status
self.slave_info = {}
def __repr__(self):
return '<%s %r>' % (type(self).__name__, self.id)
exists = property(fget=lambda self: self.id is not None,
doc='Whether this build exists in the database')
completed = property(fget=lambda self: self.status != Build.IN_PROGRESS,
doc='Whether the build has been completed')
successful = property(fget=lambda self: self.status == Build.SUCCESS,
doc='Whether the build was successful')
resource = property(fget=lambda self: Resource('build', '%s/%s' % (self.config, self.id)),
doc='Build resource identification')
def delete(self, db=None):
"""Remove the build from the database."""
assert self.exists, 'Cannot delete a non-existing build'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
for step in list(BuildStep.select(self.env, build=self.id)):
step.delete(db=db)
# Delete attachments
Attachment.delete_all(self.env, 'build', self.resource.id, db)
cursor = db.cursor()
cursor.execute("DELETE FROM bitten_slave WHERE build=%s", (self.id,))
cursor.execute("DELETE FROM bitten_build WHERE id=%s", (self.id,))
if handle_ta:
db.commit()
def insert(self, db=None):
"""Insert a new build into the database."""
assert not self.exists, 'Cannot insert an existing build'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
assert self.config and self.rev and self.rev_time and self.platform
assert self.status in (self.PENDING, self.IN_PROGRESS, self.SUCCESS,
self.FAILURE)
if not self.slave:
assert self.status == self.PENDING
cursor = db.cursor()
cursor.execute("INSERT INTO bitten_build (config,rev,rev_time,platform,"
"slave,started,stopped,status) "
"VALUES (%s,%s,%s,%s,%s,%s,%s,%s)",
(self.config, self.rev, int(self.rev_time),
self.platform, self.slave or '', self.started or 0,
self.stopped or 0, self.status))
self.id = db.get_last_id(cursor, 'bitten_build')
if self.slave_info:
cursor.executemany("INSERT INTO bitten_slave VALUES (%s,%s,%s)",
[(self.id, name, value) for name, value
in self.slave_info.items()])
if handle_ta:
db.commit()
def update(self, db=None):
"""Save changes to an existing build."""
assert self.exists, 'Cannot update a non-existing build'
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
assert self.config and self.rev
assert self.status in (self.PENDING, self.IN_PROGRESS, self.SUCCESS,
self.FAILURE)
if not self.slave:
assert self.status == self.PENDING
cursor = db.cursor()
cursor.execute("UPDATE bitten_build SET slave=%s,started=%s,"
"stopped=%s,status=%s WHERE id=%s",
(self.slave or '', self.started or 0,
self.stopped or 0, self.status, self.id))
cursor.execute("DELETE FROM bitten_slave WHERE build=%s", (self.id,))
if self.slave_info:
cursor.executemany("INSERT INTO bitten_slave VALUES (%s,%s,%s)",
[(self.id, name, value) for name, value
in self.slave_info.items()])
if handle_ta:
db.commit()
def fetch(cls, env, id, db=None):
"""Retrieve an existing build from the database by ID."""
if not db:
db = env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT config,rev,rev_time,platform,slave,started,"
"stopped,status FROM bitten_build WHERE id=%s", (id,))
row = cursor.fetchone()
if not row:
return None
build = Build(env, config=row[0], rev=row[1], rev_time=int(row[2]),
platform=int(row[3]), slave=row[4],
started=row[5] and int(row[5]) or 0,
stopped=row[6] and int(row[6]) or 0, status=row[7])
build.id = int(id)
cursor.execute("SELECT propname,propvalue FROM bitten_slave "
"WHERE build=%s", (id,))
for propname, propvalue in cursor:
build.slave_info[propname] = propvalue
return build
fetch = classmethod(fetch)
def select(cls, env, config=None, rev=None, platform=None, slave=None,
status=None, db=None, min_rev_time=None, max_rev_time=None):
"""Retrieve existing builds from the database that match the specified
criteria.
"""
if not db:
db = env.get_db_cnx()
where_clauses = []
if config is not None:
where_clauses.append(("config=%s", config))
if rev is not None:
where_clauses.append(("rev=%s", str(rev)))
if platform is not None:
where_clauses.append(("platform=%s", platform))
if slave is not None:
where_clauses.append(("slave=%s", slave))
if status is not None:
where_clauses.append(("status=%s", status))
if min_rev_time is not None:
where_clauses.append(("rev_time>=%s", min_rev_time))
if max_rev_time is not None:
where_clauses.append(("rev_time<=%s", max_rev_time))
if where_clauses:
where = "WHERE " + " AND ".join([wc[0] for wc in where_clauses])
else:
where = ""
cursor = db.cursor()
cursor.execute("SELECT id FROM bitten_build %s "
"ORDER BY rev_time DESC,config,slave"
% where, [wc[1] for wc in where_clauses])
for (id,) in cursor:
yield Build.fetch(env, id)
select = classmethod(select)
class BuildStep(object):
"""Represents an individual step of an executed build."""
_schema = [
Table('bitten_step', key=('build', 'name'))[
Column('build', type='int'), Column('name'), Column('description'),
Column('status', size=1), Column('started', type='int'),
Column('stopped', type='int')
],
Table('bitten_error', key=('build', 'step', 'orderno'))[
Column('build', type='int'), Column('step'), Column('message'),
Column('orderno', type='int')
]
]
# Step status codes
SUCCESS = 'S'
FAILURE = 'F'
def __init__(self, env, build=None, name=None, description=None,
status=None, started=None, stopped=None):
"""Initialize a new build step with the specified attributes.
To actually create this build step in the database, the `insert` method
needs to be called.
"""
self.env = env
self.build = build
self.name = name
self.description = description
self.status = status
self.started = started
self.stopped = stopped
self.errors = []
self._exists = False
exists = property(fget=lambda self: self._exists,
doc='Whether this build step exists in the database')
successful = property(fget=lambda self: self.status == BuildStep.SUCCESS,
doc='Whether the build step was successful')
def delete(self, db=None):
"""Remove the build step from the database."""
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
for log in list(BuildLog.select(self.env, build=self.build,
step=self.name, db=db)):
log.delete(db=db)
for report in list(Report.select(self.env, build=self.build,
step=self.name, db=db)):
report.delete(db=db)
cursor = db.cursor()
cursor.execute("DELETE FROM bitten_step WHERE build=%s AND name=%s",
(self.build, self.name))
cursor.execute("DELETE FROM bitten_error WHERE build=%s AND step=%s",
(self.build, self.name))
if handle_ta:
db.commit()
self._exists = False
def insert(self, db=None):
"""Insert a new build step into the database."""
if not db:
db = self.env.get_db_cnx()
handle_ta = True
else:
handle_ta = False
assert self.build and self.name
assert self.status in (self.SUCCESS, self.FAILURE)
cursor = db.cursor()
cursor.execute("INSERT INTO bitten_step (build,name,description,status,"
"started,stopped) VALUES (%s,%s,%s,%s,%s,%s)",
(self.build, self.name, self.description or '',
self.status, self.started or 0, self.stopped or 0))
if self.errors:
cursor.executemany("INSERT INTO bitten_error (build,step,message,"
"orderno) VALUES (%s,%s,%s,%s)",
[(self.build, self.name, message, idx)
for idx, message in enumerate(self.errors)])
if handle_ta:
db.commit()
self._exists = True
    def fetch(cls, env, build, name, db=None):
        """Retrieve an existing build from the database by build ID and step
        name."""
        if not db:
            db = env.get_db_cnx()

        cursor = db.cursor()
        cursor.execute("SELECT description,status,started,stopped "
                       "FROM bitten_step WHERE build=%s AND name=%s",
                       (build, name))
        row = cursor.fetchone()
        if not row:
            return None

        step = BuildStep(env, build, name, row[0] or '', row[1],
                         row[2] and int(row[2]), row[3] and int(row[3]))
        step._exists = True
        cursor.execute("SELECT message FROM bitten_error WHERE build=%s "
                       "AND step=%s ORDER BY orderno", (build, name))
        for row in cursor:
            step.errors.append(row[0] or '')
        return step

    fetch = classmethod(fetch)

    def select(cls, env, build=None, name=None, status=None, db=None):
        """Retrieve existing build steps from the database that match the
        specified criteria.
        """
        if not db:
            db = env.get_db_cnx()

        assert status in (None, BuildStep.SUCCESS, BuildStep.FAILURE)
        where_clauses = []
        if build is not None:
            where_clauses.append(("build=%s", build))
        if name is not None:
            where_clauses.append(("name=%s", name))
        if status is not None:
            where_clauses.append(("status=%s", status))
        if where_clauses:
            where = "WHERE " + " AND ".join([wc[0] for wc in where_clauses])
        else:
            where = ""

        cursor = db.cursor()
        cursor.execute("SELECT build,name FROM bitten_step %s ORDER BY stopped"
                       % where, [wc[1] for wc in where_clauses])
        for build, name in cursor:
            yield BuildStep.fetch(env, build, name, db=db)

    select = classmethod(select)
class BuildLog(object):
    """Represents a build log."""

    _schema = [
        Table('bitten_log', key='id')[
            Column('id', auto_increment=True), Column('build', type='int'),
            Column('step'), Column('generator'), Column('orderno', type='int'),
            Column('filename'),
            Index(['build', 'step'])
        ],
    ]

    # Message levels
    DEBUG = 'D'
    INFO = 'I'
    WARNING = 'W'
    ERROR = 'E'
    UNKNOWN = ''

    LEVELS_SUFFIX = '.levels'

    def __init__(self, env, build=None, step=None, generator=None,
                 orderno=None, filename=None):
        """Initialize a new build log with the specified attributes.

        To actually create this build log in the database, the `insert` method
        needs to be called.
        """
        self.env = env
        self.id = None
        self.build = build
        self.step = step
        self.generator = generator or ''
        self.orderno = orderno and int(orderno) or 0
        self.filename = filename or None
        self.messages = []
        self.logs_dir = env.config.get('bitten', 'logs_dir', 'log/bitten')
        if not os.path.isabs(self.logs_dir):
            self.logs_dir = os.path.join(env.path, self.logs_dir)
        if not os.path.exists(self.logs_dir):
            os.makedirs(self.logs_dir)

    exists = property(fget=lambda self: self.id is not None,
                      doc='Whether this build log exists in the database')
    def get_log_file(self, filename):
        """Returns the full path to the log file"""
        if filename != os.path.basename(filename):
            raise ValueError("Filename may not contain path: %s" % (filename,))
        return os.path.join(self.logs_dir, filename)

    def delete(self, db=None):
        """Remove the build log from the database."""
        assert self.exists, 'Cannot delete a non-existing build log'
        if not db:
            db = self.env.get_db_cnx()
            handle_ta = True
        else:
            handle_ta = False

        if self.filename:
            log_file = self.get_log_file(self.filename)
            if os.path.exists(log_file):
                try:
                    self.env.log.debug("Deleting log file: %s" % log_file)
                    os.remove(log_file)
                except Exception, e:
                    self.env.log.warning("Error removing log file %s: %s"
                                         % (log_file, e))
            level_file = log_file + self.LEVELS_SUFFIX
            if os.path.exists(level_file):
                try:
                    self.env.log.debug("Deleting level file: %s" % level_file)
                    os.remove(level_file)
                except Exception, e:
                    self.env.log.warning("Error removing level file %s: %s" \
                                         % (level_file, e))

        cursor = db.cursor()
        cursor.execute("DELETE FROM bitten_log WHERE id=%s", (self.id,))
        if handle_ta:
            db.commit()
        self.id = None
    def insert(self, db=None):
        """Insert a new build log into the database."""
        if not db:
            db = self.env.get_db_cnx()
            handle_ta = True
        else:
            handle_ta = False

        assert self.build and self.step

        cursor = db.cursor()
        cursor.execute("INSERT INTO bitten_log (build,step,generator,orderno) "
                       "VALUES (%s,%s,%s,%s)", (self.build, self.step,
                       self.generator, self.orderno))
        id = db.get_last_id(cursor, 'bitten_log')
        log_file = "%s.log" % (id,)
        cursor.execute("UPDATE bitten_log SET filename=%s WHERE id=%s",
                       (log_file, id))
        if self.messages:
            log_file_name = self.get_log_file(log_file)
            level_file_name = log_file_name + self.LEVELS_SUFFIX
            codecs.open(log_file_name, "wb", "UTF-8").writelines(
                [to_unicode(msg[1] + "\n") for msg in self.messages])
            codecs.open(level_file_name, "wb", "UTF-8").writelines(
                [to_unicode(msg[0] + "\n") for msg in self.messages])
        if handle_ta:
            db.commit()
        self.id = id
    def fetch(cls, env, id, db=None):
        """Retrieve an existing build from the database by ID."""
        if not db:
            db = env.get_db_cnx()

        cursor = db.cursor()
        cursor.execute("SELECT build,step,generator,orderno,filename "
                       "FROM bitten_log WHERE id=%s", (id,))
        row = cursor.fetchone()
        if not row:
            return None

        log = BuildLog(env, int(row[0]), row[1], row[2], row[3], row[4])
        log.id = id
        if log.filename:
            log_filename = log.get_log_file(log.filename)
            if os.path.exists(log_filename):
                log_lines = codecs.open(log_filename, "rb", "UTF-8").readlines()
            else:
                log_lines = []
            level_filename = log.get_log_file(log.filename + cls.LEVELS_SUFFIX)
            if os.path.exists(level_filename):
                log_levels = dict(enumerate(
                    codecs.open(level_filename, "rb", "UTF-8").readlines()))
            else:
                log_levels = {}
            log.messages = [
                (log_levels.get(line_num, BuildLog.UNKNOWN).rstrip("\n"),
                 line.rstrip("\n"))
                for line_num, line in enumerate(log_lines)]
        else:
            log.messages = []
        return log

    fetch = classmethod(fetch)
    def select(cls, env, build=None, step=None, generator=None, db=None):
        """Retrieve existing build logs from the database that match the
        specified criteria.
        """
        if not db:
            db = env.get_db_cnx()

        where_clauses = []
        if build is not None:
            where_clauses.append(("build=%s", build))
        if step is not None:
            where_clauses.append(("step=%s", step))
        if generator is not None:
            where_clauses.append(("generator=%s", generator))
        if where_clauses:
            where = "WHERE " + " AND ".join([wc[0] for wc in where_clauses])
        else:
            where = ""

        cursor = db.cursor()
        cursor.execute("SELECT id FROM bitten_log %s ORDER BY orderno"
                       % where, [wc[1] for wc in where_clauses])
        for (id, ) in cursor:
            yield BuildLog.fetch(env, id, db=db)

    select = classmethod(select)
class Report(object):
    """Represents a generated report."""

    _schema = [
        Table('bitten_report', key='id')[
            Column('id', auto_increment=True), Column('build', type='int'),
            Column('step'), Column('category'), Column('generator'),
            Index(['build', 'step', 'category'])
        ],
        Table('bitten_report_item', key=('report', 'item', 'name'))[
            Column('report', type='int'), Column('item', type='int'),
            Column('name'), Column('value')
        ]
    ]

    def __init__(self, env, build=None, step=None, category=None,
                 generator=None):
        """Initialize a new report with the specified attributes.

        To actually create this build log in the database, the `insert` method
        needs to be called.
        """
        self.env = env
        self.id = None
        self.build = build
        self.step = step
        self.category = category
        self.generator = generator or ''
        self.items = []

    exists = property(fget=lambda self: self.id is not None,
                      doc='Whether this report exists in the database')
    def delete(self, db=None):
        """Remove the report from the database."""
        assert self.exists, 'Cannot delete a non-existing report'
        if not db:
            db = self.env.get_db_cnx()
            handle_ta = True
        else:
            handle_ta = False

        cursor = db.cursor()
        cursor.execute("DELETE FROM bitten_report_item WHERE report=%s",
                       (self.id,))
        cursor.execute("DELETE FROM bitten_report WHERE id=%s", (self.id,))
        if handle_ta:
            db.commit()
        self.id = None

    def insert(self, db=None):
        """Insert a new build log into the database."""
        if not db:
            db = self.env.get_db_cnx()
            handle_ta = True
        else:
            handle_ta = False

        assert self.build and self.step and self.category

        # Enforce uniqueness of build-step-category.
        # This should be done by the database, but the DB schema helpers in
        # Trac currently don't support UNIQUE() constraints
        assert not list(Report.select(self.env, build=self.build,
                                      step=self.step, category=self.category,
                                      db=db)), 'Report already exists'

        cursor = db.cursor()
        cursor.execute("INSERT INTO bitten_report "
                       "(build,step,category,generator) VALUES (%s,%s,%s,%s)",
                       (self.build, self.step, self.category, self.generator))
        id = db.get_last_id(cursor, 'bitten_report')
        for idx, item in enumerate([item for item in self.items if item]):
            cursor.executemany("INSERT INTO bitten_report_item "
                               "(report,item,name,value) VALUES (%s,%s,%s,%s)",
                               [(id, idx, key, value) for key, value
                                in item.items()])
        if handle_ta:
            db.commit()
        self.id = id
    def fetch(cls, env, id, db=None):
        """Retrieve an existing build from the database by ID."""
        if not db:
            db = env.get_db_cnx()

        cursor = db.cursor()
        cursor.execute("SELECT build,step,category,generator "
                       "FROM bitten_report WHERE id=%s", (id,))
        row = cursor.fetchone()
        if not row:
            return None

        report = Report(env, int(row[0]), row[1], row[2] or '', row[3] or '')
        report.id = id
        cursor.execute("SELECT item,name,value FROM bitten_report_item "
                       "WHERE report=%s ORDER BY item", (id,))
        items = {}
        for item, name, value in cursor:
            items.setdefault(item, {})[name] = value
        report.items = items.values()
        return report

    fetch = classmethod(fetch)
    def select(cls, env, config=None, build=None, step=None, category=None,
               db=None):
        """Retrieve existing reports from the database that match the specified
        criteria.
        """
        where_clauses = []
        joins = []
        if config is not None:
            where_clauses.append(("config=%s", config))
            joins.append("INNER JOIN bitten_build ON (bitten_build.id=build)")
        if build is not None:
            where_clauses.append(("build=%s", build))
        if step is not None:
            where_clauses.append(("step=%s", step))
        if category is not None:
            where_clauses.append(("category=%s", category))
        if where_clauses:
            where = "WHERE " + " AND ".join([wc[0] for wc in where_clauses])
        else:
            where = ""

        if not db:
            db = env.get_db_cnx()
        cursor = db.cursor()
        cursor.execute("SELECT bitten_report.id FROM bitten_report %s %s "
                       "ORDER BY category" % (' '.join(joins), where),
                       [wc[1] for wc in where_clauses])
        for (id, ) in cursor:
            yield Report.fetch(env, id, db=db)

    select = classmethod(select)
schema = BuildConfig._schema + TargetPlatform._schema + Build._schema + \
         BuildStep._schema + BuildLog._schema + Report._schema

schema_version = 10
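The `select` methods above all build their SQL dynamically from the non-`None` criteria. A minimal standalone sketch of that pattern (not part of Bitten; `build_where` is an illustrative name) — collect `("column=%s", value)` pairs, join the fragments for the SQL text, and keep the values as the parameter list for `cursor.execute()`:

```python
def build_where(criteria):
    # keep only the criteria that were actually supplied
    clauses = [(col + "=%s", val) for col, val in criteria.items()
               if val is not None]
    if clauses:
        where = "WHERE " + " AND ".join(c for c, _ in clauses)
    else:
        where = ""
    # SQL fragment plus the matching positional parameters
    return where, [v for _, v in clauses]
```

Parameterizing the values this way (rather than interpolating them into the SQL string) is what keeps these queries safe from injection.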
f7979a1edf5e664d9fd5011a9f7390b351722d3b | 834 | py | Python | tests/profiles/fontval_test.py | kennethormandy/fontbakery | ec569215cd7919e125089bd6f65346afa9e75546 | ["Apache-2.0"] | 1 | 2020-06-14T17:13:59.000Z | 2020-06-14T17:13:59.000Z
import os
import pytest
from fontbakery.utils import TEST_FILE
from fontbakery.checkrunner import ERROR
def test_check_fontvalidator():
    """ MS Font Validator checks """
    from fontbakery.profiles.fontval import com_google_fonts_check_fontvalidator as check
    font = TEST_FILE("mada/Mada-Regular.ttf")

    # we want to run all FValidator checks only once,
    # so here we cache all results:
    fval_results = list(check(font))

    # Then we make sure that there wasn't an ERROR
    # which would mean FontValidator is not properly installed:
    for status, message in fval_results:
        assert status != ERROR

    # Simulate FontVal missing.
    old_path = os.environ["PATH"]
    os.environ["PATH"] = ""
    with pytest.raises(OSError) as _:
        status, message = list(check(font))[-1]
        assert status == ERROR

    os.environ["PATH"] = old_path
f797a2004904bea8641ef96760d4f8b68d968963 | 3,662 | py | Python | app/views.py | sinantan/TechRSS | f07d21b5553534ef6ecb6da6dc89524a8bbdb505 | ["MIT"] | 3 | 2019-10-26T13:31:21.000Z | 2020-02-26T20:46:35.000Z
from run import app
from functools import wraps
from flask import render_template,flash,redirect,logging,session,url_for,request
from .models.database import user_register, user_login, get_feed, get_user_info, update_feed, change_password
# User-login decorator. This structure is the same for all decorators.
def login_required(f):  # we can use this on every page that requires login
    @wraps(f)
    def decorated_function(*args, **kwargs):
        if "logged_in" in session:
            return f(*args, **kwargs)
        else:
            flash("Lütfen giriş yap.", "warning")
            return redirect(url_for("auth"))
    return decorated_function


def is_logged(f):  # a logged-in user should not reach login and register
    @wraps(f)
    def decorated_function(*args, **kwargs):
        if "logged_in" in session:
            return redirect(url_for("index"))
        else:
            return f(*args, **kwargs)
    return decorated_function


@app.route("/")
@login_required
def index():
    topics = get_feed()
    return render_template("index.html", topics=topics)


@app.route("/auth", methods=["GET", "POST"])
@is_logged
def auth():
    if request.method == "POST":
        if request.form.get("login") == "Giriş Yap":
            email = request.form["user_email"]
            password = request.form["user_password"]
            status = user_login(email, password)
            if status == True:
                return redirect(url_for("auth"))
            elif status == "Yanlış şifre":
                flash("Hatalı şifre girdiniz.", "danger")
                return redirect(url_for("auth"))
            elif status == "Hesap yok":
                flash("Böyle bir hesap yok.", "warning")
                return redirect(url_for("auth"))
        elif request.form.get("register") == "Kayıt Ol":
            username = request.form["user_name"]
            email = request.form["user_email"]
            password = request.form["user_password"]
            if user_register(username, email, password):
                flash("Başarıyla kayıt olundu! Giriş yapabilirsin.", "success")
                return redirect(url_for("auth"))
            else:
                flash("Bir hata meydana geldi.", "warning")
                return redirect(url_for("auth"))
    else:
        return render_template("auth.html")


@app.route("/settings", methods=["POST", "GET"])
@login_required
def settings():
    if request.method == "POST":
        if request.form.get("save_feed_settings") != None:
            # unchecked = None, checked = on
            webtekno_status = True if request.form.get("webtekno") != None else False
            technopat_status = True if request.form.get("technopat") != None else False
            shiftdelete_status = True if request.form.get("shiftdelete") != None else False
            donanimhaber_status = True if request.form.get("donanimhaber") != None else False
            chiptr_status = True if request.form.get("chiptr") != None else False
            query_status = update_feed(webtekno_status, technopat_status, shiftdelete_status, donanimhaber_status, chiptr_status)
            if query_status:
                flash("Ayarlarınız kaydedildi.", "success")
            else:
                flash("Bir hata meydana geldi.", "danger")
            return redirect(url_for("settings"))
        elif request.form.get("save_password_settings") != None:
            old_password = request.form["oldpassword"]
            new_password = request.form["newpassword"]
            if change_password(old_password, new_password):
                flash("Parola başarıyla değiştirildi.", "success")
            else:
                flash("Parolaları kontrol ediniz.", "warning")
            return redirect(url_for("settings"))
    else:
        selected_websites, user_email = get_user_info()
        return render_template("settings.html", selected_websites=selected_websites, user_email=user_email, all_websites=["webtekno", "shiftdelete", "chiptr", "donanimhaber", "technopat"])


@app.route("/logout")
def logout():
    session.clear()
    return redirect(url_for("auth"))
f799698de0ff8776338f8a1ec460edf6e103c58f | 703 | py | Python | tests/test_core.py | emauton/aoc2015 | f321571b623a0e7acaa173be57506e64bd32765f | ["MIT"]
from aoc2015.core import dispatch
def test_dispatch_fail(capsys):
    '''Dispatch fails properly when passed a bad day'''
    # capsys is a pytest fixture that allows asserts against stdout/stderr
    # https://docs.pytest.org/en/stable/capture.html
    dispatch(['204'])
    captured = capsys.readouterr()
    assert 'No module named aoc2015.day204' in captured.out


def test_dispatch_day0(capsys):
    '''Dispatch to "template" day0 module works'''
    # capsys is a pytest fixture that allows asserts against stdout/stderr
    # https://docs.pytest.org/en/stable/capture.html
    dispatch(['0', 'arg1', 'arg2'])
    captured = capsys.readouterr()
    assert "day0: ['arg1', 'arg2']" in captured.out
f79b68b39e1d3fc6804f9e60df51a84aec79e5e5 | 6,016 | py | Python | Utility.py | psarkozy/HWTester | 2553398f4ac8645a897b4f41bd36a21d54d2b177 | ["MIT"] | 2 | 2019-11-11T12:44:17.000Z | 2020-11-20T11:08:53.000Z
import os
from StringIO import StringIO
from zipfile import ZipFile
import subprocess
import shutil
import fcntl
import time
import signal
import imp
import sys,traceback
def dir_clean_error(function, path, excinfo):
    print 'WARNING: Ran into issues trying to remove directory:', path, str(function), str(excinfo)


def clean_dir(target_dir):
    if os.path.exists(target_dir):
        shutil.rmtree(target_dir, ignore_errors=True, onerror=dir_clean_error)


def unzip(data, target_dir, filename=None):
    submission_zipfile = ZipFile(StringIO(data))
    if not os.path.exists(target_dir):
        os.makedirs(target_dir)
    if filename:
        submission_zipfile.extract(filename, target_dir)
    else:
        submission_zipfile.extractall(target_dir)
def magic_quote_splitter(params):
    # hftester -param "hehehe" -v "path to file here":/usr/src/myapp -
    out = []
    inquotes = False
    for i, param in enumerate(params.split()):
        if inquotes:
            out[-1] += ' ' + str(param)
        else:
            out += [str(param)]
        for c in param:
            if c == '"' or c == '\'':
                inquotes = not inquotes
    print 'magic_quote_splitter: ', out
    return out
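A Python 3 sketch of the same grouping rule (the file above is Python 2, and `split_keeping_quotes` is a hypothetical name, not part of HWTester): whitespace-split tokens are glued back onto the previous element while an odd number of quote characters has been seen, so quoted arguments containing spaces survive as a single list element.

```python
def split_keeping_quotes(params):
    # whitespace-split, then re-join tokens that fall inside an open quote
    out = []
    inquotes = False
    for param in params.split():
        if inquotes:
            out[-1] += ' ' + param
        else:
            out.append(param)
        for c in param:
            if c == '"' or c == "'":
                inquotes = not inquotes
    return out
```

For example, `split_keeping_quotes('docker -v "a b":/app run')` keeps `"a b":/app` as one element instead of splitting it at the space.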
def run(command_with_arguments, input, timeout = 5.0, dockercleanup = False):
    timeout = 60
    if dockercleanup:
        cleanup_cmd = "docker rm -f hftester"
        print "Running docker cleanup:", cleanup_cmd
        os.system(cleanup_cmd)
    pipe_buffer_size = 4096
    if len(input) > pipe_buffer_size:
        stdin_buffer_file = open('stdin_buffer_file.tmp', 'w')
        stdin_buffer_file.write(input)
        stdin_buffer_file.close()
        stdin_buffer_file = open('stdin_buffer_file.tmp')
        sp = subprocess.Popen(magic_quote_splitter(command_with_arguments),
                              stdin=stdin_buffer_file, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, bufsize=pipe_buffer_size,
                              preexec_fn=os.setsid, universal_newlines=True)
    else:
        sp = subprocess.Popen(magic_quote_splitter(command_with_arguments),
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, bufsize=pipe_buffer_size,
                              preexec_fn=os.setsid)
    starttime = time.clock()
    file_flags = fcntl.fcntl(sp.stdout.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(sp.stdout.fileno(), fcntl.F_SETFL, file_flags | os.O_NDELAY)
    file_flags = fcntl.fcntl(sp.stderr.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(sp.stderr.fileno(), fcntl.F_SETFL, file_flags | os.O_NDELAY)
    extraerrList = []
    stdoutList = []
    stderrList = []
    linecount = 0
    try:
        #for line in input.split('\n'):
        #    print linecount, line
        if (len(input) <= pipe_buffer_size):
            sp.stdin.write(input)
            sp.stdin.close()
            #time.sleep(1)
            #sp.stdin.flush()
        totalOutput = 0
        while totalOutput < 4096 * 1024 and sp.poll() is None and time.clock() - starttime < timeout:
            try:
                r = sp.stdout.read()
                totalOutput = totalOutput + len(r)
                stdoutList.append(r)
            except IOError:
                pass
            except Exception, e:
                print 'stdout:', sys.exc_info()
                pass
            try:
                r = sp.stderr.read()
                totalOutput = totalOutput + len(r)
                stderrList.append(r)
            except IOError:
                pass
            except Exception, e:
                print 'stderr:', sys.exc_info()
                pass
        if sp.poll() is None:
            if totalOutput >= 4096 * 1024:
                extraerrList.append("Too much output data received, killing process!\n")
            if time.clock() - starttime >= timeout - 0.5:
                print "Process killed because of timeout"
                extraerrList.append("Maximum allowed time exceeded, killing process! First 10000 chars of input was: [%s]\n" % (input[0:min(10000, len(input))]))
            os.killpg(os.getpgid(sp.pid), signal.SIGTERM)
            os.system("sudo docker stop hftester")
            #sp.kill()
    #except ValueError:
    #    pass
    except Exception, e:
        print sys.exc_info()
        extraerrList.append("Error:" + str(e))
        joined_extraerrors = '\n'.join(extraerrList)
        print 'extraerrList:', joined_extraerrors[0:min(200, len(joined_extraerrors))]
        #raise e
    joined_extraerrors = '\n'.join(extraerrList)
    if len(stderrList) > 0 or len(extraerrList) > 0:
        stderrList = list(filter(lambda x: "read unix /var/run/docker.sock: connection reset by peer" not in x, stderrList))
        #for line in sdterrList:
        print "Finished running command sdterr :", "".join(stderrList), " extraerr:", joined_extraerrors[0:min(200, len(joined_extraerrors))]
    #sp.communicate(input=input)
    return ("".join(stdoutList), "".join(stderrList), "".join(extraerrList))
def run_firejail(command_with_arguments, input, firejail_profile_file=None, timeout=5.0):
    params = ["firejail", "--quiet"]
    if firejail_profile_file:
        params.append("--profile=%s" % firejail_profile_file)
    params.extend(command_with_arguments.split())
    return run(" ".join(params), input=input, timeout=timeout)


def run_python_docker(python_file_path, input, firejail_profile_file=None, timeout=5.0):
    pydir, sep, pyfilename = python_file_path.rpartition(os.sep)
    cmd = 'docker run -i --rm -m 400M --memory-swap -1 --ulimit cpu=%d --name hftester -v %s:/usr/src/myapp -w /usr/src/myapp python:3-alpine python %s' % (timeout, pydir, pyfilename)
    print 'Running python docker command', cmd
    #return None
    return run(cmd, input=input, timeout=timeout, dockercleanup=True)


def get_class(classpath):
    if not classpath.endswith(".py"):
        classpath = classpath + ".py"
    modname = os.path.basename(classpath).replace(".py", "")
    mod = evaluatormod = imp.load_source(modname, classpath)
    clazz = getattr(mod, modname)
    return clazz
f79bf4e8cdd9d2e6fe7f0243351b84e61c125647 | 1,432 | py | Python | wagtailsharing/tests/test_urls.py | mikiec84/wagtail-sharing | e3c338dae3327d955f058b5eb2f311d4dc0cbbf7 | ["CC0-1.0"] | 1 | 2019-02-25T21:56:56.000Z | 2019-02-25T21:56:56.000Z
from __future__ import absolute_import, unicode_literals
try:
    from importlib import reload
except ImportError:
    pass

from django.conf.urls import url
from django.test import TestCase
from mock import patch

try:
    import wagtail.core.urls as wagtail_core_urls
except ImportError:  # pragma: no cover; fallback for Wagtail <2.0
    import wagtail.wagtailcore.urls as wagtail_core_urls

import wagtailsharing.urls


class TestUrlPatterns(TestCase):
    def setUp(self):
        def test_view():
            pass  # pragma: no cover

        root_patterns = [
            url(r'^foo/$', url, name='foo'),
            url(r'^((?:[\w\-]+/)*)$', url, name='wagtail_serve'),
            url(r'^bar/$', url, name='bar'),
        ]

        self.patcher = patch.object(
            wagtail_core_urls,
            'urlpatterns',
            root_patterns
        )
        self.patcher.start()
        self.addCleanup(self.patcher.stop)

        reload(wagtailsharing.urls)
        self.urlpatterns = wagtailsharing.urls.urlpatterns

    def test_leaves_previous_urls_alone(self):
        self.assertEqual(self.urlpatterns[0].name, 'foo')

    def test_replaces_wagtail_serve(self):
        self.assertEqual(self.urlpatterns[1].name, 'wagtail_serve')
        self.assertEqual(self.urlpatterns[1].callback.__name__, 'ServeView')

    def test_leaves_later_urls_alone(self):
        self.assertEqual(self.urlpatterns[2].name, 'bar')
f7a103da5061022bae213f777c49b0abb01710f8 | 656 | py | Python | LeapYearFinderClass.py | MichaelWiciak/LeapYearFinderClass | 1bc1326f60115bddc1639ff50256888448dd9645 | ["MIT"]
class LeapYearFinder(object):
    def __init__(self):
        pass

    def findLeapYear(self, startYear, endYear):
        leapYearRecord = []
        for i in range(int(startYear), int(endYear)):
            year = i
            print(year, end="\t")
            # If year is divisible by 4 and not 100 unless it is also divisible by 400
            if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
                print(year, 'is a leap year.')
                leapYearRecord.append(str(year))
            else:
                print(year, 'is not leap year.')
        return leapYearRecord
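The test inside `findLeapYear` can be isolated as a one-line predicate — a standalone sketch of the same Gregorian rule (`is_leap_year` is an illustrative name, not part of the class above):

```python
def is_leap_year(year):
    # divisible by 4, except century years, unless also divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

So 1996 and 2000 are leap years, while 1900 and 2001 are not.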
e38ad8911f43a8dc1cf2caa5fecf9c3fdcb3062c | 1,916 | py | Python | parsetab.py | UVG-Teams/analizador-lexico-sintactico | 71ac98e11fc63c6fcba36e94d9d40f0e59b6d55f | ["MIT"]
# parsetab.py
# This file is automatically generated. Do not edit.
# pylint: disable=W,C,R
_tabversion = '3.10'
_lr_method = 'LALR'
_lr_signature = 'leftIMPLIESSIMPLIESleftANDORleftRPARENLPARENrightNEGATIONALPHABET AND IMPLIES LPAREN NEGATION OR PREDICATE RPAREN SIMPLIESexpr : expr AND exprexpr : ALPHABETexpr : expr OR exprexpr : NEGATION exprexpr : expr IMPLIES exprexpr : expr SIMPLIES exprexpr : LPAREN expr RPAREN'
_lr_action_items = {'ALPHABET':([0,3,4,5,6,7,8,],[2,2,2,2,2,2,2,]),'NEGATION':([0,3,4,5,6,7,8,],[3,3,3,3,3,3,3,]),'LPAREN':([0,3,4,5,6,7,8,],[4,4,4,4,4,4,4,]),'$end':([1,2,9,11,12,13,14,15,],[0,-2,-4,-1,-3,-5,-6,-7,]),'AND':([1,2,9,10,11,12,13,14,15,],[5,-2,-4,5,-1,-3,5,5,-7,]),'OR':([1,2,9,10,11,12,13,14,15,],[6,-2,-4,6,-1,-3,6,6,-7,]),'IMPLIES':([1,2,9,10,11,12,13,14,15,],[7,-2,-4,7,-1,-3,-5,-6,-7,]),'SIMPLIES':([1,2,9,10,11,12,13,14,15,],[8,-2,-4,8,-1,-3,-5,-6,-7,]),'RPAREN':([2,9,10,11,12,13,14,15,],[-2,-4,15,-1,-3,-5,-6,-7,]),}
_lr_action = {}
for _k, _v in _lr_action_items.items():
   for _x,_y in zip(_v[0],_v[1]):
      if not _x in _lr_action: _lr_action[_x] = {}
      _lr_action[_x][_k] = _y
del _lr_action_items
_lr_goto_items = {'expr':([0,3,4,5,6,7,8,],[1,9,10,11,12,13,14,]),}
_lr_goto = {}
for _k, _v in _lr_goto_items.items():
   for _x, _y in zip(_v[0], _v[1]):
      if not _x in _lr_goto: _lr_goto[_x] = {}
      _lr_goto[_x][_k] = _y
del _lr_goto_items
_lr_productions = [
  ("S' -> expr","S'",1,None,None,None),
  ('expr -> expr AND expr','expr',3,'p_and','main.py',48),
  ('expr -> ALPHABET','expr',1,'p_expr','main.py',52),
  ('expr -> expr OR expr','expr',3,'p_or','main.py',56),
  ('expr -> NEGATION expr','expr',2,'p_negation','main.py',60),
  ('expr -> expr IMPLIES expr','expr',3,'p_implies','main.py',64),
  ('expr -> expr SIMPLIES expr','expr',3,'p_simplies','main.py',69),
  ('expr -> LPAREN expr RPAREN','expr',3,'p_parens','main.py',73),
]
e38f485bd754322f09d50cbe4ef3dae03d97f83a | 417 | py | Python | th_watchdog/failure_email.py | hwjeremy/th-watchdog | c32682f838fffa3396cabc3d83eeb4960c765fc9 | ["MIT"]
"""
Thornleigh Farm - VPN Watchdog
Failure Email Module
author: hugh@blinkybeach.com
"""
from th_watchdog.email import Email
class FailureEmail(Email):
    """
    An email notifying the administrator of a failed state
    """

    SUBJECT = 'Starport VPN connection lost'
    BODY = 'Starport has lost connection to the VPN'

    def __init__(self):
        super().__init__(self.SUBJECT, self.BODY)
        return
e3a03a276ee7eba66fe85aa5ecec8c492d7bc5fa | 950 | py | Python | demo.py | ademilly/sqs-service | cd6cb1e7ca904472376eafb8682621675c310f2e | ["MIT"]
# -*- coding: utf-8 -*-
import time
import sqs_service
"""Usage:
python demo.py
Expected set environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION
- AWS_SESSION_TOKEN for IAM roles
- AWS_SECURITY_TOKEN for IAM roles
Send 'Hello World' to queue 'TEST', listen to the queue and
print first message received
"""
def run():
sqs_svc = sqs_service.SQSService(queue_name='TEST')
sqs_svc.send(body='Hello World', attributes={
'MessageType': 'Greeting'
})
t_end = time.time() + 30
while time.time() < t_end:
sqs_svc.listen(for_how_many=1, with_attributes=['MessageType'])
if sqs_svc.has_got_messages():
first_message = sqs_svc.get_first_message()
            print('Message received:', first_message.body())
            print('Message is a', first_message.get_attribute('MessageType'))
first_message.delete()
if __name__ == '__main__':
run()
| 22.093023 | 76 | 0.678947 | 128 | 950 | 4.710938 | 0.53125 | 0.119403 | 0.036484 | 0.053068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005326 | 0.209474 | 950 | 42 | 77 | 22.619048 | 0.797603 | 0.022105 | 0 | 0 | 0 | 0 | 0.145768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117647 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
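demo.py's listen loop is a poll-until-deadline pattern: keep checking until a message arrives or `t_end` passes. A generic sketch of that loop, with a plain callable standing in for the `SQSService` calls:

```python
import time


def poll_until(check, timeout_s=30.0, interval_s=0.0):
    """Call check() until it returns a truthy value or the deadline
    passes -- the same shape as demo.py's t_end loop."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = check()
        if result:
            return result
        if interval_s:
            time.sleep(interval_s)
    return None


# Stand-in for sqs_svc.listen()/get_first_message(): two empty polls,
# then a message arrives.
messages = iter([None, None, 'Hello World'])
print(poll_until(lambda: next(messages)))  # Hello World
```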
e3a19136ac88239183f6ccc7f508189df0b1db51 | 675 | py | Python | utils/sys_utils.py | machine2learn/galaina | 47ea16dd99687b38307674dd16ab7b7e99453910 | [
"BSD-3-Clause"
] | 3 | 2019-05-04T16:46:27.000Z | 2021-03-05T14:37:05.000Z | utils/sys_utils.py | machine2learn/galaina | 47ea16dd99687b38307674dd16ab7b7e99453910 | [
"BSD-3-Clause"
] | 2 | 2019-08-08T13:01:32.000Z | 2019-08-19T13:32:22.000Z | utils/sys_utils.py | machine2learn/galaina | 47ea16dd99687b38307674dd16ab7b7e99453910 | [
"BSD-3-Clause"
] | null | null | null | import os
import shutil
def delete_configs(config, dataset, username):
if config != 'all':
paths = [os.path.join('user_data', username, dataset, config)]
else:
paths = [os.path.join('user_data', username, dataset, d) for d in
os.listdir(os.path.join('user_data', username, dataset)) if
os.path.isdir(os.path.join('user_data', username, dataset, d)) and d != 'input' and d != 'factor']
for path in paths:
shutil.rmtree(path)
def delete_dataset(APP_ROOT, username, dataset):
path = os.path.join(APP_ROOT, 'user_data', username, dataset)
print('removing ...' + str(path))
shutil.rmtree(path)
| 33.75 | 115 | 0.631111 | 93 | 675 | 4.483871 | 0.333333 | 0.086331 | 0.119904 | 0.275779 | 0.345324 | 0.345324 | 0.345324 | 0.266187 | 0 | 0 | 0 | 0 | 0.222222 | 675 | 19 | 116 | 35.526316 | 0.794286 | 0 | 0 | 0.133333 | 0 | 0 | 0.105185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.266667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
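`delete_configs` selects every subdirectory of the dataset folder except the reserved `input` and `factor` folders. That filter can be exercised on its own (the directory layout below is a made-up example):

```python
import os
import tempfile


def list_config_dirs(dataset_dir):
    """Return the config subdirectories of a dataset directory,
    skipping the reserved 'input' and 'factor' folders -- the same
    filter delete_configs applies before removing paths."""
    return sorted(
        d for d in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, d))
        and d not in ('input', 'factor')
    )


# Build a throwaway dataset directory to demonstrate the filter
root = tempfile.mkdtemp()
for name in ('cfg1', 'cfg2', 'input', 'factor'):
    os.mkdir(os.path.join(root, name))
print(list_config_dirs(root))  # ['cfg1', 'cfg2']
```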
e3a354a397453e432d52c1cd363a4b2592457f4b | 1,150 | py | Python | pycam/pycam/Utils/progress.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | pycam/pycam/Utils/progress.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | pycam/pycam/Utils/progress.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | from pycam.Utils.events import get_event_handler, get_mainloop
class ProgressContext:
def __init__(self, title):
self._title = title
self._progress = get_event_handler().get("progress")
def __enter__(self):
if self._progress:
self._progress.update(text=self._title, percent=0)
# start an indefinite pulse (until we receive more details)
self._progress.update()
else:
self._progress = None
return self
def __exit__(self, exc_type, exc_value, traceback):
if self._progress:
self._progress.finish()
def update(self, *args, **kwargs):
mainloop = get_mainloop()
if mainloop is None:
return False
mainloop.update()
if self._progress:
return self._progress.update(*args, **kwargs)
else:
return False
def set_multiple(self, count, base_text=None):
if self._progress:
self._progress.set_multiple(count, base_text=base_text)
def update_multiple(self):
if self._progress:
self._progress.update_multiple()
| 28.75 | 71 | 0.616522 | 131 | 1,150 | 5.091603 | 0.366412 | 0.233883 | 0.104948 | 0.107946 | 0.185907 | 0.107946 | 0.107946 | 0 | 0 | 0 | 0 | 0.001232 | 0.293913 | 1,150 | 39 | 72 | 29.487179 | 0.820197 | 0.049565 | 0 | 0.3 | 0 | 0 | 0.007333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.033333 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
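`ProgressContext` relies on the context-manager protocol so `finish()` runs even when the wrapped work raises. A minimal runnable sketch, with a toy progress object in place of the event handler's:

```python
class Progress:
    """Toy stand-in for the event handler's progress object."""
    def __init__(self):
        self.updates = 0
        self.finished = False

    def update(self, **kwargs):
        self.updates += 1

    def finish(self):
        self.finished = True


class ProgressContext:
    """Same shape as pycam's ProgressContext: start reporting on
    __enter__, guarantee finish() on __exit__."""
    def __init__(self, progress):
        self._progress = progress

    def __enter__(self):
        if self._progress:
            self._progress.update(percent=0)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if self._progress:
            self._progress.finish()


p = Progress()
with ProgressContext(p):
    p.update(percent=50)
print(p.finished)  # True
```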
e3aad4147f45eb6d3a2f6a2928f807f8445336c7 | 1,171 | py | Python | helper/storageHelper.py | LHGames-2018/DCI5espaces | 8f71ca3b6cf2bae78822d8a4a8546b5482eaa627 | [
"MIT"
] | null | null | null | helper/storageHelper.py | LHGames-2018/DCI5espaces | 8f71ca3b6cf2bae78822d8a4a8546b5482eaa627 | [
"MIT"
] | null | null | null | helper/storageHelper.py | LHGames-2018/DCI5espaces | 8f71ca3b6cf2bae78822d8a4a8546b5482eaa627 | [
"MIT"
] | 5 | 2017-10-07T14:54:28.000Z | 2018-09-27T20:16:59.000Z | import json
import os.path
class StorageHelper:
__document = None
__path = None
@staticmethod
def write(key, data):
StorageHelper.__init()
StorageHelper.__document[key] = json.dumps(data)
StorageHelper.__store()
@staticmethod
def read(key):
StorageHelper.__init()
        data = StorageHelper.__document.get(key)
if data is None:
return None
return json.loads(data)
@staticmethod
def __init():
if StorageHelper.__path is None:
if 'LOCAL_STORAGE' in os.environ:
StorageHelper.__path = os.environ['LOCAL_STORAGE'] + '/document.json'
else:
StorageHelper.__path = '/data/document.json'
if StorageHelper.__document is None:
if os.path.isfile(StorageHelper.__path) is False:
StorageHelper.__document = dict()
else:
file = open(StorageHelper.__path)
StorageHelper.__document = json.load(file)
@staticmethod
def __store():
with open(StorageHelper.__path, 'w+') as file:
json.dump(StorageHelper.__document, file)
| 27.880952 | 85 | 0.599488 | 115 | 1,171 | 5.756522 | 0.313043 | 0.222054 | 0.072508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.309991 | 1,171 | 41 | 86 | 28.560976 | 0.819307 | 0 | 0 | 0.235294 | 0 | 0 | 0.052092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
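`StorageHelper` keeps one JSON document on disk and serializes each value with `json.dumps` before storing it. A minimal stand-alone version of the same round trip, taking the path explicitly instead of reading the `LOCAL_STORAGE` environment variable:

```python
import json
import os
import tempfile


class JsonStore:
    """Minimal version of StorageHelper: one JSON document on disk,
    values serialized individually with json.dumps."""
    def __init__(self, path):
        self._path = path
        if os.path.isfile(path):
            with open(path) as fh:
                self._doc = json.load(fh)
        else:
            self._doc = {}

    def write(self, key, data):
        self._doc[key] = json.dumps(data)
        with open(self._path, 'w') as fh:
            json.dump(self._doc, fh)

    def read(self, key):
        data = self._doc.get(key)
        return None if data is None else json.loads(data)


path = os.path.join(tempfile.mkdtemp(), 'document.json')
JsonStore(path).write('state', {'healthy': True})
print(JsonStore(path).read('state'))  # {'healthy': True}
```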
e3ab35bc88f90fb1279165d05f8411f9b2a64d26 | 12,383 | py | Python | ddot/cx_services_old-8-31-17/align.py | pupster90/ddot2 | 1952bff30383b35dff72b332592e1471201d40f3 | [
"MIT"
] | 1 | 2018-11-08T14:41:43.000Z | 2018-11-08T14:41:43.000Z | ddot/cx_services_old-8-31-17/align.py | pupster90/ddot2 | 1952bff30383b35dff72b332592e1471201d40f3 | [
"MIT"
] | null | null | null | ddot/cx_services_old-8-31-17/align.py | pupster90/ddot2 | 1952bff30383b35dff72b332592e1471201d40f3 | [
"MIT"
] | null | null | null | from __future__ import print_function
import ndex.client as nc
from ndex.networkn import NdexGraph
import io
import json
from IPython.display import HTML
from time import sleep
import os, time, tempfile
import sys
import time
import logging
import grpc
import networkx as nx
import cx_pb2
import cx_pb2_grpc
import numpy as np
import inspect
from concurrent import futures
from itertools import combinations
from subprocess import Popen, PIPE, STDOUT
import pandas as pd
from ddot import Ontology, align_hierarchies
from ddot.utils import update_nx_with_alignment
from ddot.cx_services.cx_utils import yield_ndex, required_params, cast_params
from ddot.config import default_params
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
verbose = True
class CyServiceServicer(cx_pb2_grpc.CyServiceServicer):
def format_params(self, params):
required = [
'ndex_user',
'ndex_pass',
'ndex_server',
'ont1_ndex_uuid',
'ont2_ndex_uuid',
]
required_params(params, required)
cast = [
('iterations', int),
('threads', int),
('ont1_ndex_uuid', str),
('ont2_ndex_uuid', str)
]
cast_params(params, cast)
def StreamElements(self, element_iterator, context):
try:
params = {
'name' : 'Data-Driven Ontology',
'ont1_ndex_uuid' : None,
'ont2_ndex_uuid' : None,
'iterations' : 3,
'threads' : 4
}
params.update(default_params)
G, params, errors = self.read_element_stream(element_iterator, params)
self.format_params(params)
if verbose:
print('Parameters:', params)
## Read graphs using NDEx client
hier1, hier2 = self.read_hierarchies(params)
if True:
hier1_ont = Ontology.from_NdexGraph(hier1)
hier2_ont = Ontology.from_NdexGraph(hier2)
print('Summary of hier1_ont:', hier1_ont.summary())
print('Summary of hier2_ont:', hier2_ont.summary())
hier1_collapsed, hier2_collapsed = Ontology.mutual_collapse(hier1_ont, hier2_ont, verbose=True)
assert len(hier1_collapsed.terms) < 3000, len(hier1_collapsed.terms)
assert len(hier2_collapsed.terms) < 3000, len(hier2_collapsed.terms)
if verbose:
                    print('Aligning hierarchies')
alignment = align_hierarchies(
hier1_collapsed,
hier2_collapsed,
params['iterations'],
params['threads'])
if verbose:
print('One-to-one term alignments:', alignment.shape[0])
print(alignment.iloc[:30,:])
update_nx_with_alignment(hier1, alignment)
ont_url = hier1.upload_to(params['ndex_server'],
params['ndex_user'],
params['ndex_pass'])
if verbose:
                    print('ontology_url:', ont_url)
for elt in yield_ndex(ont_url):
yield elt
else:
for caught_error in errors:
error = self.create_internal_crash_error(caught_error.message, 500)
log_error(error)
yield error
except Exception as e:
message = "Unexpected error: " + str(e)
error = self.create_internal_crash_error(message, 500)
log_error(error)
import traceback
            traceback.print_exc()
yield error
def read_hierarchies(self, params):
# Read hierarchy 1
hier1 = NdexGraph(server=params['ndex_server'],
username=params['ndex_user'],
password=params['ndex_pass'],
uuid=params['ont1_ndex_uuid'])
# Read hierarchy 2
hier2 = NdexGraph(server=params['ndex_server'],
username=params['ndex_user'],
password=params['ndex_pass'],
uuid=params['ont2_ndex_uuid'])
return hier1, hier2
def stream_ontology(self, ontology, term_sizes, term_2_uuid):
node_2_id = {}
node_id = 0
for node_name in ontology.genes:
yield self.create_node(node_id, node_name)
yield self.create_nodeAttribute(node_id, 'Gene_or_Term', 'Gene')
yield self.create_nodeAttribute(node_id, 'Size', '1')
node_2_id[node_name] = node_id
node_id += 1
for node_name in ontology.terms:
yield self.create_node(node_id, node_name)
yield self.create_nodeAttribute(node_id, 'Gene_or_Term', 'Term')
yield self.create_nodeAttribute(node_id, 'ndex:internalLink', term_2_uuid[node_name])
yield self.create_nodeAttribute(node_id, 'Size', str(term_sizes[node_name]))
node_2_id[node_name] = node_id
node_id += 1
edge_id = 0
for g in ontology.genes:
for t_i in ontology.gene_2_terms[g]:
t = ontology.terms[t_i]
yield self.create_edge(edge_id, node_2_id[g], node_2_id[t])
yield self.create_edgeAttribute(edge_id, 'Relation', 'Gene-Term Annotation')
edge_id += 1
for p, c_list in ontology.term_2_terms.iteritems():
for c in c_list:
yield self.create_edge(edge_id, node_2_id[c], node_2_id[p])
yield self.create_edgeAttribute(edge_id, 'Relation', 'Child-Parent Hierarchical Relation')
edge_id += 1
def upload_subnetworks_2_ndex(self, ontology, arr, arr_genes_index,
ndex_server, ndex_user, ndex_pass, name):
"""Push subnetworks"""
#print ontology.get_term_2_genes()
term_2_url = {}
for t in ontology.terms:
#print 't:', t
genes = np.array([ontology.genes[g] for g in ontology.get_term_2_genes()[t]])
#print 'genes:', genes
idx = np.array([arr_genes_index[g] for g in genes])
#print 'idx:', idx
subarr = arr[idx,:][:,idx]
# Set nan to 0
subarr[np.isnan(subarr)] = 0
row, col = subarr.nonzero()
row, col = row[row < col], col[row < col]
G = NdexGraph()
G.create_from_edge_list(zip(genes[row], genes[col]))
for i in np.arange(row.size):
G.set_edge_attribute(i+1, "similarity", str(subarr[row[i], col[i]]))
G.set_name('%s supporting network for CLIXO:%s' % (name, t))
G.set_network_attribute('Description', '%s supporting network for CLIXO:%s' % (name, t))
ndex_url = G.upload_to(ndex_server, ndex_user, ndex_pass)
term_2_url[t] = ndex_url
return term_2_url
def create_node(self, node_id, node_name):
element = cx_pb2.Element()
node = element.node
node.id = node_id
node.name = node_name
return element
def create_nodeAttribute(self, node_id, key, val):
element = cx_pb2.Element()
attr = element.nodeAttribute
attr.nodeId = node_id
attr.name = key
attr.value = val
return element
def create_edge(self, edge_id, node1, node2):
element = cx_pb2.Element()
edge = element.edge
edge.id = edge_id
edge.sourceId = node1
edge.targetId = node2
return element
def create_edgeAttribute(self, edge_id, key, val):
element = cx_pb2.Element()
attr = element.edgeAttribute
attr.edgeId = edge_id
attr.name = key
attr.value = val
return element
# def create_output_attribute(self, node_id, value, attribute_name, suffix):
# element = cx_pb2.Element()
# attr = element.nodeAttribute
# attr.nodeId = node_id
# attr.name = attribute_name + suffix
# attr.value = value
# return element
def create_internal_crash_error(self, message, status):
element = cx_pb2.Element()
error = element.error
error.status = status
error.code = 'cy://align-hierarchies/' + str(status)
error.message = message
error.link = 'http://align-hierarchies'
return element
def read_element_stream(self, element_iter, parameters):
errors = []
edgesAttr_dict = {}
nodesAttr_dict = {}
edges_dict = {}
nodes_dict = {}
for element in element_iter:
ele_type = element.WhichOneof('value')
if ele_type == 'error':
errors.append(element.error)
elif ele_type == 'parameter':
param = element.parameter
parameters[param.name] = param.value
elif ele_type == 'node':
node = element.node
nodes_dict[node.id] = node.name
elif ele_type == 'edge':
edge = element.edge
edges_dict[edge.id] = (edge.sourceId, edge.targetId)
elif ele_type == 'nodeAttribute':
pass
elif ele_type == 'edgeAttribute':
edgeAttr = element.edgeAttribute
if edgesAttr_dict.has_key(edgeAttr.name):
edgesAttr_dict[edgeAttr.name][edgeAttr.edgeId] = edgeAttr.value
else:
edgesAttr_dict[edgeAttr.name] = {edgeAttr.edgeId : edgeAttr.value}
G = nx.Graph()
for n_id, u in nodes_dict.iteritems():
G.add_node(u, node_id=n_id)
edge_attributes_list = edgesAttr_dict.keys()
for e_id, (u, v) in edges_dict.iteritems():
G.add_edge(nodes_dict[u], nodes_dict[v],
attr_dict={k : edgesAttr_dict[k][e_id] for k in edge_attributes_list if edgesAttr_dict[k].has_key(e_id)},
edge_id=e_id)
return G, parameters, errors
# def read_element_stream(self, element_iter, parameters):
# errors = []
# edges_dict = {}
# nodes_dict = {}
# for element in element_iter:
# ele_type = element.WhichOneof('value')
# if ele_type == 'error':
# errors.append(element.error)
# elif ele_type == 'parameter':
# param = element.parameter
# parameters[param.name] = param.value
# elif ele_type == 'node':
# node = element.node
# nodes_dict[node.id] = node.name
# elif ele_type == 'edge':
# edge = element.edge
# if edges_dict.has_key(edge.id):
# edges_dict[edge.id][:2] = [edge.sourceId, edge.targetId]
# else:
# edges_dict[edge.id] = [edge.sourceId, edge.targetId, None]
# elif ele_type == 'nodeAttribute':
# pass
# elif ele_type == 'edgeAttribute':
# edgeAttr = element.edgeAttribute
# if edgeAttr.name == 'similarity':
# if edges_dict.has_key(edgeAttr.edgeId):
# edges_dict[edgeAttr.edgeId][2] = float(edgeAttr.value)
# else:
# edges_dict[edgeAttr.edgeId] = [None, None, float(edgeAttr.value)]
# return (nodes_dict, edges_dict), parameters, errors
def log_info(message):
logging.info(message)
def log_error(message):
logging.error(message)
def serve():
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
cx_pb2_grpc.add_CyServiceServicer_to_server(
CyServiceServicer(), server)
server.add_insecure_port('0.0.0.0:8081')
server.start()
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == '__main__':
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format='%(asctime)s %(message)s')
log_info("Listening for requests on '0.0.0.0:8081'")
serve()
| 36.313783 | 128 | 0.562868 | 1,406 | 12,383 | 4.726174 | 0.174964 | 0.019865 | 0.024831 | 0.012641 | 0.338149 | 0.317983 | 0.287434 | 0.268473 | 0.225132 | 0.193228 | 0 | 0.015077 | 0.341194 | 12,383 | 340 | 129 | 36.420588 | 0.799461 | 0.134782 | 0 | 0.17284 | 0 | 0 | 0.078347 | 0.002161 | 0 | 0 | 0 | 0 | 0.00823 | 0 | null | null | 0.028807 | 0.102881 | null | null | 0.032922 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
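`read_element_stream` folds a stream of typed CX elements into dicts keyed by id before building the graph. The accumulation step can be sketched without gRPC or networkx (plain tuples stand in for the `cx_pb2` protobuf messages):

```python
# Sketch of the read_element_stream accumulation step: elements tagged
# 'node'/'edge'/'edgeAttribute' are folded into plain dicts keyed by id,
# the same shape align.py builds before constructing its nx.Graph.
def read_element_stream(elements):
    nodes, edges, edge_attrs = {}, {}, {}
    for kind, payload in elements:
        if kind == 'node':
            node_id, name = payload
            nodes[node_id] = name
        elif kind == 'edge':
            edge_id, source, target = payload
            edges[edge_id] = (source, target)
        elif kind == 'edgeAttribute':
            edge_id, name, value = payload
            # Group attribute values by attribute name, then edge id
            edge_attrs.setdefault(name, {})[edge_id] = value
    return nodes, edges, edge_attrs


stream = [
    ('node', (0, 'A')), ('node', (1, 'B')),
    ('edge', (0, 0, 1)),
    ('edgeAttribute', (0, 'similarity', '0.9')),
]
nodes, edges, edge_attrs = read_element_stream(stream)
print(edges[0], edge_attrs['similarity'][0])  # (0, 1) 0.9
```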
e3b3a2b9c400072459039396551edf7edb2673da | 5,552 | py | Python | Lessons/source/bases.py | ericanaglik/cs13 | 6dc2dd41e0b82a43999145b226509d8fc0adb366 | [
"MIT"
] | null | null | null | Lessons/source/bases.py | ericanaglik/cs13 | 6dc2dd41e0b82a43999145b226509d8fc0adb366 | [
"MIT"
] | 8 | 2019-04-26T06:29:56.000Z | 2019-08-17T01:48:07.000Z | Lessons/source/bases.py | ericanaglik/cs13 | 6dc2dd41e0b82a43999145b226509d8fc0adb366 | [
"MIT"
] | null | null | null | #!python
import string
# Hint: Use these string constants to encode/decode hexadecimal digits and more
# string.digits is '0123456789'
# string.hexdigits is '0123456789abcdefABCDEF'
# string.ascii_lowercase is 'abcdefghijklmnopqrstuvwxyz'
# string.ascii_uppercase is 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
# string.ascii_letters is ascii_lowercase + ascii_uppercase
# string.printable is digits + ascii_letters + punctuation + whitespace
digit_value = {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, 'a': 10, 'b': 11, 'c': 12, 'd': 13, 'e': 14, 'f': 15, 'g': 16, 'h': 17, 'i': 18, 'j': 19, 'k': 20, 'l': 21, 'm': 22, 'n': 23, 'o': 24, 'p': 25, 'q': 26, 'r': 27, 's': 28, 't': 29, 'u': 30, 'v': 31, 'w': 32, 'x': 33, 'y': 34, 'z': 35}
value_digit = {0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5', 6: '6', 7: '7', 8: '8', 9: '9', 10: 'a', 11: 'b', 12: 'c', 13: 'd', 14: 'e', 15: 'f', 16: 'g', 17: 'h', 18: 'i', 19: 'j', 20: 'k', 21: 'l', 22: 'm', 23: 'n', 24: 'o', 25: 'p', 26: 'q', 27: 'r', 28: 's', 29: 't', 30: 'u', 31: 'v', 32: 'w', 33: 'x', 34: 'y', 35: 'z'}
def decode(digits, base):
"""Decode given digits in given base to number in base 10.
digits: str -- string representation of number (in given base)
base: int -- base of given number
return: int -- integer representation of number (in base 10)"""
# Handle up to base 36 [0-9a-z]
assert 2 <= base <= 36, 'base is out of range: {}'.format(base)
# TODO: Decode digits from binary (base 2)
digits_list = list(digits.lower())
digits_list.reverse()
# print(digits_list)
# go through the array and figure out what each 1 and 0 mean
total = 0
for i, value in enumerate(digits_list):
place_value = base ** i
# print(place_value, value)
total += digit_value[value] * place_value
# print(place_value, digit_value[value], digit_value[value] * place_value, total)
return total
# ...
# TODO: Decode digits from hexadecimal (base 16)
# TODO: Decode digits from any base (2 up to 36)
# ...
def encode(number, base):
"""Encode given number in base 10 to digits in given base.
number: int -- integer representation of number (in base 10)
base: int -- base to convert to
return: str -- string representation of number (in given base)"""
# Handle up to base 36 [0-9a-z]
assert 2 <= base <= 36, 'base is out of range: {}'.format(base)
# Handle unsigned numbers only for now
assert number >= 0, 'number is negative: {}'.format(number)
# TODO: Encode number in binary (base 2)
    numbers = []
    # Zero produces no remainders in the loop below, so handle it explicitly
    if number == 0:
        return '0'
    while number > 0:
        remainder = number % base
        number = number // base
        numbers.append(value_digit[remainder])
    numbers.reverse()
    numbers_string = ''.join(numbers)
    return numbers_string
# TODO: Encode number in hexadecimal (base 16)
# ...
# TODO: Encode number in any base (2 up to 36)
# ...
def convert(digits, base1, base2):
"""Convert given digits in base1 to digits in base2.
digits: str -- string representation of number (in base1)
base1: int -- base of given number
base2: int -- base to convert to
return: str -- string representation of number (in base2)"""
# Handle up to base 36 [0-9a-z]
assert 2 <= base1 <= 36, 'base1 is out of range: {}'.format(base1)
assert 2 <= base2 <= 36, 'base2 is out of range: {}'.format(base2)
    # Decode from base1 into a base-10 integer, then encode it in base2
    decoded = decode(digits, base1)
    return encode(decoded, base2)
def convert_fractional(digits, base1, base2):
    """Convert given digits with a fractional part from base1 to base2.
    digits: str -- string representation of number, e.g. '.625' or '3.5'
    base1: int -- base of given number
    base2: int -- base to convert to
    return: str -- string representation of number (in base2)"""
    # Split at the point and convert the whole-number part normally
    whole_digits, frac_digits = digits.split(".")
    whole = convert(whole_digits, base1, base2) if whole_digits else '0'
    # Interpret the fractional part as a base1 fraction
    frac = sum(digit_value[d] / float(base1 ** (i + 1))
               for i, d in enumerate(frac_digits.lower()))
    # Repeatedly multiply by base2; each whole digit that falls out is the
    # next digit after the point. Cap the length so non-terminating
    # fractions (e.g. 0.1 in binary) still finish.
    result = ''
    while frac > 0 and len(result) < 16:
        frac *= base2
        digit = int(frac)
        result += value_digit[digit]
        frac -= digit
    return whole + "." + result
def convert_negative(digits, base1, base2):
    """Convert a negative number from base1 to base2 by converting its
    magnitude and restoring the sign."""
    if digits.startswith('-'):
        return '-' + convert(digits[1:], base1, base2)
    return convert(digits, base1, base2)
def main():
"""Read command-line arguments and convert given digits between bases."""
import sys
args = sys.argv[1:] # Ignore script file name
if len(args) == 3:
digits = args[0]
base1 = int(args[1])
base2 = int(args[2])
# Convert given digits between bases
result = convert(digits, base1, base2)
print('{} in base {} is {} in base {}'.format(digits, base1, result, base2))
else:
print('Usage: {} digits base1 base2'.format(sys.argv[0]))
print('Converts digits from base1 to base2')
if __name__ == '__main__':
# main()
print(convert_fractional(".625", 10, 2))
| 36.287582 | 328 | 0.600865 | 818 | 5,552 | 4.031785 | 0.234719 | 0.026683 | 0.040024 | 0.043663 | 0.26228 | 0.205882 | 0.201637 | 0.177683 | 0.115525 | 0.115525 | 0 | 0.068991 | 0.255944 | 5,552 | 152 | 329 | 36.526316 | 0.729363 | 0.445245 | 0 | 0.063492 | 0 | 0 | 0.101615 | 0 | 0 | 0 | 0 | 0.019737 | 0.079365 | 1 | 0.095238 | false | 0.015873 | 0.031746 | 0 | 0.206349 | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
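The decode/encode pair above can be cross-checked against Python's built-in `int(s, base)`, which performs the same positional decoding for bases 2 through 36. A compact round-trip check (`encode_int` is a condensed rewrite of the file's `encode`, not the original):

```python
# int(s, base) is a handy oracle for round-trip tests of base encoding.
DIGIT_CHARS = '0123456789abcdefghijklmnopqrstuvwxyz'


def encode_int(number, base):
    """Encode a non-negative base-10 integer as digits in the given base."""
    if number == 0:
        return '0'
    out = []
    while number > 0:
        number, remainder = divmod(number, base)
        out.append(DIGIT_CHARS[remainder])
    return ''.join(reversed(out))


# Round trip: encoding then decoding with int() must return the input
for base in (2, 16, 36):
    for n in (0, 1, 255, 12345):
        assert int(encode_int(n, base), base) == n

print(encode_int(255, 16))  # ff
```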
e3b3eb4f092c715b7640f0a297086182d40badaa | 3,667 | py | Python | ecl/provider_connectivity/v2/address_assignment.py | keiichi-hikita/eclsdk | c43afb982fd54eb1875cdc22d46044644d804c4a | [
"Apache-2.0"
] | null | null | null | ecl/provider_connectivity/v2/address_assignment.py | keiichi-hikita/eclsdk | c43afb982fd54eb1875cdc22d46044644d804c4a | [
"Apache-2.0"
] | null | null | null | ecl/provider_connectivity/v2/address_assignment.py | keiichi-hikita/eclsdk | c43afb982fd54eb1875cdc22d46044644d804c4a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from ecl.provider_connectivity import provider_connectivity_service
from ecl import resource2
from ecl.network.v2 import network
from ecl.network.v2 import subnet
import hashlib
class AddressAssignment(resource2.Resource):
resources_key = "address_assignments"
resource_key = "address_assignment"
service = provider_connectivity_service.ProviderConnectivityService("v2.0")
base_path = '/' + service.version + \
'/tenant_connection_requests/' \
'%(tenant_connection_request_id)s/address_assignments'
# capabilities
allow_list = True
#: tenant_connection_request unique ID.
tenant_connection_request_id = resource2.URI(
"tenant_connection_request_id")
#: tenant_connection unique ID.
tenant_connection_id = resource2.Body("tenant_connection_id")
#: Network unique ID
network_id = resource2.Body("network_id")
#: mac address assigned with port
mac_address = resource2.Body("mac_address")
#: List of fixes IP addresses assign to port.
fixed_ips = resource2.Body("fixed_ips")
#: Allowed address pairs
allowed_address_pairs = resource2.Body("allowed_address_pairs")
@staticmethod
def _get_id(value):
if isinstance(value, resource2.Resource):
# Don't check _alternate_id unless we need to. It's an uncommon
# case and it involves looping through the class' dict.
id = value.id or getattr(
value, value._alternate_id(),
hashlib.new('md5', str(value)).hexdigest())
return id
else:
return value
def __getattribute__(self, name):
"""Return an attribute on this instance
This is mostly a pass-through except for a specialization on
the 'id' name, as this can exist under a different name via the
`alternate_id` argument to resource.Body.
"""
if name == "id":
if name in self._body:
return self._body[name]
elif self._alternate_id():
return self._body[self._alternate_id()]
else:
return hashlib.new('md5', str(self)).hexdigest()
else:
return object.__getattribute__(self, name)
class ICCNetwork(network.Network):
service = provider_connectivity_service.ProviderConnectivityService("v2.0")
base_path = '/' + service.version + \
'/tenant_connection_requests/' \
'%(tenant_connection_request_id)s/network'
# Capabilities
allow_list = False
allow_create = False
allow_delete = False
allow_update = False
allow_get = True
def get(self, session, tenant_connection_request_id):
uri = self.base_path % {
"tenant_connection_request_id": tenant_connection_request_id
}
resp = session.get(uri, endpoint_filter=self.service)
self._translate_response(resp, has_body=True)
return self
class ICCSubnet(subnet.Subnet):
service = provider_connectivity_service.ProviderConnectivityService("v2.0")
base_path = '/' + service.version + \
'/tenant_connection_requests/' \
'%(tenant_connection_request_id)s/subnets'
id = resource2.Body("id")
tenant_connection_request_id = resource2.URI(
"tenant_connection_request_id")
# Capabilities
allow_list = True
allow_create = False
allow_delete = False
allow_update = False
allow_get = True
dhcp_server_address = resource2.Body('dhcp_server_address')
| 32.166667 | 79 | 0.648487 | 405 | 3,667 | 5.595062 | 0.308642 | 0.120035 | 0.11165 | 0.110327 | 0.338041 | 0.314651 | 0.289497 | 0.289497 | 0.289497 | 0.289497 | 0 | 0.008557 | 0.266976 | 3,667 | 113 | 80 | 32.451327 | 0.834449 | 0.154895 | 0 | 0.371429 | 0 | 0 | 0.148294 | 0.105315 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.071429 | 0 | 0.657143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
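`_get_id` falls back to hashing the whole resource when neither `id` nor an alternate id is set. Under Python 3 `hashlib` requires bytes, so the `str()` result must be encoded first; a sketch of that fallback (`Resource` here is a toy stand-in, not ecl's `resource2.Resource`):

```python
import hashlib


def fallback_id(value):
    """Mirror _get_id's fallback: prefer an explicit id, otherwise
    derive a stable one from the md5 of the object's string form."""
    explicit = getattr(value, 'id', None)
    if explicit is not None:
        return explicit
    # Python 3 hashlib takes bytes, so encode the string form first
    return hashlib.md5(str(value).encode('utf-8')).hexdigest()


class Resource:
    id = None

    def __str__(self):
        return 'resource-body'


print(fallback_id(Resource()))
```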
e3b455062720d39836f878d513bb8f75e9ad6e80 | 675 | py | Python | tests/test_gifGenerator.py | wmokrogulski/gifGenerator | fa2b36d082e32f310583935a361d7b7a2bf29fe6 | [
"MIT"
] | null | null | null | tests/test_gifGenerator.py | wmokrogulski/gifGenerator | fa2b36d082e32f310583935a361d7b7a2bf29fe6 | [
"MIT"
] | 2 | 2021-12-23T11:01:14.000Z | 2022-03-12T01:01:15.000Z | tests/test_gifGenerator.py | wmokrogulski/gifGenerator | fa2b36d082e32f310583935a361d7b7a2bf29fe6 | [
"MIT"
] | null | null | null | import unittest
from unittest import TestCase
from src.gifGenerator import GifGenerator
class TestGifGenerator(TestCase):
def setUp(self) -> None:
self.gg = GifGenerator()
def test_set_text_position(self):
position = (50, 90)
self.gg.setTextPosition(position)
self.assertEqual(self.gg.text_position, position)
def test_set_font(self):
self.assertTrue(True)
def test_load_image(self):
# path='test.png'
self.assertTrue(True)
def test_crop_images(self):
self.assertTrue(True)
def test_generate(self):
self.assertTrue(True)
if __name__ == '__main__':
unittest.main()
| 20.454545 | 57 | 0.666667 | 80 | 675 | 5.3875 | 0.425 | 0.081207 | 0.167053 | 0.153132 | 0.192575 | 0.134571 | 0 | 0 | 0 | 0 | 0 | 0.007737 | 0.234074 | 675 | 32 | 58 | 21.09375 | 0.825919 | 0.022222 | 0 | 0.2 | 0 | 0 | 0.012158 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.3 | false | 0 | 0.15 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
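The test case above follows the standard unittest shape: `setUp` builds a fresh object per test and each test asserts against it. A self-contained version with a stub `GifGenerator` (the real class lives in `src.gifGenerator`):

```python
import unittest


class GifGenerator:
    """Stub with just enough surface for the position test."""
    def __init__(self):
        self.text_position = (0, 0)

    def setTextPosition(self, position):
        self.text_position = position


class TestGifGenerator(unittest.TestCase):
    def setUp(self):
        # A fresh generator for every test method
        self.gg = GifGenerator()

    def test_set_text_position(self):
        position = (50, 90)
        self.gg.setTextPosition(position)
        self.assertEqual(self.gg.text_position, position)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGifGenerator)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # True
```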
e3b8997cfd0dae36bdb5f953799806c281136e2c | 9,915 | py | Python | PSP/GAME/Python/python/bsddb/test/test_dbshelve.py | TheMindVirus/pspy | e9d1bba4f6b7486c3010bede93d88afdfc036492 | [
"MIT"
] | 7 | 2015-04-06T15:17:13.000Z | 2020-10-21T04:57:00.000Z | PSP/GAME/Python/python/bsddb/test/test_dbshelve.py | TheMindVirus/pspy | e9d1bba4f6b7486c3010bede93d88afdfc036492 | [
"MIT"
] | 1 | 2021-04-11T15:01:12.000Z | 2021-04-11T15:01:12.000Z | PSP/GAME/Python/python/bsddb/test/test_dbshelve.py | TheMindVirus/pspy | e9d1bba4f6b7486c3010bede93d88afdfc036492 | [
"MIT"
] | 4 | 2016-05-16T17:53:08.000Z | 2020-11-28T17:18:50.000Z | """
TestCases for checking dbShelve objects.
"""
import sys, os, string
import tempfile, random
from pprint import pprint
from types import *
import unittest
try:
# For Pythons w/distutils pybsddb
from bsddb3 import db, dbshelve
except ImportError:
# For Python 2.3
from bsddb import db, dbshelve
from test_all import verbose
#----------------------------------------------------------------------
# We want the objects to be comparable so we can test dbshelve.values
# later on.
class DataClass:
def __init__(self):
self.value = random.random()
def __cmp__(self, other):
return cmp(self.value, other)
class DBShelveTestCase(unittest.TestCase):
def setUp(self):
self.filename = tempfile.mktemp()
self.do_open()
def tearDown(self):
self.do_close()
try:
os.remove(self.filename)
except os.error:
pass
def mk(self, key):
"""Turn key into an appropriate key type for this db"""
# override in child class for RECNO
return key
def populateDB(self, d):
for x in string.letters:
d[self.mk('S' + x)] = 10 * x # add a string
d[self.mk('I' + x)] = ord(x) # add an integer
d[self.mk('L' + x)] = [x] * 10 # add a list
inst = DataClass() # add an instance
inst.S = 10 * x
inst.I = ord(x)
inst.L = [x] * 10
d[self.mk('O' + x)] = inst
# overridable in derived classes to affect how the shelf is created/opened
def do_open(self):
self.d = dbshelve.open(self.filename)
# and closed...
def do_close(self):
self.d.close()
def test01_basics(self):
if verbose:
print '\n', '-=' * 30
print "Running %s.test01_basics..." % self.__class__.__name__
self.populateDB(self.d)
self.d.sync()
self.do_close()
self.do_open()
d = self.d
l = len(d)
k = d.keys()
s = d.stat()
f = d.fd()
if verbose:
print "length:", l
print "keys:", k
print "stats:", s
assert 0 == d.has_key(self.mk('bad key'))
assert 1 == d.has_key(self.mk('IA'))
assert 1 == d.has_key(self.mk('OA'))
d.delete(self.mk('IA'))
del d[self.mk('OA')]
assert 0 == d.has_key(self.mk('IA'))
assert 0 == d.has_key(self.mk('OA'))
assert len(d) == l-2
values = []
for key in d.keys():
value = d[key]
values.append(value)
if verbose:
print "%s: %s" % (key, value)
self.checkrec(key, value)
dbvalues = d.values()
assert len(dbvalues) == len(d.keys())
values.sort()
dbvalues.sort()
assert values == dbvalues
items = d.items()
assert len(items) == len(values)
for key, value in items:
self.checkrec(key, value)
assert d.get(self.mk('bad key')) == None
assert d.get(self.mk('bad key'), None) == None
assert d.get(self.mk('bad key'), 'a string') == 'a string'
assert d.get(self.mk('bad key'), [1, 2, 3]) == [1, 2, 3]
d.set_get_returns_none(0)
self.assertRaises(db.DBNotFoundError, d.get, self.mk('bad key'))
d.set_get_returns_none(1)
d.put(self.mk('new key'), 'new data')
assert d.get(self.mk('new key')) == 'new data'
assert d[self.mk('new key')] == 'new data'
    def test02_cursors(self):
        if verbose:
            print '\n', '-=' * 30
            print "Running %s.test02_cursors..." % self.__class__.__name__

        self.populateDB(self.d)
        d = self.d

        count = 0
        c = d.cursor()
        rec = c.first()
        while rec is not None:
            count = count + 1
            if verbose:
                print rec
            key, value = rec
            self.checkrec(key, value)
            rec = c.next()
        del c

        assert count == len(d)

        count = 0
        c = d.cursor()
        rec = c.last()
        while rec is not None:
            count = count + 1
            if verbose:
                print rec
            key, value = rec
            self.checkrec(key, value)
            rec = c.prev()

        assert count == len(d)

        c.set(self.mk('SS'))
        key, value = c.current()
        self.checkrec(key, value)
        del c
    def test03_append(self):
        # NOTE: this is overridden in RECNO subclass, don't change its name.
        if verbose:
            print '\n', '-=' * 30
            print "Running %s.test03_append..." % self.__class__.__name__

        self.assertRaises(dbshelve.DBShelveError,
                          self.d.append, 'unit test was here')

    def checkrec(self, key, value):
        # override this in a subclass if the key type is different
        x = key[1]
        if key[0] == 'S':
            assert type(value) == StringType
            assert value == 10 * x
        elif key[0] == 'I':
            assert type(value) == IntType
            assert value == ord(x)
        elif key[0] == 'L':
            assert type(value) == ListType
            assert value == [x] * 10
        elif key[0] == 'O':
            assert type(value) == InstanceType
            assert value.S == 10 * x
            assert value.I == ord(x)
            assert value.L == [x] * 10
        else:
            raise AssertionError, 'Unknown key type, fix the test'
#----------------------------------------------------------------------

class BasicShelveTestCase(DBShelveTestCase):
    def do_open(self):
        self.d = dbshelve.DBShelf()
        self.d.open(self.filename, self.dbtype, self.dbflags)

    def do_close(self):
        self.d.close()


class BTreeShelveTestCase(BasicShelveTestCase):
    dbtype = db.DB_BTREE
    dbflags = db.DB_CREATE


class HashShelveTestCase(BasicShelveTestCase):
    dbtype = db.DB_HASH
    dbflags = db.DB_CREATE


class ThreadBTreeShelveTestCase(BasicShelveTestCase):
    dbtype = db.DB_BTREE
    dbflags = db.DB_CREATE | db.DB_THREAD


class ThreadHashShelveTestCase(BasicShelveTestCase):
    dbtype = db.DB_HASH
    dbflags = db.DB_CREATE | db.DB_THREAD
#----------------------------------------------------------------------

class BasicEnvShelveTestCase(DBShelveTestCase):
    def do_open(self):
        self.homeDir = homeDir = os.path.join(
            os.path.dirname(sys.argv[0]), 'db_home')
        try: os.mkdir(homeDir)
        except os.error: pass
        self.env = db.DBEnv()
        self.env.open(homeDir, self.envflags | db.DB_INIT_MPOOL | db.DB_CREATE)

        self.filename = os.path.split(self.filename)[1]
        self.d = dbshelve.DBShelf(self.env)
        self.d.open(self.filename, self.dbtype, self.dbflags)

    def do_close(self):
        self.d.close()
        self.env.close()

    def tearDown(self):
        self.do_close()
        import glob
        files = glob.glob(os.path.join(self.homeDir, '*'))
        for file in files:
            os.remove(file)


class EnvBTreeShelveTestCase(BasicEnvShelveTestCase):
    envflags = 0
    dbtype = db.DB_BTREE
    dbflags = db.DB_CREATE


class EnvHashShelveTestCase(BasicEnvShelveTestCase):
    envflags = 0
    dbtype = db.DB_HASH
    dbflags = db.DB_CREATE


class EnvThreadBTreeShelveTestCase(BasicEnvShelveTestCase):
    envflags = db.DB_THREAD
    dbtype = db.DB_BTREE
    dbflags = db.DB_CREATE | db.DB_THREAD


class EnvThreadHashShelveTestCase(BasicEnvShelveTestCase):
    envflags = db.DB_THREAD
    dbtype = db.DB_HASH
    dbflags = db.DB_CREATE | db.DB_THREAD
#----------------------------------------------------------------------
# test cases for a DBShelf in a RECNO DB.

class RecNoShelveTestCase(BasicShelveTestCase):
    dbtype = db.DB_RECNO
    dbflags = db.DB_CREATE

    def setUp(self):
        BasicShelveTestCase.setUp(self)

        # pool to assign integer key values out of
        self.key_pool = list(range(1, 5000))
        self.key_map = {}     # map string keys to the number we gave them
        self.intkey_map = {}  # reverse map of above

    def mk(self, key):
        if key not in self.key_map:
            self.key_map[key] = self.key_pool.pop(0)
            self.intkey_map[self.key_map[key]] = key
        return self.key_map[key]

    def checkrec(self, intkey, value):
        key = self.intkey_map[intkey]
        BasicShelveTestCase.checkrec(self, key, value)

    def test03_append(self):
        if verbose:
            print '\n', '-=' * 30
            print "Running %s.test03_append..." % self.__class__.__name__

        self.d[1] = 'spam'
        self.d[5] = 'eggs'
        self.assertEqual(6, self.d.append('spam'))
        self.assertEqual(7, self.d.append('baked beans'))
        self.assertEqual('spam', self.d.get(6))
        self.assertEqual('spam', self.d.get(1))
        self.assertEqual('baked beans', self.d.get(7))
        self.assertEqual('eggs', self.d.get(5))
#----------------------------------------------------------------------

def test_suite():
    suite = unittest.TestSuite()

    suite.addTest(unittest.makeSuite(DBShelveTestCase))
    suite.addTest(unittest.makeSuite(BTreeShelveTestCase))
    suite.addTest(unittest.makeSuite(HashShelveTestCase))
    suite.addTest(unittest.makeSuite(ThreadBTreeShelveTestCase))
    suite.addTest(unittest.makeSuite(ThreadHashShelveTestCase))
    suite.addTest(unittest.makeSuite(EnvBTreeShelveTestCase))
    suite.addTest(unittest.makeSuite(EnvHashShelveTestCase))
    suite.addTest(unittest.makeSuite(EnvThreadBTreeShelveTestCase))
    suite.addTest(unittest.makeSuite(EnvThreadHashShelveTestCase))
    suite.addTest(unittest.makeSuite(RecNoShelveTestCase))

    return suite


if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
# File: src/main/generic_cpu/test3/generic_cpu.py | repo: cicerone/kosim | license: BSL-1.0
#!/usr/bin/env python
#==============================================================================================
# Copyright (c) 2009 Kotys LLC. Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
# Author: Cicerone Mihalache
# Support: kosim@kotys.biz
#==============================================================================================
import sys
import libkosim_generic_cpu_test3 as kosim_generic_cpu
#print len(sys.argv)
#for arg in sys.argv:
# print "arg(%s)\n" % (arg)
opt_builder = kosim_generic_cpu.OptionsBuilder()
for arg in sys.argv:
    opt_builder.SetArgument(arg)
opt_builder.BuildArgv()
opt_builder.InitProgramOptions()
kosim_generic_cpu.run_sim()
print "--- Test DONE ---"
# File: cogs/ObjectCache.py | repo: Deivedux/Shiramine | license: MIT
import time
import json
import sqlite3
import os
conn = sqlite3.connect('configs/Database.db')
c = conn.cursor()
start_time = time.time()
with open('configs/config.json') as json_data:
    config = json.load(json_data)

server_config_raw = c.execute("SELECT * FROM ServerConfig").fetchall()
server_config = {}

def server_cache(db_response):
    server_config[int(db_response[0])] = {}
    if db_response[1]:
        server_config[int(db_response[0])]['prefix'] = db_response[1]
    server_config[int(db_response[0])]['language'] = db_response[2]
    if db_response[3]:
        server_config[int(db_response[0])]['img_filter'] = int(db_response[3])
    server_config[int(db_response[0])]['member_persistence'] = int(db_response[12])
    if db_response[13]:
        server_config[int(db_response[0])]['server_log'] = int(db_response[13])

for i in server_config_raw:
    server_cache(i)

del server_config_raw

db_response = c.execute("SELECT * FROM URLFilters").fetchall()
url_filters = dict()

def url_filter_cache(db_response):
    try:
        url_filters[db_response[0]].append(db_response[1])
    except KeyError:
        url_filters[db_response[0]] = [db_response[1]]

for i in db_response:
    url_filter_cache(i)

response_string = {}
for i in os.listdir('./languages'):
    if i.endswith('.json'):
        with open(os.path.join('./languages', i)) as file:
            response = json.load(file)
            response_string[i.strip('.json')] = response

def get_lang(guild, response):
    try:
        return response_string[server_config[guild.id]['language']][response]
    except:
        return response_string['english'][response]
# File: src/python_lib_for_me/date.py | repo: silverag-corgi/python-lib-for-me | license: MIT
'''
Date utility module
'''
import calendar
from datetime import date, datetime, timedelta
from typing import Iterator
from zoneinfo import ZoneInfo
from dateutil.relativedelta import relativedelta
def get_first_date_of_this_month(base_date: date) -> date:
    '''
    Get the first date of this month.

    Args:
        base_date (date): base date

    Returns:
        date: first date of the month containing the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=0)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=1
    )
    return base_date_by_month_day
def get_last_date_of_this_month(base_date: date) -> date:
    '''
    Get the last date of this month.

    Args:
        base_date (date): base date

    Returns:
        date: last date of the month containing the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=0)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=calendar.monthrange(base_date_by_month.year, base_date_by_month.month)[1]
    )
    return base_date_by_month_day
def get_first_date_of_next_month(base_date: date) -> date:
    '''
    Get the first date of next month.

    Args:
        base_date (date): base date

    Returns:
        date: first date of the month after the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=1)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=1
    )
    return base_date_by_month_day
def get_last_date_of_next_month(base_date: date) -> date:
    '''
    Get the last date of next month.

    Args:
        base_date (date): base date

    Returns:
        date: last date of the month after the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=1)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=calendar.monthrange(base_date_by_month.year, base_date_by_month.month)[1]
    )
    return base_date_by_month_day
def get_first_date_of_last_month(base_date: date) -> date:
    '''
    Get the first date of last month.

    Args:
        base_date (date): base date

    Returns:
        date: first date of the month before the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=-1)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=1
    )
    return base_date_by_month_day
def get_last_date_of_last_month(base_date: date) -> date:
    '''
    Get the last date of last month.

    Args:
        base_date (date): base date

    Returns:
        date: last date of the month before the base date
    '''

    base_date_by_month: date = base_date + relativedelta(months=-1)
    base_date_by_month_day: date = base_date_by_month.replace(
        day=calendar.monthrange(base_date_by_month.year, base_date_by_month.month)[1]
    )
    return base_date_by_month_day
def gen_date_range(start_date: date, end_date: date) -> Iterator[date]:
    '''
    Generate a range of dates.

    Args:
        start_date (date): start date
        end_date (date): end date

    Yields:
        Iterator[date]: dates from start_date through end_date (inclusive)
    '''

    for count in range((end_date - start_date).days + 1):
        yield start_date + timedelta(days=count)
def convert_timestamp_to_jst(
        src_timestamp: str,
        src_timestamp_format: str = '%Y-%m-%d %H:%M:%S%z',
        jst_timestamp_format: str = '%Y-%m-%d %H:%M:%S'
    ) -> str:
    '''
    Convert a timestamp to JST.

    Args:
        src_timestamp (str)                  : source timestamp
        src_timestamp_format (str, optional) : format of the source timestamp
        jst_timestamp_format (str, optional) : format of the output (JST) timestamp

    Returns:
        str: timestamp in JST
    '''

    src_datetime: datetime = datetime.strptime(src_timestamp, src_timestamp_format)
    jst_datetime: datetime = src_datetime.astimezone(ZoneInfo('Japan'))
    jst_timestamp: str = datetime.strftime(jst_datetime, jst_timestamp_format)
    return jst_timestamp
def convert_timestamp_to_utc(
        src_timestamp: str,
        src_timestamp_format: str = '%Y-%m-%d %H:%M:%S%z',
        utc_timestamp_format: str = '%Y-%m-%d %H:%M:%S'
    ) -> str:
    '''
    Convert a timestamp to UTC.

    Args:
        src_timestamp (str)                  : source timestamp
        src_timestamp_format (str, optional) : format of the source timestamp
        utc_timestamp_format (str, optional) : format of the output (UTC) timestamp

    Returns:
        str: timestamp in UTC
    '''

    src_datetime: datetime = datetime.strptime(src_timestamp, src_timestamp_format)
    utc_datetime: datetime = src_datetime.astimezone(ZoneInfo('UTC'))
    utc_timestamp: str = datetime.strftime(utc_datetime, utc_timestamp_format)
    return utc_timestamp
# File: project/api/views.py | repo: akxen/pyomo-drf-docker | license: Apache-2.0
from django.shortcuts import render
from django.http import HttpResponse
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.response import Response
from .serializers import ModelDataSerializer
from .optimisation.model import run_model
class RunModel(APIView):
    """Construct, run, and solve model with data posted by user"""

    def post(self, request, format=None):
        serializer = ModelDataSerializer(data=request.data)
        if serializer.is_valid():
            result = run_model(data=serializer.data)
            return Response(result)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
# File: qanda/views.py | repo: Fnechz/StakeOverflow-Clone | license: MIT
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http.response import HttpResponseRedirect, HttpResponseBadRequest
from django.urls.base import reverse
from django.utils import timezone
from django.views.generic import (
    CreateView,
    DayArchiveView,
    DetailView,
    RedirectView,
    TemplateView,
    UpdateView,
)
from qanda.forms import QuestionForm, AnswerForm, AnswerAcceptanceForm
from qanda.models import Question, Answer
from qanda.service.elasticsearch import search_for_questions
from django.shortcuts import render
# Creating my views here.
class SearchView(TemplateView):
    template_name = 'qanda/search.html'

    def get_context_data(self, **kwargs):
        query = self.request.GET.get('q', None)
        ctx = super().get_context_data(query=query, **kwargs)
        if query:
            results = search_for_questions(query)
            ctx['hits'] = results
        return ctx
class TodaysQuestionList(RedirectView):
    def get_redirect_url(self, *args, **kwargs):
        today = timezone.now()
        return reverse(
            'questions:daily_questions',
            kwargs={
                'day': today.day,
                'month': today.month,
                'year': today.year,
            }
        )
class DailyQuestionList(DayArchiveView):
    queryset = Question.objects.all()
    date_field = 'created'
    month_format = '%m'
    allow_empty = True
class UpdateAnswerAcceptanceView(LoginRequiredMixin, UpdateView):
    form_class = AnswerAcceptanceForm
    queryset = Answer.objects.all()

    def get_success_url(self):
        return self.object.question.get_absolute_url()

    def form_invalid(self, form):
        return HttpResponseRedirect(
            redirect_to=self.object.question.get_absolute_url())
class AskQuestionView(LoginRequiredMixin, CreateView):
    form_class = QuestionForm
    template_name = 'qanda/ask.html'

    def get_initial(self):
        return {
            'user': self.request.user.id
        }

    def form_valid(self, form):
        action = self.request.POST.get('action')
        if action == 'SAVE':
            # save and redirect as usual
            return super().form_valid(form)
        elif action == 'PREVIEW':
            preview = Question(
                question=form.cleaned_data['question'],
                title=form.cleaned_data['title'])
            ctx = self.get_context_data(preview=preview)
            return self.render_to_response(context=ctx)
        return HttpResponseBadRequest()
class QuestionDetailView(DetailView):
    model = Question

    ACCEPT_FORM = AnswerAcceptanceForm(initial={'accepted': True})
    REJECT_FORM = AnswerAcceptanceForm(initial={'accepted': False})

    def get_context_data(self, **kwargs):
        ctx = super().get_context_data(**kwargs)
        ctx.update({
            'answer_form': AnswerForm(initial={
                'user': self.request.user.id,
                'question': self.object.id,
            })
        })
        if self.object.can_accept_answers(self.request.user):
            ctx.update({
                'accept_form': self.ACCEPT_FORM,
                'reject_form': self.REJECT_FORM,
            })
        return ctx
class CreateAnswerView(LoginRequiredMixin, CreateView):
    form_class = AnswerForm
    template_name = 'qanda/create_answer.html'

    def get_initial(self):
        return {
            'question': self.get_question().id,
            'user': self.request.user.id,
        }

    def get_context_data(self, **kwargs):
        return super().get_context_data(question=self.get_question(),
                                        **kwargs)

    def get_success_url(self):
        return self.object.question.get_absolute_url()

    def form_valid(self, form):
        action = self.request.POST.get('action')
        if action == 'SAVE':
            # save and redirect as usual
            return super().form_valid(form)
        elif action == 'PREVIEW':
            ctx = self.get_context_data(preview=form.cleaned_data['answer'])
            return self.render_to_response(context=ctx)
        return HttpResponseBadRequest()

    def get_question(self):
        return Question.objects.get(pk=self.kwargs['pk'])
# File: ClydeLog.py | repo: bnadeau/open-test-jig | license: Apache-2.0
import logging
import time
import os
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
format = "%(asctime)s %(levelname)-10s %(message)s"
id = time.strftime("%Y%m%d-%H%M%S")
#These are the sequences need to get colored ouput
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[1;%dm"
BOLD_SEQ = "\033[1m"
def formatter_message(message, use_color = True):
    if use_color:
        message = message.replace("$RESET", RESET_SEQ).replace("$BOLD", BOLD_SEQ)
    else:
        message = message.replace("$RESET", "").replace("$BOLD", "")
    return message
COLORS = {
    'WARNING': YELLOW,
    'INFO': WHITE,
    'DEBUG': BLUE,
    'CRITICAL': YELLOW,
    'ERROR': RED,
    'PASS': GREEN
}
class ColoredFormatter(logging.Formatter):
    def __init__(self, msg, use_color = True):
        logging.Formatter.__init__(self, msg)
        self.use_color = use_color

    def format(self, record):
        levelname = record.levelname
        if self.use_color and levelname in COLORS:
            levelname_color = COLOR_SEQ % (30 + COLORS[levelname]) + levelname + RESET_SEQ
            record.levelname = levelname_color
        return logging.Formatter.format(self, record)
PASS_LEVEL_NUM = 45
logging.addLevelName(PASS_LEVEL_NUM, 'PASS')
def success(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(PASS_LEVEL_NUM, message, args, **kws)
logging.Logger.success = success
def getLogger(name = 'clyde_log'):
    return logging.getLogger()
log = getLogger()
log.setLevel(logging.DEBUG)
# Make sure log directory exists
if not os.path.exists('log'):
    os.makedirs('log')
# Log to file
formatter = logging.Formatter(format)
filehandler = logging.FileHandler("log/clyde_%s.log" % id, "w")
filehandler.setLevel(logging.INFO)
filehandler.setFormatter(formatter)
log.addHandler(filehandler)
COLOR_FORMAT = formatter_message(format, True)
color_formatter = ColoredFormatter(COLOR_FORMAT)
# Log to stdout too
streamhandler = logging.StreamHandler()
streamhandler.setLevel(logging.DEBUG)
streamhandler.setFormatter(color_formatter)
log.addHandler(streamhandler)
# File: private_market/test.py | repo: sigmoid3/Dapper | license: MIT
from ethereum import tester as t
from ethereum import utils


def test():
    s = t.state()
    test_company = s.abi_contract('company.se', ADMIN_ACCOUNT=utils.decode_int(t.a0))
    order_book = s.abi_contract('orders.se')
    test_currency = s.abi_contract('currency.se', sender=t.k0)
    assert test_company.getAdmin() == t.a0.encode('hex')
    # Issue 1000 shares to user a1
    test_company.issueShares(1000, t.a1, sender=t.k0)
    # Issue 50000 coins to users a2 and a3
    test_currency.sendCoin(50000, t.a2, sender=t.k0)
    test_currency.sendCoin(50000, t.a3, sender=t.k0)
    # User a1 can have as many shares as he wants, but must retain at
    # least 800
    test_company.setShareholderMaxShares(t.a1, 2**100, sender=t.k0)
    test_company.setShareholderMinShares(t.a1, 800, sender=t.k0)
    # User a2 can have up to 500 shares
    test_company.setShareholderMaxShares(t.a2, 500, sender=t.k0)
    # User a2 tries to give himself the right to unlimited shares,
    # fails because he is not the admin
    test_company.setShareholderMaxShares(t.a2, 2**100, sender=t.k2)
    # A few sanity checks
    assert test_company.getCurrentShareholdingsOf(t.a1) == 1000
    assert test_company.getShareholderMinShares(t.a1) == 800
    assert test_company.getShareholderMaxShares(t.a2) == 500
    # User a1 transfers 150 shares to a2
    assert test_company.sendCoin(150, t.a2, sender=t.k1) is True
    # User a1 tries to transfer 150 shares to a2 again, fails because
    # such a transaction would result in a1 having 700 shares, which is
    # below his limit
    assert test_company.sendCoin(150, t.a2, sender=t.k1) is False
    # Check shareholdings
    assert test_company.getCurrentShareholdingsOf(t.a1) == 850
    assert test_company.getCurrentShareholdingsOf(t.a2) == 150
    # Authorize the order book contract to accept lockups
    test_company.setContractAuthorized(order_book.address, True)
    # User a1 puts up 50 shares for sale; however, he tries to do
    # this without first authorizing the order book to withdraw so
    # the operation fails
    assert order_book.mkSellOrder(test_company.address, 50,
                                  test_currency.address, 10000,
                                  sender=t.k1) == -1
    # Now, try to create the order properly
    test_company.authorizeLockup(order_book.address, 50, sender=t.k1)
    _id = order_book.mkSellOrder(test_company.address, 50,
                                 test_currency.address, 10000, sender=t.k1)
    assert _id >= 0
    assert test_company.getLockedShareholdingsOf(t.a1) == 50
    # Accept the order by a3. This should fail because a3 has not
    # authorized the order_book to withdraw coins
    assert order_book.claimSellOrder(_id, sender=t.k3) is False
    # Do the authorization
    test_currency.approveOnce(order_book.address, 10000, sender=t.k3)
    # It should still fail because a3 is not authorized to hold shares
    assert order_book.claimSellOrder(_id, sender=t.k3) is False
    # Now do it properly
    test_currency.approveOnce(order_book.address, 10000, sender=t.k2)
    assert order_book.claimSellOrder(_id, sender=t.k2) is True
    # Check shareholdings and balances
    assert test_company.getCurrentShareholdingsOf(t.a1) == 800
    assert test_company.getCurrentShareholdingsOf(t.a2) == 200
    assert test_company.getLockedShareholdingsOf(t.a1) == 0
    assert test_currency.coinBalanceOf(t.a1) == 10000
    assert test_currency.coinBalanceOf(t.a2) == 40000
    assert test_currency.coinBalanceOf(t.a3) == 50000
    # Authorize a3 to hold shares
    test_company.setShareholderMaxShares(t.a3, 500)
    # A3 buys shares
    test_currency.approveOnce(order_book.address, 20000, sender=t.k3)
    _id2 = order_book.mkBuyOrder(test_company.address, 100,
                                 test_currency.address, 20000, sender=t.k3)
    assert _id2 >= 0, _id2
    test_company.authorizeLockup(order_book.address, 100, sender=t.k2)
    assert order_book.claimBuyOrder(_id2, sender=t.k2) is True
    # Check shareholdings and balances
    assert test_company.getCurrentShareholdingsOf(t.a1) == 800
    assert test_company.getCurrentShareholdingsOf(t.a2) == 100
    assert test_company.getCurrentShareholdingsOf(t.a3) == 100
    assert test_company.getLockedShareholdingsOf(t.a1) == 0
    assert test_currency.coinBalanceOf(t.a1) == 10000
    assert test_currency.coinBalanceOf(t.a2) == 60000
    assert test_currency.coinBalanceOf(t.a3) == 30000


if __name__ == '__main__':
    test()
# File: EXOSIMS/Completeness/BrownCompleteness.py | repo: dgarrett622/EXOSIMS | license: BSD-3-Clause
# -*- coding: utf-8 -*-
import time
import numpy as np
from scipy import interpolate
import astropy.units as u
import astropy.constants as const
import os, inspect
try:
    import cPickle as pickle
except:
    import pickle
import hashlib
from EXOSIMS.Prototypes.Completeness import Completeness
from EXOSIMS.util.eccanom import eccanom
from EXOSIMS.util.deltaMag import deltaMag
class BrownCompleteness(Completeness):
    """Completeness class template

    This class contains all variables and methods necessary to perform
    Completeness Module calculations in exoplanet mission simulation.

    Args:
        \*\*specs:
            user specified values

    Attributes:
        minComp (float):
            Minimum completeness level for detection
        Nplanets (integer):
            Number of planets for initial completeness Monte Carlo simulation
        classpath (string):
            Path on disk to Brown Completeness
        filename (string):
            Name of file where completeness interpolant is stored
        visits (ndarray):
            Number of observations corresponding to each star in the target list
            (initialized in gen_update)
        updates (nx5 ndarray):
            Completeness values of successive observations of each star in the
            target list (initialized in gen_update)

    """

    def __init__(self, Nplanets=1e8, **specs):

        # bring in inherited Completeness prototype __init__ values
        Completeness.__init__(self, **specs)

        # Number of planets to sample
        self.Nplanets = int(Nplanets)

        # get path to completeness interpolant stored in a pickled .comp file
        self.classpath = os.path.split(inspect.getfile(self.__class__))[0]
        self.filename = specs['modules']['PlanetPopulation']

        atts = ['arange','erange','prange','Rprange','Mprange','scaleOrbits','constrainOrbits']
        extstr = ''
        for att in atts:
            extstr += '%s: ' % att + str(getattr(self.PlanetPopulation, att)) + ' '
        ext = hashlib.md5(extstr).hexdigest()
        self.filename += ext
def target_completeness(self, targlist):
"""Generates completeness values for target stars
This method is called from TargetList __init__ method.
Args:
targlist (TargetList):
TargetList class object
Returns:
comp0 (ndarray):
1D numpy array of completeness values for each target star
"""
# set up "ensemble visit photometric and obscurational completeness"
# interpolant for initial completeness values
# bins for interpolant
bins = 1000
# xedges is array of separation values for interpolant
xedges = np.linspace(0., self.PlanetPopulation.rrange[1].value, bins)*\
self.PlanetPopulation.arange.unit
xedges = xedges.to('AU').value
# yedges is array of delta magnitude values for interpolant
ymin = np.round((-2.5*np.log10(self.PlanetPopulation.prange[1]*\
(self.PlanetPopulation.Rprange[1]/(self.PlanetPopulation.rrange[0]))\
.decompose().value**2)))
ymax = np.round((-2.5*np.log10(self.PlanetPopulation.prange[0]*\
(self.PlanetPopulation.Rprange[0]/(self.PlanetPopulation.rrange[1]))\
.decompose().value**2*1e-11)))
yedges = np.linspace(ymin, ymax, bins)
# number of planets for each Monte Carlo simulation
nplan = int(np.min([1e6,self.Nplanets]))
# number of simulations to perform (must be integer)
steps = int(self.Nplanets/nplan)
# path to 2D completeness pdf array for interpolation
Cpath = os.path.join(self.classpath, self.filename+'.comp')
Cpdf, xedges2, yedges2 = self.genC(Cpath, nplan, xedges, yedges, steps)
EVPOCpdf = interpolate.RectBivariateSpline(xedges, yedges, Cpdf.T)
EVPOC = np.vectorize(EVPOCpdf.integral)
# calculate separations based on IWA
smin = np.tan(targlist.OpticalSystem.IWA)*targlist.dist
if np.isinf(targlist.OpticalSystem.OWA):
smax = xedges[-1]*u.AU
else:
smax = np.tan(targlist.OpticalSystem.OWA)*targlist.dist
# calculate dMags based on limiting dMag
dMagmax = targlist.OpticalSystem.dMagLim #np.array([targlist.OpticalSystem.dMagLim]*targlist.nStars)
dMagmin = ymin
if self.PlanetPopulation.scaleOrbits:
L = np.where(targlist.L>0, targlist.L, 1e-10) #take care of zero/negative values
smin = smin/np.sqrt(L)
smax = smax/np.sqrt(L)
dMagmin -= 2.5*np.log10(L)
dMagmax -= 2.5*np.log10(L)
comp0 = EVPOC(smin.to('AU').value, smax.to('AU').value, dMagmin, dMagmax)
return comp0
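The EVPOC integral above evaluates completeness as a double integral of the (separation, dMag) probability density over the detectable rectangle. A minimal numpy sketch of that integral, using a midpoint rule over a binned pdf instead of the RectBivariateSpline used above (the function name and bin layout are illustrative assumptions):

```python
import numpy as np

def completeness_from_pdf(Cpdf, xedges, yedges, smin, smax, dmin, dmax):
    # Midpoint-rule stand-in for EVPOCpdf.integral: Cpdf is indexed [y, x]
    # (dMag rows, separation columns), as np.histogram2d returns it.
    dx = xedges[1] - xedges[0]
    dy = yedges[1] - yedges[0]
    xc = 0.5 * (xedges[:-1] + xedges[1:])   # bin centers in separation
    yc = 0.5 * (yedges[:-1] + yedges[1:])   # bin centers in dMag
    mx = (xc >= smin) & (xc <= smax)
    my = (yc >= dmin) & (yc <= dmax)
    return Cpdf[np.ix_(my, mx)].sum() * dx * dy
```

With a pdf normalized to integrate to one over the full grid, integrating the full ranges returns approximately 1.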
def gen_update(self, targlist):
"""Generates dynamic completeness values for multiple visits of each
star in the target list
Args:
targlist (TargetList):
TargetList module
"""
print 'Beginning completeness update calculations'
self.visits = np.array([0]*targlist.nStars)
self.updates = []
# number of planets to simulate
nplan = int(2e4)
# normalization time
dt = 1e9*u.day
# sample quantities which do not change in time
a = self.PlanetPopulation.gen_sma(nplan) # AU
e = self.PlanetPopulation.gen_eccen(nplan)
I = self.PlanetPopulation.gen_I(nplan) # deg
O = self.PlanetPopulation.gen_O(nplan) # deg
w = self.PlanetPopulation.gen_w(nplan) # deg
p = self.PlanetPopulation.gen_albedo(nplan)
Rp = self.PlanetPopulation.gen_radius(nplan) # km
Mp = self.PlanetPopulation.gen_mass(nplan) # kg
rmax = a*(1.+e)
rmin = a*(1.-e)
# sample quantity which will be updated
M = np.random.uniform(high=2.*np.pi,size=nplan)
newM = np.zeros((nplan,))
# population values
smin = (np.tan(targlist.OpticalSystem.IWA)*targlist.dist).to('AU')
if np.isfinite(targlist.OpticalSystem.OWA):
smax = (np.tan(targlist.OpticalSystem.OWA)*targlist.dist).to('AU')
else:
smax = np.array([np.max(self.PlanetPopulation.arange.to('AU').value)*\
(1.+np.max(self.PlanetPopulation.erange))]*targlist.nStars)*u.AU
# fill dynamic completeness values
for sInd in xrange(targlist.nStars):
Mstar = targlist.MsTrue[sInd]*const.M_sun
# remove rmax < smin and rmin > smax
inside = np.where(rmax > smin[sInd])[0]
outside = np.where(rmin < smax[sInd])[0]
pInds = np.intersect1d(inside,outside)
dynamic = []
# calculate for 5 successive observations
for num in xrange(5):
if not pInds.any():
dynamic.append(0.)
break
# find Eccentric anomaly
if num == 0:
E = eccanom(M[pInds],e[pInds])
newM[pInds] = M[pInds]
else:
E = eccanom(newM[pInds],e[pInds])
r = a[pInds]*(1.-e[pInds]*np.cos(E))
r1 = r*(np.cos(E) - e[pInds])
r1 = np.hstack((r1.reshape(len(r1),1), r1.reshape(len(r1),1), r1.reshape(len(r1),1)))
r2 = (r*np.sin(E)*np.sqrt(1. - e[pInds]**2))
r2 = np.hstack((r2.reshape(len(r2),1), r2.reshape(len(r2),1), r2.reshape(len(r2),1)))
a1 = np.cos(O[pInds])*np.cos(w[pInds]) - np.sin(O[pInds])*np.sin(w[pInds])*np.cos(I[pInds])
a2 = np.sin(O[pInds])*np.cos(w[pInds]) + np.cos(O[pInds])*np.sin(w[pInds])*np.cos(I[pInds])
a3 = np.sin(w[pInds])*np.sin(I[pInds])
A = np.hstack((a1.reshape(len(a1),1), a2.reshape(len(a2),1), a3.reshape(len(a3),1)))
b1 = -np.cos(O[pInds])*np.sin(w[pInds]) - np.sin(O[pInds])*np.cos(w[pInds])*np.cos(I[pInds])
b2 = -np.sin(O[pInds])*np.sin(w[pInds]) + np.cos(O[pInds])*np.cos(w[pInds])*np.cos(I[pInds])
b3 = np.cos(w[pInds])*np.sin(I[pInds])
B = np.hstack((b1.reshape(len(b1),1), b2.reshape(len(b2),1), b3.reshape(len(b3),1)))
# planet position, planet-star distance, apparent separation
r = (A*r1 + B*r2)*u.AU # position vector
d = np.sqrt(np.sum(r**2, axis=1)) # planet-star distance
s = np.sqrt(np.sum(r[:,0:2]**2, axis=1)) # apparent separation
beta = np.arccos(r[:,2]/d) # phase angle
Phi = self.PlanetPhysicalModel.calc_Phi(beta) # phase function
dMag = deltaMag(p[pInds],Rp[pInds],d,Phi) # difference in magnitude
toremoves = np.where((s > smin[sInd]) & (s < smax[sInd]))[0]
toremovedmag = np.where(dMag < targlist.OpticalSystem.dMagLim)[0]
toremove = np.intersect1d(toremoves, toremovedmag)
pInds = np.delete(pInds, toremove)
if num == 0:
dynamic.append(targlist.comp0[sInd])
else:
dynamic.append(float(len(toremove))/nplan)
# update M
mu = const.G*(Mstar+Mp[pInds])
n = np.sqrt(mu/a[pInds]**3)
newM[pInds] = (newM[pInds] + n*dt)/(2*np.pi) % 1 * 2.*np.pi
self.updates.append(dynamic)
if (sInd+1) % 50 == 0:
print 'stars: %r / %r' % (sInd+1,targlist.nStars)
self.updates = np.array(self.updates)
print 'Completeness update calculations finished'
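The `newM` update above advances each planet's mean anomaly by `n*dt` and wraps it back into [0, 2π). A unitless sketch of that propagation step (hypothetical helper name; the code above carries astropy units through `mu` and `n`):

```python
import numpy as np

def propagate_mean_anomaly(M, n, dt):
    # Equivalent to (M + n*dt)/(2*pi) % 1 * 2*pi in gen_update above:
    # advance by the mean motion, then wrap into [0, 2*pi).
    return (np.asarray(M) + np.asarray(n) * dt) % (2.0 * np.pi)
```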
def completeness_update(self, sInd, targlist, obsbegin, obsend, nexttime):
"""Updates completeness value for stars previously observed
Args:
sInd (integer):
Index of star just observed
targlist (TargetList):
TargetList class module
obsbegin (astropy Quantity):
Time of observation begin in units of day
obsend (astropy Quantity):
Time of observation end in units of day
nexttime (astropy Quantity):
Time of next observational period in units of day
Returns:
comp0 (ndarray):
Completeness values for each star in the target list
"""
self.visits[sInd] += 1
if self.visits[sInd] > len(self.updates[sInd])-1:
targlist.comp0[sInd] = self.updates[sInd][-1]
else:
targlist.comp0[sInd] = self.updates[sInd][self.visits[sInd]]
return targlist.comp0
def genC(self, Cpath, nplan, xedges, yedges, steps):
"""Gets completeness interpolant for initial completeness
This function either loads a completeness .comp file based on specified
Planet Population module or performs Monte Carlo simulations to get
the 2D completeness values needed for interpolation.
Args:
Cpath (string):
path to 2D completeness value array
nplan (float):
number of planets used in each simulation
xedges (ndarray):
1D numpy ndarray of x edge of 2d histogram (separation)
yedges (ndarray):
1D numpy ndarray of y edge of 2d histogram (dMag)
steps (integer):
number of simulations to perform
Returns:
H (ndarray):
2D numpy ndarray of completeness probability density values
"""
# if the 2D completeness pdf array exists as a .comp file load it
if os.path.exists(Cpath):
print 'Loading cached completeness file from "%s".' % Cpath
H = pickle.load(open(Cpath, 'rb'))
print 'Completeness loaded from cache.'
#h, xedges, yedges = self.hist(nplan, xedges, yedges)
else:
# run Monte Carlo simulation and pickle the resulting array
print 'Cached completeness file not found at "%s".' % Cpath
print 'Beginning Monte Carlo completeness calculations.'
t0, t1 = None, None # keep track of per-iteration time
for i in xrange(steps):
t0, t1 = t1, time.time()
if t0 is None:
delta_t_msg = '' # no message
else:
delta_t_msg = '[%.3f s/iteration]' % (t1 - t0)
print 'Completeness iteration: %5d / %5d %s' % (i+1, steps, delta_t_msg)
# get completeness histogram
h, xedges, yedges = self.hist(nplan, xedges, yedges)
if i == 0:
H = h
else:
H += h
H = H/(self.Nplanets*(xedges[1]-xedges[0])*(yedges[1]-yedges[0]))
# store 2D completeness pdf array as .comp file
pickle.dump(H, open(Cpath, 'wb'))
print 'Monte Carlo completeness calculations finished'
print '2D completeness array stored in %r' % Cpath
return H, xedges, yedges
def hist(self, nplan, xedges, yedges):
"""Returns completeness histogram for Monte Carlo simulation
This function uses the inherited Planet Population module.
Args:
nplan (float):
Number of planets used
xedges (ndarray):
1D numpy ndarray of x edge of 2d histogram (separation)
yedges (ndarray):
1D numpy ndarray of y edge of 2d histogram (dMag)
Returns:
h (ndarray):
2D numpy ndarray containing completeness histogram
"""
s, dMag = self.genplans(nplan)
# get histogram
h, yedges, xedges = np.histogram2d(dMag, s.to('AU').value, bins=1000, \
range=[[yedges.min(), yedges.max()], [xedges.min(), xedges.max()]])
return h, xedges, yedges
def genplans(self, nplan):
"""Generates planet data needed for Monte Carlo simulation
Args:
nplan (integer):
Number of planets
Returns:
s (astropy Quantity array):
Planet apparent separations in units of AU
dMag (ndarray):
Difference in brightness
"""
nplan = int(nplan)
# sample uniform distribution of mean anomaly
M = np.random.uniform(high=2.*np.pi,size=nplan)
# sample semi-major axis
a = self.PlanetPopulation.gen_sma(nplan).to('AU').value
# sample other necessary orbital parameters
if np.sum(self.PlanetPopulation.erange) == 0:
# all circular orbits
r = a
e = 0.
E = M
else:
# sample eccentricity
if self.PlanetPopulation.constrainOrbits:
e = self.PlanetPopulation.gen_eccen_from_sma(nplan,a*u.AU)
else:
e = self.PlanetPopulation.gen_eccen(nplan)
# Newton-Raphson to find E
E = eccanom(M,e)
# orbital radius
r = a*(1-e*np.cos(E))
# orbit angle sampling
O = self.PlanetPopulation.gen_O(nplan).to('rad').value
w = self.PlanetPopulation.gen_w(nplan).to('rad').value
I = self.PlanetPopulation.gen_I(nplan).to('rad').value
r1 = r*(np.cos(E) - e)
r1 = np.hstack((r1.reshape(len(r1),1), r1.reshape(len(r1),1), r1.reshape(len(r1),1)))
r2 = r*np.sin(E)*np.sqrt(1. - e**2)
r2 = np.hstack((r2.reshape(len(r2),1), r2.reshape(len(r2),1), r2.reshape(len(r2),1)))
a1 = np.cos(O)*np.cos(w) - np.sin(O)*np.sin(w)*np.cos(I)
a2 = np.sin(O)*np.cos(w) + np.cos(O)*np.sin(w)*np.cos(I)
a3 = np.sin(w)*np.sin(I)
A = np.hstack((a1.reshape(len(a1),1), a2.reshape(len(a2),1), a3.reshape(len(a3),1)))
b1 = -np.cos(O)*np.sin(w) - np.sin(O)*np.cos(w)*np.cos(I)
b2 = -np.sin(O)*np.sin(w) + np.cos(O)*np.cos(w)*np.cos(I)
b3 = np.cos(w)*np.sin(I)
B = np.hstack((b1.reshape(len(b1),1), b2.reshape(len(b2),1), b3.reshape(len(b3),1)))
# planet position, planet-star distance, apparent separation
r = (A*r1 + B*r2)*u.AU
d = np.sqrt(np.sum(r**2, axis=1))
s = np.sqrt(np.sum(r[:,0:2]**2, axis=1))
# sample albedo, planetary radius, phase function
p = self.PlanetPopulation.gen_albedo(nplan)
Rp = self.PlanetPopulation.gen_radius(nplan)
beta = np.arccos(r[:,2]/d)
Phi = self.PlanetPhysicalModel.calc_Phi(beta)
# calculate dMag
dMag = deltaMag(p,Rp,d,Phi)
return s, dMag
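Both `genplans` and `gen_update` above rely on `eccanom` to invert Kepler's equation M = E - e sin(E) by Newton-Raphson. A self-contained sketch of such a solver (a hypothetical stand-in for the `eccanom` helper; starting guess and tolerances are illustrative):

```python
import numpy as np

def eccanom_sketch(M, e, tol=1e-10, maxiter=100):
    # Newton-Raphson on f(E) = E - e*sin(E) - M, vectorized over arrays.
    M = np.asarray(M, dtype=float)
    e = np.asarray(e, dtype=float)
    E = M.copy()                      # E = M is a reasonable starting guess
    for _ in range(maxiter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.max(np.abs(dE)) < tol:
            break
    return E
```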
# greensinversion/regularization.py (isuthermography/greensinversion, Unlicense)
import numpy as np
def apply_tikhonov_regularization(u,s,v,usetikparam,vector):
    #alpha = usetikparam*np.sqrt(u.shape[0]/v.shape[1])
    # Tikhonov parameter interpreted as scaled by sqrt(matrix rows / matrix
    # cols) so that it is directly interpretable as NETD/NESI (noise
    # equivalent temperature difference over noise equivalent source
    # intensity, with NETD measured in deg. K and NESI measured in J/m^2).
# NOTE: u and v no longer orthogonal as they have already been pre-multiplied by scaling factors
# tikhonov scaling temporarily disabled
alpha=usetikparam
d = s/(s**2+alpha**2)
#inverse = np.dot(v.T*(d.reshape(1,d.shape[0])),u.T)
#return inverse
return np.dot(v.T,np.dot(u.T,vector)*d)
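For context, `apply_tikhonov_regularization` applies the filter factors d = s/(s² + α²) to the SVD of the forward operator; with α = 0 it reduces to the ordinary least-squares (pseudo-inverse) solution. A minimal sketch under that assumption (the real `u` and `v` here are pre-scaled and no longer orthogonal, so this only illustrates the orthogonal case):

```python
import numpy as np

def tikhonov_solve(u, s, v, alpha, vector):
    # Same arithmetic as apply_tikhonov_regularization above.
    d = s / (s**2 + alpha**2)
    return np.dot(v.T, np.dot(u.T, vector) * d)

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
x_true = np.array([2.0, 3.0])
u, s, v = np.linalg.svd(A, full_matrices=False)   # A = u @ diag(s) @ v
x = tikhonov_solve(u, s, v, 0.0, A @ x_true)      # alpha=0: plain pseudo-inverse
```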
# spydrnet/plugins/namespace_manager/tests/test_edif_namespace.py (ganeshgore/spydrnet, BSD-3-Clause)
import unittest
import spydrnet as sdn
class TestEdifNamespace(unittest.TestCase):
original_default = None
@classmethod
def setUpClass(cls) -> None:
cls.original_default = sdn.namespace_manager.default
sdn.namespace_manager.default = "EDIF"
@classmethod
def tearDownClass(cls) -> None:
sdn.namespace_manager.default = cls.original_default
def gen_netlist(self):
netlist = sdn.Netlist()
return netlist
def gen_library(self):
netlist = self.gen_netlist()
lib = netlist.create_library()
return lib
def gen_definition(self):
lib = self.gen_library()
defin = lib.create_definition()
return defin
def test_basic_setup(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib2 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib1"
lib2['EDIF.identifier'] = "my_lib2"
def1 = lib1.create_definition()
def1['EDIF.identifier'] = "d1"
def2 = lib2.create_definition()
def2['EDIF.identifier'] = "d1"
def3 = lib1.create_definition()
def3['EDIF.identifier'] = "my_lib1"
c1 = def1.create_cable()
p1 = def1.create_port()
i1 = def1.create_child()
c2 = def1.create_cable()
p2 = def1.create_port()
i2 = def1.create_child()
c1['EDIF.identifier'] = "&1"
i1['EDIF.identifier'] = "&1"
p1['EDIF.identifier'] = "&1"
c2['EDIF.identifier'] = "&2"
i2['EDIF.identifier'] = "&2"
p2['EDIF.identifier'] = "&2"
def test_dont_track_orphaned(self):
netlist = self.gen_netlist()
lib1 = sdn.Library()
lib2 = sdn.Library()
lib1['EDIF.identifier'] = "my_lib1"
lib2['EDIF.identifier'] = "my_lib1"
@unittest.expectedFailure
def test_duplicate_library_name(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib2 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib"
lib2['EDIF.identifier'] = "my_lib"
@unittest.expectedFailure
def test_duplicate_definition_name(self):
lib1 = self.gen_library()
def1 = lib1.create_definition()
def2 = lib1.create_definition()
def1['EDIF.identifier'] = "my_lib"
def2['EDIF.identifier'] = "my_lib"
def test_duplicate_definition_elements(self):
def1 = self.gen_definition()
port = def1.create_port()
instance = def1.create_child()
cable = def1.create_cable()
port['EDIF.identifier'] = "my_lib"
instance['EDIF.identifier'] = "my_lib"
cable['EDIF.identifier'] = "my_lib"
@unittest.expectedFailure
def test_duplicate_definition_ports(self):
def1 = self.gen_definition()
port = def1.create_port()
port2 = def1.create_port()
port['EDIF.identifier'] = "my_lib"
port2['EDIF.identifier'] = "my_lib"
@unittest.expectedFailure
def test_duplicate_definition_cables(self):
def1 = self.gen_definition()
cable = def1.create_cable()
cable2 = def1.create_cable()
cable['EDIF.identifier'] = "my_lib"
cable2['EDIF.identifier'] = "my_lib"
@unittest.expectedFailure
def test_duplicate_definition_children(self):
def1 = self.gen_definition()
instance = def1.create_child()
instance2 = def1.create_child()
instance['EDIF.identifier'] = "my_lib"
instance2['EDIF.identifier'] = "my_lib"
def test_rename(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib1"
lib1['EDIF.identifier'] = "my_lib2"
lib1['EDIF.identifier'] = "my_lib1"
lib2 = netlist.create_library()
lib2['EDIF.identifier'] = "my_lib2"
def1 = lib1.create_definition()
def1['EDIF.identifier'] = "my_lib1"
def1['EDIF.identifier'] = "my_lib2"
def1['EDIF.identifier'] = "my_lib1"
def2 = lib1.create_definition()
def2['EDIF.identifier'] = "my_lib2"
c = def1.create_cable()
c['EDIF.identifier'] = "&1"
c['EDIF.identifier'] = "&2"
c['EDIF.identifier'] = "&1"
p = def1.create_port()
p['EDIF.identifier'] = "&1"
p['EDIF.identifier'] = "&2"
p['EDIF.identifier'] = "&1"
i = def1.create_child()
i['EDIF.identifier'] = "&1"
i['EDIF.identifier'] = "&2"
i['EDIF.identifier'] = "&1"
def test_remove(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib1"
netlist.remove_library(lib1)
lib2 = netlist.create_library()
lib2['EDIF.identifier'] = "my_lib1"
def1 = lib2.create_definition()
def1['EDIF.identifier'] = "my_lib1"
lib2.remove_definition(def1)
def2 = lib2.create_definition()
def2['EDIF.identifier'] = "my_lib1"
c1 = def2.create_cable()
c2 = def2.create_cable()
p1 = def2.create_port()
p2 = def2.create_port()
i1 = def2.create_child()
i2 = def2.create_child()
c1['EDIF.identifier'] = "&1"
def2.remove_cable(c1)
c2['EDIF.identifier'] = "&1"
p1['EDIF.identifier'] = "&1"
def2.remove_port(p1)
p2['EDIF.identifier'] = "&1"
i1['EDIF.identifier'] = "&1"
def2.remove_child(i1)
i2['EDIF.identifier'] = "&1"
def test_orphaned_add(self):
netlist = self.gen_netlist()
lib1 = sdn.Library()
lib1["EDIF.identifier"] = '&1'
netlist.add_library(lib1)
@unittest.expectedFailure
def test_orphaned_add_collision(self):
netlist = self.gen_netlist()
lib1 = sdn.Library()
lib1["EDIF.identifier"] = '&1'
netlist.add_library(lib1)
lib2 = sdn.Library()
lib2["EDIF.identifier"] = '&1'
netlist.add_library(lib2)
def test_remove_twice_library(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib1"
netlist.remove_library(lib1)
self.assertRaises(Exception, netlist.remove_library, lib1)
def test_remove_twice_definition(self):
lib = self.gen_library()
d1 = lib.create_definition()
d1['EDIF.identifier'] = "&1"
lib.remove_definition(d1)
self.assertRaises(Exception, lib.remove_definition, d1)
def test_remove_untracked(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
def1 = lib1.create_definition()
c1 = def1.create_cable()
p1 = def1.create_port()
i1 = def1.create_child()
def1.remove_cable(c1)
def1.remove_child(i1)
def1.remove_port(p1)
lib1.remove_definition(def1)
netlist.remove_library(lib1)
def test_remove_tracked(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib1["EDIF.identifier"] = "test"
def1 = lib1.create_definition()
def1["EDIF.identifier"] = "test"
c1 = def1.create_cable()
c1["EDIF.identifier"] = "test"
p1 = def1.create_port()
p1["EDIF.identifier"] = "test"
i1 = def1.create_child()
i1["EDIF.identifier"] = "test"
def1.remove_cable(c1)
def1.remove_child(i1)
def1.remove_port(p1)
lib1.remove_definition(def1)
netlist.remove_library(lib1)
def test_pop_name(self):
netlist = self.gen_netlist()
lib1 = netlist.create_library()
lib1['EDIF.identifier'] = "my_lib1"
lib1.pop('EDIF.identifier')
lib2 = netlist.create_library()
lib2['EDIF.identifier'] = "my_lib1"
def1 = lib2.create_definition()
def1['EDIF.identifier'] = "my_lib1"
def1.pop('EDIF.identifier')
def2 = lib2.create_definition()
def2['EDIF.identifier'] = "my_lib1"
c1 = def2.create_cable()
c2 = def2.create_cable()
p1 = def2.create_port()
p2 = def2.create_port()
i1 = def2.create_child()
i2 = def2.create_child()
c1['EDIF.identifier'] = "&1"
c1.pop('EDIF.identifier')
c2['EDIF.identifier'] = "&1"
p1['EDIF.identifier'] = "&1"
p1.pop('EDIF.identifier')
p2['EDIF.identifier'] = "&1"
i1['EDIF.identifier'] = "&1"
i1.pop('EDIF.identifier')
i2['EDIF.identifier'] = "&1"
# TODO: rename an object
# TODO: orphan an object and see what happens
# plugins/maya/inventory/action_update_namespace.py (davidlatwe/reveries-config, MIT)
import avalon.api
class UpdateNamespace(avalon.api.InventoryAction):
"""Update container imprinted namespace
    Sometimes an artist may import loaded subsets from another scene, which
    may prefix an extra namespace on top of those subsets, but the namespace
    attribute in the container does not get updated, so actions like version
    updating run into errors.
    This action looks up the subset group node's namespace and updates the
    container if the namespace is not consistent.
"""
label = "Namespace Dirty"
icon = "wrench"
color = "#F13A3A"
order = -101
@staticmethod
def is_compatible(container):
from reveries.maya import lib
if not ("subsetGroup" in container and container["subsetGroup"]):
return False
if container["loader"] in ["USDSetdressLoader", "USDLayoutLoader"]:
return False
namespace = lib.get_ns(container["subsetGroup"])
return container["namespace"] != namespace
def process(self, containers):
from maya import cmds
from avalon.tools import sceneinventory
from reveries.maya import lib
for container in containers:
namespace = lib.get_ns(container["subsetGroup"])
con_node = container["objectName"]
cmds.setAttr(con_node + ".namespace", namespace, type="string")
container["namespace"] = namespace
sceneinventory.app.window.refresh()
# dags/clix_static_visuals_dag.py (CLIxIndia-Dev/clix_dashboard_backend_AF, Apache-2.0)

# This DAG runs python scripts to generate static visualisation data
# from syncthing at the end of every month.
import airflow
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.operators.dummy_operator import DummyOperator
from datetime import date, timedelta, datetime
import scripts.sync_school_data as sync_school_data
import scripts.process_raw_school_data as process_raw_school_data
import config.clix_config as clix_config
tools_modules_server_logs_datapath = clix_config.local_dst_state_data_logs
# --------------------------------------------------------------------------------
# set default arguments
# --------------------------------------------------------------------------------
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': airflow.utils.dates.days_ago(1),
#'email': ['airflow@example.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
'provide_context': True,
# 'queue': 'bash_queue',
# 'pool': 'backfill',
# 'priority_weight': 10,
# 'end_date': datetime(2016, 1, 1),
}
dag = DAG(
'clix_static_visuals_dag', default_args=default_args,
schedule_interval= '@monthly')
# --------------------------------------------------------------------------------
# Each state is synced independently. We have four states, and the syncthing
# data folders corresponding to those states are synced through sync_school_data.
# --------------------------------------------------------------------------------
#sshHook = SSHHook(conn_id=<YOUR CONNECTION ID FROM THE UI>)
#dummy_operator = DummyOperator(task_id='dummy_task', retries=3, dag=dag)
list_of_state_vis = []
for each_state in clix_config.static_visuals_states:
src = clix_config.remote_src_static_vis + each_state
dst = clix_config.local_dst_static_vis + each_state
list_of_tasks_chunks = []
#sync_state_data = SSHExecuteOperator( task_id="task1",
#bash_command= rsync -avzhe ssh {0}@{1}:{2} {3}".format(user, ip, src, dst),
#ssh_hook=sshHook,
#dag=dag)
sync_state_data = PythonOperator(
task_id='sync_state_data_' + each_state,
python_callable=sync_school_data.rsync_data_ssh,
op_kwargs={'state': each_state, 'src': src, 'dst': dst, 'static_flag': True},
dag=dag, retries=0)
# For parallel processing of files in the list of schools updated
# we use three parallel tasks each taking the portion of the list
# of files. This is done instead of generating tasks dynamically.
# number of schools chunks is set to clix_config.num_school_chunks
# refer: https://stackoverflow.com/questions/55672724/airflow-creating-dynamic-tasks-from-xcom
for each in list(range(clix_config.num_school_chunks)):
if each_state == 'ts':
each_state_new = 'tg'
elif each_state == 'cg':
each_state_new = 'ct'
else:
each_state_new = each_state
process_state_raw_data = PythonOperator(
task_id='process_raw_state_data_' + str(each) + '_' + each_state_new,
python_callable=process_raw_school_data.process_school_data,
op_kwargs={'state': each_state_new, 'chunk': each},
dag=dag)
list_of_tasks_chunks.append(process_state_raw_data)
sync_state_data.set_downstream(process_state_raw_data)
combine_state_chunks = PythonOperator(
task_id='combine_chunks_' + each_state_new,
python_callable=process_raw_school_data.combine_chunks,
op_kwargs={'state': each_state_new},
dag=dag)
list_of_tasks_chunks >> combine_state_chunks
get_state_static_vis_data = PythonOperator(
task_id = 'get_static_vis_' + each_state_new,
python_callable = process_raw_school_data.get_state_static_vis_data,
op_kwargs = {'state': each_state_new, 'all_states_flag': False},
dag=dag)
list_of_state_vis.append(get_state_static_vis_data)
combine_state_chunks >> get_state_static_vis_data
get_static_vis_data_all = PythonOperator(
task_id = 'get_static_vis_data_allstates',
python_callable = process_raw_school_data.get_state_static_vis_data,
op_kwargs = {'state': None, 'all_states_flag': True},
dag=dag)
list_of_state_vis >> get_static_vis_data_all
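The chunking scheme above spreads a state's schools across `clix_config.num_school_chunks` parallel tasks instead of generating tasks dynamically. One simple way to implement such a split (a hypothetical round-robin helper, not taken from `process_raw_school_data`):

```python
def chunk_round_robin(items, num_chunks):
    # Give every num_chunks-th item to the same worker; all items are
    # covered and chunk sizes differ by at most one.
    return [items[i::num_chunks] for i in range(num_chunks)]
```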
# src/OFS/CopySupport.py (MatthewWilkes/Zope, ZPL-2.1)
##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Copy interface
"""
from cgi import escape
from marshal import dumps
from marshal import loads
import re
import sys
import tempfile
from urllib import quote
from urllib import unquote
import warnings
from zlib import compress
from zlib import decompress
import transaction
from AccessControl import ClassSecurityInfo
from AccessControl import getSecurityManager
from AccessControl.class_init import InitializeClass
from AccessControl.Permissions import view_management_screens
from AccessControl.Permissions import copy_or_move
from AccessControl.Permissions import delete_objects
from Acquisition import aq_base
from Acquisition import aq_inner
from Acquisition import aq_parent
from App.Dialogs import MessageDialog
from App.special_dtml import HTML
from App.special_dtml import DTMLFile
from ExtensionClass import Base
from webdav.Lockable import ResourceLockedError
from zExceptions import Unauthorized, BadRequest
from ZODB.POSException import ConflictError
from zope.interface import implements
from zope.event import notify
from zope.lifecycleevent import ObjectCopiedEvent
from zope.lifecycleevent import ObjectMovedEvent
from zope.container.contained import notifyContainerModified
from OFS.event import ObjectWillBeMovedEvent
from OFS.event import ObjectClonedEvent
from OFS.interfaces import ICopyContainer
from OFS.interfaces import ICopySource
from OFS.Moniker import loadMoniker
from OFS.Moniker import Moniker
from OFS.subscribers import compatibilityCall
class CopyError(Exception):
pass
copy_re = re.compile('^copy([0-9]*)_of_(.*)')
_marker=[]
class CopyContainer(Base):
"""Interface for containerish objects which allow cut/copy/paste"""
implements(ICopyContainer)
security = ClassSecurityInfo()
# The following three methods should be overridden to store sub-objects
# as non-attributes.
def _setOb(self, id, object):
setattr(self, id, object)
def _delOb(self, id):
delattr(self, id)
def _getOb(self, id, default=_marker):
if hasattr(aq_base(self), id):
return getattr(self, id)
if default is _marker:
raise AttributeError(id)
return default
def manage_CopyContainerFirstItem(self, REQUEST):
return self._getOb(REQUEST['ids'][0])
def manage_CopyContainerAllItems(self, REQUEST):
return [self._getOb(i) for i in REQUEST['ids']]
security.declareProtected(delete_objects, 'manage_cutObjects')
def manage_cutObjects(self, ids=None, REQUEST=None):
"""Put a reference to the objects named in ids in the clip board"""
if ids is None and REQUEST is not None:
return eNoItemsSpecified
elif ids is None:
raise ValueError('ids must be specified')
if type(ids) is type(''):
ids=[ids]
oblist=[]
for id in ids:
ob=self._getOb(id)
if ob.wl_isLocked():
raise ResourceLockedError('Object "%s" is locked via WebDAV'
% ob.getId())
if not ob.cb_isMoveable():
raise CopyError(eNotSupported % escape(id))
m = Moniker(ob)
oblist.append(m.dump())
cp=(1, oblist)
cp=_cb_encode(cp)
if REQUEST is not None:
resp=REQUEST['RESPONSE']
resp.setCookie('__cp', cp, path='%s' % cookie_path(REQUEST))
REQUEST['__cp'] = cp
return self.manage_main(self, REQUEST)
return cp
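manage_cutObjects stores the clipboard as a single cookie-safe string via `_cb_encode` (defined elsewhere in this module). Judging from the imports at the top of the file, the encoding is marshal, then zlib-compress, then URL-quote, with `_cb_decode` reversing it; a Python 3 sketch of that round trip (names and the exact pipeline are assumptions):

```python
import zlib
from marshal import dumps, loads
from urllib.parse import quote, unquote_to_bytes

def cb_encode(data):
    # clipboard tuple -> marshal bytes -> compressed -> cookie-safe string
    return quote(zlib.compress(dumps(data), 9))

def cb_decode(s):
    return loads(zlib.decompress(unquote_to_bytes(s)))
```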
security.declareProtected(view_management_screens, 'manage_copyObjects')
def manage_copyObjects(self, ids=None, REQUEST=None, RESPONSE=None):
"""Put a reference to the objects named in ids in the clip board"""
if ids is None and REQUEST is not None:
return eNoItemsSpecified
elif ids is None:
raise ValueError('ids must be specified')
if type(ids) is type(''):
ids=[ids]
oblist=[]
for id in ids:
ob=self._getOb(id)
if not ob.cb_isCopyable():
raise CopyError(eNotSupported % escape(id))
m = Moniker(ob)
oblist.append(m.dump())
cp=(0, oblist)
cp=_cb_encode(cp)
if REQUEST is not None:
resp=REQUEST['RESPONSE']
resp.setCookie('__cp', cp, path='%s' % cookie_path(REQUEST))
REQUEST['__cp'] = cp
return self.manage_main(self, REQUEST)
return cp
    def _get_id(self, id):
        # Allow containers to override the generation of
        # object copy id by attempting to call its _get_id
        # method, if it exists.
        match = copy_re.match(id)
        if match:
            n = int(match.group(1) or '1')
            orig_id = match.group(2)
        else:
            n = 0
            orig_id = id
        while 1:
            if self._getOb(id, None) is None:
                return id
            id = 'copy%s_of_%s' % (n and n + 1 or '', orig_id)
            n = n + 1
    security.declareProtected(view_management_screens, 'manage_pasteObjects')
    def manage_pasteObjects(self, cb_copy_data=None, REQUEST=None):
        """Paste previously copied objects into the current object.

        If calling manage_pasteObjects from python code, pass the result of a
        previous call to manage_cutObjects or manage_copyObjects as the first
        argument.

        Also sends IObjectCopiedEvent and IObjectClonedEvent
        or IObjectWillBeMovedEvent and IObjectMovedEvent.
        """
        if cb_copy_data is not None:
            cp = cb_copy_data
        elif REQUEST is not None and REQUEST.has_key('__cp'):
            cp = REQUEST['__cp']
        else:
            cp = None
        if cp is None:
            raise CopyError(eNoData)

        try:
            op, mdatas = _cb_decode(cp)
        except:
            raise CopyError(eInvalid)

        oblist = []
        app = self.getPhysicalRoot()
        for mdata in mdatas:
            m = loadMoniker(mdata)
            try:
                ob = m.bind(app)
            except ConflictError:
                raise
            except:
                raise CopyError(eNotFound)
            self._verifyObjectPaste(ob, validate_src=op + 1)
            oblist.append(ob)

        result = []
        if op == 0:
            # Copy operation
            for ob in oblist:
                orig_id = ob.getId()
                if not ob.cb_isCopyable():
                    raise CopyError(eNotSupported % escape(orig_id))

                try:
                    ob._notifyOfCopyTo(self, op=0)
                except ConflictError:
                    raise
                except:
                    raise CopyError(MessageDialog(
                        title="Copy Error",
                        message=sys.exc_info()[1],
                        action='manage_main'))

                id = self._get_id(orig_id)
                result.append({'id': orig_id, 'new_id': id})

                orig_ob = ob
                ob = ob._getCopy(self)
                ob._setId(id)
                notify(ObjectCopiedEvent(ob, orig_ob))

                self._setObject(id, ob)
                ob = self._getOb(id)
                ob.wl_clearLocks()

                ob._postCopy(self, op=0)

                compatibilityCall('manage_afterClone', ob, ob)

                notify(ObjectClonedEvent(ob))

            if REQUEST is not None:
                return self.manage_main(self, REQUEST, update_menu=1,
                                        cb_dataValid=1)

        elif op == 1:
            # Move operation
            for ob in oblist:
                orig_id = ob.getId()
                if not ob.cb_isMoveable():
                    raise CopyError(eNotSupported % escape(orig_id))

                try:
                    ob._notifyOfCopyTo(self, op=1)
                except ConflictError:
                    raise
                except:
                    raise CopyError(MessageDialog(
                        title="Move Error",
                        message=sys.exc_info()[1],
                        action='manage_main'))

                if not sanity_check(self, ob):
                    raise CopyError(
                        "This object cannot be pasted into itself")

                orig_container = aq_parent(aq_inner(ob))
                if aq_base(orig_container) is aq_base(self):
                    id = orig_id
                else:
                    id = self._get_id(orig_id)
                result.append({'id': orig_id, 'new_id': id})

                notify(ObjectWillBeMovedEvent(ob, orig_container, orig_id,
                                              self, id))

                # try to make ownership explicit so that it gets carried
                # along to the new location if needed.
                ob.manage_changeOwnershipType(explicit=1)

                try:
                    orig_container._delObject(orig_id, suppress_events=True)
                except TypeError:
                    orig_container._delObject(orig_id)
                    warnings.warn(
                        "%s._delObject without suppress_events is discouraged."
                        % orig_container.__class__.__name__,
                        DeprecationWarning)
                ob = aq_base(ob)
                ob._setId(id)

                try:
                    self._setObject(id, ob, set_owner=0, suppress_events=True)
                except TypeError:
                    self._setObject(id, ob, set_owner=0)
                    warnings.warn(
                        "%s._setObject without suppress_events is discouraged."
                        % self.__class__.__name__, DeprecationWarning)
                ob = self._getOb(id)

                notify(ObjectMovedEvent(ob, orig_container, orig_id, self, id))
                notifyContainerModified(orig_container)
                if aq_base(orig_container) is not aq_base(self):
                    notifyContainerModified(self)

                ob._postCopy(self, op=1)
                # try to make ownership implicit if possible
                ob.manage_changeOwnershipType(explicit=0)

            if REQUEST is not None:
                REQUEST['RESPONSE'].setCookie('__cp', 'deleted',
                                              path='%s' % cookie_path(REQUEST),
                                              expires='Wed, 31-Dec-97 23:59:59 GMT')
                REQUEST['__cp'] = None
                return self.manage_main(self, REQUEST, update_menu=1,
                                        cb_dataValid=0)

        return result
    security.declareProtected(view_management_screens, 'manage_renameForm')
    manage_renameForm = DTMLFile('dtml/renameForm', globals())

    security.declareProtected(view_management_screens, 'manage_renameObjects')
    def manage_renameObjects(self, ids=[], new_ids=[], REQUEST=None):
        """Rename several sub-objects"""
        if len(ids) != len(new_ids):
            raise BadRequest('Please rename each listed object.')
        for i in range(len(ids)):
            if ids[i] != new_ids[i]:
                self.manage_renameObject(ids[i], new_ids[i], REQUEST)
        if REQUEST is not None:
            return self.manage_main(self, REQUEST, update_menu=1)
        return None

    security.declareProtected(view_management_screens, 'manage_renameObject')
    def manage_renameObject(self, id, new_id, REQUEST=None):
        """Rename a particular sub-object.
        """
        try:
            self._checkId(new_id)
        except:
            raise CopyError(MessageDialog(
                title='Invalid Id',
                message=sys.exc_info()[1],
                action='manage_main'))

        ob = self._getOb(id)

        if ob.wl_isLocked():
            raise ResourceLockedError('Object "%s" is locked via WebDAV'
                                      % ob.getId())
        if not ob.cb_isMoveable():
            raise CopyError(eNotSupported % escape(id))
        self._verifyObjectPaste(ob)

        try:
            ob._notifyOfCopyTo(self, op=1)
        except ConflictError:
            raise
        except:
            raise CopyError(MessageDialog(
                title="Rename Error",
                message=sys.exc_info()[1],
                action='manage_main'))

        notify(ObjectWillBeMovedEvent(ob, self, id, self, new_id))

        try:
            self._delObject(id, suppress_events=True)
        except TypeError:
            self._delObject(id)
            warnings.warn(
                "%s._delObject without suppress_events is discouraged." %
                self.__class__.__name__, DeprecationWarning)
        ob = aq_base(ob)
        ob._setId(new_id)

        # Note - because a rename always keeps the same context, we
        # can just leave the ownership info unchanged.
        try:
            self._setObject(new_id, ob, set_owner=0, suppress_events=True)
        except TypeError:
            self._setObject(new_id, ob, set_owner=0)
            warnings.warn(
                "%s._setObject without suppress_events is discouraged." %
                self.__class__.__name__, DeprecationWarning)
        ob = self._getOb(new_id)

        notify(ObjectMovedEvent(ob, self, id, self, new_id))
        notifyContainerModified(self)

        ob._postCopy(self, op=1)

        if REQUEST is not None:
            return self.manage_main(self, REQUEST, update_menu=1)
        return None
    # Why did we give this a manage_ prefix if it's really
    # supposed to be public since it does its own auth?
    #
    # Because it's still a "management" function.
    security.declarePublic('manage_clone')
    def manage_clone(self, ob, id, REQUEST=None):
        """Clone an object, creating a new object with the given id.
        """
        if not ob.cb_isCopyable():
            raise CopyError(eNotSupported % escape(ob.getId()))
        try:
            self._checkId(id)
        except:
            raise CopyError(MessageDialog(
                title='Invalid Id',
                message=sys.exc_info()[1],
                action='manage_main'))
        self._verifyObjectPaste(ob)

        try:
            ob._notifyOfCopyTo(self, op=0)
        except ConflictError:
            raise
        except:
            raise CopyError(MessageDialog(
                title="Clone Error",
                message=sys.exc_info()[1],
                action='manage_main'))

        orig_ob = ob
        ob = ob._getCopy(self)
        ob._setId(id)
        notify(ObjectCopiedEvent(ob, orig_ob))

        self._setObject(id, ob)
        ob = self._getOb(id)

        ob._postCopy(self, op=0)

        compatibilityCall('manage_afterClone', ob, ob)

        notify(ObjectClonedEvent(ob))

        return ob
    def cb_dataValid(self):
        # Return true if clipboard data seems valid.
        try:
            cp = _cb_decode(self.REQUEST['__cp'])
        except:
            return 0
        return 1

    def cb_dataItems(self):
        # List of objects in the clip board
        try:
            cp = _cb_decode(self.REQUEST['__cp'])
        except:
            return []
        oblist = []
        app = self.getPhysicalRoot()
        for mdata in cp[1]:
            m = loadMoniker(mdata)
            oblist.append(m.bind(app))
        return oblist

    validClipData = cb_dataValid
    def _verifyObjectPaste(self, object, validate_src=1):
        # Verify whether the current user is allowed to paste the
        # passed object into self. This is determined by checking
        # to see if the user could create a new object of the same
        # meta_type of the object passed in and checking that the
        # user actually is allowed to access the passed in object
        # in its existing context.
        #
        # Passing a false value for the validate_src argument will skip
        # checking the passed in object in its existing context. This is
        # mainly useful for situations where the passed in object has no
        # existing context, such as checking an object during an import
        # (the object will not yet have been connected to the acquisition
        # hierarchy).
        if not hasattr(object, 'meta_type'):
            raise CopyError(MessageDialog(
                title='Not Supported',
                message=('The object <em>%s</em> does not support this'
                         ' operation' % escape(absattr(object.id))),
                action='manage_main'))

        if not hasattr(self, 'all_meta_types'):
            raise CopyError(MessageDialog(
                title='Not Supported',
                message='Cannot paste into this object.',
                action='manage_main'))

        method_name = None
        mt_permission = None
        meta_types = absattr(self.all_meta_types)

        for d in meta_types:
            if d['name'] == object.meta_type:
                method_name = d['action']
                mt_permission = d.get('permission')
                break

        if mt_permission is not None:
            sm = getSecurityManager()

            if sm.checkPermission(mt_permission, self):
                if validate_src:
                    # Ensure the user is allowed to access the object on the
                    # clipboard.
                    try:
                        parent = aq_parent(aq_inner(object))
                    except:
                        parent = None

                    if not sm.validate(None, parent, None, object):
                        raise Unauthorized(absattr(object.id))

                    if validate_src == 2:  # moving
                        if not sm.checkPermission(delete_objects, parent):
                            raise Unauthorized('Delete not allowed.')
            else:
                raise CopyError(MessageDialog(
                    title='Insufficient Privileges',
                    message=('You do not possess the %s permission in the '
                             'context of the container into which you are '
                             'pasting, thus you are not able to perform '
                             'this operation.' % mt_permission),
                    action='manage_main'))
        else:
            raise CopyError(MessageDialog(
                title='Not Supported',
                message=('The object <em>%s</em> does not support this '
                         'operation.' % escape(absattr(object.id))),
                action='manage_main'))


InitializeClass(CopyContainer)
class CopySource(Base):
    """Interface for objects which allow themselves to be copied."""

    implements(ICopySource)

    # declare a dummy permission for Copy or Move here that we check
    # in cb_isCopyable.
    security = ClassSecurityInfo()
    security.setPermissionDefault(copy_or_move, ('Anonymous', 'Manager'))

    def _canCopy(self, op=0):
        """Called to make sure this object is copyable.

        The op var is 0 for a copy, 1 for a move.
        """
        return 1

    def _notifyOfCopyTo(self, container, op=0):
        """Override this to be picky about where you go!

        If you don't want to go there, raise an exception. The op variable is 0
        for a copy, 1 for a move.
        """
        pass

    def _getCopy(self, container):
        # Commit a subtransaction to:
        # 1) Make sure the data about to be exported is current
        # 2) Ensure self._p_jar and container._p_jar are set even if
        #    either one is a new object
        transaction.savepoint(optimistic=True)

        if self._p_jar is None:
            raise CopyError(
                'Object "%s" needs to be in the database to be copied' %
                `self`)
        if container._p_jar is None:
            raise CopyError(
                'Container "%s" needs to be in the database' %
                `container`)

        # Ask an object for a new copy of itself.
        f = tempfile.TemporaryFile()
        self._p_jar.exportFile(self._p_oid, f)
        f.seek(0)
        ob = container._p_jar.importFile(f)
        f.close()
        return ob

    def _postCopy(self, container, op=0):
        # Called after the copy is finished to accommodate special cases.
        # The op var is 0 for a copy, 1 for a move.
        pass

    def _setId(self, id):
        # Called to set the new id of a copied object.
        self.id = id

    def cb_isCopyable(self):
        # Is object copyable? Returns 0 or 1
        if not (hasattr(self, '_canCopy') and self._canCopy(0)):
            return 0
        if not self.cb_userHasCopyOrMovePermission():
            return 0
        return 1

    def cb_isMoveable(self):
        # Is object moveable? Returns 0 or 1
        if not (hasattr(self, '_canCopy') and self._canCopy(1)):
            return 0
        if hasattr(self, '_p_jar') and self._p_jar is None:
            return 0
        try:
            n = aq_parent(aq_inner(self))._reserved_names
        except:
            n = ()
        if absattr(self.id) in n:
            return 0
        if not self.cb_userHasCopyOrMovePermission():
            return 0
        return 1

    def cb_userHasCopyOrMovePermission(self):
        if getSecurityManager().checkPermission(copy_or_move, self):
            return 1


InitializeClass(CopySource)
def sanity_check(c, ob):
    # This is called on cut/paste operations to make sure that
    # an object is not cut and pasted into itself or one of its
    # subobjects, which is an undefined situation.
    ob = aq_base(ob)
    while 1:
        if aq_base(c) is ob:
            return 0
        inner = aq_inner(c)
        if aq_parent(inner) is None:
            return 1
        c = aq_parent(inner)


def absattr(attr):
    if callable(attr):
        return attr()
    return attr


def _cb_encode(d):
    return quote(compress(dumps(d), 9))


def _cb_decode(s):
    return loads(decompress(unquote(s)))


def cookie_path(request):
    # Return a "path" value for use in a cookie that refers
    # to the root of the Zope object space.
    return request['BASEPATH1'] or "/"
fMessageDialog = HTML("""
<HTML>
<HEAD>
<TITLE>&dtml-title;</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF">
<FORM ACTION="&dtml-action;" METHOD="GET" <dtml-if
target>TARGET="&dtml-target;"</dtml-if>>
<TABLE BORDER="0" WIDTH="100%%" CELLPADDING="10">
<TR>
<TD VALIGN="TOP">
<BR>
<CENTER><B><FONT SIZE="+6" COLOR="#77003B">!</FONT></B></CENTER>
</TD>
<TD VALIGN="TOP">
<BR><BR>
<CENTER>
<dtml-var message>
</CENTER>
</TD>
</TR>
<TR>
<TD VALIGN="TOP">
</TD>
<TD VALIGN="TOP">
<CENTER>
<INPUT TYPE="SUBMIT" VALUE=" Ok ">
</CENTER>
</TD>
</TR>
</TABLE>
</FORM>
</BODY></HTML>""", target='', action='manage_main', title='Changed')
eNoData = MessageDialog(
    title='No Data',
    message='No clipboard data found.',
    action='manage_main',)

eInvalid = MessageDialog(
    title='Clipboard Error',
    message='The data in the clipboard could not be read, possibly due '
            'to cookie data being truncated by your web browser. Try copying '
            'fewer objects.',
    action='manage_main',)

eNotFound = MessageDialog(
    title='Item Not Found',
    message='One or more items referred to in the clipboard data was '
            'not found. The item may have been moved or deleted after you '
            'copied it.',
    action='manage_main',)

eNotSupported = fMessageDialog(
    title='Not Supported',
    message=(
        'The action against the <em>%s</em> object could not be carried '
        'out. '
        'One of the following constraints caused the problem: <br><br>'
        'The object does not support this operation.'
        '<br><br>-- OR --<br><br>'
        'The currently logged-in user does not have the <b>Copy or '
        'Move</b> permission respective to the object.'
    ),
    action='manage_main',)

eNoItemsSpecified = MessageDialog(
    title='No items specified',
    message='You must select one or more items to perform '
            'this operation.',
    action='manage_main'
)
| 33.952251 | 79 | 0.571704 | 2,830 | 24,887 | 4.89682 | 0.187986 | 0.022225 | 0.018473 | 0.023091 | 0.380214 | 0.350411 | 0.313321 | 0.289652 | 0.279622 | 0.257902 | 0 | 0.006481 | 0.336642 | 24,887 | 732 | 80 | 33.998634 | 0.83294 | 0.104553 | 0 | 0.454373 | 0 | 0 | 0.133835 | 0.007006 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005703 | 0.077947 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
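The `_cb_encode`/`_cb_decode` helpers above store the clipboard as a pickled, zlib-compressed, percent-encoded cookie value. The following is a minimal Python 3 re-implementation sketch of that round trip (the original file is Python 2 and imports `dumps`/`loads`, `compress`/`decompress`, and `quote`/`unquote` at module level, outside this chunk); the sample moniker tuple is purely illustrative:

```python
# Sketch only: assumes pickle/zlib/urllib.parse as Python 3 stand-ins for
# the original cPickle/zlib/urllib imports.
import pickle
import zlib
from urllib.parse import quote, unquote_to_bytes


def cb_encode(d):
    # pickle -> zlib-compress -> percent-encode, so the tuple fits in a cookie
    return quote(zlib.compress(pickle.dumps(d), 9))


def cb_decode(s):
    return pickle.loads(zlib.decompress(unquote_to_bytes(s)))


# op 1 means "cut", followed by a list of moniker dumps (hypothetical value)
clip = (1, [('', 'folder', 'doc1')])
encoded = cb_encode(clip)
assert isinstance(encoded, str)
assert cb_decode(encoded) == clip
```

Note that `unquote_to_bytes` is needed on the decode side in Python 3, since the compressed payload is binary rather than text.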
541c51e665974394ae0ab412789deb2f54ac881a | 1,879 | py | Python | python/rhinoscripts/example_csv_loading.py | tasbolat1/hmv-s16 | 7863c66ed645b463b72aef98a5c484a18cc9f396 | ["BSD-3-Clause"] | 1 | 2020-10-10T21:27:30.000Z | 2020-10-10T21:27:30.000Z | python/rhinoscripts/example_csv_loading.py | tasbolat1/hmv-s16 | 7863c66ed645b463b72aef98a5c484a18cc9f396 | ["BSD-3-Clause"] | null | null | null | python/rhinoscripts/example_csv_loading.py | tasbolat1/hmv-s16 | 7863c66ed645b463b72aef98a5c484a18cc9f396 | ["BSD-3-Clause"] | null | null | null
] | null | null | null | """Example code for importing a single rigid body trajectory into Rhino from a Optitrack CSV file.
Copyright (c) 2016, Garth Zeglin. All rights reserved. Licensed under the terms
of the BSD 3-clause license as included in LICENSE.
Example code for generating a path of Rhino 'planes' (e.g. coordinate frame)
from a trajectory data file. The path is returned as a list of Plane objects.
Each plane is created using an origin vector and X and Y basis vectors. The
time stamps and Z basis vectors in the trajectory file are ignored.
"""
# Load the Rhino API.
import rhinoscriptsyntax as rs
# Make sure that the Python libraries also contained within this course package
# are on the load path. This adds the parent folder to the load path, assuming that this
# script is still located with the rhinoscripts/ subfolder of the Python library tree.
import sys, os
sys.path.insert(0, os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
# Load the Optitrack CSV file parser module.
import optitrack.csv_reader as csv
from optitrack.geometry import *
# Find the path to the test data file located alongside the script.
filename = os.path.join( os.path.abspath(os.path.dirname(__file__)), "sample_optitrack_take.csv")
# Read the file.
take = csv.Take().readCSV(filename)
# Print out some statistics
print "Found rigid bodies:", take.rigid_bodies.keys()
# Process the first rigid body into a set of planes.
bodies = take.rigid_bodies.values()
# for now:
xaxis = [1,0,0]
yaxis = [0,1,0]
if len(bodies) > 0:
    body = bodies[0]
    for pos, rot in zip(body.positions, body.rotations):
        if pos is not None and rot is not None:
            xaxis, yaxis = quaternion_to_xaxis_yaxis(rot)
            plane = rs.PlaneFromFrame(pos, xaxis, yaxis)
            # create a visible plane, assuming units are in meters
            rs.AddPlaneSurface(plane, 0.1, 0.1)
| 36.134615 | 98 | 0.734433 | 304 | 1,879 | 4.486842 | 0.473684 | 0.026393 | 0.028592 | 0.021994 | 0.043988 | 0.043988 | 0.043988 | 0 | 0 | 0 | 0 | 0.011796 | 0.187866 | 1,879 | 51 | 99 | 36.843137 | 0.882045 | 0.283662 | 0 | 0 | 0 | 0 | 0.055346 | 0.031447 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5421063007f16d8c808280360658d8af84912272 | 524 | py | Python | Projects/project04/pop_shrink.py | tonysulfaro/CSE-331 | b4f743b1127ebe531ba8417420d043e9c149135a | ["MIT"] | 2 | 2019-02-13T17:49:18.000Z | 2020-09-30T04:51:53.000Z | Projects/project04/pop_shrink.py | tonysulfaro/CSE-331 | b4f743b1127ebe531ba8417420d043e9c149135a | ["MIT"] | null | null | null | Projects/project04/pop_shrink.py | tonysulfaro/CSE-331 | b4f743b1127ebe531ba8417420d043e9c149135a | ["MIT"] | null | null | null
] | null | null | null | from Stack import Stack
def main():
    stack = Stack()
    stack.push(0)
    stack.push(1)
    stack.push(2)
    stack.push(3)

    assert stack.data == [0, 1, 2, 3]
    assert stack.capacity == 4
    assert stack.size == 4

    popped = stack.pop()
    assert popped == 3
    popped = stack.pop()
    assert popped == 2
    print(stack)

    assert stack.data == [0, 1]
    assert stack.capacity == 2
    assert stack.size == 2

    print("Expected:", "['0', '1'] Capacity: 2")
    print("Output:", str(stack))

main() | 17.466667 | 48 | 0.564885 | 73 | 524 | 4.054795 | 0.287671 | 0.222973 | 0.081081 | 0.108108 | 0.290541 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.274809 | 524 | 30 | 49 | 17.466667 | 0.728947 | 0 | 0 | 0.095238 | 0 | 0 | 0.072381 | 0 | 0 | 0 | 0 | 0 | 0.380952 | 1 | 0.047619 | false | 0 | 0.047619 | 0 | 0.095238 | 0.142857 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
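The test script above imports a course-provided `Stack` that is not included in this record. A hypothetical minimal array-backed stack consistent with the assertions (doubling when full, halving when only half the slots are in use) might look like this; the actual CSE 331 class surely differs in detail:

```python
class Stack:
    """Hypothetical stack matching the assertions in pop_shrink.py:
    the backing list doubles when full and halves on pop when at most
    half the slots are used."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.size = 0
        self.data = [None] * capacity

    def push(self, item):
        if self.size == self.capacity:  # grow: double the backing list
            self.capacity *= 2
            self.data.extend([None] * (self.capacity - len(self.data)))
        self.data[self.size] = item
        self.size += 1

    def pop(self):
        if self.size == 0:
            return None
        self.size -= 1
        item = self.data[self.size]
        self.data[self.size] = None
        if self.size <= self.capacity // 2:  # shrink: halve the backing list
            self.capacity //= 2
            self.data = self.data[:self.capacity]
        return item

    def __str__(self):
        return "{} Capacity: {}".format(
            [str(i) for i in self.data[:self.size]], self.capacity)


# mirrors the pushes/pops exercised by pop_shrink.py
s = Stack()
for v in (0, 1, 2, 3):
    s.push(v)
assert s.data == [0, 1, 2, 3] and s.capacity == 4 and s.size == 4
assert s.pop() == 3 and s.pop() == 2
assert s.data == [0, 1] and s.capacity == 2 and s.size == 2
assert str(s) == "['0', '1'] Capacity: 2"
```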
5423d564159a63ea1cc7a476c45ce6fae5bb3b4a | 1,670 | py | Python | bender/tests/test_predict_pipeline.py | otovo/bender | b64f0656658287b932ce44d52e6035682652fe33 | ["Apache-2.0"] | 2 | 2021-12-17T15:45:40.000Z | 2021-12-18T14:15:43.000Z | bender/tests/test_predict_pipeline.py | otovo/bender | b64f0656658287b932ce44d52e6035682652fe33 | ["Apache-2.0"] | 2 | 2022-03-30T14:31:12.000Z | 2022-03-31T14:25:25.000Z | bender/tests/test_predict_pipeline.py | otovo/bender | b64f0656658287b932ce44d52e6035682652fe33 | ["Apache-2.0"] | 1 | 2021-12-19T17:16:38.000Z | 2021-12-19T17:16:38.000Z
] | 1 | 2021-12-19T17:16:38.000Z | 2021-12-19T17:16:38.000Z | import numpy as np
import pytest
from pandas.core.frame import DataFrame
from bender.importers import DataImporters
from bender.model_loaders import ModelLoaders
from bender.model_trainer.decision_tree import DecisionTreeClassifierTrainer
from bender.split_strategies import SplitStrategies
pytestmark = pytest.mark.asyncio
async def test_predict_data() -> None:
model, data_set = await (
DataImporters.literal(DataFrame({'x': [0, 1], 'y': [0, 1], 'output': [0, 1]}))
# No test set
.split(SplitStrategies.ratio(1))
.train(DecisionTreeClassifierTrainer(), input_features=['x', 'y'], target_feature='output')
.run()
)
test_data = DataFrame({'x': [2, -3, 4], 'y': [2, -3, 4]})
expected = [1, 0, 1]
_, _, result = await (ModelLoaders.literal(model).import_data(DataImporters.literal(test_data)).predict().run())
assert np.all(expected == result)
"""
Supervised Regression
Vector[float] -> float
.train(
RegresionModels.linear(),
input_features=["area", "location"], # floats
target_feature="price" # float
)
"""
"""
Supervised Classification
Vector[float / int / bool / str] -> str / bool / int
.train(
ClassificationModels.DecisionTree(),
input_features=["sepal_length", "sepal_width"], # float / int / bool / str
target_feature="class_name" # str / bool / int
)
# Should only be avaialbe for clustering / classification problems
.predict_probability(
labels={
"setosa": "is_setosa_probability",
"versicolor": "is_versicolor_probability",
}
)
"""
| 27.833333 | 116 | 0.640719 | 179 | 1,670 | 5.832402 | 0.49162 | 0.038314 | 0.028736 | 0.028736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012422 | 0.228743 | 1,670 | 59 | 117 | 28.305085 | 0.798137 | 0.006587 | 0 | 0 | 0 | 0 | 0.019737 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0 | false | 0 | 0.473684 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
542466b53c52821ceb40707c73e0ab32ca5a0262 | 8,707 | py | Python | ptf/lib/runner.py | opennetworkinglab/tassen | 6e42ba79f83caa1bd6ecb40fd9bd1e9f8768ec09 | ["Apache-2.0"] | 4 | 2020-07-08T22:04:35.000Z | 2020-07-14T15:09:37.000Z | ptf/lib/runner.py | opennetworkinglab/tassen | 6e42ba79f83caa1bd6ecb40fd9bd1e9f8768ec09 | ["Apache-2.0"] | 1 | 2020-07-07T08:12:40.000Z | 2020-07-07T08:12:41.000Z | ptf/lib/runner.py | opennetworkinglab/tassen | 6e42ba79f83caa1bd6ecb40fd9bd1e9f8768ec09 | ["Apache-2.0"] | null | null | null
] | null | null | null | #!/usr/bin/env python2
# Copyright 2013-present Barefoot Networks, Inc.
# SPDX-FileCopyrightText: 2018-present Open Networking Foundation
#
# SPDX-License-Identifier: Apache-2.0
import Queue
import argparse
import json
import logging
import os
import re
import subprocess
import sys
import threading
import time
from collections import OrderedDict
import google.protobuf.text_format
import grpc
from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc
PTF_ROOT = os.path.dirname(os.path.realpath(__file__))
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("PTF runner")
def error(msg, *args, **kwargs):
logger.error(msg, *args, **kwargs)
def warn(msg, *args, **kwargs):
logger.warn(msg, *args, **kwargs)
def info(msg, *args, **kwargs):
logger.info(msg, *args, **kwargs)
def debug(msg, *args, **kwargs):
logger.debug(msg, *args, **kwargs)
def check_ifaces(ifaces):
    """
    Checks that required interfaces exist.
    """
    ifconfig_out = subprocess.check_output(['ifconfig'])
    iface_list = re.findall(r'^([a-zA-Z0-9]+)', ifconfig_out, re.S | re.M)
    present_ifaces = set(iface_list)
    ifaces = set(ifaces)
    return ifaces <= present_ifaces


def build_bmv2_config(bmv2_json_path):
    """
    Builds the device config for BMv2
    """
    with open(bmv2_json_path) as f:
        return f.read()
def update_config(p4info_path, bmv2_json_path, grpc_addr, device_id):
    """
    Performs a SetForwardingPipelineConfig on the device
    """
    channel = grpc.insecure_channel(grpc_addr)
    stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)

    debug("Sending P4 config")

    # Send master arbitration via stream channel
    # This should go in library, to be re-used also by base_test.py.
    stream_out_q = Queue.Queue()
    stream_in_q = Queue.Queue()

    def stream_req_iterator():
        while True:
            p = stream_out_q.get()
            if p is None:
                break
            yield p

    def stream_recv(stream):
        for p in stream:
            stream_in_q.put(p)

    def get_stream_packet(type_, timeout=1):
        start = time.time()
        try:
            while True:
                remaining = timeout - (time.time() - start)
                if remaining < 0:
                    break
                msg = stream_in_q.get(timeout=remaining)
                if not msg.HasField(type_):
                    continue
                return msg
        except:  # timeout expired
            pass
        return None

    stream = stub.StreamChannel(stream_req_iterator())
    stream_recv_thread = threading.Thread(target=stream_recv, args=(stream,))
    stream_recv_thread.start()

    req = p4runtime_pb2.StreamMessageRequest()
    arbitration = req.arbitration
    arbitration.device_id = device_id
    election_id = arbitration.election_id
    election_id.high = 0
    election_id.low = 1
    stream_out_q.put(req)

    rep = get_stream_packet("arbitration", timeout=5)
    if rep is None:
        error("Failed to establish handshake")
        return False

    try:
        # Set pipeline config.
        request = p4runtime_pb2.SetForwardingPipelineConfigRequest()
        request.device_id = device_id
        election_id = request.election_id
        election_id.high = 0
        election_id.low = 1
        config = request.config
        with open(p4info_path, 'r') as p4info_f:
            config.p4info.ParseFromString(p4info_f.read())
        config.p4_device_config = build_bmv2_config(bmv2_json_path)
        request.action = p4runtime_pb2.SetForwardingPipelineConfigRequest.VERIFY_AND_COMMIT
        try:
            stub.SetForwardingPipelineConfig(request)
        except Exception as e:
            error("Error during SetForwardingPipelineConfig")
            error(str(e))
            return False
        return True
    finally:
        stream_out_q.put(None)
        stream_recv_thread.join()
def run_test(p4info_path, grpc_addr, device_id, cpu_port, ptfdir, port_map_path,
             extra_args=()):
    """
    Runs PTF tests included in provided directory.
    Device must be running and configured with appropriate P4 program.
    """
    # TODO: check schema?
    # "ptf_port" is ignored for now, we assume that ports are provided by
    # increasing values of ptf_port, in the range [0, NUM_IFACES[.
    port_map = OrderedDict()
    with open(port_map_path, 'r') as port_map_f:
        port_list = json.load(port_map_f)
        for entry in port_list:
            p4_port = entry["p4_port"]
            iface_name = entry["iface_name"]
            port_map[p4_port] = iface_name

    if not check_ifaces(port_map.values()):
        error("Some interfaces are missing")
        return False

    ifaces = []
    # FIXME
    # find base_test.py
    pypath = os.path.dirname(os.path.abspath(__file__))
    if 'PYTHONPATH' in os.environ:
        os.environ['PYTHONPATH'] += ":" + pypath
    else:
        os.environ['PYTHONPATH'] = pypath
    for iface_idx, iface_name in port_map.items():
        ifaces.extend(['-i', '{}@{}'.format(iface_idx, iface_name)])
    cmd = ['ptf']
    cmd.extend(['--test-dir', ptfdir])
    cmd.extend(ifaces)
    test_params = 'p4info=\'{}\''.format(p4info_path)
    test_params += ';grpcaddr=\'{}\''.format(grpc_addr)
    test_params += ';device_id=\'{}\''.format(device_id)
    test_params += ';cpu_port=\'{}\''.format(cpu_port)
    cmd.append('--test-params={}'.format(test_params))
    cmd.extend(extra_args)
    debug("Executing PTF command: {}".format(' '.join(cmd)))

    try:
        # we want the ptf output to be sent to stdout
        p = subprocess.Popen(cmd)
        p.wait()
    except:
        error("Error when running PTF tests")
        return False

    return p.returncode == 0
def check_ptf():
    try:
        with open(os.devnull, 'w') as devnull:
            subprocess.check_call(['ptf', '--version'],
                                  stdout=devnull, stderr=devnull)
        return True
    except subprocess.CalledProcessError:
        return True
    except OSError:  # PTF not found
        return False
# noinspection PyTypeChecker
def main():
    parser = argparse.ArgumentParser(
        description="Compile the provided P4 program and run PTF tests on it")
    parser.add_argument('--p4info',
                        help='Location of p4info proto in binary format',
                        type=str, action="store", required=True)
    parser.add_argument('--bmv2-json',
                        help='Location BMv2 JSON output from p4c (if target is bmv2)',
                        type=str, action="store", required=False)
    parser.add_argument('--grpc-addr',
                        help='Address to use to connect to P4 Runtime server',
                        type=str, default='localhost:50051')
    parser.add_argument('--device-id',
                        help='Device id for device under test',
                        type=int, default=1)
    parser.add_argument('--cpu-port',
                        help='CPU port ID of device under test',
                        type=int, required=True)
    parser.add_argument('--ptf-dir',
                        help='Directory containing PTF tests',
                        type=str, required=True)
    parser.add_argument('--port-map',
                        help='Path to JSON port mapping',
                        type=str, required=True)
    args, unknown_args = parser.parse_known_args()

    if not check_ptf():
        error("Cannot find PTF executable")
        sys.exit(1)
    if not os.path.exists(args.p4info):
        error("P4Info file {} not found".format(args.p4info))
        sys.exit(1)
    if not os.path.exists(args.bmv2_json):
        error("BMv2 json file {} not found".format(args.bmv2_json))
        sys.exit(1)
    if not os.path.exists(args.port_map):
        print "Port map path '{}' does not exist".format(args.port_map)
        sys.exit(1)

    try:
        success = update_config(p4info_path=args.p4info,
                                bmv2_json_path=args.bmv2_json,
                                grpc_addr=args.grpc_addr,
                                device_id=args.device_id)
        if not success:
            sys.exit(2)
        success = run_test(p4info_path=args.p4info,
                           device_id=args.device_id,
                           grpc_addr=args.grpc_addr,
                           cpu_port=args.cpu_port,
                           ptfdir=args.ptf_dir,
                           port_map_path=args.port_map,
                           extra_args=unknown_args)
        if not success:
            sys.exit(3)
    except Exception:
        raise


if __name__ == '__main__':
    main()
| 31.547101 | 91 | 0.605949 | 1,062 | 8,707 | 4.788136 | 0.270245 | 0.022026 | 0.020452 | 0.014946 | 0.134907 | 0.053294 | 0.032448 | 0.032448 | 0.032448 | 0.015339 | 0 | 0.014549 | 0.289537 | 8,707 | 275 | 92 | 31.661818 | 0.807468 | 0.06535 | 0 | 0.168317 | 0 | 0 | 0.112462 | 0.003454 | 0 | 0 | 0 | 0.003636 | 0 | 0 | null | null | 0.004951 | 0.069307 | null | null | 0.004951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
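The `run_test` function in the runner above reads a JSON port map and keeps only the `p4_port` and `iface_name` fields (`ptf_port` is ignored, per the inline comment). A hypothetical example of such a file, with illustrative veth interface names, and the mapping the runner derives from it:

```python
import json

# Hypothetical port-map content accepted by run_test(): each entry maps a
# P4 data-plane port number to a Linux interface name.
port_map_json = '''
[
  {"ptf_port": 0, "p4_port": 1, "iface_name": "veth0"},
  {"ptf_port": 1, "p4_port": 2, "iface_name": "veth2"}
]
'''

# same extraction run_test() performs over the parsed list
port_map = {e["p4_port"]: e["iface_name"] for e in json.loads(port_map_json)}
assert port_map == {1: "veth0", 2: "veth2"}
```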